Friday Squid Blogging: How Flying Squid Fly

Someone has finally proven how:

How do these squid go from swimming to flying? Four phases of flight are described in the research: launching, jetting, gliding and diving.

While swimming, the squid open up their mantle and draw in water. Then these squid launch themselves into the air with a high-powered blast of the water from their bodies. Once launched by this jet propulsion, these squid spread out both their fins and their tentacles to form wings. The squid have a membrane between their tentacles similar to the webbed toes of a frog. This helps them use their tentacles as a wing and create aerodynamic lift so they can glide, similar to a well-made paper airplane.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on May 2, 2014 at 4:10 PM • 128 Comments

Comments

kashmarek May 2, 2014 4:22 PM

Found on Slashdot:

Yahoo Stops Honoring ‘Do-Not-Track’ Settings

http://tech.slashdot.org/story/14/05/02/1353211/yahoo-stops-honoring-do-not-track-settings

Yahoo! says they want to use tracking data “…to provide a personalized web-browsing experience, which isn’t possible using do-not-track.”

I have yet to see any evidence that such data has ever performed, or will ever perform, the function they indicate. It is all about marketing, and marketing is unlikely ever to provide what they indicate is their objective. Their marketing is about getting ad dollars, no more, no less.

Jonathan May 2, 2014 4:58 PM

Proven is an awfully strong word. Do you mean “presented a theory backed by supporting experimental evidence”? 🙂

KnottWhittingley May 2, 2014 6:02 PM

Proven is an awfully strong word.

Only if you’re hung up on Popperish terminology, which is kind of artificial bullshit.

Popper was wrong about scientists never proving things, only disproving them, for any reasonable non-man-worshipping interpretation of “proving.”

“Prove” in normal not-anal parlance always implicitly means something like “beyond reasonable doubt”, and has always had that sense available in natural language. (Even in mathematics, there’s an implicit “beyond reasonable doubt” in there somewhere—e.g., it’s assumed that we’re reasonable to assume the law of the excluded middle, or at least modus ponens, and that we’re not just hallucinating a universe in which logic appears to work.)

In reality, science is almost always about trying to prove things. Disproofs of reasonable alternative hypotheses are usually just a means to that end, because we generally want to explain how things do work, not just rule out some ways they don’t work.

Fuck Karl Popper! 🙂

KnottWhittingley May 2, 2014 6:03 PM

non-man-worshipping -> non-math-worshipping

I swear I previewed, just not long enough. At least I fixed the italics tag.

DB May 2, 2014 9:43 PM

@ KnottWhittingley: So then how many alternate theories do you have to disprove in order to prove something? 🙂

Mirar May 3, 2014 1:27 AM

I’m getting spam to the email address I only used to comment here (which resides in my own domain; I use a new email address for everything I do just so I can cut it off if it’s leaked to spambots).

I wonder how it was leaked?

Benni May 3, 2014 3:09 AM

An interesting quote from the discussion with Hayden and Greenwald is where Hayden says that the NSA does not collect browsing histories, emails, and address books.

“I love that Snowden quote. It covers your text messages, your web history, the searches you ever made… That’s Google, that’s not the NSA.”

Certainly, the NSA does not “cover” web searches, since it does not provide a search engine. The NSA just sits on Google’s dark fibers with project MUSCULAR, and thereby it “copies”, not “covers”, your web searches.

Can somebody tell them to just stop lying and lying and lying?

Clive Robinson May 3, 2014 4:05 AM

@ figureitout,

This one is definitely one for you to have a think over,

http://drewdevault.com/2014/02/02/The-worst-bugs.html

I once had a similar bug with keyboard interrupts and putting a microcontroller into low-power sleep mode.

What I had done was write a key-bounce removal subroutine that started a counter and then disabled the keyboard interrupts; when the counter timed out it would present a debounced key press to the interrupt structure. The counter was driven by the interrupt structure from the fast system tick.

What was happening was that sometimes the system would freeze and not wake on a keypress. After spending many days trying to catch the bug I finally caught it; the only symptom was that the keyboard interrupt was clear, not set as it should have been.

No amount of debugging code worked on finding the problem, so I resorted to playing “Paper Computers” and still got nowhere. I was beginning to panic as the mask program code release date was imminent.

In a moment of blind panic around two AM I made a simple mistake and moved the code line setting the keyboard interrupt in the sleep routine back a few lines. All of a sudden the problem got a whole lot worse and was almost repeatable.

Around four AM the penny dropped: what was happening was that a key was being pressed at the same time the sleep subroutine set the keyboard interrupt bit, which caused the key debounce interrupt to run between the end of the interrupt-bit-setting instruction and the next instruction, which was the sleep/powerdown instruction. The key debounce sub was of course clearing the keyboard interrupt bit… so cause found; half an hour later, with a slight rework of the code, problem solved, and I crawled under my bench for a couple of hours’ sleep before the boss got in.
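
For those who want to see the shape of that race, here is a rough sketch in C. The register KBD_INT_EN, SLEEP() and the helper functions are all invented stand-ins for illustration (real parts differ), and the fixed version assumes an MCU that still wakes on a pending interrupt when the global mask is set:

    #include <stdint.h>

    /* Invented stand-ins for the MCU's memory-mapped registers and
       intrinsics -- illustration only, not any real part's API. */
    extern volatile uint8_t KBD_INT_EN;       /* keyboard interrupt enable */
    extern void SLEEP(void);                  /* enter low-power state */
    extern void start_debounce_timer(void);   /* timer ISR re-arms KBD_INT_EN
                                                 and presents the clean press */
    extern void disable_global_interrupts(void);
    extern void enable_global_interrupts(void);

    /* Key-bounce removal: mask the keyboard interrupt while a timer
       counts off the bounce period. */
    void keyboard_isr(void)
    {
        KBD_INT_EN = 0;           /* debounce sub clears the enable bit */
        start_debounce_timer();
    }

    void go_to_sleep_buggy(void)
    {
        KBD_INT_EN = 1;           /* arm the wake source...              */
        /* ...but a keypress HERE runs keyboard_isr(), which clears
           KBD_INT_EN between this instruction and the next...           */
        SLEEP();                  /* ...so the CPU sleeps with no wake
                                     source armed: the freeze            */
    }

    void go_to_sleep_fixed(void)
    {
        disable_global_interrupts();  /* make arm-then-sleep atomic      */
        KBD_INT_EN = 1;
        SLEEP();                      /* wakes on the pending interrupt;
                                         the ISR runs after wake-up      */
        enable_global_interrupts();
    }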

As with a number of difficult-to-find bugs, once found they are easy to see and usually simple to fix. What slows the finding down is not being able to repeat the fault, so you can’t pin the bug down on your dissecting bench and get to its guts…

Clive Robinson May 3, 2014 4:17 AM

@ DB

So then how many alternate theories do you have to disprove in order to prove something?

I guess it depends on your razor… Friar William kept his well whetted and stropped and was keen to apply the principle of lex parsimoniae with it…

Clive Robinson May 3, 2014 5:21 AM

@ Nick P, and others 🙂

This might be of interest,

http://lynxline.com/inferno-raspberry-pi-image-beta1/

They are porting Inferno OS over to a Raspberry Pi and you can get a beta download.

And for those not sure what Inferno OS is: it’s derived from Plan 9 back in the late 1990s, and its virtual machine (register-based) has some advantages over Java’s virtual machine (stack-based),

http://en.m.wikipedia.org/wiki/Inferno_(operating_system)
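
To make the stack-versus-register difference concrete, here is a toy sketch in C (my own illustration, not actual Dis or JVM bytecode) computing a = b + c both ways. The register machine does it in one three-address instruction, so its dispatch loop runs fewer times per expression, which is one of the advantages usually claimed for register-based VMs:

    #include <stdio.h>

    enum { PUSH, ADD, STORE, HALT };   /* stack-machine opcodes    */
    enum { ADD3, HALT3 };              /* register-machine opcodes */

    int main(void)
    {
        int vars[3] = { 0, 2, 3 };     /* variable slots: a, b, c */

        /* Stack VM: PUSH b; PUSH c; ADD; STORE a -- four dispatches
           plus extra stack traffic. */
        int scode[] = { PUSH, 1, PUSH, 2, ADD, STORE, 0, HALT };
        int stack[8], sp = 0, pc = 0, run = 1;
        while (run) {
            switch (scode[pc++]) {
            case PUSH:  stack[sp++] = vars[scode[pc++]]; break;
            case ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case STORE: vars[scode[pc++]] = stack[--sp]; break;
            case HALT:  run = 0; break;
            }
        }
        printf("stack VM:    a = %d\n", vars[0]);

        /* Register VM: one three-address instruction, ADD3 dst,src1,src2,
           so the dispatch loop runs once before the halt. */
        vars[0] = 0;
        int rcode[] = { ADD3, 0, 1, 2, HALT3 };
        pc = 0; run = 1;
        while (run) {
            switch (rcode[pc++]) {
            case ADD3: {
                int d = rcode[pc++], s1 = rcode[pc++], s2 = rcode[pc++];
                vars[d] = vars[s1] + vars[s2];
                break;
            }
            case HALT3: run = 0; break;
            }
        }
        printf("register VM: a = %d\n", vars[0]);
        return 0;
    }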

MikeA May 3, 2014 11:15 AM

@Clive: “HODIE NATUS EST RADICI FRATER” on a Z80? Cool.

But mostly posting to ask for opinions: is the askpdf spammer blasting these comments net-wide, or does somebody really think the readers of this blog are likely to fall for it?

AlanS May 3, 2014 11:49 AM

For anyone following the link, posted by Benni above, to the Munk debate in which Glenn Greenwald and Alexis Ohanian debate Alan Dershowitz and Michael Hayden: you need to skip the first 27 minutes and 15 seconds. If you want to skip the intros as well, go to the 33-minute mark. Hayden starts speaking at 34 minutes.

Plausible Deniability May 3, 2014 1:13 PM

“Here are the secret rules of the internet: five minutes after you open a web browser for the first time, a kid in Russia has your social security number. Did you sign up for something? A computer at the NSA now automatically tracks your physical location for the rest of your life. Sent an email? Your email address just went up on a billboard in Nigeria.

These things aren’t true because we don’t care and don’t try to stop them, they’re true because everything is broken because there’s no good code and everybody’s just trying to keep it running.”

http://stilldrinking.org/programming-sucks

Jacob May 3, 2014 1:26 PM

@ MikeA

The spammers use bots that look for well-known blogging software (like Movable Type, used here, or WordPress) with known admin posting file names, and do a POST to auto-insert a spam message.

They don’t care if you read it or not; they do it so that the Google bot, when indexing the spammed site, will see links back to their site and raise its page ranking. High ranking is worth money, and when they get enough link-backs they sell their site for a profit.

Nick P May 3, 2014 1:57 PM

@ Plausible Deniability

That article was hilarious. He’s wrong about all software & languages being like that: there are many counterexamples, and I even posted some here. That the article applies to 90+% (99+%?) of programming seems accurate. However, if he thinks programming is bad, he should try security engineering. Even better, security engineering + programming + support. I think if I hadn’t chosen this, I’d have lived a long, healthy life of few worries & plenty of leisure time.

And to think I used to believe I made smart decisions… 😉

CallMeLateForSupper May 3, 2014 5:36 PM

OFF topic:

@Benni
“Now the us military police told the demonstrators that they will not leave before the protesters leave. So, the demonstrators are preparing for a long weekend….” LOL!

Are you old enough to remember the “Atomkraft Nein danke” (Nuclear power, no thanks) bumper sticker of 1980 or so? I saw them everywhere, especially in uni cities. (I think the Green Party was responsible.) Anyway, while I was waking up in the shower this morning “NSA kraft Nein danke” popped into my head. Sounds like a good slogan for those demonstrators.

Siderite May 3, 2014 6:30 PM

I was considering the issue of password disclosure when a storage device is seized by the authorities. I am not an American, so I am not discussing American law here (or at least not only American law).

I was thinking that the best way to not be able to divulge a password is if you don’t remember it. You can encrypt something with a complicated, computer-generated random string as the password, then write the password on a piece of paper.

Bear with me here, I know it sounds silly and ridiculously low-tech, but you can hide a small rolled piece of paper anywhere on your person: clothes, the lining of a bag, any book you carry with you, even under the skin in a special container or inside your mouth if you are really paranoid, etc. Let’s assume you would put it in a little fold of your sleeve. Unless the people who seize you bind your hands and then strip you by force, you have easy access to the password paper, but they have to physically search every inch of your clothing to find it. At any moment of freedom you can take out the paper and swallow it, preferably without them noticing, thus destroying it.

In this scenario, no one can possibly prosecute you for not disclosing the password, because you do not remember it. Even if they see you swallowing the paper, they cannot accuse you of anything, because there is no way of proving what was on that paper. Even if they torture you, you cannot disclose what you don’t know (although if it gets to that, I don’t think privacy is your first concern). After being released, you can get the password from a backup that is similarly stored on a low-tech piece of paper that can be hidden anywhere and is extremely difficult to find.
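
For the “computer random generated string” itself, here is a minimal sketch in C, assuming a Unix-like system with /dev/urandom; twenty characters drawn from a 62-symbol set give roughly 119 bits of entropy, hard enough to memorize that “I don’t remember it” stays plausible:

    #include <stdio.h>

    /* Minimal sketch: generate a 20-character random password from the
       kernel CSPRNG, to be written on paper rather than memorized.
       Assumes a Unix-like system exposing /dev/urandom. */
    int main(void)
    {
        const char charset[] =
            "abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789";
        const size_t setlen = sizeof charset - 1;   /* 62 symbols */
        unsigned char raw[20];
        char pass[sizeof raw + 1];

        FILE *f = fopen("/dev/urandom", "rb");
        if (f == NULL) {
            perror("fopen /dev/urandom");
            return 1;
        }
        if (fread(raw, 1, sizeof raw, f) != sizeof raw) {
            fprintf(stderr, "short read from /dev/urandom\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        /* Map bytes onto the charset. The modulo introduces a slight
           bias (256 is not a multiple of 62); acceptable for a sketch,
           use rejection sampling if it matters. */
        for (size_t i = 0; i < sizeof raw; i++)
            pass[i] = charset[raw[i] % setlen];
        pass[sizeof raw] = '\0';

        printf("%s\n", pass);
        return 0;
    }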

What do you think of this system?

Benni May 3, 2014 6:37 PM

I think there should be more demonstrations. Here is a nice chart with US military bases often used for drone strikes, and NSA bases in Germany:

http://www.geheimerkrieg.de/#entry-5-7879-das-projekt

When we want to move them somewhere else, the other facilities should face peaceful demonstrators every week, too. And of course all their activities should be carefully documented with cameras, since the shy spies are relatively rare lifeforms, and if they move out they would leave no trace if their actions were not carefully documented.

As for an NSA sticker, I think this here is nice:

http://www.heise.de/newsticker/meldung/Die-NSA-und-Mobil-Apps-Geheimdienste-schnueffeln-Angry-Birds-aus-2098447.html?view=zoom;zoom=2

It is a reference to the NSA sitting in Angry Birds, a game which has this “Mighty Eagle” figure:

http://www.heise.de/newsticker/meldung/Die-NSA-und-Mobil-Apps-Geheimdienste-schnueffeln-Angry-Birds-aus-2098447.html

By the way, the countries that are part of the MYSTIC program (i.e., where every phone call is stored for 30 days) include not just Iraq but also the European country Austria:

http://www.heise.de/newsticker/meldung/NSA-hoert-angeblich-auch-Oesterreich-komplett-ab-2165101.html

and there, some architecture lovers have also begun demonstrations:

http://www.heise.de/newsticker/meldung/200-Architekturfreunde-fotografieren-Wiener-NSA-Villa-1952279.html

They are especially interested in certain houses that have large satellite dishes in their garden, and documenting the architectural details of these facilities with great care in their photos.

Jacob May 3, 2014 9:16 PM

@ Siderite

It depends on the jurisdiction. If you are at a place where authorities have the legal power to ask you for your password, not remembering it is not a real-life option.

It will probably go along these lines:

You: I don’t remember it.
They: You are not cooperating. You are a computer guy, right? We don’t believe that you have important stuff that needs to be protected and you don’t remember how to access it. That does not make sense. Are you a terrorist? If not, prove it to us and show us the files. We will not keep the files, just see that there is no terror-related stuff.
You: I understand, I want to help, but truly I don’t remember.
They: We will tell you what: you are now a suspect that needs to be thoroughly investigated. You are under arrest. We gonna put you now in a cell, and tomorrow we will ask a judge for a 5-day extension to investigate you. We doubt that the judge will accept your funny “don’t remember” claim. You will spend the 5 days in a small cell with Vlad the Impaler.
After 5 days, we may extend your stay with us or file a formal indictment. We will then ask the judge to incarcerate you until the trial begins – 2-3 months at the minimum. In similar cases in the past, the prosecution had a 93% conviction rate.

Is your data THAT important?

Clive Robinson May 3, 2014 9:24 PM

@ Petréa Mitchell,

The UK’s chief medical officer was interviewed about this problem (“crisis” is a word she used) a few days ago, along with another person.

Some interesting facts and opinions were given. Apparently there has not been a new class of antibiotic since 1986, and drugs manufacturers are not putting forward new antibiotics for approval… The reasons given for this are that drugs companies have switched their focus from drugs that cure disease to those that reduce the symptoms of a disease and thus have to be taken for life, not just a short period of time. Further, one of the main causes of antibiotic resistance is the “standard use” of antibiotics in feedstuff for animals in high-intensity farming (where the use of antibiotics can be shown to increase animal weight by as much as 20% over otherwise healthy animals at the same age points that receive the same diet minus the antibiotics).

Further, it was pointed out that if new antibiotics were restricted to human-only use, then the drugs companies would not be interested in doing the research, let alone entering the approvals process.

This failure of a market to produce what is clearly needed is yet another example of why slavish devotion to “Free Market Economics” is a very bad idea, and in this particular case it might end up killing us all (or at least a significant percentage of us). I guess it should be said that, for the common virus infections, what causes the majority of the actual deaths is the secondary bacterial infections…

And before anybody asks, no, I don’t have any solutions to this issue I would have any confidence in. The reason is there are too few drugs companies, so they act in effect as a cartel without being one by legal definition, and this has occurred as a result of legislation…

One thing that has been pointed out in the past is that there are other ways of getting bacterial infections under control, one of which is the use of phages, which dates back quite some time. That is a problem in that, as the methods and processes are prior art, getting patent or other protection of investment is not available. So the general perception is that no company is going to sink money into either research or gaining approvals. However, one US company (Intralytix), based in Baltimore, Maryland, is actually developing phage-based products for the food processing industry and has FDA and EPA approvals, so it can be done.

Clive Robinson May 3, 2014 9:53 PM

@ Siderite,

As for swallowing the piece of paper, don’t think about it: most jurisdictions have legislation relating to presenting false evidence / tampering with evidence / destruction of evidence, with significant jail terms (upwards of six years in some places, and life or longer in others).

Also there is no need to do it that way; Nick P, Wael, myself and others have discussed various “legal” methods in the past on this blog.

Clive Robinson May 3, 2014 10:22 PM

@ MikeA,

@Clive: “HODIE NATUS EST RADICI FRATER” on a Z80? Cool

Ahh, the Multics code “that should not speak its name”…

There are various stories about the how, why and what of it, but the most reliable sources blame a hardware error in the virtual memory system. I guess Nick P has a link to the actual method squirreled away in his link farm 😉 but if I remember correctly it uses a “spaghetti stack”, which is oddly coincidental in that Latin and spaghetti both originate from the same place, as does the Holy Roman Empire –Roman Catholic Church– which has a standard incantation from which the Multics error message was supposedly derived.

My coding problem was not on a CPU as powerful as a Z80; it was an embedded reduced version of the 6800 from Motorola that had hardware extras which included the sleep function and the keyboard interrupt function to wake from the low-power state.

As for KnightOS for the calculators, I have one of the “offending items”, as does Figureitout; however, I’ve always had more pressing issues to deal with that stop me from putting on a new OS or porting over FIG-Forth or a Z80-based BASIC I wrote back in the 1980s 🙁 why does life always get in the way of fun when you are over thirty…

herman May 3, 2014 11:49 PM

Siderite:
Doesn’t work like that.

In most jurisdictions you can be detained indefinitely for ‘contempt of court’. The judge could keep sending you back to the cells every two weeks until you remember the password, or are positively diagnosed with Alzheimer’s decades later, or die – whichever comes first.

Figureitout May 4, 2014 2:23 AM

This one is definitely one for you to have a think over
Clive Robinson
–Jesus… Haven’t done much ASM at all, too little to say anything about it besides that I just want to get it working cleanly and get above it. I’m getting better but nowhere near being able to at least “follow” the code. I’m trying to limit my hogging the bandwidth but I’ll feel rude if I don’t reply. I deal w/ weird-ass sh*t nearly every day it seems like; many times I can’t solve them and they disappear. I’ve got an infection problem; I don’t know how it spreads and stays alive, I don’t know where it is or what, but it keeps hiding. I can’t see any code of it (yet), but I’m working on some forensic tools; found another one tonight that was pretty good and extremely easy to use (diskspy by miray). But I still have a functioning computer which can do some things so… Latest one (tonight actually): I “improperly” shut down my OpenBSD PC; I was able to log in initially for some reason but now I can’t. Had to use “boot -s”, get in and change /etc/passwd and /etc/myname, but it still won’t log in. Aaaaannnd I just sent the shell into an endless loop. Just trying to start programming some stuff offline… Think I’m going to wipe it and put another Linux on it, get all the programming crap I could possibly need and documentation software, and maybe some SDR; it’s got too many juicy ports not to. My BeagleBone Black seems like a nice candidate for OpenBSD then; just need another screen, maybe the one the agents attempted to break in the garage.

Speaking of the bug, near the end he had another typo… (“that they key was”); he should put in a joke w/ that: “thez key was” :p But he could use an emulator, as can I. I don’t get how people did debugging “back in the day”; incredibly difficult and lots of broken chips I imagine. The emulator could always be wrong too, based on real freaky chip behavior.

OT, speaking of ports: besides one of my TIs (need to create a serial cable as I lost mine), I really want to get this old Cassiopeia E-115 up and running again w/ Linux or at least Windows. There’s some sort of power issue happening and I may have already done something they said not to do; broke the little pen thing too lol, damnit…

Siderite
–I got illegally searched one time by coppers; the incompetent pigs didn’t check a zipper pocket right in their face. If it’s an OTP, just say it’s a password for something like one of the job sites these days; you have to create an account for each job applied for. No need to dispose of evidence unless they did catch you exchanging code books; anyway, it’s good for them to go about “cracking” OTPs (they can already make up crap that it decrypted to and charge falsehoods).

Was going to post this; now it’s pretty relevant for you, kind of my vision a little. This guy got a nifty P.O.C. working, except I would nitpick some things like just a pure LCD and maybe try another filesystem (not to take away from a nice hack). I’d just make it w/ a male and a female port so you just exchange .txt files “in the field”.

http://jaromir.xf.cz/projs2/microreader/microreader.html

Wael May 4, 2014 2:27 AM

@ Siderite
As @ Clive Robinson mentioned above, the subject matter was discussed previously from various angles. There is another viewpoint to add, and that is the purpose of the password. It makes a difference. The password could serve various purposes (hint: that’s one reason one shouldn’t use the same password for everything) — Separation of Duties? Roles?

  • Remote server password: I don’t think they need to go through the account holder for this. The entity that hosts the service is a better choice.
    • Email account: ISP, “Free” like yahoo and gmail,…
    • Social media account: Facebook, LinkedIn, etc…
    • Cloud storage: Box, iCloud, Google docs,…
      Your password may be obtained. If there is a passphrase that protects a private key locally stored on the device, and you follow your technique, then the outcome is likely to be similar to a personal device password.
  • Personal (what else can it be? Public? Shared? Hint, hint) computer – laptop, tablet… One tactic previously discussed was to create different accounts with different passwords. Be cautioned that if you get caught and end up with @Jacob’s “Vlad the Impaler”, you might as well change your account name to a subtle variant of @ Figureitout’s recommended, most eloquent, email address name choice.
  • Work computing device – Work laptop, Smart phone… Well, it’s not a good idea to store personal information on devices one doesn’t “own”, but your mileage may vary…

you can get the password from a backup that is similarly stored on a low tech piece of paper

If you still believe that, then I have a Rolex to sell you, and I’ll wrap it for free in some papyrus.

Wael May 4, 2014 2:47 AM

@ Figureitout,

I’ve got an infection problem, I don’t know how it spreads and stays alive, I don’t know where it is or what; but it keeps hiding.

I suggest you image a pristine device, then burn the image onto read-only media. Separate your “more stable” OS and configuration files from your “mutable” data files.

Wael May 4, 2014 3:25 AM

@ Figureitout,

but this guy got a nifty P.O.C. working…

I like it! Thanks for sharing. Could be adapted for some security related ideas.

yesme May 4, 2014 4:23 AM

@ Nick P

Off topic: you should see this interview with Niklaus Wirth, especially when he talks about FPGA microcode (from 12:00).

The problem today is that we think in overly complex terms. We are very good at optimising, but the bigger thought just isn’t there. And the bigger thought is, of course, simplicity.

… mumbling again about ridiculously complex and way too much IETF shit …

Thinking about it, I come to the conclusion that, besides a serious reduction and rethinking of IETF standards, simple languages and OSes are the only real solution to computer vulnerabilities.

Oberon makes more and more sense to me.

Siderite May 4, 2014 5:45 AM

@Jacob, Clive, herman, Wael, Figureitout, thank you for the replies!

What you describe is scary. First of all, this “contempt of court” idea seems to override most of the existing laws. They could use it for anything, really, as they did in the McCarthy era. Isn’t there some oversight for it? It’s scarier than NSA snooping, as it can affect absolutely anyone who gets into a courthouse. How can they accuse you of contempt just because they don’t believe you forgot something (or, better said, never memorized it in the first place, and then lost the piece of paper)? What about reasonable doubt? It seems weird that they cannot hold me for more than 24 hours in a cell without proving I am a murderer, but they can hold me for 5 days without proving I had a password in my head and have not forgotten it.

As for destruction of evidence, shouldn’t they be able to prove what evidence you destroyed? A small white thing taken from your sleeve and swallowed could be anything: a mint, an anxiety pill, a piece of loose skin.

There is no question of the importance of the data. It could be anything from deleted junk emails to Snowden data or child porn. It is a question of finding a way, accessible to anyone, to carry an encrypted hard drive and be able to withhold the password without repercussions. Can you please direct me towards the legal solutions you guys discussed in the past?

And I am pretty sure I would be very uncomfortable in prison, even without weird inmates in my cell, but Vlad the Impaler would be OK, since I am Romanian, too 🙂

Siderite May 4, 2014 5:56 AM

@Benni

The NSA dragnet snooping is annoying, I agree, but in a sense I sympathize. I am offended by the strong distinction that they make between American citizens and the rest; it feels like they discuss “real” citizens versus the rest, kind of like ancient Rome. But anyway, I sympathize because, as a software developer, I would probably come up with a similar solution: just stick everything in a large database that you can query. After all, there should be some oversight on WHAT you are querying, not what you are storing.

Of course, then reality rears its ugly head and makes the concept Orwellian and prone to misuse and unconstitutional and so on, but what software dev really lives in reality anyway? 🙂 And since I am the devil’s advocate here: am I the only one bothered by the kind of “speaker for the masses” persona that Snowden is now displaying? OK, he did a great job publishing the info that he had, but why is he also discussing it and giving interviews everywhere? I strongly believe it has gone to his head and, unchecked, it could hurt rather than help the public outcry against the NSA snooping.

farshid.shaikhrezai May 4, 2014 8:00 AM

As a reaction to this court ruling:

The Danish Data Protection Authority (DDPA) has recently made an announcement (in Danish) about the risk of public authorities hiring US companies, as these companies are obliged to comply both with Danish law on the protection of personal information on the customer side AND with demands like the one in the above court ruling, even when servers are on Danish soil.

Referring to the recent US court ruling against Microsoft, the DDPA stated it cannot rule out ordering government authorities to nullify such contracts with US companies.

In a parallel development, a Danish version of the News of the World hacking scandal has come to light, where revelations so far amount to a tabloid newspaper having paid a sysadmin over 4 years for SMS messages about the credit card usage of celebrities and politicians.

While the Snowden revelations have attracted little interest from politicians and government figures, this time ministers and members of parliament are lining up to condemn the tabloid’s act of bribery and to pledge new laws for better protection of personal data.

The company responsible for handling credit card information is Nets, which used IBM services at the time of the security breach. It seems the suspected sysadmin was an IBM employee responsible for mainframe administration at the time. Nets’ takeover by a consortium comprising two US investors was recently inked by the government, under heavy criticism about the state of data protection after the takeover.

Jacob May 4, 2014 10:55 AM

@ Siderite

Two comments:

  1. You are mistaken if you think that “they cannot hold me for more than 24 hours in a cell without proving I am a murderer”.

They cannot keep you for more than 24 hrs without presenting your case to a judge.

When they do present your case to the judge, many times under sealed evidence, no proof of any wrongdoing is necessary – just reasonable suspicion, a supportive argument, and a need for further investigation. Most judges agree to a couple of holding extensions without getting evidence of real investigative progress from the police – unless your defence attorney can make a compelling counterclaim.

  2. To realize how stressful the situation of being interrogated is, even if you have done nothing wrong, please see:

http://mondoweiss.net/2012/06/do-you-feel-more-arab-or-more-american-two-arab-american-womens-story-of-being-detained-and-interrogated-at-ben-gurion.html

Unless you are a seasoned criminal who has been through a few interrogations in the past, you will most probably hand over your password.

Miranda at Heathrow did.

“They were threatening me all the time and saying I would be put in jail if I didn’t co-operate,” said Miranda. “They treated me like I was a criminal or someone about to attack the UK … It was exhausting and frustrating, but I knew I wasn’t doing anything wrong.”
“I was in a different country with different laws, in a room with seven agents coming and going who kept asking me questions. I thought anything could happen. I thought I might be detained for a very long time,”
“They got me to tell them the passwords for my computer and mobile phone. They said I was obliged to answer all their questions and used the words ‘prison’ and ‘station’ all the time.”

Clive Robinson May 4, 2014 11:52 AM

@ Siderite,

Don’t make the mistake of believing that laws on how long you can be held apply to courts; they don’t, they only apply to the police and some (but not all) investigative authorities with a right to detain you. If you are in a country with a Napoleonic court system, the investigating magistrate can order you detained until their investigation is complete, which might take ten years or more. If you are in a country with an adversarial system, the judge can compel you to reveal information by detaining you in solitary confinement in the court’s prisons, which are often not designed for long-term detention and thus lack beds, toilets, washing facilities, or even the means to provide cooked food, hot/cold drinks, or even water fit for drinking, or any kind of exercise or medical support. The only right of appeal is to the court itself, which usually means the same judge…

I believe the longest anyone was held this way in the US was seventeen and a half years, simply because the judge chose to believe the wife over the husband in a divorce case, and the husband was only released after the judge was no longer in charge of that court… Also, the US has special prisons where you have no right of communication except to court-appointed persons. You effectively disappear, as nobody inside or outside the prison who is not an appointed person has the right to know who you are, where you are, or even if you are alive… Israel is known to have similar prisons, as are Italy and Spain. You can have a look at Amnesty International for details on other countries.

But it gets worse: there are two places where legislation applies and courts have the power to have you produced, namely when you are inside a country’s borders, and in international space, which is not part of any country but where international law applies. That leaves the space in between the two: the border area. At an airport it’s usually called “air side”; here you have no legal protection whatsoever and you are in limbo. Just recently the US decided to expand this zone to a depth of one hundred miles inside US borders, which leaves few places where border authorities cannot do to you what they wish. And guess what: they don’t have to tell anyone they have you, or give you access to legal representation, or say where they are holding you, or even whether they have transported you to another country. This includes having you submitted to surgical procedures to recover anything you may have concealed in your body… it’s your guess who is responsible if you suffer long-term harm or die during such a procedure, but you can be reasonably certain you or your nearest and dearest won’t be able to bring action against them…

Figureitout May 4, 2014 1:11 PM

Wael RE: Pristine image on a pristine machine from infected machines and backdoored connections
–Appreciate the tip, but I think you and I aren’t green enough to believe it. Newer machines have these insane fab layouts, and every component just gets smaller (and harder to remove and replace). The reset button on the BeagleBB is so small now I have to push it w/ my nails, about 3×2 mm. I compare it to old and new cars. Older cars you could customize easily, minimal computers in them; newer cars are an entire new attack surface for hacking, and they have plastic covers over the engine to make it harder to change your own oil, inspect the engine, etc. Even the “secure” boot UEFI is trying to lock in the pre-burned ungodly amount of extra crap that’s standard w/ a commercial PC from the store; going to have to evade that. SDR is exploding right now; there could be a hidden encrypted protocol and a TLA backdoor in chips just waiting for the magic command. Tracing a circuit board of an older computer versus one of today’s is night and day. These problems aren’t changing for the better, only getting worse. If the blog (and me) is still around in a few years when I feel confident enough to try and set up a secure modern PC, I’ll post my plan thoroughly, easy to follow, from buying to first boot. It’ll be extreme and I’ll try to put in “breakpoints” where you add your own bit of randomness to keep the overall protocol intact. For now I have ~7 computers, some micros, radios, and other knickknacks I can train on.

RE: my classy email address
–It was meant to be stated as a sentence. Just checked it, Bob “pumpkin butt” Smith is doing great; I was reminded of it as I stumbled upon a funny picture I won’t post (involves pumpkins & breasts, google it and you’ll die), which would’ve been a good partner for Bob. :p

RE: microreader
–Yeah that’s the first thing I thought when I saw it. Simple, 3 buttons and tiny screen, very limited functionality. The guy’s a very capable hacker who put together a 4-bit computer in a weekend; decent sized circuit. Looking forward to what else he comes up w/.

Siderite
it could be anything…a piece of loose skin
–Can’t…resist…bad…joke: http://i.qkme.me/3t2qx0.jpg
I’m sure some of the other Dutchmen w/ “goldmembers” of the blog could give you some better insight into that. :p

Wael May 4, 2014 1:43 PM

@ Figureitout,

Appreciate the tip, but I think you and I aren’t green enough to believe it…

Sadly, you got a point there! It’s a product of “lack of control” and “lack of awareness”. With newer technologies, as you pointed out, both are dwindling, although every now and then some story is unveiled and consumer awareness is heightened. That said, adhering to some basic security principles can’t hurt, given that one maintains guard. If you boot from read-only media (if you are allowed), then it should be easier to isolate the problem. It may very well be in the firmware or hardware. Right now, you don’t know where it is.

Nick P May 4, 2014 2:33 PM

@ Figureitout

I already told you what you need to do to solve this problem. Just save up money and do a version of it. Will save you plenty of stress.

Just remember to have another person buy it for you. Not one at your school, but maybe at the church. Just say someone has been messing with your mail, you don’t trust a package sitting at your door unattended, and even a Post Office box won’t allow electronics. Offer to compensate them a bit. Project a semi-defeated, yet determined, attitude.

Someone will pity you and help. Also remember to tell them to call you in a time range where you can leave instantly. That gives an opponent a tiny window of time, too impractical to work with. Also, don’t bring your phone on the trips to set this up, or have the batteries out. Better to use an old car or bicycle to avoid OnStar, etc.

Siderite May 4, 2014 3:59 PM

@Jacob

Jesus, why did you send me that link? I won’t leave any country now, ever! You stranded me in Italy, damn you! To think I almost accepted a job at an Israeli firm; they told me that one of the conditions of employment was to go to Israel for a week and asked if it was OK, and I said yes. I won’t sleep tonight, I am sure. Horrible link! Why not make a Hollywood movie about it? Wait, don’t answer that… [still in shock]

Jacob May 4, 2014 7:58 PM

@ Siderite

Sorry to ruin your evening…

Anyway, if you still entertain that job offer in Israel, here are some tips:

  1. If you are connected to the Arab/Palestinian world, or are a Muslim, or politically active on the left side of the map, or have participated in demonstrations against Israel, or you seem to be an illegal migrant worker / sex worker, then you might get interrogated, your belongings and electronic storage devices / emails searched, and you might be refused entry.

If you do not fit the above profile, then there is a small chance of being stopped on entry. It will help to have the contact of the inviting company and a letter of invitation handy on arrival.

  2. On exit, the story is different. Here they don’t care much whether you are a “trouble maker” or not, but they do care about flight security. A security guy asks each person on the way to the ticket counter 3-4 questions, and based on some unknown profiling/factors, especially if you are not a Jew, there is a chance that they might stop you for a thorough search and questioning.
    If they do decide to give you the rub, then you might expect:
  • to be intimidated and be treated rudely.
  • to be asked blunt personal questions
  • to be thoroughly searched – clothes, body (exterior), belongings, papers, notes, books.
  • to have all content of electronic equipment / email / chat history looked at or copied.
    If they feel that they need a deeper look at these, they may not release your notebook / smartphone and say that you will get it back in a few days (there is some anecdotal evidence that occasionally they have lost the “detained” notebook and the person got nothing back – nor any compensation)

This exit rub will normally be carried out until a few minutes before scheduled departure time. At that point, they will put your smartphone etc in a bag and accompany you to the exit gate. It is rare that you miss your flight.

I would guess that this ordeal happens to a few persons on any given day. It happens much more frequently to Arab/Muslim persons.

Practical suggestions:
– keep handy an invitation letter from the local company on arrival.
– keep handy a summary/meeting letter from the local company on departure.

Prior to the trip:
– prepare a disposable email box with some benign entries in it. This is the box you will log into if requested by security.
– sanitize your computer of all things private or important. Have just the OS, dev system and required utilities / comm programs / browser. Load the actual private/important stuff to an accessible computer/cloud and download it when you reach your destination.
– Be prepared to leave the computer in their hands and possibly never to see it again.

Although some Israeli newspapers occasionally complain about this rude “welcome” or “bon voyage” for visitors, and even some higher-ups have taken notice, the practice continues. My personal belief is that they treat you roughly as a method to keep you off balance – maybe you would divulge some interesting information under stress. But it doesn’t look good, it gives Israel a bad reputation, and I wish they would find a different method to keep travel secure while being polite and respectful to unsuspecting visitors.

Clive Robinson May 4, 2014 7:59 PM

OFF Topic:

As some of you may know, affluent Chinese, where they are allowed to, are investing heavily in property in foreign cities. In London, for instance, the increase in property value is ridiculous when compared to other property in adjoining regions and England as a whole. It has got to the stage where it’s a “Property Bubble” in all but official name.

The question is: why are the Chinese wealthy elite doing this? Well, it appears that the Chinese property bubble, like that of Japan a few decades ago, is about to burst. And as the bubble is one of the mainstays of China’s growth over the past few years, as the rest of its economy has slowly stagnated, you have to ask what happens when the China bubble bursts.

Obviously enough of China’s wealthy elite think it’s going to burst very soon, which is why they have been cashing out in China and cashing in in London and other places (creating new property bubbles in the process).

But the question of what happens when the Chinese bubble bursts is important for other reasons. China holds a lot of US paper; if their home economy starts to go, they will probably try to capitalise on that paper in one way or another, which may cause some real pain in the US and contraction in certain areas (any bets on ICT security?).

A number of people have noticed the parallels between the current state of China’s economy and that of Japan’s a few decades ago, and are thus looking at the Chinese economy in that historic light (remember, Japan very nearly brought the world economy down back then, and the world economy was in a stronger state than it currently is). You can read more about this at,

http://qz.com/198458/zombies-once-destroyed-japans-economy-now-theyre-infecting-chinas/

Siderite May 4, 2014 9:16 PM

@Jacob

Thanks, that is a nice summary and very useful. Also, solved the problem of nightmares, I am not sleeping tonight 😉

Anura May 4, 2014 10:04 PM

@Clive Robinson

Right now, the US and EU have very fragile economies due to many years of rising inequality and lack of action from governments. Basically, we are in the same situation we were in during 2007, where we were relying entirely on private investment for growth while government investment was low. So all it takes is for something to scare investors, and we end up with another recession.

In the US, neither Gross nor Net Private Domestic Investment has reached the same level as it was in 2007 (which wasn’t even the peak); Net Private Domestic Investment is actually at the lowest level since 1993. Gross Government Investment in the US is actually at about 2005 levels, while Net Government Investment is down to 1996 levels.

So you basically have to scare already frigid investors. Not going to take very much. The thing is, if private investors don’t invest until the economy is strong again, and without a political climate that allows for government investment, we need to rely on consumer spending to increase… but if investors pull their investments, businesses will lay off employees, and consumer spending will go down… So that leaves us with government being the only entity that can pull us out. Anyone who has paid attention to the political climate knows that that’s not going to happen. So we are looking at good times ahead if China has a crash and US and EU investors get scared.

Nick P May 4, 2014 10:28 PM

@ yesme

Simplifying languages for teaching makes sense. I agree with him on that. I enjoyed his recollection of his first personal computing experience. He went from a terminal connected to a large machine serving hundreds to a personal computer that allowed him “more flexible text editing & fonts & graphics & all that.” Seriously, Wirth, text editing is the first benefit that comes to mind? Lol. Well, he did a lot of it so it makes sense. I love this statement though:

“And I decided that I wouldn’t want to program with the old dinosaurs anymore and I had to have one of these things [PC’s], too. But they were not on sale. They couldn’t be bought. And the only thing I could do was to decide to build one myself.”

Lesser men would’ve griped all the way back to their timesharing systems. Niklaus Wirth says “So you won’t let me buy a cutting-edge GUI PC? I guess I’ll just build one.” And that’s what he effing did. Layer by layer, piece by piece, useful app by useful app, all in a language suitable for safe & maintainable system design. As far as old-school labels go, do we call him a hacker, a guru, or the rare “wizard”? I think in his niche he deserves wizard. Gutknecht needs one of the labels as well, as he contributed plenty to these efforts.

“Thinking about it, I come to the conclusion that, besides a serious reduction and rethinking of IETF standards, simple languages and OSes are the only real solution to computer vulnerabilities.”

Yet, I come to a different conclusion than that. I’m putting it in a new essay instead of this comment.

Nick P May 4, 2014 11:20 PM

Are simple, safe languages the answer to security woes?

Simple is actually dangerous. As an old essay argued, focusing on simplicity over other things led to security disasters such as UNIX & C. This is not to say simplicity is bad: it’s just one of many attributes that concern security. The goal we should seek is “simpler.” High assurance engineering has taught us such systems must be modular, have good interfaces, be thoroughly tested, be thoroughly reviewed by qualified people, and be implemented in a way that facilitates review. Plenty of simplification was often used to accomplish this. Yet, minimal isn’t always better for developers that will end up building on it later.

Specifically, a simple & secure interface that average developers can use (esp. with management pressures) might need to be fairly complex underneath. Bernstein et al’s Ethos project gives us an example of this concept at the OS level. Their constructions greatly simplify life for the application developer, while internally being quite complex compared to what some would push focusing on simplicity. The complexity is intrinsic to the work to be done. Sacrificing the complexity for simplicity just means that those same developers would create ad hoc versions of this for themselves that were very insecure. Matter of fact, this is the status quo.

The language, tools, hardware, OS, etc. all need to present a good baseline to build on. The baseline should be simple enough to understand & use in an effective, safe way. If this complicates its implementation, so be it. Designers should just use the best systems engineering practices they can to break it into manageable pieces. The high assurance model suggests keeping each component comprehensible, easily validated, and written in a safe way. Each interaction is treated similarly. This concept allows us to securely compose things we understand, piece by piece, abstraction by abstraction, into an effective system that’s also easy to use safely/securely. That last point is ultra-important, as a failure there will cause the system to be ignored or abused by its own potential users.

My model for developing a better language

I’d start with existing languages used to solve the problems. I’d look at languages such as C#, Java, Python, etc. that have type safety, productivity-boosting features, & widespread use. I’d look at how often each feature is used, what effects it has on comprehensibility/safety, and so on. I’d trim out anything that rates negative on this list. I could pick and choose every single feature I like, from primitives (e.g. typesafe function pointers) to syntactic sugar (e.g. foreach). I’d consider LISP-style macros because it’s a new language & why the hell not. 😉 The list would include features from non-mainstream languages that are advantageous. This list becomes the upper limit of what the language can become in terms of features.

The next component is the lowest layer. I’d look at ISA’s, VM’s, etc. I’d look for a representation that can efficiently map to existing hardware, possibly be implemented more directly in the future, is easy to analyze, enforces optional safety checks on operations, and so on. This layer is the target of any other languages. It might also be callable directly in a high-level language for performance or some other justifiable reason. This layer will also be safe by default for that reason. As P-code exemplified, it will likely be easier to port to new architectures and platforms than a full language + compiler.

The next component is a core subset of the language. This is just enough of the language to efficiently implement an arbitrary program, esp. itself. This is akin to VLISP’s PreScheme, Stackless’s RPython, or Wirth’s Oberon, with or without garbage collection. This language must be expressive, efficient, and safe enough to write applications, compilers, OS’s, drivers, etc. with runtime parameters tweakable for the use case. It will have a bulletproof compiler a la CompCert. As such, it will have many uses, the more promising of which is as a compiler target from other languages, including initially the extended version of itself. It’s a good way to bootstrap it, even easier if macros are included.

The next component is the full language. The full language will have mandatory and optional constructs. Constructing it is the point of this post. I suggest that we construct it gradually. The core language is the first level, giving the user enough useful functions for anything with easy learning. Level 2 constructs are extra functions that real-world programmers use 80+% of the time to get more done than a Level 1 language user. Level 3 are the rest that are nice to have for certain situations, but totally optional. Tools might be developed to downgrade each level into a lower level with comments & functionality intact. Additionally, coding guidelines and build environments might enforce a given level to be used.

So, how big should we make the full language? Size hurts understanding. We can always expand software using libraries. We also know that certain language features make writing software so much easier, resulting in software that is itself more understandable. So, what’s the right tradeoff? My wild guess is that Level 1 should be as easy as a Wirth language, Level 2 shouldn’t take more than 50-100 pages of a SAMS Teach Yourself book to explain the extra features, and Level 3 shouldn’t take more than 100-250 pages on top of that. If it goes past a certain point, the complexity is too high and the designers must start pushing features into libraries to lower complexity.

As a sidenote, Wirth’s heuristic was how easy it was to compile. This should be factored in to some degree. However, easier to write, read, extend, & validate are much more important. Plus, we can always do an IDE that deals with this problem. For example, many modern IDE’s can compile a module in the background right after you finish changing it. The scripting languages run changes pretty much instantly. LISP has incremental, function-level compilation that took milliseconds on a single-core machine. All in all, I think compilation speed should only be a metric as far as “Let’s avoid anything causing compiles to get in the way of the developer’s flow during their normal develop-run-modify process.” The compilation of the production version might take considerably longer, as it might employ many extra checks & run on a dedicated machine/cluster.

All of the above components will be developed simultaneously. The L4.Verified experience (and others in high assurance) showed that it’s invaluable to have people on very different aspects working in parallel with good communication. In short, all kinds of issues are caught & improvements made early on. I’m extending this to say that the language should be developed, implemented, documented, AND used all at the same time. There should be periodic acceptance tests where ordinary developers are trained on a given version & try to use it in various projects to provide feedback on what to expect upon widespread use. Doing all this at once, while maintaining consistency, will knock out so many problems that other language teams might wonder why they didn’t do it themselves.

That’s my take on it. One shouldn’t take simplicity too far. I think Wirth might, and Moore (i.e. Forth) definitely does. On the other end, LISP showed how the basic mechanisms can be fairly simple, safer than average, and extremely powerful at the same time. Python is a watered-down LISP designed for readability & practical use (“batteries included”), with incredible results so far in uptake, productivity & defect rate of the average programmer. Yet, I wouldn’t call the combo of language & standard library (must exist as a whole) simple. So, I’m aiming for something like Python that can be used in areas where C/C++/C#/Java are more typical & done within the framework I described above. I think the scheme has potential to improve developer experience & system safety, while removing many risks across the entire lifecycle that exist in mainstream approaches.

Anura May 5, 2014 1:47 AM

If something new is going to be designed, I think one goal should be to make portability as easy as possible. Beyond just allowing you to have builds for various architectures, I think the end goal should be to make all software be distributable in a manner that is completely independent of OS or architecture. Java has a VM which is good for portability, but there is definitely some overhead. What I would like to see is an intermediate language that an existing language like C or C++ can be compiled to that is designed to be as close to assembly as possible while being architecture independent.

There should be both textual and binary representations. Then, instead of distributing binaries targeted at a specific ISA/OS, you distribute IL binaries that then get compiled to native code at install time (which is much faster than building from source, while also allowing closed-source companies to adopt it). Support can probably be added for some pieces to be JIT compiled, just to allow for things like generics.

Ideally, a driver API would also be defined that would allow driver makers to ship machine-independent binaries at only a slight performance cost for an additional layer of abstraction.

yesme May 5, 2014 2:22 AM

@Anura,

I would rather see the source code as being portable 😉 Since Java I don’t believe in intermediate formats anymore. Just compile the software to the metal. If the language is simple it’s not that hard to write platform specific code for the compiler.

Closed Source software != safety

Anura May 5, 2014 2:51 AM

@yesme

Closed source isn’t going away any time soon, and I would rather be able to make the OS a choice independent of software and hardware (within reason) than to limit the choices in order to push an agenda. If anything, getting the software industry to adopt a cross-platform framework is going to increase adoption of open-source OSs.

Anura May 5, 2014 2:56 AM

I should also note that compilers already compile to an intermediate language anyway, otherwise you would have to write a separate implementation for every combination of architecture and language. The difference here is that you standardize it and everything is compiled at install time (unless the language has generics).

yesme May 5, 2014 3:43 AM

@ Nick P

“Simple is actually dangerous. As an old essay argued, focusing on simplicity over other things led to security disasters such as UNIX & C.”

No. That is incorrect.

C and UNIX were designed with being as close as possible to assembly in mind. In C you could do multiple things in one line of code, incl. pointer arithmetic.

It was designed by PhD guys for qualified college guys.

But it wasn’t designed as a teaching language. It didn’t have a proper type system, and readability was lacking too (which is the most important thing in software). It wasn’t designed as a programming language for the masses. And because it was designed for such limited and expensive hardware, safety checks didn’t enter the language either.

That was the ’70s in California.

Today (40+ years and an incredible leap in Moore’s Law later) we see the devastating effects of these shortcomings on almost a daily basis, each catastrophe bigger than the last.

It’s the language semantics, the implementation, the installed base, and the fact that a lot of people still believe in it that are causing the damage.

I still firmly believe that a simple language, designed with good readability, type safety, range checks, GC, modularity and namespacing (I don’t mention OOP!), is quite a good answer for implementing safe software.

For the GUI or other end user stuff, use interpreted languages without types or something like QML.

Simple languages do have their drawbacks. One of them is that sometimes it takes a couple of lines to write what you can do in one line in a more enhanced language. So be it. I write this story of probably 30+ lines without any pain.

yesme May 5, 2014 4:03 AM

@Anura

“Closed source isn’t going away any time soon…”

Yes, that’s true. But that doesn’t mean it’s anywhere near safe. You have to trust every person working on the software at the company, their vendors, and the management.

Ken Thompson once said:

“You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.)”

So I stick with my previous statement:

Closed Source software != safety

Clive Robinson May 5, 2014 7:18 AM

The problem with languages is many fold and I’ve made a few of them here in the past.

One of the real issues is the underlying hardware and the laws of physics. Now that we have, in a number of ways, hit the buffers on Moore’s Law, this needs serious consideration at the highest language levels, just as it did when C became a necessity over assembler nearly half a century ago.

We are moving from single CPU/core systems to multi CPU/core systems, even in lowly microcontrollers. Any new language that does not take this into account is not going to get to fledgling status, let alone fly. Which means multithreading and parallel programming have to be built into the very foundations in an easy and secure way.

Also, not only does the language need to have strong but flexible type safety and invisible –to the programmer– memory management and garbage collection, it should also have other desirable features such as functions as first-class objects and lazy evaluation. Further, languages need to rely less on compile-time and more on run-time optimisations and execution order. Which is also why imperative languages are going to be steadily less useful with time at the higher levels of the programming language stack.

Unfortunately a number of the languages that exist (Ada and Lisp included) just don’t “cut the mustard” in all these areas, and we need to look at languages that are currently considered “bookish” or “academic” in nature for our prototypes (one such is Haskell [1]).

But there are issues and problems with these high-level languages you just cannot avoid bumping into. That is, when you get to the metal you will find CPUs, and the languages used by them, are imperative in nature and require eager evaluation and specific memory addressing. The same is true at higher levels in specific areas such as IO and certain aspects of process control. In fact some types of system, such as those with “Real Time” response requirements, just cannot be done in anything other than an imperative manner. Look at it this way: when you put your foot on the brake in your car you don’t want all the delays of lazy evaluation and garbage collection getting in the way, otherwise the most likely next garbage collection will be you in the back of an ambulance or your car on a tow truck…

There is of course another problem once called “The nut behind the wheel” which is many programers for all their “OO goodness” are imperative not functional programmers in their thinking and don’t want to or can’t change…

Part of this is you just cannot “pick up” functional programming “as you go along”; you really need to go back and almost learn to program again, which is going to be seen as a major bump in the road for most “code cutters” and their managers, and will almost certainly get inflated further by needless argument (hmm, is it me or is it going to get toasty 😉

However, those who have taken the time to learn functional programming find improvements not just in productivity, reliability, maintainability, code reuse and security in such a language, but in the way they think about code from a design perspective. And this carries many of the advantages across into subsequent programming in an imperative language. Which tends to suggest a lot of the problems are actually not the language used to write the code but the thinking process of the person writing the code…

When you consider the “Language Stack” it becomes obvious we need to use languages at the level of the code, and type of code, we are writing. That is, we need languages that allow a drop down into lower-level languages where required, the most obvious example being inline assembler in C; any high-level language is going to require this if it is going to be effective in the marketplace.

The acid question is how… C uses inline assembler, which has a lot of issues. Pascal allowed it by having a clearly defined API that allowed not just assembler but any other language that conformed to the API to be linked in at run time. The use of P-Code or another bytecode interpreter allows for both methods in a more structured and code level/type sensitive way.

Thus the first step would be to get a sensible bytecode interpreter up and running. But even this has issues. In essence the bytecode interpreter is a virtual CPU that is Turing complete, but the next layer up is a very problematic issue: do you make this virtual CPU register or stack based, and how do you deal with errors, exceptions, signals, interfacing, etc.?

Stack based has many advantages, but speed tends not to be one of them. Register based usually has speed but many disadvantages, not least the number and types of registers you define and the massive performance hit you take when the actual underlying hardware CPU does not natively support them and has to kludge them in software and memory, which loses the advantage of caches and other hardware optimisations (a memory-to-memory copy through a CPU register can be a thousand times slower than a CPU register to CPU register transfer). Thus the design of the bytecode interpreter is going to be either “a jack of all trades” to allow for all likely underlying CPU types, or tailored to a specific class of CPU with specific issues around memory addressing…
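To make the stack-based option concrete, here is a toy dispatch loop (a minimal sketch in C; the opcode set is invented purely for illustration). Note how every operation funnels through a memory-resident stack rather than hardware registers, which is exactly where the speed penalty comes from:

```c
#include <stdio.h>

/* A toy stack-based bytecode interpreter. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

void run(const int *code) {
    int stack[64], sp = 0;               /* sp points at the next free slot */
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];        break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp];  break;
        case OP_MUL:   sp--; stack[sp-1] *= stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[sp-1]);     break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* Computes (2 + 3) * 4 and prints 20. */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```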

Just looking at ARM and IAx86, it can easily be seen that, just in the area of memory addressing, the constraints this will put on a bytecode interpreter will hurt performance significantly on both architectures.

Finding a way to resolve the bytecode issue will thus probably end up with a secondary “architecture compile” step in the tool chain, which will result in a compiler that is in essence cross-compiler capable by design, which is effectively what the GNU compiler currently is. Which, as those working down at that level will tell you, adds a degree of complexity security could do without…

Thus the usual issue of optimisation arises: the more you do in any given direction, the worse it usually becomes in other areas, which means you end up Sweet Spot hunting…

[1] http://www.haskell.org/haskellwiki/Introduction

name.withheld.for.obvious.reasons May 5, 2014 12:35 PM

Okay, have to weigh in on this one…

Languages for programmatic expression of machine instructions are more about religion than science. Comments on C, for example, span the horizon as to usability, robustness, efficiency, and safety. In my experience all have been true and false; the trick is to match the tool to the task. I’ve seen examples of code written in C for the purpose of embedded application support that are inferior, and other applications in C for accounting that are amazing.

Analog Devices published, a couple of years ago, several application notes/examples for some of their product line (tying their uController to other components). It was terrible–I explained to an associate the lack of effort in their software offering. Don’t get me wrong, Analog Devices provides some very useful components for various applications, but the lack of focus on the software side was a real drag on development efforts. In our conversation he suggested that I rewrite the source for them–I laughed. I don’t have the time to fix other people’s problems, I am busy making my own.

If our target here is safety, then a series of options for application programming become more obvious (as mentioned before, perl or php for realtime programming is not a choice). And writing a web server in ASM is not practical. Personally, to reduce the level of obscurity (and there are arguments against this), writing to the platform at the lowest level is a choice I prefer–but don’t expect. First I evaluate the application: the purpose, lifecycle, implementation costs, maintainability (sometimes I ignore this), the toolchain environment (sourcing/supplier based issues), and a host of other nominal issues, to make a decision about the approach. For example, I’m a big fan of VHDL or Verilog only designs for applications targeted in hardware. Comes from not trusting the toolchain. ASM is preferred for any realtime application. And HLLs are for the “presentation” layer of an application in most cases–though I’ve written user interfaces that were extensible in ASM.
I don’t understand the resistance to macro assemblers–as far as I am concerned they represent the same lexical layer above the object emitter in the classic compilation toolchain. Lexical analysis is necessary once you step away from assembler (though some ASM hobbyists can make obscure source out of intrinsics). In summary I am with Nick P on providing guidance across a set of application frameworks (hardware, OS, applications, etc.). Tying your hands on purpose should have some benefits–that’s what they told me in school.

NobodySpecial May 5, 2014 3:40 PM

@jacob If you are connected to the Arab/Palestinian world, or are a Muslim, or politically active on the left side of the map,

So ironically the best way to get through Israeli security is to dress as a Nazi?

Nick P May 5, 2014 4:13 PM

@ yesme

“No. That is incorrect. C and UNIX were designed with being as close as possible to assembly in mind. In C you could do multiple things in one line of code, including pointer arithmetic.”

That’s commonly told, but inaccurate. The designers of UNIX laid out their philosophy in various papers and interviews. Although the UNIX philosophy had plenty of good points, their putting simplicity & efficiency higher than anything else had a huge, negative impact on the resulting architecture. Any problem that undermined simplicity was pushed onto developers & users. The best examples in C were array/buffer issues & null-terminated strings. Each was done to simplify implementation on machines with limited resources & no concept of safety.
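A minimal sketch of the null-terminated string problem (standard C, nothing invented): the language carries no length information and performs no bounds check, so safety is entirely the caller’s burden.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[8];
    const char *input = "longer than eight bytes";

    /* The classic trap: strcpy(buf, input) would write past the end
       of buf -- undefined behavior, and the root of countless
       overflow vulnerabilities. Nothing in the type system or the
       runtime stops it. */

    /* The "safe" idiom is still manual: bound the copy AND terminate
       by hand, because strncpy won't terminate a truncated string. */
    strncpy(buf, input, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    printf("%s\n", buf);
    return 0;
}
```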

The first example in UNIX that comes to mind is interrupts. Some OSs saved almost all internal state before dealing with an interrupt, preserving a running process’s integrity. UNIX, for simplicity, saved almost none of it & just let the app designer handle the issue. Imagine that: having to design interrupt handling into every part of your application because the OS was too ‘simple’ to do it for you. Others are listed in the classic UNIX-HATERS Handbook.
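The interrupt point can be illustrated with a standard POSIX idiom (the wrapper below is the usual manual fix, not something UNIX supplies for you): a slow system call interrupted by a signal fails with EINTR, and every caller in every application is expected to notice and retry.

```c
#include <errno.h>
#include <unistd.h>

/* The UNIX "simplicity tax" in miniature: read() interrupted by a
   signal returns -1 with errno == EINTR, and the retry logic is the
   application's problem, repeated at every call site. */
ssize_t read_retry(int fd, void *buf, size_t count) {
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}
```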

So, yes, simplicity was both their solution to porting their language/OS to many machines efficiently AND the problem that led to many safety/security/maintenance issues in UNIX architectures. UNIX shows too much simplicity is a bad thing. There’s a limit to how simple an effective system can be. Wirth made better tradeoffs in his systems, which were also portable and reasonably efficient. The Burroughs B5000 hardware and OS weren’t simple, yet making safe applications on them was, because of this. The lesson, as in the Worse is Better essay, is that interface simplicity is more important than implementation simplicity. Because we write the code once, then use it many times.

“I still firmly believe that a simple language, that is designed with good readability, typesafety, rangechecks, GC, modularity and namespacing (I don’t mention OOP!) is a quite good answer to implement safe software.”

That’s consistent with my above principle. A language like you described provides a safe interface to, or abstraction of, the underlying hardware.

“For the GUI or other end user stuff, use interpreted languages without types or something like QML.”

Or Visual Basic 6. 😉 That the GUI language can be different from the main system language is a good point. I note that the E6-evaluated CA for electronic money wrote the GUI in C++ for the target platform & wrote security-critical code in Ada. That they interacted well supported the effort. Likewise, starting with my framework, someone might make languages for prototyping, GUIs, etc. that interoperate well with the main language.

“Simple languages do have their drawbacks. One of them is that sometimes it takes a couple of lines to write what you can do in one line in a more enhanced language. So be it. I write this story of probably 30+ lines without any pain.”

Modern software engineering best practice is to focus on how easy something is to read and maintain rather than to write. Python focuses on the former, Perl on the latter. Which do you think leads to more robust code? Adding a little complexity to the language can go a long way in reducing the complexity of the application. Can’t add too much, though. Common mistake in the mainstream.

@ Anura

“If something new is going to be designed, I think one goal should be to make portability as easy as possible.”

I’m actually battling with myself on that one. Tight integration with a good architecture is what Wirth, high-assurance security, & the formal methodists taught me. Making one size fit all usually just leads to something that’s far from safe, secure or efficient. Yet, portability has its own benefits that I know you’re aware of. So, will I make it portable? Or portable in a limited way? Or ignore portability? Decisions, decisions.

“Java has a VM which is good for portability, but there is definitely some overhead. ”

Java has faux portability. The platform is just too heavy. If you’re interested in portability, look at the P-code system. Pascal was hardly used before Wirth wrote that layer. Then, it got ported to over 70 architectures. That’s pretty amazing considering how different architectures were back then. Modern architectures are much more consistent with each other, meaning it would be easier today. I also usually tell people to look at the Apache runtime or Mozilla’s portability layer for clues as to how to handle various issues. Both do a good job.

“There should be both textual and binary representations.”

I disagree for now. Binary is a circumstance. It should be handled by tools. Prior work showed a low-level textual representation can meet all my requirements. Might use a binary format like XDR for storage or transmission, though. That the programmer doesn’t directly work with binary remains my rule.

“Then instead of distributing binaries targeted at a specific ISA/OS, you distribute IL binaries that then get compiled to native code at install time (which is much faster than building from source, while also allowing closed-source companies to adopt it).”

This is a good idea. A superior (imho) form of this was invented in the Juice project. Juice attempted to replace Java applets with Oberon applets. Bandwidth was limited then & there were safety concerns. The solution was to convert Oberon programs to an abstract syntax tree, then send that to the browser, which compiled (and typechecked) it on the spot. The result was that the user wouldn’t experience a visible delay, the program ran at full speed as native code, the AST was more bandwidth-efficient than full source, & the AST could be compressed before transmission. The project failed in the end for the same reasons all good designs seem to fail. Yet, there’s a lot of good wisdom in that effort that we shouldn’t forget.
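For flavor, here is a toy sketch of the tree-shipping idea (a hypothetical encoding, not Juice’s actual format): a preorder walk of an expression tree already yields a compact, machine-independent wire form that the receiver can typecheck and compile on arrival.

```c
#include <stdio.h>

/* Toy AST for arithmetic expressions: operators '+' and '*',
   with 'n' marking a literal leaf. */
typedef struct Node {
    char op;                     /* '+', '*', or 'n' for a literal */
    int  value;                  /* used when op == 'n' */
    struct Node *left, *right;
} Node;

/* Preorder serialization: already a compact, architecture-neutral
   wire format, unlike native code or full source text. */
void emit(const Node *n) {
    if (!n) return;
    if (n->op == 'n') printf("n %d ", n->value);
    else { printf("%c ", n->op); emit(n->left); emit(n->right); }
}

int main(void) {
    Node two = {'n', 2, 0, 0}, three = {'n', 3, 0, 0};
    Node sum = {'+', 0, &two, &three};
    emit(&sum);                  /* prints: + n 2 n 3 */
    printf("\n");
    return 0;
}
```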

Note: The typed assembler and proof-carrying code fields have maintained some of the benefits you describe with properties I desire while taking code all the way to assembler. So, even AST might be far from optimal & we just can’t see it yet. However, I think an AST for a relatively simple language is a nice tradeoff for size, analyzability (safety), and performance.

@ Clive Robinson

Long story short, you’ve just discovered the joys of functional programming and are promoting it. 😛 Most of what you mention is consistent with my framework. I say this as the functional languages such as Haskell compile to more imperative abstract machines, which are typically converted to machine code too. The abstract machines can be implemented by hardware microcode, the low-level intermediate language, or the core imperative language. So, if anything, the kind of solution you describe could be compiled onto what I describe and/or co-exist with it.

The SAFE architecture team is doing something like this with the SAFE instruction set, the TEMPEST language for system programming, and the BREEZE language for application programming. ‘Right tool for each job’ tends to be a good philosophy. It’s hard for me to see functional programming doing it all as well as a combo of it and imperative languages. Of course, they’re inventing each from scratch so it’s hard to say if they’re being wise or not. We’ll have to wait and see.

“Part of this is you just cannot “pick up” functional programming “as you go along”; you really need to go back and almost learn to program again”

That’s entirely true. I’m specifically focusing on imperative languages as it will be easy to teach the average programmer. The results with the Oberon family are a good example. A language that performs well, is easy to use, and is safer from the ground up is the best bet. Functional programming isn’t. It’s better for another project targeted at people who will put in the effort. The success of Erlang is encouraging on that subject.

“Pascal allowed it by having a clearly defined API that allowed not just assembler but any other language that conformed to the API to be linked in at run time. The use of P-Code or another bytecode interpreter allows for both methods in a more structured and code level/type sensitive way.”

I agree. My framework mimics that while being careful to dodge its problems. Well, at an abstract level, as many of the problems depend on implementation decisions.

“how do you deal with errors, exceptions, signals, interfacing etc.”

All of that should be carefully decided. I’ve seen too many problems arising from language or platform designers pushing that stuff on developers. However, I am in favor of offering several solutions that are decided with a compile time switch, with safest/best general option being default.

“Thus the design of the bytecode interpreter is going to be either “a jack of all trades” to allow for all likely underlying CPU types, or tailored to a specific class of CPU with specific issues around memory addressing…”

That’s a good point. Of course, my framework talks about designing hardware, runtime, and core language hand-in-hand for consistency. Another quick point is that there doesn’t have to be one runtime: there can be several for different classes of machines, with their own compile- and run-time strategies. As I’ll focus on the best option, I think it’s likely there will be two runtimes: one that I make that’s close to ideal; one for the most popular, legacy machine. Knowing this is also why my research deals with safe ground-up and safer legacy systems in parallel.

“In fact some types of system, such as those with “Real Time” response requirements, just cannot be done in anything other than an imperative manner.”

The Lustre and Functional Reactive Programming people would scowl at you for that. 😉 This paper (2001) provides a nice survey of the functional efforts at real-time & different issues. I also just found this gem that focuses on FP in embedded space applications. Like I speculated above, they basically just implemented an abstract machine in an efficient language (Forth) and then targeted Haskell to it.

Random musing:

My research into hardware description languages showed me that the key difference is that they’re inherently concurrent. The typical assumption is that each component in hardware is always running. So, one must avoid concurrency issues at every turn. Software has been mostly sequential. The software designers are now trying to learn concurrency. I find it amusing that hardware people worked so hard to make their concurrent designs pretend to be [mostly] sequential machines for programmers, who are now trying to squeeze effective concurrency out of those same sequential machines. “You’re Doing It Wrong” seems like an understatement here. 🙂

@ name.withheld

” the trick is to match the tool to the task.”

Or design the right tools for the right tasks, as in my essay. A very tricky part to get right without spilling over from reason to religion it seems.

” And, writing a web server in ASM is not practical. ”

But it only takes 98 lines. 😛 I’ve also seen one that uses only 30 bytes of RAM to handle a connection. I’ll agree they aren’t practical, although the first was deployed in a backdoor application. Of course, the Menuet and Kolibri OS people wrote their whole system in assembler. I digress though…

” Personnally, to reduce the level of obscurity (and there are arguments against this) writing to the platform at the lowest level is a possible choice I prefer–but don’t expect.”

It’s a good decision. It boosts efficiency & reduces likelihood of problems between abstraction gaps. It’s why I’m including something at that layer in my framework.

” For example, I’m a big fan of VHDL or Verilog only designs for applications targeted in hardware. Comes from not trusting the toolchain. ASM is preferred for any realtime application. And HLL are for the “presentation” layer of an application in most cases–though I’ve written user interfaces that were extensible in ASM.”

Seems reasonable. I’m trying to give you better tools to work with using my framework. Yet, if you avoid existing ones for assembler in embedded apps, I could understand. And I’m glad you mentioned VHDL instead of just Verilog. 🙂

“I don’t understand the resistance to macro assemblers–as far as I am concerned they represent the same lexical layer above the object emitter in the classic compilation toolchain.”

I believe the earliest effort here was Dijkstra (?) writing one for IBM System/360. The nature of the assembler led to people producing unmaintainable garbage. The “high level” assembler made it so much easier to write more maintainable code, yet still efficient, that it took off and became a standard offering for IBM. Btw, looking for the reference to this claim I found this excellent presentation on problems of typical assembler and benefits of high-level/macro assembler. Thought you’d enjoy it, maybe even copying some of it for your use. I’m definitely adding it to what Clive calls my “link farm” lol.

“In summary I am with Nick P on providing guidance across a set of application frameworks (hardware, OS, appliations, etc.). Tying your hands on purposes should have some benefits–that’s what they told me school.”

I appreciate your peer review & positive comment. I particularly hoped people such as you and Clive with experience at all layers would weigh in. Many application programmers might comment based on experience with one layer, while having no idea how badly their decision will affect other layers. Or their own when it interfaces to the others. Robustness in general seems to be a holistic thing. I regret I spent so much time focusing only on the software side as I might have built a safe machine by now had I known to work in & integrate all layers. Foolishness of youth. 😉

Anura May 5, 2014 4:45 PM

@NickP

I disagree for now. Binary is a circumstance. It should be handled by tools. Prior work showed a low-level textual representation can meet all my requirements. Might use a binary format like XDR for storage or transmission, though. That the programmer doesn’t directly work with binary remains my rule.

I don’t intend for binary to be used directly by the programmer, just standardized for distribution. The advantage of a binary format for distribution is that it is more compact and faster to parse than the textual representation (as you don’t have to read every character individually).

Nick P May 5, 2014 4:58 PM

@ Anura

That makes sense. I see a few possibilities:

ASN.1. It’s made for syntax encoding, has a binary variant, saw plenty of use in the field, and was implemented using high-assurance techniques by Galois.

XDR. It’s binary, supports key data types, is efficient, and has worked in the field.

Protocol buffers or serialization. Essentially, create a way to produce encoders & decoders for arbitrary data types in the language. The end result can be binary.

With any of these, I’d customize the supported types and encoding strategies to the source language & target platforms.
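As a sketch of what the XDR option looks like in practice (following the RFC 4506 conventions: big-endian 32-bit words, opaque data length-prefixed and padded to a 4-byte boundary; the function names here are invented for illustration):

```c
#include <stdint.h>
#include <string.h>

/* Encode a 32-bit unsigned integer as a big-endian 4-byte word,
   per the XDR convention. Returns the number of bytes written. */
static size_t xdr_put_u32(uint8_t *out, uint32_t v) {
    out[0] = v >> 24; out[1] = v >> 16; out[2] = v >> 8; out[3] = v;
    return 4;
}

/* Encode variable-length opaque data: a length prefix, the bytes
   themselves, then zero-padding up to the next 4-byte boundary. */
static size_t xdr_put_opaque(uint8_t *out, const uint8_t *p, uint32_t len) {
    size_t n = xdr_put_u32(out, len);
    memcpy(out + n, p, len);
    n += len;
    while (n % 4) out[n++] = 0;
    return n;
}
```

A decoder is the mirror image, which is what makes the format easy to verify: fixed word size, fixed alignment, no self-describing cleverness to parse.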

Nick P May 5, 2014 9:07 PM

@ Wael, Clive, name.withheld

Check this out:

http://chess.eecs.berkeley.edu/pubs/690/Wang.pdf

These people are trying to solve concurrency issues by using discrete control theory. They noticed it’s often used for correct by construction systems in the real world. There’s also plenty of proven synthesis tools. So, they modify a compiler to use a version of it to prevent deadlocks, etc.
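For readers who haven’t seen the failure mode these tools target, here is its classic shape in miniature (plain pthreads; this is not the paper’s control-theoretic method, just the bug class it addresses): two threads acquiring the same pair of locks in opposite orders can deadlock, and a fixed global lock order is the manual discipline such synthesis aims to derive or enforce automatically.

```c
#include <pthread.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Deadlock recipe: if one thread takes lock_a then lock_b while
   another takes lock_b then lock_a, each can end up holding one
   lock and waiting forever on the other. */
void *worker(void *arg) {
    /* Manual fix: a fixed global order, always lock_a before lock_b,
       in every thread. Synthesis tools aim to impose constraints
       like this automatically rather than by convention. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    /* ... critical section ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return arg;
}
```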

@ All

A few papers I found that relate to programming, validation, concurrency, etc.

Synthesis of provably correct hardware with options 2010 Dossis
http://www.researchgate.net/publication/224223396_Synthesis_of_provably-correct_hardware_with_options

[All I read was the abstract. However, the abstract was pretty neat.]

CGCExplorer: A Semi-Automated Search Procedure for Provably Correct Concurrent Collectors 2007 Vechev et al
http://researcher.watson.ibm.com/researcher/files/us-bacon/Vechev07CGCExplorer.pdf

[Reminds me of my old automatic & evolutionary programming research. If it’s hard for humans to do right, then just tell the computer how to do it & let it invent what we need. I love that kind of work. Adding formal methods to the process is even better. We need more work like this across the board.]

Provably-correct hardware compilation tools based on pass separation techniques 2006 McKeever & Luk
http://www.doc.ic.ac.uk/~wl/papers/06/fac06swm.pdf

[They use a variant of VHDL called Pebble to deal with vendor incompatibilities. Then, they use pass separation & formal methods to prove the compilation. They provide many details about their process that might be used to re-create it if the tools aren’t available.]

Provably correct code generation of real-time controllers 2006 Maquet
http://www.ulb.ac.be/di/ssd/nmaquet/master-thesis.pdf

[Might be useful in embedded scene, esp if combined with a verified stack a la Verisoft. It’s also a master’s thesis & I like giving students credit when they actually do something useful in one.]

A Formal Method for Developing Provably Correct Fault-Tolerant Systems Using Partial Refinement and Composition 2009 Jeffords et al
http://www.dtic.mil/dtic/tr/fulltext/u2/a525368.pdf

[Yet again gotta love DTIC. Interesting method for constructing fault-tolerant systems. It would need modification for secure systems.]

Lambda the Ultimate discussion on concurrency. Plenty interesting comments. Plenty chaff too, as usual. 😉
http://lambda-the-ultimate.org/node/193

Convergence in Language Design: A Case of Lightning Striking Four Times in the Same Place by Van Roy
http://www.info.ucl.ac.be/~pvr/flopsPVRarticle.pdf

[Looks at several approaches to concurrency, distributed programming, etc. Note they succeeded by using a similar, layered structure. Might be worth considering in relation to my language design framework.]

Concepts, Techniques, and Models of Computer Programming 2004 Van Roy and Haridi
http://www.info.ucl.ac.be/~pvr/book.html

[Found this digging through other stuff. Might be an interesting book to read. One third is dedicated to concurrency. Uses Mozart Programming System.]

That’s all for now folks.

name.withheld.for.obvious.reasons May 6, 2014 2:02 AM

@ Nick P
You’ll enjoy SystemC if you’re looking at timed or concurrent programming. Me, I prefer Occam 5–spoiled bastard that I am. Once you’ve been to the “mountain” all else is relative. But I can’t help but think (as with Wirth, for example) it was all solved by the late ’80s–what are we doing back in time? Seems like a title, Back to the Future, a movie where capacitors were in flux–probably required an SCR or two to modulate, especially where e-fields are measured in jigawatts.

name.withheld.for.obvious.reasons May 6, 2014 2:15 AM

@ Nick P
Speaking of frameworks, I have made a significant cut at a new engineering process model that is based on history–going back to lessons from aerospace before it became a professional management organization (mid ’70s). A modest twist on both the classic compartmentalized and agile approaches. I intend to offer it as open source. It has UML as the top-level modeling framework and uses, believe it or not, VHDL as the underlying programmatic abstraction layer. It is based on a multi/interdisciplinary perspective, with major classes of data/information/knowledge representations of all sources of inputs to an engineering and/or fabrication process. RobertT and Buck would probably find this of interest. I must say that a serious cut must be made at the Fab level. Work yet to be done–if I live that long.

Wael May 6, 2014 2:31 AM

@ Nick P,

The dissertation title is “Software failure avoidance using discrete control theory”, but the main focus was deadlocks (the second problem). Then at the end of the presentation, in the discussion section, a question is asked:

To what extent can tools, e.g., testing, static analysis, runtime analysis, and control synthesis, help eliminate software bugs?

Logical answer: If “Help eliminate” means “reduce”, then the answer is “Yes”.
Mathematical answer 1: If it means the ultimate goal is to eliminate bugs, then I’ll have to sum the series and get back to you if it converges.
Mathematical answer 2: It will make bugs asymptotically approach zero, I would think.
Fuzzy logic answer: Tools will help eliminate software bugs to a great extent
Honest Answer: Just like coffee, it’s not my cup of tea 🙂

Clive Robinson May 6, 2014 5:18 AM

@ Nick P, and others,

You might want to read this letter,

http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD611.html

It was written a number of years ago but still has relevance. However, I urge you to read it before looking at the bottom to find out who the author is. Oh, although the author is European with an excellent command of “The Queen’s English”[1], they are not British.

[1] Whilst it is called “The Queen’s English” the English rarely speak it; in fact you are more likely to hear it spoken “north of the border” in Scotland… which, if the Scots do vote for independence, will make the problem of what to call it somewhat interesting… The petty English politicos are making veiled threats about currency, EU membership, free trade and all sorts of other silliness to get a NO vote, but then they are politicos faced with loss of power/dominion, so I guess from their point of view it’s not silly…

Clive Robinson May 6, 2014 8:14 AM

@ JJ,

Why am I not surprised…

A few years ago I wanted to go for the British Gas “Dual fuel” deal, which also required an online sign-up for the service, which was fair enough. What however was not acceptable was they wanted you to enter your bank details for a direct debit online via an insecure process… I took issue with this and sent the details by post on a number of occasions, but they failed to act upon them.

But from my point of view it was even worse: the bank I was with then changed its terms and conditions and had hidden a new condition which basically said put your bank details into any online service and you lose all protection against fraud and theft, irrespective of how it’s committed.

Anyway, British Gas used it as an excuse to muck me about and put me on their most expensive service, even though I had not signed up for it, and then failed to sort the problem out. This gave rise to other issues with other energy suppliers.

The upshot is I would advise anyone to treat British Gas as a bunch of scammers. And in this respect I am not alone: another person treated in a similar way took them to court for harassment. British Gas lost big style, the judgment against them was completely scathing, and in the process it has set new legal precedent.

As far as I’m concerned the directors of British Gas knew that what they were doing was wrong, both legally and morally, and should be not just ashamed but pilloried for their business practices that are used to extort money from people. This issue with passwords shows they still have complete and utter contempt for their customers and are still carrying on their sleazy business practices, presumably with the full knowledge and agreement of their board of directors and executives, as well as some if not all of their major shareholders.

name.withheld.for.obvious.reasons May 6, 2014 8:17 AM

@ Nick P
From a control theory perspective, the academic approach resembles the thesis of PID and hybrid control theory. Well known, understood, and completely actualized in the real-time taxonomy. That said, I see that it is separable from the hardware layer–examples include exceptions thrown by an underlying operating system or an actual hardware failure. In theory, this approach works well until the “anomaly” or out-of-bounds condition. Boundary conditions are always a problem–whether represented as Nyquist in a signal or parametrically by logic or level.

From a formal perspective, “Adaptive Control Theory” holds much of the way forward on fault tolerance and performance. Having worked on massively parallel systems engineering, the biggest issue surrounding these applications is recovery from error–to state it simply. Other problems exist at the “modeling” or application layer, which is where this paper plays. I’ve always argued that we use highly symmetric systems to solve largely non-linear problems. Until a “non-linear” hardware solution arrives (and I have done some research in this area), the advances in robust and “galaxy” scale computing will be limited. Meaning “re-booting” will happen a little less frequently…

name.withheld.for.obvious.reasons May 6, 2014 9:01 AM

A REHASH–TITLE: Failure IS the option (formerly called “Plan B”)

War is about conquer and conquest, at least from the victor’s point of view, and in modern history it has been translated as action in the absence of sociopolitical resolve…sound familiar. By this definition congress is fighting a war with itself.

The drug war, a war upon itself, upon its citizens…represents the complete failure to achieve victory and is not just a failed war…it represents the systemic institutional failure of governments and peoples. It’s a failure to sufficiently enumerate a problem (the classification of drugs is subjective, as the example of tobacco and alcohol shows with respect to the rational disconnect).

Risk-based benefits to actions taken to address a relatively abstract concept such as a war on things make no sense. Seeking to conquer inanimate objects cannot be useful; understanding what produces the most effective strategy to address a “perceived” public ill should not be minimized–but we do. The history of our inability to deliberate beyond the visceral is extensive–American exceptionalism–isn’t.

Until we as a society can hypothesize and formulate solutions to problems, our actions amount to nothing more than the juvenile act of kicking sand in the face of reason resting on a beach blanket. Discourse around sociopolitical issues (the use of drugs, pills, food, drink, philosophy, etc.) is so narrow that no one is served by our current system(s), period.

Until intelligent life visits this planet, we are doomed to suffer the indignity of our own ignorance. This issue is reflective of our inability to understand, let alone address, causation in any number of our “abstract” system(s). Complete knowledge is fantasy–much as the NSA believes that if they have the data a solution becomes apparent–really?

If we are going to be intellectually honest, we need to call it the “War on People who use or Abuse things we don’t like” and execute the final war…“The War on Stupidity”. I’d argue further, the war on drugs is a failure of ideas. Puritanical Judeo-Christian norms provide subjective, emotive, and immutable “facts”. The same is true of ignorance; it infrequently “solves” a problem but moreover sells a solution. When a mountain lion stalks you on the trail, turning around and saying “Here kitty, kitty.” is probably not a good idea. Ignorance allows one to make decisions and policies without concern of or for the consequences. The mountain lion is simply a kitty…as long as you’re not the one getting scratched.

vas pup May 6, 2014 9:25 AM

@Siderite May 4, 2014 5:45 PM.
Yeah, please check the history of this respected blog for the police state discussions/posts. You’ll find good clarification on the subject matter. Regarding laws and the Constitution, there was a bitter joke in the Soviet Union:
A man came to the KGB office and asked: ‘Do I have a right?’ (to whatever was clearly stated in the Constitution). KGB: ‘Yes, you do.’ Man: ‘Can I?’ KGB: ‘No, you cannot.’ I guess you get the point: there is a huge gap between having a right and being able to use it without any trouble (being jailed, fired from work without any stated reason, blacklisted, emotionally harassed, etc.). Please read attentively Wikipedia’s article on the Stasi (the East German secret police). But time and again, accordion laws with unclear/ambiguous content are at the root of letting the whole legal system (cop/LEO -> prosecutors -> courts -> detention facilities) apply them selectively and secretly.
@NobodySpecial. You may try, and then let us know how the Nazi uniform works with the Israelis. My point is they do not give that ….(you know, the stinky stuff) for political correctness/profiling when it is about security and preventing acts of real terrorism.

K2 May 6, 2014 10:08 AM

What is the rationale for finance institutions not giving you the ability to set a “notify only” contact channel, that’d tell you if your account changed, but would not allow you to use the channel to make such changes?

vas pup May 6, 2014 11:55 AM

@K2 • May 6, 2014 10:08 AM.
Because you and your interests are not part of their calculations at all, which are driven by unrestrained greed (this does not apply to Canadian bankers – respected guys). The rationale is that you (and me, and I guess almost all other respected bloggers here) are a source of financial institutions’ profits only, and your interests/needs can only be taken into consideration and protected by government regulation and oversight (I know most of you hate that idea – but there is no other option, because their self-regulation is your self-deception) until some next generation sees their attitude change to the Canadian bankers’ type. By the way, the attitude of the latter was generated as a result of such regulation.

Nick P May 6, 2014 2:45 PM

@ name.withheld

I looked at SystemC again. It’s definitely interesting for prototyping. In the process of looking it up, I found this tool. Btw, if you like Occam, check out this OS written in Occam.

The framework is interesting. The use of VHDL surprises me. I’m guessing this framework is about hardware rather than software design? It’s hard for me to imagine doing software effectively in VHDL. The Flow-Based Programming scheme I linked to above is the closest thing to that. It seems that integrating that software scheme with a hardware language might produce some interesting results. I particularly think it might help in hardware/software codesign, like the kind people do with FPGA accelerators in e.g. Mitrion-C. Making software map more easily to hardware avoids many problems in imperative programming.

Re control theory

Thanks for Adaptive Control Theory reference. I’ll look into that.

@ Wael

“The desertation title is “Software failure avoidance using discrete control theory”, but the main focus was deadlocks (the second problem). Then at the end of the presentation, in the discussion section, a question is asked: To what extent can tools, e.g., testing, static analysis, runtime analysis, and control synthesis, help eliminate software bugs?”

I know the paper focuses on deadlock & asks a BS question at the end. The point of me bringing it up was to see what guys with embedded experience think about applying Discrete Control Theory (esp. validation & synthesis) to software to more easily make it robust. Do you see any potential in that, or is it a waste of time?

re rest of it

LOL. Reminds me of this old joke.

@ Clive Robinson

I knew who it was immediately because the page loaded & the tab had his name in it. So much for the surprise… (rolls eyes)

The first few paragraphs about made me want to reach across the Atlantic, slap him, and say “Get to the point!” The points he makes are a mix of possibly sound observations and mere America-bashing. I wish he’d leave out the latter, as I’m genuinely interested in comparing and contrasting various nationalities’ approaches to software. So, I’ll comment on the points he made:

  • the Buxton Index explaining differences & allowing better communication.

Not sure how that works. A person’s goals, personality, social skills, & working style have more influence on this than anything. How far ahead we look is minuscule in comparison. Making some rules about the work environment (or project or whatever) that let diversity work for rather than against the effort is quite beneficial. The rule about honesty & clarity he had, for example, is a good one.

  • NSF funds work with short term focus

I think he’s right about the funding, yet wrong about the result. We do know open-ended, long-term, fundamental research into fields will typically produce the biggest breakthroughs. Yet, most useful R&D results are a series of gradual improvements to existing work. Many of the papers I’ve posted here were NSF funded, so the organization is producing results. Also, long-term work still happens, but it’s broken into several short-term efforts with associated deliverables. Whether that is good or not is debatable. Yet, even Karger said that in high-assurance work it’s best to keep producing intermediate deliverables with value, to justify ongoing investment into long-term work. That was a rule for commercial projects, though.

I’d be interested in knowing how European countries and companies handle R&D in comparison. Oh, and one more point: the NSF isn’t the only game in town. Lots of governments, nonprofits, and for-profit companies fund research that’s not expected to have an immediate payoff. Dare I say certain universities’ Comp Sci R&D has more of that than the other kind.

  • scientists in America being considered eggheads and getting less respect

That doesn’t happen in European countries? People don’t think of techies as nerds or dispute what they say for political reasons? If so, then this is a true difference. However, the author is a little off in that this effect depends on the area. Different locations in the US have different levels of respect for and trust in science. I mean, look at how many scientists are funded in this country. We do tons of science, honest and fraudulent, useful and moronic. We’re flexible. 🙂 Yet, the public is quite detached from it.

Maybe it’s because science is wrong so often & Americans expect it to get answers right. Maybe Americans just don’t identify with it, hence treating it like a separate social class. Who knows. There is an effect like he described in most of the country, though. I suspect our poor educational system combined with the incentives of our economic system are the biggest culprits. There’s little motivation to push reason, science and honesty when it pays off in so many ways to do the other things.

  • Europe is Platonic, USA/Canada more pragmatic

We’re definitely pragmatic as a whole, with sprinkles of Platonic. I’m not sure how Europe is or if it varies by country.

  • Compsci tied to math departments; soft sciences having little respect

Those are definitely differences. There are empirical ways to handle the soft sciences to a degree, so they are sciences in that respect. If anything, this difference risks Europeans getting left behind in those fields if prejudice prevents innovation. I’ve seen (and used) plenty of interesting results from soft sciences. I’d love Europeans to put in more effort, as diverse perspectives are very important to the soft sciences.

  • gripe at ‘integralism’

It almost seems like the claim could be leveled directly at me & my security critiques here. 😉 I see the value of the method he pushes. Yet, in our field, context & integration are utterly important to achieving the goals. If anything, they’re where some of the worst problems happen. Not properly anticipating them while focusing on one aspect can lead to trouble. That said, being able to do razor sharp focus on one aspect of a design in isolation is quite useful. There’s a balance here that’s hard for me to articulate. I’d have to think more on it.

  • universities focus on preparing for jobs leads to less innovation & reinforces bad habits

In America, universities are seen as serving three functions: improving oneself; learning job skills to make more money; giving parents of teenagers at least 2-4 years of peace. That company demand affects what is taught is often, but not always, true. Recall all the articles written in the US griping that you have to unlearn what you learned in college. If the author were correct, that wouldn’t be necessary, as college would be preparing them for work.

In reality, it depends on the institution & instructor. There are some that directly tie their offerings to in-demand job skills. An example is a community college nearby that offers COBOL, RPG, etc. courses because the biggest employers need those skills; that’s the only reason for it. The author hits the nail on the head here. Yet, at some other schools they teach engineering-style design methodologies, Scheme, etc. Totally opposite of industry. Our institutions have also done plenty of cutting-edge work in languages, tools, software engineering, etc. They push the envelope every day whether industry wants a given innovation or not. So, if anything, the author once again misrepresents American institutions as homogeneous and inferior.

  • American Comp Sci uses same language, publishers, manuals, etc

This seemed right when I first read it. Yet, we have to remember that the field over here is split among academics, hobbyists, and professionals existing in many areas. The different motivations & environments mean there’s all kinds of stuff out there. Look at ACM & IEEE: those people are similar, at least in how and where they present. The rest, from focus of work to skill to practicality, varies quite a bit. Much of our most successful language work came from hobbyists, not ACM/IEEE. And their documentation approaches aren’t similar. Overall, this claim of his is false.

See a pattern emerging? Have you figured out why he keeps getting it wrong? I’m going to shortcut to the answer: he falsely assumes computer science in the US is homogeneous. It’s not. Our culture causes plenty of diversity, dare I say more so than in Europe. In a company or organization, conformity is often expected. In hobbies or the marketplace, differentiators are preferred. I mean, there are certainly standards and traditions that many might conform to. Otherwise, they try to be different, which results in the author’s homogeneity-based claims falling apart. He’s intellectually trying to fit a round peg in a square hole because he desires to think of the peg as square.

  • difficulty of programming & how it’s like math

He’s got some decent points on that. Yet, I don’t think we have to jump right to it being about math. It’s really about proper abstractions that map what’s in our head to what’s on a machine. The choice of language, tools, and engineering method solves these problems. I think the author has a strong math background and it’s making him see programming as a mathematical thing. I also think certain programming work, esp. in tools & scientific computing, can benefit from a mathematical perspective or straight up is mathematical. Yet, the success of COBOL et al. shows that lay people without a math background can write useful apps that work reasonably well. So, it being doable with almost no math knowledge would seem to reinforce my point that, as far as the coder is concerned, one doesn’t inherently need the other.

Btw, his disdain for regular software developers reminds me of all your “code cutter” posts.

  • John von Neumann’s habit of describing systems & parts in anthropomorphic terminology; adopted in the USA more than in Europe.

I’m not sure what he’s talking about. I’d like some examples. Maybe when people say “this app talks to that one,” “remembers,” etc? OOP & agent-oriented programming might do this too albeit with benefits. Anyway, most developers I know & books I read talk about software like it’s something we produce that acts on data, does work, & interacts with user via an interface. That’s more mechanical than anthropomorphic.

Come to think of it, there is a trend to think of business systems as a sort of organic thing that evolves and adapts to a changing environment over time. It’s an interesting metaphor with some claimed benefits. Far as I know, it wasn’t prevalent in his time.

  • to forget that program texts can be interpreted as executable code

That’s all we think about them, except for “code as data” people. I think he was just hanging around with some odd people. Or things were different back then.

  • his trouble with LISP 1.5

That was HILARIOUS. It’s ironic he had so much trouble with a radical, academic language rooted in mathematics after he’s consistently implied Europeans would more easily learn non-standard or math-focused tech. Open mouth, insert foot.

  • his recollection of ACM visit

Entertaining. Welcome to America haha.

  • going through motions to please sponsor in America, not in Europe

This is a common thing in America. Sponsors don’t get courtesy/preferential treatment or say over direction of research in any European countries? That would be a major difference.

  • Americans having a greater capacity for dishonesty and honesty

I’ll buy that claim just because it’s a logical outcome of our Constitution, economic model and culture. Put them together and you can get this result.

END OF REVIEW

Overall, besides a seeming prejudice corrupting his analysis, my overall gripe is that he compares America to Europe. I think this is a bad idea. Continents don’t make software. Continents don’t write or review mathematical proofs. Continents don’t do much of anything cohesively, aside from stuff like NAFTA and the EU. In our field, results are driven by individuals and groups. The proper comparison is between them, not countries or continents.

I’ll illustrate that with a few groups in US and UK. Microsoft throws garbage together, ships it, and uses lock-in to keep making money. For a long time, they used monolithic architectures with poor reliability & security traits with little code inspection. On other side, Green Hills developed software and tools with a strong focus on good architecture (eg microkernel), low defects, and so on. In UK, Micro Focus uses regular software practices to construct IDE’s that ensure your beloved code cutters can keep writing more COBOL (!) with predictable quality. On other end, Altran Praxis uses their “Correct by Construction” process to develop very low defect systems in many safety-critical industries.

So, it varies group by group, company by company, agency by agency. It’s really about the group’s goals, work ethic, tools, and attention to quality. These combine to separate the good IT from the bad IT. The country they’re located in? If it has an effect, I’m just guessing it’s minimal compared to the others I listed. That’s just a guess though. I’m sure norms, cultures, and laws can create an environment that directly impacts software quality. I just have little hard data connecting these for European countries. Ours is individualistic, largely profit motivated, and has little to no liability so the quality is quite predictable. 🙁

name.withheld.for.obvious.reasons May 6, 2014 3:39 PM

@ Nick P

The framework, which I’ve termed the Data, Document, and Information Management Policy Framework (DDIMPF), is the set of DDIMP components that are parametrically co-linear to a b-tree of nodal lines of process/procedure/function (Bruce has used a similar risk analysis tree to describe a coherent model–but it’s not that).

The ability to abstract loosely or tightly coupled DDIMP components, with boundaries that allow integral or algebraic geometries with regard to complex operational chains, is a highly efficient method to collapse a series of functional objectives (quality control, auditing, change and configuration management, release and publication, etc.). The graphic I have for the model seems daunting–and maybe it is–I haven’t had anyone with enough cycles or sufficient background to review the work.

And no, the simple answer is that this is a “business” or organizational process model that is highly formalized, using data, documents (this is a broad description), information, and knowledge to establish a conformal (in broad terms) system that is responsive and efficient. I see it as a completely new thesis in process management. This sprang from an analysis of the current situation we find ourselves in–I began the development and design process for this in October of 2012, understanding that the government had bedded and co-opted the tech community…there needed to be an organizational response to a failed command-and-control process management model that leaves us “compartmentalized” in many ways.

Benni May 6, 2014 6:50 PM

Here is something for the user “sceptical” or those who believe the USA has a free press:

https://www.youtube.com/watch?v=L6sYB5d1Bu4

For me as a German, these statements by Hillary Clinton are among the most awkward things I’ve ever seen.

I could somewhat understand it if Putin set up a propaganda news TV station, since this is generally something that an autocratic and anti-democratic government does, but a developed country with a working democracy like the USA should never desire to “win the information war” by influencing the media to broadcast propaganda. If a democracy behaves like that, then one might expect worse…

Figureitout May 6, 2014 8:21 PM

Wael
adhering to some basic security principles can’t hurt
–Oh I do, for the very few things that haven’t yet been corrupted; basically boils down to discipline (which costs you time and friends). Doesn’t matter much when you have agents breaking in your home (still attacking, this time the psychopath put out his cigarette in droplets of shower water in my shower) and they get the phone company to route all cable traffic straight out of my home. I don’t know what judge is authorizing this behavior but it’s continuing and it’s straight up police state behavior. I think it’s been long enough that I’ve demonstrated I’m not a threat, in fact I’m trying to secure our systems by starting w/ a system small enough that I can wrap my head around. Basically says to me that I’ve struck a nerve (agents should never let their emotions compromise their cover) and I’m such a difficult target that they had to resort to such extremes and get pleasure beating an already dead horse to a pulp.

Nick P
–You didn’t give a specific protocol, I already was doing it and knew what you told me, and there’s a lot of unanswered questions. I’d rather not bring others into my hell, it’s not something I just bring up and I’d have to conduct a non-intrusive background check to check for obvious signs of cops or agents. I’ve only told one person in real life (and the internet now…) and even still I’m not sure, I’m at my wit’s end at this point if all my closest friends ever betrayed me for agents… The main reason I’m doing this is b/c I think it would be very beneficial for others to recover your systems from a serious attack (think an attack that went undetected for months or more). I’d need at least 2 computers (unsafe/untrusted ‘net one and shielded-airgap-24/7-never-leaves-my-sight one). Somehow…I need to have my infected hardware “touch” my “fresh” computer and not pass along whatever this is; not smart when I don’t know what it is or more importantly…where. Ideally I need a device that takes in data to an isolated insecure area, and prints the ALL data to a screen where I manually check all data page-by-page to another area; that will be so hard I’m not sure it’s possible how I envision it. I need to reflash the firmware in my router (not happening while in daddy’s basement), I need a physically new location to access the net since getting a fresh copy of software just leaves more questions and is impractical sadly. I need a lot more but I’ll save it for a more refined post that I’ll link to in the future.

That’s not trivial and I know it’s highly unlikely I’ll succeed. Still going to try though.

OT: For: Clive Robinson RE: Catching attackers w/ honeypots
–Figured I'd return the favor, I always return the favor. :p An interesting breakdown of a server intrusion which is then used for DoS. Lots of college students should be able to follow what's happening for the most part if you've taken a Unix class; and now we have another attack to test and try to defend against. More of these articles also let the attackers know… what seems like an easy target may be a little too easy… haha

http://draios.com/fishing-for-hackers/

Wael May 7, 2014 1:13 AM

@ Nick P,

Any potential you see in that, or is it a waste of time?

I don't think it's a waste of time. I am also not sure this is the most suitable approach. Embedded systems or otherwise is, I think, orthogonal to the proposed method. Discrete control theory is not my cup of tea. If I chose to use it, I would lean towards applying such methods to the hardware (cores) and the lower layers of the OS (scheduling, resource allocation, …), not to the application-layer components. Still, if it seems like a waste of time today, it may be a different story in the future. I know this answer is not as analytic as you'd hoped, but it's not within my area of expertise — if I ever had any…

Clive Robinson May 7, 2014 2:20 AM

@ Nick P,

With regard to the letter: I linked to it for the historical context. It was written quite some time ago, and as far as I can tell it is fairly accurate in its outlook. It was written a little while after they had helped on the Burroughs computer design.

In essence it shows how Europe moved from its postwar position of the formative years of the 50s and 60s to the US model over the next thirty years or so.

As you have noted, much has changed in that time in Europe. But if you think back to your own comments on how researchers these days are continuously reinventing the wheel over security that was done and dusted by the mid-1970s, you might want to think about whether the authors' perspective on the different research types is valid or not, and if not, why not…

If you look at hardware and basic architecture, nearly all the research from the late 80s has been on how to keep a non-concurrent, out-of-date hardware model on Moore's Law, and only in recent times have people in general finally realised that parallel computing with multiple CPUs/cores, and all its concurrency issues, is the way they have to go, as all the tricks for making large imperative-only systems faster have hit the ROI buffers.

Some years ago a new architecture was proposed, called Transport Triggered Architecture (TTA) [1], which exposed the internal data transport buses of the CPU to "the programmer". In some respects it is a midway point between RTL and assembler, and if used correctly it offers a lot of concurrent processing inside a single CPU.

It is however a bit of a nightmare for 99.99% of programmers, as in many ways it breaks the standard model which we are taught in higher education.

The TTA CPU design is used, but mainly in "Application Processors"; however its design started me thinking about its security issues and how you could expand the design in a more "programmer friendly" way.

[1] http://tce.cs.tut.fi/tta.html

Clive Robinson May 7, 2014 3:11 AM

@ Nick P,

The old joke you refer to is an updated version of,

Last night I was sitting there looking up at the moon and stars, and I thought, that's nice, somebody's stolen the roof of the privy…

As for Holmes and Watson jokes, I know one or two, but the moderator would get upset 😉

yesme May 7, 2014 3:57 AM

One final remark about simplicity and deficiencies.

The brilliant Tony Hoare once said[1]:

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature.

He also said:

[About Pascal] That is the great strength of PASCAL, that there are so few unnecessary features and almost no need for subsets. That is why the language is strong enough to support specialized extensions–Concurrent PASCAL for real time work, PASCAL PLUS for discrete event simulation, UCSD PASCAL for microprocessor work stations.

[About Ada] For none of the evidence we have so far can inspire confidence that this language has avoided any of the problems that have afflicted other complex language projects of the past. […] It is not too late! I believe that by careful pruning of the ADA language, it is still possible to select a very powerful subset that would be reliable and efficient in implementation and safe and economic in use.

The recent discussions about Oberon made me look more into the language and OS. Of course the OS is now 30 years old, and the GUI part compares more with Rio from Plan 9, the first versions of Turbo Vision, and the ncurses library than with what we expect from an OS today, but it is still a sane design.

The 2013 updated documentation for Project Oberon[2] shows some really mindblowing features and numbers. It clearly shows that solving the problems at the core results in significant simplicity. Because Wirth wrote three pages of FPGA RISC microcode, he could reduce the compiler to fewer than 2900 lines of code[3]. This is the Oberon compiler!

And even on the rather slow hardware, the compiler and OS compile in 3 and 10 seconds respectively! The OS now has NO assembly code![3]

To bring back the first Tony Hoare quote:

“so simple that there are obviously no deficiencies” -> Oberon

“so complicated that there are no obvious deficiencies” -> OpenSSL, C++, GCC (and a long list)

[1] http://en.wikiquote.org/wiki/C._A._R._Hoare
[2] http://www.inf.ethz.ch/personal/wirth/ProjectOberon/index.html
[3] http://www.inf.ethz.ch/personal/wirth/ProjectOberon/PO.System.pdf – page 6

yesme May 7, 2014 4:26 AM

And one final erratum:

About Wirth's FPGA microcode[1]:

Clearly “real”, commercial processors are far more complex than the one presented here. We concentrate on the fundamental concepts rather than on their elaboration. We strive for a fair degree of completeness of facilities, but refrain from their “optimization”. In fact, the dominant part of the vast size and complexity of modern processors and software is due to speed-up called optimization. It is the main culprit in obfuscating the basic principles, making them hard, if not impossible to study. In this light, the choice of a RISC (Reduced Instruction Set Computer) is obvious.

He is right, of course. The OpenSSL accelerated assembly crypto code for AES on the 586 platform alone is 2980 lines[2]. The AES code has 15 such accelerated assembly implementations across platforms.

[1] http://www.inf.ethz.ch/personal/wirth/ProjectOberon/PO.Computer.pdf – page 1
[2] http://www.openbsd.org/cgi-bin/cvsweb/src/lib/libssl/src/crypto/aes/asm/aes-586.pl?annotate=1.6

Wael May 7, 2014 4:37 AM

@ Nick P, @ Clive Robinson,
I liked the joke. As for concurrency, I tend to believe there should be as few locks as possible, or performance could degenerate to a level that defeats the purpose of concurrency. I am referring to tens of thousands of hardware threads running on a multicore GPU. I would rather spend effort in this area than on finding where the "bad locks" are. Effort here means highly concurrent implementations of crypto algorithms, rather than the serial implementations in use these days.
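
To make that concrete, here is a minimal Python sketch (the key, the use of SHA-256 as a stand-in for a real block cipher, and the block count are illustrative assumptions, not a real design). In a counter-mode style construction every keystream block is a pure function of (key, index), so any number of blocks can be computed concurrently with no locks at all:

    import hashlib
    from concurrent.futures import ProcessPoolExecutor

    KEY = b"illustrative key, not a real secret"

    def keystream_block(index: int) -> bytes:
        # Pure function of (KEY, index): no shared mutable state, no locks.
        return hashlib.sha256(KEY + index.to_bytes(8, "big")).digest()

    def keystream(n_blocks: int) -> bytes:
        # Every block is independent, so the pool can run them all in parallel.
        with ProcessPoolExecutor() as pool:
            return b"".join(pool.map(keystream_block, range(n_blocks)))

    if __name__ == "__main__":
        print(len(keystream(1024)), "bytes of keystream, zero locks taken")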

name.withheld.for.obvious.reasons May 7, 2014 5:20 AM

Observations based on commentary posted to the Federation of American Scientists site on 30 April 2014, concerning a decision by a federal district court in Washington DC holding that there is no remedy for death by drone.

HOW I SEE IT…NO AUMF LEGITIMACY
The case exposes so much that is wrong with our government and the courts' inherent tendency to protect the government rather than deliver justice to the citizenry. First, under the color of the AUMF, the courts are essentially ignoring the explicit stricture in the constitution requiring a "Declaration of War". If congress has had a problem with this concept (effectively since the Korean War) then amend the constitution; constructing a text to make the subtitles match the narrative of the movie while ignoring some of the words is not useful, especially when war is the LAST and MOST POWERFUL instrument of the state.

The reason the AUMF is flawed rests on the constitutional purpose of requiring a "Declaration of War" from congress, and on how this flaw has abused the citizenry.

  1. Congress is vested with the power to declare war, kings are not to be trusted with amassing armies for whatever purpose.
  2. The declaration of war is required to enable the full force of the state to be used in repelling invasions or insurrections. The attacks on 9/11 represent an act of war, not an invasion or insurrection. The constitution is clear on this.
  3. A declaration of war confines the use of armies by a would-be king (the president). If the president is given power akin to war powers without enumerating it as war, then what restrictions can be overcome? This is dangerous; this is exactly why the statement in the constitution is stated so clearly.
  4. Declaring war is not like rationalizing a sexual predilection for prostitutes. For example:
    a.) "You are authorized use of a non-traditional sexual liaison." – [affair]
    b.) "You are entitled to the rights and appearance of a traditional relationship." – [marriage]

If the context can be changed as to what constitutes a declaration of war, and a use of power merely enumerated instead, then employing the military to kill political enemies can be justified.

The colonists truly feared the immense power of central governments; King George had given them plenty to fear. One person, based on their attitude, could move fleets with armies to their shores. Sidestepping the requirement to specifically declare war allows tertiary uses of the military, which thus becomes an instrument of raw power.

The courts' and judges' subversion of constitutional purpose (the spirit of the law) is shameful, if not an indirect form of treason, as it allows for the overthrow of the republic.

Nick P May 7, 2014 12:06 PM

@ Wael

re Discrete Control Theory

That’s exactly the kind of intuitive response I was looking for. Thanks.

re concurrency

I've seen both processors and languages (ParaSail) that make the stuff easy while still allowing HLLs or existing toolsets. So, we might not have to go entirely GPU on it. Plus, as far as crypto goes, it's usually one of the fastest components in a system, with others slowing things down. Fast primitives (eg Bernstein's) or hardware acceleration of key primitives seems to suffice for it. It's why I've always loved the concept of FPGA logic in the chip or FPGAs on the board: just push anything that needs acceleration off to them, while using the same SoCs or boards to get volume pricing.

@ Clive Robinson

"As you have noted, much has changed in that time in Europe. But if you think back to your own comments on how researchers these days are continuously reinventing the wheel over security that was done and dusted by the mid-1970s, you might want to think about whether the authors' perspective on the different research types is valid or not, and if not, why not…"

It might have been correct back then. I wasn't in Comp Sci in the 70s so I can't really speak to it.

"Some years ago a new architecture was proposed, called Transport Triggered Architecture (TTA) [1], which exposed the internal data transport buses of the CPU to "the programmer"."

This is interesting, in that I was recently looking at high-level microcoding as a solution to one problem. More on that later.

"The TTA CPU design is used, but mainly in "Application Processors"; however its design started me thinking about its security issues and how you could expand the design in a more "programmer friendly" way."

The pages I read on it note that it sucks for interrupts, preemptive threads, context switches and so on. My & DARPA’s designs knock out two of these but others remain. There’s also the issue that the programmer has to worry about timing of everything. Would be great for covert channel analysis, a pain in the ass for… everything else. 😉

So, on to what I’ve been thinking about lately. The problems of the systems, esp causes of code injection, are well-known. I’ve posted many architectures & chips that prevent many by good hardware design. The trick is who wants to screw with several dozen full hardware projects at once? Additionally, for projects leveraging abstract machines (eg M-code, JVM), it’s a new hardware project per language. Yet, I’ve seen two potential shortcuts: microcode & PALcode.

Microcode is the most obvious. Many of these CISC, safe, etc processors are actually typical data-throwing machines underneath, maybe with a few dedicated components (eg a tagging unit). The microcode is used to effectively transform them into the other machine. Many changes to that machine can be done in microcode. So, it might be beneficial to just create a series of functional units that could emulate most of these processors, then make the microcode easier to write (tools or new languages). Then, people trying to improve & experiment with processors could start by microcoding existing hardware instead of learning everything about digital logic, etc. Speaking for myself, I thought the microcodes I've read were much more comprehensible than processor specs.

Similar idea with PALcode. Actually, you could say Alpha's PALcode was an implementation of my idea in a limited way. It effectively allowed the programmer to define new instructions out of existing instructions. These also executed as atomic instructions, a capability that would have tremendous effects on concurrency. PALcode was instrumental in the VAX Security Kernel project in getting the kernel to run with security & performance. A processor with something like PALcode, albeit with power closer to microcode, could be very useful in these efforts.

Another component might be high-speed scratchpad memory that only microcode or PALcode can access. The reason for this is to emulate aspects that aren't implemented in hardware yet. For example, the tagging engine might have state related to its job. If the RISC core has no tag unit, then microcode or PALcode can emulate one with the support of scratchpad memory, which might store tags or access rules. Scratchpad is just one idea, as I'm quite open to whatever gives flexibility while maintaining performance.
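
As a minimal sketch of that scratchpad idea (the two-tag policy, the names, and the error type are invented purely for illustration): a core with no tag unit keeps tags in a scratchpad that ordinary loads and stores never see, and the "microcoded" fetch path checks them, which is enough to block the classic jump-into-injected-data attack:

    # Sketch: a core with no tag unit emulates one in "microcode", keeping
    # tags in a scratchpad that normal software cannot address.
    DATA, CODE = 0, 1  # hypothetical tag values

    class TaggedMemory:
        def __init__(self, size: int):
            self.words = [0] * size
            self.tags = [DATA] * size   # lives in the microcode-only scratchpad

        def store(self, addr: int, value: int, tag: int = DATA) -> None:
            self.words[addr] = value
            self.tags[addr] = tag       # only the "microcode" path sets tags

        def fetch_instruction(self, addr: int) -> int:
            # Microcoded check: refuse to execute anything not tagged CODE.
            if self.tags[addr] != CODE:
                raise PermissionError(f"word at {addr} is not tagged executable")
            return self.words[addr]

    mem = TaggedMemory(16)
    mem.store(0, 0xDEAD, tag=CODE)     # legitimately loaded instruction
    mem.store(1, 0xBEEF)               # attacker-supplied data
    assert mem.fetch_instruction(0) == 0xDEAD
    # mem.fetch_instruction(1) would raise PermissionError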

So, to recap, the design effort must be kept minimal by continuously producing components that can be reused or extended elsewhere. The common functional units of processors will be created in HDL, of course. At least one RISC setup, with components used 99+% of the time, is integrated in HDL with microcoding and/or PALcode ability. That can, by itself, become any number of CPUs, IO-CPUs, or ASIPs. Hardware extensions can initially be implemented by VMs partly written in microcode or PALcode, later written in HDLs extending the hardware. There might also be FPGA cells connected to it for this purpose. And all of this is to be written in an HDL that supports open source and/or provably correct synthesis tool use.

@ yesme

Definitely interesting work. Btw, if you want modern, the proper one to look at is A2 Bluebottle. It’s the latest incarnation of the system.

The principles of Hoare & Wirth are certainly sound. One problem with Wirth, though, is that he's more focused on making the compiler and language simple. That means he's more likely to leave off a great security feature & push the concern into the application. That application designers can't be trusted to ensure security is one of the reasons we're discussing new hardware to begin with. So, certain critical features that support the safety/security abstraction for developers must be implemented in hardware & always on. This increases complexity. So, as I said before, we can simplify things wherever possible, but adding security will add complexity.

An example is the Intel i432. It had plenty of great features for making OS and app designers more productive, maintainable, and secure. Yet, they put so much stuff into it that it was only 25% as fast as competitors. It utterly failed in the market. So, there's obviously a cutoff point for extra complexity. With the i960MX, they transformed the i432 by trimming off what fat they could. The end result was a simpler, RISCy architecture that performed well & supported plenty of key i432 features. Yet, it was more complex than competing RISC designs, as it needed the extra capabilities.

So, in security engineering, we have to make tougher tradeoffs than those merely wanting speed or small compilers. We have to keep the machine easy to implement (HW), easy to manage (OS/runtime), and easy to develop on (apps/compiler). It’s tricky and the solutions aren’t always as simple as many would like. 😉

Nick P May 7, 2014 12:13 PM

@ Clive Robinson

I hadn't seen that. Really nice work they're doing. The Python crowd never ceases to amaze me with how many uses and tools they derive for the language. This tool should certainly benefit hardware designers, esp in prototyping & verification. That it's still essentially an HDL means I can't use it without learning such things. Hence, my continued look at things like microcode, PALcode, IP integration, HLL-to-FPGA compilers, etc.

Figureitout May 8, 2014 1:17 AM

Clive Robinson
–Up earlier in the thread I mentioned that I was unable to get the Cassiopeia E-115 to boot up. Well I finally got that f*cker to boot up; the solution turned out to be trivial, of course. I know you're probably rolling your eyes right now, "kids these days", and maybe have a story where you popped out the womb, w/ umbilical cord still attached, did the "moonwalk" and got one of these old things to boot up w/ one hand. :p Maybe there's some things you could help w/, before I just go off on my own researching, but what does a typical charging circuit look like (there's a few candidates), b/c I can't find any schematics for this, of course. Also, do you know about "thermistors" on battery terminals, and why I would get a voltage reading of 0.18V over '+' and '-' w/ the battery in, while getting a reading of 2.8V over 'T' and '-'? Also I was getting 5.2V on a capacitor very near the AC power.

Basically, besides an old battery, which I'm certain of, I think there may be other circuit issues, like the charging circuit, and I wondered if you ever replaced those before in a "DIY" way. The solution ended up being simply to use my digital power supply and a couple of wires to inject 3.7V directly on the '+' and '-' main battery leads; I know it sounds obvious, but I was thinking that it needed contact w/ the thermistor lead too and maybe some other signal… but it didn't.

And right about now the MOD, Bruce, and a few readers are probably foaming at the mouth for me to STFU, wondering what the security implications are here or whether I'm just chatting it up. I'll tell you:
1) Removable ROM chip on this device; it would require some work but is definitely doable w/ an engineering team

2) Every time the backup battery is removed, all program memory is supposedly wiped. So you make a file and store it to a memory card, and now the memory card is what you need to protect.

3) According to the specs, no wifi and what really makes me happy…NO BLUETOOTH. 2 protocols I can most likely not worry about; but I still need to test this myself.

4) This is a commercial device that would require agents to “go back to the library” to find exploits; yet is still actually a very user-friendly device. It’s even got frickin’ Solitaire in the ROM.

Clive Robinson May 8, 2014 10:02 AM

@ Nick P,

You can't live in a world without gates, be it our human world or the digital world, so you might as well get to grips with them 😉

The reality of what you are likely to come up against is a logic cell that in essence is a programmable map (memory) and multiplexer (MUX), and in some cases a latch/flip-flop to give register functions etc. All of which you program with your required functionality. Amongst the many advantages of such cells is a constant delay time, because you always end up going through a fixed number of gates (ie the AND-OR array map and MUX).
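
If it helps to see it in software, here is a toy Python model of such a cell (purely illustrative; real cells add carry chains and far richer routing). The "programmable map" is nothing more than a small truth table addressed by the input bits, with an optional register on the output; whatever function you program, the signal goes through the same single lookup, hence the constant delay:

    class LogicCell:
        """Toy FPGA logic cell: truth-table memory + optional output register."""

        def __init__(self, truth_table):
            self.lut = truth_table      # the "programmable map"
            self.reg = 0                # the optional latch/flip-flop

        def combinational(self, *bits):
            index = 0
            for b in bits:              # pack input bits into a LUT address
                index = (index << 1) | (b & 1)
            return self.lut[index]      # one lookup, constant "delay"

        def clocked(self, *bits):
            self.reg = self.combinational(*bits)   # latch on the "clock edge"
            return self.reg

    # Program a 2-input cell as XOR: outputs for inputs 00, 01, 10, 11.
    xor_cell = LogicCell([0, 1, 1, 0])
    assert xor_cell.combinational(1, 0) == 1
    assert xor_cell.combinational(1, 1) == 0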

Back in the good old days, when you really did play with gates, one of the most time-consuming things was working out the various delays that gave the higher-level logic delays and metastability criteria.

Back then your next step was working out your data flows around functional blocks and the registers used to drive them and store the results. This gave rise to what used to be called Register Transfer Language (RTL), which is what you wrote your layer-one microcode in. However, it also defined the way your CPU functioned. Microcode can be as simple as a large memory map or as complex as a convoluted logic state machine. The former tends to lead to a very wide control bus, the latter to slow throughput.

As IBM discovered, the wider the control bus the more you can do per clock cycle, but more importantly the more quickly you can correct microcode mistakes with minimum disruption. I would urge you to consider this aspect quite seriously if you do end up "rolling your own", not just for the aforementioned reasons but due to the "heat death" of logic. Basically we've reached the point in miniaturisation where we are now actually "power limited", not limited by geometry etc. As it happens, memory is about the lowest power for any given area of silicon and thus is almost "free" when it comes to power dissipation. It's this which has given rise to the large increase in simple cache memory we see on CPUs these days, along with other memory types.

Layer-one microcode used to be the simple operations you had to do every CPU cycle to get data in and out of registers, from or to the external CPU buses and the internal functional blocks of the ALU etc. It usually did not provide actual assembler instructions; this was done by layer-two microcode in simple RISC architectures and by higher layers in complex CISC architectures. In the former, an almost pure memory-map system is possible; in the latter it would have been seen as too costly in real estate, and layer two or three would have used a state machine. These days some instruction decode and control sections rival the CPUs of a generation or so ago in functionality.

Irrespective of the external control and data bus architecture, most CISC systems are internally Harvard architecture, because it makes the use of "go faster" pipelines and caches easier to implement.

One of the problems in CPU design is the length of time required by ALUs and other internal functions to complete. That is, an XOR between two registers is very fast, but an ADD or MUL is not, and gets slower the wider the internal data width is. The solution that used to be used was to throttle back the CPU cycle time to that of the slowest operation, which, whilst simplifying the design, makes it slower than it could be (this was usually acceptable due to the likes of incrementing the program counter register etc).

Whilst many SoC systems for microcontrollers in embedded systems remain Harvard architecture throughout, the CISC CPUs used in more general systems join the code and data buses into one external memory bus so that programs etc can be loaded more easily. This has unfortunate consequences for both security and high-level languages, making imperative rather than concurrent systems easier to implement.

With regard to TTA systems: yes, there are the problems you highlight, but they are often only of relevance in single-CPU systems. Single-CPU systems are however a thing of the past in general-purpose computers, and in the case of GPUs they are often more powerful than the main CPU in the system –when used correctly–; they are also more likely to be amenable to architectures that support concurrency, of which TTA is one.

Whilst TTA systems are more complex for programmers to get their heads around, so are the multiplicity of CISC assembler instructions, which few programmers will ever even attempt to get their heads around. The solution has been for some time now to let the high-level language compiler do the work, and the same would apply to TTA systems, provided the high-level language supported concurrency at such a low level (which most high-level languages in use today don't).

This brings up the issue of at what level in the computing stack you become concurrent, and the answer is, as is often the case, a trade-off based on what you are doing…

For graphics and digital signal processing and similar, generally the lower the better, thus inside the CPU at the logic level below the microcode. However, for more general computing, above the microcode is where the sweet spot is likely to be, with the CPU core being the accepted imperative variety.

Nick P May 8, 2014 11:36 AM

@ Clive Robinson

Thanks for the tips. Yes, all the timing issues are what I kept seeing in articles on the subject. It also appears synthesis tools are horrid for hardware compared to software. One article argued quite well why it was hard in general. I think better tools & hands-on development of complex projects in academia are the only way new generations will even begin to catch up. Funny that there's at least one IT sub-industry still dominated by the old folks. And that doesn't involve COBOL. 😉

re Harvard

A truly pure-Harvard design seems to avoid certain problems. Yet, I'm not convinced it's even necessary given what I know of tagged and capability architectures. Many problems in software remained unsolved even with a Harvard architecture. The solutions to many of these problems can also be used to protect code on a von Neumann machine. Harvard essentially creates two segments, code & data, with no further granularity. Machines like SAFE & CHERI can do so much more than that. Additionally, we've had so many exemplar von Neumann machines to build on & so few Harvards that, dare I guess, we're more likely to screw up a secure Harvard architecture project.

Note: this is one of those topics on which my opinion can change wildly day to day, month to month, and year to year.

re microprogramming & compilers

Good that you brought up compilers as it’s exactly what I was looking into yesterday. I found these gems that show microprogramming can not only be made easy: it can involve almost no microprogramming. 🙂

On the design of a microcode compiler for a machine-independent high-level language
http://www.computer.org/csdl/trans/ts/1981/03/01702839.pdf

A retargetable compiler for a high-level microprogramming language
http://ls12-www.cs.tu-dortmund.de/daes/media/documents/publications/downloads/1984-Sigmicro-15.pdf

High-level microprogramming – An optimising C compiler for a processing element of a CAD accelerator
http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1070&context=csearticles

So, higher-level microprogramming certainly can be done. All these papers are old, too, so I'm sure modern tools could push the envelope even further. The only question is "Would it be easily done in the kind of chip I described and for abstract machine implementation?" The chips [mostly] use common functional units and the abstract machines can be modeled as state machines. So, I don't see [yet] what would prevent HLL microprogramming from being used there.

Clive Robinson May 8, 2014 12:11 PM

@ Figureitout,

and maybe have a story where you popped out the womb, w/ umbilical cord still attached you did the “moonwalk”

No –I'm too old– but I do have one about my son very nearly bungee jumping –on his umbilical cord– off the end of the gurney when he shot through the midwife's hands (nearly knocking the camera out of mine). Having just been caught by an arm and a leg, my son, arm outstretched, pointed an accusing index finger and started to bawl his head off… Apparently others have similar tales, so I'm waiting to hear one about a low-flying newborn bouncing into somebody's iPad etc and taking their first selfie.

Anyway, that aside: charging of rechargeable batteries. The simplest circuit is a mains transformer, a bridge rectifier, and a resistor (but importantly no smoothing cap). Most but not all rechargeables have two charging currents specified: the first is the standard charging current, the second the trickle/holding charge current. Both are assumed to come from a constant current source (which old-style chargers almost never are).

The two things you have to scale are the transformer output voltage and the series resistor. It's usually safe to assume that the bridge rectifier has two silicon diode drops of ~0.7V each, thus a total of 1.4V. Transformers are normally rated at their full-load RMS voltage, with a peak voltage root-two times that.

The trick is to pick a transformer voltage where, at the RMS rating minus the bridge drop, the resistor would give the standard charge current if the battery were shorted out, and to power-rate the resistor at twice that short-circuit power. If you don't know what the standard charge current is, assume one tenth of the mAh rating of a cell (so about 200mA for AA cells); this will charge the cells in about 24 hours. Check that the peak voltage minus the bridge drop and the fully charged cell voltage gives an RMS current that approximates the trickle/hold current (if not known, assume around a quarter of the standard charge current, so around 50mA for AA cells).

So a quick approximation as a starting point: pick a transformer with an RMS value equal to the full-charge cell voltage plus the bridge drop, which for three NiCads is 3.6 + 1.4 = 5Vrms. The resistor value is going to be 3.6/0.2 = 18R, with a power rating of 2 × 3.6 × 0.2 = 1.44W. That gives a peak overvoltage of 7.071 – 1.4 – 3.6 ≈ 2.1V, giving a trickle charge of 2.1/18 × ~0.5 ≈ 58mA or less, which is about right.
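
For anyone who wants to replay that arithmetic, here is the same rule-of-thumb sizing as a short Python sketch (the C/10 rule, the 2000mAh AA figure and the ~0.5 RMS correction factor are the rules of thumb above, not measured values):

    def size_charger(cells: int, cell_v=1.2, cell_mah=2000, bridge_drop=1.4):
        """Back-of-envelope values for the transformer + bridge + resistor charger."""
        pack_v = cells * cell_v                  # e.g. 3 x 1.2V = 3.6V
        i_charge = cell_mah / 10 / 1000          # C/10 rule of thumb, in amps
        vrms = pack_v + bridge_drop              # transformer RMS rating
        r_series = pack_v / i_charge             # gives i_charge into a shorted pack
        p_rating = 2 * pack_v * i_charge         # twice the short-circuit power
        headroom = vrms * 2 ** 0.5 - bridge_drop - pack_v
        i_trickle = headroom / r_series * 0.5    # crude RMS correction factor
        return vrms, r_series, p_rating, i_trickle

    vrms, r, p, trickle = size_charger(3)
    print(f"{vrms:.1f} Vrms transformer, {r:.0f} ohm / {p:.2f} W resistor, "
          f"~{trickle * 1000:.0f} mA trickle")
    # -> 5.0 Vrms transformer, 18 ohm / 1.44 W resistor, ~58 mA trickle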

Clive Robinson May 8, 2014 1:02 PM

@ Nick P,

You have two generalised choices when it comes to the chip tools: VHDL and Verilog.

The problem with Verilog is that it's more akin to a traditional programming language, and most beginners make the mistake of using it just like a programming language. The trouble is that programmers use loops and reentrant code as standard; hardware does not, and this causes huge netlists that are oh so slow and invariably don't work the way a programmer expects.

VHDL has other issues, but on a gate-by-gate basis it's easier (if more long-winded) to pick up, and it tends to produce reasonable and working netlists even for beginners.

As I indicated I'm very "old school" and do gate designs in my head with paper and pencil; I then chuck it at a keyboard jockey to bang it into VHDL to get a simulation out, which I then run a jaundiced eye over. It's not the way you should use such tools, but old habits die hard for various reasons, one of which is that the human brain can do trade-offs "as it goes", which until recently the tools could either not do or not do well… There is a running joke about my ability to beat CAD tools, because I tell younger engineers with tracking and other problems "If I was you I wouldn't start from here…" (just like the farmer leaning on the gate in the original joke when asked by a couple for directions).

If you are keen on rolling your own I'd advise you to get an FPGA demo board from a manufacturer that supplies free tools where you have the option of both VHDL and Verilog; start with VHDL and only when comfortable with that have a go at Verilog.

The other route is to get a book with a CD-ROM of tools. One such is "Fundamentals of Digital Logic with XXX Design" by Stephen Brown & Zvonko Vranesic. They do both a VHDL version and a Verilog version (substitute for the three Xs in the title). However, take care when buying: prices vary wildly, from around 40USD to a couple of hundred (why, I'm not sure, but the fact that the cheaper versions are marked "student" gives a clue: it's the same book priced for companies or for students, at what the market can bear). Oh, a starting ISBN is 978-0-07-126880-6.

Noah Löfgren May 8, 2014 1:43 PM

Press Release | “United States of Secrets”: How the Government Came to Spy on Millions of Americans

http://www.pbs.org/wgbh/pages/frontline/pressroom/press-release-united-states-of-secrets-how-the-government-came-to-spy-on-millions-of-americans/

FRONTLINE Presents
United States of Secrets
http://www.pbs.org/wgbh/pages/frontline/united-states-of-secrets/

Part One: Tuesday, May 13, 2014, at 9 p.m. on PBS
Part Two: Tuesday, May 20, 2014, at 10 p.m. on PBS

(Check local listings)
http://www.pbs.org/wgbh/pages/frontline/local-schedule/

pbs.org/frontline/united-states-of-secrets
http://www.facebook.com/frontline
Twitter: @frontlinepbs #USofSecrets #frontlinepbs
Instagram: @frontlinepbs

vas pup May 8, 2014 2:36 PM

http://www.euronews.com/2014/04/29/driving-into-the-future/
A new emotion-detection application with substantial potential for a wide range of security applications (e.g. the guys at the controls in ICBM silos, commercial airline pilots – no more MH370 stories – and interrogation for intel purposes, though not for court as evidence). Time and again: any new technology is NOT a substitute for LEOs thinking with their own heads, but just an aid.

Figureitout May 8, 2014 6:49 PM

low flying new born bouncing into somebodies iPad etc and taking their first selfie
Clive Robinson
–Oh, you outdid my joke! :p Thanks; what I meant was the on-board circuit (I could send you pictures somewhere); I can't immediately find a schematic for the device, so I suppose I could try to reverse engineer it… It won't be a NiCad either, but a Li-ion, so a little more complicated and more dangerous (I don't like the prospect of an exploding battery). So I'll see, if I get a fresh battery, whether it can power up just off that (fingers crossed); and then I just need to make a charger for it. Here's a design:

http://homemadecircuitsandschematics.blogspot.com/2012/05/make-this-li-ion-battery-charger.html

Going to have to watch out for WinCE viruses (yay…), really want to try to get some sort of unix instead…

http://www.pocketpcfaq.com/faqs/virus/virus.htm

While searching for a file encryption program, I came across this. Bruce you would get a kick out of this:

HideIt! Pro belongs to the military class of cryptography systems. It utilizes the RSA 128-bit per key algorithm. For every password phrase you enter, HideIt generates 48 more passwords and applies the algorithm to all of them.

This means that the total encryption scheme is utilized by no more than 6144 bits, ensuring your privacy.

http://www.allaboutsoft.com/software/2998-HideIt_Pro_1_04.html

Anyway, back to my hole.

Nick P May 8, 2014 9:05 PM

Recent attacks on a few anonymizing networks

Trawling for Tor Hidden Services – Detection, Measurement, and Deanonymization (2013) Biryukov et al
http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf

The Sniper Attack: Anonymously Deanonymizing and Disabling the Tor Network (2014) Jansen et al
http://www.robgjansen.com/publications/sniper-ndss2014.pdf

Practical attacks against the I2P network (2013) Egger et al
http://wwwcip.informatik.uni-erlangen.de/~spjsschl/i2p.pdf

Freenet always interested me more due to it being a distributed data store. However, I bet I’d have a paper on it too if it was getting attention from smart researchers like these. Usable & robust anonymity is just really hard to do.

Nick P May 10, 2014 12:44 AM

@ Clive Robinson

“according to some”

cough Dijkstra and his groupies cough

Did you read the “Bashing BASIC” section of the article? It quotes him on that and then provides counterpoints. I like one of them: ” ‘I’ll go out on a limb and suggest the degrading of BASIC by the professionals was just a little bit of jealousy–after all, it took years for us to develop our skill; how is it that complete idiots can write programs with just a few hours of skill?’ (Kurtz) BASIC may not have made sense to people like Edsger Dijkstra. That was O.K.—it wasn’t meant for them. It made plenty of sense to newbies who simply wanted to teach computers to do useful things from almost the moment they started to learn about programming.”

And one of them built so many useful tools that he became addicted to IT enough to turn into a security engineer of actual talent. 😉

“You might find this page of interest,”

It was fun. I’ve seen BASIC on all kinds of machines from microcontrollers to servers. I was thinking there’s not much that can be done with it that stands out anymore. Then, it dawned on me that I haven’t seen it used for one thing: supercomputers. A BASIC dialect along the lines of High Performance Fortran, X10 or Parasail shouldn’t be too hard to do. With BASIC, the language always hid many details anyway. I just think it would be hilarious if the next Watson, simulated brain, etc. ran on a gazillion core supercomputer coded efficiently in… “ParallelBASIC.”

Critics: “It’s programmed in WHAT!? I mean… they aren’t bright enough to even give it a good name. They don’t make it sound like an element, a famous scientist, a word that would impress Comp Sci majors… they just combined “BASIC” and “Parallel.” And this thoughtless language was allowed to execute on a $100+ million dollar machine? Aghhhh!!!”

So, I typed "Parallel BASIC" into Google just in case and saw HPC BASIC. (!) It turned out to be a reference to the Julia language, alluding to it being the modern BASIC of scientific programming. Seems my idea of bringing an actual BASIC to supercomputing is still novel. Or it shows that only one guy is crazy enough to even publish such nonsense on a public forum. Could go either way.

Figureitout May 10, 2014 1:55 AM

Clive Robinson
–If you’re so scared of providing a public email address to contact you w/, you’re saying something by saying nothing regarding email security/protocols. Just wanted to demonstrate for readers out there; obviously at this point I don’t care how awkward I can be. Email cannot be trusted whatsoever; entirely new protocols are needed.

Clive Robinson May 10, 2014 2:33 AM

@ Nick P,

Yes, he was one of the "some", but there were others, and as always with such things there is a germ of truth in it. For instance Church-v-Turing: it's been argued that the Lambda Calculus Church invented was unduly overshadowed by Turing's universal engine, which gave rise to the imperative systems that currently blight our thinking and hardware, keeping us from the concurrency and parallelism we needed to have reached back in the 1990s…

Personally I think the history of computing mainly shows the pragmatism of doing what's possible at the time with the resources available. Unfortunately this way of doing things often suffers from the "Apollo problem", which means that instead of continuous improvement over time you get a burst of activity followed by a long period of inactivity, which leaves technology in a cul-de-sac for forty years or more. It's one of the reasons we have the crazy IAx86 architecture with *nix, or some failed improvement on *nix that's in effect a poor man's knock-off dressed up like a pig in a ballgown. The alternatives that came along that were better in oh so many ways got killed off for not being porcine compatible…

History shows you need a tipping point where change has to happen, but for some reason we've not really had one, and I wonder what it will be. As I've said before, we have got to the point where there is no ROI in trying to stick with Moore's Law, and the only way to increase computing power cost-effectively is by concurrency and parallelism at various points in the computing stack. The chip makers know this, which is why we have multiple-core and multiple-CPU systems, but the OS and apps have by and large failed to capitalise effectively on this, the two questions being: Why? and What's going to give first?…

The main disadvantage with BASIC was also its main advantage: it was both interpreted and overly simple. It was thus very easy to learn by experimentation bordering on play, but painfully inefficient and slow.

I suspect that the notion of ParallelBASIC is not wrong, but it won't be BASIC; it will be a language that has pure functions supported by immutable variables, otherwise concurrency and parallelism will be way too difficult to do either efficiently or effectively, but it will have the "easy play" aspects of BASIC. The "What follows Fortran" article you linked to gives a number of options, but none of them appear to be ready for prime time currently. I suspect it will be Python that becomes the next BASIC, but will it make concurrency/parallelism easy, and at what point in the stack? If it does, and at the right point, then it will probably be the way of the future…

My guess is concurrency will be above the CPU level of the computing stack for a whole heap of reasons, which means the bottleneck will be, as it has been for some time now, the OS, which is one of the reasons I was looking at less-than-microkernels in the C-v-P design, with lightweight RISC CPUs with local scratch memory and arbitrated access to main memory being done via a hypervisor.

Wael May 10, 2014 2:51 AM

@ Nick P, @ Clive Robinson,

This is the first BASIC computer I used. I bought a humongous 8K memory module for it; I forget how much… I still have it, sitting next to my Commodore 128. The Timex Sinclair and the Vic-20 disappeared…

Clive Robinson May 10, 2014 2:56 AM

@ Figureitout,

Yes, email protocols are broken beyond repair, not just from the security aspect but from the overloaded technology and social aspects as well; this has been the case for most of this century.

You only need consider one small aspect (spam) to see that we need to fix email at all sorts of levels, but we also need to do it with considerable care, lest we give rise to other issues or lose useful and needed aspects (anonymity and deniability etc) that we have with physical mail.

One way is a "Dead Drop Box" system where you separate the message and notification aspects, with the notification system also being made secure by some method. However, such systems require the use of side channels of one kind or another, and this is the major stumbling block currently.
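
As a minimal sketch of that separation (the in-memory store, the token scheme, and the print standing in for the notification side channel are all invented for illustration): the drop box only ever holds ciphertext under a random, meaningless claim token, and getting the token to the recipient happens over an entirely separate, ideally covert, channel:

    import secrets

    class DropBox:
        """Message store: holds ciphertext under unlinkable random tokens."""

        def __init__(self):
            self.slots = {}

        def deposit(self, ciphertext: bytes) -> str:
            token = secrets.token_urlsafe(16)   # random, meaningless claim token
            self.slots[token] = ciphertext
            return token

        def collect(self, token: str) -> bytes:
            return self.slots.pop(token)        # one-shot retrieval

    def notify(recipient: str, token: str) -> None:
        # Stand-in for the separate notification side channel -- the hard part.
        print(f"(out of band) tell {recipient}: claim token {token}")

    box = DropBox()
    t = box.deposit(b"...already-encrypted message...")
    notify("recipient", t)
    assert box.collect(t) == b"...already-encrypted message..."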

Wael May 10, 2014 3:26 AM

@ Clive Robinson, @ Nick P,

Personally I think the history of computing mainly shows the pragmatism of doing what's possible at the time with the resources available. Unfortunately this way of doing things often suffers from the "Apollo problem", which means that instead of continuous improvement over time you get a burst of activity followed by a long period of inactivity, which leaves technology in a cul-de-sac for forty years or more.

This is the dual of biological evolution. You are describing "Silicon Evolution" with its two pillars: small-scale evolution and large-scale evolution. We cannot short-cut evolution, I think. You are also implying that large-scale "Silicon Evolution" is overdue… C-v-P could be a viable catalyst for this sort of evolution.

Wael May 10, 2014 3:51 AM

I read a lot of complaints about “Code-cutting”. I don’t know why it’s so stigmatized! This is what you get when you engage in Poem-cutting 🙂

Ladies and Gentlemen, Hobos and Tramps,
Cross-eyed mosquitoes and bow-legged ants;

Readers and bloggers, short and stout,
I will tell you a lie I know nothing about;

Next Thursday, which is Good Friday,
There’s a Mother’s Day meeting for fathers only; // Mother’s day is this Sunday, in the US

Wear your best clothes if you haven’t any,
Please come if you can’t; if you can, stay at home;

The admission is free so pay at the door,
Pull up a chair and sit on the floor;

It makes no difference where you sit,
The man in the gallery’s sure to spit;

I stand before you,
To sit behind you;

I lived on the corner in the middle of the block,
In a two-story tower on a vacant lot;

I watched from the corner of a big round table,
I the only witness to the facts of my fable;

As I was going down the stair, I met a man who wasn’t there.
He wasn’t there again today, I think he’s from the CIA; /* For @ Figureitout’s pesky friends */

As I was walking by, a bird dropped whitewash in my eye,
I did not laugh, I did not cry, I just thanked god that cows don’t fly;

One bright day in the middle of the night,
Two dead boys got up to fight;

Back to back they faced each other,
Drew their swords and shot each other;

One was blind and the other couldn’t see,
So they chose a dummy for a referee;

A blind man went to see fair play,
A dumb man went to shout “hooray!”;

A deaf policeman heard the noise,
And came to arrest the two dead boys;

A paralyzed donkey passing by,
Kicked the blind man in the eye;

Knocked him through a nine inch wall,
Into a dry ditch and drowned them all;

A long black hearse sailed to cart them away,
But they ran for their lives and still gone today;

If you don’t believe this lie is true, // The liar paradox
Ask the blind man, he saw it too;

Through a knothole in a wooden brick wall,
And the man with no legs took a long stroll;

The show is over, but before you go,
Let me tell you a story I don’t really know.

Steeeeeerike 2 🙁

Clive Robinson May 10, 2014 5:01 AM

@ Wael,

The pocket computer was only for sale in the early 1980s, and I'm guessing that you were old enough to earn money, so 16 or older, which puts you in your forties or so…

Also it was mainly sold in Europe and the Far East, which might mean you spent part of your formative years outside of the US…

The first computer I purchased that had BASIC on it was the Apple ][, which cost me around 2000GBP when I bought it back in 1980; that was about three months' middle-class professional earnings, or a family car, back then…

However it was not the first computer I had bought, or designed and built. You mentioned Sinclair: well, way back in the late 70s it was Cambridge Research, and they sold an SC/MP-based single-board computer, the MK14, for 40GBP, which was still a lot of money… The first system I built from scratch was based around an 1802 processor (which is still made today) that I had acquired whilst involved with some "space research" in Surrey in the UK. Back then memory was quite literally worth its weight in gold, and the advent of a 1K chip that was 256×4 bits was the height of desirability. I wire-wrapped the design on my desk at home and, after repeated checking, finally powered it up and put in the first simple loop program from the front panel switches. I later got hold of a copy of Forth for it and added a few niceties such as a UART to talk to a terminal, and a cassette interface using a Signetics PLL chip. Over the years I also built 6502, Z80, 6800, 68K and 2900 bit-slice designs on the same desk, and I've still got some of them around in my loft/garage, along with a couple of Acorn Atoms and a BBC Home Computer, a ZX80 and a Jupiter Ace Forth home computer, as well as most bits of a PDP11-70 and a VMS box, and other ancient bits and bobs like ICL core store, 8-inch floppy drives, and other stuff too numerous to mention. Then there are the PC boxes, Unix/Xenix boxes, an early 68K-based Netware box, and parts of a NeXT box, an Apple Lisa, Sun kit, etc. All used if not abused by me for various work and personal projects, and assumed to be still working… so more a dusty store than a museum or scrap yard 😉

Wael May 10, 2014 5:23 PM

@ Clive Robinson,

The pocket computer was only for sale in the early 1980s and I'm guessing that …

Pretty good analysis… Impressive accuracy of around 90%

Nick P May 11, 2014 1:28 AM

@ Clive Robinson

" Unfortunately this way of doing things often suffers from the "Apollo problem", which means that instead of continuous improvement over time you get a burst of activity followed by a long period of inactivity, which leaves technology in a cul-de-sac for forty years or more. "

That is an interesting way of looking at it. I'm not sure it's supported by evidence. Yet, there was a similar concept that focused on equilibriums, peaks, or something like that in ideas and adoptions. My memory fails me here. It said there were moments of improvements here and there, but otherwise not. It just wasn't as extreme an example as the Apollo program.

"The main disadvantage with BASIC was also its main advantage: it was both interpreted and overly simple."

Not slow due to interpretation. There are BASICs specifically designed to compile to fast native code, even for game engines. That's just what was common long ago and with more academic projects. Overly simple applies to many, yet there are BASICs that address that too. And there are some that address it WAY too much. 😉

"I suspect that the notion of ParallelBASIC is not wrong, but it won't be BASIC; it will be a language that has pure functions supported by immutable variables, otherwise concurrency and parallelism will be way too difficult to do either efficiently or effectively, but it will have the "easy play" aspects of BASIC."

Well, ParallelBASIC is a joke, so it's OK if it doesn't make it. Yet, the easiness of BASIC is definitely its appeal. That's why some are talking about Julia as a BASIC for scientific computing, due to its combo of ease, performance, and legacy integration. Python is actually dominating that, though, as it's been integrated with fast scientific libraries and extended with the capability to create fast, native code with Python subsets. Your guess is on the mark so far.

"My guess is concurrency will be above the CPU level of the computing stack for a whole heap of reasons, "

I've seen a few processors that solved the concurrency problem in different ways. One accelerated message passing to make those models extra fast. One included a few changes that made multithreading more efficient and safe. Several essentially supported a form of transactions where a series of statements were executed as a whole without interference from other computations. Then, there was hardware such as the i432 that even did scheduling at the hardware level, below the OS. So, it certainly can be done in hardware.

Yet, I think it should be an OS thing with hardware only accelerating the primitives. Arguably, that was the case for the three designs. Goes to show hardware can have about as much effect as software in this.

"which is one of the reasons I was looking at less-than-microkernels in the C-v-P design, with lightweight RISC CPUs with local scratch memory and arbitrated access to main memory being done via a hypervisor."

And our designs are converging a bit. I’m playing with the hardware level now in my designs more than in the past. Yet, I’m still looking for “this provably can’t happen by good design” and on most critical aspects. So, you look at RISC CPU’s and hypervisors, I look at whatever CPU can track context of operations to prevent obviously bad ones and allow probably good ones. Tags, capabilities, etc can do a lot in that area. Still searching.

@ Wael

"This is the first BASIC computer I used. I bought a humongous 8K memory module for it; I forget how much… I still have it, sitting next to my Commodore 128. The Timex Sinclair and the Vic-20 disappeared…"

Your first was a portable. Nice. Mine was portable… after enough gym time and with assistance of a vehicle. 🙂

“We cannot short-cut evolution, I think. ”

I think the very existence of the human brain has caused that plenty of times. We're ahead of it in many ways, yet still behind it (or controlled by it) in critical ways. Whether I can get the human race or the market in general to do this in a specific way is another issue. A harder issue.

You said an evolution is overdue. Something is certainly overdue. It's going to be a revolution, though, as it will be radically different from the status quo. I've mentioned many architectures that are largely evolutions of older ones with excellent safety, security, or verification properties. Yet, compared to existing architectures, it's like throwing out everything people know and do. That's the kind of change that takes… it's not easy to make happen in an overall market.

One would hope the Snowden disclosures, pervasive malware threats, constant disruptions of availability, maintenance/integration woes in software, etc would do it. They largely haven't. So, I'm not sure what discussions, tipping points, etc. would lead to such a change. I am quite pessimistic about what the majority, or even a significant market share, will take up in this field. Smart card, DO-178B, etc markets give me about the only glimmer of hope, as they have real quality or security improvements. Meanwhile, I continue doing what I do on principle, hoping one day it might benefit many in practice.

Great poem. Gotta wonder who the two boys were. And I heard "Steeeeeerike 2" in Leslie Nielsen's voice, as a certain scene made it funnier that way.

yesme May 11, 2014 2:12 AM

@Nick P

What is it exactly that you want to do (just curious)? The discussion about operating systems and programming languages can only last so long (unless it's just for fun).

I think most of us are aware that from a security POV C and C++ stink. But Ada and its subset SPARK don't. So it's here already, and has been for quite a long time now. And although a bit bureaucratic, Ada is IMO a very professional and productive language. It is very fast, modular, has an incredible type system, can run embedded without an OS, and it has advanced features.

Operating systems however, that’s a different story.

Nick P May 11, 2014 3:31 PM

@ yesme

What I want to do is help readers understand what properties a secure system will have. I also promote any project, technique, etc that can be used to build secure systems. I’m also exploring new designs that prevent code injection or data leaks from the hardware up, while supporting integration with COTS I/O devices & development in safer languages. I’ve been posting various architectures and shortcuts here to that effect.

I might not be able to build the systems any longer. My goal is to give others what they need to do it. If they want secure & democracy-preserving technology, I've told them plenty about how to build it & they just have to put in the effort/sacrifices. I've done [and am still doing] my part. Just waiting for it to take off or for an existing project to get production-ready. Then, I'll put a whole stack on it or [as usual] tell others how to do it right and simply.

Systems I can trust that force surveillance states to work very hard are what I want. I developed them in the past. My old work isn’t available anymore past what I’ve posted here. I want to see myself or someone else do this again for the modern threat. And it get put into widespread use.

Wael May 11, 2014 9:08 PM

@ Nick P,

You said an evolution is overdue. Something is certainly overdue. It's going to be a revolution, though, as it will be radically different from the status quo…

We are in agreement. Between 2750 BC, when Ancient Egyptian manuscripts mentioned electric eels (or fish), and around 1950, when "electronics based" computing machines were developed, is a span of 4700 years. That's the time it took computers to evolve from the first observation of a phenomenon to the time it was harnessed for computing. The next stop may be a liquid-state computer, where chemists, not solid-state engineers, design the beast. That would count as large-scale evolution because it resembles a different species, so to speak. I would not count an optical computer as large-scale evolution, even if it uses glass instead of copper, and mirrors and prisms instead of whatever their counterpart is 🙂 Another possible large-scale evolution is a system with millions of tiny processors that behave like a human brain. I am not likely to witness either in my lifetime…
Another possibility is the discovery of a new phenomenon (equivalent in magnitude to the discovery of electricity) that gives rise to new ideas and implementations. Maybe gravity is one candidate; I sent a link before talking about the speed of gravity, but this sounds too crazy…

Nick P May 12, 2014 11:14 AM

@ Wael

“Another possible large scale evolution is a system with millions of tiny processors that behave like a human brain. Not likely to witness either in my lifetime…”

Then, I guess this is a gift of a lifetime. You even have to understand brain function just to program it. 😉

Anura May 12, 2014 3:24 PM

@Nick P

Programming for the human brain?

One Weird Trick To Compute Factorial of a number
    get a calculator
    enter the number on the keypad
    press the factorial button (!)
    what does it say?

Figureitout May 12, 2014 4:59 PM

Clive Robinson
–I just don't get how you're unable to drop a throwaway email address, using a random wifi network and a tiny gadget you undoubtedly possess… unless there are some serious holes us mortals don't know about.

Anyway, to beat the issue to death, here's a cool breakdown of charge circuits (a counterfeit one too). This is why I'm iffy about diagnosing the problem: the circuit isn't trivial, and it's even worse to troubleshoot when the boards are stacked on each other (they put glue on the ROM-chip screws, perhaps so I would burn up the threads and have to Dremel the screws out to get in). But I am getting 5.2V on an input cap, so… somewhere on the charge circuit has to be the problem.

http://www.righto.com/2014/05/a-look-inside-ipad-chargers-pricey.html

Also, if no one can see the relevance of a charging circuit here: where do you think one of the first places to start building a computer is? It needs a regulated DC voltage, AND I want to try to filter the power well too, to cut out power-analysis attacks (besides just looking at consumption: actual injections and such).

Wael May 12, 2014 9:30 PM

@ Nick P,

a gift of a lifetime…

Thank you! It’s not exactly what I was thinking, but I’ll dig deeper into it and add it to the queue you accumulated on me 😉

Wael June 24, 2014 9:26 PM

@Nick P,

Then, I guess this is a gift of a lifetime. You even have to understand brain function just to program it. 😉

It may very well be a gift. Fits within what we were discussing in C-v-P 🙂
One thing off my queue, I suspect we’ll revisit it!
