Friday Squid Blogging: Cannibal Squid

The Gonatus squid eats its own kind.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on September 30, 2016 at 4:14 PM • 147 Comments

Comments

Lucas • September 30, 2016 4:36 PM

Can you recommend a privacy and security blog from a European perspective? I have a feeling we have a few more privacy rights over here, or at least more debate on the topic, or at least a different debate, one which doesn't really have a voice in American mainstream Internet media. :)

Rob • October 1, 2016 3:59 AM

@Lucas, @Yerkle, The UK - which is still in Europe :-) - has also made attempts to restrict encryption, and to impose detailed surveillance. Reports suggest that is still the intention if it can be achieved. It also hosts GCHQ. So S on S is still a very useful resource, wherever you are based. In any case some of the commenters are evidently based in Europe.

Having said that, a blog that keeps European (both national and EU) policies and attitudes in focus could be useful.

Anon • October 1, 2016 3:59 AM

The EU practices what Orwell calls "Doublethink". It's where you hold two opposing view-points whilst believing both to be true simultaneously.

In the case of privacy, it says it cares and appears to pass laws that do that, but then it acts in entirely the opposite manner. Whilst it can be shown the EU doesn't actually care about privacy, people still think it does.

Many people don't realize how dangerous the EU really is. It's trying to build a superstate in the guise of "helping people", but it is really doing it for power and control. Just look at how nasty they are deliberately being towards the UK leaving the EU after a democratic vote. The EU is completely undemocratic, and will do what it can to scare others into staying, whilst arguing for tighter integration.

It's a take-over by stealth. No-one wanted or asked for the EU, except those who conceived it out of the ashes of WW2 in a desperate bid to continue what they couldn't take by force.

Lucas • October 1, 2016 5:01 AM

Thank you for the posts and links relevant to my inquiry, folks; I'm new here. Should I have mentioned @Bruce in the OP (https://www.schneier.com/blog/archives/2016/09/friday_squid_bl_546.html#c6735315)? Maybe he knows something for me. What I'm really looking for is a blog similar to Bruce's, but from a European perspective, with more European coverage. Does it exist at all? OK, it can even be a UK-based blog, let it be! :)

Curious • October 1, 2016 7:17 AM

Someone on twitter points out that this bank uses voice biometrics for authentication:


http://www.citibank.com.au/vb/

From their FAQ, under "Is Voice Biometrics secure?":

"Yes - Your voice print will be stored in a secure server using industry leading encryption methods and cannot be 'played back'."

Jacob • October 1, 2016 7:49 AM

@ Curious re voice print

If this method becomes common, LE will be able to ID voice comm users - be it phone, voice chat or whatever.

Curious • October 1, 2016 7:50 AM

@Jacob

That's what I thought as well. I don't like it. And I don't like the police state either.

Nick P • October 1, 2016 10:22 AM

A very interesting design was posted on Hacker News recently: ORWL, billed as the first open-source, physically-secure computer. A person on the project showed up in the comments to answer questions; some good discussion. I was impressed that they were doing a lot right for their use case. I suggested they clone IBM's old one, since Ross Anderson's team failed to break its tamper-resistance, and close the gap between the two designs.

The first one will use Intel chips as a practicality. Later models could use RISC-V or OpenPOWER with a secure OS. The important thing is that *someone* is working on open-source, usable, inexpensive tamper-resistance. It can be reused in other designs once they get it working.

Clive Robinson • October 1, 2016 10:34 AM

@ Yerkle Twerkle,

With regards to France-Germany: historically, even when their governments were at war, their companies still traded... Thus to say they are as "thick as thieves" is a bit of an understatement.

It is this axis of --evil-- power that aims to corrupt the rest of Europe to a subservient will. In this respect it is no different from the US, Russia, China and others, all of whom claim they are not building empires but the equivalent of co-prosperity spheres (see Japan pre-WWII).

The aim is to move "real wealth" inwards whilst moving faux monetary wealth outwards. The hitch with the plan currently is very low interest rates: the "beggaring" effect of inflation on money, bonds and other monetary wealth is not bringing the satellite nations to heel. It's therefore no surprise to see Germany forcing southern Europe to sell its assets in exchange for loans at usurious rates of interest. These assets then get picked up by German or German-controlled entities, or, illegally, French government entities.

The last thing such entities want to lose is the intelligence that enables them to asset-strip the satellite nations. Thus any impediment they turn, or try to turn, into a crime. Worse, the idea is not to imprison via such legislation but to beggar those they choose as targets, effectively winning twice by forcing a "fire sale" of assets that they buy with the fines imposed.

The encryption that you need to protect private communications stands in the way of such systems, in much the same way that attorney-client privilege once did. Thus outlawing it is a priority.

Clive Robinson • October 1, 2016 12:08 PM

@ All,

As some of you are aware, the LLVM project is getting "legal advice" about turning itself into a corporate machine by changing its open licence.

Aside from the fact that it is an attempt at copyright theft, it does not bode well for the future of LLVM.

OpenBSD champion Theo de Raadt has the following interesting take on it:

https://marc.info/?l=openbsd-misc&m=147503691302850&w=2

Sadly a number of open projects get one or two people who think there is big money to be made from changing licences (even though history shows otherwise).

People have to remember the general corporate strategy: make vague suggestions of funding etc., but say clause XXX is an absolute stumbling block. The block gets removed, but the funding rarely if ever happens, due to the next stumbling block the corporate legal staff can invent. Meanwhile the corporation plunders the IP of the many contributors. Such is the way of the world.

r • October 1, 2016 12:11 PM

"Police surveillance: The US city that beat Big Brother"

http://www.bbc.com/news/magazine-37411250
https://news.ycombinator.com/item?id=12602696

The city council's decision to limit the DAC was a victory for Oakland Privacy and Hofer, who has since been elected chair of the city's first Privacy Advisory Commission, which has been given the task of scrutinising every new piece of equipment the police department wants to buy.

Go Raiders!

Last week the ACLU launched proposed legislation in 11 US cities, including New York and Washington DC, that would, if passed, establish community control over police surveillance.

r • October 1, 2016 12:12 PM

@Clive,

I was gonna post the LLVM thing but weighed it as too snarky from my mouth.

"Not even red team is red, some are blue and others are green."

IanashA_iitocIh • October 1, 2016 1:18 PM

@Nick P & SoS fans, in general

I enjoyed reading the comments about the ORWL (Orwell, as in 1984, perhaps) computer. In addition, a link there sent me elsewhere regarding Qubes 3.2:
https://news.ycombinator.com/item?id=12604417

Regardless, the current Apple MacBooks may be viable substitutes for some threat models. Like the ORWL, the MacBook uses Intel Core M chips, iirc, with VT-d (IOMMU) and VT-x (HVM), though TPM support on Macs in general is unclear.

http://arstechnica.com/apple/2016/04/review-the-2016-retina-macbook-is-a-faster-version-of-the-same-machine/3/
https://www.apple.com/macbook/

Has anyone tried installing and testing Qubes 3.2 on late-model MacBook Airs, MacBook Pros or MacBooks? The Qubes OS hardware compatibility list could probably be updated to reflect more recent PCs from Apple and, perhaps, from other manufacturers:
https://www.qubes-os.org/hcl/

btw, Qubes has certified the Purism Librem 13 as an R3.x laptop:
https://www.qubes-os.org/doc/certified-laptops/

Puppy • October 1, 2016 8:53 PM

A big thank you to the folks who recommended Spybot, both programs, in recent months:

"Anti-Beacon" (for blocking telemetry) and "Search & Destroy".

Really helpful, essential programs; highly recommended.

ab praeceptis • October 1, 2016 8:57 PM

Nick P

Thanks for the ORWL link. While I'm less positive about it than you are, I agree that they at least really try and that they have some really smart approaches. I also agree with you that a non-x86-powered version would later be strongly desirable.

My major itches are x86, the USB port and (as far as I understood) Qubes OS. But OK, Rome wasn't built in a day, and they have a quite reasonable start.

Talking about OSs (and hence about model spec/verification, languages, etc.), you might want to have a look at "Leon" from the LARA team. Not perfect, and of course there are things that don't make me exactly happy, but all in all a very attractive approach.

ab praeceptis • October 1, 2016 9:15 PM

Clive Robinson

"LLVM" - I'm not too surprised. After all LLVM is a project that - in corporate eyes - begs to be taken over (and away from open source).
Also, not meaning to flame, but I remember quite well what I thought when I saw major apple involvement there.

I might be wrong but apple being a heavyweight again and seeing MS's impressive progress re. safe software (or at least the tools to build them) it must have looked very attractive if not even necessary for others to get some solid base, too. That market ist just too valuable to leave it to MS.

As far as I'm concerned (read: being not that much interested in llvm anyway (but seeing it as a good thing)) 3.9 is a pretty usable thing. And re. what apple and/or others make out of it I'm not too interested anyway.

Not at all funny sidenote: A project of LLVMs size will hardly survive without some corp (or state) sponsoring. Now, assume, one comes up: Will he be trusted after what's just happening?

ISPs as POLICE • October 1, 2016 9:52 PM

War by DDoS: In the wars unleashed by unsecured IoT and computers, we are going to see a greater push for ISPs to rein in the out-of-control devices, but that also has dangers if the ISPs are given unlimited powers.

Clive Robinson • October 1, 2016 10:38 PM

@ yikyak,

You don't say what your interest in that 2000 MIT spin-off is.

It's 16 years old, in an industry that changes significantly every two or three years. Perhaps you need to do more research on what it is you are looking for and what your objectives are.

That said, Philip Greenspun (the driving force behind the course) has much more recent things to say about such education ventures:

http://blogs.harvard.edu/philg/2016/09/28/georgia-tech-online-masters-in-computer-science/

r • October 1, 2016 10:43 PM

@by the rules,

The question, post-revelations, isn't whether something *will* be trusted. That's the wrong question, better suited to a more trusting era; the real question is whether something CAN be trusted.

Clive Robinson • October 1, 2016 10:56 PM

@ ISPs as POLICE,

War by DDoS: In the wars unleashed by unsecured IoT and computers...

Think of "War by DDoS" as like prohibition era Chicago and it's bootleg "drive by shooting" wars.

The problem was not gangster on gangster killing but collateral damage to uninvolved bystanders aided and abetted by the politicians and authorities of the time who were very much "on the take".

The main result of interest here was it resulted in todays FBI...

So historically it does not bode well...

Clive Robinson • October 1, 2016 11:13 PM

@ Nick P, ab praeceptis,

With regard to the ORWL (apparently pronounced "Orwell", of 1984 fame), I've not had much time to look into it.

But one thing hits me in the eye that @Thoth may want to comment on. It uses some HSM security features, one of which is a protective screen/net that shatters easily, thus breaking a circuit that causes --I'm guessing-- encryption key loss/erasure.

As far as I'm aware such technology is only used in "fixed position" HSMs. Which raises a couple of points, as the ORWL is an awkwardly shaped portable device that could very easily be dropped:

1, Is it likely to shatter if the ORWL is dropped?

2, If so, what backup procedures are in place for key restoration?

Obviously when you start down this train of thought a whole can of worms wriggles into view, and some of them are quite unpleasant, which raises further usability questions...

Clive Robinson • October 1, 2016 11:26 PM

@ ab praeceptis,

With regards to LLVM: like you, it is mainly of peripheral interest to me. However...

There are a great many people using it as a foundation for their products / apps / tools, some of which are getting to be "front and center" even for those with extreme tunnel vision.

We only need look at the likes of certain crypto libraries to see what can go wrong if they go bad. Thus the question of "what happens next?" has a rather broader interest than it first appears.

Likewise, what happens if a fork occurs? The history of such "forced separations" is not particularly good, especially when the commercial side starts in on protective measures like obtaining US software patents on key methods.

Nick P • October 1, 2016 11:37 PM

@ Clive Robinson

"1, Is it likely to shatter if the OWRL is dropped?"

I didn't even think about that. Probably since they explained it as such that the tamper-resistance is active when user is away. They implied it at least. Still might be affected by a fall or someone smacking it. Worth asking them. Might make a nice denial of service attack, too. ;)

ab praeceptis • October 2, 2016 12:12 AM

Clive Robinson

"ORWL", shattering - the superficial answer is: depends on the case you choose. Their (hardened) glass case is said to survive 0,6 m onto marble.

The less superficial answer is even less pleasant. Nick P already - and correctly - saw the danger of inviting DOS attacks.

In between re. an earlier post of you and relevant here: It seems you were mistaken. The "security" mesh seems to be some circuitry "printed" (lasered) onto the "inner shell", i.e. it doesn't protect a chip (like in high grade HSM etc) but the whole board.

They talk about 0,5 mm distance between the mesh lines and they explicitely mention drilling into it (the inner shell) to trigger an alarm event. This then leads to deleting - what isn't too clear but they create the impression that "all data" are deleted. Which, of course, is a rather bad DOS for the owner of the device as it comes down to the capability of taking all his data down.

I assume that there is some way for an ORWL owner to somehow get at and save his drive key but looking at the "cool" and "designy" device I doubt that many potential buyers will make that effort.


"ORWL", security - I might be mistaken but my impression is that they came across the Maxim security chip (the heart of their devices security) and then had the idea to build a "secure computer board" around that chip. Indeed, looking at that chip is suggestive and it seems to make sense to use it to build a more secure mainboard than normal ones.
Still, my impression is that they did *not* design a secure computer (the way people like ourselves would approach that task) but that they (correctly) identified the potential of the Maxim chip and then, based on a quite limited conceptual approach, designed the ORWL.

Another indicator (to me but that is probably subjective) is the fact that they "solve" the obvious problem that hardware alone can't get one a secure system by simply throwing cubesos on top of it.
This IMO not only indicates a lack of understanding and proper concept but it also creates mistrust in that cubesos seems to be rather tightly linked to TOR which IMO is to be considered badly tainted.
The fact that I see very clear marketing thinking in that project (the movie, the designer case, etc.) doesn't help me to trust that box.

All in all I would recommend that thingy to a rich client who thinks in terms of "one can buy everything. One can also buy security".

"LLVM" - there you find me rather cold-blooded. I liked the idea behind LLVM from day 1 and I mistrusted it from day 1. When I saw apple entering the game LLVM (for me) turned into something without a trustworthy longterm OS future (but still interesting).

I'm less worried than you, it seems, as I find LLVM useful and cool but not earth-shattering. Its approach (and basic idea) is beautiful and powerful - but not new, which also translates to not much to be patented (says me, well noted, who is completely clueless in legal questions). Intermediate code? Not new, by no means. AST-based multiple-pass optimization? Not new. Multiple backends? Half-new, though practically rather than conceptually. Etc.

And: it's out there in the wild. Anyone can look at it and learn how to do it. Moreover, I expect some of the LLVM people (like e.g. the people behind quite some of the backends) to jump ship and/or to support a fork.

Finally, the GNU compilers should not be forgotten, particularly as they show that pretty much all the needed pieces are available patent-free. All in all I have no doubt that LLVM would survive as a fork. If I'm worried, then it's in the opposite direction: LLVM is just too attractive, hence there might be diverse forks, some public, some by state agencies, etc. This "spreading" might create dangers, for instance in the form of developer uncertainty over which of the forks to join.
But again, LLVM 3.9 isn't bad; it's not as if the world couldn't survive comfortably with that.

CallMeLateForSupper • October 2, 2016 8:17 AM

The "Shadow Brokers" just are not getting the buzz - nor money - they think they deserve, and they are not amused.

"Peoples is having interest in free files ... But people is no interest in #EQGRP_Auction.”
“TheShadowBrokers is thinking this is information communication problem.”

(snigger) Le'see.... usurious price; white-hot merch; no refund of non-winning bids; no authenticy guarantee; no fit-for-purpose guarantee. What's not to love?!

‘Shadow Brokers’ Whine That Nobody Is Buying Their Hacked NSA Files
https://motherboard.vice.com/read/shadow-brokers-whine-that-nobody-is-buying-their-hacked-nsa-files

IanashA_titocIh • October 2, 2016 8:56 AM

Peiter C. Zatko, @dotMudge, is associated with http://cyber-itl.org/

"The Cyber Independent Testing Lab (CITL) was organized exclusively for scientific and educational purposes, with the mission of advising software consumers through expert scientific inquiry into software safety." ...

From http://cyber-itl.org/press
A 'Consumer Reports' For Software Vulnerabilities - GCN

Other Links:
http://cyber-itl.org/blog-1/2016/8/12/our-static-analysis-metrics-and-features

Regarding macOS, Windows 10 and Linux (Ubuntu 16.04 LTS):
http://cyber-itl.org/blog-1/2016/8/31/score-distributions-in-osx-win10-and-linux

Regarding macOS:
http://cyber-itl.org/blog-1/2016/9/12/a-closer-look-at-the-osx-continuum

ab praeceptis • October 2, 2016 9:45 AM

r

While I am, of course, pleased that some insight finally seems to be reaching mankind (particularly the obvious insight that it is extremely hard to create safe and reliable code in C and its derivatives and friends), I couldn't help but note that virtually all of the linked "articles" were in the style of "modern western science" (read: lots of blah, lots of "experts" saying this or that, and next to no tangible content).

And in about the only tangible content there was (in the Quanta-whatever "science" thingy) they - surprise! - told BS. They explained that formal verification is done with Coq or Isabelle. That's quite rare (to avoid saying "BS" again); actually, pretty much always, automatic (usually SMT) solvers are used. Funnily, evil corp was even mentioned (in blah style), yet they missed Z3 (a very major solver). Or, more correctly, those auto-solvers are used by the tools that developers use, and usually there is some middle layer in between (like WP, Why, etc.).
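For readers who haven't seen one, the kind of query those automatic solvers discharge is roughly "does a counterexample to this verification condition exist?". A toy sketch in Python, brute-forcing a tiny 8-bit domain instead of asking a real solver like Z3 (which would answer the same question symbolically over bit-vectors); all names here are illustrative:

```python
# A toy illustration of what an automatic solver (Z3, CVC4, ...) checks:
# "is there an input that violates this verification condition?"
# Here we brute-force an 8-bit domain instead of using real SMT bit-vector theory.

def wrap8(v):
    """Reduce v to the signed 8-bit two's-complement range [-128, 127]."""
    return (v + 128) % 256 - 128

def abs8(x):
    """The 'obvious' abs() over 8-bit ints, with C-like wraparound."""
    return wrap8(x if x >= 0 else -x)

# Verification condition: forall x. abs8(x) >= 0
# A solver would answer "sat" and hand back the failing input; we enumerate.
counterexamples = [x for x in range(-128, 128) if abs8(x) < 0]
print(counterexamples)  # the classic INT_MIN overflow: [-128]
```

The point is only the shape of the question: the tools ab praeceptis mentions (WP, Why, etc.) translate program assertions into exactly such conditions and hand them to an SMT solver, which searches for counterexamples without enumerating the domain.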

Nick P • October 2, 2016 10:52 AM

@ ab praeceptis

Almost all major work in formal verification is done in Coq or Isabelle/HOL. That's not myth; it's straight up in the papers and repositories. In hardware, ACL2 dominates, with occasional work done in PVS. Then there are the lightweight formal methods that try to prove less with more automation. The Coq and HOL platforms get most of the serious stuff. The two main guides people recommend to newcomers are also Coq-oriented: Chlipala's Certified Programming with Dependent Types, and Software Foundations.

yikyak • October 2, 2016 10:53 AM

Clive Robinson

Hmmm... I knew the material was old, but I thought it would still be relevant to today's industry. Every 2-3 years? Things change really fast. I tried comparing it to the GA Tech core curriculum; some classes look the same.

IanashA_titocIh • October 2, 2016 11:05 AM

From dotMudge at https://twitter.com/dotMudge:

1) DoD data (cleared for release) shows that, on average, 1/3 of vulns in government systems are in the security software.

2) tl;dr summaries of some of the findings from cyber-itl above

Dukk N. Dodger • October 2, 2016 1:14 PM

This week's project was to tweak the Windows firewall with a third party controller app, making settings and feedback easier.

I was amazed at the number and intensity of outbound connections to MS, Facebook, Twitter, ads, trackers, beacons and the like. There are likely several thousand such connections every day, for everyone. (Note: I never use FB or Twitter, of course, and try to block them at the router, apparently unsuccessfully.)

And I have tightened up security on my W10 computer a great deal; better, I would say, than 90% of normal users have.

The assumption has always been to allow all outbound, because inbound is where the bad stuff happens. But these days it's clear to me that outbound is where it's happening. That's where your personal data, not to mention every electronic utterance, is going, and, if you have been captured by a bot or other malware, it is how the user is dragged into being an attacker.

I have found the Windows firewall to be fairly robust, although making adjustments is fairly tedious. (I think it may actually have been created by COMODO, for one thing.) And if you work it right, a lot of the seemingly mandatory connections to MS can be filtered; for example, hard-coded "allow" rules are overridden by "block" rules.

Clive Robinson • October 2, 2016 1:16 PM

@ yikyak,

I tried comparing it to ga tech core curriculum, some classes look the same.

Some things do not change, like the fundamentals of logic and maths, ADTs etc., but as you get higher up the stack you get into more changeable territory. And that's as true of the hardware as it is of the software.

For instance, back in 1995 Java was still in effect experimental. As it went mainstream many things changed; it was still in flux at the time of the course. Since then, like C++, it's become over-burdened and is less and less popular with programmers (oddly, perhaps, it appears that C is more popular than both Java and C++ and is attracting more interest, as is C#).

Thus you have to consider what each part of a sixteen-year-old course is giving you with respect to a more modern course. If you find the course fits your particular needs better than another, then you'd be better off with it. As I said, you did not really specify your interest in the course or why, which makes evaluating it for you at best uncertain, at worst likely to be largely wrong.

Late F. Supper, Esq. • October 2, 2016 2:19 PM

@CallMeLateForSupper

A million US dollars is usurious? Really? (Doctor Evil, is that you?)

Yes, that is more than most people have in their piggy bank. But no, most people are not prospective winning bidders.

Sure, whether anybody at all is a winning bidder is sort of up to Shadow Brokers. But, again: if you don't have that kind of money to risk, you aren't the audience.

Did you read the actual message that Vice article is (loosely, ostensibly) based on? Or did you read only the Vice article?

Critical reading is not mandatory, of course. One can read the news, casually, for shits and giggles.

But when you do, it bears repeating, you will be getting mostly the former.

Czerno • October 2, 2016 5:35 PM

@Dukk N. Dodger: are you at liberty to tell a bit more about that 3rd-party controller for the Windows FW? Is it something we can try? Free or paid, commercial or open? Does it work on earlier versions of Windows, including Seven? And most importantly, does it allow tweaking rules with better granularity than the built-in interface allows? I'm thinking for instance of allowing/rejecting/controlling raw IP traffic per /protocol number/, a feat which I haven't found how to accomplish yet.

Dukk N. Dodger • October 2, 2016 6:22 PM

Czerno

I don't want to name the control I use, because it costs money, because it will seem like a commercial, and, most importantly, because I suspect part of the package was a couple of very bad unwanted popups via IPv6, which of course I now block.


But, the Windows firewall can block any protocol by number as follows:

From "Run" enter firewall.cpl

1, Advanced Settings / Inbound Rules (or Outbound Rules) / New Rule (in the right column).
2, Select the Custom radio button / Next / Next.
3, From the Protocol Type drop-down, select the specified protocol from the list, or choose "Custom" and enter the protocol number / Next / Next.
4, Select "Block the connection" / Next.
5, Enter a name / Finish.

That's why I said creating rules is a bit tedious.
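The GUI steps above also have a command-line equivalent via `netsh advfirewall`. A hedged sketch: the Python below only builds the command string (the rule name and protocol 47/GRE are illustrative choices of mine; actually adding the rule requires running the command from an elevated prompt on Windows):

```python
# Build (but don't run) the netsh command equivalent to the GUI steps above:
# block a raw IP protocol by its protocol number.

def netsh_block_protocol(rule_name: str, protocol_number: int,
                         direction: str = "out") -> str:
    """Return a 'netsh advfirewall' command that blocks one IP protocol."""
    return ("netsh advfirewall firewall add rule "
            f'name="{rule_name}" dir={direction} '
            f"action=block protocol={protocol_number}")

# Example: protocol 47 is GRE (an illustrative choice only).
cmd = netsh_block_protocol("Block GRE outbound", 47)
print(cmd)
```

This also answers Czerno's per-protocol-number question above: `protocol=` accepts a raw IP protocol number as well as the named protocols.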

Roman • October 2, 2016 7:28 PM

Police body cameras cut complaints by over 90 percent:

http://www.telegraph.co.uk/news/2016/09/29/police-body-cameras-lead-to-90pc-drop-in-public-complaints-again/

https://www.reddit.com/r/news/comments/552h4k/uk_study_police_body_cameras_cut_complaints_by_93/

Surveillance (or in this case, accountability) changes behavior. From the article:

Commenting on the report, Dr Ariel said: "We couldn't analyse exactly what happened in every police incident involved, but we think the change has more to do with officers' behaviour.

"They are the ones well-trained to deal with these situations and know how to behave, so now there is a tool to make sure they are doing their job.

"But we think the cameras can also reduce frivolous complaints and false allegations that are made even when officers have done nothing wrong. In the study we saw that all complaints went down - in some areas they went down to zero."

yikyak • October 2, 2016 7:41 PM

Clive Robinson

I see your point. If I said I would like to get a general knowledge of the CS field, that isn't specific enough either. I am interested in networks, malware, reverse engineering and something that rhymes with slacking.

Uknow Whoo • October 2, 2016 8:47 PM

@Daniel

The FBI and NSA are so secretive they wouldn't tell you what time it is, yet this seemingly embarrassing revelation appears.

hmmmm.

My thought is the defense lobby is dusting off the Russians as the perennial bogeyman to keep the world wars going, tax dollars flowing to them and away from us. Islamic terrorism isn't scary enough anymore.

Be afraid of the Russian Bear children, and taxpayers. Boo!

ab praeceptis • October 2, 2016 11:09 PM

Nick P

"Verification" - You are right - but not for software verif. which was the matter at hand. That, verif. in software dev. was what it was about. And there pretty much everything is about automatic solvers. In fact, most verif. systems do not even offer an interface to Isabelle or Coq and if they do it's usually cumbersome.
Same btw. for spec/model; hardly anywhere Coq or Isabelle, pretty everything done by autosolvers.

That only changes in pre-design, concept work, i.e. where the boundaries between math and compsci get blurred. However, I have rarely seen that at all. I guess that in 99% of all software dev. one must be very happy if some formal spec/model/verif is done at all.

I'm *working* with those tools, and what I typically find is CVC4, Z3, etc., usually with some middle layer between the tool I use and the solver.

Btw: as LLVM came up recently here, it might be noteworthy that there are some projects where the compiler front-end creates LLVM intermediate code as well as Z3 "instructions", which, looking closer, is almost "natural", as both are SSA-based. For the time being those projects are usually functionally centric, but anyway, I find it a very attractive approach that could be used in non-functional settings too, albeit with some kind of formal annotations (ACSL-like or similar).

ab praeceptis • October 2, 2016 11:19 PM

yikyak

If I said I would like to get a general knowledge of the CS field, that isn't specific enough either. I am interested in networks, malware, reverse engineering and something that rhymes with slacking.

Besides the fact that those are vastly different areas, I think Clive Robinson has it right. Looking closer, one finds that pretty much everything cool and supermodern and bleeding-edge is a new version or an extension of decades-old knowledge.

All of the above-mentioned areas will strongly profit from your learning the basics. My suggestion would be to first polish up your math basics and then thoroughly work through the classical basics of compsci. The good news: you will find the same basic blocks over and over again in the different areas.

If you absolutely want to start from the other end (and see first results quickly), my advice would be to learn a "friendly" (more or less) functional language with *good support* (community, books, etc.). OCaml might be a candidate.

yikyak • October 3, 2016 1:44 AM

ab praeceptis

The classical basics; that's what I intended to start with. From there I have an idea where to go. The basics are the basics, after all.

Clive Robinson • October 3, 2016 2:01 AM

@ yikyak,

I am interested in networks, malware, reverse engineering and something that rhymes with slacking.

In other words, you "really want to understand how things work from the lowest layers up, not superficially from the top layers down".

Well, you find out more about how something works when it's broken than when it's working fine. Unfortunately, the lower down the stack something breaks, the harder it is to diagnose and then fix.

Testing techniques are key to this. If you can develop that mindset of curiosity, tinkering at the edges to characterize how something works, or more importantly how it does not, you will be able to work at any level with the appropriate domain knowledge.

Getting an education in real testing techniques is difficult these days; fifty years ago it was easy. The reason for this is "reliability": we design systems these days in such a way that they do not fail in degrees but totally, and very infrequently. Back when the main electronic component was the valve, with as little as a thousand hours of life, "repair" was a fact of life, and fault diagnosis was thus the core part of training. As part of that you were taught how to be a "toolmaker" almost from the ground up.

Most graduate CS courses don't teach you that for reasons I've gone into in the past, and the older Technician / Craft Skills courses died out in the late 1980's. Oddly perhaps you would learn more from the practical / lab side of non CS science and engineering courses.

If you are going down the Do It Yourself side education I would recomend an interest in hobby electronics from a very early age with the aim of getting a full ham radio ticket in your early teens. It gives you the confidence, and more importantly "The Licence" to tinker without fear of LEO's and other "Guard Labour". With their habit of sticking their "I'm God Delusions" in your direction that the likes of even photographers and artists have been subjected to in the past decade and a half. Likewise learning mechanical engineering skills which you can get from an early age with repairing and making a push bike or model trains and aircraft.

The thing about testing techniques is that they are a very transferable and, these days, rare skill. Often your only way in is through "the analog world", where "engineering slop" teaches you about "edge cases". It's no coincidence that the first people to call themselves hackers came out of a model railway club, where the fun was to be had not from watching the trains go around, but from making the signaling and control systems that made them go around.

Some of the leading-edge control work is done in "drone racing", where developing the chops just to get a quadcopter flight control system working is beyond many CS graduates' abilities. Likewise model rocketry and various "power vehicle" sports, and, if you are very lucky, designing space vehicles like satellites (I have a 1U CubeSat sitting on my workbench at home, a work-in-progress prototype to be used to teach teenagers how to build payloads, and how to build tracking stations etc. for the CubeSats already up there).

The fun of the original hacker was to get the best out of things and dare to sit on the bleeding edge, thumbing your nose at Murphy and his laws, even occasionally endangering your own life to get that bit of extra performance out of a system. That fun comes in part from knowing that in the real world you only die once, so learning how to push the envelope and come back is important. It's what makes lesser individuals climb mountains and race power vehicles, on technology hackers have designed to make things that bit safer for others following in their footsteps.

Thus the passion to make things from odd bits that might have come from broken or failed systems, and so to create from destruction and failure. To do that you have to have tools you cannot buy; you have to make them yourself from lesser tools, knowing what their strengths and, above all, their weaknesses are.

It's something universities do not teach in classrooms but in labs, where safety clothing is not a fashion accessory but a necessity, because breakages are expected as part of the learning process. Sadly many unis don't have labs any more, just cyber-cafés without the refreshments; most CS students are more at risk in the car park than they are doing their course work.

However, if you talk to old-school radio engineers, listen when they tell you why a wooden stool is better than a chair, the importance of putting your feet on the crossbar and keeping one hand in your lab coat pocket, and of keeping your wedding ring, watch and other jewelry out of the work area; oh, and not to have trendy hair styles. It's just part of staying alive on the job when working with high voltages, currents and powers. Then there is not walking close to antennas, where your guts could behave like that bag of curry sauce you popped in the microwave.

You get similar stories in the hard science and hard engineering labs, where learning is tactile and tangible. Only now chuck in such modern-day joys as ionizing radiation, poisons that will kill you in the next week or so, invisible laser beams that can and do cut through metal, plastic and human flesh with ease, and flying sparks, grit and other things that will take out an eye or your whole head. Modern-day alchemy, where survival is predicated on real knowledge, good judgment, and occasionally good luck, or that hard-to-acquire "sixth sense" that Bruce calls "thinking hinky".

When you have acquired that sixth sense, computer security becomes a domain you can prosper in rather than drown.

Clive RobinsonOctober 3, 2016 2:20 AM

@ Daniel,

The FBI is investigating the NSA over the Shadow Brokers leak.

No worries there, the purpose is to kick it into the long grass by "turning a blind eye". Such behaviour was once called "Doing a Nelson", from "I see no ships"; these days it's a case of "I see no wrongdoing", which we should call "Doing a Clinton" or "Doing a Comey", or even "just being a C"...

65535October 3, 2016 5:26 AM

@ Clive Robinson

You mentioned the 1U CubeSat, which caused me to do an internet search. To my surprise it is even in Wikipedia! It is an interesting and tangible subject.

See: CubeSat

https://en.wikipedia.org/wiki/CubeSat

The frame/structures seem low cost [if this is actually what you are talking about]:

[Picture of small square plastic frame for 59 USD]

"1U cube sat in White Strong & Flexible, Width Height Depth 10 cm 10 cm 11.604 cm "


"This model is 3D Printed in White Strong & Flexible: White nylon plastic with a matte finish and slight grainy feel. Last updated on 03/09/2012"

See: shapeways[dot]com

http://www.shapeways.com/product/CGULTQXBL/1u-cube-sat?li=gmerchant&utm_source=google&utm_medium=cpc&gclid=CO7B-puuvs8CFQ9xfgod-pwCWg

[Total cost CubeSat not including Launch costs]

“…a basic 1U CubeSat can cost about $50,000 to construct…” -Wikipedia

“..for most CubeSat forms, the range and available power is limited to about 2W for its communications antennae. They can use radio-communication systems in the VHF, UHF, L-, S-, C- and X-band. For UHF/VHF transmissions, a single helical antenna or four monopole antennae are deployed by a spring-loaded mechanism…” –Wikipedia

https://en.wikipedia.org/wiki/CubeSat#Telecommunications

With enough of these CubeSats in low Earth orbit, a medium-size organization could construct a telephonic infrastructure. And with the correct encryption and protocols, it could open a semi-secure method of satellite communication for journalists and others.

[Launch costs somewhat costly but not out of reach of some individuals or small organizations]

“…launch prices have been about $100,000 per unit, but newer operators are offering lower pricing.”

https://en.wikipedia.org/wiki/CubeSat#Costs

[But you have to hitch a ride on a real satellite launch]

"NASA has launched more than 30 CubeSats over the last several years, and as of 2015, it has a backlog of more than 50 awaiting launch.[86] No matter how inexpensive or versatile CubeSats may be, they must hitch rides as secondary payload on large rockets launching much larger spacecraft, at prices starting around $100,000. Since CubeSats are deployed by P-PODs and similar deployment systems, they can be integrated and launched into virtually any launch vehicle. However, some launch service providers refuse to launch CubeSats, whether on all launches or only on specific launches, two examples are ILS and Sea Launch...SpaceX and Japan Manned Space Systems Corporation (JAMSS) are two recent companies that offer commercial launch services for CubeSats as secondary payload, but a launch backlog still exists... India's ISRO has been commercially launching foreign CubeSats since 2009 as secondary payloads." - Wikipedia

https://en.wikipedia.org/wiki/CubeSat#Launch_and_deployment

This could be an interesting project for those who want real life experience in satellite communications.

GrauhutOctober 3, 2016 9:26 AM

Hmmm... Since when does a standard Google Analytics account give you real IP address lists, and not only statistical reports on the traffic they measured?

Did Google work knowingly for the DoD in these cases?


DoD production of fake al Qaeda propaganda films:

"U.S. marines would take the CDs on patrol and drop them in the chaos when they raided targets. Wells said: “If they’re raiding a house and they’re going to make a mess of it looking for stuff anyway, they’d just drop an odd CD there.”

The CDs were set up to use Real Player, a popular media streaming application which connects to the internet to run. Wells explained how the team embedded a code into the CDs which linked to a Google Analytics account, giving a list of IP addresses where the CDs had been played.

The tracking account had a very restricted circulation list, according to Wells: The data went to him, a senior member of the Bell Pottinger management team, and one of the U.S. military commanders."


http://www.thedailybeast.com/articles/2016/10/01/pentagon-paid-for-fake-al-qaeda-videos.html

CzernoOctober 3, 2016 9:28 AM

According to one Tor developer (Yawning Angel), tcpcrypt is crap - I read it yesterday, coincidentally, while researching something else on the Tor Stack Exchange Q&A service.
Lemme look it up again... here it is:
Quoting :
TCPCrypt is, and I can't stress this enough, absolute trash.
It provides no protection against an active MITM attacker, it's not authenticated.
The remote party too must support TCPCrypt, otherwise it won't work.
Any remote party could just use TLS or an onion service and gain greater protection than that offered by TCPCrypt.
Quoted.

CzernoOctober 3, 2016 9:35 AM

My bad day! I also f^cked up the Tor developer's identity; it was "Canonizing ironize", not Yawning...

GrauhutOctober 3, 2016 10:45 AM

@Czerno: Tcpcrypt is great because of its transparent simplicity. And it's not either/or; it's an additional encryption layer.

And when it comes to MitMs, nobody can play that role on all TCP connections. Just imagine for a moment what it would take to try... :)

IanashA_titocIhOctober 3, 2016 12:30 PM

Corrected Posting from above (bad twitter link):

From dotMudge at https://twitter.com/dotMudge :

1) DoD data (cleared for release) shows on average 1/3 of vulns in government systems are in the security software.

2) tl;dr summaries of some of the findings from cyber-itl above

CzernoOctober 3, 2016 12:47 PM

@Grauhut: re TCPcrypt.

OK, OK, I'm no expert, see; I was just referring you all to "Canonizing ironize"'s expert (though often acid) PoV.

"Tcpcrypt is great, because of its transparent simplicity."

- Let it be! However, it's just another "opportunistic" encryption scheme, isn't it? Supported by a microscopic fraction of hosts on the general internet, right? So while TCPcrypt could be a good idea in certain applications, it'll mainly be between co-opted, cooperating parties. If I'm wrong, correct me, please!

"And its not either or, its an additional encryption layer."

- Granted !

Sancho_POctober 3, 2016 2:51 PM

@Daniel (FBI [lol] investigating the NSA over Shadow Brokers leak)

Indeed it could be funny if it weren't the truth; they want to learn how to.

The NSA “unintentionally” dropped baits, and the arch-enemy / evil Putin didn’t swallow any of them.
Three (of several upsetting) possibilities:
- The enemy got better tools and realized it’s small-criminal crapware.
- The enemy used it silently to study where the NSA stands, improving mitigation.
- The enemy is honest and doesn’t actively want to go cyber.

Leaks happen on both sides; the NSA should be the first to know.

Dropping baits reminds me of the Pentagon “unintentionally” dropping CDs with professionally made ISIS propaganda and advertisement (of course made by a UK company, because it might be unlawful to do so in the US).
Then they watched the CDs’ distribution through Google Analytics, the Real Player popping up wherever the material was used to recruit and radicalize idiots from all over the world, including western teenagers.
Rest assured, they had them all on their list: classified intelligence, very valuable for the victims of terror and the parents of lost children!

How much did it cost? Over half a billion dollars?
“… if it saved one life it [was] a good thing to do.”
Seriously?

See:
http://www.thedailybeast.com/articles/2016/10/01/pentagon-paid-for-fake-al-qaeda-videos.html

Sancho_POctober 3, 2016 3:08 PM

@Grauhut: Sorry, missed your comment re Pentagon CDs and Google Analytics …
Of course that has to be a special agreement.

GrauhutOctober 3, 2016 3:21 PM

@Czerno: Whatever it ends up being, we need some kind of automatically applied, opportunistic, transparent encryption that doesn't need software changes or user configuration / updates. Just a driver rolled out with open-source OSes, until the pressure on commercial OS vendors gets high enough to force them to support it too.

As soon as Android for instance supports it the rest will follow.

Tcpcrypt is imho doing the right thing: first get your RFC, then do the PR work.

GrauhutOctober 3, 2016 3:32 PM

@Sancho_P: "Pentagon CDs and Google Analytics. Of course that has to be a special agreement."

And such an agreement, to be allowed to hide behind Google Analytics, would mean willing cooperation far beyond FISA / national security letter level...

Interesting. :)

GrauhutOctober 3, 2016 6:27 PM

@Revolving Door Policy: "Have you been hiding under a rock?"

Do you understand the difference between a conspiracy theory and a real case? :)

"...or a new communications tool for espionage or war. Somebody has to build and manage those projects" is conspiracy-theory level.

But if someone says they had a weaponized version of Google Analytics then this is the real deal! ;)

GrauhutOctober 3, 2016 7:00 PM

P.S.: Hmmmm, revolving doors... They should have a new TLD for these jobs: .goov

Enables revolving without changing domain names! :)

Revolving Door Policy October 3, 2016 8:57 PM

@Grauhut

Or just use their legal names as a TLD ;)

https://wikileaks.org/clinton-emails/emailid/12166

The Department of State is not the DoD, and WikiLeaks itself could be a conspiracy theory, but where do you draw the line?

I see no evidence that The Daily Beast has interviewed anyone involved with software or network engineering. Remember, this all happened at a time when Google was using unencrypted links between datacenters. Or was that a conspiracy too?

ab praeceptisOctober 3, 2016 10:37 PM

Czerno, Grauhut

For a start, I do not see the Tor developers in a position to belittle tcpcrypt. I also strongly dislike that Tor is largely based on trust outsourcing.

I assume their basis for calling tcpcrypt "crap" is the fact that tcpcrypt brings OpenSSL into the kernel (though there is a user-space implementation, too) and also reuses quite some stuff from OpenSSL. Moreover, they also created OpenSSL versions with their tcpcrypt stuff in them.

I can understand to a degree that for some this spells "crap". But while I agree that anything that touches, let alone builds on, (parts of) OpenSSL is to be considered gravely tainted, I also see that it's definitely not idiots who are behind tcpcrypt. Those people have lots of know-how, and their thinking is certainly not below Tor's.

That said, who cares anyway? tcpcrypt's usage seems to be not widely spread and rather modest. There is a variety of candidate reasons to explain that, ranging from user ignorance ("we have SSL/TLS, so we are secure anyway"), to being OpenSSL-tainted, to network or server admins disliking the idea of fiddling with network packet headers (and fearing problems, e.g. with firewalls).

All in all, I wouldn't generally recommend using tcpcrypt (though there certainly are cases where it's an attractive option), but I would definitely recommend reading a paper or two on tcpcrypt. Quite some knowledgeable and smart thinking in there.

Wesley ParishOctober 4, 2016 2:18 AM

Has anyone yet read the article
Your Body is a Wonderland–For Transmitting Passwords
https://www.onthewire.io/your-body-is-a-wonderland-for-transmitting-passwords/

What surprises me is the method: when I'd read about IBM's body-area networks in the late 90s, I'd envisaged them as using the body's own electric field, that is, using the biochemical conduction of the nervous and muscular systems. This experiment used the body as a conductor for

electromagnetic signals below 10 MHz

It opens up all sorts of interesting possibilities, and a good many of them are SFnal.

GrauhutOctober 4, 2016 3:15 AM

@rdp: "I see no evidence that The Daily Beast has interviewed anyone involved with software or network engineering."

Min. 00:04:00 https://vimeo.com/183694713

https://www.thebureauinvestigates.com/2016/10/02/fake-news-and-false-flags-how-the-pentagon-paid-a-british-pr-firm-500m-for-top-secret-iraq-propaganda/

The journalists are working for The Bureau of Investigative Journalism

https://en.wikipedia.org/wiki/Bureau_of_Investigative_Journalism

The Bureau of Investigative Journalism
PO BOX 73125
London
EC1P 1GJ

Phone: 07969 466285
Email: info@thebureauinvestigates.com

rOctober 4, 2016 7:50 AM

What's the difference between a public recommendation that's opaquely adopted without review, maybe through a covert employee's (or employees') subterfuge, and an external red-team event infiltrating your source repository or one of your developers' fingers?

Red, blue, green, yellow. It's all the same game.

You need further proof? We could convict them in a court of law (if there was one) that hadn't been scared into using Tor to pick up the Israeli Czechs (hint: not Russian, I'd hate to roll another 6).

GothOctober 4, 2016 9:06 AM

@Roman 7:28: right, cameras stop crime by police. That's why cops turn them off when they're going to murder or torture citizens. So the best cameras are the ones supplied by the public. Police try to disable those too when they want to commit crimes:

https://www.aclu.org/blog/free-future/police-accidentally-record-themselves-conspiring-fabricate-criminal-charges-against

In these cases the only security comes from the criminal cops being too stupid to destroy the evidence. Ultimately, our only protection from criminal cops is the Wonderlic upper cutoff score. Idiot cops are safer because they can't get away with it so much.

Freezing_in_BrazilOctober 4, 2016 1:56 PM

@ Late F. Supper, Esq.

A million US dollars is usurious? Really? (Doctor Evil, is that you?)

That would be a million Bitcoins [do the conversion].

CuriousOctober 4, 2016 2:49 PM

Apparently Yahoo has/had implemented a scanning tool for US intelligence.

CuriousOctober 4, 2016 2:55 PM

Off topic:

I wonder if FBI Director Comey made sure that everybody inside the scope of the investigation of Hillary's email scandal wouldn't feel a need to snitch, so to speak. I am thinking that maybe if one started talking, then everybody would be in trouble.

Clive RobinsonOctober 4, 2016 6:33 PM

It would appear Signal got subpoenaed for an Eastern District of Virginia grand jury, with gag order etc. But with the help of the ACLU they have been able to say something:

https://whispersystems.org/bigbrother/eastern-virginia-grand-jury/

I would expect to see more of this happening, and more fighting of gag orders.

Which means, if the fights are successful, the Feds etc. are going to be pushing for different ways to gag, which might well include new legislation. Recent history suggests that US politicos will queue up to vote for any and all such legislation.

Clive RobinsonOctober 4, 2016 8:10 PM

And one from the "are they still doing that" list,

From Dmitry Vyukov of Google we have CVE-2016-7117, a use-after-free remote code execution vulnerability in the Linux kernel networking subsystem (in the recvmmsg exit path).

http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=34b88a68f26a75e4fded796f1a49c40f82234b7d

It appears all Linux and Android kernels have it, but apparently there are no known exploits out in the wild so far (probably not long before there are ;)
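
For those who haven't met the bug class, here is a minimal userland sketch in C of what "use after free" looks like, together with the null-after-free idiom that turns silent heap corruption into a detectable error. This is purely illustrative; it is not the actual recvmmsg code.

```c
/* Illustrative sketch of the use-after-free bug class behind flaws
   like this one -- not the actual kernel code. The bug shape: an
   error path frees an object, then a later exit path touches it. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct msg {
    char  *buf;
    size_t len;
};

/* The buggy shape, shown only as comments, since actually running it
   is undefined behaviour:
       free(m->buf);               // error path releases the buffer
       ...
       memset(m->buf, 0, m->len);  // exit path uses it again: UAF
   The defensive idiom is to NULL the pointer at the free site so any
   later use fails loudly instead of corrupting the heap. */
int msg_release(struct msg *m) {
    free(m->buf);
    m->buf = NULL;   /* poison the dangling pointer */
    m->len = 0;
    return 0;
}

int msg_clear(struct msg *m) {
    if (m->buf == NULL)   /* use-after-release is now detectable */
        return -1;
    memset(m->buf, 0, m->len);
    return 0;
}
```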

Revolving Door PolicyOctober 4, 2016 8:29 PM

@Grauhut

Thanks, I skimmed right past the byline. Regardless, it's not any more indicative of a close relationship than any of the other circumstantial evidence. All we have here is an interview with a video editor of limited knowledge working in a classified military installation:

"The tracking account had a very restricted circulation list, according to Wells: The data went to him, a senior member of the Bell Pottinger management team, and one of the U.S. military commanders."

"It is understood the key principals who were involved in this unit deny any involvement with tracking software as described by Wells."

See 2:47 in the video you linked. I can forgive a professional video editor for not understanding the difference between VCDs and RealPlayer (mainly amateur formats), but it doesn't engender too much confidence in his knowledge of web analytics. The "white screen" may even imply something else.

I have no doubt that Google would/does provide that sort of assistance, but this piece is nothing more than we already have.

ab praeceptisOctober 4, 2016 10:47 PM

Clive Robinson

"Use after free" - So much for the holy 1000 eyes dogma ...

I happened to use that a while ago. It's (afaik) the only IPC mechanism that allows passing (socket/file/...) handles from one process to another, so it is very relevant in terms of some security mechanisms. Example: proc A serves as the gate guard, and proc B takes over connections and processes them once A has done some checks. Assuming, for instance, that B for whatever reason needs to run as root, it's quite useful and important to have that IPC mechanism available.

But it's rather complicated, a PITA to use, and quite error-prone. It's very easy for a developer to get things wrong in userland; hence it's relatively rarely used, afaik, which might well be the reason that UAF survived for so long.
Note also that, because it's complicated and a PITA to use, chances are that developers fell over that bug many times already but assumed it was *their* error or wrong usage.
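
For readers who haven't met it, the handle-passing mechanism described here is SCM_RIGHTS ancillary data over an AF_UNIX socket: the kernel duplicates the descriptor into the receiving process. A bare-bones sketch, with error handling trimmed down, so treat it as an illustration rather than production code:

```c
/* Passing an open file descriptor between processes over an AF_UNIX
   socket using sendmsg()/recvmsg() with SCM_RIGHTS ancillary data. */
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send fd over the Unix socket 'sock'; returns 0 on success. */
int send_fd(int sock, int fd) {
    char dummy = 'x';                       /* must send >= 1 byte */
    struct iovec iov = { &dummy, 1 };
    union {                                 /* aligned cmsg buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } ctrl;
    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof ctrl.buf;

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;             /* "I am passing fds" */
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a descriptor; returns the new fd, or -1 on error. */
int recv_fd(int sock) {
    char dummy;
    struct iovec iov = { &dummy, 1 };
    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } ctrl;
    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof ctrl.buf;

    if (recvmsg(sock, &msg, 0) != 1)
        return -1;
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    if (!cm || cm->cmsg_level != SOL_SOCKET || cm->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd;
    memcpy(&fd, CMSG_DATA(cm), sizeof(int));
    return fd;    /* kernel duplicated it into this process */
}
```

In the gate-guard pattern above, proc A would call send_fd() on the accepted connection's descriptor after its checks, and proc B would call recv_fd() and take over the connection.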

GrauhutOctober 4, 2016 11:15 PM

@rdp: "...not understanding the difference between VCDs and RealPlayer (mainly amateur formats)"

Once upon a time... Real was a streaming market leader! :)

I think Wells used the term VCD not in the sense of the antique Philips CD format, but as a category: "a CD that contains video".


And it doesn't surprise me that they used RealMedia. Very flexible container format! ;)

Just have a look at W32/Realor.worm

http://www.mcafee.com/threat-intelligence/malware/default.aspx?id=140899

Clive RobinsonOctober 5, 2016 4:14 AM

@ ab praeceptis,

So much for the holy 1000 eyes dogma...

You can also add an "out-of-bounds" zero day in the OpenJPEG library --written in C-- to that... Researchers from Cisco revealed it last Friday[1].

Apparently it affects all sorts of things, one of which is PDF files... Which, you may remember, a commenter here was saying just the other day was one of only three file types her organisation allowed to be downloaded, on penalty of instant dismissal for any others[2].

Which, if correctly stated by her, kind of shows the limited reasoning ability of the person drawing up that policy... Such "by the book" bureaucracy usually has the "opposite" of the intended outcome, as users just repeat the mantra of "Only PDFs..." rather than think about where the file originates from... Thus it's not just users who have limited knowledge / foresight, but those who manage them as well.

[1] The flaw is in the JPEG 2000 image file format parser implemented in the OpenJPEG library (openjp2 version 2.1.1) and could allow an out-of-bounds heap write to occur, resulting in heap corruption and arbitrary code execution. The find is credited to Aleksander Nikolic from the Cisco Talos security team.

[2] https://www.schneier.com/blog/archives/2016/10/security_design.html#c6735546

Clive RobinsonOctober 5, 2016 8:23 PM

@ yikyak,

What books and resources do you recommend?

It rather depends on what you want to do. A big area of "hands on" security investigation is "reverse engineering", which requires you, as a minimum, to get "down and dirty" with assembler in its various forms.

There are two basic hardware architectures of CPU chip, Harvard and von Neumann; however, a modern design is usually Harvard inside, wrapped in von Neumann I/O, as that has advantages in general-purpose computers and a lower package pin count. With "embedded" systems, though, where the external pins are "port pins" not buses, the Harvard architecture has advantages, until you get up to higher-level languages like C or above, where "abstraction" really gets in the way.

Also affecting the hardware architecture, but at a higher level of the stack, is the instruction set format. The two basic types are Complex Instruction Set Computers (CISC) and Reduced Instruction Set Computers (RISC); most non-legacy architectures are RISC. CISC architectures have always been a bit of a kludge and came about to try to get around the likes of memory bus limitations.

The three main general computer chips you will find are ARM, MIPS and IA x86. Of these, the first two are straight RISC CPUs and currently have expanding market share, which will only increase with IoT. Intel's IA x86 is suffering at the desktop end as ARM-based pads and ARM/MIPS smart phones replace a lot of the traditional desktop market. Also, at the server end, Intel is now getting competition from the likes of ARM, whilst MIPS-core-based systems still appear in the "super computer" lists, due in part to their graphics technology history. Of the three, and depending on your viewpoint, MIPS has the easiest instruction set to get to grips with.

However, Intel is looking for a "Viagra solution" to its aging CISC legacy issues and the "heat death" problems they cause, and is thus looking to add Field Programmable Gate Arrays (FPGAs) to its CPUs, initially as a hardware co-chip in package. Thus you also need to consider logic programming languages. I learnt the hard way, with "wire wrap programming", Register Transfer Level (RTL) design, which uses the notions of synchronous (storage/state/register) logic and combinatorial logic and describes the flow of information from register to register. However, RTL is not the lowest form of logic design; the combinatorial aspect uses gate-level design, which is based on fundamental Boolean logic and truth tables, and you really need to get to grips with it. There are various methodologies at this low level, ABEL and PALASM being two that were directed at Programmable Array Logic (PAL), which formed a step between the use of standard logic parts (74xx TTL and 4xxx CMOS) and later gate arrays. ABEL can still be found in use, as it was not particularly device specific (you can, however, find PALASM's original Fortran source code on the Internet if you hunt around, and it's worth investigating for the gate optimisation methods).
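
To give a feel for gate-level design from Boolean logic and truth tables, here is a toy illustration in C: building XOR purely from NAND gates, the way a PAL or gate-array methodology would compose it. Purely illustrative, of course; real gate-level work is done in a hardware description language.

```c
/* Toy gate-level design in C: every combinational function can be
   built from NAND alone. Here, XOR from four NAND gates, exactly as
   you would wire it in a PAL or gate array. Illustrative only. */
#include <assert.h>

static unsigned nand(unsigned a, unsigned b) {
    return !(a & b);          /* the universal gate */
}

unsigned xor_from_nand(unsigned a, unsigned b) {
    unsigned m = nand(a, b);
    return nand(nand(a, m), nand(b, m));
}

/* Exhaustive check against the XOR truth table:
   a b | out
   0 0 |  0
   0 1 |  1
   1 0 |  1
   1 1 |  0  */
```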

Due to "the need for speed" in design etc., two Hardware (logic) Description Languages (HDLs) exist, VHDL and Verilog. You will find many discussions on the Internet as to which is best, but they are almost always application slanted (VHDL for FPGA and Verilog for ASIC). My advice, until you really know what you are doing at the logic level, is to give VHDL a miss. This is because to be good at logic programming you really need to develop a different outlook from most sequential languages, which VHDL emulates (Ada).

At a slightly higher level up the computing stack, you will need a good understanding of the "bag of bits" containers for various data types and the structures they are used in, which is covered by Abstract Data Types, as well as those used for storage etc. For some reason ADTs tend to get forgotten in CS courses, with their high-level language orientation. The reason you need to know them is simple: all computers only understand "positive integers", from bits up to the native word size. The likes of "signed integers", "fixed point" and "floating point" are tricks using positive integers, and such tricks have consequences you need to be aware of, especially as some DSP techniques, such as saturating integers that don't wrap around, start making their way into more general-purpose CPU cores.
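
The "signed is a trick on positive integers" point can be made concrete in a few lines of C: two's complement just reinterprets the same bit pattern, and a DSP-style saturating add clamps instead of wrapping. An illustrative sketch:

```c
/* "Signed" is a trick on positive integers: two's complement simply
   reinterprets the same bits. Saturating arithmetic, as found in DSP
   instruction sets, clamps instead of wrapping. Illustrative sketch. */
#include <assert.h>
#include <stdint.h>

/* The 8-bit pattern 0xFF is 255 unsigned and -1 signed: same bits.
   Written portably rather than via an implementation-defined cast. */
int as_signed8(uint8_t u) {
    return u > 127 ? (int)u - 256 : (int)u;
}

/* Saturating unsigned 8-bit add: 250 + 10 -> 255, not the
   wrapped-around 4 that plain modular arithmetic would give. */
uint8_t sat_add_u8(uint8_t a, uint8_t b) {
    unsigned s = (unsigned)a + b;
    return s > 255 ? 255 : (uint8_t)s;
}
```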

Thus there is a large "abstraction gap" between the low-level assembler languages etc. and the bottom end of higher-level programming languages. The reasons for this are what are perceived as the twin evils of "platform independence" and "abstraction" conversion. Going from the general to the specific involves many concessions and attendant kludges that few get taught. And it's this gap area where many vulnerabilities have their origins or roots.

One of the first high-level languages you come to as you go up the computing stack, across the abstraction gap, much as it is derided here and in other places, is C. However, there is a quite valid counter-culture of "If you don't know C you don't know nuffing", because it's almost the standard by which, and in which, other programming languages and libraries are written. However, when looking at C-like languages, do yourself a favour and give C++ a miss; it is an evolutionary cul-de-sac which is starting to show signs of "environment extinction", and its resulting object code can be reverse engineered easily without knowing its burdensome features, syntactic obscurities and downright oddities of "objectism fetish". Java is another language you don't need to know in great detail for RE work, because of its middle-ground bytecode interpreter, which you should know. Though Google's problematic Android Java compiles down to native base platform code, which brings you back to knowing assembler in depth.

Whilst a passing knowledge of Lisp and other higher-level languages gives breadth of view, RE does not need you to know them, just to recognise how the compilers produce their object code. The same is true from that point on up the computing stack: the likes of functional and logic programming and "formal methods" translate to nothing more than "coding style" when viewed from the platform-specific side of the abstraction gap.

Where you also need a working knowledge is in the fundamentals of mathematics, with "logic and sets" and some of the more interesting branches, to understand crypto in its logical and arithmetical forms.

Oddly perhaps, you will find most of what you need to know at the foundation level in not much more than ten graduate-level books.

But there is one set of books I find I use rather more than others: Donald Knuth's "The Art of Computer Programming" covers much of what you will need to know at the algorithm and lower levels (though look at MMIX rather than the original MIX). It is --all Bill Gates comments aside-- a quite readable series of books for those who aim to be more than language-specific code cutters.

The one area I find not well covered by readable books is "memory usage", which falls in the abstraction gap. There are two areas there that many consider "black arts", allocation and garbage collection, both of which really require an in-depth knowledge of pointers, usually the least certain part of most programmers' --and many authors'-- knowledge. Therefore the subject tends to get treated with a "Here be Dragons" mentality, involving the invocation of "spells and charms" and considerable hand waving. The thing is, it's not the pointers that are the problem; it's how higher-level languages abstract them that is the real problem. I'm known to tell people to take a bit of responsibility and "cast to void" or use byte pointers. This is especially true for RE, where pointers are as they should be, not overly complicated by "simplifying abstraction". Also, if you are doing embedded system programming in C and higher, those pointer abstractions entail a huge amount of hidden pointer math, which needlessly swallows large amounts of resources you can really do without (including, in CISC CPUs, the silicon real estate of instruction decode, with its attendant heat-death and high-power issues).
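
The byte-pointer point is easy to demonstrate in a few lines of C: viewing an object through an unsigned char* strips away the abstraction and shows the raw bytes, which is exactly the view you get when reverse engineering, while typed pointer arithmetic quietly scales by the element size. An illustrative sketch:

```c
/* Byte pointers strip away the "simplifying abstraction": viewing
   any object as raw bytes via unsigned char* is exactly the view you
   get when reverse engineering. Illustrative sketch. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Dump an object's bytes into out[] using only byte-pointer math. */
void bytes_of(const void *obj, size_t len, unsigned char *out) {
    const unsigned char *p = (const unsigned char *)obj;
    for (size_t i = 0; i < len; i++)
        out[i] = p[i];   /* p + i advances one byte, no hidden scaling */
}

/* Contrast: int-pointer arithmetic hides a divide by sizeof(int);
   the difference is an element count, not a byte count. */
ptrdiff_t hidden_scaling(const int *a, const int *b) {
    return b - a;
}
```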

Clive RobinsonOctober 5, 2016 8:42 PM

@ Slime mold...,

Shadow Broker busted

From the information available, the man sounds more like "collateral damage" of a witch hunt than an actual member of the --so far unknown-- Shadow Brokers.

As we don't know what the classified documents are, but do know that the NSA etc. "over classify beyond reason", there is a probability that the only real secret in the documents is the author's name and contact details. It's not unknown for the NSA to witch-hunt and terrorise people who have unclassified information that the NSA suddenly decides to classify for otherwise unaccountable reasons.

If you look at, for argument's sake, the TAO catalogue, I've yet to find any technical aspect in it which was not already "knowledge in the public domain". Thus this moves into the ambiguous "methods and sources" territory, where things like the brand and quantity of the toilet rolls in the executive toilet, and who changes them, are classified well beyond secret "just in case".

Clive RobinsonOctober 5, 2016 10:24 PM

@ Slime Mold...

After a little digging around, the case of the arrested man gets more curious...

Unnamed sources have said that what he had was "computer code" (the classifying of which is problematic at best). Another says that the collection of documents had been going on for some time prior to Ed Snowden, and likewise the stuff the Shadow Brokers are alleged to have copies of. So he's been collecting for more than half a decade, maybe a decade or so.

But aside from that are allegations of "thousands of documents", yet only six are pulled out for charges. Which leaves the question: are there thousands of classified documents, or just six amongst many other unclassified documents, and if so, when did they become classified?

Further, other than the usual "shock and awe" FBI SWAT/murder team tactics, there is no information as to how he was identified for such draconian treatment. Even the authorities say that they have no knowledge of him even attempting to pass information on, just an Obama "control freak" sidekick trying to talk it up, as in maybe he intended to but bottled out. Others in authority say he's a patriot and thus they cannot figure him out.

Common sense sort of says that if he's been collecting for between five years and a decade or so, then if his intention was to sell or release, he would have done so long ago. Likewise, if he intended to sell but bottled out, then why keep the stuff around? But tellingly, if he knew or had any idea it was classified, how come he had it just lying about in his house and car?

But from public and named sources, he was "at school" finishing off a PhD thesis that he had been working on for the past ten years at the University of Maryland's Interactive Systems Research Center. The title of his PhD is "Exploration of new methods for remote analysis of heterogeneous and cloud-computing architectures".

Hmm, now let me think: could "remote analysis" be the head or the tail of the data stream from somebody like Google's servers? With the head being "inserts/implants/taps" and the tail being the analytics on the NSA servers where the illicitly obtained information was stored/analysed... Or similar?

I guess it's unlikely he will get to finish his PhD so we will probably never know...

Hopefully this is not another case of a researcher being scapegoated because of the incompetence of US Gov agencies.


Clive RobinsonOctober 5, 2016 11:50 PM

Some of you know about the $30 DIY EpiPen[1] published on the Internet in response to the monopolistic price hikes by the US company Mylan, which has raised the price from $100 to over $600 in a very short time.

Well, it turns out that, not for the first time, they have been making false claims to the Medicaid program, to their considerable financial advantage:

http://www.ibtimes.com/epipen-price-figures-mylan-accused-bilking-medicaid-program-misclassifying-drug-2427053

But it appears what Mylan is doing is not uncommon, thus they may be one of the lesser scammers...

[1] http://www.techtimes.com/articles/178904/20160922/meet-the-epipencil-the-30-diy-epipen-alternative-created-by-pharma-hackers.htm

Clive RobinsonOctober 6, 2016 4:47 AM

@ yikyak,

With regard to crossing the "abstraction gap" and what lurks within it, some people are researching formal methods to shine a spotlight on errors on the platform-specific side,

https://www.cl.cam.ac.uk/~mom22/decompilation.html

It's a fascinating area in its own right, and a lot more interesting than "writing proof of concept" code for attack vectors you have found. It also has the potential to be of mainstream interest within a few years, which may give the "early adopters" an advantage CV-wise.

JG4October 6, 2016 6:51 AM


@Clive and others

I think that various verifications of code can be managed by something like artificial intelligence, which is able to keep track of the huge number of variables and states, and the trajectories between them. I've thought in the past that obfuscated code could be resolved by something like AI. That is closely related to the problem of decompilation worked by Myreen.

I've said before that hardware verification is much more difficult in that there can be arbitrarily deep levels of undocumented features. I've mentioned system identification as a method for probing response functions of hardware and that the parameter space of all arbitrary sequences is intractable for exhaustive analysis. Crowd-sourcing the system identification (used here with far broader meaning than classical control theory) helps, but cannot defeat hardware backdoors that are tied to the hardware serial number, being different for each copy. Equally bad are hardware back doors tied to real-time clocks or numbers of cycles, although those could be defeated by powering down at intervals, or perhaps by mild x-ray irradiation to discharge everything. One attack on undocumented features would be to allow only vetted sequences of data and instructions to be executed. Unfortunately, that severely damages the utility of the hardware, but may be acceptable in the quest for deterministic behavior.

The hypervisor addresses a key part of the determinism problem with hardware, but there are complementary approaches. As always, I appreciate the high level of discourse in attempting to resolve the problems of the human condition.

Slime Mold with MustardOctober 6, 2016 8:10 AM

@ Clive

1. When and why did you start spelling "American" (i.e. Mold v. Mould)?
For that matter, when did you start spelling? Are you an AI?

2. RE: Shadow Brokers
I very well may have been taken in by an often errant, sometimes subverted, and always hyperbolic press. "He was a patriot"? So was Kim Philby, just as Richard Sorge was a Nazi. Kindly refrain. Sorge's house was a rat's nest of both secret and unclassified documents. He was expected to pass on valuable intelligence. We know that most things with a stamp are useless.

3. RE: Epipen

Medicaid and Medicare fraud is estimated to cost a mere $300 billion per year. Want to bet that empty storefront near you is listed as a surgery by the NHS? Fighting fraud is what I do, and this shit pisses me off to no end.


Clive RobinsonOctober 6, 2016 12:13 PM

@ Slime Mold...,

1. When and why did you start spelling "American" (i.e. Mold v. Mould)?

Err, to match the spelling you use in your nic, just in case you searched by it.

As regards,

I very well may have been taken in by an often errant, sometimes subverted, and always hyperbolic press

It's their job to "sell the story" according to someone's wishes, be it the Editor's, the Proprietor's, or those of some external entity with leverage over them, or who can "gull the journo".

However, in the article you quoted they did get around to alluding that he was not a Shadow Broker, as the alleged classified info did not fit (read into that what your level of skepticism allows).

I did, however, as I said, have a look around other journalistic output on the subject, and what I showed was what I could find.

I tend to treat "named sources" and "publicly available" data with less skepticism than "unnamed/confidential sources", especially when the latter are aligned with the self-interested, such as politicos.

As for the current US Pres, it is fairly clear from his earliest actions that he is a control freak (see the waivers he made people sign). Further, it has become clear he has gone to war on whistleblowers despite the rhetoric. As for the FBI, they are rapidly earning a "badge of incompetence" for their investigatory abilities, and if they are prepared to set up those of low IQ as terrorists, then scapegoating an individual for political reasons would, as some of the legal brethren have noticed, be par for the course. As has often been said, "Justice Must Be Seen To Be Done", which means pageantry, spectacle, drama and publicity; but actual justice, why let that get in the way of a good show?

Others have indicated that some members of the FBI are actually barbaric in their behaviour. They point to the Boston Bombing and the mob-handed behaviour in a known associate's home, where apparently the unarmed suspect had to be shot dead as he was behaving in a threatening way. They point to the shooter's rather dodgy past as to his mentality... They also imply that not only do dead men tell no tales, they also cannot defend themselves or get justice, which is correct legally.

As I said above, make of it what you will as far as your skepticism allows; the problem is we are very unlikely to ever find out the real truth, especially as it's in many other people's interest that we do not. Unlike others, he does not have the power to get the likes of the FBI or DoJ to not go to trial, thus he is very unlikely to avoid jail or bankruptcy, such is the way these things have gone in the past.

Clive RobinsonOctober 6, 2016 12:55 PM

@ JG4,

I've said before that hardware verification is much more difficult in that there can be arbitrarily deep levels of undocumented features.

Yup, and currently no reliable way to test for subversion, even destructively. It's why in the past I've talked about mitigation as the only viable path. Hence my comments about hardware from unrelated suppliers and voting protocols, along with signature-watching hypervisors (what @Wael likes to call C-v-P ;)

As I've also indicated, part of that is that "bubbling up" attacks are not affected by any high-level type-safe language or formal methods applied, due to the simple fact that machine code is what actually executes, not the high-level language or the formal methods used to semantically verify it. It really has to be done at the lowest level it can be on the computing stack. Even the verification of machine code will not work if the microcode inside the CPU can be changed, or extra code injected via memory management or direct memory access from other processors or I/O. As "rowhammer" has shown, even correctly set up and otherwise secure page tables used to protect against memory-level attacks can be altered, thus allowing memory-level attacks to take place.

It was knowing that memory attacks to "bit flip" were possible via a variety of methods --such as EM fault injection-- that made me consider other protections in C-v-P, such as jailing CPUs by putting them in a halted state so that a hypervisor could actually "walk the memory" in the jail and check it was as it should be. It could also check the "letter box" buffers/streams by which the jailed CPU communicates with other parts of the system. Likewise it could check the bounds on the memory in the jail, such that there was no place for malware to hide.

But as I also noted, any system, no matter how secure, can always be got at by an insider, by taking the system down and physically changing things prior to bringing the system back up.

At the end of the day you cannot have a usable system that is 100% secure; trying to achieve that with single-CPU systems is an impossibility. Whilst the use of a security hypervisor can stop outside malware etc. and mitigate against chip supply subversion, it cannot stop insider attacks where they can shut the system down, make changes to it, and then bring it back up again.

What formal methods at the machine code level do is work at a lower stack level than the "abstraction gap", and as such they are actually much preferable security-wise to formal methods much further up the computing stack, as they make certain types of attack that much harder (but not impossible).

As part of the original C-v-P conversations here I pointed out that formal methods at the time were too far up the stack, and that security had to be based on a meet-in-the-middle process: formal methods going down as far as they could, with hardware-based mitigation limiting attacks bubbling up the stack from below.

As I have indicated in the past, often you can read about it here on this blog first, long before others get to play with the same ideas.

ab praeceptisOctober 6, 2016 1:44 PM

Clive Robinson

While I value striving for 100% (security) as an important intellectual endeavour that often brings us quite practical insight, we should not forget that 100% security is a wet dream.

As for hardware: No, sir, hardware can be formally verified and that is done (not always, maybe even not often, but it can be done and it is done).

Most importantly, however, I'd like to remind us that the vast majority of (security related) problems does not arise from chips whose tape-out has been perfidiously tampered with. Nope. We have problems because some system ass^H^Hdministrators consider it too burdensome to properly hash passwords (i.e. at least SHA-256 rather than MD5), or because maintainers find it too burdensome to change some defaults for an OS, or simply because plastic-box "router-firewalls" are of frightening non-quality and the manufacturers don't find it important to close 5-year-old security holes.

While you talk about formal verification of software being just not good enough, I happen to see that the vast majority of developers either have never heard of freely available *simple* tools, or find them still too complicated, or simply don't give a shit.
Example: cpachecker - it can't get much simpler than that. Yet most developers don't care to run even minimal checks. ("da compila don't throw no errors so it's fine").

So, yes, striving for 100% is a worthwhile and honourable endeavour - but let's not forget that (I guess) 99% of all problems stem from not thinking, from being too lazy or too unprofessional to type a simple command line, from "thinking" that MD5 is good enough, and the like.

As there is a current thread on the question of whether users should/can be educated or not: frankly, I sometimes don't see that much difference between the most stupid users ever and developers.

So, once more: how about first looking at problem classes that, with 20% effort, can solve 90% of our security problems, rather than being overly concerned about problem classes which take 80% effort to solve and offer 9.5% solved problems, let alone problem classes that are way above what 99% of developers/engineers can understand, let alone solve, and that make up maybe 0.5% of our security problems?

rOctober 6, 2016 4:41 PM

@ab,

I was wishy-washy with my 2nd post on the other thread; it took me a while to come to terms with blurting it out. I neglected to mention over there that it's awfully nice of Rust (and others) to help us defeat mistakes - but - just putting this out there for the sake of documentation - Rust (and the like) do not protect us from intentional "mistakes".

You scoff (in a friendly way) at 1000 eyes, but what makes me smile is that I can name roughly 5 (eyes)++ that do see the problems.

Rust is good, but we still need to verify and test our asses off really.

Clive RobinsonOctober 6, 2016 4:56 PM

@ ab praeceptis,

As for hardware: No, sir, hardware can be formally verified and that is done (not always, maybe even not often, but it can be done and it is done).

Not by the end user and not by the chip designer. It's why the US DoD amongst others are still spending big money on certain projects that just might detect supply chain poisoning, or malfeasance / subversion by the chip manufacturer.

As for,

we should not forget that 100% security is a wet dream.

I guess you did not read me saying,

    At the end of the day you can not have a usable system that is 100% secure, trying to achive that with single CPU systems is an impossibility.

I've said why in the past, not that it matters to most people: simply, a Turing engine can only do a limited range of things, and the logic behind it cannot verify its own consistency. From a practical viewpoint, a CPU can only tell you what its software can do, and it's a simple thought process that there is no way the CPU can tell whether the software is as it should be or something different when it executes it. Thus the CPU will only tell you what the software it executes wants you to know.

To check executing code you need another CPU, which in turn will tell you only what its software --that it cannot check-- wants to tell you. It's a "lesser flea" issue. But even if you added an external state machine that was not Turing complete, with every state known, it could only tell you what is passing its test points. With most CPUs, the external buses are as close as you can get. But even if you design the chip to have such circuitry, you still come back to the fact that it will only tell you what the chip manufacturer wants it to tell you. And as the circuit is only there to try to detect malfeasance by the chip manufacturer, they can make it lie to you.

Thus, for the sake of your sanity, the best thing to assume is that the chips cannot be trusted. So you look for a way to mitigate the problem. One way is to use different chips from different manufacturers and put them in a voting circuit, which works in the same way the NASA voting circuits work.

As for,

While you talk about formal verification of software being just not good enough, I happen to see that the vast majority of developers either never heard of freely available *simple* tools, or finds them still too complicated, or simply doesn't care shit.

Firstly, I did not say formal methods were not good enough; I pointed out the limits of high-level formal methods and the break point of the "abstraction gap". But formal methods are effectively static, not dynamic, in that, just like code signing and similar, they work on the program code long prior to the execution of that code.

For formal methods to really work they have to "follow down" as far as possible, but when talking as a customer, not the manufacturer of the CPU, the real hard limit is "at the package pins". So you as a customer cannot take it down to the microcode level, nor the RTL level below that, or the combinatorial logic below that.

Further, even if you could, it still does not solve the "bubbling up attacks", where formal methods mean crap, as they cannot deal with attacks on the program memory by the likes of I/O etc. during execution. Which was my point about rowhammer and EM fault injection attacks: they are dynamic to the execution, which formal methods are not.

I first thought about and then investigated EM fault injection attacks back in the 1980s, and whilst most of the industry is totally oblivious to it, it can have devastating effects. The one academic paper on it showed that a simple unmodulated EM carrier, when directed at a certified Hardware Security Module from IBM containing a TRNG, took the entropy down from over 32 bits to around 7 bits. Back in the '80s I was experimenting with modulating the EM carrier, which is even deadlier in that it allows you to synchronise fault injection with the executing code on the CPU and make the likes of specific branch tests fail in the attacker's favour.

Think what just being able to change the state of the zero flag in the CPU at a specific point in code execution can do for you...

Whilst I would agree there are many other attacks that could easily be prevented, you have to think long term, not short term, in a "low hanging fruit" game. Eventually --we hope-- sysadmins etc. will fix the easy problems, which means attackers will up their game to stay in. Thus at some point these types of attack will become used; the advantages of them are way too good to pass up. In effect they give you the benefits of directed malware, but in a way that cannot be found by the current methods...

Anyway, this post is long enough.

rOctober 6, 2016 5:46 PM

@Clive,

I will never look at TF2 the same. I will no longer be using simple predictable cmp/test/jxx pairs.

There may be ways to code a double-logic trap to detect that, FLAGS != DATA - but it would have to be put EVERYWHERE.

What is that, a target against RTOS and PLCs?

JG4October 6, 2016 8:09 PM

@Clive

Thanks for your helpful discussion. I didn't mean to imply that I was the only participant to mention the problem of hardware verification. You have done a good job many times of elucidating the problem. Obscure hardware is a good start. Even combinations of relatively obscure hardware that cannot be ID'd outside of the energy-gap are good. If I didn't say it explicitly, a webcam is a remarkably high-bandwidth tool for transferring data in one direction and only one direction across one inexpensive and reasonably effective energy-gap. A pair of webcams is bidirectional and the problem reduces to effective filtering.

be sureOctober 6, 2016 9:30 PM

@JG4

your bidirectional webcams don't have any LEDs, wireless radios, hidden SDR capabilities, or thermal response curves? then why use them as a continuous filtered link?

ab praeceptisOctober 6, 2016 11:10 PM

Clive Robinson

Hmmm, I think we're having a misunderstanding. While you put the fence at the chip's pins, I (possibly mistakenly) took your statement to say that chips can't be verified *by their designers/manufacturers*.

It's not at all to do with you but rather with personal bias, but I just don't like to bet on "stochastic thinning". Buying from different sources may work when buying groceries for the governor or president (and is done, btw), but I think it doesn't help much with (non-trivial) chips. But I accept that sometimes it's the only option left, to at least spread out risk.

If I were a state agency level player, I'd probably grab open source designs and spend some man-years verifying them. Now, you will probably object and say that that still leaves me in the hands of the fab, and you would be right, but I see a somewhat helpful relation in that players who can't even build a small-scale fab are quite probably not major targets for the NSA and the like.

But still, you are right. I saw that when I (quite a while ago) studied the situation in Russia. Those guys (whom I love anyway, also for their excellent academia) have somehow managed to keep something, anything, alive even in the worst of times. Granted, they had their own designs (Elbrus being the best known, I guess); anyway, when they were deep in the mud in the '90s they still kept that alive, and a little later they almost manually produced a small number of processors in a university cellar. Seriously.

Funnily, the 2nd path they took was the same one the Europeans had taken in ESA, namely building on the open-sourced SPARC design. And while the European "Leon" is next to floating dead, the Russians grew theirs, and today they have 8-cores of which up to 4 (iirc) can be put together on a board. That's by no means a racing car that can run against Intel's modern Xeon, but it's reliable, under their full control, and damn good enough for the jobs for which they need it.

Back to formal verification. The way I see it, we're both right; you certainly are. But still: every mile starts with one step. And no matter how rotten or trustworthy your chip is, you'll need reliable software anyway. And btw., knowing for sure that my software pieces are 100% correct also gives me a good basis to learn more about my processor, and in particular it allows me quite some checking of the chip.

FigureitoutOctober 7, 2016 12:13 AM

Clive Robinson
--If my understanding of "microcode" is even close to correct, there's basically another ROM where the microcode is stored that can generate certain control signals in the CPU, or mix and/or modify them with other regular instructions. There can be hardwired microcode too, i.e. hardware implementations of generating these signals that can't be changed or reprogrammed. It's said to be "impossible" to access and modify via the normal programming channels where you program other areas of an MCU or whatever. It can alter any part of the processing stages, since it can access the MAR, MDR, PC, SP, IR, ALU etc. -- any aspect of the CPU. I was thinking in class: if there's a bug in the microcode, or that ROM got damaged, the design would be broken; it has to be almost perfect.

Even though it kind of looks like asm, you have even more real control of the hardware. Looks hard, but probably very rewarding. Want to try it sometime.

Regarding fault attacks, I think they're pretty overrated in terms of being very specific attacks that would affect code execution, like flags in a ROM of code. You need physical access with a scope to time the injection just right, or very close remote access (like a centimeter or two), eh? But simple attacks on the edges would be devastating... I've already said here, and any embedded people will know, that if certain bits are set or unset, the regular programming sequence might not work if it's kind of a raw programming method. Any major lock bits on most processors these days are all it would take to be a major pain in the ass. Or unset the BOD bits and watchdog bits to make your product look more crappy if a cosmic ray alters a bit and causes a lockup.

And all it would take to do a DoS attack on a product I'm working on (we need a low-power watchdog; there isn't one, so we need a bit of a miracle, and we should've chosen another chip (I wasn't there to make that decision or speak up when the decision was made...)) is to modify the I2C comms to a small receiver chip. I don't know why, but the designers had a grand time making a pointless custom compression function that modifies any input. So you have to calculate what the values are after the compression function according to how you want to configure this chip; it just has registers that some, what they call "digital logic", acts on. Simply modifying those comms in any way will likely corrupt them and cause the chip to lock up in some strange way until it gets the right values to configure it correctly. They obviously won't give up the secret sauce easily on how the control part (they don't call it a CPU, but you could probably say it's one) reads and acts on those registers.

FigureitoutOctober 7, 2016 12:35 AM

Clive Robinson
--Correction RE: "Simply modify those comms in any way": it has to be the values in the I2C comms. So if it completely corrupts the comms such that it's no longer I2C, it may just be ignored. That's the dumbest of DoS attacks. So remote injection that doesn't affect the slave address, just the data line. Easy, right? No. The attack will fail more than it succeeds.

Clive RobinsonOctober 7, 2016 2:49 AM

@ r,

What is that a target against RTOS and PLCs?

Back when I first played with it, CPUs were "40-pin DIL" 8-bit microprocessors (think the 6502 and Z80 chips used in those classic "home computers").

Some (the 1802) were "radiation hardened" "space qualified" parts that also had extraordinary MTTF figures, so they got used in Remote Telemetry Units (RTUs). Back then, rather than design your own radio systems, it was more cost effective to buy in hand-held radios that had appropriate safety ratings. Many engineers gave the likes of these "radiation hardened" parts more immunity than they deserved. That is, they assumed --incorrectly-- that it was "all EM radiation", not just certain types of ionizing radiation. As a licenced Amateur Radio enthusiast who had designed and built my own equipment, including 10GHz --3cm-- kit, I knew just how susceptible LSI TTL chips were to even quite low powers of RF (ECL less so). One reason was something called "metastability" in the likes of D-type and edge-triggered latches (often used in "frequency counters"). Put simply, you could interfere with the latch functioning at the transition point from high to low etc.

Anyway, I got into an argument with an engineer who was old enough to have been around in the early days of "ladder logic" using relays etc. for industrial control (have a hunt around for "Norbit" logic), about the placement of a radio unit in a design for a small RTU. It resulted in me shoving the antenna of a VHF two-way radio next to his CPU board and keying it up; his control sequence went for a "walk in the park" and he was understandably "not a happy camper".

Anyway, it kind of turned into a sniping war, so I decided to "characterize the problem" in my own time. Which led to a lot of interesting discoveries using an SSB/CW setup I had. Let's just say that antenna theory applies to PCB tracks, and in-chip protection diodes can act as AM demodulators. Another is that transmission line theory about open/shorted multiples of a quarter wavelength on things like driver chips on serial data systems caused "remodulation", in a not too dissimilar way to "Theremin's Thing" or the "Great Seal Bug". Even though the chips might have max clock speeds of only a few MHz, the PCB tracks, chip package pins, bond-out wires etc. were still susceptible up to 10GHz (which was as high as I could go back then with my own kit). Oh, and ventilation slots and gaps in cases work nicely as "slot antennas". You can use this to get a 10GHz signal into a case, and if you AM modulate it with, say, a 2MHz signal, semiconductors will AM "envelope detect" the 10GHz signal and the wires connected to them will radiate the 2MHz inside the box...

I'll let you think on what effect that might have on clock lines etc.

As for the Instruction Set Architecture (ISA)[1] interface, which is what you get at the CPU pins --effectively the lowest interface a "customer" or ordinary developer can go-- there is much that is hidden but that EM signals can reach.

If you have a look at CPU internals, the metal layers are just like a miniature PCB; most lines are very short, but some, such as control signals and power routing, are much longer. Traces for the likes of the "zero flag" get all over the place, including out of the ALU block and back into the "instruction decode and control" block as an input. It would not be too difficult to tape out that signal adjacent to an EM-susceptible power supply trace, and mutual coupling would transfer EM energy across to cause various things to go wrong... If the mask maker or chip manufacturer did this, the chip would be functionally correct to whatever specification a chip designer had produced, and it would likewise test correctly in all the ways a chip designer would carry out. But it would still be vulnerable to the right sort of EM signal.

Yes, it's the stuff of nightmares, but something the likes of the US DoD are concerned about. Imagine if you will a "smart bomb": all it really is is a printed circuit in a tube with an EM-open front end to look for the laser dot. It's often not part of the bomb itself but strapped onto the front of a conventional dumb iron bomb. It uses "bang bang" control of small wings to guide the dumb iron bomb to the target. Because the bomb is smart, it's usually not "aimed" in the traditional sense like a dumb iron bomb would be. Thus, if you are in a bunker etc. and have a powerful EM emitter with the right frequency and modulation, those smart bombs become unaimed dumb iron bombs and will miss by quite a long way. That's the sort of nightmare that could easily be a solid reality the DoD has to think about and deal with.

But there is another aspect: it's known that some updated AWACS Boeing E-3 Sentry radar systems can cause absolute havoc with electronics at quite some distance; think what a specially designed system could do. It's something various defence organisations have considered for "Drone Hunter Killers" etc. Also, for non-destructive EMP-style directed energy weapons --that don't need nukes or similar to drive them-- sometimes called HERF guns, knowing the right frequency and modulation would increase their effective range dramatically, or make them considerably smaller for the same range. They are in effect a "next step" in the ECM war.

[1] Not to be confused with the ISA Bus that IBM developed for the IBM-PC.

Clive RobinsonOctober 7, 2016 4:13 AM

@ Figureitout,

If my understanding that "microcode" is even close to correct, there's basically another ROM where the microcode is stored that can generate certain control signals in CPU or mixes and/or modifies w/ other regular instructions. There can be hardware microcode too...

Your understanding is correct in all but one small but quite critical implementation detail: it's not the traditional ROM any longer, it's easily alterable.

I don't know if you remember the Pentium maths bug, where Intel engineers for some reason --not made public-- did not correctly populate a ROM lookup table? The cost to Intel made it a definite walnut-corridor attention grabber.

Thus they and others moved from non-mutable (ROM) to semi-mutable (EEROM) or fully mutable (RAM). Unfortunately it has allowed other things to happen, such as CPU startup patching with binary blobs. It's something the OpenBoot people have documented as being a major issue.

With regards,

Even though it kind of looks like asm, you have even more real control of the hardware. Looks hard, but probably very rewarding. Want to try it sometime.

If you think about it, it's actually a VLIW extension of RISC-like instructions, even though it came first, by thirty years or so, from England ;-)

Some early IBM machines actually loaded up the microcode at start up, as large capacity ROMs were slow compared to RAM. It also allowed startup test code --what we would now call BITE-- to run and get cleared out, to save very very expensive memory. And yes, programmers could use it like ASM to encode their own "fast code" CISC-like blocks, even though the microcode bit width was around 128 bits (it's rumoured that, like certain Cray instructions, it was done to sell to the NSA).

It's actually not as hard as it sounds to get up and running; have a look at RTL programming --thankfully these days you don't have to program with a wire-wrap tool. But it can be very hard to get it to be really efficient, in an area where efficiency really counts. You also have to remember it is in a way like parallel programming with very short, very fast, ultra time-critical execution threads.

It's why you need a different outlook on programming, beyond run-of-the-mill sequential programming. And just for that reason alone I would recommend people get to understand it, especially by playing with it.

But there is an even more important reason coming up the pipe: Intel are adding FPGA co-chips alongside some of their processors, with the aim of eventually integrating programmable logic onto the CPU chip. This will give a 5-50 times speed boost for quite a few types of algorithm. Those that grok it will be able to leap on the lucrative end of the consulting wave it will start.

There are at least a couple of reasons Intel would be doing this: a couple of obvious ones and a blue-sky one. The obvious ones are Intel getting out of the "CISC bind" it's in with its heat-death problems, and trying to stay on track with Moore's Law performance-wise. There's also the fact that RISC-based systems are riding the change from desktops to pads and smartphones, and are getting in on the server market as well, so are a real threat to Intel's position/market share.

As for "blue sky", it's a stepping stone on the path to quantum computing, and could actually "dodge the need" for QC in all but a few very specialised areas. You only have to look at the price D-Wave ask to realise just how lucrative such a market could be...

Anyway, the funny side of the FPGAs as co-chips is that it kind of turns the wheel back around to those early IBM machines.

@Nick P will possibly have something to say about it if he gets off posting to "Hacker News" ;)

Which makes me think, we have not seen Wael or Dirk recently (could be Rolf fatigue or waning interest in subject matter).

Clive RobinsonOctober 7, 2016 4:29 AM

@ Figureitout,

So you have to calculate what values are after compression function as to how you want to configure this chip, it just has registers that some, what they call "digital logic" acts on. Simply modify those comms in any way will likely corrupt and cause the chip to lockup in some strange way until it gets the right values to configure it correct.

That flies in the face of sound engineering practice for communications. You don't use unprotected compression in critical communications paths.

The usual practice is to use both run-limited compression and Forward Error Correction. You use them to find not just a "bandwidth/reliability" sweet spot but also to noise-shape the communication profile, similar to whitening, such that you get more energy per information bit but stay inside the transmission mask (a bit like Direct Sequence Spread Spectrum).

I can see this coming back to bite, but the chances are it won't be those "engineers" that get the bite marks as reminders not to go off half-cocked.

JG4October 7, 2016 7:12 AM


@be sure

One of the web cams is inside the energy-gap enclosure and only a limited visible spectral bandwidth, which could be monochrome, is allowed in and out. The characteristics of the webcam are not particularly important, because the output and controls are filtered by semicustom hardware, perhaps an FPGA board or Arduino. The internet-connected machine is assumed to be compromised all the way from the correspondent's internet-connected machine, at every piece of hardware and software in between. The sender and receiver have secure communications, because their devices inside the energy-gap enclosures cannot be compromised (via the webcams) and run vetted encryption/decryption code. There is nothing magical about webcams, as countless other data diodes will serve much the same purpose. Just an interesting way to ship data from one screen to another.

be sureOctober 7, 2016 2:49 PM

@JG4

sounds like a pretty good setup if no external light can leak through. otherwise not really energy-gapped, and flickering lights in enclosure can be used to exfil. if you can be sure the secure machine is never compromised, it does not matter anyway

Wesley ParishOctober 8, 2016 12:35 AM

@Clive Robinson

The one area I find not well covered by readable books is "memory usage" which falls in the abstraction gap.

One would go so far as to say that understanding memory and its management is the central most important part of learning to program. I for one never understood objects in object orientation until I understood that they were a method of allocating memory for the related functions and procedures.

tyrOctober 8, 2016 11:02 PM


@Clive, FigureitOut

If you looked at big iron startup, the code
had to be imbedded in the program load, then
with the rise of microcomps some smart type
figured out how to offload the setup sequence
onto a microcomp. That meant once it had run
your big iron processor could run the customers
code with no fiddly startup overhead. Not the
same model as your desktop since very few small
iron machines have much in the way of attached
I/O channels even now. The whole way of doing
business was quite different when 'microcode'
was introduced. It was basically the same idea
as a bootstrap PROM which gets shadowed out
after the initial load takes place.

I'm sure the idea has mutated since then in a
lot of different directions because a useful idea
gets copied and changed as it passes through the
comp field.

FigureitoutOctober 8, 2016 11:40 PM

Clive Robinson
--For a company like Intel, I can see why they'd want microcode not locked in silicon, but now they probably don't offer many chips that have unchangeable microcode? Trusting Intel to not put backdoors in...nah, I don't trust them. Or to leave that attack surface. Can't wait for RISC-V chips and resulting firmware, and other RISC/ARM chips keep getting better; then have laptops based on them. I don't want changeable microcode in the CPU's that I really care about, gets too murky. If I can grok it, may take a crappy PC (I've got a $10 laptop I wouldn't mind destroying b/c it's either been destroyed by malware or is just too old to do anything in normal time frames, so I can have fun w/ it), and simply corrupt the microcode and see what happens. Depends on setup time. I expect a darkscreen lol, would love a bluescreen (it's running XP, I couldn't even access BIOS on this piece of crap, pretty good security keeping me out, it's even hard to get the screws off lol). The EEPROM is in CPU chip or separate on motherboard? How do I get at it?

RTL programming, like VHDL and Verilog? I'm probably going to take a class w/ more VHDL, but I didn't really like it when introduced (didn't get taught to program, just had small programs shoved in my face via labs). Maybe it grows on you. Seeing FPGA's work though was very nice, just some simple logic and basic circuits.

RE: Wael & Dirk
--Well you could use email you know. :p If you're in London you could take a train to Belgium in like an hour and meet Dirk for a beer, not too hard in Europe when things are pretty close to each other.

That flies in the face of sound engineering practice for communications.
--Well, why would that be? Got a reference on that? Not sure FEC is used everywhere as some standard. We're using their solution b/c we're way too small to make our own; hate being reliant on others...I don't think they use much protection, some parity bits; but claim high EMI immunity. I'm not sure why the compression is used, for 16 bit down to 8 bit, maybe b/c they can only squeeze 8 bit registers in the space. Faster to compress and send less data? I don't know. But digging in further to the interface, all we have is a small data sheet, and I found a small undocumented feature, just inferred how to dynamically write to a register doing the same thing for another register (have to stop measurements, write, then resume). So it's not complete and probably has many more holes. There's some other tricks to check things that a coworker came up w/, but you have to do that externally w/ a separate micro, and that micro can't do anything in sleep mode (argh...other chips seem to have low powered separately clocked controllers, and you can have some code executing (an else condition in our case) at such low power, now it's getting ridiculous how low). Doesn't have a watchdog, too primitive a chip, have to make one ourselves. Reason we use it is it's lowest power (well, was; that may not be the case anymore), being low power affects everything...low power design is hard b/c it brings everything down w/ it. I'm really looking forward to some projects where this isn't a design constraint, feels like a straitjacket.

FigureitoutOctober 8, 2016 11:46 PM

tyr
--So some kind of cache or purely registers for speed? Just seems to complicate things IMO, especially in CISC chips. I don't get how things don't fail more often w/ such a complicated chain of events.

WaelOctober 9, 2016 12:35 AM

@Figureitout, @Clive Robinson,

Email may leak more info than necessary, unless one is careful. This blog is as good as email with a comparable privacy posture, and better traceability properties if one uses a nick name (false alarm @Nick P, if you searched for your name.)

Clive RobinsonOctober 11, 2016 8:15 PM

@ Wael,

Just some eye problems...

It's ironic, I ask the question, then like you I go absent without leave due to illness.

Usually I take my phone with me, but I'd left it on charge, so I've come home to a stack of texts saying "Where are you" etc :-(

Clive RobinsonOctober 11, 2016 8:48 PM

@ tyr,

I'm sure the idea has mutated since then in a lot of different directions because a useful idea gets copied and changed as it passes through the comp field.

Only some "useful ideas" get "copied and changed", and of those that do it's usually for the worse in the long run, as the primary motivator appears to be to 'save expense'. My favourite example is BIOS/boot code moving from fusible-link PROM, through UV ROM, to Flash ROM and other "electrically alterable" memory. Originally it had genuinely been WORM, and care was taken to get it right before release. With UV ROM, code "rework" became possible without "chip change", thus coding became subject to "hurry up" and less care. With Flash, coding standards have gone out of the window into "patch tuesday" "gar gar land". I won't name the company, but their BIOS coding standard dropped so low that the part of the code needed to do flash updates / over-writes got corrupted and was not tested before shipping. This was because it was cut-n-paste from an earlier version, thus "known to work", when in fact due to a parts change it did not... Oops.

The problem with the "patch tuesday" view on life is that security is not even known in "gar gar land", which means that it can be overwritten by malicious software from god alone knows where. Great if you are some Intel Agency looking for deniability, bad if it's "Russian criminals" that have slurped all your health and financial records etc... After all, "technology is agnostic to use", so "sauce for the goose may not be sauce for the gander".

Clive RobinsonOctober 11, 2016 9:39 PM

@ Wesley Parish,

One would go so far as to say that understanding memory and its management is the central most important part of learning to program.

It is but... That's not the way it gets taught, due to "abstraction".

Let's face it, "pointers and addresses" are fairly simple at the memory layer of the computing stack, up to the machine code level... Then on the other side of the abstraction gap it's at best complicated.

Part of this is due to trying to hide Virtual Memory issues, part to trying to make high level source code more readable, but mainly to hiding issues with basic data type sizes. The resulting compiler-generated pointer updating code is usually a large mess which in embedded systems takes up valuable resources unnecessarily. Worse, it gives the illusion of "type safety" from a top down view, which all goes to shite when as little as a single pointer gets "inadvertently" changed...

From an embedded developers point of view "cast to void" and take responsibility for your own pointer math can be a lot better way to go.
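For a concrete (and entirely hypothetical) sketch of that embedded style -- casting to a byte pointer and doing your own pointer math over a wire-format buffer, rather than trusting the compiler's struct layout -- the record size and field layout below are assumptions for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical wire format: each record is 3 bytes on the wire --
 * a 1-byte id followed by a little-endian 16-bit value.  A struct
 * holding the same fields would typically be padded to 4 bytes, so
 * we cast to a byte pointer and do the pointer math ourselves. */
#define WIRE_RECORD_SIZE 3

uint16_t read_value(const void *buf, size_t index) {
    const uint8_t *p = (const uint8_t *)buf + index * WIRE_RECORD_SIZE;
    return (uint16_t)(p[1] | (p[2] << 8));  /* skip id, assemble value */
}
```

Given the bytes `{0x01, 0x34, 0x12, 0x02, 0x78, 0x56}`, `read_value(buf, 1)` yields `0x5678`; the layout is dictated by the protocol, not by whatever the compiler decides a struct array should look like.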

As for "object orientated" --sound of spit ricocheting off spittoon, stage right-- it's nearly a meaningless term due to the inordinate number of different ideas as to what it is and, importantly, what properties it has in any given language. It's why we have inane platitudes like "object goodness". Once upon a time we had the simple "Programs = Data + Code", then the Abstract Data Type view of containers spread out from data to code. Whilst it is useful to think "Object = Data + Methods", it's rare to find Objects as well corralled as Programs are by OSs. And it's not just that Objects leak private info unreliably; Objects frequently come with goddam awful interfaces that are usually more a reflection of how clever a code cutter thinks they are than of real use to others.

I guess I'm more of a "nuts and bolts" "make it work well for ever" type of guy than a "rock star" "make it a smoke and mirrors one off performance" type.

Whilst there are some languages where objects fit in by design, bolt-ons are never a good idea. Likewise some programmers can write readable and useful object orientated code, but all too many cannot.

Clive RobinsonOctober 11, 2016 10:23 PM

@ Figureitout,

The [microcode] EEPROM is in CPU chip or separate on motherboard? How do I get at it?

It's usually buried inside the instruction decode unit deep within the chip. I've been told in the past but have never had reason to check, that Intel microcode updates are "signed" binary blobs but AMDs are not "signed". The implication of Intel having a signing mechanism is that there is a lot of extra hardware in their chips that is only ever used at reset etc.

With regards my,

    That flies in the face of sound engineering practice for communications

Well, it sort of goes back to before Claude Shannon; he and a number of others thought about the problem and came up with the basis of what we now call Information Theory.

Shannon came up with the noisy channel model, which put a mathematical limit on what is achievable in any communications channel.

One aspect of this is "redundancy". Most natural languages are highly redundant to get around various noise source issues, including echoes, and as a result appear in normal circumstances to be grossly inefficient. The point is, it's "worst case" type behaviour: natural language most needs to work when there are dangers present, which are often quite noisy. Compression tries to remove all redundancy to get high channel utilisation; as such it's "best case" type behaviour that "works well until it's needed".

Over the years error correction has been a moving target, in part due to improvements in technology and in part due to improving understanding. There are many quite variable aspects to consider when deciding on the what and how of error correction, and many books have been written on the subject (many in languages that were once behind the Iron Curtain).

The big trade-off is "retransmit time" -v- "redundancy", and where a particular tipping point comes is very application specific. Roughly, the shorter the distance, the less error correction and retransmit delay is needed. Which is kind of what you would expect, with channel noise being roughly constant per unit of distance.

Thus to decide on any method to use you have to characterize the channel that will be used first.

FigureitoutOctober 12, 2016 1:48 AM

Clive Robinson
It's usually buried inside the instruction decode unit deep within the chip.
--Ok, I'll find it someday.

Thus to decide on any method to use you have to characterize the channel that will be used first.
--Yeah, I've learned a lot for what is ultimately pretty small products, and we have to go forward w/ what we have now. But I know I can do even better on these 2 products; it'll take time to port to 2 new chips and potentially get new FCC cert. Porting a nice I2C driver proved to be too hard for me, I want another crack at that. A few hardware changes too (ack lines from the RF chip to the sensor chip, further diagnose some weird signals coming from just pulling a line low, and I would like another ack line or just an error line the sensor chip can wait on w/ an interrupt, just needs 1 more interrupt line...).

If I get the chance to make a living designing or studying something like the proper channels for transmitting info (super robust, yet don't use all the resources in the world) then I'll do it. This is just one aspect of the problems I need to deal w/. Until then, I have to accept some failures in this product if there's too much noise or channel corruption at time of operation. For instance, how do you program it to stop after so many error conditions? You can't afford to do it until the channel comes back, that'll kill the battery if you do too many re-tries. It's just a decision you make of what's reasonable based on your tests and what you think is good enough. Hopefully there's no issue in the field that's unreasonable.

Clive RobinsonOctober 12, 2016 5:07 AM

@ Figureitout,

You can't afford to do it until the channel comes back, that'll kill the battery if you do too many re-tries.

Yes and No, firstly you have to decide what your acceptable error rate is as no comms channel is perfect. It can vary a lot depending on what you are doing at the time.

Usually missing a "start" signal is not that important, but a "stop" signal can be a disaster. Think about an RC car, for instance: not starting to move forwards has little consequence, but fail to stop and you could be making holes in walls. Think how steering signals often work: they do not have to be a one-off "turn 30 degrees left" but a repeated "turn half a degree left", which will get the car to 30 degrees even if some signals go missing. Thus moving that idea to a start signal can make it safer by being for a short time duration, and if it does not get repeated the car stops. In essence you design the remote to "fail safe". You could improve this by adding an integrating function that allows faster speeds if signals are reliable but keeps it slow if not. In effect you are adapting to the channel conditions by reducing your need for control bandwidth, which is also an often used trick.
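The repeated-increment idea can be sketched in a few lines of C (the half-degree step and packet counts are assumed numbers, not from any real RC protocol):

```c
#include <stddef.h>

/* Sketch of the repeated-increment idea: each received packet nudges
 * the steering by half a degree, so sixty packets command a 30 degree
 * turn.  Lost packets (0 entries) just slow convergence -- they never
 * drop the whole command the way a one-shot "turn 30 degrees" would. */
double apply_steering(double current_deg, const int received[], size_t n) {
    for (size_t i = 0; i < n; i++)
        if (received[i])            /* 1 = packet arrived, 0 = lost */
            current_deg += 0.5;
    return current_deg;
}
```

With all 60 packets received the car ends up at 30 degrees; lose two of them and it still reaches 29, rather than missing the whole turn.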

In RF systems the most likely form of signal loss is due to burst noise at the receiver end. Thus one way to overcome it is to repeat the transmission several times, which was the earliest form of Forward Error Correction (FEC).

Since then FEC has got a whole lot more mathematical, but in essence the idea is the same: you are modeling the channel. Yes, you lose bandwidth with high quality signals, but you gain a lot at low quality signals, where just getting one or two command signals through instead of thousands could be important. Which gives you the idea of prioritizing the commands to transmit, not queuing all of them, which is another technique.
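That earliest repeat-the-transmission form of FEC is the rate-1/3 repetition code, which is small enough to sketch directly (function names here are made up for the sketch):

```c
#include <stdint.h>

/* Rate-1/3 repetition code, the simplest FEC: send every byte three
 * times. */
void fec3_encode(uint8_t data, uint8_t out[3]) {
    out[0] = out[1] = out[2] = data;
}

/* Decode by bitwise majority vote: a bit survives if at least two of
 * the three received copies agree on it, so any single corrupted copy
 * -- e.g. one hit by burst noise -- is corrected automatically. */
uint8_t fec3_decode(const uint8_t in[3]) {
    return (in[0] & in[1]) | (in[0] & in[2]) | (in[1] & in[2]);
}
```

Flip every bit of one of the three copies and `fec3_decode` still returns the original byte; the price is a 3x bandwidth cost whether the channel was noisy or not, which is exactly the trade-off the more mathematical codes improve on.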

Some call it the "design for fail" or, more politely, "defensive design" mentality, which tends to be the opposite of the usual optimistic design mentality.

To develop the required outlook requires a little bit of "oh shit, the world's about to end" foresight. To quote the NASA sayings, to be "a steely eyed missileman" "you have to work the problem", and that applies as much to machines as it does to humans. Apollo 11 was nearly a disaster due to a simple mistake: according to Buzz at a Q&A session I attended, he had not switched off a ranging "rendezvous radar" on the lander used for docking. This was due to a mistake in the checklist; the result was a whole slew of input the system designers had not thought about, and the computer overloaded. However the design of the computer was such that it was fault tolerant: it expected parts of its program to crash and would restart them. Importantly, it prioritized this on a state by state basis, which meant that though degraded, the computer did what it was supposed to. Margaret Hamilton was responsible for the design of this part of the computer, and it was her maths skills and insight which gave rise to the design.

FigureitoutOctober 12, 2016 7:49 AM

Clive Robinson
you have to decide what your acceptable error rate is
--We have for now: 3X transmitting if no acks received; on the last TX we don't wait for an ack and go directly to sleep. Can change whenever. Every millisecond we're awake is killing the battery. I hate thinking like this, but this is what you need to do for low-powered long-term switches. Near the end of life, the battery is going to be hitting BOD resets and the signal strength drops along w/ LED brightness, so more re-TX and a closer-to-dead battery. If I had the bigger battery that is available (and pretty expensive), I'd be breathing easier. But no, there's some reasons (size of it, remaking the case) why we can't do that for now and it sucks.
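That 3X-then-sleep policy reads naturally as a bounded retry loop; a hypothetical sketch in C, with stubs standing in for whatever radio driver is actually in use:

```c
#include <stdbool.h>

#define MAX_TX 3   /* three transmissions max, per the policy above */

/* Hypothetical radio driver stubs; a real driver would key the radio
 * and listen during the ack window.  'fails_before_ack' simulates how
 * many sends go unacknowledged. */
static int fails_before_ack;
static bool transmit_and_wait_ack(void) {
    if (fails_before_ack > 0) { fails_before_ack--; return false; }
    return true;
}
static void transmit_no_wait(void) { /* fire and forget */ }

/* Send up to MAX_TX times; on the final attempt don't wait for an ack
 * at all -- every millisecond awake costs battery -- just sleep. */
bool send_with_retry(void) {
    for (int i = 0; i < MAX_TX - 1; i++)
        if (transmit_and_wait_ack())
            return true;       /* acked early, radio can sleep now */
    transmit_no_wait();        /* last try: send and sleep immediately */
    return false;              /* outcome unknown by design */
}
```

The deliberate quirk is that a `false` return doesn't mean failure, only that the node stopped spending energy on finding out.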

There's also error conditions on the sensor, since one of them needs to operate outside (yay, all the issues w/ electronics working outside...). So, there's being able to again to stop if it's activating too many times when all the intel you have to make a decision is a charge on a plate. Some little kid holding the switch on, yeah we'll stop that. I'd have to think more how to use a timer when we're constantly going to sleep, some kind of simple software timer would be ideal. The big issue here is constant reset loops if there's water (which enough can have a charge to look like human touch) on the plate. This took a long time to get to where it is now, and seemed impossible at one point. But I'm pretty much done w/ it and ready to move on, just finishing up touches which is taking forever.

WaelOctober 12, 2016 10:55 AM

@Clive Robinson,

It's ironic, I ask the question..,

Ironic it is! Did I tell you the scorpion story? It's even more "ironic"!

Clive RobinsonOctober 12, 2016 2:58 PM

@ Wael,

Did I tell you the scorpion story?

Not that I remember, which means either no, or my memory is going fuzzy with age, a bit like my beard.

WaelOctober 12, 2016 3:44 PM

@Clive Robinson,

Many moons ago, a female colleague came to work late because her husband was stung by a scorpion. I asked her if it got him in the foot or the hand. She said in the face! I laughed and said a scorpion sting in the face? How strange.

A week later, when I was asleep, I felt some pain in my face. I thought it was one of the wires I play with, so I grabbed it without opening my eyes and put my finger on its end, and it pricked me, so I threw it on the floor and went back to sleep. The pain increased and I thought it can't be a wire. I looked on the floor, and it was a scorpion that I had killed when I tossed it on the floor.

So I called a doctor an hour later. He said if I'm still alive, I should be good, and I should take an Advil for the pain. Unlike snakes, scorpion's poison has no antidote.

The exterminator that came to my place to spray insecticide saw the scorpion and said it could not have stung me because it's the most poisonous scorpion in Arizona. I told him it stung me twice, once in my face. So I learned a lesson. The pain lasted a week :)

Clive RobinsonOctober 12, 2016 8:40 PM

@ Wael,

The exterminator that came to my place to spray insecticide saw the scorpion and said it could not have stung me because it's the most poisonous scorpion in Arizona. I told him it stung me twice, once in my face. So I learned a lesson. The pain lasted a week :)

Ouch...

Did you find out which species it was?

As far as I'm aware nearly all scorpions in the US are basically harmless, with the exception of the bark scorpion "Centruroides exilicauda" (there always has to be one that breaks the rules ;-) And in general you are more at risk of "lockjaw" from tetanus. Worldwide only about thirty of the nearly two thousand species of scorpion are of any real danger, and they tend to be the smaller species.

The upside for you if it was the bark scorpion is you must have been reasonably healthy at the time (apparently in general very few scorpion stings are more dangerous than bee stings, and quite a few in the US go unnoticed unless you get an infection in the wound site).

Mind you, "the most poisonous scorpion in..." is wrong; you can eat them without many problems, as they do in most parts of the world where the larger ones are common. Like many insects they actually don't taste unpleasant (infact some species are becoming endangered because people eat them covered in sweet sugar syrup as treats).

Even "most venomous" may not mean much, it depends on your local flora and fauna. For instance I've been well within stinging range of Britain's most venomous wild scorpion.

A ways outside East London there is a place called Sheerness on the northern banks of the Thames Estuary; a large part of it used to be part of a gun testing range and had a Victorian army barracks next to it (since converted to luxury housing). I had occasion to visit a number of times during my stint wearing the green. It had quite a few very large Victorian brick walls that faced south, and is a pleasant place to watch wildlife whilst "on stag" (guard duty). There is/was a colony of scorpions there that would have originated from Southern Europe. I discovered one whilst on guard duty one night when I heard it moving within a few inches of my head as I leaned up against the warm wall (yes, some scorpions are heat seeking and more sensitive than most anti-aircraft missiles).

Being "a bit keen" when it comes to wildlife, I reported it in passing to a "superior officer" who thought I was taking the pi55... Anyway she decided to push the issue; I said I was telling the truth; she said in effect I was a loonie, because everybody knows scorpions live in deserts and are dangerous... (so much for "common knowledge" in a very common mind).

Well, I said that I had definitely seen it (I shone a flashlight on it, so no real chance of being mistaken). She, not being that bright, raised the stakes and tried to get me charged[1]. I dug my heels in and it was getting serious; luckily for me the permanent camp administrator heard about it and confirmed there was a colony of scorpions near where I had spotted it. We were asked not to spread it about, as they were basically harmless to humans. Anyway, the following night one or two of us went out "scorpion watching" and we saw a couple more; they were quite slow and did not respond threateningly even when a finger was put in front of one to see what it would do, which was walk around it oh so slowly...

The knowledge about them has slowly spread,

http://www.jasonsteelwildlifephotography.yolasite.com/uk-scorpions.php

So yes, although I did not pick one up or get stung, I've been well within stinging range of Britain's most venomous scorpions, not that it would have done me much harm (I've been stung by all manner of wee beasties in my time and even been bitten by a venomous snake, which along with an acetylcholine-rich hornet sting in the side of the neck was probably the most painful; overall it certainly hurt a lot more than breaking my leg ;)

[1] Possibly because she previously blamed me, without any kind of proof (hence no official action taken), when she woke up one morning to discover a sheep in her tent ;)

WaelOctober 12, 2016 10:51 PM

@Clive Robinson,

You are correct, it was a bark scorpion that probably came from one of the palm trees next to me and fell on my face from the vent in the ceiling.

Man! Those are some nasty looking pictures you sent! The bad boy that stung me fits this profile: https://en.wikipedia.org/wiki/Arizona_bark_scorpion#/media/File:Bbasgen-scorpion-front.jpg

infact some species are becoming endangered because people eat them covered in sweet sugar syrup as treats

I don't mind if those insects become extinct. Yep, people eat them in Arizona, too. Whenever I go there, I sometimes buy a few of these:
http://archive.azcentral.com/centennial/news/articles/2012/01/04/20120104things-made-in-arizona-scorpion-lollipop.html . I don't eat them -- my stomach is too sensitive.

tyrOctober 13, 2016 4:27 PM


@Wael

You might want to knock out your shoes before putting
them on; some of Arizona's nastier things like to
climb in looking for moisture at night. The
centipede won't kill you but it has the most
painful venom in the area. I hear someone got
stung by a Tarantula Hawk ( blue black 3 inch
wasp with bright orange wings ) but I always made
sure to avoid incidents with them by giving them
a wide berth. Arizona is a lot like OZ, lots of
nasty things for the unwary to get bitten or
poisoned by.

@Clive, FigureitOut

What is weird is that most of the incentive for
the microcoding in CPUs came about in an attempt
to increase the yield per wafer of big CISC in silicon.
By placing the ability to reprogram a defective chip,
with redundancy areas already built in by the masking,
they were able to get better yields of functional
CPUs. That this opened a huge security hole for any
one who could reach those functions was considered minor
at the time. Just like the ability of all the Net nodes
to communicate was considered a useful feature.

Long ago we were told that complete testing of CPUs
was impossible. They didn't reference the halting
problem, but it was for similar reasons that the test
suites only cranked them through a quick and dirty
functionality check after they were sliced from the wafer.
Keep in mind that that was Intel 8080s, which look
like a joke next to a modern Intel CPU. Meaning that
the problem is orders of magnitude bigger now with
no clear solution in sight.

The bootstrap I was referring to was related to how
much memory and storage was available at that time.
If you had to keep the setup for I/O resident in
your 16K memory, you didn't get much workspace left.
Most of modern speed-up functionality was tried as
discrete logic speedup solutions before being
folded into the CISC processor: look-ahead, prefetch,
high speed caches etc.

Clive RobinsonOctober 13, 2016 5:07 PM

London Zoo "Big Ape Escape"

Late this afternoon a mature male "silverback" gorilla escaped from its "den" into a secure area[1] the keepers use to access the den.

This is how the BBC reported the incident,

http://www.bbc.co.uk/news/uk-england-london-37650295

And this is how an infamous UK "t1t5-n-13ums" Red Top reported it,

http://www.mirror.co.uk/news/uk-news/women-trapped-london-zoo-after-9041469

Ahh, UK "White-van-man" MSM at its mucky best ;-)

Oh and the Gorilla is alive and well, although a little grumpy, and will no doubt be plotting future adventures.

[1] The behind-the-scenes area is secure because it's the access point through which escape is most likely.

rOctober 13, 2016 9:42 PM

@Clive,

The contrast is interesting; it's a stark reminder of the white-van dilemma you were illustrating.

BUT

If I may,

The other side of that coin is that the BBC is downplaying the issue that was [out of|at] hand.


@Wael,

Arachnids, they're older than we are - unless you, like me, subscribe to the whole god-is-an-autodidact thing. Artist, Painter, Parent, Theorist. It really is magnificent, isn't it?

WaelOctober 14, 2016 12:58 AM

@r,

unless you like me subscribe to the whole god

Actually, I do. But why does a scorpion resemble a lobster... hmmm! Looks like they came from the same assembly line. Or perhaps one is a knockoff? You could say a scorpion is a miniature knockoff lobster with a backdoor ;)

@tyr,

I always check my shoes before I wear them. Not because of scorpions, but because one of my six cats put a badly wounded mouse inside one of them as a gift for me :) It was an ugly feeling...

FigureitoutOctober 14, 2016 1:07 AM

tyr
--I see potential where a user could load in some signed microcode (or build their own that matches signed code; you need a "micro" assembler, apparently) and be able to verify at least the timing of instructions, which could be documented. In theory, if the microcode is the same but the hardware changed, that may cause some weirdness that reveals a hardware flaw.

That's one scenario where it may be good for security, but those documented tests could be falsified naturally, and if a network interface can reach where the microcode is stored, it can't be trusted anymore.

Otherwise you have to trust the hardware, but most of us have no way to really verify it anyway (I'm happy enough loading code that toggles pins with certain timing and seeing it line up, then changing it and seeing it line up again, showing at least some programmable control; but the tiniest of bugs can be devastating... bleh, it's exhausting). So why have a tiny area of attack surface that could corrupt every computation on the chip, and a design (multi-bus CISC with microcode) in which bugs would be hard to track down even when littered with probes?
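The timing-fingerprint idea above can be sketched in a few lines. This is a loose host-level analogy only (the function, repetition count, and workload are my own illustrative choices; an OS-level timer is far noisier than probing real instruction timing, so treat it as a sketch of the approach, not a tool):

```python
import time

def mean_ns(op, reps=100_000):
    """Average wall-clock nanoseconds per call of `op` over `reps` runs."""
    t0 = time.perf_counter_ns()
    for _ in range(reps):
        op()
    return (time.perf_counter_ns() - t0) / reps

# Fingerprint the same operation twice. On unchanged hardware and software
# the two means should agree within noise; a persistent drift on identical
# code is the kind of "weirdness" that might point at a change underneath.
baseline = mean_ns(lambda: 12345 * 67890)
recheck = mean_ns(lambda: 12345 * 67890)
print(baseline > 0 and recheck > 0)  # True
```

In practice you'd record the documented baseline once and compare against it on every later run, which is also exactly why falsified baseline documents defeat the scheme.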


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.