Steven Hoober October 19, 2012 5:53 PM

Not sure what I think of it yet, but have you been tracking the new Apple connector?

Supposedly, the /cable/ has a security chip. Very odd system, where Apple (so we’re told) controls all of the chips, and as a peripheral maker you buy them from Apple.

Or, apparently, a “security” chip. Supposedly already cracked, which makes me wonder what the point was. Not that they are necessarily bad at security, but maybe its true purpose is to increase cost, or… something else I can’t decipher.

J. October 20, 2012 8:42 AM

W.r.t. the Apple connector, has anybody checked similarities to the bq26100? Hooked up an oscilloscope to the pin used for communication and looked at the signals?
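For context, the bq26100 is a TI authentication IC that answers SHA-1 challenge/response queries from a host over a single-wire interface. A minimal sketch of that general challenge/response scheme (the key, lengths, and byte layout here are made up for illustration, not the chip’s actual protocol):

```python
import hashlib
import os

def device_response(secret: bytes, challenge: bytes) -> bytes:
    # The authenticating chip digests its hidden key together with the challenge.
    return hashlib.sha1(secret + challenge).digest()

def host_verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    # The host, which shares the key, recomputes the digest and compares.
    return device_response(secret, challenge) == response

secret = b"shared-device-key"   # hypothetical key provisioned at manufacture
challenge = os.urandom(20)      # fresh random challenge per authentication
resp = device_response(secret, challenge)
print(host_verify(secret, challenge, resp))  # True for a genuine device
```

The security rests entirely on the secret staying secret; once the key is extracted from one chip (as the comments above suggest happened with the Apple connector), any clone can answer correctly.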

Davis X. Machina October 20, 2012 10:08 AM

“Supposedly already cracked, which makes me wonder what the point was…”

Killing the second-source market. The spread on a MacBook replacement battery from an OEM and from Apple for my 2009-vintage unit is rapidly approaching $100.

Think of getting that for a cable.

Nick Leverton October 20, 2012 4:27 PM

#sucks teeth and wanders round kicking the tentacles

insurance on that mate ? It’ll cost you a fair few squid …

Clive Robinson October 20, 2012 7:31 PM

@ Nick Leverton,

That joke is almost but not quite as bad as,

Have you heard about that short doctor who’s opened up a new market in elf insurance.

Or the various vet joke variations with the punch line of “That’s sick squid”.

MikeA October 21, 2012 12:53 AM

Presumably Apple has the same point as a certain video-game mfg with their lockout chip. It serves as both a tax on third-parties and a way (if Apple goes the original v.g.mfg route of forcing 3rd parties to use captive factories) to keep innovative products off the market (via “unfortunate delivery problems”) while Apple readies its version.

Clive Robinson October 21, 2012 5:59 AM

OFF Topic:

As some of you are aware, I take an interest in the security (or lack thereof) of medical equipment, both implantable and diagnostic.

What some of you might also be aware of is the civilian trauma expression “the golden hour”, which basically says that in major trauma, actions carried out within the first hour have significantly better outcomes for patients than those done after it. Some people might even have heard of “the platinum five minutes” talked about with battlefield injuries, where soldiers can bleed to death in less than five minutes, or be brain dead from respiratory failure, etc. etc.

The whole point is that the faster you take action, the higher the probability of success for the patient. So fast and reliable diagnostic equipment is vital for trauma and other kinds of patients.

So as a patient you really, really do not need diagnostic equipment that is slow or inaccurate because it’s bogged down by malware…

Which is sadly happening all too frequently these days, not just in the hospital but all the way back into the design stage.

Nick P October 21, 2012 10:43 AM

@ Clive Robinson re medical malware

I would say we’ve all seen this coming but I’d be lying. Of course, medical equipment makers use COTS OS’s and their boxes get compromised. That was predictable. The part that caught me by surprise is that they’re intentionally not patching & claim the FDA is the reason.

I’ve heard of FDA software guidelines & reviews that were about ensuring quality. It’s shocking to think that they don’t have provisions for easily patching OS vulnerabilities. Until I can review the issue further, I have a guess: the regs were written with standalone, embedded, basic code in mind. They probably have an expensive maintenance/update portion that doesn’t fit economically with the penetrate-and-patch game the unregulated PC market plays. So, once medical is in that game, we have “free” penetrate, “expensive” patch, & only one of the two sees mass adoption. 😉

Side note:

And then Kaspersky thinks he’s going to make a bulletproof OS. That news bite is still funny to me. Even if he combed our discussions, he’d still probably fail. Plus, any good security engineer knows that “secure”, “general purpose”, and “runs legacy” are about impossible to achieve together.

Martin Bonner October 21, 2012 10:55 AM

FDA guidelines: Probably assume that patched code is not noticeably different from new code, and that it all needs to be revalidated. Of course, that ignores that old code + malware is not the validated system either. It would be nice if the OEMs were lobbying for this. (I wonder what the NHS guidelines are like – it wouldn’t surprise me if they consisted of “use the FDA guidelines”.)

Figureitout October 21, 2012 12:02 PM

Paper from link provided by @kingsnake in previous blog post.

Whereby someone w/ the right knowledge and equipment (hence not your Joe Blow immoral thief) can eavesdrop on power consumption (and begin to form patterns of living, thus giving optimal time frames for burglary) in neighborhoods w/o physically scoping targets and looking suspicious. The data isn’t encrypted, and the recommendation was “defensive jamming” (which wasn’t really given a good enough explanation, in my view). I say the transmissions should just be turned off, or sent only once every month or so.

Easy, 12 pg read w/ links to more resources.
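The eavesdropping risk described boils down to simple signal analysis: once raw consumption figures can be read off the air, occupancy stands out against the idle baseline. A toy sketch of that inference, with made-up readings and a made-up 2x-baseline threshold (not the paper’s method):

```python
# Hypothetical half-hourly meter readings in watts for one home.
readings = [180, 190, 175, 820, 1450, 1300, 240, 200]

baseline = min(readings)    # idle load: fridge, standby devices, etc.
threshold = baseline * 2    # assumption: twice the idle load means activity

# Flag intervals whose usage rises well above the overnight baseline --
# a crude proxy for "someone is home and active".
occupied = [usage > threshold for usage in readings]
print(occupied)  # the 820/1450/1300 intervals stand out
```

Even this crude approach shows why unencrypted broadcasts are a problem: no sophistication is needed to turn the data into a schedule of when the house is empty.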

Nick P October 21, 2012 12:19 PM

@ Martin Bonner

Here’s a few links I dug up on it. In case any readers wanted to research the situation. I’m a bit busy right now, so I might get around to it & might not.

General Principles of software validation – final guidance for industry & FDA staff (2002)

Preliminary thought

The requirements, specification & coding advice in (2012) is typical of assurance-oriented developments. The evidence of the process is as important as the product being certified. However, early versions of Windows certainly weren’t produced by such a process. The “weakest link” concept implies Windows as a foundation breaks the whole assurance chain. The fact that manufacturers are allowed to use it says either the guidance doesn’t apply to their products or exceptions were made. Not a good thing.

Good news is we have many alternative options, even with GUI’s. Linux, QNX, INTEGRITY, popular RTOS w/ graphic middleware, DO-178B components, and even iOS are better bets. (Many of these passed certifications already, as well.) They each have plenty of functionality. Tradeoffs can be made for cost, performance, features & 3rd party libraries/apps. Far as development, there are cross-platform toolkits like Qt & the Firefox runtime with plenty of support & example code. You could say they have more justification for and opportunity to switch over than they ever had. I think they’re just making excuses.

Joe October 21, 2012 10:15 PM

A different take on the “golden hour” which suggests that sometimes intervention isn’t the best approach for trauma patients.

Ari Maniatis October 22, 2012 3:45 AM

Open source election software

The ACT (a territory which surrounds the capital of Australia much like Washington State surrounds Washington the city) voted on the weekend using a mixture of paper and electronic voting. The voting software has been developed over a number of years and is open source, with the software published for all to see.

Note that the software is open source, but not openly licensed. That is, you can study the code but it isn’t Apache/BSD/GPL/etc licensed. The Hare-Clark voting system used in the ACT is insanely complicated with the way preferences flow.

Interestingly, the original reason the software was developed was not cost or speed of getting results, but accuracy of the counting. This contrasts with reports about the reasons electronic systems were implemented in the USA.

Now, open source software is only one part of the problem. How do voters know that this software was indeed loaded on the machines they voted with? What about the other security aspects to a voting implementation from hardware to network and storage choices?

Random832 October 22, 2012 9:33 AM

@Ari Maniatis “much like Washington State surrounds Washington the city”

You mean the District of Columbia. Washington state is over 2000 miles away from Washington the city.

curtmack October 22, 2012 11:34 AM

@Clive Robinson – Back in college (read: one year ago) I was at a NASA-sponsored event touting the praises of graduate school research. One of the groups presented their research on a medical drone for zero-incision surgery. You swallow it in pieces, it assembles itself inside your stomach, the surgery is controlled through a console the doctor has, and when it’s done, it disassembles itself and is passed out over the next couple of days.

I asked the question “So what’s stopping a bunch of teenagers from hijacking the bot and making it carve racial slurs into my stomach during the procedure?”

The response was something to the effect of “Well… hmm.”

GregW October 22, 2012 8:48 PM

It’s not computer security, but I thought some of the rest of you might appreciate some of the technical details of how Lance Armstrong and his team evaded the international “anti-doping” security testing systems, and some of the more fundamental challenges facing “defenders” in that security landscape.

I do wonder if there’s any fundamental wisdom from computer security to be applied to the anti-doping testers’ situations. The economic incentives at play push all the way down from these rarified circles down to the local high school athlete willing to pay or cheat for fame and better odds at an athletic scholarship.

GregW October 22, 2012 9:23 PM

(I just found the full US Anti-Doping Agency report detailing all their findings; see the “Reasoned Decision” tab at and related appendices in another tab. I was curious how the banking and email records referenced (~$1 million in payments!) were obtained during the investigation; it appears the team doctor was involved in an Italian court case and those records became available through that.)

Clive Robinson October 24, 2012 1:38 AM

OFF Topic:

Has one of the senior members of the DHS lost the plot?

Well apparently so if an article in Computer World is to be believed,

Apparently DHS deputy undersecretary for cybersecurity Mark Weatherford gave a speech at a cybersecurity awareness conference in Santa Clara, California a few days ago, which is not unusual (it’s part of the job description).

However, what was unusual for someone in his position is that he apparently prefaced his speech with,

“I have no idea if this is legal or conceptually even possible, but it’s something to think about.”

Before going on to talk about the problems some US banks have had for more than a couple of weeks with DDoS attacks claimed to be out of Iran.
Of which he said,

“It’s got a lot of people’s attention. Not just the banks, but the ISPs and some of the other third-party providers as well.”

And reinforced it with,

“I can tell you, because these big banks have just gone through it, they did not have enough capacity, or they barely had enough capacity, no one was hurt too bad over the last couple of weeks, but we need to think about different ways of sharing resources among like minded organizations,”

And dropped in his little idea of,

“How about developing a co-op kind of a model for these Web content delivery providers, like an Akamai or Prolexic or some of those folks, where you buy a bunch of servers more than any one company might need at one time, but you co-op that for like-minded organizations and when someone needs that kind of service you point it at them and they have it available to them,”

If this is representative of senior DHS cybersecurity level thinking then perhaps they should give the money back and quickly.

Clive Robinson October 24, 2012 1:59 AM

OFF Topic:

Whilst the supposed Iranian DDoS on US banks has been occupying news headlines, it appears Kosova wants in on the act as well…

According to Sophos’s Naked Security site the “Kosova Hacker’s Security” hacked the NOAA servers at,

Apparently the KHS’s reason for this “me too” activity is US Gov “anti-Muslim” attacks,

“They hack our nuclear plants using STUXNET and FLAME like malwares , they are bombing us 27*7, we can’t sit silent – hack to payback them,”

You have to wonder at the obviousness of this if in fact it’s not as portrayed, but somebody else running a “fund-raiser” etc. as we get closer to US Presidential election time…

Clive Robinson October 24, 2012 2:28 AM

OFF Topic:

Here is an interesting one that’s popped up on the Federal Register,

NIST are looking for you to give them your resources (if you are a US company) to assist in filling their National Cybersecurity Center of Excellence (NCCoE).

Or have a look at the NCCoE website,

Sadly I get the feeling NIST are looking for specific solutions as opposed to generic frameworks. Although specific solutions are nice, it’s a case of “giving a man a fish”, whereas generic frameworks are “teaching him to fish”.

Clive Robinson October 24, 2012 4:58 AM

OFF Topic:

Some of you might have been following various side conversations on this blog regarding the issues of security where the measures do not involve the whole computing stack, from humans down to the actual physics of the way the hardware works.

Well, over at the UK’s Cambridge labs they have some info on one aspect where there is a gaping gulf: the hardware–software interface.


As always there are other issues in security, one of which is the much-bloated elephant in the room: security is a quality process, and without a quality process security has not got a chance.

Poul-Henning Kamp, who has been around for a few years, has valid comments on why the Bazaar has failed, but in the process it has left several generations of programmers without the knowledge of how to build more than shanty towns of inexpertly reused detritus left by others, which in the majority of cases cannot stand the test of even a single mild winter. Worse, the bazaar has turned them into lost generations, without the knowledge that there are buildings designed to last centuries of winters, discontented or otherwise.

Nick P October 24, 2012 10:37 AM

@ Clive Robinson

Thanks for the two links. I don’t know how that SRI work slipped by me. The more research we have into securing hardware & interfaces, the better. The Clean Slate programs are also still in progress. Slow and steady does it. 😉

Figureitout October 25, 2012 12:20 PM

@Clive Robinson
RE: Bazaar v. Cathedral

Did you read the comments section of the link? What he is saying is that my generation is intellectually incapable of designing a “cathedral”. As much as I hate my generation, you also have to look at who (sometimes no one) raised and taught this generation. I try to imagine a time when there was no internet or as much tech., and I wonder if I could think more clearly than today. Meditation helps, but only a little. I think a big problem is learning things in the wrong order.

I don’t know how far my little knowledge quest will take me, but I’m quite fine if my creations don’t resemble modern tech. at all; they will be quality.

Clive Robinson October 25, 2012 5:46 PM

@ figureitout,

What he is saying is that my generation is intellectually incapable of designing a “cathedral”

Hmm, whilst I agree that many if not most of the 90% increase due to DotCon were, and may still be, incapable for various reasons, I would only put it down to lack of intellectual capacity in some.

It’s like a motor mechanic who changes tyres on wheels and swaps parts of damaged exhaust systems being called an “engineer”, which we see in the UK a lot (it does not happen so much in other northern EU countries, due to “official” as opposed to “unofficial” professionalism, whereby you have to prove yourself by examination to be worthy of holding the title by law, much as doctors etc. have to).

If you are not required by law to be qualified before you practice, then anybody can hang up a shingle and advertise for work. If demand outstrips supply, as it did in DotCon, then the price rises, and this attracts chancers who to an inexperienced ear “talk the talk” but, instead of “walking the walk”, take the cash and run with it.

The reality is it takes between five and fifteen years to train a top-flight engineer in theory and practice. There are no real short cuts, except for the very few who can concentrate for more than the norm of five hours a day and have something akin to a photographic memory, to soak in and process vast amounts of information.

Another issue that I’ve mentioned in the past is the various forms of “code cutter” ills. There are two basic parts to writing code, and to understand this you need to think in terms of spoken languages: there are the basic mechanics of speaking and other rules that are common to many languages, and then there is the specific language itself. A person may be very fluent in a single language but totally incapable of speaking any other language, because their whole understanding is from the top level of that language downwards, carrying all the language’s foibles with it. However, a person who speaks several languages well, if not fluently, has usually learned the basics or fundamentals well, and then sees the foibles of any given language for what they are: a mere gloss or shine that is fairly irrelevant.

You see this with code cutters: they never learn the language-independent fundamentals and foundations such as ADTs etc. What they have learnt is all the little foibles of the specific language that allow some modicum of improvement in run time or some other metric. Thus they are only capable of making small incremental improvements that are usually lost in the Moore’s Law improvement before the code even gets out the door. An engineer, on the other hand, knows how to make the fundamental changes that bring the big scalable improvements that carry on well past Moore’s Law timescales. Further, engineers don’t care what the programming language is, because they design their programs at the fundamental level, not the superficial “froth on top” level, and usually they apply actual engineering scientific/mathematical processes rather than the “brush strokes” of the artisanal approach of your code-shop code cutter.
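A small sketch of the language-independent view being described: code written against an abstract data type’s operations rather than a concrete representation, so the fundamental (algorithmic) improvement is a swap of implementations, not a scatter of language-specific tweaks. The classes and names here are illustrative only:

```python
# Two implementations of the same "membership set" ADT.
class ListSet:
    def __init__(self):
        self._items = []            # linear scan on every lookup: O(n)
    def add(self, x):
        if x not in self._items:
            self._items.append(x)
    def contains(self, x):
        return x in self._items

class HashSet:
    def __init__(self):
        self._items = set()         # hashed lookup: O(1) on average
    def add(self, x):
        self._items.add(x)
    def contains(self, x):
        return x in self._items

def dedupe(seq, make_set):
    # Written against the ADT's operations, so it works with either
    # implementation; swapping ListSet for HashSet changes the asymptotics
    # without touching this function at all.
    seen, out = make_set(), []
    for x in seq:
        if not seen.contains(x):
            seen.add(x)
            out.append(x)
    return out

print(dedupe([3, 1, 3, 2, 1], ListSet))  # [3, 1, 2]
print(dedupe([3, 1, 3, 2, 1], HashSet))  # [3, 1, 2]
```

That is the “fundamental change” versus “go faster hack” distinction in miniature: the O(n) to O(1) swap scales with input size, whereas a micro-tweak to the list scan would not.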

A major area that shows this up is code reuse, the “Holy Grail” of management. Whilst a sound engineering approach, tempered by much practical but varied experience, will allow this, a single-language code cutter is only going to deliver useful reuse by chance.

This carries forward from programming through the maintenance phase, and the general result of code cutting is not maintainable code but a jumbled assortment of “go faster hacks” on an ill-conceived design, with little or no uniformity in interface design and often many hidden assumptions built into the base code instead of being abstracted out to a much higher level.

One of the biggest holdbacks on secure programming language design comes down to edge cases on objects and data types, especially where inheritance is involved. Although they should not be, these edge cases are used as the norm rather than the exception by everyday coders, for the perceived (but usually slight at best) speed or other advantages. Thus the bulk of code in existence cannot be carried forward to a secure programming language: the built-in language assumptions have to be reworked out, and they usually extend so far down into “code cutter” produced code that it would actually take longer than a complete rewrite from scratch… Sadly the issues appear to have become “formalised” into “standard libraries” with the likes of C++ and Java, and endemically so with “web” languages.
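One well-known example of the kind of inheritance edge case being described (my illustration, not from the comment) is the classic square/rectangle problem, where a subclass quietly strengthens an invariant and breaks client code written against the base type:

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def set_w(self, w):
        self.w = w
    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)
    def set_w(self, w):
        self.w = self.h = w     # keeps the square square...

def stretch(rect):
    # Client code assumes widening a Rectangle leaves its height alone.
    rect.set_w(10)
    return rect.area()

print(stretch(Rectangle(2, 3)))  # 30, as the caller expects
print(stretch(Square(2)))        # 100 -- the hidden assumption has leaked
```

A language or verifier that wants to guarantee behaviour has to either forbid this kind of override or force the invariant to be stated explicitly, which is exactly the rework most existing code cannot survive without a rewrite.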

So yes, there are many code cutters who can only “code cut” Bazaar “market stalls” from the detritus of previous Bazaar code cutters’ failures. They have never started with the raw building-block materials and produced an engineered product of simple but elegant functionality such as a “house”. So they totally lack the skills to properly design and manage a project where you take the raw minerals and forge and carve them into building blocks that, properly designed and managed, turn into a thing of majesty and beauty such as a “Cathedral”. Sadly, some actually lack the intellectual ability and imagination to do so, and if they were ever to visit a Cathedral they would be incapable of learning from what they would experience.

As I’ve said before, there are way too few people capable of doing secure design at the OS or app level; we need some method by which we can leverage their skills so that everyday code cutters can use their “off the peg” items safely. Look at it this way: materials scientists and product design engineers design such things as nuts and bolts and other base parts, and specify them in such a way that other design engineers can engineer them into a new bridge or building etc. with confidence that the parts will not fail them. Can we say the same for software?

Hardly… we don’t have an engineering methodology in software design. We work with “patterns”, which is what pre-Victorian craftsmen used to make coach and waggon wheels: there was no science involved, just the “break it and bodge it” mentality of “bolt a bit on” to fix a problem where it fails rather than where it originates, which is the same as putting a Band-Aid on a broken bone.
