Comments

Chris January 2, 2015 6:05 PM

Hi, not sure how this is measured. I run about 8 hidden services, and the purpose of those services is to get information, logs, etc., but also to serve as a backup re-entry point if something goes wrong; then perhaps the hidden service still works. And btw, none of those hidden services is public or even shared, they're MY own hidden services.

Child porn: well, I can understand that this is a problem for those interested in it, and perhaps they use it, who knows. So what? That's, in my opinion, not Tor's problem; it's a social problem.
There is a problem here with the calculation, and it's this.

How many people interested in child porn versus how many people not interested in child porn are running Tor hidden services? I might be brainwashed, but I see many more people and possibilities for using hidden services that have nothing to do with child porn; therefore this, in my opinion, is bull!”#!”#it…

Wael January 2, 2015 7:18 PM

The second topic related to Moore’s law:
2- The relationship between Moor’s law and surveillance growth. Good article

My advice: Live your life with your eyes wide open, because Moore’s Law of Mass Surveillance is here to stay.

The article has some intresting and well thought observations…

Wael January 2, 2015 7:20 PM

@Figureitout,

See how I dropped the ‘e’ in “Moore’s” law accidentally in the previous post?

Nick P January 2, 2015 7:38 PM

Cracked.com’s 5 Most Embarrassing Failures in the History of War

http://www.cracked.com/article_19981_the-5-most-embarrassing-failures-in-history-war.html

The first one is about a force of evil that I've encountered before: emus. Cool-looking, quick, huge, mean-ass birds. I knew a Navy SEAL who told me he was repeatedly attacked by one while fixing some farm machinery. He said it would wait until he was really into the engine, run up to kick/peck the crap out of him, and swiftly go back into the woods before he could swing at it. I saw some up close on a farm and they're wild looking. Turns out that before they were terrorizing SEALs they stomped the Australian army. An epic and hilarious win.

So, seeing a Five Eyes country defeated by emus, the logical solution is to air-drop a bunch of them on Ft Meade and other spy HQs. Then enjoy the show with a high-quality brew. 🙂

Nymphet Screencaps, Adolescent January 2, 2015 10:56 PM

Naturally, the first post is a puke funnel for a meretricious study commissioned from a C-list academic. The implied prominence of kiddy porn on Tor, if it’s not simply fabricated as usual, reflects the fact that 80% of Internet pedophiles are cops pretending to be pedophiles, and half of the remainder are pedophiles pretending to be cops pretending to be pedophiles. These nitwits infest every privacy service on the web, entrapping each other and pestering the public with inane come-ons that wouldn’t fool Gary Glitter.

With child porn, as with drugs, the US government does not want to interdict the trade – they want to control the trade for NKVD-style kompromat. When the FBI stumbles upon kiddy porn, they continue to sell it to entrap all-purpose informants. CIA maintains a steady supply of child exploitation content with protected chomos like Lawrence E. King, Marc Dutroux, Jeffrey Epstein.

This ‘80% CP’ study, like the choreographed Twitter harassment rhubarb with Pando, is a non-crypto attack on Tor. It’s encouraging, because if the NSA vermin had compromised Tor, they wouldn’t be trying to discredit it.

Figureitout January 2, 2015 11:25 PM

RE: childporn on TOR
–Blame the perps not the tools used, it’s the sick mind that will use whatever tools to abuse the vulnerable in the worst ways. I don’t know how many times this has to be repeated, I guess since new/younger people simply haven’t been exposed to it yet. What did they use to film the CP? A camera?! Fcking ban that sht! If you use a camera you support CP…I won’t continue w/ this logic (or lack thereof) but for people like OH ME OH MY! this needs to be spelled out again and again.

Wael RE: forgetting the ‘e’
–Yeah well I don't want to hear it, spell my name right. It's called respect and proofreading, and even still there'll be typos. Speaking of typos, in the SPARC link, did they seriously say “treaded” instead of “threaded” in the title of their paper? I hope that was an error on the website, it just looks bad. If they don't catch those little errors, it leaves you wondering what else they missed…

Regardless, yeah smartphones, they're incredible (and highly insecure, yeah yeah). Touch screen computer, high-quality camera, PHONE (original purpose lol), wifi device, and a spectrum analyzer too, amongst other things. What gets me is the spacing there: antenna guys will tell you it's getting f*cking impossible to squeeze in better antennas in the space limits they're given. You have GSM/LTE, Wifi (2.4G & 5G), bluetooth, and maybe more. The chip doesn't have a lot of cooling for it.

You’ve got some of the best engineers working on these and like other leading edge tech. just making it work is an accomplishment, let alone the security for all the features we “supposedly” want (just make it secure!). When I was trying to help someone out find a photo the phone apparently deleted, I also found photos the person did “delete”…the memory mismanagement is a problem on desktops but way more so on smartphones…

They should ultimately be treated as cool toys and if you want more-secure comms you have to look into VOIP and other RF devices.

RF Authentication, a Mini-Version (to be expanded…)

I've finally had some free time to fiddle around; right away it was nice to have a decent demo firmware burned in that worked like a charm (it didn't have OOK modulation due to a “glitch” lol, sometimes that's all you can say about a bug that owns you). It takes some time to get up and running and compile the example programs, which I'm getting to. I won't bog you all down w/ the details and the documents you need to read (I'll actually condense the ones that cut to the chase and leave links to the full material if need be). Once I get that, I believe I could get a chat program working via serial port, which would be way cooler than a radar-detection system.

I'm talking about an RF dev kit from Silicon Labs [PDF Warning], the MSC-DBSB8. I just think it's a cool board: the LCD has a backlight, it has serial and USB ports, and it can be powered via USB/battery/external 12V supply (I'm using 12V, 1A). It supports OOK, FSK, and GFSK modulation. I'm using the single-antenna board (Si4432) but it can still be a transceiver/receiver/transmitter. You could actually get 255 of these off the bat and set up a long chain of them to relay a message (using an estimated 2KM range w/ the dinky antenna and 20db TX-power, that'd be a conservative 510KM or 317-mile range). NOTE: no, I'm not affiliated w/ Silicon Labs at all even though this sounds a bit like an ad…it's a quick working system and I happened to get my hands on a nice dev kit for free.
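
To make that range arithmetic concrete, here's a trivial back-of-the-envelope sketch in Python; the 2KM per hop and 255 units are just the assumptions above, not measurements:

    # Rough relay-chain range using the assumptions above:
    # up to 255 units, each hop conservatively good for ~2 km.
    units = 255
    hop_km = 2.0                      # assumed per-hop range w/ the stock antenna

    total_km = units * hop_km         # the conservative figure quoted above
    total_miles = total_km / 1.609344

    print(f"{units} units x {hop_km:.0f} km/hop = {total_km:.0f} km (~{total_miles:.0f} miles)")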

Right away, you can select 903, 906, 908, 913, 917, or 919 MHz frequencies (these can be changed a lot), up to a 128 kbps data rate, 256 IDs, 64-byte packet length (technically extendable to 255 but we'll see), and as many packets sent as you want. Now the fun part: w/o even hooking up any other test equipment I could see the waveform looking how it should w/ a tool everyone in RF should have, the RTL-SDR dongle. Right on cue, a crystal-clear FSK waveform shows up, and GFSK (smaller, with the harmonics scrunched together more), using Gqrx software.
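
If you want to see roughly what that FSK looks like without the dev kit, here's a small NumPy sketch that synthesizes a 2-FSK burst and recovers the bits by comparing the energy at the two tones. All the numbers (tones, bit rate, sample rate) are made up for illustration; they're not the Si4432's actual settings.

    import numpy as np

    # Illustrative 2-FSK; frequencies and rates are arbitrary, chosen only
    # so the example runs quickly -- they do not reflect the Si4432.
    fs = 48_000              # sample rate (Hz)
    bit_rate = 1_200         # bits per second
    f0, f1 = 4_000, 6_000    # tone for '0' and tone for '1' (Hz)

    bits = np.random.randint(0, 2, 64)
    spb = fs // bit_rate     # samples per bit

    # Modulate: each bit becomes a burst of the corresponding tone.
    t = np.arange(spb) / fs
    signal = np.concatenate([np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

    # Demodulate: correlate each bit period against both tones, keep the stronger.
    def tone_energy(chunk, f):
        ref = np.exp(-2j * np.pi * f * np.arange(len(chunk)) / fs)
        return abs(np.dot(chunk, ref))

    recovered = [
        1 if tone_energy(signal[i * spb:(i + 1) * spb], f1) >
             tone_energy(signal[i * spb:(i + 1) * spb], f0) else 0
        for i in range(len(bits))
    ]
    print("bits recovered OK:", list(bits) == recovered)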

So to the obvious part, the authentication. This isn't good yet, and since it's standard COTS, obviously don't rely on this for overthrowing gov'ts or whatever else you guys want to do lol. The preamble of the protocol is just a 010101010101 sequence that begins the actual pairing; that can of course be spoofed easily. Access to the rest of the protocol and those parameters could be spoofed too. As far as risk compared to authentication over any internet connection goes, I'd still rely on it personally.

Scenario

So you have a buddy; you guys are exchanging top secret data on your latest hack, or just formulating strategy for the video game you play, or just chatting b/c you're lonely/bored and you want to talk to someone confidentially! lol. Via encrypted channels YOU guys already set up (and the secret keyword for an authentication signal), you should get a preset demo message via this kit using the TX/RX feature already in the firmware (that's going to change real soon). ID numbers, packet length, and modulation mode must all be exchanged manually, for good strength, to sidestep any internet attackers. You need an antenna for longer range too, which is something that will also be addressed.

You tell your bud to authenticate him/herself by sending 500 packets of data. If you receive them, you can reasonably assume in real time it's truly your buddy on the line, even w/ eavesdropping. If not, exit the program, turn off the PC and unplug the router. Someone was potentially MITMing you (it's creepy & disturbing as all hell if it actually happens to you). From there, you have to re-evaluate your setup and back up important data to a storage device you will now access from a pseudo-trustworthy device (it could still be storing malware), and potentially wipe HDDs or build another ISO and make new CDs and flash the router and so on and so on…
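
Here's a toy sketch of that check, purely to illustrate the idea; the keyword, packet count, and framing are made-up placeholders, not anything the demo firmware actually defines:

    # Naive version of the "500 packets carrying the shared keyword" check.
    EXPECTED_PACKETS = 500
    SHARED_KEYWORD = b"sekrit-keyword"     # pre-arranged out of band

    def looks_like_my_buddy(received_packets):
        """Right number of packets, each carrying the keyword."""
        good = [p for p in received_packets if SHARED_KEYWORD in p]
        return len(good) >= EXPECTED_PACKETS

    # Simulated receive buffer standing in for whatever the radio hands you.
    simulated_rx = [b"ID:01 " + SHARED_KEYWORD + b" #%d" % i for i in range(500)]
    print(looks_like_my_buddy(simulated_rx))    # True

    # Note: an eavesdropper replaying yesterday's capture would also pass this
    # test -- that weakness gets picked apart further down the thread.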

Conclusion

Pros: Easy/quick and way more work for remote attackers (at the onset, after a while they may catch on and have time to set up an eavesdropping station, or just be malicious HAMs).

Cons: Not made w/ security as a design goal. Limited to Windows OS right now, not sure if it’d work on a VM, that’d probably be good to test. Very, very common modulation modes. Likely cheaper/better solutions that can be purchased/made.

Bottom line I want an EXTERNAL means of authenticating, and not a text message or phone call either for “2FA”. Not just SMS, different means are needed. An attacker in your computer “technically” won’t be privy to this depending on how much you chat about it (like what I’m doing) and connect to infected PC’s.

Thoth January 2, 2015 11:35 PM

@Oh My!
Have you wondered if those “Child Pr0n” or generally “Pr0n” stuff are actually run by Agencies or HSAs in an effort to collect (map), intercept and disrupt ? They have their agendas too… They are powerful and can run lots of Tor nodes if they want… And the worst is, they are the Law of the Land themselves (they live above the Law) and they can impersonate/host/intercept/interrupt …etc… you can imagine… That means they have no restrictions at all on how they do their business, which includes hosting “honeypots” and stealing identities as well. It's all business to them, and when Congress comes knocking on the door, they can choose to dismiss the Lawmakers and Representatives or even outright replace Decision Makers or silence them quietly if needed.

There is always something sinister behind the curtains in this day and age.

Wael January 2, 2015 11:57 PM

@Nick P,
Re cracked.com:
“Guam Tries to Borrow Ammunition from the Enemy” was pretty funny, especially the body of the email that was supposedly stuck in the spam box (in the 19th century too.)

Benni January 3, 2015 12:33 AM

NSA/GCHQ have attacked the German Chancellery with Regin: http://www.heise.de/newsticker/meldung/Offenbar-Spionagesoftware-Regin-auf-Rechner-im-Kanzleramt-entdeckt-2507042.html

This is actually great news. They should place Regin and Stuxnet in every German ministry and on the personal phones and computers of all German politicians. Additionally, they should place bugs in their offices and homes.

Only when Merkel is thoroughly reminded of her former life under the Stasi in the GDR will she do anything against NSA/GCHQ. I really think the CIA must send more double agents into Germany…

Wael January 3, 2015 12:45 AM

@Figureitout, @Dirk Praet,

did they seriously say “treaded” instead of “threaded” in the title of their paper? I hope that was an error on the website…

Yup! Not the website’s fault. Google “highly-treaded” and you’ll be surprised 😉 I’ll comment on your other stuff later. Pretty intresting, by the way.

Someone here made the reverse typo. I am sure @Dirk Praet doesn't know what I am talking about, and my reply to him 😉 Now I didn't bring this up to make fun of him! I seized the opportunity to correct an inaccurate statement I made (the discussion @Dirk Praet refers to was treading on thin ice.) Previously I said “Muslims are not allowed to eat horses”. I was wrong; eating horses is permissible according to most scholars. What happened just now is mind-boggling to me! Today I made the statement that I am aware of other inaccuracies I stated in the past, and will correct them when the opportunity arises. So,

1- I post a link with a typo that @Figureitout flags
2- I remember that @Dirk Praet made the reverse typo
3- It was in reference to something I wanted to correct but said I will refrain from talking about

So I get to correct it! Boy, wouldn’t it suck if I was wrong again?

Rick January 3, 2015 1:38 AM

Apple iCloud hack tool reportedly permits dictionary attacks on user accounts. Additionally, apparently these attacks can also be directed against previously locked accounts:

http://www.independent.co.uk/life-style/gadgets-and-tech/news/icloud-accounts-at-risk-after-hacker-releases-tool-allowing-access-to-any-login-9954303.html

If true and verifiable, then a number of accounts will have already been breached given that the average user will likely have specified a short, vulnerable password. Mitigated, of course, if Apple successfully patches it immediately.
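
Some rough arithmetic shows why short or dictionary passwords are the weak point here; the guess rate below is an assumed figure for an unthrottled online endpoint, not a measurement of Apple's service:

    # Worst-case time to exhaust a password space at an assumed guess rate.
    guesses_per_second = 1_000            # assumption: unthrottled online endpoint

    spaces = {
        "100k-word dictionary": 100_000,
        "8 random lowercase letters": 26 ** 8,
    }

    for label, size in spaces.items():
        days = size / guesses_per_second / 86_400
        print(f"{label:28s} ~{days:,.1f} days worst case")

A dictionary of common passwords falls in a couple of minutes at that rate, while even a modest random password holds out for years, which is why the average user's short password is the realistic target.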

Unfortunately, convenience always maintains an adversarial relationship with security. Never more true than with the cloud. Privacy is even more threatened by the cloud; you can never truly trust those who own, maintain and facilitate the servers even when security is above par. Hence the need for the vested interests of owners and stakeholders to be perfectly aligned with the privacy interests of the consumers. And sadly, trustworthy warrant canaries are useful in this day and age, too.

Dirk Praet January 3, 2015 6:49 AM

@ Wael

I am sure @Dirk Praet doesn’t know what I am talking about, and my reply to him 😉

Actually, no. I do remember that threat. It was a real treat 😎

Clive Robinson January 3, 2015 7:27 AM

@ Thoth,

Have you wondered if those “Child Pr0n” or generally “Pr0n” stuff are actually run by Agencies or HSAs in an effort to collect(map), intercept and disrupt ?

I’d kind of assumed they had…

If you look at the other “horsemen of the apocalypse”, such as drugs and terrorism, that Western Politicos trump on about –rather than actually running their countries “within their means”– then you see law enforcement “trying to deliver” rather than being treated as the failures they are otherwise destined to be on these matters.

Thus the law enforcement agencies, knowing they cannot deliver lawfully as the public would expect, first bend, then mangle the laws before breaking them, to the point they become criminal enterprises in their own right…

Part of this is “entrapment” in its various forms, some of which are legal and some not. Thus the LEAs try acting as “customers” or “vendors” to gather intelligence or evidence, which is –within reason– legal. The first problem with this, however, is “product”, which is what the laws are generally about. The LEAs need product to act as vendors, and can obtain it by buying it to use for selling. However, actually completing a product-shifting transaction is as much of an offence for them as it is for you in many jurisdictions.

However, this illegal behaviour by LEAs is usually covered up by a bit of inventiveness, and the usual court game allows them to get away with it.

Except when occasionally it's so blatant the Judge feels it's in “the public interest” to probe deeper, at which point many grandstanding high-profile LEA cases crash and burn, ending not with merit for a job well done but ignominy for the officers and others concerned, and on very rare occasions termination of employment or imprisonment.

So if, as we know, they are doing this with drugs and terrorism, it would be logical to assume the same applies to other crimes that politicos want to demonise for their own benefit.

Perhaps you should wonder at what happens when such grandstanding cases end in tail spins of smoke and flame, much to the embarrassment of many, including the politicos?

Well, you can see it in action in the UK with the Minister Chris “failing” Grayling –who holds the title “Justice Secretary”– trying to stop judges and others making enquiries into “official lawlessness” by trying to change the rules on “judicial reviews” in the “Criminal Justice and Court Bill”. He uses the tired old excuse that a few are frivolous and cost the taxpayer money… the reality is few are even close to being frivolous, as “failing” well knows, having had his nose put out of joint by losing at least four judicial reviews quite publicly in very recent times.

But worse, Failing Grayling has been caught “misleading parliament” –polite speak for lying– with regard to his legislation. He assured the House of Commons that there was an “exceptional circumstances clause” judges could exercise, which it transpires was not the case. And worse still for Failing, there is clear evidence that he must have known that to be the case. Thus the House of Commons, deceived by his assurances, voted against amendments the senior House (Lords) had added to the Bill. The Lords were so incensed by “failing's” behaviour they immediately made the deception clear and sent the amendments back to the Commons.

Hopefully common sense will prevail and the PM David “Eton Mess” Cameron will find another job more suited to Failing Grayling's talents, though for the life of me I can not think of such a post in a reputable government. Even “tea boy” requires the ability to “stir things up” in a proficient way whilst avoiding “getting into hot water” in the process, things Grayling appears congenitally incapable of.

mike~acker January 3, 2015 8:34 AM

On Product Liability

Security starts when you press the POWER switch.

UEFI is an important step forward: UEFI should help by making sure that a correct copy of the O/S is loaded when you power up.

after that: the O/S must (1) ensure that it does not permit any unauthorized modifications to itself, (2) ensure that each application program is isolated from other application programs, and (3) ensure requests for access to data have proper permissions.

product liability law must be written to describe the responsibility of the operating software and assign liability for clean up to the o/s maker. at least initially liability should be limited to cleaning up the operating software. as we shift from a Wild West mode of computing into something sensible we will have to learn as we go.

applications must learn to implement digital authentication technology such as the GNU Privacy Guard or equivalent. it is reasonable to expect customers to understand the need for authentication and to effect same, although a little training will be required along with better software implementations.
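
as a concrete illustration, a detached GPG signature plus verification is only a couple of calls; this sketch assumes gpg is installed with a default secret key already generated, and "report.pdf" is just a placeholder file name.

    import subprocess

    # Sign a file with a detached signature (produces report.pdf.sig)...
    subprocess.run(["gpg", "--batch", "--yes", "--detach-sign", "report.pdf"], check=True)

    # ...and later verify it with the signer's public key.
    result = subprocess.run(["gpg", "--verify", "report.pdf.sig", "report.pdf"])
    print("signature good" if result.returncode == 0 else "signature BAD")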

much is already known about implementing secure operating software. we just need to reclassify security from a “nuisance” to a requirement.

Thoth January 3, 2015 9:00 AM

@Clive Robinsons
Why aren't nations using laws to prosecute those blatantly lying in Congress/Parliament when they were supposed to take an oath of truth ?

So much for having a bunch of politicos who simply escape the law (and probably punishment for lying under oath) and not only walk off free but get their agendas driven into the law itself.

So the saying that the powerful get more powerful and the rich get richer seldom fails here … they indeed get away with a whole ton of trouble by being powerful and rich, and the commoners get the bad end of the stick.

I am wondering if the eagerness to prosecute someone (sounds like an adrenaline rush) got the better of those LEAs/LEOs, or maybe their angelic white-knight belief that they are the saviours of mankind got the better of them (and may also spike their hormones and adrenaline), so they trip over into all the nasty behaviours and believe that, for the greater good (probably influenced by some drugged-up state of mind), they can do whatever they want, including killing innocents, to get the job done at the end of the day. Probably they might need many cups of tea for a month before getting back to their job (to take a break from those adrenaline rushes and the hyped state) ?

Thoth January 3, 2015 9:13 AM

@mike~acker
Most products are built around efficiency, with pretty much either ad hoc or fast development lifecycles to push out the next money-milking opportunity. Security is a difficult thing and it is seldom properly understood and cannot be rushed. It is also pretty hard to market security products to the masses as they don't get it.

Our modern PC/laptop hardware designs rarely contain any secure element or proper design to allow a secure execution environment. Most of these modern chipsets are designed for general use without much security consideration, and that is the main reason why things like UEFI may not work properly when they should have been a secure design: the hardware, as the first line of execution, is not secure.

What do I mean by secure hardware ? It must be able to detect and prevent physical tampering and probing. It must be able to have a high/low side split of security and a concept of cryptographic key management. We don’t see that in most modern CPUs or chipsets.

Most crypto is implemented as some form of software program (GPG, Open/LibreSSL, SSH ..etc…) and it needs secure hardware to begin with; otherwise it would be meaningless if someone could open the machine covers while it is running, probe the crypto keys and make changes by influencing the keys via side-channel attacks. Even a high-security kernel would not be able to fully realize its secure properties if someone could just edit the bytes in physical memory and grab crypto keys by reading the memory.

A high-security kernel like seL4 already exists (http://sel4.systems) and many other high-security OSes/kernels exist as well. Why haven't the common mainstream OSes and kernels caught up with these secure kernels and OSes ? I am guessing the reason might be the difficulty of integrating these high-security features into the mainstream OSes and kernels, as they might require extensive changes (most of them are monolithic).

BoppingAround January 3, 2015 10:06 AM

Nymphet Screencaps, Adolescent,
You again. I have to admit that I like your writing style.

re: kiddy porn on Tor
How do they manage to find it? I have never been able to find even one site; it’s easier to stumble upon it on Clearnet than within Tor.

…what the hell is that red dot on the wall?

Nick P January 3, 2015 11:39 AM

@ Thoth, All

Another example with much more activity is Genode OS. I can’t remember if I told you about them. They started out as European research into microkernel-based, secure systems. There was a microhypervisor using formal methods (NOVA) and an architecture for decomposing the whole system (Genode). They spun them off into open source. It’s been expanded a LOT since then.

The thing I like about this community is that they’re doing it pretty clean slate. They’re starting with a solid TCB and design principles. Then, they work from there building a component at a time. Like I suggested to QubesOS team, they’re leveraging the strongest components (eg NOVA, Fiasco.OC) out of other projects whenever they can. Anyone wanting to build on (and further secure) a project should look into GenodeOS. They have more momentum than any other one that I know of.

It would also help to keep MINIX 3 in mind. Make the components generic at first so it can be ported to various microkernel projects. Then, specialize it for whichever one you’re contributing to. Someone else might pick it up later for another project.

Note: Looking at Thoth’s seL4 link, I also noticed they’re using seL4 kernel in an OS education course. Various OS functionality is implemented on top of it by students. This is a clever decision as it increases the likelihood that people will work on it or other microkernel based solutions.

Nick P January 3, 2015 11:53 AM

@ Thoth

I was concerned about which FOSS license they had in terms of commercial development. Then, I found out that there’s actually a company involved with the TU Dresden people. These were the people behind the Nitpicker secure GUI and Micro-SINA VPN. People whose work I’ve promoted for years because it’s done right. They offer commercial licenses for the software, support and probably development.

So, GenodeOS isn’t just good for volunteers: it’s a nice start for secure commercial offerings. They even have a real-time graphics card.

Nick P January 3, 2015 12:22 PM

Crafting a usable processor, microkernel, and I/O system with strict and provable information flow security
(paper)

“To demonstrate these principles we have created a synthesizable full-system prototype, complete with a pipelined CPU, a micro-kernel that enables isolation and communication by explicitly controlling all micro-architectural state, and an I/O subsystem that allows off-the-shelf I2C devices to be connected to a single shared bus. Our system can provide caches, pipelines, and support for the micro-kernel in only 1/4th the area and with double the clock frequency as more restrictive prior work. Finally, for a system of size 50K logic gates (approximately) and with only 3264b out of 133kB state specified concretely, we can statically verify that the entire hardware-software stack conforms to a specified information flow policy all the way down to its gate level implementation.” (my emphasis added)

This is some great work. Like SAFE and CHERI, they’ve applied the TCB concept to the hardware to make the modification tiny. They also integrate it with the separation kernel work which already produced evaluated components. The coolest part of their work is they prove information flow all the way down to the wires (eg gates). They also have good I/O support, which many projects lack. The result is a whole system argument for info flow security from the apps down to the wires. Awesome stuff.

Note: Projects like Genode with Fiasco.OC and products like Dell SCS on INTEGRITY-178B already leverage microkernels designed for security. Conceivably, the TCB could be ported to a processor designed this way. Then, we’d at least have platforms with high assurance isolation of legacy code + low TCB runtimes for new code.

Wael January 3, 2015 1:51 PM

@Nick P,

Crafting a usable processor, microkernel, and I/O…

At the highest level there are two classes of approach to this problem: best-effort and strict. We define a best-effort approach as one that attempts to manage these information flows by closing known existing holes, managing uncloseable channels through statistical techniques such as clock fuzzing, and structuring hardware and software as a whole to make the job of the adversary as difficult as possible. While this is more than sufficient in many scenarios, the most that a best-effort approach can hope to achieve is a demonstration that, subject to the threat model, no known attacks are feasible. A strict approach, in contrast, carries a higher burden for proof – it should be able to show that, subject to the threat…

Not a bad paper. I particularly like their ability to distinguish between the two approaches: best-effort and strict. The nomenclature they use is somewhat fuzzy, but the two approaches map, to some extent, to what I call “attacker’s hat” and “principle based security design”. We touched on that a few times, for example here. In particular, look at the “unproven theories”. I’ll take a closer look at the paper later on if I have the chance.

Itaka January 3, 2015 2:21 PM

@Thoth

I am thinking about the recent article about the police, where they took pictures from a cellphone they had confiscated and used them to set up an account and entrap people. It got out and there were complaints, and all because someone stood up.

But who will stand up for a child whose pictures were used for entrapment? And I could guess that they pretend to be the children's parents, an uncle or someone else. It makes me wonder if it has ever gone wrong, where they sort of outed some innocent parents or other people and covered it up in some form. It didn't go to court, or it took years and things were ruined. It just needs to happen once.

And they don't want to prosecute people with power, not in the USA, Europe or Russia. ( The exception is maybe Iceland? ) When was the last time any good western democracy actually prosecuted a politician for something serious and made it stick? Or anybody else important.
Maybe they all have skeletons in the closet, and if they take one guy down, what stops them from going after other powerful people?

@BoppingAround
It used to be on the Hidden Wiki; usually you could find pastebins on the regular net with links to child porn as well as drugs, but then after the big raids they got more secretive.

If you want to find it I bet you could; you just need to visit the shady parts and make friends.

http://www.sakerhetspolisen.se/ovrigt/pressrum/aktuellt/aktuellt/2014-12-29-sakerhetspolisen-blir-egen-myndighet-2015.html ( Swedish )
The Swedish secret police have now been moved out from under the Swedish Police Service and placed directly under the government ( under the politicians who make the decisions ), and it was passed without any resistance or any real discussion about it. Seems like a scary step to me.

Grauhut January 3, 2015 3:14 PM

@mike~acker “Security starts when you press the POWER switch.”

But please, without UEFI then; UEFI/SMM is part of the problem, not the solution.

“Attacks on UEFI security, inspired by Darth Venamis’s misery and Speed Racer [31c3]”

media.ccc.de/browse/congress/2014/31c3_-6129ensaal_2201412282030attacks_on_uefi_security_inspired_by_darth_venamis_s_misery_and_speed_racerrafal_wojtczuk-_corey_kallenberg.html#video
youtu.be/ths65a9LH6Y

Thoth January 3, 2015 5:54 PM

@Nick P, Itaka
What would stop police from implanting evidence or using confiscated devices to impersonate people might be solved if all devices had a strong high-assurance security mechanism with a built-in CALEA capability, which Nick P has been harping about. They would need to go through the proper channels to gain certain access, which would draw knee-jerk reactions from many, but it's better than nothing. A properly built high-assurance device wouldn't allow simply walking in and implanting evidence, and would only allow warrant-based collection.

Of course these are not realistic as of now, since no such device exists yet. Research should be directed to that area.

@Nick P
The seL4 kernel pretty much sticks in my mind since “seL4” is quite a short name, and it is quite outstanding in terms of name and high assurance (and it's from General Dynamics).

Thoth January 3, 2015 6:46 PM

A portable system protected by a small-footprint asymmetric PKI like the cryptoGPS protocol (paywalled and ISO-ed) could be used to sign/encrypt every image. In fact, RSA should be easily executable on a portable device, and the only concern is the footprint of all the signed/encrypted data, knowing that asymmetric crypto simply bloats things up. If someone picks up the portable device, a security system in the device detects the inverse or trapdoor PIN keyed into the system and destroys the small-footprint private key, which would invalidate all subsequent photos/videos taken, and the intruder could not properly forge the camera images/videos. The one problem is key distribution of the Public Key, considering that the portable device has a secure element (like a SIM card) to manage the Private Key, although it is known that SIM cards can be tricked and are not that smart after all. (A rough sketch of the per-image signing idea follows below the links.)

cryptoGPS (non-open/paywalled PKI for portable devices)
– Original document under paywall and not retrievable
– Commentaries: https://www.emsec.rub.de/media/crypto/veroeffentlichungen/2010/09/05/20091218_cryptogps_icisc_2009.pdf
– Commentaries: http://cardis.iaik.tugraz.at/proceedings/cardis_2012/CARDIS2012_4.pdf
– Open Implementations (pp. 109 start): http://eprint.iacr.org/2009/516.pdf
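
Not cryptoGPS, but here is a rough sketch of the sign-every-image idea using plain RSA with the Python cryptography package; the key size, the in-memory key, and the file contents are illustrative assumptions (in a real design the private key would live in the secure element):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Device signing key, generated once (kept in memory here for illustration only).
    device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = device_key.public_key()      # distributed out of band somehow

    PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    def sign_image(image_bytes):
        """Sign a captured image so later tampering/forgery is detectable."""
        return device_key.sign(image_bytes, PSS, hashes.SHA256())

    def verify_image(image_bytes, signature):
        try:
            public_key.verify(signature, image_bytes, PSS, hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    photo = b"\x89PNG...raw image bytes..."
    sig = sign_image(photo)
    print(verify_image(photo, sig))                # True
    print(verify_image(photo + b"tampered", sig))  # False

    # "Trapdoor PIN" idea: on duress, destroy device_key so nothing captured
    # afterwards can carry a valid signature.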

Surrender monkeys January 3, 2015 6:50 PM

Hey Thoth, my computer has a built in CALEA!

My built in CALEA is FDE and my password is, if I remember correctly,
‘STOP QUESTIONING ME I WANT A LAWYER’

As you doubtless know, the legislative intent of CALEA is as follows: “Nothing in the bill is intended to limit or otherwise prevent the use of any type of encryption within the United States. Nor does the Committee intend this bill to be in any way a precursor to any kind of ban or limitation on encryption technology. To the contrary, section 2602 protects the right to use encryption.”

A wise commenter already told you how to use your built in CALEA, “Use encryption that renders files indistinguishable from empty space (e.g. truecrypt); never tell anyone anything about the contents of the encrypted partition; don’t voluntarily let the police examine the device at all; take advance steps to minimize the fallout from a “computer is on at the moment of arrest” examination; refuse to answer if you are able to decrypt the encrypted partition; refuse to answer if any files at all exist in the encrypted partition; do not tell the police a bullshit cover story, simply remain silent.”

But you’re not satisfied with privacy, you want a glory hole in every crapper.

When FBI can’t get warrants they just write NSLs. When Secret Service can’t get warrants they ask some goggle-eyed local cop to lie and say they have one. When prosecutors can’t get warrants they use some juridical idiocy like Commonwealth v. Gelfgatt. But still you trust these utterly corrupted and eviscerated warrants. You trust them more than cryptography. You trust warrants so much that you’ll cripple cryptography so that everybody has to depend on warrants.

You know the logic that you use to code and design and stuff? Did you know that works in your daily life too?

Thoth January 3, 2015 8:01 PM

@Surrender monkeys
Thanks for an example of the typical knee-jerk reaction I was mentioning 🙂 .

You can go one of two ways. With a system whose hardware and software are made to be fully protected, you will cease to exist within a few days of operation. They have every means to take you out, including black ops to make you physically disappear forever.

Where are you going to station your development of fully protected high assurance systems in a way they will never reach you ?

What is your secure high-assurance development lifecycle, such that they will never be able to compromise it ?

Where are your secure distribution methods and points so they can never compromise it ?

Let's put it simply: all chipsets are backdoored. All chip makers may already have been subverted or have agents. If you go to any of them with a high-assurance design, I wouldn't be surprised if it set off all kinds of alerts and alarms, and they would find their way in, or approach you personally and make you do what they want you to do by various means.

If you have a properly controlled CALEA capability coupled with high assurance, they can't simply walk in and do what they want. They have to produce proof, and what they can do with the CALEA access is pretty restricted anyway, but at least it satisfies them to some degree. Of course they will turn greedy as usual and want more, but you have shown effort to give them access as needed, which is not unreasonable.

Let's put it this way: crypto has come to the end of its hype. It is a bunch of maths that will only protect something if the conditions are properly met, and mostly crypto doesn't work as intended (from experience in commercial and open source projects) because it all comes down to a huge sum of complex factors.

It's very easy to sit on the high stage and start pointing, but it's not so easy once you get down to the ground and start working.

What have you done or contributed to help make assurance happen ?

I am working on it: https://askg.info

Thoth January 3, 2015 8:11 PM

Oh and it’s still work in progress…

Applied Security Knowledge Group (https://askg.info)

Anyone can post in comments for ideas on topics and stuff to add.

Regarding the HTTPS certificate, it’s self-signed.

Thoth January 3, 2015 9:30 PM

@ … redeem …
The US Govt is not an “enemy” in the sense of a traditional “enemy”. The people behind the posts and offices of the Govt are ignorant, powerful, greedy, misled, selfish. These people do things out of fear, out of self-interest and fear of losing control.

To put it simply, your reactions in your state would simply empower them whereas we try to disempower their disillusionments.

You know that you cannot fight them head-on directly, and they are driven by those desires and fears, so the best you can do is indirect circumvention and pacification. If you are going to outright declare war on them, they have more resources and manpower at their direct or indirect disposal to ensure you will never ever succeed.

But !!!! If you fulfill some share of their desires indirectly, they won't feel too threatened, might be open to dialog, and you can try to convince them. They are, after all, not a whole lot more unreasonable than those who want to bang their heads against the Govt's steel gauntlets.

It’s suicidal, unproductive, unheroic…

You could get into more trouble confronting those powerful people directly than by massaging your way around situations and trying to get the best of both worlds.

And one thing about this blog's comments: we don't like people being rude here and prefer intellectual debate, so you would need to rethink your tone. I believe neither the Administrator nor Bruce would sit easy with such behaviours on this blog.

Mainway Toys and Software January 3, 2015 10:07 PM

What are you the manners police now too?

Let us examine your business plan: you build a worse mousetrap and the world beats a path to your door. Who exactly is going to buy this crippled product, or adopt it?

And what do your television infomercials say:
♪♫ It works great when you don’t need it! ♪♫
You will need a really catchy jingle.

Are you going to trick and trap people into using it? That worked for Microsoft… Perhaps you too could be as universally loathed as rich connected prick Bill Gates.

Or maybe you hope that the government will shove it down everybody’s throat. Like the Clipper Chip!

In the high-powered world of venture capital the technical managerial economics term for this is fucking stupid.

Nick P January 3, 2015 10:54 PM

@ Wael

“Best effort” and “strict” is the distinction I make when I say Low/Medium vs High Assurance. We can only put confidence in High against strong attackers because it requires clear security properties with strong arguments that they hold in all situations. Anything less doesn’t seem to work.

Btw, while you’re looking at that one, check this one out. He likewise took an engineering approach while carefully noting the properties and trying to formally show they hold. Interesting work.

re theories

They’re a mixed bag. It would take whole essays worth of information to support or refute them. Interesting concepts though. I still like your mobile security requirements and breakdown best. It’s a nicely condensed version of our discussions on this blog that could be implemented by 3rd parties.

@ Thoth

re intercept

Remember I specified mine be read-only to prevent planting evidence. It’s easier to implement than you think. It’s much like prior work on securing memory or database operations. Some of that went to high assurance. Just got to keep it simple and make sure the interface limits what the search can do.

@ surrender monkeys

The quote you cited doesn't protect you at all: [forced] escrow is a possibility. Plus, the leaked NSA slides say the FBI compels “SIGINT enabling” in U.S. companies for NSA. All I know is that sounds suspiciously like they have ways to force it on people. They're also known to kick in colos' doors and just seize all their stuff, putting them out of business. So, comparing all that to a warrant + intercept, some people find one better than the other.

As such, I assume anyone still in business is either producing a solution weak enough for them to attack or they’re subverted. That leaves the puzzling mystery that is Tor. I know they need it for their own use. So, that’s possibility No. 1. However, it might be that it’s got ideological people, it’s non-commercial, and P2P. The users themselves are the carriers rather than a 3rd party. Maybe someone with more legal skill should talk to their legal counsel to figure out what their defense is and copy that.

@ Mainway Toys and Software

We’ve already had this discussion here. My conclusion was that you’re probably typing this on a subverted machine that’s also made to low assurance standards. It’s wide open to every talented hacker out there. A high assurance backdoor with selective, limited targeting based on warrants on an otherwise highly secure system would mean only one organization could target you reliably. Maybe without ability to plant shit on your machine. That’s several times better than your current situation.

But, hey, I know most people would rather use machines full of backdoors (err, vulnerabilities) that everyone can hack just for the peace of mind they get when the vendor says “but it’s private we promise [with fingers crossed]! And in Switzerland [run by anonymous people]! And you can download our source [which may or may not be the binary we run]!” 😉

Medborgare January 3, 2015 11:15 PM

@Itaka

The Swedish secret police have now been moved out from under the Swedish Police Service and placed directly under the government ( under the politicians who make the decisions ), and it was passed without any resistance or any real discussion about it.

Seems a bit scary to me as well. What are they going to be next, some kind of Swedish version of KGB?

Thoth January 3, 2015 11:34 PM

@Nick P
I was referring to your system. Warrant-based read-only access would be much better than online active interception on no-assurance systems like Microsoft, Apple and some Linux/Unix flavours, where people love to run tonnes of crypto apps thinking it is working.

Nick P January 4, 2015 12:12 AM

@ WoW

I only endorse the built-in firewall. Rumored to stop the strongest opponents of humanity.

Itaka January 4, 2015 3:55 AM

@Medborgare
I don’t know.
http://henrikalexandersson.blogspot.se/2015/01/nya-sapo-jobbar-inte-at-polisen-utan-at.html

He made a post about it, and in the comments they have some theories, one being that it might have to do with the data collection that the FRA does. It does not seem that far-fetched to me; they could get their own systems and their own rules in a new way.

I don’t really know but I would like to have some input by people who know what they are talking about. Not sure who to ask though, can’t think of many organizations that deal with this stuff in Sweden. Do you have any ideas?

mike~acker January 4, 2015 8:32 AM

@thoth

=”Security is a difficult thing and it is seldom properly understood and cannot be rushed. It is also pretty hard to market security products to the masses as they don’t get it.”

ummm….. much that is done on computers every day would be difficult or impossible for many people except that the necessary functions have been “packaged” in an easy to use fashion.

the same can be done with security.

Thoth January 4, 2015 10:14 AM

@mike~acker
It is true that we could package security into user computers but the fact we don't, and have left a huge gap in security, pretty much raises eyebrows.

A security chipset would probably cost a couple of dollars (USD) for a smartcard-type security processor rated at CC EAL 5+ (smartcards usually cost about USD $20 due to packaging, branding, the OS, and additional software called security applets …etc…) if you strip it down to the chipset itself, which is smaller than your pinkie finger's nail. Imagine how many of those chipsets you could effectively embed onto the motherboard of your PC ? You could comfortably squeeze quite a few of them on board, and it would simply cost no more than USD $50 without the additional stuff, in my opinion, unless someone (probably @RobertT) from the chipset industry can correct me.

Where we end up is users needing to explicitly ask for security, and the integration of security devices with our PCs is rather … awkward … Try setting up a smartcard login to your desktop vs. sticking in a USB flash device. USB flash devices are supported by default (the insecure option) vs. smartcard support that would probably take you half a day to set up if you are trying to get the drivers into the OS to be recognized.

It is quite baffling as to why few chipboard makers have ever made the efforts to integrate security modules with security processors directly into the chipboards they sell. Processors that directly support AES-NI are still not considered security modules with security processors until they are rated according to FIPS 140-2 and CC EAL to at least attain a “reasonable industrial rating” of FIPS 140-2 Level 2 and CC EAL 5+ which smartcard chips are rated for as their minimal requirements.

The security chip is simply an example of how cheap and easy it is to engineer a proper security module into a modern chipset which seldom gets done.

The next part is trusted secure execution of code (TPM modules), where sensitive code is executed in trusted environment spaces. Again I would point to smartcards due to their compact yet powerful nature. A smartcard system contains an underlying OS that virtualizes the applets into their own sandboxes, which effectively turns them into multi-applet trusted secure execution environments (sandboxes). If something small and cheap can do a TPM's job in a small form factor, I don't see why modern computing systems could not engineer such TPMs, with proper certifications, into their systems and sell them as a package, but it rarely gets done.

It is mysterious why the things mentioned above are not yet mainstream. One reason might be the need to tweak research and development lifecycles to take into consideration the implementation of security, no matter how simple it seems (like buying a bunch of security chips and integrating them), because makers need to know how the security module and TPM systems work, how to integrate them properly, and how to plan the extra lifecycle stages.

Clive Robinson January 4, 2015 12:27 PM

@ Thoth,

It is quite baffling as to why few chipboard makers have ever made the efforts to integrate security modules with security processors directly into the chipboards they sell.

Actually it’s not baffling, and some suppliers such as Sun and IBM have put SIM size smartcards on their motherboards.

Currently there are two main reasons why smartcards get used with PCs and servers. The first is for “personal identification”, the second is for “resource lockdown / management”.

When used with PCs for “personal identification”, the smart card reader is placed where it is most convenient for the user, which is in the keyboard. When used with servers for “resource lockdown / management”, it is put in the server where it is reasonably accessible to a service tech.

If motherboard manufacturers thought there was a market for smart cards to be used with PCs, it's still unlikely that the reader would be on the motherboard or any other card in the standard slots, because it would be “around the back” and thus difficult to get to at best. It would more likely be built into one of those “multi memory card readers” and put in where the old “floppy drive” used to be.

As it happens I have been involved with such a beast, where the smart card is like an IME that sits between the USB interface and the MC; it was a prototype I was involved in designing some years ago. Sadly it did not go into production because the Gov Dept involved wanted to pay China mass-production FMCE prices for a bespoke security system, and could not understand why the devices would be over ten times more expensive.

Sancho_P January 4, 2015 3:41 PM

@ Thoth, Clive Robinson

“… we could package security into user computers but the fact we don’t …”

-/- ”… suppliers such as Sun and IBM have …”

It is funny when you say this.
I guess there are only a few persons who know exactly what’s already inside the chipset of our consumer machines.
Not everything may actually be usable, or indeed used, nowadays.

To get things going it needs an incentive = money.
Security doesn’t have it (today).
Content providers usually have the money if their content is protected; e.g. they could protect you from watching the same film too often (thereby not violating your license agreement).

Wouldn’t it be great if security and protection could be combined in the very heart of the machine, reflowed onto the main board, simultaneously securing and protecting the user?

Let's think one step beyond nuisances like UEFI, to a point in time where a silent worldwide “update” could suddenly stop any machine [1] from running its dangerous OS (like XP or anyX or whatever), or where a dedicated “update” could stop that particular terrorist (machine) from watching child porn.

No one could then run any other software on that machine (or machines) without connecting to our security database to get the “go ahead” – or not (/wet-dream).

Bear with me, only a few days left in the dark age of the insecure Internet.

[1] That could “prevent war” if e.g. used against rogue states like NK.
/sarcasm

Nick P January 4, 2015 4:29 PM

I recently tried to use a particular open source email client with my Gmail account. The authentication failed despite using the right credentials and same IP/location as usual. I logged in via Gmail to see an alert that an intruder tried to access my account. It said that if it was me then:

“You can switch to an app made by Google such as Gmail to access your account (recommended) or change your settings at (link) so that your account is no longer protected by modern security standards.”

Authenticating over an SSL link is not acceptable by “modern security standards.” What the hell standards are they using? Lol.

Grauhut January 4, 2015 6:48 PM

@Nick Why is free antivirus free? Because it's an easy way to scan other people's hard disks and send files for “checking” into a cloud? 🙂

Undermine our values January 4, 2015 7:41 PM

If you’re the President of the United States and a couple of fired Sony suits made a fool of you and a laughingstock of the FBI, then you just impose illegal sanctions with no evidence:

“officials said they could not establish that any of the 10 officials had been directly involved in the destruction of much of the studio’s computing infrastructure”

http://www.nytimes.com/2015/01/03/us/in-response-to-sony-attack-us-levies-sanctions-on-10-north-koreans.html?smid=tw-share&_r=0

Thoth January 4, 2015 8:43 PM

@Clive Robinson, Sancho_P, mike~acker
I was referring to taking smartcard chips (instead of entire cards) as in-built security modules and security processors (static chipsets as a HSM) besides having optional portable card readers for dynamic ones.

@Nick P
Gmail account ? Why not instead just use other more privacy enhancing mail in a more liberal country 🙂 ?

Don't want to get into the technicalities of what defines “privacy enhancing mail” and “liberal country” but you know what I mean. About time to move away from those mail servers that read your email contents.

BP January 4, 2015 10:34 PM

Sysops take note. Don’t let yourselves become part of the fascist trap and fall in line with bad requests:

The ten points of the Nuremberg Code
The 10 points are (all from the United States National Institutes of Health):
The voluntary consent of the human subject is absolutely essential. This means that the person involved should have legal capacity to give consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, over-reaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him/her to make an understanding and enlightened decision. This latter element requires that before the acceptance of an affirmative decision by the experimental subject there should be made known to him the nature, duration, and purpose of the experiment; the method and means by which it is to be conducted; all inconveniences and hazards reasonable to be expected; and the effects upon his health or person which may possibly come from his participation in the experiment. The duty and responsibility for ascertaining the quality of the consent rests upon each individual who initiates, directs or engages in the experiment. It is a personal duty and responsibility which may not be delegated to another with impunity.
The experiment should be such as to yield fruitful results for the good of society, unprocurable by other methods or means of study, and not random and unnecessary in nature.
The experiment should be so designed and based on the results of animal experimentation and a knowledge of the natural history of the disease or other problem under study that the anticipated results will justify the performance of the experiment.
The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury.
No experiment should be conducted where there is a prior reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects.
The degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.
Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability, or death.
The experiment should be conducted only by scientifically qualified persons. The highest degree of skill and care should be required through all stages of the experiment of those who conduct or engage in the experiment.
During the course of the experiment the human subject should be at liberty to bring the experiment to an end if he has reached the physical or mental state where continuation of the experiment seems to him to be impossible.
During the course of the experiment the scientist in charge must be prepared to terminate the experiment at any stage, if he has probable cause to believe, in the exercise of the good faith, superior skill and careful judgment required of him that a continuation of the experiment is likely to result in injury, disability, or death to the experimental subject.

Nick P January 4, 2015 10:49 PM

@ Thoth

I have three email accounts: a very old Yahoo account with my name in it for public things; an old Gmail account without my name for still untrusted things; a newer privacy-oriented account that you’ve already seen. Yahoo and Gmail accounts I keep just in case I get something critical from a service I forgot about that requires email. Password resets for forums, newsletters, online billing, and so on. Reliable delivery and cheap storage are more useful there.

Nick P January 4, 2015 10:52 PM

@ Grauhut

That's funny. Grey hats have been toying with the issue for a long time because AVs are often privileged software that has often been the beachhead for black hats invading a system. It's another reason I tried to avoid using them on hosts wherever possible.

Wael January 4, 2015 11:32 PM

@Nick P,

re theories. They're a mixed bag. It would take whole essays worth of information to support or refute them.

They have been discussed on this forum to a certain depth during the C-v-P topics. Too tired to give references to each, but they are there.

Wael January 5, 2015 12:19 AM

@Figureitout,

antenna guys will tell you it’s getting f*cking impossible to squeeze in better antennas in the space limits they’re given.

Yes, that’s true, although not as impossible as you’d think. I worked with these guys for a few years and sometimes helped in the calibration of RF “stuff”. Still, it’s a fertile field for innovation.

When I was trying to help someone out find a photo the phone apparently deleted, I also found photos the person did “delete”…the memory mismanagement is a problem on desktops but way more so on smartphones…

That’s because “deleting” doesn’t physically purge the photo. Add to that: Wear-leveling and other aspects. There are ways to “securely” delete files, and maybe even products for smart phones.


want more-secure comms you have to look into VOIP and other RF devices.

They all have their weaknesses, but conceptually you are correct, for the simple fact that the attack surface of the alternatives you suggested is significantly smaller than that of a typical MD. However, you’ll have to expand the attack surface of the “VOIP” device by accounting for the weaknesses of the host “VOIP” runs on, in addition to the networking issues.


You could actually get 255 of these off the bat and set up a long chain of them to relay a message (using estimated 2KM range w/ the dinky antenna and 20db TX-power, that’d be a conservative 510KM or 317mile range.

You have solved the technical aspect of chaining devices to extend the range. How do you solve the physical security? Someone can snag your devices. Then there is the “power” problem. You can use rechargeable batteries and solar cells, which could be stolen just as fast. No, I am not suggesting you steal them, I am saying someone will snatch them if you leave them out in the open 😉


You tell your bud to authenticate him/herself by sending 500 packets of data.

It can be replayed unless you’re using an indexed partition of a long shared secret and keeping the state in sync. It’s a form of OTP. You still only covered the authentication part. Confidentiality wasn’t addressed. Maybe you intend to also use the next partition of the shared secret for another OTP to encrypt the channel (or the payload.)
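
Something like this is what I have in mind for the indexed-partition idea (a minimal Python sketch, untested; the pad file, the index bookkeeping and the use of an HMAC over the chunk instead of sending raw pad bytes are all my own assumptions, just to illustrate):

    import hashlib, hmac, os

    CHUNK = 32  # bytes of the pre-shared secret consumed per authentication attempt

    def next_chunk(pad_path, state_path):
        # Read the current index, pull the next unused slice of the pad, advance the index.
        index = int(open(state_path).read()) if os.path.exists(state_path) else 0
        with open(pad_path, "rb") as f:
            f.seek(index * CHUNK)
            chunk = f.read(CHUNK)
        if len(chunk) < CHUNK:
            raise RuntimeError("shared secret exhausted -- exchange a new pad")
        with open(state_path, "w") as f:
            f.write(str(index + 1))          # never reuse a chunk
        return index, chunk

    def make_token(pad_path, state_path, challenge):
        index, chunk = next_chunk(pad_path, state_path)
        return index, hmac.new(chunk, challenge, hashlib.sha256).digest()

    def verify(pad_path, challenge, index, tag, highest_seen):
        if index <= highest_seen:            # stale index means a replay
            return False
        with open(pad_path, "rb") as f:
            f.seek(index * CHUNK)
            chunk = f.read(CHUNK)
        expected = hmac.new(chunk, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

The receiver has to remember the highest index it has accepted, which is the “keeping the state in sync” part; confidentiality would still need the next partition of the pad.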


Bottom line I want an EXTERNAL means of authenticating, and not a text message or phone call either for “2FA”. Not just SMS, different means are needed. An attacker in your computer “technically” won’t be privy to this depending on how much you chat about it (like what I’m doing) and connect to infected PC’s

Ok, an out of band authentication mechanism. You still have to be careful… Should be an interesting project!

Figureitout January 5, 2015 1:07 AM

Wael RE: typos
–I googled it…dude, people don’t mis-spell words in the title of your paper! I honestly don’t give a f*ck if it’s just nonchalant chatting or grammar stuff (languages change…otherwise I would be speaking like Shakespeare), or that cool brain research (a Jimmy John’s has a sign I see sometimes where you can leave the first and last letter, jumble the in-between and the brain can still read what you’re saying…also psychology research into memory where people remember beginning, middle, end and usually forget the parts in-between), but it’s a presentation of your work! But a typo on a schematic can result in a blown component (hopefully one easily found and replaced) and typos in code…do I even need to say that’s a problem..? I would’ve sworn on my mother that I didn’t have a typo in code until the compiler caught that..grrr!…stupid comparison/assign bug eg: if(i==2) not if(i=2) error. My frickin’ eyes deceived me, not my brain’s fault, it was my eyes I swear! :p

Also, ummm… your little joke: Pretty intresting, by the way.–You forgot the f*cking e again, not funny! :p

RE: reply (still haven’t told me what you’re working on…if you want more pumpkin butts tell me :p )
–RE: antenna
—-I had a nice intro to the field, and damn it’s getting advanced. Unfortunately it’s so interesting to me so I want to know even if I don’t understand…

–RE: deletion
—-Yeah, probably some sort of special data structure to store it in, in the first place. Still, I think it’s wrong for the tech. industry to call it “delete” when it is just removing the pointer. Android studio was pissing me off too as it was adding code on its own…like, aw hell naw. Screw that.

–RE: VOIP-based comms
—-Yep, the data needs to be secure before transport (pfft, cop out lol). I’m not exactly sure what I’d use or how, but I think it can work.

–RE: physical security
—-I don’t like that problem as it’s affected me too much, too creepy. I’d rather just fake being gone and then shoot someone who shows up. There isn’t much of a solution unless you start getting murderous.

–RE: authenticate
—-I wasn’t adding my own “payload” to the tests but other parts of protocol are adjustable. Seems like the ID’s are stored in an EEPROM as they remain after flashing flash memory so exchanging a pre-shared secret sequence of ID’s to use would add to it (using your mind or pseudo-random program on numbers from 0-255). W/ this chat program I could but it wasn’t working and I’m not sure about getting a program to work, I’m scared it’ll be too much for me w/ USB-to-serial and Windows driver code…I could “hard code” the messages, not what I want though. I’m afraid I’ll have to revert to a simpler platform but then it’s harder to get nice interface w/ x86 WINTEL PC’s…Then to make it reliable (not too many packet/bit errors) means plenty of signals to intercept…ugh! Hard…

Nick P RE: UCSB paper
–Pretty good.

mike~acker && Grauhut RE: UEFI exploits
–Just watched that CCC talk, pretty good (besides…very nervewracking, you don’t even need to make a very good trojan, just write garbage and you brick PC to be very hard to diagnose). Also they mentioned something about attacking HDD controller as a way for a killing rootkit. Funny they thought it was wierd to have to zero out flash memory before programming, yeah that’s what you do when you get lower lol (can still program on top and have programs running, which is ugly no doubt). Main thing was race condition brute force attack and buffer overflow by writing a bit way outside where it should be and that this can be done w/ just admin priviliges on Windows. I was eating a sandwich while I watched so I wasn’t taking notes and also the actual exploits and implementation details will take awhile to get.

Was wondering what you guys got from the talk?

Bottom line: These are well-known exploits (presenter mentioned “not that exciting, but hey we can brick your PC”) that can kill your PC. I would hope my fellow nerds out there wouldn’t want to waste a PC by bricking it and then likely get put in trash, but probably not.

Also, as most attackers and researchers will painfully admit, having to reverse engineer a system that’s different is very annoying for an attack. Back in “wild wild west” where there were like what 5-8 BIOSes a single attack to compromise them all remotely (physical attacks are cheating, yeah we can all hook up a programmer and overwrite the protection bits) would be rare. I’m not looking forward to purchasing my first PC w/ UEFI…And welp, this attack is out there now (being patched but still..how far can a patch go sometimes) so f*ck…

otseven RE: OT7
–Nicely commented and formatted code (you use the style I like), but 16,000+ lines would take a while for someone like me to go over lol. Having good test programs that exercise as much of the code as possible is important. Do you know how the program will be integrated w/ other platforms or is it a standalone program?

Thoth RE: askg.info
–Looks good, my 2 cents is keep the info consise but rich, so be picky what you put on it. And…you know…you’re prepared for attacks right? I know you are just want to know how…Regardless thanks for being willing to host.

OT RE: RF-authentication via SI4432 radio module
–Able to compile the “hello world” blink-an-LED program (which had to short a couple pins, which is wierd but ok) and was able to flash the newest demo firmware w/ OOK modulation after frantically looking for the original firmware when I flashed over it. Just trying to get used to SiLabs’ way of doing things (haven’t worked w/ them much, code is a little “closer to the metal” so slightly more difficult). Also worry about being able to explain what exactly’s going on as I delve deeper w/o “losing” people in register values. Things were going so smoothly, unusually smoothly..too smoothly…then boom, finally a failure…Turns out, I didn’t know about this before, there’s a chat program w/ a virtual com driver aptly named “EZlink Chat” that would’ve done almost exactly what I wanted! Well it’s not f*cking EZ lol and there’s little documentation to troubleshoot, so yeah…I’d have to bust open the programs and dig into more filthy drivers…Windows terminals won’t work w/ this chip, they mentioned something about dropping support so SiLabs had to make their own, and their terminal program isn’t working at all and throwing up an error I’ve never heard of before so…It’s unlikely I write a good reliable terminal program to work w/ this board so I’m looking, but porting one to this chip will be tricky, especially on Windows (yuck). I’ll get something like that working eventually, probably Arduino. Something f*cky’s going on, it would freeze up my computer and the board is heating up around the USB pins and I can smell burnt metal but no severe damage, I honestly don’t know why…I thought after smelling burnt metal I’d burn out a USB port but nope, thankfully. I’m not a paying customer (have limited code compile space) but there’s a power issue. Man that’s annoying. Still an interesting platform, and they’ve stopped recommending this one for newer designs so I bet the newer boards are easier to get up and going and will be better supported. Also there’s a HopeRF module w/ this chip: http://www.hoperf.com/rf/fsk_module/RFM22B.htm and it’s probably been used for educational purposes and maybe applied to some opensource projects: http://www.radioeng.cz/fulltexts/2011/11_04_758_765.pdf

Also, I’ve noticed this on other platforms I work on, but as it stands the LCD screen flickers while flashing the board and flickers while stepping thru code w/ the debugger, so that’s a quick EMSEC problem, which is not a surprise at all, but not good for security. It stands to reason pertinent info will leak during code execution (way beyond just what’s meant to be transmitted) as it loops thru a relatively small program; it just takes time to pull the signal out of the noise. EMSEC gets tricky when you are…trying to transmit data securely…so yeah, just an observation.

Figureitout January 5, 2015 1:39 AM

Oh forgot, looking into One-Way-Communications (mostly ethernet), need a frickin’ gender-changer piece for my serial cables (always something). Sounds like there are some doable ways w/ 10Mb/s ethernet but any faster and it gets difficult. This guy (Tom Leek) made one-way comms w/ audio cables ( http://security.stackexchange.com/questions/56995/one-way-data-transfer-cable ), link he mentions here (few links are dead): ( http://www.stearns.org/doc/one-way-ethernet-cable.html ) and was mentioning how it’s bad that bandwidth gets low (that’s a good thing for security!) but you have to convert data to audio data. I’ve worked w/ audio a little, making a beacon for a radio; it is easy to unscrew the cover (actually that’s really good for checking that part of the cable where bugs can hide easier) and solder wires.
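
Once the receive pair is physically severed the software end is dead simple, you just blast UDP and repeat yourself since no ACK can ever come back. Toy Python sketch (untested; address, port and repeat count are made-up numbers just to show the idea):

    import socket, sys, time

    DEST = ("192.168.1.2", 9000)   # box on the far side of the one-way link (made up)
    REPEATS = 3                    # resend everything; there is no return path to recover losses

    def send_file(path):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        with open(path, "rb") as f:
            seq = 0
            while True:
                block = f.read(1024)
                if not block:
                    break
                pkt = seq.to_bytes(4, "big") + block   # sequence number lets the receiver drop duplicates
                for _ in range(REPEATS):
                    s.sendto(pkt, DEST)
                    time.sleep(0.001)                  # crude pacing; low bandwidth is fine here
                seq += 1

    if __name__ == "__main__":
        send_file(sys.argv[1])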

Anyone done what this guy is talking about?

Wael January 5, 2015 2:26 AM

@Figureitout,

Also, ummm… your little joke: Pretty intresting, by the way.–You forgot the f*cking e again, not funny! :p

I’m flattered! I like your jokes better 😉

Funny they thought it was wierd

An ‘e’ / ‘i’ joke?


this can be done w/ just admin priviliges on…

Another ‘e’ / ‘i’ joke?


the info consise but rich, so be picky what you…

My, my! A ‘c’ / ‘s’ joke! Haven’t heard one of these in some time! I love it!


And welp, this attack…

This is a toughy, I don’t get it!

I work on many things. Work stuff, and personal stuff. Thinking of new authentication mechanisms is one of them. Also human eye prosthetics (like Steve Austin’s bionic eye.) I can say a thing or two about that. Wouldn’t be hard to link to security, but would be artificial… No pun intended…

65535 January 5, 2015 3:30 AM

@ Nick P

“Why are free proxies free?”

I have been suspicious of proxies like Ultra Surf and the like… but I use them now and then. I will not be using them in the future.

@ Grauhut

“Why is free antivirus free? Because its an easy way to scan other peoples hard disks and send files for “checking” into a cloud? :)” – Grauhut

That is a serious question. If we cannot trust the antivirus vendors, then there is a huge hole in the security system.

What happened with that survey of AV vendors and their answers as to their cooperation with the NSA? There should be an update on this project.

https://www.schneier.com/blog/archives/2013/12/how_antivirus_c.html

@ Nick P

“Authenticating over SSL link is not acceptable by “modern security standards.” What the hell standards are they using? Lol.”- Nick P

I agree that if SSL/TLS is broken then most of the security system is broken [for lawyers, banks and doctors, including HIPAA]. If Google is relying on SSL/TLS to protect passwords [or their hashes] we are screwed. I would guess that if the NSA can break SSL/TLS then so can their counterparts in China and Russia.

If AV software providers depend on certificates for “code signing integrity” we are in trouble. If the AV vendors are in bed with The Hacking Team, Vupen and the NSA we are screwed.

‘Matthew Green Speculates on How the NSA Defeats Encryption’

https://www.schneier.com/blog/archives/2013/09/matthew_green_s.html

[And]

“All of this is a long way of saying that I was totally unprepared for today’s bombshell revelations describing the NSA’s efforts to defeat encryption. Not only does the worst possible hypothetical I discussed appear to be true, but it’s true on a scale I couldn’t even imagine. I’m no longer the crank. I wasn’t even close to cranky enough.” –Mat Green

http://blog.cryptographyengineering.com/2013/09/on-nsa.html

Although Green goes into some hypothetical methods of breaking SSL/TLS, he does not explicitly explain how it is done on the fly. I would like to see more data on this subject – it feeds into the Certificate Authorities and their role in providing help to the NSA [either direct help or weakening of their crypto certs].

I am somewhat uneasy with CAs that create both the private key and the public key [Cough …G@daddy, and so on].

I would like to know the current status of TrueCrypt. I have not gotten a clear picture of the situation – other than to be cautious of it.

@ Undermine our values

“If you’re the President of the United States and a couple of fired Sony suits made a fool of you and a laughingstock of the FBI, then you just impose illegal sanctions with no evidence” – Undermine our values

You bring up a good point. The President is jumping the gun – or eating bad intelligence.

I would go one step further and state that if President Obama wanted to curtail the NSA and its dragnet spying he could just sign a Presidential Order. He campaigned on curtailing their spying but did the opposite. He expanded it – which is very odd. Something smells fishy.

Thoth January 5, 2015 3:57 AM

@65535

“I am somewhat uneasy with CA that create both the private key and the public key [Cough …G@daddy, and so on].”

G@daddy is not a trusted authority on security and their security practices are probably dubious anyway (personal opinion).

It is funny that you mentioned a system (server, web hosting, or CA) that generates the asymmetric keypairs for you; that is one of the biggest mistakes ever.

I recently equipped my website with HTTPS and there was an offer to generate the keypairs and certificates for me which I became suspicious about. Better off to generate them on your own and upload them to your servers personally. Once I had generated my keys and certs offline and transported them to my server, I would say the next step is to access the HTTPS version of the website, download the certificate, and compare it against your offline version of the certificate (check the modulus and hashes), just in case they decide to try anything funny 🙂 .
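
The comparison step only needs a couple of lines; here is roughly how I would script it (Python sketch, untested; host name and file path are placeholders):

    import hashlib, ssl, sys

    def fingerprint(der):
        return hashlib.sha256(der).hexdigest()

    def served_cert_der(host, port=443):
        # Whatever certificate the live server actually presents.
        pem = ssl.get_server_certificate((host, port))
        return ssl.PEM_cert_to_DER_cert(pem)

    def local_cert_der(path):
        with open(path) as f:
            return ssl.PEM_cert_to_DER_cert(f.read())

    if __name__ == "__main__":
        host, pem_path = sys.argv[1], sys.argv[2]   # e.g. your domain and the cert you generated offline
        live = fingerprint(served_cert_der(host))
        mine = fingerprint(local_cert_der(pem_path))
        print("server:", live)
        print("local :", mine)
        print("MATCH" if live == mine else "MISMATCH -- someone decided something funny")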

Clive Robinson January 5, 2015 4:29 AM

@ Figureitout,

I’m not sure what the guy wants with his one way audio cable, but I suspect he might be looking for a circulator or equivalent which you can make with 2wire-to-4wire converters to deal with a four quadrant issue.

Think of the audio cable as a telephone pair: then you have two audio paths being used, from Alice to Bob and back from Bob to Alice, each with their own voltages and currents added together. Over relatively short distances this is fine, but signals attenuate with distance, thus you need to amplify the signal.

There are a couple of problems with this, in that if you connect an amplifier output to its input, it oscillates at a frequency that is the inverse of the effective delay through the amplifier. Most people have heard “howlaround”, where a microphone on a PA system picks up enough of the speaker output to oscillate loudly with an ear-splitting squeal (just before oscillation starts you can also hear “regenerative” amplification, hence super-regen receivers).

The second problem is where the amplifier sits along the length of the cable… if it’s closer to Alice than Bob, then to amplify Bob’s signal to the level Alice needs, Alice’s signal gets amplified to the same extent, which means her levels are easily going to hit the end stop and be distorted, as well as creating a myriad of other problems including crosstalk to adjacent signal pairs.

Most of the problems are solved by taking the 2wire pair into a 2-to-4 wire bridge and use two amplifiers that have independent automatic gain control and frequency selective amplification and delay correction.

However another problem then becomes apparent, which is “echo”: if the load is not correctly matched to the signal impedance then some of it bounces back towards the source, thus “echo cancellation” is needed as well…

A similar set of problems occurs with power supplies and loads that both have energy storage components in them. I’ll leave looking up “four quadrant” issues as “an exercise for the reader” because, unlike 2-to-4-wire bridges, it’s a subject that is still actively taught.

Oh, and likewise similar issues arise for antennas and feed lines where repeaters, transponders and co-located broadcast transmitters are used, where the solution is to use circulators and narrow-band –cavity– filters.

Clive Robinson January 5, 2015 5:07 AM

@ Figureitout, Wael,

Now children, the headmistress wants to know why, when we sing “old MacDonald has a farm” in school assembly, you two burst into fits of giggles when we get to “Eh Ih Eh Ih Oh”. Perhaps you would care to explain, as she’s over there swishing her cane, and looks like she’s in the mood to use it to hand out a little discipline 😉

Andrew_K January 5, 2015 7:56 AM

My two cents on [C]P services being run by agencies.

Yes, seems plausible in some ways, as they really do benefit from offering such services. BoppingAround’s observation fits in: there is no reason not to use the very same technique on the clearnet.

  1. They can gather more information on potential targets (see neighbor thread on doxing) — sexual orientation, interests, fetishes, and so on — mostly unfiltered.
  2. During “consumption time” (which does not necessarily have to be the same as online time), many people let down their guard as a result of distraction. In a rush of hormones, they are likely to open pages they would not open otherwise. They also probably won’t notice malicious activities on their computer during this time.
  3. When you’re hosting stuff that’s paid for, people will hand over their “secret” payment details: the hidden money that wives or husbands don’t know about. Probably not the money they will use to pay for bomb-building ingredients, but nevertheless: additional knowledge is additional knowledge (more relevant in the clearnet than TOR).
  4. It generates traffic. Traffic which can be used for covert channels or further analysis.
  5. If you’re an agency, it’s no problem to get the necessary imagery. Just walk down to the guys who seized the material from the last raid against a CP ring. Part of me refuses to think of agencies producing new CP material. That would be exactly the kind of stuff that turns loyal analysts into whistleblowers.

Happy new year!

vas pup January 5, 2015 9:17 AM

@Benni • January 3, 2015 12:33 AM
“Only when Merkel is completely reminded of her former live under STASI in the GDR, Merkel will do anything against NSA/GCHQ. I really think CIA must send more double agents into germany…” Not really. My guess is that NSA/GCHQ got some kind of NKVD-style ‘kompromat’ on her out of her intercepted phone communications. As a result, they can hook her into doing whatever they want, not in the best interest of Germany.

@Thoth • January 3, 2015 9:00 AM
“Why aren’t the nations using laws to persecute those blatantly lying in Congress/Parliament when they were suppose to take an oath of truth?” Because for most of them the following motto is in use: “For friends everything, for OTHERS – Law”. The same applies internally and internationally, aka a double standard.

@Surrender monkeys • January 3, 2015 6:50 PM and
@Please help me redeem my Nigerian bank account • January 3, 2015 9:16 PM.

The idea is not to dismiss all LEAs/LEOs and Intel, or make them toothless/powerless. In the ’90s Russia had that experience. That is the path to rule by mob/mafia/organized crime groups and/or ochlocracy. The idea is to put in their heads that their function is to be a ‘guard dog’, not the master of the people. They should not have their own agenda except to serve the people (most of the population – not just the 1%). I am against any actions like mobs/riots (like Detroit, Ferguson, or attacking/killing cops in New York). I need them to protect me, not harass me.

@Sancho_P • January 4, 2015 3:41 PM
“I guess there are only a few persons who know exactly what’s already inside the chipset of our consumer machines.”
The solution is to conduct independent verification of chipsets by a lab like UL, funded by private/non-profit entities (like Google, EFF), and get a seal of approval, with the right to thereafter conduct the same tests on similar chipsets as distributed.

Skeptical January 5, 2015 9:48 AM

Had a chance to look at some of the NSA audit documents that were disclosed pursuant to ACLU et al’s FOIA request.

Lot of thoughts, but to toss out just a few:

First, for a bit of levity: one of the funnier incidents involved a new group of analysts who were less familiar with a particular tool than perhaps they ought to have been. In the course of entering a query for information, they encountered a datafield that they believed required them to enter their own information (for accountability purposes). As it turns out, the datafield was actually a means of adding targets to one of NSA’s collection programs, into which these analysts apparently enrolled themselves. Left undisclosed and to the imagination is how exactly this error came to light. “Sir, in our review of the efficiency of our collection mechanisms, we noticed that apparently three of our targets are eating lunch every day at the cafeteria. Their mobile devices are more difficult to crack than we anticipated, so we were wondering if we could just sit a little closer to their table.”

Second, one can clearly see the evolution of a more involved compliance function if one reads through the reports chronologically.

Third, one can see the difficulty NSA had in keeping its compliance functions at pace with the development of new techniques, databases, and the expansion of its facilities, full time employees and contractors. Some of the compliance reports note organizational deficiencies in newer facilities, and if one continues to read one catches glimpses of how those facilities improved and where new problems developed.

Fourth, it does appear that at some point unauthorized queries on US Persons (including companies) began to automatically trigger reporting to the compliance personnel, which weakens some of Snowden’s grander claims regarding what he could do while sitting at his desk (perhaps he could enter the President’s email address for collection tasking, but perhaps the tasking would be rejected and he’d be receiving a visit from some very curious people before the day was finished).

otseven January 5, 2015 10:47 AM

@ Figureitout RE: OT7

Do you know how program will be integrated w/ other platforms or is it a standalone program?

OT7 was designed to be both a stand alone program and also work as a part of a larger system. It can integrate with other programs or scripts because it returns informative result codes when it exits.

One of the original design goals was to build it as a plug-in extension for Bitmessage, but in view of the revelations regarding the general lack of endpoint security on network facing computers, it seems more appropriate as a tool to be used on an airgapped machine.

Nick P January 5, 2015 10:53 AM

@ Skeptical

“Sir, in our review of the efficiency of our collection mechanisms, we noticed that apparently three of our targets are eating lunch every day at the cafeteria. Their mobile devices are more difficult to crack than we anticipated, so we were wondering if we could just sit a little closer to their table.”

LOL. Yeah, that would be some funny shit. You have people scrambling to delete them from anything drone operators can see and so on.

vas pup January 5, 2015 11:59 AM

Dear respected bloggers,
Do you remember all those movies where a security keypad was bypassed by electronic manipulation, analyzing the pattern on keys (fat residue), or other tricks to get unauthorized access? Please see this link about a new technology which could substantially change the balance of the game in similar cases:
http://www.bbc.com/news/technology-30553159

Sancho_P January 5, 2015 12:38 PM

@ Thoth
Re: smartcard chips

Yes, I got that, my point was that similar functions are already in your machine and more will come, be patient 😉
(just open the lid of your chat cam for personal identification)

Re: “reading” emails: the sending server (MSA) must read your email. The question would be whether the content is analyzed for more than catching spam.

@ Clive Robinson, Figureitout, Wael
Re: “old MacDonald has a farm”

Too late, they are already registered at the EU Terror Suspects Database:
Toddlers, watch up!
http://www.telegraph.co.uk/news/uknews/terrorism-in-the-uk/11323558/Anti-terror-plan-to-spy-on-toddlers-is-heavy-handed.html

@ vas pup

“The solution is to conduct independent verification of chipsets by a lab like UL, funded by private/non-profit entities (like Google, EFF), and get a seal of approval, with the right to thereafter conduct the same tests on similar chipsets as distributed.”

Hilarious!
Nice attempt, though. I’ll reward you with our “Chipset Ready+” sticker 😉

Markus Ottela January 5, 2015 3:11 PM

@otseven

One of the original design goals was to build it as a plug-in extension for Bitmessage, but in view of the revelations regarding the general lack of endpoint security on network facing computers, it seems more appropriate as a tool to be used on an airgapped machine.

I made an attempt to solve the issue of end point security with TFC. With C the TCB units could be replaced with microcontrollers which would reduce the attack surface even further.

Grauhut January 5, 2015 3:44 PM

@figureitout: “Was wondering what you guys got from the talk?”

Scale down, use primitive things, like ARM boards, with easily checksummable/whitelistable boot loaders and flash memory. The simpler a system is, the easier it is to have a HIDS that really checks integrity; the smaller a controller’s CPU is, the harder it is to place malware on it. Compile your own distribution for your system, in a fashion that makes it “a singularity”; that makes p0wnage more difficult. Trust nobody, log as much as possible, and let a trusted system automatically scan these logs for anomalies.
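
On a small box the HIDS core is really just this (Python sketch, untested; the whitelist format of “sha256-hex  path” per line is just my assumption):

    import hashlib, sys

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                h.update(block)
        return h.hexdigest()

    def check(whitelist):
        bad = 0
        with open(whitelist) as f:
            for line in f:
                expected, path = line.split(None, 1)
                path = path.strip()
                try:
                    actual = sha256_of(path)
                except OSError:
                    print("MISSING", path)
                    bad += 1
                    continue
                if actual != expected:
                    print("CHANGED", path)
                    bad += 1
        return bad

    if __name__ == "__main__":
        sys.exit(1 if check(sys.argv[1]) else 0)

Run it from a trusted, read-only medium against the boot loader, kernel and your own binaries, and let the log scanner watch its output.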

Clive Robinson January 5, 2015 4:40 PM

@ Wael,

I know from what you have said that C-v-P remains of interest to you, I wonder what you make of @Grauhut’s response to @Figureitout’s question?

Personally I still think “prisons” will be the way of the future, not just for security reasons but for scalability as well, as us poor old sequential-task humans get to grips with parallel tasking [1].

@ Nick P,

Likewise I would be interested on your opinions, for obvious reasons.

[1] I would say multitasking but that has been given “sexist overtones” when talking about humans as opposed to computers.

Clive Robinson January 5, 2015 5:15 PM

@ Vas Pup,

Funnily enough it was more than a few Christmases ago that I gave a small talk on “Holo-Deck Technology” and how it might become possible, as the then “rubber-ware” systems using tactile feedback were, to put it politely, “impractical” [1]

The reason I was asked was that it was known I had done work on using various interference and other techniques to achieve both long-range sound projection and, indirectly, weapons systems.

In essence, if you generate two millimetric-or-less ultrasound beams you can point them at living creatures, and nonlinear effects in the dermis and epidermis cause the signals to mix and thus demodulate onto the nerves in the same way as physical stimulation.

I talked about how this ultrasound projection could be combined with visual 3D projection technology in the future to produce a crude version of the “Star Trek Holo-Deck”, or to give a physical object like a robot an adaptable or chameleon surface when interacting with humans.

However I cautioned that such long distance ultrasound projectors could be weaponised, in that frequencies that are known to produce epileptic type responses in the human brain, could be directly stimulated in the nervous system of a target individual and cause seizures and death fairly rapidly.

It will be interesting to see if and how this technology develops and where it ends up.

I also talked about direct brain stimulation by EM pulses of various kinds, which back then was on the outer fringes of medical research. I thought at the time that this would develop rather more rapidly than it has done, and indicated that it could be built into a helmet along with a visual 3D display, and that this would likely be the first type of “immersive system” rather than a projection system. I guess I’ve made a wrong call on that one… Oh the joys of being a futurologist 😉

@ Wael,

You mentioned artificial eyes as being a subject in which you have an interest; have you read the papers from some of the earlier experiments where over-stimulation gave rise to neurological events such as seizures?

[1] Another presenter referred to it as “the joy of gimp suits for the masses”… A TMI turn of phrase that has stuck in my mind for over a decade, and will probably now stick in others’ minds.

AlanS January 5, 2015 7:17 PM

@Skeptical

“…one can clearly see the evolution of a more involved compliance function if one reads through the reports chronologically…”

Dissembling and manipulation requires “the evolution of a more involved compliance function”. For a decoding of the operation of the compliance function, see Marcy Wheeler’s on-going multi-post analysis of the IOB reports.

For more general arguments on how the compliance function subverts the rule of law see: The NSA’s Culture of “Legal Compliance” Still Breaks the Law and The Surveillance State’s Legalism Isn’t About Morals, It’s About Manipulating the Rules.

Dirk Praet January 5, 2015 8:08 PM

@ Skeptical

Had a chance to look at some of the NSA audit documents that were disclosed pursuant to ACLU et al’s FOIA request.

Your comments probably belong more in the recent Merry Christmas from the NSA-thread.

… it does appear that at some point unauthorized queries on US Persons (including companies) began to automatically trigger reporting to the compliance personnel, which weakens some of Snowden’s grander claims regarding what he could do while sitting at his desk

But at which point ? As mentioned in this report – and also mentioned by Bruce in above thread – an NSA analyst in 2012 searched her spouse’s personal telephone directory without his knowledge to obtain names and telephone numbers for targeting. Which leads me to believe that these findings actually vindicate Snowden’s claim instead of weakening them.

Thoth January 5, 2015 8:26 PM

@Dirk Praet, Nick P
It is rather interesting that the warrant canaries are posted on GitHub, out in the open. Not sure if GitHub is trustworthy enough for critical applications such as warrant canaries.

So, QubesOS’s first warrant canary… does that mean that something went wrong and all their stuff is considered compromised?

Ouch … a WHOIS on their domain name “qubes-os.org” spits out a lot of personal information :S . Not that WHOIS protection packages from domain name providers offer significant protection against state actors and HSAs getting at the identity, but they do make searching via WHOIS much less convenient and add a small layer of difficulty.

Thoth January 5, 2015 8:37 PM

@Clive Robinson

You mentioned fleet broadcast methods before as a way of getting messages across in a more deniable and traffic-resilient manner. Wouldn’t that allow the point of origin of the broadcast to be noticed?

I wonder if a hybrid of point-to-point and broadcast methods could prevent the origin and target nodes in a network matrix from being discovered (especially the origin node). I was thinking that by mixing and alternating point-to-point with broadcasting, you could create more deniability in the event that most of the nearby nodes are unfriendly: you could claim you received a point-to-point OTR message from someone and simply broadcast the message, and the target node could just dump a random encrypted packet and pretend to be a passing node, continuing to pass the message on in hybrid point-to-point and broadcast fashion.

If someone tries to trace the suspected origin node (the previous P-T-P node), the message is only probable and deniable, since it is made to be forgetful and at every node the message may be repackaged on the surface (the main encrypted message should not be readable except by the target node) to throw off tracing.

If this is able to provide plausible deniability to a good extent, some TOR-like protocols or TOR wannabes could at least use a hybrid of P-T-P with broadcast, but I wonder if TOR would be agile enough to pick up such a change after all these years of branding.

Wael January 5, 2015 8:48 PM

@Clive Robinson,

I know from what you have said that C-v-P remains of interest to you, I wonder what you make of @Grauhut’s response to @Figureitout’s question?

It remains of interest. I kept trying to watch the video without success; the audio seems to work. I’ll have to wait until I am able to watch the video before I say anything. I’ll also have to sort out my thoughts regarding the prison.

Wael January 5, 2015 10:20 PM

@Clive Robinson,

You mentioned artificial eyes as being a subject in which you have an interest; have you read the papers from some of the earlier experiments where over-stimulation gave rise to neurological events such as seizures?

I remember reading something of that sort a while back. Maybe it was related to certain frequencies of a strobe capable of inducing seizures. Current research tends to look at connecting cameras to the optical nerve which did not work better than the ability to distinguish light from dark. My thinking was to do it the other way around. Googling around, seems someone already did it…
http://www.extremetech.com/extreme/110031-a-bionic-prosthetic-eye-that-speaks-the-language-of-your-brain

Oh, well… I think that’s the correct path.

Thoth January 5, 2015 11:26 PM

@JackPair Fans
Latest JackPair Update available.

Link: https://www.kickstarter.com/projects/620001568/jackpair-safeguard-your-phone-conversation/posts/1097447

Latest Features:
– ECC Curve using the DJB’s Curve25519 for ECDH.
– Detachable design with transparent casing

My Recommendations to JackPair:
– Upgrade to ECDHE, with the E at the back for ephemeral ECDH keys, by zeroizing keys whenever they are not in use (rough sketch after this list).
– Basic tamper lid switches and security meshes are still the primary ways of defeating LSAs and MSAs in terms of tampering but a transparent case is a good start.
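
To make the first recommendation concrete, here is roughly what the ephemeral part looks like, sketched with the Python cryptography package’s X25519 as a stand-in for whatever JackPair actually ships (my assumption; untested):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def session_key(peer_public_key):
        # A fresh key pair per call: there is no long-term private key to steal later.
        eph = X25519PrivateKey.generate()
        shared = eph.exchange(peer_public_key)
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"jackpair-session").derive(shared)
        # eph and shared go out of scope here; real hardware should also
        # overwrite the key material in RAM once the call ends.
        return eph.public_key(), key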

Wael January 5, 2015 11:38 PM

@Clive Robinson, @Figureitout, @Sancho_P,

… the headmistress wants to know why, when we sing “old MacDonald has a farm” in school assembly, you two burst into fits of giggles when we get to “Eh Ih Eh Ih Oh”

I would giggle for two reasons. Number one is: Old McDonald had a farm — not has a farm. Number two is a bit long. A long time ago, I had a physics teacher who had an unusually big nose. I never paid much attention to it. One day my desk mate (we were arranged at desks that take two students, this was early high school) told me: Have you noticed how big this guy’s nose is? For some reason it was very funny the way he said it, and I could not contain myself (I snorted a couple of times) but did not laugh. Then the teacher came near our desk and was explaining something. My desk mate put his hand on his notes and whispered to me: “Protect your notes dude! His nose is just above them, he’ll inhale them”. I could not stop laughing out loud. So the teacher looked at me and said: What’s so funny? It’s not polite to laugh alone! Share with me what’s so amusing. When I looked at him I noticed that he was angry, and his nostrils were flared 🙂 I said: you will not think it’s funny, sir. He kicked me out of the class, and I was still laughing on the way out. But it was not because of the ‘e’ ‘i’ ‘o’ jokes.

@Sancho_P,

Too late, they are already registered at the EU Terror Suspects Database:

That’s a pretty messed up article (the link)…

Nick P January 5, 2015 11:53 PM

@ Clive Robinson

I haven’t seen the talk yet. Most of them have just repeated pieces of what we’ve posted for years [without the best solutions :)]. Getting redundant.

Far as Grauhut’s solution, it’s not realistic for most applications. Remember that general-purpose computing, esp for Internet use, is often data-driven and especially foreign-data driven. The memory, CPU, and feature usage profiles differ wildly sometimes by the second. It’s doubtful that, at any given moment, he’s gotten things so small or predictable that nothing extra will occur. Further, microcontrollers are just ordinary processors with some extra hardware and firmware typically not made with security in mind. Regular risks apply to them too.

The only advantage you might have over regular RISC processors with microcontrollers is with those that are pure Harvard-style architectures, safety-oriented Java processors (Sandia/Score processor), or RISC-core-plus-FPGA chips where security can be inserted in FPGA logic. These inherently provide better protection than von Neumann-style architectures. Additionally, one can cascade chips (including microcontrollers) so one chip might do computation, one handle I/O safely, one do security-critical functions, and so on. That’s easier with microcontrollers due to their cost and low power. I’ve posted on that recently.

@ Wael, Clive

What Clive called castle was my approach to building systems where certain types of breaches were impossible or raised alarms by design with easy recovery. These required little hardware for sequential logic and only synchronization for parallel. The prison methodology assumed stuff would keep happening while trying to look into the system to find it. That’s kind of similar to the COTS antivirus signature method so I figured it would never work. Anyway, we’ve had time for the research community to do their job and here’s what’s come through with strong security properties.

  1. Architectures with memory consisting of tagged objects where hardware only does sensible operations on different types of objects.
  2. Architectures based on capabilities that impose restrictions and support secure composition better than most architectures.
  3. Architectures that enforce an information or control flow policy across everything in the system.
  4. Architectures that use cryptography to ensure malicious reads or writes will cause problems for the enemy.
  5. Architectures with strong separation of code and data, with assurances that only the right code is loaded.

These are the architectures that survived the mindset of provably preventing problems across the board with as little resources, complexity, or overhead as possible. The crypto is an exception to complexity and overhead but aims to do a lot more than other architectures most of the time. The closest thing to a prison architecture in this list is the Control-Flow-Integrity (CFI) monitors in hardware belonging to category 5. Of all the monitoring- and inspection-based architectures I followed, none came close to these in terms of assurance, performance, or chip area across the whole system.
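
To make category 1 concrete, a toy software analogue of tagged memory looks like this (Python, purely illustrative, not modeled on any real chip):

    class TaggedMemory:
        """Every word carries a tag; the 'hardware' refuses operations that don't fit the tag."""
        def __init__(self):
            self.cells = {}                      # addr -> (tag, value)

        def store(self, addr, tag, value):
            self.cells[addr] = (tag, value)

        def load(self, addr, expected_tag):
            tag, value = self.cells[addr]
            if tag != expected_tag:
                raise RuntimeError("tag violation at %#x: %s used as %s" % (addr, tag, expected_tag))
            return value

    mem = TaggedMemory()
    mem.store(0x10, "int", 42)
    mem.store(0x20, "code-pointer", 0x4000)
    print(mem.load(0x10, "int"))                 # fine
    mem.load(0x20, "int")                        # trapped: data operation on a code pointer

The point is that the check rides along with every access in hardware, rather than a monitor trying to notice the damage afterward.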

So, that debate seems settled to me. The next debate is “which of these are the best to build on?” I’d have added links to my previous posts if I wasn’t so tired. Anyway, academia are working on this with better results each year. Probably best for everyone else to start with these methods that are working well and figure out how to improve on their deficiencies.

@ Dirk Praet

In the Lavabit case, the judge agreed with the FBI that the company could be compelled to lie about the private key being turned over for sake of serving the warrant. Further, there is legal ground to support forcing the secrecy of a warrant. For such reasons, I don’t trust warrant canaries.

Good to see they are trying something. I’d just like to know it could work. Fortunately, Joanna’s operation is overseas outside the FBI’s direct reach. Her canary might work so long as Polish authorities or courts won’t cooperate. NSA also has their code, Dom0 (Linux), and everything below to attack if targeting a QubesOS user. So, the security is unknown but probably better than a similar U.S.-based solution.

As a wise vampire hunter once said, “Location, location, location” (Dracula Dead and Loving It)

Wael January 6, 2015 12:03 AM

@Vas Pup,

Please see this link about a new technology which could substantially change the balance of the game in similar cases:

Simply amazing! I think it’ll have more applications than security. From a security perspective, it can be snooped on.

Wael January 6, 2015 12:25 AM

@Nick P, @Clive Robinson,

So, that debate seems settled to me

And with these few words he brushes it off, just like that! Reminds me of a quote by Gilbert Strang that I read in his “Iinear Algebra” book, also the same quote is here: http://www.maa.org/sites/default/files/pdf/CUPM/math-2010.pdf
The only theorem that I mention by name is the Fundamental Theorem of Linear Algebra. I would not want the rest of the faculty to know how seldom I complete a proof in the lectures. An example can be much more memorable anyway. Two examples are totally convincing! (My favorite proof remains the one I found in a book by Ring Lardner: “Shut up” he explained. But I use this in class only when desperate.)
Let me come directly to the recent events that present new problems.

Umm, you desperate, @Nick P? 🙂

Thoth January 6, 2015 12:53 AM

@Nick P, Clive Robinson

“4. Architectures that use cryptography to ensure malicious reads or writes will cause problems for the enemy.”

Memory encryption has always been a pain. It takes extra overhead and space (bloating). Clive Robinson did mention the use of XOR between two elements (data and a key), which is the fastest way to scramble memory but can easily be undone if the key is not properly managed. Traditional crypto algos are even worse on that front and the only way is to use a crypto accelerator (another black box).
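
For what it is worth, the XOR scrambling Clive mentioned is roughly this (Python toy; deriving a per-address mask from a master key via SHA-256 is my own embellishment, and the whole scheme stands or falls on where that key lives):

    import hashlib

    MASTER_KEY = b"keep-this-off-the-bus"        # made-up key; in real hardware it never leaves the chip

    def mask(addr):
        # Per-address mask so identical plaintext words do not scramble identically.
        return hashlib.sha256(MASTER_KEY + addr.to_bytes(8, "big")).digest()[:4]

    def scramble(addr, word):                    # word: 4 raw bytes heading out to external RAM
        return bytes(a ^ b for a, b in zip(word, mask(addr)))

    descramble = scramble                        # XOR is its own inverse

    stored = scramble(0x1000, b"\xde\xad\xbe\xef")
    assert descramble(0x1000, stored) == b"\xde\xad\xbe\xef"

It is cheap, but it is masking, not authenticated encryption; an attacker who can flip bits on the bus still flips bits in the plaintext.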

“5. Architectures with strong separation of code and data, with assurances that only the right code is loaded.”

Not going to be easy, as the code needs to interact with the data in order to process it, and any improperly written parser would be the demise of the system (injection-style compromise), but it is possible.

Regarding Levison’s renewed secure email with Phil Zimmerman, they have a few more new protocols as shown below.

Link: http://arstechnica.com/security/2015/01/lavabit-founder-wants-to-make-dark-e-mail-secure-by-default/

Clive Robinson January 6, 2015 2:56 AM

@ Wael,

I guess it might be “had”, however I see you still have “I” problems with,

    “Iinear Algebra”

Now I have to go and find a new song for you to sing 😉

With regards the UK requiring teachers and childminders to report toddlers and children that “have be radicalised”, you have to recognise the “White Supremacist” leanings of the current UK government and certain of its senior ministers.

The simple fact is that the Conservative party are not just a bunch of racists, they actually want to destroy the UK’s hard-won freedoms and individual rights, and bring back “Surfdom, and the Lord in his Castel”, where the Lord has the benefits of slavery without the costs and complications. If you read the newspapers you will see that “Rachmanism”-style landlords are on the rise, but these modern versions are actually far worse than what Peter Rachman did to his tenants.

The basic aim is to force the majority of people into a position where they are incapable of building up non monetary assets that are inflation resistant like property. Thus they are the prey of those with such assets, who use exorbitant rents to build up their property assets, in the process forcing up property prices further. We are already at the state in the UK where, of those currently entering employment, less than 50% will ever be able to save up the deposit let alone buy a property to live in. Oh and in the UK we have the smallest homes by square footage of anywhere in Western Europe, thus on that measure we have just about the most expensive accommodation in the world, for the average or below salaried wage slave…

Figureitout January 6, 2015 3:02 AM

Clive Robinson RE: audio cable
–I don’t think the questioner was sure what he wanted either (nor am I). I’d rather it be Alice to Bob, then Bob to Cindy (Bob’s having a threesome); and I check Cindy for errors. Length of cable no more than 1-2ft (I’m ignoring acoustic attacks b/w machines for now as I can defend against that well I believe). Assuming 2->4 wire converters have to do w/ RS485 and the 4 quadrant has to do w/ “flyback converters”? I’d still need some more tools for my home lab to even have a chance testing for that…let alone know what I’m testing for as my “analog” skills are amateur. Improving each day though.

Clive Robinson && Wael RE: ole mcdonald
–Alright getting too silly lol, suppose if the headmistress has her way I’ll be singing another tune “Oh Oh Ohhhh” and “thank you may I have another?!” :p

otseven
–Ok, sounds good mate; will remember and try it out sometime. Yes, I’m going more and more airgapped for programming machines (removing wifi cards etc., looking forward to getting a real “metal box” w/ nice door locks and filtered power) and using a separate machine for internet, and I simply won’t use programs that need to connect to the internet every 2 seconds on the airgap PC. Problem is still file transfer for a lot of software I still need, for which I need yet another intermediary to open the file and let potential malware run there, and then transfer to the programming PC if I don’t see anything (the problem being advanced infection hiding, as always, which leads me to believe practically every machine will have some malware on it, which is a crushing realization).

Sancho_P RE: toddler terrorists
–Yeah, would be nice if they watched for bullying instead w/ me as I learned you had to start throwing punches or worse to get it to stop…

Grauhut
–Yeah ok…get a dev-board targeting the chip you want and then take the design for minimal applications, adding some “why would you do that” jumps and mods for fun reverse engineering. And use a risky first jump tool chain on airgapped machine to make your next tool chain on a separate machine etc…Just recently on my internet PC I have to close out USB drives twice and I run live always now…another annoyance amongst other things…

Clive Robinson January 6, 2015 4:03 AM

@ Thoth,

You mentioned fleet broadcast methods before as a way of getting messages across in a more deniable and traffic-resilient manner. Wouldn’t that allow the point of origin of the broadcast to be noticed?

Yes and no.

Firstly whenever you communicate you expend energy in a way that can be picked up by both the intended recipient and any other party within the comms path.

So whilst avoiding detection and thus location is not possible, preventing recognition of actual valid communication is, by the simple idea that all stations broadcast fixed amounts of data that is encrypted. This data is formed of valid messages, copy messages and padding messages.

A valid message is in effect one originating from the station/node; a copy message is a valid message from another station/node that is being rebroadcast by the current node, either as “bandwidth fill” or as a way to move it across the network to its destination node; and padding messages are invalid junk used to pad / “bandwidth fill” the transmission quota.

So in a wired network with a security net on top, unlike TOR there are no entry or exit nodes, all nodes are full peers, and maintain a fixed bandwidth of transmission into the network, whilst receiving a fixed amount of information from other nodes.

The downside of such a system is you cannot have everything, so you give up on comms channel efficiency and low latency for increased security.

This is the sort of thing the likes of Lavabit’s Levison and Phil Zimmerman should be looking at if they want their “Dark Mail” to be “traffic analysis” proof, which it should be. This is because, as the majority of geeks & gurus are starting to understand, as far as the likes of data-mining data aggregators –be they Google or NSA types– are concerned, as much if not more valid information comes from traffic analysis of message metadata than from message content.

Currently we can hide message content, but we cannot hide message metadata, and we will lose the privacy war if we don’t take steps to hide it. Because, as we increasingly see, people are being convicted not by real evidence but by who they associate with, plus a load of speculation that is not even circumstantial evidence.

That is, a “profile” is presented as being typical of a criminal/terrorist, and if you can be shoehorned into that profile by the authorities then you must be guilty… unless you can prove otherwise. Thus, as self defence, we should all attempt to have the same profile, such that such shenanigans by the authorities become ineffective.

Further as all users are alike, knowing a location will be pointless because unlike finding a needle in a haystack the authorities will be looking for an unknown strand of hay in all the haystacks there are not just the ones they can see.

Oh, and as the “junk” or “fill data” can be set quite high, the likes of the NSA’s repository will become limited, so they will end up either in a pointless “red queen race” or throwing away all encrypted comms once it’s more than a year or so old.

Whilst we cannot beat the Lawless NSA et al with legislation, we can cause their resource issues to be such that budgets will not get allocated by politicos no matter how much FUD is pumped out.
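
To make the fixed-quota idea concrete, each node’s transmit loop every interval looks something like this (Python sketch; cell size, quota and the encrypt() placeholder are invented purely for illustration):

    import os, random

    QUOTA = 16          # fixed number of cells sent every interval, no matter what
    CELL  = 512         # fixed cell size in bytes

    def pad_cell(payload=b""):
        return payload + os.urandom(CELL - len(payload))

    def encrypt(cell):
        return cell     # placeholder -- the real link encryption goes here

    def build_interval(own_msgs, relay_msgs):
        cells = [pad_cell(m) for m in own_msgs + relay_msgs]
        while len(cells) < QUOTA:
            cells.append(pad_cell())            # junk fill, indistinguishable once encrypted
        cells = cells[:QUOTA]                   # never exceed the quota either
        random.shuffle(cells)                   # valid, copied and padding cells all look alike
        return [encrypt(c) for c in cells]

An observer sees the same number of identical-looking cells from every node in every interval, so knowing who transmitted tells them nothing about who originated a message.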

Grauhut January 6, 2015 4:04 AM

@Figureitout Usable toolchains for today, better than none… 🙂

Life is too short to wait for “the perfect academic security of tomorrow”. I drive a car knowing there is a risk. I post here knowing there is a risk. I can only try to limit risks, show a small attack surface and have alarm watchdoggies looking at what happens on this surface.

Modern dev boards based on ARM CPUs are quite powerful devices. After the first steps you don’t even need a risky airgapped standard PC anymore for your best-effort systems.

For my official identity i still use regular insecure standard material, for the statistics engines. 😉

Wael January 6, 2015 4:17 AM

@Clive Robinson, @Figureitout,

however I see you still have “I” problems […] Now I have to go and find a new song for you to sing 😉

“I”, “problems”? That’s a no brainer. It’s gotta be Santa Esmeralda!

Baby, do you understand me now?
Sometimes I feel a little mad
But, don’t you know that no one alive can always be an angel
When things go wrong I seem to be bad

‘Cause I’m just a soul whose intentions are good
Oh Lord, please don’t let me be misunderstood

If I seem edgy
I want you to know
That I never meant to take it out on you
Life has it’s problems
And I got my share
And that’s one thing I never meant to do

With regards the UK requiring teachers and childminders to report toddlers and children that “have be radicalised”

The indicators are false. I previously said, in Spock’s tongue[1]: no one can guarantee the actions of another. I have seen people flip 180 degrees in a few days, no indications whatsoever (flipped both ways.) I am guessing it’s some con artist who wants to charge the government some money. Racism is stupid, I think. If you don’t like a race, it’s ok… just keep it to yourself. We may not have control over our feelings, but our actions should be governed. Besides, no one chose their own race; it was given to them. So we blame others for things that are not under their control?


“Surfdom, and the Lord in his Castel”.

Oh, crap! Here we go again… I say put him in a prison inside the Castel 🙂


The basic aim is to force the majority of people into a position where they are incapable of building up non monetary assets that are inflation resistant like property.

How so? Don’t property taxes increase every year in the UK?

thus on that measure we have just about the most expensive accommodation in the world, for the average or below salaried wage slave…

Yup, London was pretty expensive, but I ate Fish and Chips every day for the four days I was there 🙂 Japan was very expensive as well, but London is one of the most expensive I have seen.

Speaking of racism, search for Dave Chappelle and Clayton Bigsby. I’m not putting a link to it here 😉

[1] Trying direct translations from another language to see how it’s taken 😉

Gerard van Vooren January 6, 2015 4:32 AM

@ Clive Robinson

About the Lavabit e-mail system: the main goal was for it to still be e-mail. That said, I have to say they applied some clever stuff to it. Without going to a real p2p network I think this is as good as you can get. There is an onion model and the mail message is end-to-end encrypted.

Which also introduces a whole new world to spam filtering. Server side spam filtering only applies to bad ISPs then, the client has to do the rest.

65535 January 6, 2015 8:39 AM

@ Thoth

“G@daddy is not a trusted authority on security and their security practices are probably dubious anyway (personal opinion). It is funny that you mentioned a system (server or web hosting or CA) generates the asymmetric keypairs for you and that is one of the biggest mistakes ever. I recently equipped my website with HTTPS and there was an offer to generate the keypairs and certificates for me which I became suspicious about. Better off to generate them on your own and upload them to your servers personally…”

I hear you.

G@daddy can’t be trusted. They require you to use their CA service and hosting service as a package.

What are the top three SSL/TLS key-generating systems? Are they beyond the average business user?

Currently I see most [small] Business users blindly trusting M$ CA server [Server 2008 R2 or Server 2012 with Exchange]. They use it out of ease of use. I would like to see that change.

Thoth January 6, 2015 9:05 AM

@65535
The best way to generate a key of any type is to use a toolkit. For Java, you have the keytool and of course everyone’s favourite OpenSSL and now there is LibreSSL and whatever SSL libraries.

The reason you specifically use a toolkit is so you don’t have to trust those web-based CAs and have a stronger choice, but this stuff is way beyond the abilities of normal non-tech users, let alone a tech user who has no security skills at all; you wouldn’t be surprised to see them post their private keys instead of their public keys!!!

If you are paranoid enough, you gotta build your own HWRNG from TFC’s documents or from some free and libre sources on a breadboard or chipboard and generate your RSA keys on your own hand-built hardware (that has the highest assurance). If a HWRNG is troublesome, a strong CSPRNG will do the trick.
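
If you would rather stay in code than remember OpenSSL flags, the offline generation step is only a few lines with the Python cryptography package (sketch, untested; key size, passphrase and file names are just examples):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the key pair on an offline machine, never on the hosting provider's box.
    key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    with open("server_key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.BestAvailableEncryption(b"change-this-passphrase")))

    with open("server_pub.pem", "wb") as f:
        f.write(key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo))

The CSR and certificate then get built from that key, and only the signed certificate plus the (encrypted) private key you upload yourself ever need to leave the offline machine.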

Because they force people to use G@Daddy hosting together with their CA, I don’t even use that sort of hosting. That kind of business model is probably dubious from the start and obviously insecure.

MS CA … everyone’s favourite. I have set up a few for clients at their request, and it is phenomenally easy to simply click through without needing to think much. I wouldn’t say MS CA’s key generation (which uses Bruce Schneier’s Fortuna PRNG by default) is insecure, but the problem is whether you can really, really trust MS or not.

If you want something a bit more enterprise grade, or ePassports, you gotta go EJBCA or RSA CA, but they require more thinking and planning due to the complexity, and the "real world of security" comes into the picture. There are even enterprise-grade or government-grade CAs that we don't hear about but that are used behind the scenes.

Top three real-world key generation programs, ranked:
1.) Web provisioning from web hosting providers.
2.) OpenSSL.
3.) Java Keytool.

Those three real-world key generation programs are for generic usage. We have to take into account banking/financial and other standards, which use a lot of HWRNG from secure hardware like HSMs. So far, most of the MS CA or RSA CA deployments I have seen in the field use HSMs as their RNG (the HWRNG in the HSM) as part of the HSM's key-management package.

The recent proliferation of smartcards, IoT devices and smartphones with security apps on them pushes the boundaries of how small you can make a traditional secure element and squeeze it into really constrained devices. Most key management takes place in constrained devices as part of their lifecycle, and this is where RNGs in these smart devices can be a very tricky thing to do right. One mitigation technique is to generate keys on a proper secure machine and load them into the devices for a whole bunch of purposes, including SSL/TLS client-side authentication on "secure terminals".

otseven January 6, 2015 10:14 AM

@ Markus Ottela

TFC is an excellent solution and I also like the microcontroller optimization idea. Thank you!

SoWhatDidYouExpect January 6, 2015 10:25 AM

US sanctions North Korea over Sony hack and classifies attack evidence

http://arstechnica.com/tech-policy/2015/01/us-sanctions-north-korea-over-sony-hack-and-classifies-attack-evidence/

What better way to tell the world that you don’t have a leg to stand on to support your claim?

When evidence of wrongdoing becomes a state secret, we all know where the wrongdoing is actually going on. Either there is no evidence, or revealing the "evidence" would actually prove there is none. That is what "state secrets" are for… to hide the truth, or the lack thereof.

Wael January 6, 2015 12:30 PM

@Thoth,

If something small and cheap could have done a TPM for a small form factor,

I'm at a loss as to what you mean! TPMs are small and cheap. Have you seen a TPM before? Take a look at this one; it happens to be an Infineon TPM 1.2 — not the latest 2.0, but it should give you an idea. SLB9635TT12. And here is some more information. This is another manufacturer: STMicro.

I don’t see why modern computing systems could not engineer such TPMs with proper certifications into their systems and sold as a package but it rarely gets done.

Most computers have a TPM installed. It's either a discrete component, like in the picture above, or an integrated one. It could be integrated in the CPU (Intel's, for example) or on the network card (Broadcom had these at one point). Apple used to have a TPM, but not anymore. For a mobile device, a FW TPM running in TrustZone is a valid option (given that it passes the conformance and compliance requirements, links below) and has zero size in terms of physical measurements — quite small, I believe. Actually, smaller than "small" 😉


It is mysterious as to why such as mentioned above are not yet mainstream

Not mysterious at all. It’s mainstream. According to market research reports, over 100 million branded PCs and laptops with TPMs were sold in 2007. We are 8 years later, that number is a lot more. There are over 100 participating companies in TCG, and if that’s not mainstream, then I wonder what is!


because they need to know how the security module and TPM systems work and how to integrate them properly and planning extra lifecycles.

They need the knowledge to integrate the HW and SW. Specifications are available, and so are manufacturers to help with the process. If you care enough, then you'll need to start by reading up. The specifications will take you a few months. Trust me, pun intended, too 😉 Here are some starting links…

TCG Workgroups
TCG Certifications for Trusted Interconnect

Specifications
Take a look at the current development efforts. You can join and develop your "Security" application that leverages TPMs
This is a list of promoter member companies
This is a list of certified products
Some news releases

TCG newsletter. Keep up to date
Industry and liaison participation programs

Perhaps it’s better to start with the FAQ — I should have put that at the top, but just like supermarkets, I put the milk at the end of the store, so you pick up things (that you need) on the way 😉

I’ll close with this: Mobile devices can utilize a TPM as well (was called MTM — Mobile TPM previously) for hardening some weaknesses that currently exist. And this situation is the only valid one where your comments regarding size apply because the real estate on Smartphones is extremely limited. But now with some FW implementations mentioned above, it’s no longer an issue. Android can definitely use one for several use cases.

I will not say this again, but spelling mistakes are not intentional. It looks good right now, but I sure know how it'll look when I post it.

Clive Robinson January 6, 2015 12:59 PM

@ SoWhatDidYouExpect,

US sanctions North Korea over Sony hack and classifies attack evidence.

It looks like the US has "painted itself into a corner" and does not know how to get out without leaving incriminating footprints indelibly indicating what they have done…

The simple fact is that not offering evidence just makes the case even louder that they probably don't have any real evidence at all. Even Ronald "Ray Gun" recognised over thirty years ago that you have to back up such allegations with real hard evidence that cannot be denied, or you get held up on the world stage as some kind of crazed idiot.

The US is thus in a position where at best they look like they have no real evidence, and are just using supposition to further another political agenda. Or worse, they have deliberately orchestrated the whole thing as a preliminary to some kind of aggression, possibly to restart the war that was put on hold with the 1950s ceasefire, or to make the Chinese think that this is the US agenda.

However all the NKs have to do now is issue warrants for the arrest of various people in the investigating organisations and lodge them with an appropriate international organisation.

This will cause the US significant problems when, as they have done with some Chinese officers, they start some kind of proceedings against NK personnel. Because they will have refused the NK warrants, they cannot hope to be taken seriously by the international community.

Yes it’s pointless point scoring on both sides but the US has handed the NKs the moral high ground on a silver platter…

I don't know how this is playing out in the US, especially after the lame speech given in LV by the senior Sony representative yesterday.

However, I can say that in Europe a number of people think the US Gov has "bats in the belfry" over this, and that either the administration is deluded or it is setting up to intimidate China. Either way, the opinion appears to be that in the White House there is way more than just "one or two loose screws", and that the edifice is in imminent danger of falling apart…

Nick P January 6, 2015 1:11 PM

@ Grauhut

There's certainly some reduced attack surface. The bigger risk comes when you use a board that's popular. It's much more likely to be targeted by any number of parties. So, use an embedded board that's not popular, preferably not ARM/x86, and easily sourced from foreign countries not allied with NSA activities. For ISAs, that leaves PPC, MIPS, SPARC, Super-H, DSPs, and oddball architectures. You can even leverage old devices that have long been hacked with mods, like PS2s, Dreamcasts, foreign netbooks, and so on.

I made the definitive list of non-Intel hardware here. Have fun. 🙂

@ Thoth

re Darkmail

Their protocol is interesting. Reliance on DNSSEC concerns me. Plus, it's looking to be a bit complex and so will have plenty of flaws. For now, I advise a solution like Claws w/ GPGME + Tor + a Swiss email service. There are also email strategies for I2P and Freenode.

re TPM

Wael couldn’t help himself. The dump of TPM-related information was inevitable. It’s an obsession with that guy. Maybe even former career choice. He sleeps with an early chip under his pillow.

They’re useful, though. TCG technology is leveraged in a number of papers I’ve promoted. Only concern is that NSA loves the things, helps with development, and pushes them for public adoption. Must be subversion free. 😉 I’d only recommend them for stuff where Five Eyes are outside the threat profile. They can be very useful and affordable for such things. Of course, smartcard coprocessor chip strategy is more flexible and secure while being inexpensive. Keep them in mind in applications where you might leverage a TPM.

Wael January 6, 2015 1:34 PM

@Nick P,

Wael couldn’t help himself. The dump of TPM-related information was inevitable. It’s an obsession with that guy.

I only try to correct some misconceptions. The target protection is against hackers, identity theft, typical non-state adversaries. With a state adversary… Well, I shared my view several times.


Only concern is that NSA loves the things, helps with development, and pushes them for public adoption. Must be subversion free. 😉

Very funny! 🙂 NSA, GCHQ, BSI (here is an example) are all aware of this, as they are aware of any other technologies, including the ones discussed here. If you want to use that argument (NSA subverts TPMs, etc…) then it applies to every device and component. I can claim that the NSA subverts capacitors and resistors as well as transistors and gates. Hell, they even subvert wires and PC boards 🙂 Yes, TPMs were an early career choice, and don't be silly, I don't put a chip under my pillow; it's above the pillow, closer to my head. My obsession with TPMs is equivalent to your obsession with the Orange Book 🙂

BJP January 6, 2015 1:50 PM

@ Clive Robinson

"The US is thus in a position where at best they look like they have no real evidence, and are just using supposition to further another political agenda. Or worse, they have deliberately orchestrated the whole thing as a preliminary to some kind of aggression, possibly to restart the war that was put on hold with the 1950s ceasefire, or to make the Chinese think that this is the US agenda."

I have said essentially that since the first claims of DPRK attribution from the FBI. I would offer another possibility: they expected to be taken seriously enough that the world community would compel DPRK to “out itself” so to speak, to deny the allegations and perhaps offer evidence against them, as a way to bring the regime in to public talks or to spill secrets about their capabilities that the US has not yet identified.

As an American generally hostile to this administration, I’ll readily admit my generosity in presupposing an internally consistent, ultimately benevolent though flawed, ulterior motive. Remember, this same jovial crew apparently thought Putin would roll over due to ever-ratcheting “I’ll slap your wrist harder next time” sanctions. That their inflated self-image writes checks reality can’t cash speaks to me of the hubris necessary to attempt such a ruse, particularly when still giddy from the fruits of a year’s effort drawing out the Cubans.

The more malevolent motives Clive presents would fit just as well.

65535 January 6, 2015 1:52 PM

@ Thoth

“Top three real world key generation programs according to ranking.
“1.) Web provisioning from web hosting providers.
“2.) OpenSSL.
“3.) Java Keytool.” –Thoth

Thanks for the good information.

I use OpenSSL for customers who use n-stacks [say an internal web site on a LAMP setup with WordPress on top].

The MS customers usually use Self-signed certs for employee uses. Some MS customers actually buy certs from Symantec and other big vendors [this allows the web server or email server to show the green lock] – although, the value of those certs is now in doubt.

G@daddy has to go out the door. I have a law firm hosted on G@daddy… which is not good. It's a cost-saving thing.

“…[this] stuff are way beyond the abilities of normal non-tech users and let alone a tech user who has no security skill at all, you won’t be surprise they would post their private keys instead of public keys !!!”

I agree. And, posting their private keys is a head-banger! I suppose it does happen ;(

‘…with security apps on them pushes the boundaries of how small you can convert a traditional secure element and squeeze them into really constrained devices and most key management takes place in constrained devices as part of their lifecycle and this is where RNGs in these smart devices can be very tricky thing to do right and one mitigation technique is to generate keys on a proper secure machine and load them into the devices for a whole bunch of purposes including SSL/TLS Client Side authentication on “secure terminals”.’

That is an interesting observation. Client side authentication is a real problem on small phones and tablets.

One last question, on all of the devices [phones, tablets, retail laptops] what percentage of those devices have flimsy certs or – worse – certs made by vendors who make both the private and public key and/or keep a copy of the private key for our friends at Fort Meade?

Very good information.

I got to run.

Nick P January 6, 2015 2:03 PM

@ Wael

re Revisiting “Prison” architecture with 2015 perspective

Here is Clive's description of his method. He uses a large number of RISC CPUs connected to a bus. These will have to be 32- or 64-bit for industry support. The largest I know of for a real RISC processor is Cavium's 48-core MIPS processor. There is also a design of 128 simplistic cores for number crunching on memory fab technology. The cores are similarly directed by a COTS CPU. Nobody has really outdone these. So, an upper limit of 48-128 cores throws his hundreds-of-cores goal out the window.

The next part is an MMU interface controlled by a hypervisor. The hypervisor moves stuff into the slave CPU, lets it perform its work, and moves it out. The description is basically what the Cell processor does with a master PPC core controlling slave SPUs. Something I don't think he noticed or mentioned, IIRC. That proves, without inspections, that his model is doable and can provide great parallel performance up to about 8 cores. However, the Cell was extremely expensive to develop and manufacture.

The next part diverges greatly from Cell: monitoring. Upon an attempted write to memory, the hypervisor captures and analyzes the state of the slave core. It compares "cycles/time, memory/register usage, and I/O buffer behavior" to a signature of that function's expected behavior. The system has many functions and associated signatures, which is why he wanted hundreds of cores. The closest thing to this concept in industry is the CodeSEAL architecture's monitoring and signature methods for control-flow integrity. Much simpler. Yet even such simple monitoring requires caches in the circuitry and has a (1-10%?) performance hit.
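
To make the signature comparison concrete, here is roughly what I picture the hypervisor doing per function call; a sketch only, with made-up sensor names and thresholds, not anything taken from Clive's posts:

    # Hypothetical signature record for one function: allowed ranges for the
    # "sensors" (cycle count, memory footprint, I/O bytes). All numbers invented.
    SIGNATURES = {
        "aes_block": {"cycles": (900, 1100), "mem_bytes": (0, 512), "io_bytes": (16, 16)},
    }

    def within_signature(func_name, observed):
        """True if the observed run stays inside the stored envelope."""
        for sensor, (low, high) in SIGNATURES[func_name].items():
            if not (low <= observed[sensor] <= high):
                return False   # the hypervisor would halt the core and inspect it
        return True

    # within_signature("aes_block", {"cycles": 980, "mem_bytes": 128, "io_bytes": 16}) -> True

Even in this toy form you can see the two open questions: where the envelopes come from, and how much state you have to cache per core to check them quickly.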

His description left out that his design would need a similar cache to hold the signatures. This is to avoid the (50x?) penalty of a RAM access for each operation on signatures. It might be a huge cache/scratchpad shared among cores for a centralized hypervisor (huge parallelism performance penalty) or distributed hypervisor logic + small caches at each core. Either way, he now has a bunch of extra memory, a large load operation at each function execution, and a cache clear upon termination. This amounts to a lot of extra chip logic with cost and performance considerations.

He also vaguely mentioned mask-programmable stuff. A decade-plus of FPGA research shows you'll always be 10-30 times slower if you're doing general-purpose system logic on an FPGA vs a custom COTS CPU. The speedups there are tactical: an algorithm or optimization here or there. He also said the hypervisor was not a real CPU core but a state machine. Aside from HLS tools, flow programming, or a state-machine hardware engine, I'm not sure what he even intends to do there. A simple RISC core with a cache or scratchpad is a much easier solution with plenty of proven silicon.

So, now that I've spent over a year learning hardware, I can evaluate his design against the many I've seen in papers. It almost doubles the amount of silicon used while having a potentially huge hit on performance. I also can't even be sure his signatures would catch things. Signature-based security has mostly been a failure in AV and NIDS. The best one, where the memory access pattern is observed, takes a serious performance hit even in the simplest of comparable systems (e.g. CodeSEAL). And they were only able to do one core of it, with limitations on what could run on it.

Conclusion: The design he described provides difficult-to-measure security with a massive cost increase and likely performance issues. It will also require development and adoption of a very different software ecosystem to let the average developer use it. The parallelism has an upper bound of 48-128 cores, probably half that given the extra circuitry in his design. A novel idea that was fun to discuss and kept me thinking. In practice, the simpler solutions academics and industry have invented achieve a greater assurance of security with less silicon or performance overhead. So future work or investments should go in similar directions to increase the odds of success.

Nick P January 6, 2015 2:17 PM

@ Wael

Oh no, the NSA concern is more real than that. Remember that the groups funding TCG had a private goal of sneaking DRM onto people's computers for their own benefit. They designed much of their work (eg Singularity, authorized software lists) around this goal while publicly talking about malware prevention. Similarly, industry and NSA met together at conferences with classified proceedings. These proceedings might just have been about classified, legitimate uses of TPMs by the NSA and military. Or they might have been about functionality the NSA wanted that benefited them. TPM + "HAP" tech is in the General Dynamics TVE product that NSA endorses for mainstream use. And we know they rarely endorse something with their own tech included unless they can subvert it.

So, my concern about TPMs being subverted is realistic given that TCG companies (esp. content providers) and the NSA are both very subversive. Whether it's subverted is another issue. I just have a habit of treating products made or endorsed by subversive organizations as subverted until proven otherwise. The Snowden leaks showed it was a good habit, given the number of companies participating in covert subversion of their products and services. 😉

“My obsession with TPMs is equivalent to your obsession with Orange Book 🙂 ”

Lol damn. You got me there!

Nick P January 6, 2015 2:42 PM

@ Wael

EDIT: I meant NGSCB/Palladium, not Singularity. That BSI report and recommendations were pretty good. It was also how I caught the error.

Wael January 6, 2015 3:41 PM

@Nick P,

Remember that the groups funding TCG had a private goal of sneaking DRM onto people’s computers for their own benefit.

That’s simply not true. But let’s say it were true for the sake of argument, then all their intent was to protect their own material (books, movies, songs, etc…) And that should not constitute a threat to users (unless they are freeloaders.) The fact that some implementations severely limited user’s choices (for example locking them to a certain OS distribution) should be viewed as “collateral” that eventually hurts their business. If technologies were leveraged the wrong way, then it’s not a problem of the technology or its promoters! Rather, it’s a problem with the implementers of such usages, and that’s easily detected by users who have a choice not to use such products.

The group funding TCPA, the predecessor to TCG, started in 1999 with six companies: (I only remember 5, maybe it was 5 because I think SUN was one of them, but I am not sure now[1]) Compaq, Hewlett-Packard, IBM, Intel, and Microsoft. I worked at two of them. None of them had motives beyond securing the system, and I did some of the architecture, implementation, and use case development.

[1] Testing my ability to read @Clive Robinson’s mind… SHA256 of hidden footnote – I may or may not reveal it 🙂 aa8cf1df2c8bfc1c382bf3bf611d75f0a641529f9d643315fdb96dac1b8067f1

Nick P January 6, 2015 4:43 PM

@ Wael

Haha, enjoy the new weekend. Regarding TPM, my information or memory might be far off. It might have just been one or two organizations with malicious agendas. I'll look into it again in the near future.

Sancho_P January 6, 2015 5:14 PM

@ Clive Robinson – replied to Thoth 4:03 AM here

That’s a very valuable comment – only sad it’s lost in a running blog.
You perfectly explain both central points of private communication we have to fight for [1].

(1) We may hide (encrypted) content until this is declared illegal.
(2) But we can't beat "metadata" (which is the data in the first place) until we hide in noise.

However, increasing their haystack may have a different outcome.
Right, firstly it will increase costs but don’t forget we are paying (more than twice) for it.
Probably it will render their collection efforts useless.
But finally it could result in a ban of personal encryption!

There is another issue, which wasn't mentioned; it is connected to ("micro")payment and "metadata" collection.
It has an extremely chilling effect on the future.
We cannot buy Bruce's books (e.g. @amazoo) or donate to the ACLU / EFF (or order / pay online for whatever we want) without being registered and sorted into being "Left", "Right", old, gay, black, creditable or simply dangerous for society (read "for the sovereign").
Targeted advertisement would be the smallest pest here [2].

Contrary to communication and thoughts, here we can't hide; we can only refrain from paying.

[1] At least as we see privacy (national security may have a different view):
a) Freedom of personal communication (chat, mail, … ).
b) Freedom of thought = freedom of personal interest (surfing habit).

[2] This ‘they know it all’ isn’t fully true today but only because their systems are not ready yet.
It will be hidden from the public + partially secret. Individual information or appealing would be an illusion, a pipe dream.
Think of credit bureaus, most countries have them: Want a bank credit, buy a property? Want a certain job, … ? These privately owned databases are only the sweet part of the whole governmental personal picture.

Sancho_P January 6, 2015 5:23 PM

Re: NK unsupported allegations + sanctions may backfire

I don’t think so.
The torture report is history, the fool has served its purpose,
long live Kim Jong Un!

Clive Robinson January 6, 2015 8:56 PM

@ Alan_S,

Looks like some people in the US might change their VM message to something like,

    I’m sorry I can not take your call, I’m probably in a public place, leave your name and number ONLY and I’ll get back to you…

AlanS January 6, 2015 9:33 PM

@Clive

I’m probably in a public place? Where and under what circumstances, one wonders, do they believe someone has a reasonable expectation of privacy? My guess is nowhere but, if such a place exists, I’m sure they can convince themselves that it poses “an imminent danger to public safety”.

From Leahy and Grassley’s letter:

“We understand that the FBI’s new policy requires FBI agents to obtain a search warrant whenever a cell-site simulator is used as part of a FBI investigation or operation, unless one of several exceptions apply, including (among others): (1) cases that pose an imminent danger to public safety, (2) cases that involve a fugitive, or (3) cases in which the technology is used in public places or other locations at which the FBI deems there is no reasonable expectation of privacy.”

They go on “We have concerns about the scope of the exceptions.” No kidding.

Thoth January 6, 2015 9:52 PM

@Wael, Nick P
My meaning of TPM refers to a hybrid CPU core with a TPM inside it. What I am referring to, as suggested, is to simply put a dedicated security processor, with full user knowledge and access, on the board or adapted to the board. Yes, there are many boards with TPM modules on them, but the user can't really get to these modules, and they are so badly hidden by so many obfuscation layers that it makes access irritating and you have no idea what they are. What I meant was something the user knows, the user installs and the user controls. Of course this would be as good as the user inserting his own smartcard or HSM inside, since he has a much better chance of control anyway. In simple terms, what I meant are chips that provide security and are fully visible and within the knowledge of the user. The user can straight away call the chip to do whatever he wants, instead of a TPM where you typically need to bypass so many "gateways" just to get to the "front door" and it may bite back at you.

TPMs are supposed to be for the security of the user and the system, but what are most TPMs doing these days? They are not doing what they are supposed to do…. A bunch of millions of TPMs and we are nowhere closer to better security, right 🙂 ??

Those already-embedded TPMs … I call them lock-in backdoor chips where users don't know what's up. At least if you manually get one into the front cover of the device and specifically allow and are made aware of its use (like an individual security device, i.e. smartcards), then that is so much better, because they can take more precautions against observation or use its functions, albeit we can all readily conclude that all security chips in any kind of security processor, whether embedded or not, are simply backdoored anyway … it's just whether you are aware of the existence or not.

One thing is the lower chance of collusion if the chipboard has a separate security processor and the CPU itself has another. The likelihood that both of them are colluding is lower, but we all know most of them are backdoored.

Maybe I should revise what I say this way: "we should all use mental calculations to do crypto, since smartcard crypto-processors in standalones, TPMs in backdoored chipset motherboards, HSMs and the like are all crapped". A bit overly paranoid, but the choice of a virgin chip is close to non-existent.

Well, who knows, I might actually dust off the old abacus I used in my younger days and figure out a way to put some basic crypto onto an abacus, and post a couple here for you guys to encrypt/decrypt if it is successful? Probably the first one would be a basic ultra-lightweight Feistel block cipher? (No promises on this one, as the failure rate may be high.)
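
If I ever do, the round structure would be no more than this kind of toy (a 16-bit block split into two 8-bit halves, so it stays at pencil-and-paper scale; the round function here is a placeholder, not a real one):

    # Toy 4-round Feistel network on a 16-bit block (two 8-bit halves).
    def F(half, k):
        return ((half * 7) + k) & 0xFF          # placeholder round function

    def feistel_encrypt(block, keys):
        L, R = block >> 8, block & 0xFF
        for k in keys:
            L, R = R, L ^ F(R, k)
        return (L << 8) | R

    def feistel_decrypt(block, keys):
        L, R = block >> 8, block & 0xFF
        for k in reversed(keys):
            R, L = L, R ^ F(L, k)
        return (L << 8) | R

    # feistel_decrypt(feistel_encrypt(0xBEEF, [3, 1, 4, 1]), [3, 1, 4, 1]) == 0xBEEF

Each round is one small multiply, one add and one XOR, which is about the level of arithmetic an abacus plus a key sheet can keep up with.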

@65535
“One last question, on all of the devices [phones, tablets, retail laptops] what percentage of those devices have flimsy certs or – worse – certs made by vendors who make both the private and public key and/or keep a copy of the private key for our friends at Fort Meade?”

Imagine how much space is dedicated to a security chipset (less than 1 cm x 1 cm). I am very doubtful you can squeeze a good RNG into that amount of space, regardless of the nano-technology. Most of them simply use something like 3DES/2DES/DES or AES as a CSPRNG with some initialization seed somewhere (hopefully properly seeded … lol…). Some of these phones use secure elements inside them, which Wael mentioned above regarding TPMs, or some form of crypto-chip, and some people go to the extent of buying microSD cards configured with smartcard chips in them. And the answer for backdoors? Highly likely. YouTube (https://www.youtube.com/watch?v=69mU6h1Sd2Q) has the kleptography topic presented by Young and Yung, where they explain a simple trick of selecting RSA primes in a way that the public modulus leaks the secrets if the person viewing the public modulus holds a backdoored private key to decrypt the public modulus (not sure if I put it accurately enough in the abstract).
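
To picture that kind of construction, here is a toy hash-counter DRBG (standard-library Python only; a sketch of the general pattern, not what any particular chip actually implements, and not for real use):

    import hmac, hashlib

    class ToyDRBG:
        """Deterministic generator: HMAC-SHA256 over a counter.
        Everything depends on the seed; a weak or escrowed seed means every
        'random' key it ever produces is predictable to whoever holds it."""
        def __init__(self, seed: bytes):
            self.key = hashlib.sha256(seed).digest()
            self.counter = 0

        def random_bytes(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                out += hmac.new(self.key, self.counter.to_bytes(8, "big"),
                                hashlib.sha256).digest()
                self.counter += 1
            return out[:n]

The chip vendors' versions use a block cipher instead of a hash, but the trust question is the same: who chose the seed, and who else has seen it?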

Probably you shouldn’t even trust those Open/Libre/BoringSSL or Java Keytool if you are that paranoid as well. The HSAs might already be inside there manipulating them all.

@Sancho_P, Clive Robinson
Clive's reply is definitely very valuable on the directions privacy advocates should take regarding privacy-security technology.

Wael January 6, 2015 9:54 PM

@Clive Robinson, @Alan_S,
Warning: — keep the volume down —

I’m probably in a public place,

And if you answer by mistake, you can always ensure privacy by being a little… "colorful"

Thoth January 6, 2015 10:05 PM

@AlanS
It seems like the public has now turned into an official enemy of the Obama administration, and the use of warfighting tactics to scoop up intelligence is now widely propagated and authorized against the very public that supposedly feeds the US Govt.

  • Air-based recon used in both wartime and peacetime roles to detect “threats”.
  • Simulating messages via fake receivers and transmitters to trick "enemies". Used by the Govt against civilians in their struggle and also on the real battlefield. One good example of the Govt using fake receivers and transmitters on civilians is the "stingrays" that propagate fake messages to trick phones into giving up their data.
  • Many more militaristic tactics deployed in civilian theatres to “tame the wild beasts”.

We are seeing escalating violence and similar behaviour by the Govt against unarmed/lightly-armed civilians. Deployment of armed and armoured vehicles in urban peacetime roles … deployment of heavy assault weapons (sniper systems) during home entries … the list can keep going.

The Govt seems to be using heavy-handed methods with no regard for probable collateral damage, shrugging off responsibility and pissing people off to the point that civilians have to arm themselves in self-defense :S … not a good thing.

Probably the next thing is to bring armoured battle tanks from the front line into the urban environment to do "crowd control", just like China did in Tiananmen Square.

Clive Robinson January 6, 2015 10:45 PM

@ Wael,

If you are going to spend the weekend on it, first read the whole thread and note the dates; your eyes might pop open when you do.

For instance,

@ Benni,

Look at RobertT’s comment,

https://www.schneier.com/blog/archives/2011/07/research_in_sec.html#c561215

Does it remind you of something more recent…

@ Nick P,

You also need to consider that the Prison model was described in other places, and what you point to is an explanation focused on one particular aspect, not other aspects, and thus other details are missing.

You can see that I was concentrating on making anomalous behaviour detectable, whilst RobertT was concerned about leaking key data etc.

I defined the way I was looking at the problem towards the top of the thread when indicating that humans were unreliable but we had developed ways to deal with them, and that perhaps we should treat the chips in a similar way. The reason for that was the important thing to note, which several of us had pointed out in different ways: that for various reasons the attempt to make verified chips was not possible, thus the goals of that US Gov project were not possible either. I was assuming a multiple-chip solution using different vendors as part of the solution.

All the methods you have described above do not solve the problem of subverted chip design / manufacture, so you are not comparing apples with apples, but lemons from this perspective.

The Prison model has always been about multilayer, multicomponent solutions to detect considerably more types of cheating, specifically, as I've always said, to get away from the "shifting sand" problem in the standard computing stack. The chips you have identified only dip fractionally below the CPU level. Thus they don't go down anywhere close to far enough, and provably cannot, so they are like the sabre-toothed tiger, ultimately doomed to be out-evolved by a more adaptable approach; teeth and speed are by no means everything.

And because you cannot verify even the best-designed of chips, you have to resort to mitigation techniques if you want to address attacks from components you cannot verify, and investigating this is what the Prison model is all about.

Thus the –untrustable– chips you mention, for all their virtues against external-only threats, can be built into systems where their potential hidden insider threat can be mitigated.

As I've also indicated in the past with the "probabilistic security" idea, it is the choice of the system operator to decide how much security checking to carry out, and it is a trade-off between using time for processing or for checking, which can be mitigated by using more structures in parallel.

There are quite a few other aspects of the model that you have not considered in your analysis, which arguably makes it –albeit inadvertently– biased.

Figureitout January 6, 2015 10:46 PM

Grauhut
–Preaching to the choir buddy (you should probably project your statements to a certain someone else), ARM M4 cortex is cool, makes my life easy (won’t say numbers on power consumption, but it is impressive, so less “compromising emissions” naturally). Will also probably be used more than a Z80-based system which will have less hand-holding. I’m a practical man[child] at heart and I love this area, get to put my hands in both cookie jars (software/hardware) and goes along w/ what I think I am (a bridge). Found a little problem I was working on today too so I got a little boost after feeling bummed about a seemingly simple task stumping me for a couple days (which was not by any means “simple” or “logical” for that matter, at least until I finally found it. And there’s still a mystery which may be bypassed b/c I think there’s a hardware issue or weird bug pulling this line high or low.).

That's why I'm putting an otherwise insecure G/FSK authentication scheme out here (not done yet, just started) in the face of an active, competent attack (8051-based MCU, for which there are a million designs) that avoids the internet and phone system (known tapped systems) altogether; and the few that may even be listening at a particular time shouldn't get much w/ a tweaked protocol (there's also "data whitening", which seemed to me to be a slight bit of encryption). I'd like to see the chalkboard academics attack it w/ their hot air and chalk powder[1], at least I got them away from the chalkboard! It requires some relatively intrusive eavesdropping and prediction. Then next day it'll be PSK (another easily eavesdroppable mode, in theory) and it's back to the chalkboard.

[1] Don't get me wrong, I love me some chalkboard theory, I just want to do something too, not just think and talk. At the same time it is unfair, as there are many academics doing tremendous work and delivering.

Thoth January 6, 2015 11:00 PM

@Nick P
I opened the Ur webpage and was expecting the same colourful hype of a new programming language, and behold… a plain HTML page appears… same as my website, which leads to a slightly more colourful HTML webpage. I wonder if these are designed with security in mind?

Clive Robinson January 6, 2015 11:11 PM

@ Figureitout,

Don't get me wrong, I love me some chalkboard theory, I just want to do something too, not just think and talk. At the same time it is unfair, as there are many academics doing tremendous work and delivering.

I guess you don't get to meet too many physics types. The theoretical ones hanker to "get in the lab" because they know that whilst their best skills involve the chalkboard, ultimately those skills are subservient to the "rubber meets the road" of real-world practical experimentation and testing. They know the fallacy of the "I think therefore I'm spam" view of life, and they don't want to be seen as "chopped liver", let alone passed-over "bacon product".

Nick P January 6, 2015 11:32 PM

@ Clive Robinson

re link to Wael

“Does it remind you of something more recent…”

Not sure what you meant by that. However, looking back at the list, it reminds me how stuff like that hasn’t been shredded by hackers: the monolithic and mainstream stuff I opposed was instead. The proprietary offerings there have been expanded. The L4 family became Fiasco.OC, OKL4 v4, and Genode. All much stronger than mainstream offerings. Despite RobertT’s criticisms.

re your comments on my essay

“You can see that I was concentrating on making anomalous behaviour detectable, whilst RobertT was concerned about leaking key data etc.”

In a heavyweight way. Other architectures make silent enemy control of system nearly impossible and so your security goal happens as a side-effect when alerts/exceptions are raised over certain operations.

“All the methods you have described above do not solve the problem of subverted chip design / manufacture, so you are not comparing apples with apples, but lemons from this perspective.”

My model is to combine NonStop architecture or diverse components + voting with architectures like I described. That catches subversion and faults at the silicon. Your architecture suffers from the same risk you say my list suffers from. You run that design through black box synthesis tools or hand it to a mask company. Either can subvert the whole thing, including your antisubversion schemes unless they’re extremely clever. Part of RobertT’s job was working around stuff like that and with plenty of success.

“I was assuming a multiple chip solution using different vendors as part of the solution.”

Ok, that was left out of my analysis. There’s potential to defeat subversion issues in your method there. Still retains performance and cost issues.

“There are quite a few other aspects of the model that you have not considered in your analysis, which arguably makes it –all be it inadvertently– biased.”

Nah, the details are scattered all over Bruce’s blog. I based my analysis on the link you gave Wael in a previous discussion to help him understand your proposal. If there’s specific technical points, add them and we can revisit them too.

“As I’ve also indicated in the past with the “probabalistic security” idea, it is the choice of the system operator to decide how much security checking to carry out and that it is a trade off on the use of time for processing or checking, which can be mitigated by using more structures in parallel.”

A problematic statement. Your architecture presents an unknown amount of risk in that it relies on inspections and signatures. The attacker gets code in, it runs, done. If the attack makes a lasting signature change and wants TBs of data leaked, your scheme might catch it in time. If it's a quick one aiming for a key, your scheme might not catch it in time. The other schemes make certain things straight-up impossible at the hardware level without any inspection or pauses at all. Their efficiency and silicon cost vary. Yet they follow a safer dogma from another scientific field: an ounce of prevention is worth more than a pound of cure.

Software-focused prevention of the kind I used to advocate was too slow and burdensome to be tolerated. The newer hardware-software architectures combine acceptable cost/performance/compatibility tradeoffs with strong (or at least precise) assurance arguments. Some even seem obviously true rather than debatable. Many come with formal proofs, design descriptions, and downloadable code.

Perhaps you should do what they did and really spell out your prison architecture in detail in one spot. Maybe show how one or more simple apps on a simple secure OS leverage it. Use a permanent Pastebin + link if you don’t want to do 20+ pages of text in the comments. Then a comparison can be made that you might consider apples to apples.

Figureitout January 6, 2015 11:44 PM

Clive Robinson
–Unfortunately no, I do like the head of our physics dept. and another prof who specializes in detecting very small “signals”. Can’t say the same about my upcoming prof and I’m looking forward to next semester when I’m done w/ him/her for good (might as well just give me the book and charge me less for the credits as I teach myself). If they can’t document their “untested knowledge” then it’s up to someone else to come along, learn it, then test it as best as we think we can to actually be useful.

Nick P January 6, 2015 11:46 PM

@ Thoth

This page? That’s common in academia, especially if it’s older people. It took me a while to give up my HTML 3.2 habit because it worked on everything, was simple, was fast, and didn’t carry a ton of risks. Many who don’t simply follow the old ways just throw together basic HTML because it’s easy, esp for lazy people. 😉 The page itself may or may not be generated with Ur/Web. I know it compiles to regular Web tech but not how that works.

An article about web application security tech, including Ur/Web, is in the list I gave you. Opa is another good design. A fork of either to generate code in safe languages or on secure platforms would be great. They’ve at least both used the kinds of tools (esp Ocaml) that would make that easier.

EDIT to Add:

Looking at Page Source, he uses Frames. He’s definitely old school: people were hating on frames sometime after 2000. My stuff used them for a while for their benefits. I had to quit when browser vendors started talking about dropping support for them and it became a dirty word in mainstream web development for some reason.

The main page mentions in HTML comments (invisible to user) that support for Ur/Web is available at… The reference might mean the page was autogenerated by an early version of the tool. However, an example site I visited used different tags at the top and didn’t have that at the bottom. So, who knows. I’m going with the “old site they threw together with basic HTML” or “auto-generated by Ur/Web/OldSchoolEdition.”

Clive Robinson January 7, 2015 12:32 AM

@ Nick P,

re link to Wael

Err, it was a link to Benni actually…

Back in 2011 RobertT gave a very heavy-handed hint that the German equivalent of the NSA was sponsoring, in an indirect / hidden way, the development of (in)secure OSs via front companies and the like. Benni has pointed to some similar behaviour that insiders have revealed, and it has ended up in the German press.

One of the things nobody expressed out loud at the time was the German involvement in Stuxnet; it was assumed, incorrectly, that it was a technology swipe via a "Black Bag" job rather than a freebie for the Germans to earn "Pixie points" from the US and Israeli Intel Community. As RobertT said, "follow the money", especially when it comes to Siemens and DT, and it leads back to…

As for Prisons, it's still a work in progress in quite a few ways, and I wish I had more time to work on it. Some things have changed; for instance, on that blog page I mention a slightly different way to do a voting protocol, and my subsequent investigations have thrown up what appears to be a new and novel attack vector which I have yet to investigate more thoroughly. So I'm nowhere near ready to write it up as a single document when it's spawning other lines of investigation that are worth several papers each.

As for your other comments, I think you are still missing points I've already highlighted and explained, so in the meantime go back and read them.

Thoth January 7, 2015 1:12 AM

@Clive Robinson, Nick P
What is the likelihood that someone can successfully make an observable and testable chip in an open manner, so that interdiction would change the observable behaviour of the chip, and the chip is open so as to thwart closed-source black boxes?

The voting protocols have to depend on a quorum of trustworthy nodes, and as long as no nodes can be trusted, they cannot function.

The setup phase of the voting protocol is essential for having at least a quorum of trustworthy nodes to begin with; after that, during the data-access phase, if the quorum contains all malicious nodes it would still fail. Probably the current weakness of many known voting protocols is in the setup phase: finding properly working trustworthy nodes (just me trying to guess what the voting protocol does).

What can we really trust in this era where almost every other chip, diode, transistor, resistor, capacitor … etc … has a high chance of backdoors ?

Are we back to the age of using paper, pencil, string, grains, beads and marbles for maths ?

Clive Robinson January 7, 2015 2:05 AM

@ Thoth,

The voting protocols have to depend on a quorum of trustworthy nodes, and as long as no nodes can be trusted, they cannot function.

Err, not true; rather than looking at it as trustworthy, look at it as honest versus dishonest. If the three nodes are of different CPU types, then the software will of necessity be different on each, as would any malware.

Thus, as the three nodes have a common input, they cannot all be infected by new malware at the same time, so you will get a transitory disagreement whilst malware is put on all three serially.

There are two exceptions to this,

The first is an insider attack where the voting circuit output is physically disabled whilst the malware is loaded on all three nodes.

The second is malware that is loaded but does not change the functionality from honest to dishonest until a given input signal. Then, as the three CPUs are fed from a common signal, their state would change at the same time.
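
(For those following along, the basic 2-out-of-3 vote itself is no more than this sketch, where the three results come from the three different CPU types, and any disagreement at all is surfaced rather than silently outvoted:)

    def vote(a, b, c):
        """Majority vote over three independently computed results.
        A single dishonest node is outvoted, but any disagreement is
        also flagged, so the transitory infection window is visible."""
        results = [a, b, c]
        all_agree = len(set(results)) == 1
        for candidate in results:
            if results.count(candidate) >= 2:
                return candidate, all_agree
        raise RuntimeError("no two nodes agree; halt and investigate")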

There is no easy solution for a dishonest insider attack, just the usual physical preventions / indicators etc.

However you can mitigate the second one in a number of ways.

More on that later; I have to jump trains shortly as part of this morning's grind 🙁 and the latter is the "underground".

Thoth January 7, 2015 2:24 AM

@Clive Robinson
Ah yes … I forgot the term was honest/dishonest. Was using the idea from here: http://eprint.iacr.org/2014/429

I would assume the best way to get over a ton of these attacks is to make very small modular functions so it is harder/unlikely to go wrong ?

An example would be a small module just for function X and another for function Y, and they simply do their jobs properly.

Ouch lots of train hopping in London/Metro tunnels 🙂 ?

Wael January 7, 2015 2:34 AM

@Thoth,

My meaning of TPM refers to a hybrid CPU core with a TPM inside it.

Why?

What I am referring to, as suggested, is to simply put a dedicated security processor, with full user knowledge and access, on the board or adapted to the board.

That is the case: The TPM is advertised, the user has access to it through defined, standard interface(s)!


Yes, there are many boards with TPM modules on them, but the user can't really get to these modules, and they are so badly hidden by so many obfuscation layers that it makes access irritating and you have no idea what they are.

The user can! They are abstraction layers to make the interface standard, less error prone, and to save the user from writing raw byte streams to the TPM. If you want to know what they are, then you need to read some of the links I posted, that was the purpose!


What I meant was something the user knows, the user installs and the user controls.

What does that accomplish? Does it help to give the user a false sense of control by giving the user a chip to install?


Of course this would be as good as the user inserting his own smartcard or HSM inside, since he has a much better chance of control anyway.

Not quite! Look up the static root of trust. The TPM is not a user-vetting component; it's a platform-vetting component, hence the name Trusted Platform Module — not Trusted User Module.


In simple terms, what I meant are chips that provide security and are fully visible and within the knowledge of the user. The user can straight away call the chip to do whatever he wants, instead of a TPM where you typically need to bypass so many "gateways" just to get to the "front door" and it may bite back at you.

In a protected OS architecture, if the user wants to straight-away-call-the-chip-to-do-whatever-he-wants then the user must be-able-to-write-his-or-her-device-driver-and-understand-the-spec. This is a typical OS -> HW communication stack. Do you think the user speaks directly to the smart card using ISO-7816? Are you mixing up users and application or system developers?


TPMs are supposed to be for the security of the user and the system

They are meant to establish a primitive called a hardware root of trust which can be used as a building block for other concepts and use cases.
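
If it helps to see the primitive, the "extend" operation behind that root of trust is just a hash chain. A toy sketch (not real TPM byte streams; the measured components here are made up):

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # New PCR value = H(old PCR || H(measured component))
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    pcr = bytes(32)                                             # PCRs start at zero on reset
    for component in [b"firmware", b"bootloader", b"kernel"]:   # hypothetical boot chain
        pcr = extend(pcr, component)

    # A challenger comparing the final value against an expected one detects any
    # change anywhere in the measured chain; no single step can be "un-extended".

Everything else (quoting, sealing, attestation) is built on top of that primitive.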


but what are most TPMs doing these days? They are not doing what they are supposed to do…. A bunch of millions of TPMs and we are nowhere closer to better security, right 🙂 ??

Hardly! Read the links…


Those already-embedded TPMs … I call them lock-in backdoor chips where users don't know what's up.

A statement that’s only valid if the users know what’s up with the rest of the components.


At least if you manually get one into the front cover of the device and specifically allow and are made aware of its use (like an individual security device, i.e. smartcards), then that is so much better, because they can take more precautions against observation or use its functions

Hmmm… Another Hmmmm… You know TPM 1.1b and 1.2 were available on a daughter card that the user could install on their own! Pay close attention to the picture in my previous post. See that vertical board that says “PCB subver.. I mean made in China”? That little bad boy soldered to it is a TPM. The whole board is detachable, and at one point was sold after market. There were problems with that around binding to the platform and some attack vectors (that were addressed, by the way.) That aside, the fact that the user “manually adds the TPM” to the platform adds little security. You might as well request that the user writes the BIOS firmware that does what’s needed starting from the time the PC or platform is turned on. That’s pretty unreasonable to expect from a “user”, wouldn’t you agree?


albeit we can all readily conclude that all security chips in any kind of security processor, whether embedded or not, are simply backdoored anyway … it's just whether you are aware of the existence or not.

If that's the case, then why do you impose all the previous requirements? They don't offset the fact that a TPM is backdoored, whatever that means! Besides, what if a TPM is backdoored? What consequences do you see?


One thing is the lower chance of collusion if the chipboard has a separate security processor and the CPU itself has another. The likelihood that both of them are colluding is lower, but we all know most of them are backdoored.

Reasonable thinking. How do we know that most are backdoored? It’s one thing to design a system with this possible assumption in mind, and a totally different thing to assert most chips are subverted.


Maybe I should revise what I say this way: "we should all use mental calculations to do crypto, since smartcard crypto-processors in standalones, TPMs in backdoored chipset motherboards, HSMs and the like are all crapped". A bit overly paranoid, but the choice of a virgin chip is close to non-existent.

I am starting to suspect that your tinfoil hat is subverted! If you’re overly paranoid, switch to the salad bowl 😉


Well, who knows, I might actually dust off the old abacus I used in my younger days and figure out a way to put some basic crypto onto an abacus, and post a couple here for you guys to encrypt/decrypt if it is successful? Probably the first one would be a basic ultra-lightweight Feistel block cipher? (No promises on this one, as the failure rate may be high.)

Then how do you prove to a challenger that the state you reported can be trusted? Send it your abacus along with the steps you took, perhaps on a piece of papyrus paper to fit the technology level? Then again, how would you do an elliptic curve cryptography operation on your little abacus? If you try, I think you'll get close. But only close enough to get an epileptic seizure 😉

Thoth January 7, 2015 3:06 AM

@Wael
Manual calculations are a proof of concept that technology is nice to have but not a must-have to ensure personal privacy and security. The better idea wouldn't be asymmetric crypto, as we all know that isn't going to be very convenient with mechanically-assisted mathematics. Certain symmetric crypto might be too slow on this stuff as well. One of the starters might be OTP (if the keys are generated randomly and are of good quality, the keystreams properly secured, and the messages of a secure length), which means the randomness can be done digitally on a "secure", air-gapped machine with no direct input and output, albeit someone could walk up to it and do some side-channel analysis, which is another problem of its own.
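
(The OTP part is the easy bit to show; a toy sketch where the pad comes from the OS CSPRNG, must be as long as the message, is used once and is then destroyed:)

    import secrets

    def otp_encrypt(message: bytes):
        pad = secrets.token_bytes(len(message))          # pad as long as the message
        ciphertext = bytes(m ^ p for m, p in zip(message, pad))
        return ciphertext, pad                           # pad travels out of band, once

    def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
        return bytes(c ^ p for c, p in zip(ciphertext, pad))

The XOR itself you can do on paper or beads; the hard part, as always, is generating and moving the pads.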

On the TPM/chips and what not … I guess we have to accept the fact that it's not convenient to trust the chip, and that leaves us with rather manual efforts.

Wael January 7, 2015 3:33 AM

@Thoth,

I guess we have to accept the fact that it's not convenient to trust the chip, and that leaves us with rather manual efforts.

I respect that, but don’t ask me to agree! Because if I did, then I’ll send an email to TCG suggesting they change the name of “Trusted Platform Module” to “Untrusted Platform Module”. It’s not a clever marketing name and for the same reason, I think their marketing guys will throw something back at me 😉 Wait a second, I have a better idea! You send them an email suggesting they create a new subgroup called: The Rather Manual Efforts Group. It’s perfectly OK, they’re nice guys and girls, bu-bu-but please don’t mention my name 🙂

Thoth January 7, 2015 4:00 AM

I was scanning through the NXP SmartMX crypto processor page, and under the CMOS14 SmartMX family properties it says it could be used for "native or open platform and multi-application operating systems in market segments such as banking, E-passports, ID cards, Health cards, secure access, Java cards, Near Field Communication (NFC) connectable mobile hand sets as well as Trusted Platform Modules (TPM)".

@Wael
Interesting how wide a range of uses a crypto/security processor can have, from smartcards to TPMs and so forth. Probably for TPMs they might have their own circuits around those NXP chips so it's essentially a blackbox within a blackbox ?

Wael January 7, 2015 7:03 AM

@Thoth,

it’s essentially a blackbox within a blackbox ?

It depends what you mean by a "black box". The functionality is defined. There are "musts" that translate to mandatory functionalities, there are "shoulds" that translate to recommended optional functionalities, and there are "mays" that translate to purely optional functionalities. How these functionalities are realized in the circuitry and firmware is usually hidden, i.e. a black box. So in that sense, it could be viewed as a greybox within several layers of protection.

Nick P January 7, 2015 3:02 PM

@ Clive

Not getting touchy are you? 😉 Well, it’s quite unreasonable to expect people to comb the Internet for enough of your posts to try to piece together your proposal. I do an occasional favor for a friend so here’s what two hours found. You can review it for accuracy before I do further security review.

@ Clive, Wael, Thoth

Clive’s Prison architecture with links to specific comments

Ref 1. Original post:

https://www.schneier.com/blog/archives/2012/06/friday_squid_bl_330.html#c784474

“Now if you look at the silicon area on some modern CPU’s it’s vast, and the same area can hold upwards of 1000 very basic RISC style CPU’s

If you put an MMU between each CPU and the main busses going off to memory and I/O the CPU can be effectivly issolated. Now think what you can do by putting the MMU not under the control of the CPU but a hypervisor… You can then make the CPU effectivly blind as well.

One. asspect of this is that the CPU is also effectivly imprisoned. The hypervissor only gives it sufficient memory to carry out it’s execution thread and no more. Further it read/writes to buffers not memory or devices. It means that it actually does not require an operating system just a very very simple interface equivalent to a very striped down API all of which is unprivaledged and requires just a few bytes to implement (I’ve got one which is just a couple of hundred bytes and that’s still bloated). This API effectivly talks to the Hypervisor.

Now because you have so many CPU’s you could effectivly have each one running an independent thread of a task. The more you strip the fat of each thread the clearer a signiture it will have and the easier it is for a hypervisor to check.”

Analysis: Similar to the separation kernel concept, as they have static memory and CPU time allocation for the same reason. Around 1,000 RISC cores, each with an MMU. There are buffers instead of direct memory access. The hypervisor controls the buffers, the MMU, and the task to load. The hypervisor relies on signatures. Claims having no OS or other "fat" makes the signature easier.

Next comment:
https://www.schneier.com/blog/archives/2012/06/friday_squid_bl_330.html#c785794

Suggests hardware FIFOs, a hypervisor-controlled memory region with overflow protection, or a read/write-only stack: the CPU writes to it and the hypervisor CPU reads. Harvard architecture where the slave CPU can’t modify its own code.

Elaborates on software side. Each task launched from shell is designed for inherent security and has an associated signature. It runs with absolute minimal resources on its own CPU.

Signature elaboration. “Occasionally secure.” Only inspect upon output or after a period of time without output. Hypervisor scans “instruction and data memories for anomalies” while the CPU is halted. The hypervisor itself is a state machine that OKs or fails hard. It’s watched by another, more trusted hypervisor. Don’t remember any elaboration on the second hypervisor.

Claims malware injected from outside has no memory to be in. Not impossible, but unlikely.

Analysis: Assumes one can make a signature for any real-world function that detects malice reliably. For instance, a data-driven overflow during a cryptographic algorithm would use no extra memory or CPU time while producing the same random-looking output. Additionally, so many real-world applications are dynamic that I have doubts about the minimal-memory protection. There’s potential, given what Clive, others and I have done with state machines and functional programming, to make state more predictable. So, it’s a claim I have low confidence in (temporarily) but worth exploring.

Next comment:
https://www.schneier.com/blog/archives/2012/06/friday_squid_bl_330.html#c796361

Elaborates on his signature concept a bit. Correctly points out that complex functions with data-dependent branches make signatures chaotic. That’s my concern. Implies breaking those functions into other functions with straightforward input-to-output behavior, or transforming the large function into one with these properties. Claims CISC CPUs’ complexity adds to the noise; RISC is better for signatures. Uses sensors like memory used or elapsed time, with the function implied to be bounded in that way. Adds that it’s hard but must be done by more talented and security-aware programmers.
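
As a rough illustration of what a check against such sensors could look like, here is my own sketch with invented names; real sensors would be hardware counters for cycles and memory touched, not the wall-clock time and output size used below.

```python
# Toy "signature" check: profile a tasklet's coarse behaviour once, then flag
# any later run that drifts outside the recorded envelope. Wall-clock time and
# output size stand in for the real sensors (cycle counts, memory touched).
import time

def profile(task, samples):
    """Record a crude envelope from known-good runs of the tasklet."""
    times, sizes = [], []
    for s in samples:
        t0 = time.perf_counter()
        out = task(s)
        times.append(time.perf_counter() - t0)
        sizes.append(len(out))
    # Generous margins so this toy isn't flaky; hardware counters would be tighter.
    return {"t_max": max(times) * 10 + 1e-4, "size_max": max(sizes)}

def checked_run(task, data, sig):
    """Run the tasklet and halt (raise) on any signature deviation."""
    t0 = time.perf_counter()
    out = task(data)
    elapsed = time.perf_counter() - t0
    if elapsed > sig["t_max"] or len(out) > sig["size_max"]:
        raise RuntimeError("signature deviation: halt the cell and inspect it")
    return out

reverse = lambda b: b[::-1]                      # a simple, data-independent tasklet
sig = profile(reverse, [b"x" * 32, b"y" * 32])   # profiled on fixed-size inputs
print(checked_run(reverse, b"z" * 32, sig))
```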

Elaborates on software dev. The average developer basically picks out modules that are already decomposed and signature profiled. Plugs in their numbers. The developers themselves might still insert malicious constructions.

Reiterates that reducing CPU and task complexity leaves malware minimal memory to work in. Data-independent signatures reduce side channels. The hypervisor also becomes a debugging tool during development, as developers are forced to deal with issues that it finds.

Next comment:
https://www.schneier.com/blog/archives/2012/06/friday_squid_bl_330.html#c800918

“That is, the CPU it is watching produces various “heart beats” via the sensors, which should happen at known times for a known number of occurrences. Remember these watched CPU’s are always in “background” operation as they do not get interrupts of any kind causing a state switch into “foreground” operation, thus their behaviour is very deterministic and easily measurable. Any deviation from the expected signal of heart beats is an indicator that there is a problem. It does not matter if the problem is an exception or error in the program or an attempt to load malicious code into the execution space. The hypervisor halts the CPU and inspects its registers, counters and execution space; if all is not as expected then the hypervisor creates an exception.”

Analysis: Sounds like a lot of overhead per RISC core. Real-time monitoring and parallelism imply almost as many hypervisor cores as compute cores. Might not be what it seems. He might clear it up.

“If you then consider what malware has to do and how as a series of steps, the first is to cause a branch in the expected flow of execution. ”

True.

“This branch will cause the timing of heart beats to change in time, and this should be recognisable in a well-found task; this should then trigger the hypervisor’s inspection of the CPU and its reduced environment.”

Maybe. It should be true for most malware; hence, probabilistic. Gives Forth’s stacks upon stacks as an example to support the practicality of his decomposition.
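
For the heartbeat idea specifically, a toy might look like the following. The beat numbers and the cooperative beat() callback are invented for illustration; in the actual proposal the beats come from hardware sensors the task cannot fake, which is exactly what makes it stronger than this software caricature.

```python
# Toy "heartbeat" monitor: the watched task reports a counter at fixed
# checkpoints, and any extra, missing, or out-of-order beat (for example from
# an unexpected branch) trips the monitor and halts the run.
EXPECTED_BEATS = [1, 2, 3, 4]          # the profile recorded for a healthy run

def watched_task(data, beat):
    beat(1)
    total = sum(data)
    beat(2)
    if total > 1000:                   # an unexpected path adds a beat the profile lacks
        beat(99)
    beat(3)
    beat(4)
    return total

def monitored_run(task, data):
    beats = []
    result = task(data, beats.append)
    if beats != EXPECTED_BEATS:
        raise RuntimeError(f"heartbeat deviation {beats}: halt CPU and inspect state")
    return result

print(monitored_run(watched_task, [1, 2, 3]))     # healthy run prints 6
# monitored_run(watched_task, [500, 600]) would raise: extra beat 99 in the trace
```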

Next comment:
https://www.schneier.com/blog/archives/2012/06/friday_squid_bl_330.html#c807499

Maps the traditional UNIX model to his Prison architecture. User-mode processes and IPC work about the same from the shell user’s perspective. The change is in the kernel, where the hypervisor mediates all activities. Shared memory is avoided for IPC: “same as other communication streams.” On traditional CPUs, tasks block and the OS does a context switch. He says his architecture has a single task on each CPU; the CPU just halts, and the hypervisor inspects it.

My comment on that to Wael: “His uses very-fine-grained decomposition, a special type of POLA, signature monitoring, & little to no legacy capabilities. I’d love to try his but I still can’t see it being implemented in hardware. (I’m not a hardware guy either, so it’s hard to wrap my mind around practicality of a totally radical hardware approach.)”

Now I know a lot about hardware and my recent post says the same thing in more specific detail.

Ref 2. Different post.
https://www.schneier.com/blog/archives/2012/07/friday_squid_bl_328.html#c818030

“First off I’m not sure Nick does “vacation” in the normal sense”

A vacation would be nice… Moving on.

“So the prison model is designed to work like a massively parallel system of CPUs with small tasklets running on each CPU which pipeline their results not to each other but to and from the CPU hypervisor.”

Hmm. It seems likely that Clive’s posts had an influence on my thinking, as one of my designs was a Massively Parallel Processing architecture with special MMUs at each node. There are no similarities past that, but I doubt I was totally original now: something he planted probably popped up years later.

More interesting discussions with Clive, Wael, and RobertT on a variety of issues in that thread. However, I think I’ve collected enough posts to have conveyed Clive’s prison architecture using his own words and elaborations. We’ll wait for him to accept or supplement it before any further security review happens.

@ Wael

re analogy

You said you’d drop the analogy if I had something better. Now I do. High-assurance engineering once thought purely in terms of state changes but more recently talks of flows (e.g., information flow control). My approach isn’t a castle but an Interstate. It’s designed so traffic can only flow a certain way and flow well. Integration also happens in ways that ensure proper flow. Only the most severe accidents (exceptions) lead to total flow failure: most just make the system run in degraded mode to work around the failure. Malicious or unreliable drivers also cause alerts to responders, who can contain damage fast due to the design. The Interstate doesn’t aim for perfection and yet still achieves a very high rate of proper flow.

The newer, secure architectures are mostly about controlling how data flows through the system. The routes and effects of data flows are constrained by the hardware design. Improper flows always cause a noticeable exception; the only flows that can run are secure ones. No inspections are required to ensure this in the preventative designs. Further, a number of them (esp. SAFE and CHERI) use very simple mechanisms running side-by-side with the CPU core that add little overhead and allow very flexible user-defined policies. Like an Interstate that can adapt itself to new flow-management strategies.
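
A toy example of “improper flows always cause a noticeable exception”, purely to illustrate the idea: values carry labels and a sink enforces a policy. This is loosely inspired by information-flow control and tagged designs like SAFE and CHERI, but the Tagged class, the labels, and declassify_len are my own illustrative inventions, not those systems’ actual mechanisms.

```python
# Toy information-flow check: every value carries a label, and a sink only
# accepts labels its policy allows, so an improper flow raises immediately.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    value: bytes
    label: str                         # e.g. "public" or "secret"

def declassify_len(t: Tagged) -> Tagged:
    # An explicitly permitted flow: only the length escapes the "secret" label.
    return Tagged(str(len(t.value)).encode(), "public")

def network_send(t: Tagged) -> None:
    if t.label != "public":            # improper flow -> noticeable exception
        raise PermissionError(f"flow violation: {t.label} data reached a network sink")
    print("sent:", t.value)

secret = Tagged(b"key material", "secret")
network_send(declassify_len(secret))   # allowed: only the length flows out
# network_send(secret) would raise PermissionError
```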

So, how do you like that analogy? And what other safety- or security-critical metaphors can apply to the safe-flow-by-design concept?

Wael January 7, 2015 7:49 PM

@Nick P,

so here’s what two hours found.

Thank you! Great effort! Brings back some good memories.


My approach isn’t a castle but is an Interstate.

Last minute curve ball! Excellent 😉


So, how do you like that analogy?

I like it 🙂 Initial remarks:
Castle, prison, Interstate! The analogy is missing something: the end points of the Interstate, landmarks on the way, etc… You can add the Interstate, but the castle still needs to be there (I think). I am afraid two years from now we’ll have added elements that span the visible universe.

@Clive Robinson,

If you are going to spend the weekend on it, firstly read the whole thread and note the dates; your eyes might pop open when you do.

Can you give me a small hint?

Say something about the footnote! The suspense is killing me! And I need to run a self assessment diagnostic on my predictive mind reading abilities! Try not to let my words influence your comment, and don’t feel bad if I get it wrong 😉

Clive Robinson January 8, 2015 1:38 AM

@ Nick P,

It’s early in the UK and I have a busy day ahead, and Friday is going to be “catch up if I can” day, so my reply will be at the weekend, when hectic is just “seven down” in the crossword puzzle, not a state of existence.

@ Wael,

Hmm I wonder about your ability to read between the lines sometimes, which if you remember got me a yellow card in the past…

So here goes nothing… Stuxnet, if you remember, attacked an industrial control system designed by a well-known German company. Although originally nobody thought the company had anything to do with the build of the Stuxnet worm, that is looking less and less likely; what was put down as a “black bag” job to get signing certs etc. was in fact anything but.

The company concerned can and has been traced back, financially and by personnel, to being in effect a “front company” for the German equivalent of the 5Eyes signals and other Intel Community agencies (much the same as that which became BT was, and still is).

RobertT dropped a very heavy-handed hint that he knew it was a front company and who was pulling the strings, and that the agency concerned was backdooring secure OSs and thus other products as well.

As Nick P has noted in the past, RobertT gave the impression of looking over his shoulder much of the time, which was sort of put down to the way criminal elements in the Asian arena “do business” and the fact he was party to information involving industrial espionage. However, there was the occasional hint it was “Gov / IntCom” as well, including reverse-engineering work on crypto (google “RobertT Lorenz”).

The German string pullers, as it turns out, also have very close links with the Israeli IntComm, and have been trying to get their feet under the boardroom table in the 5Eyes club for quite a long time. In fact rather longer than has been publicly realised: information has come to light that they have been “doing party favours” in vast numbers for the NSA / GCHQ since the end of WWII. I guess originally on some kind of “Paper Clip” arrangement for “war crime” immunity, but these days to get full 5Eyes membership.

Thus giving the US & Israeli IntComm “the keys to the door” on the industrial control software or opening the door for them is way way more likely than a “black bag” job.

Which in turn brings in further questions about elements in Microsoft and how stuff has gone out under one of their signing keys.

As I’ve pointed out on quite a few occasions over the years, “Code Signing” is an illusion that the industry has bought into. In reality all it says is that on such and such a date, someone with access to the signing key used it on a block of executable code etc. It offers no guarantee of quality or security, or even that the code was made by the owner of the signing key. It could in practice be the kid who sweeps up at night to pay his way through school / Uni, who knows where a disk is with a copy of the signing key. Why the ICT industry puts such faith in signing keys I really don’t know, especially as some have already been broken or made public (TI calculators etc).
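
To make concrete how little a valid signature asserts, here is a minimal sketch using Ed25519 from the third-party Python `cryptography` package; the throwaway key and “firmware” bytes are illustrative only. Verification succeeds for any bytes the key holder chose to sign, malicious or not:

```python
# What a code signature actually asserts: "this exact blob was signed by
# whoever held this private key", and nothing about quality, security, or
# who really wrote the code. Requires the third-party `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # anyone holding this can sign anything
firmware = b"\xde\xad\xbe\xef"               # could be good code, could be malware

signature = signing_key.sign(firmware)       # "someone with the key signed these bytes"

public_key = signing_key.public_key()
try:
    public_key.verify(signature, firmware)   # raises if the bytes or key don't match
    print("signature valid: the key holder signed these bytes, and that is all it proves")
except InvalidSignature:
    print("signature invalid")
```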

Wael January 8, 2015 2:57 AM

@Clive Robinson,

Hmm I wonder about your ability to read between the lines sometimes, which if you remember got me a yellow card in the past…

My bad… The formatting threw me off.

Thoth January 8, 2015 10:22 PM

Post-Quantum Secure Onion Routing.

Simply put, TOR gets equipped for post-quantum fallout by replacing the DH-based mechanism with lattice crypto. To me, it’s just steamrolling ahead without solving a whole ton of TOR issues (see previous posts by many of us here, which you’ll need to search for). The end result would be a quantum-resilient key-exchange channel, but the traffic analysis problem (raised by Clive Robinson), the endpoint security problem (noticed by so many of us), etc., all stay where they are.

Link: http://eprint.iacr.org/2015/008
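
For readers who want the general shape of the change: the handshake swaps an interactive DH for a KEM-style encapsulate/decapsulate exchange. The ToyKem class below is a deliberately insecure placeholder showing only the message flow; it is not the lattice construction from the paper, and a real deployment would plug in an actual post-quantum KEM.

```python
# Rough shape of swapping a DH handshake for a KEM-style exchange: the client
# encapsulates to the relay's public key and both sides end up with the same
# shared secret. ToyKem only demonstrates the keygen/encaps/decaps interface.
import hashlib
import os

class ToyKem:
    def keygen(self):
        sk = os.urandom(32)
        pk = hashlib.sha256(b"pk" + sk).digest()         # stand-in for a real public key
        return pk, sk

    def encaps(self, pk):
        secret = os.urandom(32)                          # the shared secret
        mask = hashlib.sha256(b"mask" + pk).digest()
        ct = bytes(s ^ m for s, m in zip(secret, mask))  # "ciphertext" sent to the relay
        return ct, secret

    def decaps(self, ct, sk):
        pk = hashlib.sha256(b"pk" + sk).digest()
        mask = hashlib.sha256(b"mask" + pk).digest()
        return bytes(c ^ m for c, m in zip(ct, mask))

kem = ToyKem()
relay_pk, relay_sk = kem.keygen()          # relay publishes its public key
ct, client_secret = kem.encaps(relay_pk)   # client sends ct during circuit setup
relay_secret = kem.decaps(ct, relay_sk)    # relay recovers the same secret
assert client_secret == relay_secret
print("both sides share:", client_secret.hex()[:16], "...")
```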

Wael January 9, 2015 10:25 PM

@Nick P, @Clive Robinson,

There goes my weekend!

I’m sorry. Something came up that I need to finish, and this requires a lot of thought and organization. Will get to it soon.

Eileen Wen December 10, 2020 2:02 AM

OT RE: RF-authentication via SI4432 radio module
–Able to compile the “hello world” blink an LED program (which had to short couple pins which is wierd but ok) and was able to flash newest demo firmware w/ OOK modulation after frantically looking for original firmware when I flashed over it. Just trying to get used to SiLabs way of doing things (haven’t worked w/ them much, code is a little more “closer to the metal” so slightly more difficult). Also worry about being able to explain what exactly’s going on as I delve deeper w/o “losing” people in register values. Things were going so smoothly, unusually smoothly..too smoothly…then boom finally a failure…Turns out, I didn’t know about this even before, there’s a chat program w/ a virtual com driver aptly named “EZlink Chat” that would’ve done almost exactly what I wanted! Well it’s not fcking EZ lol and there’s little documentation to troubleshoot so yeah…I’d have to bust open the programs and dig into more filthy drivers…Windows terminals won’t work w/ this chip, they mentioned something about them stopping supporting them so SiLabs had to make their own and their terminal program isn’t working at all and throwing up error never heard of before so…It’s unlikely I write a good reliable terminal program to work w/ this board so I’m looking but porting it to this chip will be tricky, especially on Windows (yuck). I’ll get something like that working eventually, probably Arduino. Something fcky’s going on, it would freeze up my computer and the board is heating up around USB pins and can smell burnt metal but no severe damage, I honestly don’t know why…I thought after smelling burnt metal I’d burn out a USB port but nope thankfully. I’m not a paying customer (have limited code compile space) but there’s a power issue. Man that’s annoying. Still interesting platform and they’ve stopped recommending this for newer designs so I bet the newer boards are easier to get up and going and will be better supported. Also there’s a feasycom module w/ this chip: https://www.feasycom.com/product-hc05-bluetooth-module-dual-mode-bluetooth-module.html and also it’s probably been used for educational purposes and maybe applied to some opensource projects: https://www.feasycom.com/product-bm77-bm78–Bluetooth-Dual-Mode-Module.html

Also, I’ve noticed this on other platforms I work on, but as it stands flashing the board the LCD screen flickers, running thru code w/ debugger the LCD flickers, so that’s a quick EMSEC problem which is not a surprise at all, but not good for security. It stands to reason pertinent info will leak on code execution (way beyond just what’s meant to be transmitted) as it loops thru a relatively small program, just takes time to put noise-to-signal. EMSEC gets tricky when you are…trying to transmit data securely…so yeah, just an observation.

- December 10, 2020 3:21 AM

@ Moderator,

The above from “Eileen Wen” is unsolicited advertising.

They have taken the bottom of a long post (from figureitout) and used it to hide bits to advertise their probably dodgy GSM products.
