Comments

Daniel • November 15, 2013 7:11 PM

It is more privacy related than security related but…

http://www.forbes.com/sites/kashmirhill/2013/11/13/your-phone-number-is-going-to-be-scored/

Just as Americans now have a credit score, they will soon have a “phone score”. One interesting impact of this is on those who use so-called “burner phones”. Police will soon be able to tell which phones are burners by looking at their scores.

To me, it’s one more sign of how we have confused trust with reputation. But that’s a different story.

Jonathan Wilson • November 15, 2013 10:18 PM

The real solution to burner phones is to require greater identity checks before you are allowed to buy and activate cellular service. They did it in Australia in response to some terrorist concerns; I bet the US could do the same thing.

I suspect it hasn’t been done because the companies that sell the pre-paid phones and supply service for them (AT&T, Tracfone, Verizon, Wal-Mart, Radio Shack etc etc etc) have lobbied hard against it due to the extra costs and lost sales.

65535 • November 15, 2013 10:57 PM

@Daniel

It would appear that Telesign wants to hop on the NSA’s gravy train of data-mining subsidiaries (corporations that sell personal data to the government). It’s one more example of companies that want to cash in on the growing data-mining partnership with the NSA and other three-letter governmental agencies.

“Telesign wants to leverage the data — and billions of phone numbers — it sees deals with daily to provide a new service: a PhoneID Score, a reputation-based score for every number in the world that looks at the metadata Telesign has on those numbers to weed out the burner phones from the high-quality ones.” -Forbes

Those free burner cell phones given out by Obama are now looking very sketchy (a honey pot trap?).

  1. Obama and Burner TracFones:

[AP]

β€œTracFone Wireless, a provider of prepaid mobile phones, will provide free mobile-phone service for as long as a year to Virginia families who earn less than 135 percent of the federal poverty level. For a four-person household, for example, the threshold would be an annual income of $28,620.”

http://articles.washingtonpost.com/2008-10-31/news/36859728_1_prepaid-mobile-phones-phone-users-phone-service

  2. [Prepaid Mobile Phones – WaPo]

“In the 3 1/2 years after false rumors started that the Obama administration was giving free cellphones to poor people — and six months after a racially charged video about it went viral — a once-obscure phone service subsidy is getting renewed scrutiny on Capitol Hill. “The program has nearly tripled in size from $800 million in 2009 to $2.2 billion per year in 2012,” the senior Republicans on the Energy and Commerce Committee wrote in a March 26 letter to the Democratic minority. “American taxpayers — and we as their elected representatives — need to know how much of this growth is because of waste, fraud and abuse.” …Lifeline was begun not by President Obama but under Ronald Reagan.”

http://articles.washingtonpost.com/2013-04-09/politics/38405292_1_president-obama-obama-phone-lifeline

name.withheld.for.obvious.reasons • November 16, 2013 12:49 AM

There is nothing in the PPD that warrants classification; this is public policy… that is, unless the President of the United States believes he or she is the executive of a democratic republic without citizens.

I have included a section of the PPD released last week, with a few markups–

Malicious Cyber Activity:
Activities, other than those authorized by or in accordance with U.S. law[1], that seek to compromise or impair the confidentiality, integrity, or availability of computers, information or communications systems, networks, physical or virtual infrastructure controlled by computers or information systems, or information resident thereon.

Cyber Effect:

The manipulation, disruption, denial, degradation, or destruction of computers, information or communications systems, networks, physical or virtual infrastructure controlled by computers or information systems, or information resident thereon[2]. (NWFOR)

Cyber Collection:

Operations and related programs or activities conducted by or on behalf of the United States Government, in or through cyberspace, for the primary purpose of collecting intelligence — including information that can be used for future operations — from computers, information or communications systems, or networks with the intent to remain undetected. Cyber collection entails accessing a computer, information system, or network without authorization from the owner or operator of that computer, information system, or network or from a party to a communication or by exceeding authorized access. Cyber collection includes those activities essential and inherent to enabling cyber collection, such as inhibiting detection or attribution, even if they create cyber effects.

NWFOR

COMMENT: This activity, in violation of federal law, percolates from the same stench that defines it as unlawful (by way of US Public Law); the extreme exercise in hypocrisy is obvious. //COMMENT
Activity and action specified above can best be described as woefully misguided; the usurping and penetration of any device whatsoever (computers, information or communications systems, or networks) without authorization is not properly bounded. Three features either violate modern law or persist in a manner unfamiliar to it. One would have to return to the Spanish Inquisition to replicate the authority and power granted governmental and non-governmental entities–this goes well beyond anything resembling jurisprudence:
The compromise of host systems, in violation of multiple public laws and the Constitution, for an indeterminate period of time. This includes the so-called third-party doctrine (and with commercial companies becoming qualified protecting entities, there is no limitation).

No termination or minimization of storage or use of collected data

Defensive Cyber Effects Operations (DCEO):

Operations and related programs or activities – other than network defense or cyber collection – conducted by or on behalf of the United States Government, in or through cyberspace, that are intended to enable or produce cyber effects outside United States Government networks for the purpose of defending or protecting against imminent threats or ongoing attacks or malicious cyber activity against U.S. national interests from inside or outside cyberspace.


U.S. FEDERAL AGENCIES, ACTING UNILATERALLY, CAN EFFECT DESTRUCTIVE ACTIVITY UPON FOREIGN AND DOMESTIC PARTIES (CYBER ASSETS DO NOT OPERATE OUTSIDE THE CONTEXT OF HUMAN ACTIVITY; IT IS A SHARED CONTEXT). NOTE THAT CAUSATION AND CONCRETE ATTRIBUTION MAY NOT BE ESTABLISHED IN THIS CONTEXT…

…IMAGINE…

A MALICIOUS OR OVERT ACTION TAKEN BY SOMEONE TO STAGE AN ATTACK FROM A HOSPITAL FACILITY, MAKING IT APPEAR TO BE A NETWORK OWNED BY THE PERPETRATOR. THE PERP CAJOLES THE U.S. DoD INTO A REACTIVE RESPONSE; 30 PEOPLE DIE AS THE HOSPITAL’S ELECTRICAL, ELECTRONIC, AND PATIENT SUPPORT SYSTEMS ARE ATTACKED BY THE U.S.


Rewording • November 16, 2013 3:15 AM

The real solution to democracy is to require greater identity checks before you are allowed to do anything.

RC4 Killer • November 16, 2013 4:26 AM

Microsoft kills RC4
A couple of days ago Microsoft pushed a Security update to its customers.
This update (KB2868725) takes RC4 out of the normal communication stack.
This update was marked as ‘important’ (one below ‘critical’, the highest).
This means it was pushed to many customers by default, with no discretion (so long as they have auto-update on).
This is for Win7 and onwards – so roughly 40% of non-mobile users (and a multitude of servers).
This basically dooms RC4.

We can judge the effectiveness of an ‘important’ update by the growth of IE10 market share.
It had a ~3% market share until March-2013.
These were elective adopters/previewers.
Then it began to slowly, but steadily, grow.
It has grown by 10+% since then (globally), and now stands at 15-20% – just 8 months later.
This is mostly due to Windows 7 users, receiving it as an ‘important’ update without discretion.
Windows 8 has roughly 5% of global OS share, so the rest must come from Win7 (as these are the only two supported Windows versions).

With Windows 7 accounting for ~40% of global desktop users, 10% growth would mean a quarter of its users have ‘important’ updates pushed automatically.
We should expect the same from the “RC4-disable” update.
Within a year, more than 10% of global users wouldn’t be able to connect to sites offering only RC4 encryption. This is bound to be more dramatic in the US and other wealthy nations, where more people have Win7/8.
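
If you run a site, you can check whether it still negotiates RC4 with a minimal probe like the Python sketch below (assuming an OpenSSL build that still ships RC4 – where RC4 has been compiled out, set_ciphers() itself raises an error):

```python
import socket
import ssl

def server_accepts_rc4(host, port=443, timeout=5):
    """Handshake offering only RC4 suites; return the negotiated
    cipher name if the server accepts RC4, else None."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.set_ciphers("RC4")                   # offer nothing but RC4
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.cipher()[0]       # e.g. 'RC4-SHA'
    except (ssl.SSLError, OSError):
        return None                          # RC4 refused or unreachable

print(server_accepts_rc4("example.com"))
```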

Rae Leggett • November 16, 2013 1:00 PM

Re: Terminal Cornucopia

I was looking at that today. My first thought was that human ingenuity knows few bounds. My second thought was regarding Bruce and his concept of “security theatre”.

Bryan • November 16, 2013 2:47 PM

Re: Terminal Cornucopia

Doesn’t surprise me at all. Many chemicals have multiple uses. Using kids toys I can see a number of ways to make them much more deadly. Security theater indeed.

tinfoil • November 16, 2013 6:10 PM

Apparatus and method for remotely monitoring and altering brain waves

http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=16&f=G&l=50&co1=AND&d=PTXT&s1=3,951,134&OS=3,951,134&RS=3,951,134

United States Patent 3,951,134

Abstract

Apparatus for and method of sensing brain waves at a position remote from a subject whereby electromagnetic signals of different frequencies are simultaneously transmitted to the brain of the subject in which the signals interfere with one another to yield a waveform which is modulated by the subject’s brain waves. The interference waveform which is representative of the brain wave activity is re-transmitted by the brain to a receiver where it is demodulated and amplified. The demodulated waveform is then displayed for visual viewing and routed to a computer for further processing and analysis. The demodulated waveform also can be used to produce a compensating signal which is transmitted back to the brain to effect a desired change in electrical activity therein.

Petrobras • November 16, 2013 6:13 PM

Our companies need hardware that is not backdoored.

We need a crowd-sourced project for a small computer, based on the i386, the Neo FreeRunner, or something else, with these modifications:

(1) All its processors should be very simple to prevent backdoors; performance is not a priority. Even for the small processor on the ethernet port.

They may be a copy of the i386, or of the ARM 6, with only 30,000 transistors. They may be adapted to a 22nm process for faster clocks and longer run on batteries. It should be designed to facilitate JTAG interfaces, Pipeline Emission Analysis (PEA) and/or other scrutiny access. Fully publicly documented.

(2) With a keyboard, a mouse, a screen, and inert removable media (CD, or DVD, or floppies, or Zip drives, …).

(3) It may have three or more ethernet ports to avoid the use of backdoored switches.

Please, Clive Robinson, or Nick P, or anyone else here with the know-how about processor fabrication, spare some time to write such a Kickstarter project. The government will not help us. Even at $2000 each, they will be worth it to companies.

Nick P • November 16, 2013 7:05 PM

Temporary solution is to use pre-9/11 hardware that’s less likely to be subverted. I just got my 1GHz RISC system recently. Here’s a list for businesses to choose from.

Yes, we need new chips. I’ve been dealing with issues recently that are distracting from my security projects. However, I have around a dozen good papers in my “secure from ground up” paper set already. As I read them, my brain told me we actually need a bunch of chips:

  1. At least one for legacy NIX applications on desktop or server. Optionally modified to make secure operation easier such as enforcing control flow integrity, memory safety and secure boot.
  2. One that can run with little power for mobile applications. We’d either need a second chip for baseband or main chip must have strong isolation capability.
  3. Chips with radically different designs that maintain integrity or confidentiality from ground up. Optionally make it easier on programmer. Optionally compatible with some existing software.
  4. Chips specifically for high performance IO with security baked in for network appliances, storage systems, etc.

My paper set has options for all of these. There are also open cores available for certain architectures. Verisoft’s VAMP processor is both open and verified for functional correctness. Should be easy to port Linux or NetBSD to it. One of my designs modifies it with aspects in some of the papers to make one hell of a secure processor. Problem is that, on FPGA, these processors will be much slower than they’d otherwise be, and you must trust the FPGA vendor. An ASIC with a foreign company is the best option for a new chip with lower backdoor potential. That will be utterly expensive though.

Solution for Right Now

My main recommendation is to use either the Chinese Loongson processor or an older RISC processor with a hardened NIX system. OpenBSD if possible, otherwise BSD or Linux is fine if following security config advice. Try to make sure a trustworthy system is between that system and the Internet if it must be connected. The middle system acts as a guard restricting communications to barest minimum, not to mention maybe blocking a direct attack against… certain potential weaknesses. No need to give them ideas. 😉
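
To make the guard idea concrete, here is a toy sketch of the relay logic – it only ever connects the inner machine to a single allowlisted destination (addresses are placeholders from the RFC 5737 documentation ranges; a real guard would also validate the protocol, not just the endpoint):

```python
import socket
import threading

LISTEN_ADDR = ("192.0.2.1", 8443)       # interface facing the inner host
ALLOWED_DEST = ("203.0.113.10", 443)    # the single permitted endpoint

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve():
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen(5)
    while True:
        inner, _ = listener.accept()
        try:
            outer = socket.create_connection(ALLOWED_DEST)
        except OSError:
            inner.close()
            continue
        threading.Thread(target=pump, args=(inner, outer), daemon=True).start()
        threading.Thread(target=pump, args=(outer, inner), daemon=True).start()

if __name__ == "__main__":
    serve()
```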

Start of Next Solution

My main recommendation for the new chip is to get a team together with the right expertise and evaluate options. The goal will be to look at all the use cases for chips in business and personal to figure out the fewest number and type of chips to create to solve the most problems. Each chip might easily cost $30mil so the fewer the better. So, the chips must be able to solve many different problems and businesses must accept the tradeoff that the resulting systems will be slower/uglier.

My Proposal

My initial proposal is two chips: one supporting legacy apps with beefed up [optional] security; one designed for ultra-secure operation with no legacy support. The legacy chip would be low power for mobile. Desktops and servers would simply use several of them, maybe at a higher clock rate. Think of the web “servers” that simply cram a dozen embedded (e.g. Atom) processors on a board while still being dirt cheap & reasonably performant. Later, more chips (or faster ones) would be created better suited for other use cases.

However, we can’t forget all the microcontrollers powering our systems. There’s many of them on motherboards, USB devices, etc. They might be 8, 16 or 32 bit depending on intended function. We need at least one microcontroller that can do all of that designed in a similarly open and low subversion way. That gives us a dirt cheap chip to use in all kinds of stuff. Then open designs for that stuff can be made for the microcontroller. This chip and the more complex chips can be combined in many ways for many different types of systems.

I’ll also add we might as well use strong verification technology on all of them since the budget will be millions anyway. CLI’s microcontroller, VIPER, VAMP CPU and Rockwell’s AAMP7G all were verified against their spec with good results. So, why not The People’s CPU’s too? 😉

EDIT. Oh yeah, one other idea I had was to design a badass FPGA or other reprogrammable chip with trustworthy loading mechanism. Maybe one for computation, one for IO. Then, we can target it instead of ASIC’s for both prototyping of future chips and as a usable way to run secure core designs.

Werner Almesberger • November 16, 2013 7:47 PM

Petrobras, if you’re worried about the CPU or other “intelligent” chips doing things behind your back, you may want to look at the Milkymist One:

http://milkymist.org/3/mmone.html

It’s basically a small “PC” built around an FPGA, with the circuit and all the Verilog being Open Source. Drawbacks: it’s slow, only some 80 MHz. The CPU core (in the FPGA) is an LM32, which is poorly supported by Unices (the Milkymist One normally runs RTEMS. There’s a partial Linux port without MMU and an OpenBSD port with MMU, but I don’t know if they’re in a really usable state. Both GCC and LLVM support the LM32.) A lot of things have been done in this project that would be useful building blocks for a PC-like system.

There’s also a successor, the Mixxeo, for a different application (video mixing instead of VJing) that improves some bits but also drops certain features.

FPGA programming depends on the closed-source Xilinx synthesis tools, though. But then, there’s a silver lining on that horizon as well, in this (currently dormant) project:

https://github.com/Wolfgang-Spraul/fpgatools/commits/master

  • Werner

RobertT • November 16, 2013 8:22 PM

@Petrobras,

Can you give me any details about the market for such a device?

Can you also give some thought to the likelihood that a team of security aware chip designers, capable of developing the product you’re requesting, might be also capable of developing a new unknown backdoor, basically a chip_zero_day?

I suspect the project will go absolutely nowhere without reliable and verifiable answers to these two questions.

Third thing to consider: Legacy software considerations will basically force the chip to be an ARM or i86 compatible device. Is it possible that given this starting point your security goal is already impossible?

Wael • November 16, 2013 8:56 PM

@ RobertT,

Third thing to consider: Legacy software considerations will basically force the chip to be an ARM or i86

You might want to consider LLVM and architecture independent instruction sets. May ease that constraint.

Phoner1 • November 16, 2013 9:08 PM

@Jonathan Wilson

“…require greater identity checks before you are allowed to buy and activate cellular service. They did it in Australia”

They didn’t do it very thoroughly. I recently bought two prepaid mobile phone SIMs without supplying ID, just to see if I could. Larger chain stores (eg Target) won’t sell them without seeing ID, but Vodafone were happy to sell me one of theirs, and I bought a Lycamobile SIM from the newsagent. In both cases, they were activated online using completely fictitious personal details.

Mike the goat • November 16, 2013 11:07 PM

Bryan: ahh, lithium from batteries and water… Always fun. Apparently it is a major clandestine Li source for stimulant production too, so what now? Ban them would be the typical govt response! As a variant on his luggage bomb, use an analog watch as a relay by wrapping a single strand of wire (recovered from the toy) around the hour hand and tape a piece of wire on the other side of the dial where it will connect after a few hours. No doubt you could also make a baro sensor out of perhaps an empty water bottle. Dent the water bottle slightly, tape one wire there and tape your other wire to the side of the bottle so that when the bottle expands it will make contact. So many possibilities.

Mike the goat • November 16, 2013 11:13 PM

Phoner1: as some will know I have been involved in telcos in several Western countries, and the AU mobile rules are complete theater. “Travel” SIMs (i.e. by MNCs outside the country) are not subject to ACMA’s ID requirement. I don’t think terrorists would care about having to call an international number to set off their explosives, nor do I believe drug dealers would care about roaming fees. It is also haphazardly implemented, and many carriers do their own ID verification on activation (requiring a social health benefits number – called Medicare in AU but covering the whole population, a bit like Canada’s scheme – or a driver’s license number). A thing to remember is it is only on purchase – prepaid cards are often exchanged on Gumtree (Australian Kijiji) or eBay and thus could fall into the wrong hands.

ben • November 17, 2013 2:10 AM

http://www.hacker10.com/computer-security/offshore-free-encrypted-email-service-mail1click/

Offshore free encrypted email service Mail1Click
Posted on 3 November, 2013 by Hacker10

Mail1Click is a free encryption email service with a simple and easy to use interface. Data stored in the email server is kept secure using AES256-bit.

Communicating in the same server without sending any data across the Internet is first rate security since we all now know that spy agencies from around the world wiretap emails as they transit fibre optic cables.

benny • November 17, 2013 2:24 AM

How mobile phone accelerometers are used for keylogging
Posted on 8 October, 2013 by Hacker10

Massachusetts and Georgia Institute of Technology researchers have developed a method to log computer keystrokes by placing a smartphone next to a computer keyboard and measuring its sound and vibration using the smartphone accelerometer. The researchers employed an iPhone 4 for this and noted that sensors in older models are not good enough to pick up remote vibrations.
http://www.hacker10.com/other-computing/how-mobile-phone-accelerometers-are-used-for-keylogging/
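
A hedged sketch of just the first stage – detecting keystroke vibration events in an accelerometer trace (parameters are illustrative, not from the paper; the actual attack went further and classified keypress pairs against a dictionary):

```python
import numpy as np

def keystroke_events(mag, rate_hz=100.0, window_s=0.05, k=4.0):
    """Return sample indices where short-window vibration energy
    spikes k standard deviations above its mean. `mag` is a 1-D
    numpy array of accelerometer magnitudes."""
    n = max(1, int(rate_hz * window_s))
    x = mag - mag.mean()                       # remove gravity/DC offset
    energy = np.convolve(x * x, np.ones(n), mode="same")
    return np.flatnonzero(energy > energy.mean() + k * energy.std())

# e.g. mag = np.linalg.norm(samples_xyz, axis=1) from a logged trace
```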

Mike the goat • November 17, 2013 2:29 AM

benny: yeah I saw the research a while back. If I recall correctly it was about 80% recovery rate of plaintext.

Clive: amusing isn’t it? I am just waiting for the key to finally leak, and Adobe will have a real crisis on their hands. I wonder what implications all the known plaintexts (the 100-odd common passwords already determined from people literally putting the password in the hint) would have for cryptanalysis.

Wael • November 17, 2013 3:01 AM

@Nick P

Temporary solution is to use pre-9/11 hardware that’s less likely to be subverted.

I still don’t understand the rationale behind that. I don’t believe that subversion happened (if it did) instantaneously after that date. On the other hand, do we know that pre-9/11 systems were “clean”?

My main recommendation is to use either the Chinese…

I don’t understand your trust in that either!

My initial proposal is two chips

I think you have to define the purpose of the proposed system first. What are you going to use it for? If it’s air gapped, do you care if it’s a COTS system? I like your data diode better. You want to prevent packets getting out of your system that may leak your confidential data, whether it’s a new project or personal information. RobertT’s four-tiered setup sounds reasonable to me!

Wael • November 17, 2013 3:44 AM

@Nick P,
Here is my proposal:
Assemble your own system, and build your own BIOS from open source such as http://www.coreboot.org/Welcome_to_coreboot, then add some TPM functionality to measure things you care about, which includes option ROMs (this is actually already done, but you may want to tweak it). After that, disable BIOS updates. Stick this system in a tiered network that suits your needs, or air gap it. Also use open source OS’s. My preference is FreeBSD. Build all your binaries from source (make world). And use different partitions (or drives) for components that you want to keep immutable; make that partition read only. Then encrypt your confidential stuff with a key that’s saved off a token. And protect that key in transit, in usage and at rest. The purpose of this setup? Generic usage 😉
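
As a rough illustration of the “measure things you care about” step, something along these lines (paths are examples; a TPM-backed setup would extend PCRs rather than diff a JSON file):

```python
import hashlib
import json
import os

IMMUTABLE_ROOT = "/immutable"        # example read-only mount point
BASELINE_FILE = "baseline.json"      # digests recorded at install time

def measure(root):
    """SHA-256 every file under the supposedly immutable mount."""
    digests = {}
    for dirpath, _, files in os.walk(root):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            digests[path] = h.hexdigest()
    return digests

if __name__ == "__main__":
    current = measure(IMMUTABLE_ROOT)
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    for path in sorted(set(baseline) | set(current)):
        if baseline.get(path) != current.get(path):
            print("CHANGED:", path)
```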

Winter • November 17, 2013 4:45 AM

The untrusted hardware problem looks a lot like the “Trusting Trust” compiler problem. Maybe it can be solved in the same way? Set up two (or three) separate systems built from different sources (e.g., different ARM sources) that should deliver bit-identical network communications. Then run them in parallel on the required task while observing their (network) behavior using separate, different, hardware appliances. If you can assume that the hardware does not have the same backdoors, say, because one is Chinese and the other American, then you can trap some backdoors in some contexts.
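
A toy harness for the comparison step might look like this (host names and the job are placeholders; it assumes passwordless ssh and a fully deterministic job, otherwise honest machines will diverge too):

```python
import hashlib
import subprocess

HOSTS = ["american-box.local", "chinese-box.local"]   # diverse provenance
JOB = "sha256sum /tmp/testvector.bin"                 # deterministic job

def run_remote(host, cmd):
    """Run the job on one machine and digest its raw output."""
    out = subprocess.run(["ssh", host, cmd], capture_output=True, check=True)
    return hashlib.sha256(out.stdout).hexdigest()

results = {h: run_remote(h, JOB) for h in HOSTS}
if len(set(results.values())) != 1:
    print("DIVERGENCE – possible subversion or nondeterminism:", results)
else:
    print("Outputs agree:", results)
```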

Clive Robinson • November 17, 2013 7:11 AM

OFF Topic:

Valve the company is known for its games market, which unlike all the other (console manufacturing) games markets is making money.

They have recently made some announcements about SteamOS, their version of Linux. Valve is notable in that, unlike the ever-caustic Linus, they have got the high-end graphics card companies on board (expect ruffled feathers over in Linux Central).

SteamOS has other advantages that are going to endear it to the likes of IP holders over in Hollywood etc.

But unlike Microsoft or Google, with what are increasingly closed platforms (Win7 and above, and Android), Valve has announced that SteamOS is going to remain open.

Further, Valve have announced a SteamOS hardware platform for desktops.

Now if it stopped there it would be the “same old same old” of the equivalent of “this is the year of the Linux desktop”, but it’s not and this is where it gets interesting.

Valve have two versions, Bigfoot and Littlefoot, and are clearly going to go after the mobile market. Put simply, Android is designed to be a marketing platform, not a games platform, and so far any attempt within the Google-controlled Android partnership has been fairly ruthlessly dealt with by Google. Whilst the Java development side of Android is improving, it’s not doing much in the way of soft real time, which is essential for high-end games development and likewise other applications where audio, visual and tactile need to stay in sync (apparently it’s an area Apple have issues with as well). Normally such high-end development would be done at a “closer to the metal” layer such as C/C++, which although in theory supported by Android in its native layer, is not receiving the attention needed to make it either usable or viable in Android.

It would appear that Valve are intending to make Google’s Android boat rock by not only providing the tools needed at the native level but also partnering with Mark Shuttleworth’s open mobile hardware platform.

Although not talked about by Valve, I suspect they also have a set of cross-hairs firmly sighted on the pad etc marketplace.

An interesting and in-depth view from someone involved with games development can be read at,

http://www.informationweek.com/mobile/mobile-applications/why-valves-steamos-could-be-revolutionary/d/d-id/899846

Bryan • November 17, 2013 7:58 AM

On a secure processor: I’d make it ARM compatible and low power. Go after the phone/tablet market. As part of the package, there will need to be a new phone network processor. Certification for it will be the harder part, but I’m betting there would be enough demand for a really secure phone or tablet that sales should be able to fund desktop and server processor designs.

Mike the goat • November 17, 2013 8:09 AM

Wael: I don’t think 2001 is a magic number, but if you consider funding and the general amount of govt subversion that would be tolerated – esp meddling in many different facets of the industry from fab to software devel – after 9/11 what was politically and financially impossible became something the govt would not only consider but likely try to implement. We can assume that – likely as a consequence of learning about these programs from the spy grapevine – other nations started running their own programs, like China possibly backdooring some of their h/w.

If we are talking COTS PC hardware, I would feel safest with early 90s hardware – the Internet wasn’t ubiquitous, and most people had dialup and patchy connectivity anyway, so remote backdoors are unlikely. Those that actually make the tech would be unwilling to put in any backdoor that actually increased costs or decreased performance – something that would be inevitable with hardware of that era. I have a Sun SPARCstation and the damn thing still runs like brand new.

Of course using old hardware is a stopgap measure. Luckily crypto isn’t /that/ intensive and most things aside from key generation are reasonably snappy.

Clive Robinson • November 17, 2013 10:28 AM

@ Tinfoil,

    Apparatus and method for remotely monitoring and altering brain waves

It’s for EM waves, which are for a number of reasons not very good (wavelength and shielding being but two).

I worked on “other systems” that are much better as I’ve said before.

One system that is very effective at rendering individuals unconscious or dead is centimeter and less wavelength ultrasound. You use two narrowish beams at very slightly different frequencies, which you aim both at the victim of choice. The effect on the skin effectively causes nerve signals at the frequency difference, which cause all sorts of issues with the autonomous nervous system. Because aiming at an individual in a crowd is difficult unless you have very very high gain dishes, the trick is to point two beams slightly apart so that the overlap between the two beams is very small (like the German radio beams or the “cone of silence” system used to land aircraft). The end result was, on testing with pigs, you could kill one at over a km; however it did not work too well with goats or sheep unless close to head on.

Another system that only works at very very short distances uses magnetic fields and is currently being used in medical applications dealing with age-related neurological conditions.

All three systems came about as part of research into non-lethal weapons to be used against civilian or mixed civilian/unfriendly crowds that are threatening friendly forces.

They all have a failing, which is modulation. A straight carrier wave (CW) signal only produces quite inefficient sine waves; phase shift modulation using IQ generation will give any desired modulation pattern on individual radiated beams, thus using two such carriers allows various signals to be induced in an individual at nearly peak power.

I suspect if you have good “google fu” you will find the code names of some of these projects and who was developing them (I know the UK and US were doing it, and I’ve been told the Russians tried but had technical/reliability issues and went down the non-lethal chemical weapons route with quite a bit of success).

Wael • November 17, 2013 11:59 AM

@DarkHadron

One of the links in the article pointed to:
http://cm.bell-labs.com/who/ken/trust.html
This is a link that should be read! I was looking for it for some time, and thanks to your post, I found it. It needs to be read carefully, as it shows another subversion vector even when source code is available. It also has a brief caution about microcode, and it emphasizes the need to question the development tool chain and framework — a subject that was discussed on this blog. If the tool chain cannot be trusted, then you can’t even trust the binary from code you wrote and compiled yourself!!! I like this quote from the article:

Moral
The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect. — Ken Thompson

Clive Robinson • November 17, 2013 12:56 PM

@ DarkHadron,

It’s entirely possible; apparently Linus was asked at a conference and responded by very slowly saying NO whilst nodding his head vigorously to say YES.

So pays your money and takes your choice.

There is a discussion above about pre-2001 hardware. There are likewise many pre-2001 versions of Linux and other Open Source OS’s in the back of books etc.

@ Mike the Goat, Nick P, Robert T, Wael,

Much as I would like to join in the CPU talk, I’m really quite unwell, and the brain appears to be running on the three backup neurons. If my temp goes up any more, not only will I cease to make sense, I’ll also be off to hospital. Last time they kept me in for two weeks; on another occasion they locked up my mobile, and I know that some of you were concerned about my lack of postings.

So if the oral meds get on top of the infection I’ll be back in a day or so. If the hospital fills me with IV antiBs I’ll be talking to the wall in an unknown jargon to Hosp staff but you guys would understand in seconds 😉

Nick P • November 17, 2013 1:03 PM

@ Wael

“I still don’t understand the rational behind that. I don’t believe that subversion happened (if it did) instantaneously after that date. On the other hand, do we know that pre 9/11 systems were “clean”?”

“I don’t understand your trust in that either!”

“…do you care…?”

“AGGGGHH!!!!!”

Rather than do it point by point, I’ll just do a whole essay on this so other readers can also understand how I’m coming up with this stuff. Then they can decide for themselves.

My thoughts on NSA subversions at each point in time

There are years and time periods that my mind caught onto. Here’s what we know from pre- and post-Snowden information. (Not in any real order.)

  1. NSA was scared of software such as PGP back in the 90’s and focusing on getting vendors to use weak crypto.
  2. Systems such as Echelon and Carnivore focused on intercepting data sent over the wire hoping it was plaintext or easy to break.
  3. Hayden took over in 1999 writing a bleak assessment of the NSA from management and operations standpoint. He began a transformation of it.
  4. NSA’s top hacker group targeting China uses the same types of techniques they do and has been at it for “15 years.” That puts the beginning of their successes at around 1997 or so.
  5. Most leaked programs seem to have beginnings that are all after 2004.

  6. NSA used to be divided on “COMPUSEC” and “COMSEC,” with COMSEC getting priority. This is because their staff was primarily mathematicians and engineers dedicated to codebreaking. This easily explains why they did attacks on crypto for so long, with programs to backdoor endpoints coming much later.

  7. _NSAKEY was discovered in Windows NT in 1999.

So, the numbers are as follows: mid-90’s or earlier they focused on crypto; mid-to-late 90’s they began to put effort into modifications to proprietary OS’s and hacking them with black hat methods; post-1999 their effectiveness was improving on a regular basis; post 9/11 any secretly developed tech or plans (such as backdoors) would be immediately authorized for use; post 9/11 they’d also begin new programs to do this; post 2004 many of their programs are subverting various internet and companies.

So, that’s my start. This tells me that hardware before 1995 should be fine albeit dated in many aspects, incl security features. The NSAKEY crap shows a lack of sophistication for their activities in 1999. Hayden’s revamp would take at least a year for any real effectiveness. This means 2000 has high probability of being safe for hardware with open source software on it.

Both you and I mentioned that 9/11 isn’t magic in that they’d have to develop the newly authorized capabilities. This is a big mistake in thinking that people often make about the military-industrial complex. The MIC constantly develops capabilities that they think will get them a huge advantage under Special Access Programs, regular or unacknowledged, and internal projects run on a need to know basis. Most of Congress are bypassed in these. Echelon seems to have been done this way. So, we know in the mid-to-late 90’s they started focusing on endpoints. They could easily have begun something in secret to be fielded the moment they had authorization. NSA even under Hayden is a large unwieldy organization, so I’m thinking six months to a year is a safe bet. That means 2001-2002 are risky if SAPs existed.

The next important moment is 2004-2005. This is the earliest date I’ve heard of one of the more serious programs. I think these heavyweight programs were developed by contractors or teams post-9/11. They had to brainstorm ideas, develop requirements, assign work so everyone gets a slice of tax dollars, build it, selectively test it, and then field it. This would explain a few years delay. We also saw leaks of companies willingly or forcibly cooperating in taps around 2004.

So, here’s your options with associated risk:

  1. Tech so old that you can use tools to tell what the chip is running, or hardware-protect it, is very safe.
  2. Tech from early 90’s on back should have low risk of being subverted, although probably having many latent vulnerabilities.

  3. Tech from 1995-2000 seems like it has low[er] risk of hardware subversion, but proprietary OS’s that cooperate with governments might be subverted. (Microsoft, IBM and Sun would be much riskier than Apple, BeOS or Amiga for instance.)

  4. Tech from 2001 onward from govt cooperating vendors can’t be trusted.

  5. Tech from 2000-2003 from obscure or uncooperative vendors might be safe from hardware subversion.

I’ve tried to produce this evaluation framework myself based on limited information and my understanding of their organization’s culture. My information might be entirely wrong. However, I haven’t seen anyone else produce a framework based on anything other than speculation. I’m starting with an analysis of their known capabilities at given periods and working from there. That will have to do until we have trustworthy chips.
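
To make that concrete, the whole framework collapses to a small lookup. A toy Python encoding of the heuristic above (my speculation, not established fact):

```python
def subversion_risk(year, vendor_cooperates):
    """Toy encoding of the timeline heuristic above."""
    if year < 1990:
        return "very safe; verifiable with period tools"
    if year <= 1994:
        return "low subversion risk, but many latent vulnerabilities"
    if year <= 2000:
        return ("hardware lower-risk; proprietary OS suspect"
                if vendor_cooperates else "hardware lower-risk")
    if year <= 2003:
        return ("risky (SAPs may have existed)" if vendor_cooperates
                else "possibly safe from hardware subversion")
    return "can't be trusted" if vendor_cooperates else "assess case by case"

print(subversion_risk(1999, vendor_cooperates=True))
```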

Note: I think Bruce said he’s seen the Snowden documents. A few media organizations have as well. Plenty haven’t been released. It would be really helpful if they’d look at them for dates or years of program beginnings relevant to this discussion. If we knew when OS and/or hardware partnerships began, then that would give us a much better way of assessing the risk of any particular time period.

Wael • November 17, 2013 1:04 PM

@Clive Robinson
Keep your temp down!
In some cultures, nodding the head up and down means no 😉 not sure about Finland though!

Wael • November 17, 2013 1:19 PM

@Nick P
I like that coherent thought process! I’ll need to chew on it a bit. But our current knowledge doesn’t exclude early systems (as early as 286 – 486) from being vulnerable to various subversion mechanisms. I’ll have to wait until I get another laptop to post more detailed stuff. My car was broken into, and all my laptops, pads (not the feminine type — the Apple type), IDs, passport, a couple of phones, etc. were stolen. The police wouldn’t even come to take a report — let alone investigate!!! When I have a chance, I’ll rant more about it…

Figureitout • November 17, 2013 1:35 PM

Wael
The police wouldn’t even come to take a report — let alone investigate!!!
–Officer Doofy probably won’t help much anyway. The insurance company and my good friend investigated my incident better than the police did. False sense of security; you really have to protect yourself, and your residence is an even easier/more fruitful target. Hope you find who it is.

Nick P • November 17, 2013 1:55 PM

Ok, now back to the chip discussions. 😉

@ Wael

“Assemble your own system, and build your own BIOS from open source such as http://www.coreboot.org/Welcome_to_coreboot then add some TPM functionality to measure things you care about, which includes option ROM’s (which is actually already done, but you may want to tweak it). After that, disable BIOS updates. Stick this system in a tiered network that suits your needs, or air gap it. Also use open source OS’s. My preference is FreeBSD. Build all your binaries from source (make world). And use different partitions (or drives) for components that you want to keep immutable, make that partition read only. Then encrypt your confidential stuff with a key that’s saved off a token. And protect that key in transit, in usage and at rest. The purpose of this setup? Generic usage ;)”

Good advice. Of course, following my timeline analysis & obscurity mantra, I’m using a RISC machine from a vendor that’s not blindly cooperative. I’m chancing it with the time period, but I have almost no luxury spending budget right now. Time to turn a preferable machine into a secure system is also a luxury for me, so I picked one with plenty of BSD/Linux software available. Most of the security will come from keeping dangerous IO away from it and use of an up-to-date NIX LiveOS. The next set of systems will be the really strong ones, with this as an interim.

Oh btw, we’ve seen no evidence so far that the Chinese processor was compromised by NSA. They have a chip, open firmware, and open source OS platforms. I just think the odds that the US govt is a risk to such a system, esp if semi or partly airgapped, are pretty low. It also has x86 emulation. 😉

@ Winter

It actually is similar to that. The problem is larger than that, though, due to extensive supply chain poisoning by NSA. So, to make it easy, we could do the bootstrap in a way similar to my previous proposal on verified (for correctness) software.

  1. Use an extremely simple, yet useful, language for the initial work.
  2. Produce certified compilation to object code. Might be formal or informal, but must be believably correct with proof of no subversion being pretty obvious.
  3. Several different groups write the compiler using different programming languages, OS’s, instruction sets and supplier locations.
  4. Resulting object code and test runs are compared to ensure correctness.
  5. Compiler and basic OS are written that way, compiled with that toolchain and verified by all.

  6. These resulting diverse binaries are used by each group to (a) run jobs each must trust and (b) improve the trusted tools themselves.

  7. The systems are all air gapped with physical security and personnel each group trusts with audits of different compilations regularly done.

That’s just for the software. Hardware might take high level code, low level stuff such as netlists/macros, complex tools that do the conversions, and then it has to run on FPGA or custom chip. I’d say security of all that is still an open issue. Seems easiest to use the diversified, mutually-suspicious compilation strategy on a common typesafe platform and toolset portable to arbitrary hardware. Then, one can simply use a bunch of old hardware. Yes, I cheat to take the easy route wherever possible. 😉
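
For steps 3-6 above, the comparison harness itself is the easy part. A sketch (compiler names and test files are placeholders; object output must be deterministic – no embedded timestamps – for byte comparison to mean anything):

```python
import hashlib
import subprocess
import sys

COMPILERS = ["./cc-groupA", "./cc-groupB", "./cc-groupC"]  # placeholders
CORPUS = ["test1.src", "test2.src"]                        # shared tests

def object_digest(cc, src):
    """Compile src with one group's compiler; digest the object code."""
    out = src + "." + cc.strip("./") + ".o"
    subprocess.run([cc, "-c", src, "-o", out], check=True)
    with open(out, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

for src in CORPUS:
    digests = {cc: object_digest(cc, src) for cc in COMPILERS}
    if len(set(digests.values())) != 1:
        print("MISMATCH on", src, digests)
        sys.exit(1)
print("all compilers agree on the corpus")
```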

Note: Wael suggested LLVM. I was looking at it too, as there are already typesafe languages, including Haskell, on it with a number of backends. One team is also producing a formal semantics of it. If I used it, I’d ensure a careful choice of optimizations that couldn’t make security-defeating alterations.

@ Bryan

ARM is a decent choice for the mobile architecture so long as it doesn’t have a bunch of inherent security weaknesses like Intel did. I mean, if its complexity lets system security be bypassed easily, then what’s the point? So, I’d start with an open core that’s ARM compatible, then make some security enhancing modifications. Trusted boot and hardware accelerated compartmentalization of native code alone would greatly improve the robustness of the phone, esp against baseband stacks as an entry way. There’s existing and in-development tech to do the compartmentalization quickly with minimal silicon. Various tradeoffs.

My point about legacy-compatible security enhancements is that, if one is gonna drop millions on a processor, might as well put in some enhancements so we aren’t cloning the same hard-to-protect crap that already exists. I want “ARM+1” at a minimum.

@ RobertT

“I suspect the project will go absolutely nowhere without reliable and verifiable answers to these two questions.

Third thing to consider: Legacy software considerations will basically force the chip to be an ARM or i86 compatible device. Is it possible that given this starting point your security goal is already impossible?”

I suspect you’re right. It’s why Bell said the changes will require “selfless acts of security” and why I say developing the secure platform will be a financial loss due to tiny market. A government, big company or private investor is literally going to throw away money to make this happen if it happens. Now, there is certainly money being thrown at stuff like this albeit often at ineffective stuff. So, it’s possible. Just seems less probable stateside all the time due to US govts incentives. However, foreign countries or multinationals might build something to protect their own secrets.

@ Stanislav

Funny you mention that as I was just reading through the Scheme48/W7 work to see the easiest way to apply such concepts to hardware I can build/acquire or a custom core design on FPGA-type hardware. Thing is that the core system was very simple, yet extremely powerful for isolation & development. I like that combo.

“C and Unix ought to be considered backdoors in their own right”

Or a sick joke on the world perhaps: http://mariusbancila.ro/blog/2007/04/04/creators-admit-c-unix-were-hoax/

@ Mike the goat

” I would feel safest with early 90s hardware – the Internet wasn’t ubiquitous and most people had dialup and patchy connectivity anyway so remote backdoors are unlikely. ”

Consistent with my timeline and analysis. And the stuff is pretty cheap on ebay these days.

@ Clive Robinson

Good luck on your recovery. My brain was too stressed to work the past few weeks, but in the past 24 hours brain cells were firing, so I said “time to hit the blog!” Got a bunch of other stuff done too. Oddly enough, it happens mainly when I drink a particular relative’s coffee. Gotta figure out what she’s been spiking it with.

Wael • November 17, 2013 1:55 PM

@Figureitout

Hope you find who it is.

Thanks! But I will not. And if I did, the police won’t do anything about it either, I was told… Learned a lesson: don’t leave anything you care about in the car, even for 15 minutes. My house was also broken into two or three years ago; same story…

Fundamental • November 17, 2013 2:05 PM

@ Wael “No amount of source-level verification or scrutiny will protect you from using untrusted code. — Ken Thompson”

Excellent point, and the key qualifier here is “source-level”.

To solve the general validation problem, it is necessary to start at the bottom of an untrusted tool chain and disassemble the assembler to see what it is capable of doing. Things act in accordance with their nature, so once you know exactly what the assembler opcodes are, you can discover if a potential exists for inserting bugs.

Once the assembler is known to be secure, it can then be used to build a C compiler and linker from source code, which only needs to be validated at the source level.
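
As a toy illustration (the two opcodes below are real x86 one-byte encodings; everything else is simplified – the point is that the reference table comes straight from the ISA manual, not from the tool under test):

```python
# Trusted reference table, derived by hand from the ISA manual.
OPCODES = {"nop": b"\x90", "ret": b"\xc3"}

def reference_assemble(mnemonics):
    """Assemble using only the hand-built table."""
    return b"".join(OPCODES[m] for m in mnemonics)

def untrusted_assemble(mnemonics):
    """Stand-in: in practice, run the untrusted assembler here and
    read back the raw bytes it emits (e.g. from a flat binary)."""
    return reference_assemble(mnemonics)     # placeholder for the demo

prog = ["nop", "nop", "ret"]
assert untrusted_assemble(prog) == reference_assemble(prog)
print("untrusted assembler matches the hand-built opcode table")
```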

Wael • November 17, 2013 3:06 PM

@Fundamental
Yes! If the foundation is shaky, anything you build on top of it will be shaky. Maybe OK for earthquake-resistant structures; not so for software, unless we can utilize a similar model, which we are trying to do… The problem is that a shaky foundation in software is not your friend, as it is in its earthquake-resistant structure counterpart.

kashmarek • November 17, 2013 3:11 PM

Another entry on the “phone score”…

http://yro.slashdot.org/story/13/11/17/0239220/your-phone-number-is-going-to-get-a-reputation-score

Where is Telesign getting their metadata? NSA? Or do they have their own data collection like the NSA? Perhaps it is the same data that the telecoms are providing to the NSA, now being leveraged for another purpose (or is it the same purpose, financial gain?)

It seems this is all going overboard – there is the original credit score, followed by insurance scores, health scores, a score for your house (or residence), another for your car, probably one for the place of worship you attend (or don’t attend), certainly a subsetted score for your travel, your entertainment (TV, radio, movies, & Internet), organizations you are a part of, and then the score of all scores that has been going on for ages: demographics, but now individualized (you know, that portfolio or dossier about you).

Nick P • November 17, 2013 3:12 PM

@ Wael

Sorry to hear about the robbery. And screw those police. At least the ones in the hood I lived in had the courtesy to show up and write the details on a piece of paper destined for a trash bin. I mean, at least they pretended to be interested.

Figureitout • November 17, 2013 3:18 PM

Nick P
Least the ones in the hood I lived in had the courtesy to show up and write the details on a piece of paper destined for a trash bin.
–Yeah, the ones I dealt w/ said they’d get a “detective” on it right away and call me w/ any leads. No calls. Have to be your own detective. One copper at a sports facility joked w/ a kid on my team who’d just had his phone stolen, “Must’ve had legs and walked out!”; worthless dicks.

Wael
–You could get a tiny transmitter, put it in an old smartphone case, and lay it out for would-be thieves. Revenge is sweet.

Wael • November 17, 2013 3:22 PM

@Nick P
I actually understand why they can’t spend time on things like that. A lot of them have been laid off due to budget cuts, and can only allocate resources to life threatening situations — not their fault really. Money is being spent on “other projects” — some of which are being discussed here. By the way, there were cameras aimed at my car, and blood stains left on my car, but the police said they’ll not investigate them. I had to file a report online 🙂

Wael • November 17, 2013 3:28 PM

@Figureitout
Time for some teeny-weeny vulgarity…

worthless dicks

Not at all! That structure has a head! They are nothing short of “worthless shafts” 😉

Figureitout • November 17, 2013 3:41 PM

Wael
–Yeah yeah lol. Handling it well, I’d probably be “out ‘n about”. Sounds like they couldn’t even break a car window lol; so hopefully “opportunistic” and not “targeted”.

Petrobras • November 17, 2013 3:57 PM

Thanks, thanks a lot for all your answers.

@Nick P: “The goal will be to look at all the use cases for chips in business and personal to figure out the fewest number and type of chips to create to solve the most problems.”

Personal users are happily buying Apple’s and Google’s phones and sharing irresponsibly on Facebook. Just forget them in the initial project.

@Nick P: “That gives us a dirt cheap chip to use in all kinds of stuff.”

It does not need to be dirt cheap. It may have the same price as the main processor. If the result uses the same chip for everything, it will be faster and cheaper to kickstart.

@Nick P: “Solution for Right Now”

We also need to specify RAM-based chips. Or store only encrypted content, with shuffling, in the RAM.

http://milkymist.org/3/mmone.html is very interesting, thank you, but it is more about a big open-source firmware than open-source transistor layouts. http://lists.gnu.org/archive/html/gnewsense-dev/2010-09/msg00002.html

@RobertT: “Can you give me any details about the market for such a device?”

I beg for someone competent in fab here to spend ten hours writing a convincing Kickstarter project (and advertise it here). Then we will have a precise picture of the availability of crowd funding.

@RobertT: “Can you also give some thought to the likelihood that a team of security aware chip designers, capable of developing the product you’re requesting, might be also capable of developing a new unknown backdoor, basically a chip_zero_day?”

It will be difficult to hide a backdoor in 30,000 transistors that can be checked easily and totally by anyone motivated enough.

@RobertT: “Legacy software considerations will basically force the chip to be an ARM or i86 compatible device.”

No. Open source software can be recompiled for different architectures. Closed source software may contain backdoors, so there is no need to consider it.

@Wael: “I like your data diode better.”

The arms race between the NSA and people using air-gapped backdoored systems favors the NSA. We need a safe base.

@Wael: “Here is my proposal”

This is the “Solution for Right Now” as defined by @Nick P. What about backdoored processors and backdoored BIOSes on ethernet cards? USB controllers? We need open-source simple processors.

@Winter: “(e.g., different ARM sources) that should deliver bit identical network communications”

These do not exist and need to be created. It will be faster to just create one simple trusted processor.

@Bryan: “Certification for it will be the harder part”

But wireless communications can be delayed. Use a dumb phone to communicate about unprotected matters, and do serious things with a wireless tablet.

Ear Worm • November 17, 2013 4:14 PM

1979 “Video Killed the Radio Star” ~ The Buggles
2010 “Internet Killed the Video Star” ~ The Limousines
2013 “NSA Killed the Internet Star” ~ [redacted]

“In my mind and in my car,
we can’t rewind, they’ve gone too far.”

bamafone • November 17, 2013 4:20 PM

Burner phone? A stolen phone, or a phone under the name of someone with Alzheimer’s.
A phone taken by stealth from a teenager would be reported stolen in a few minutes, but one slipped from a drawer in an old folks’ home might not be missed for several days. These are quick burners – one call and throw away, like bank robbers’ stolen cars.
Use a semi-legal phone to let your correspondent know the new number of the illegal phone; it’s actually obvious. If they can’t buy a phone not listed to themselves, they can still get a phone.
Without action against easy access to buying a phone, you do not eliminate the problem, you complicate it. Even Afghanistan has cell phones used by the Taliban, and this allows the NSA to listen in. Phone denial cannot work unless no one has a phone.

CallMeLateForSupper • November 17, 2013 4:53 PM

Implementing Keccak on a TRS-80.
http://cryptome.org/2013/11/trs-80-keccak-sponge-cake.pdf
That old enough hardware for ya? 🙂

For the edification of any youngsters on this blog, the TRS-80 was a personal computer marketed by Radio Shack starting in 1977. It used the 8-bit Z-80 uP running at a scorching 1 MHz.

The Z-80 was my self-inflicted introduction to microprocessors.
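
For those youngsters, the sponge construction itself is tiny, which is part of why it fits on a Z-80. A toy absorb/squeeze skeleton in Python with a made-up placeholder permutation (NOT real Keccak, whose f[1600] permutes a 5x5 array of 64-bit lanes over 24 rounds):

```python
RATE = 8       # bytes absorbed per block (toy value; SHA3-256 uses 136)
WIDTH = 16     # total state size in bytes (toy value; Keccak uses 200)

def permute(state):
    """Stand-in for Keccak-f; real rounds apply theta/rho/pi/chi/iota."""
    return bytes((b * 167 + i * 13 + 1) & 0xFF for i, b in enumerate(state))

def sponge(msg, outlen=16):
    state = bytes(WIDTH)
    msg += b"\x06" + b"\x00" * ((-len(msg) - 1) % RATE)  # pad10*1-style
    msg = msg[:-1] + bytes([msg[-1] | 0x80])
    for i in range(0, len(msg), RATE):                   # absorb phase
        block = msg[i:i + RATE] + bytes(WIDTH - RATE)
        state = permute(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < outlen:                             # squeeze phase
        out += state[:RATE]
        state = permute(state)
    return out[:outlen]

print(sponge(b"hello").hex())
```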

Wael • November 17, 2013 6:43 PM

@Mike the goat
On pre 9/11 HW…
I understand the speculation and agree with some of it to some extent. I see proposals for a solution to a not so well defined problem. We talk about using a “secure” system. We had a long security discussion with @Nick P, @Clive Robinson, and @RobertT almost a year ago; it was under the generic heading of Castles-v-Prisons. The point is this: What aspect of security are you trying to achieve? Will your system be air gapped or networked? Maybe the proposed secure system will be secure, and maybe it won’t! Just having a system that is not subverted or back-doored is not sufficient to label it as secure, although it is a good thing to have.

Dirk Praet • November 17, 2013 6:53 PM

@ Wael

Don’t leave anything you care about in the car even for 15 minutes. My house was also broken into two or three years ago, same story…

Sorry to hear this, mate. If it makes you feel any better: a couple of years ago, a former colleague of mine had a window smashed and his laptop stolen from his car. He subsequently drove it to a local garage to have the window replaced, only to get an even more distressing call the next day at the office from the garage telling him that his car had been stolen from their premises.

In general, the police don’t do anything with this sort of stuff, unless you have some tracking software on your machine and you can provide them with the coordinates where it can be found. At least, that’s how it works over here. A while ago, one such case made national TV when some guy had several Apple devices stolen from his home, and then saw them coming back online at the local Polish embassy. Since there was nothing the police could do about that, he took his story to the media which almost caused a diplomatic incident.

A tip for at home: lock down your machines with Kensington locks and/or hide them away. It’s also a good idea to leave an old or even broken laptop in plain sight so the opportunistic thief doesn’t look any further.

Godel • November 17, 2013 7:37 PM

@Dirk Praet

“In general, the police don’t do anything with this sort of stuff, unless you have some tracking software on your machine and you can provide them with the coordinates where it can be found.”

No, even that’s not enough. I’ve heard stories of the owner tracing stolen iPhones to the thief’s address, and the cops still refuse to do anything about it.

RobertT • November 17, 2013 7:41 PM

@Petrobras

With all due respect, I don’t believe you have a sufficiently developed understanding of the problem.

Formally verifiable hardware is definitely the starting block for any "secure" system design. Unfortunately, although undeniably necessary, it is also provably insufficient, especially when you consider the full skill sets of the likely participants in the design/definition process.

As has been noted elsewhere, the NSA favors attacks on protocols and functional definitions, and often seeds the market with subtly flawed implementations. I'd expect a project such as yours to be no exception.

Outcome:
Formally verifiable = MAYBE
Subtly flawed = DEFINITELY

Wael β€’ November 17, 2013 7:41 PM

@Dirk Praet
Thanks, bud! It doesn't bother me that much; I only see the irony and humor in the situation. One of my friends (of a total of 3 friends) came to visit me over the weekend from Seattle. I was complaining to him about things that were bothering me in life. We decided to visit Sacramento and have dinner somewhere there. By the end he had cheered me up, saying the good thing is "things can't get any worse". Driving back from the restaurant he told me to close the back window, it's cold! I said I have no windows open. He looked and told me the window was broken. Anyway, my stuff and his stuff were stolen. Long story, but he said he felt sorry for me. I told him not to worry, I don't care. Then I asked him: didn't you say "things can't get any worse"? 🙂 The funny thing is, when we looked in the car (they took three bags out of four; the remaining one was his), he said "great, they didn't take my bag, the one with the good stuff" and was happy. I told him what a selfish thing to say 😉 Then he realized his iPad, glasses, and other important stuff were in one of the missing bags. He said "holy sh@#, my important stuff is missing too." I started laughing at him 🙂 Between the two of us we lost around $7k, $1.5k of it his…

Figureitout β€’ November 17, 2013 7:59 PM

Petrobras
–Keep pushing, it’s going to be a very hard and awkward project b/c no one will trust each other and trusted members of the team may still subvert the system…

Wael
–How about we start w/ an air-gapped computer first? Shielded, and used only in another shielded room. 2 serial ports for keyboard/mouse, or maybe physically attached ones (no USB), a spacious intuitive layout w/ large components for easy testing and replacing. Physical test points for every single component; the design needs to allow the operator to see all activity at all times (this is most important). Malicious physical access to the device should be considered mostly not secure; so you sleep w/ it under your bed and store it or take it w/ you everywhere. In short, if I can't see all components w/ my bare eyes and remove them w/ my bare hands, I don't want it.

CallMeLateForSupper
–No, not old enough 🙂 Nice to see that RadioShack used to be geared toward engineers; now it's basically a phone store w/ a tiny section for some microcontroller kits.

Godel
–I think at that point it's time to take matters into your own hands.

Figureitout β€’ November 17, 2013 8:07 PM

RobertT
–So what? I think we all realize the huge problem. With more of these projects popping up, the NSA won't be tracking terrorists; they'll be hunting down citizens wanting secure devices and some damn privacy. Anything can have a subtle flaw; even the NSA can get pwned w/ social engineering(!) attacks…

Wael β€’ November 17, 2013 8:46 PM

@Figureitout

Malicious physical access to the device should be considered mostly not secure

You don't say 🙂 I concur, but I would remove the "mostly". How about we start with the purpose of the computer:
If you want to browse, use a device that doesn’t have PII.
If you want to keep Snowden’s docs, air gap it.
If you are working on a secret new project, have tight controls with air gaps as well
There is no one-size-fits-all security, because you'll also hit the Security-v-Usability threshold.

Figureitout β€’ November 17, 2013 9:05 PM

Wael
–The only reason I said "mostly" is that maybe the system is already under prior surveillance, and using it fools attackers. Before anyone uses it, you need to practice keeping a device physically secure, one that would be physically impossible to get barring a SWAT raid. To get some real training, bait some agents w/ talk of a revolution; they'll come. And this device will be way more secure than usable, a major pain to use; you only use it for secure calculations, key gen/en/decryption, and storage of critical info. Networking will be an entirely new problem after that.

Figureitout β€’ November 17, 2013 10:49 PM

Lulz.

Seeing how many failures the gov't and military make in security, a citizen's project having many failures is completely fine in my view. Any real engineering project will have many fails; otherwise you're not solving a hard problem. For god's sake, they (the Air Force) dropped a nuke on Georgia that luckily didn't detonate, and they left our country entirely vulnerable to air strikes on 9/11. Healthcare.gov anyone? I think we can chalk that one up as an epic fail from trying to do too much. Depends how far someone wants to take the security.

Winter β€’ November 18, 2013 1:32 AM

@Nick P
“2. 3. 4. Resulting object code and test runs are compared to ensure correctness.”

If your aim is to test for hidden backdoors in the compiler binaries, i.e., The Trusting Trust attack, David A Wheeler has developed a better way:
http://www.dwheeler.com/trusting-trust/

@Petrobras:
">(e.g., different ARM sources) that should deliver bit identical network communications

These do not exist and need to be created. It will be faster to just create one simple trusted processor."

But you still need a way to ascertain that the hardware you get is secure, the same as with the compiler binaries in the Trusting Trust attack.

If you cannot ensure that the hardware you get contains no backdoor, you may still be able to ensure that two different sets of hardware send out exactly the same communication.
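
A minimal sketch of that comparison step (file handling only; names and usage are illustrative): feed it two captures, whether compiler binaries from a Wheeler-style diverse double-compile or traffic logs from two hardware stacks, and it reports the first byte where they diverge.

#include <stdio.h>

/* Compare two byte streams (compiler binaries, traffic captures, ...)
   and report the first offset at which they diverge. */
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s file-a file-b\n", argv[0]);
        return 2;
    }
    FILE *a = fopen(argv[1], "rb");
    FILE *b = fopen(argv[2], "rb");
    if (!a || !b) {
        perror("fopen");
        return 2;
    }
    long off = 0;
    for (;;) {
        int ca = fgetc(a);
        int cb = fgetc(b);
        if (ca != cb) {
            printf("streams diverge at offset %ld (%d vs %d)\n", off, ca, cb);
            return 1;
        }
        if (ca == EOF) {   /* both ended together: identical */
            puts("streams identical");
            return 0;
        }
        off++;
    }
}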

Petrobras β€’ November 18, 2013 2:26 AM

@Figureitout: “Keep pushing,”

I have unfortunately no knowhow about fab.

@Figureitout: “it’s going to be a very hard and awkward project b/c no one will trust each other and trusted members of the team may still subvert the system…”

In serious open source projects aimed at security, you do not need to trust the development team. You need to audit the code. The shorter it is, the better it is for auditing. I think it is the same with transistor meshes.

@RobertT: “Subtly flawed = DEFINITELY”

Yes, but with 30,000 transistors it would be more difficult to subvert than a modern processor with a billion transistors.

Do you agree on that? Why do you think that would not be a small victory in the security ecosystem?

Bryan β€’ November 18, 2013 3:33 AM

@Petrobras, replying to my "Certification for it will be the harder part": "But wireless communications can be delayed. Use a dumb phone to communicate about unprotected matters, and do serious things with a wireless tablet."

Actually, no, it can't. It is part of the sales strategy. Many need a reasonably trusted secure mobile device, and would pay for it. They also don't want to carry a number of devices. I'd bet more than 100,000 secure phones could be sold in the first year to governments and corporations needing security. Personally I'd set it up to run Android in a sandbox on the device to give it additional appeal.

On ARM, or any other CPU: it needs a first-class MMU that at minimum has control bits for readable, writable, data, and program memory spaces. My opinion is that if a process makes a memory fault, other than for something innocuous like copy-on-write, it gets killed.

Design a very simple RISC CPU+MMU+DMA core that can be replicated a large number of times on the die. Give them individual or common cache memory spaces to fill up the space on the die. Dedicate a few of them to directly manage IO like USB, Ethernet, SATA, graphics, phone, etc. An IO-connected core will have an additional register space for the IO device that can't be accessed by any other core. Each MMU needs at least a couple of page translation sets loaded and instantly available to it; 4 or 8 may work best. This allows quick switching between system and user processes, and it allows fine-grained servers that do just one function, like USB IO, which means their code is easier to audit. Another server task handles file system management; another handles RAID processing. I think after the MMU on core #1 is enabled, all processes except system memory management run in a memory-mapped space. Even the system memory management process uses privilege bits to restrict where it can read, write, and execute. Of course, it has access to the system memory maps, so it sets its own privilege bits as well as managing them for other processes.
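
To make the permission model concrete, here is a hedged sketch of what such a page-table entry and fault policy could look like; the field layout and names are invented for illustration, not taken from any real MMU:

#include <stdint.h>
#include <stdbool.h>

/* Illustrative page-table entry: separate readable/writable and
   data/program permission bits, plus copy-on-write as the one
   forgivable fault. */
typedef struct {
    uint32_t frame    : 20; /* physical frame number */
    uint32_t readable : 1;
    uint32_t writable : 1;
    uint32_t data     : 1;  /* page may be accessed as data */
    uint32_t program  : 1;  /* page may be fetched as instructions */
    uint32_t cow      : 1;  /* copy-on-write pending */
    uint32_t valid    : 1;
} pte_t;

enum access { READ_DATA, WRITE_DATA, FETCH_PROGRAM };

/* Policy from the text above: any fault that is not an innocuous
   copy-on-write kills the offending process. Returns true if the
   access may proceed. */
bool mmu_check(const pte_t *p, enum access a)
{
    if (!p->valid)
        return false;
    switch (a) {
    case READ_DATA:     return p->data && p->readable;
    case WRITE_DATA:    return p->data && (p->writable || p->cow);
    case FETCH_PROGRAM: return p->program && p->readable;
    }
    return false;
}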

Use a DMA unit to load the BIOS into RAM from a serial flash chip when the system comes out of reset. Once it is loaded, execution of the code starts on core #1, which is in charge of setting up the rest of the system. After all code is loaded, a hardware logic trap can be triggered that prevents all future accesses to the serial flash chip, making it effectively immutable.

For faster IO for a server chip, I’d give the chip a number of USB or SATA buses.

Any CPU: gcc can be retargeted to a new CPU relatively easily. Unfortunately, gcc is likely one of the hardest pieces of code to properly audit, as it takes a compiler guru to decipher some parts of it. Would a properly audited cross-platform decompiler, plus an audited functionality comparator, be a way to verify? A functionality comparator being a piece of code that reads in two pieces of code and compares them for sameness of function; it would need to know about various optimization methods and how they equate to each other.

On the drone hitting a ship: sounds like a standard training accident, just unusual for the hardware involved. The US military figures on losing a few aircraft and injuring many people every year due to accidents while training. Playing war is dangerous if you do it in any sort of realistic manner.

On stolen stuff: I keep combination-locked cases in my trucks. They are there for two reasons. First, so any electronics, notes, etc. that I don't want the cops or others to see/access can get locked away. I can plead the Fifth on the combination, but a key can be taken and used; locked cases, like briefcases, need a search warrant to open. Second, for anti-theft reasons: it's stout enough to make it hard to break into without a big crowbar or drill, and it's bolted or chained to the floor. I throw a floor-colored towel over it when not open; big towels the color of your floor and interior upholstery are great for temporarily covering up stuff in the car.

Petrobras β€’ November 18, 2013 3:50 AM

@Bryan: “Unfortunately gcc is likely one of the hardest pieces of code to properly audit as it needs a compiler guru to decipher some parts of it.”

What about relying on http://bellard.org/tcc/index.html instead of gcc?

(error in my previous comment: “wireless tablet” should have been “networkless tablet.”)

It would be better, for auditing purposes, to multiply the number of core chips in a phone rather than putting many cores on one die. They will take more space, but they will have independent JTAG interfaces.

Winter β€’ November 18, 2013 4:50 AM

@Bryan
“Personally I’d set it up to run Android in a sandbox on the device to give it additional appeal. ”

Shouldn’t you first worry about the chip and firmware that runs the radio in your smartphone?

For secure communication, I would think you'd better stick to encrypted VoIP/texting on a WiFi-only handheld, then use another plain-vanilla smartphone as the tethering device for cell-tower contact.

You could even cannibalize the innards of a smartphone into your own device for linking up to the cell towers, controlling power to the radio chips directly to prevent trouble.

Petrobras • November 18, 2013 7:24 AM

@Winter: “cannibalize the innards of a smartphone to add to your own device”

Good idea! Don't forget to take its microphone out.

Dilbert β€’ November 18, 2013 9:19 AM

@Godel,

I like the cheap/broken laptop idea. I have a laptop that can be placed on edge in my bookcase and look like just any other book. It’s a nice simple way to keep it unobtrusive. I also have a pair of leather loveseats that, for some unknown reason, were manufactured with zippered compartments underneath them – on the bottom of the seats. Since they’re lightweight, it’s easy to flip them forward and stash items inside the zippered compartment.

Neither location is foolproof… it's just not obvious.

I also like to keep one of my old laptops up and running with Yawcam installed, using the motion detection feature. Any captured images are immediately uploaded to my dropbox account.

Tarzan β€’ November 18, 2013 9:56 AM

@RC4 Killer:
Within a year, more than 10% of global users wouldn’t be able to connect to sites offering only RC4 encryption.

What are the implications of this? Is it good/bad/neutral?

Garfield β€’ November 18, 2013 10:24 AM

Out in the Open: The German Plot to Give You Complete Control of Your Phone
http://www.wired.com/wiredenterprise/2013/11/neo900/

But a German company called Golden Delicious, together with an open source community called OpenPhoenux, is trying to change all that. Golden Delicious will soon offer a tiny motherboard that lets you upgrade a Nokia N900 β€” an iconic phone with a full-hardware keyboard, part of a dying breed in the world of mobile devices β€” and the team hopes to provide a thriving hardware and software ecosystem around this and other phones.

β€œWe believe in choice, and we want to make mobile computing as free as we’re used to it in the PC world,” says Sebastian Krzyszkowiak, one of the project’s core developers.

The project is called the Neo900, and the replacement board is based on the GTA04 platform from Golden Delicious. Those who don’t want to get their hands dirty β€” or don’t have an N900 lying around β€” will be able to purchase fully assembled Neo900 phones, based on existing N900 cases.

From the neo900.org page:
Neo900 can be used with 100% Free Software stack. Forget about spying and influences of intelligence agencies. If you turn off GSM modem from the software, you can be sure it’s really turned off.

bcs β€’ November 18, 2013 11:42 AM

@Jonathan Wilson

Your statement has the built-in assumption that making burner phones harder to get is inherently a good thing. I'll assert the opposite and suggest it would be better to forbid cell phone makers/networks from collecting any more ID data than is necessary to bill the user (which, for prepaid phones, is no ID at all).

Bryan β€’ November 18, 2013 11:53 AM

@Winter, quoting my "Personally I'd set it up to run Android in a sandbox on the device to give it additional appeal":

"Shouldn't you first worry about the chip and firmware that runs the radio in your smartphone?"

I was thinking of running the phone network software on one of the cores, but it should be on its own chip for update/upgrade reasons.

On single or multiple cores vs JTAG ports: I don't see it as mattering.

Making the secure smart phone modular would be good because then subsystems could be replaced as needed.

Nick P β€’ November 18, 2013 12:48 PM

@ Petrobras

” Just forget them in the initial project.”

I decided to forget them in all my projects quite a while ago. Theo de Raadt inspired it. He didn’t cater to requirements from people who didn’t give a shit enough to make tough tradeoffs. Made his job easier.

“It does not need to be dirt cheap. It may have the same price as the main processor. ”

No, it (the microcontroller) really does need to be very cheap. There are many, many cost-sensitive uses of such chips. A chip that’s reasonably functional, enhances security, and is still cheap will see more use in embedded systems and hobbyist work. Such sales also fund further development. Plus, if it’s cheap enough, someone might be able to try to decompose an entire system onto such chips with a chip dedicated to each function. Currently, this requires a NUMA machine that nobody can afford/trust or a large cluster of cheap boards not designed for that.

“We also need to specify ram-based chips. Or store only encrypted content with shuffling in the RAM.”

It’s not strictly necessary. There’s several options here.

  1. Build a trusted RAM interface, incorporate IOMMU, and use memory safe software, then wipe RAM upon shutdown.
  2. Use a device in the middle to encrypt RAM or IO for applications that need it. (see HAVEN virtualization project)
  3. Use a chip that’s designed not to trust RAM or other hardware. This is more similar to what you’ve mentioned. Examples are Aegis processor, SecureCore/SP, and SecureME.

“It will be difficult to hide a backdoor in 30000 transistors that can be checked easily and totally by anyone motivated enough.”

A chip meeting all your requirements, including RAM protection, will probably be a lot larger than that. Fortunately, many projects producing safer variants of general-purpose processors have been able to use low-end FPGAs for prototypes. That might indicate the finished chip will be relatively low complexity. Maybe. I still don't think it will be visually validated or anything like that.

My previous advice was to make at least one chip that could be made at even low end fabs so we have many to choose from. Then, maybe get fabs in several different countries producing the same chip.

“The armrace between NSA and people using air-gapped backdoored systems is in favor of NSA. We need a safe base.”

It’s actually in favor of nobody. The people attacking air gaps are getting pretty clever. However, nobody that’s been compromised was following good air gap advice a la what govt recommends. They use dedicated machines from sources they trust in rooms that block out sound and signals. Any connections to those computers often involve high assurance guards or media encrypters. The trick here is that one must get a PC that isn’t backdoored, use it in a very isolated environment, and ensure any communication happens in a trustworthy way.

@ Wael

“Just having a system not subverted or back-doored is not sufficient to label it as secure, although a good thing to have. ”

Good point. My goal in such systems is finding old hardware that won't have a clever backdoor in it. They should be very easy to use securely if the only point is the air-gap use case of content reading or signed-message origination (Mike's and my main use case). As for one directly connected, that's a different situation entirely. My requirements for protecting such a system are sufficiently painful that I switched a long time ago to two-system schemes where only one is Internet-enabled. It acts as a front end for the other, sending the information in a simple way over a more trustworthy comms scheme.

@ RobertT

“As has been noticed elsewhere the NSA’s favors attacks on protocols and functional definitions, and often seeds the market with subtlety flawed implementations. I’d expect a project such as your’s to be no exception. ”

If we got working hardware, then it would be worth it. We could fix the implementations later. Matter of fact, there’s already plenty of open implementations of almost everything we might need. The fixes can be incremental. The hardware, though, absolutely must work as advertised for the system to be trustworthy. Ideally, it must also make writing secure software/systems easier as well with its on-chip capabilities.

@ Winter

re Wheeler

I’m aware of his work. I’ve posted his SCM page here a few times as it’s excellent. A real system would certainly consider or include his DDC concept to some degree.

“But you still would need a way to ascertain that the hardware you get is secure. The same as with the compiler binaries in the Trusted Trust attack.”

Which is why I keep saying we need to make the systems portable across architectures, then deploy on a mix of older systems and newer ones from various suppliers. I also prefer keeping the host properties hidden so the attacker must guess and possibly give themselves away. That trick has worked in the past.

Re secure communication and handhelds

I covered a high security solution here. I posted a mobile version of it once that stretched the meaning of mobile. Think small briefcase rather than phone. πŸ˜‰ It was necessary, though, as it had to use trustworthy chips and the mobile SOC’s have so much crap in them I couldn’t begin to know their actual attack surface. RobertT’s many posts just made me more sure of that over time.

The cool thing about my design is that it’s simple enough for many IT people to manage and each component can be customized. Makes attackers’ job harder so long as individual components are strong.

@ Bryan

” I’d bet more than a 100,000 secure phones could be sold in the first year to governments and corporations needing security. ”

Ask Cryptophone in Germany (or ask Frank Rieger). They sell premium devices with encryption, hardening, usability, and a few grand price tag. I think their volume might give an idea of what ballpark your solution’s sales would be in. Might be more or less than you think.

Re your device

That’s interesting. I have no comments about it right now. Just make sure you watch the squid forums for my next paper release as it addresses secure SOC’s and bottom-up trust in systems. Reading esp your MMU needs, I’m sure one or two papers have something for you to integrate.

re GCC

The simple route is to just not use GCC. There’s LLVM and simpler portable C compilers. There’s also CompCert whose backend should be getting extended for all sorts of things. The middle end showed itself to be perfect in testing. We have a verified compiler and people aren’t using it for code addressing subversion? Blasphemy! πŸ˜‰

@ Garfield

The neo900 is interesting. I hope they succeed. However, it seems they have a weakness in privacy as the graphics and baseband stacks are still closed. They say they will restrict the baseband stack somehow. Sounds like they plan to use software to restrict highly integrated and untrustworthy hardware. I’m not in any hurry to buy into this one until I see how they do that. It’s one of the biggest risk factors in any phone and still exists in neo900.

Petrobras β€’ November 18, 2013 3:31 PM

@Nick P: "A chip meeting all your requirements, including RAM protection, will probably be a lot larger than that."

If this is true, then let's drop the requirements about RAM encryption.

Let's just make sure that any input from outside is encrypted by the application before it is written to RAM.
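
A minimal sketch of that discipline, assuming the key lives in trusted on-die storage; the mixing function is a placeholder so the data flow is visible, and a real design would use AES-CTR or another proper cipher:

#include <stddef.h>
#include <stdint.h>

/* Placeholder keystream, illustration only, NOT cryptography.
   A real design would run AES-CTR keyed from on-die secret storage. */
static uint8_t keystream_byte(uint64_t ctr)
{
    ctr ^= ctr >> 33;
    ctr *= 0xff51afd7ed558ccdULL;
    ctr ^= ctr >> 33;
    return (uint8_t)ctr;
}

/* XOR a buffer with the keystream: call once before the data is
   stored to untrusted RAM, and once after pulling it back on-die.
   In a real system the plaintext side would live only in registers
   or on-die SRAM. */
static void xcrypt(uint8_t *buf, size_t len, uint64_t base_ctr)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= keystream_byte(base_ctr + i);
}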

Clive Robinson β€’ November 18, 2013 5:46 PM

@ Petrobras,

I think you will find that custom silicon is beyond the resources you have, so I would think of other ways to do things.

The first thing you have to do, though, is define the scope of what you want to do, without which you will always end up with an unrealistic piece of hardware.

One other way to do things is to set up a mitigation system such as a stack-oriented language running on a DSP chip; for various reasons, DSP chips are not likely to be "backdoored".

Another mitigation method is to use three architecturally different CPUs in a voting protocol (a minimal voter is sketched below).

And there are several others that could be quite effective if people were to think a little outside the box…
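
For the voting idea, a minimal sketch of a 2-of-3 bitwise majority voter; the interface is illustrative, and in practice the three results would arrive over independent buses from the three CPUs:

#include <stdint.h>
#include <stdbool.h>

/* Bitwise 2-of-3 majority: a single faulty or subverted CPU cannot
   change any output bit on its own. */
static inline uint32_t vote3(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}

/* Flag any disagreement so the mismatching core can be investigated
   rather than silently outvoted forever. */
static inline bool disagree3(uint32_t a, uint32_t b, uint32_t c)
{
    return (a != b) || (a != c);
}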

Anura β€’ November 18, 2013 6:54 PM

So… When are nanoassemblers going to be on the market? Because that can solve a whole host of problems.

Figureitout β€’ November 18, 2013 8:44 PM

"Dedicate a few of them to directly manage IO like USB, Ethernet, SATA, graphics, phone, Etc."
Bryan
–Woah woah, slow down. Maybe for a more usable computer; but my vision is no USB, hell no Ethernet, no SATA, a simple LCD command line, no radio aka phone. Way too much added complexity; the entire attack surface needs to be killable at all times, or at least a pre-set nuke-zone. I'd almost say no mouse, just 4-way ◄ ► ▼ ▲ keys, and maybe eliminate Caps and Alt on the keyboard. My vision is that even w/ physical access, the computer will be so simple that it almost doesn't even matter; you will have to destroy or steal it. B/c really, w/ just a gun and a psychotic mind, you can get physical access.

Figureitout β€’ November 18, 2013 8:56 PM

"I have unfortunately no knowhow about fab."
Petrobras
–So what, are you going to go cry in a corner? Learn it; even the supposed "experts" of any field can get owned by a rookie, and you can make fatal mistakes all the time. They will only really know sectors of the process; for the rest, they are just standing by w/ fingers crossed, placing religious faith in something they can't control. Even leaders leading people don't know what the engineers "in the trenches" are doing, so it's really all just fluffy trust.

This computer I envision should be so simple that all the members of the team will be able to learn it and independently verify it. Programmers, meet hardware. Engineers, meet compilers. Plus, I want all test points clearly labeled for what is "Open Hardware", running "Open Software", where the community actually checks the work and calls out errors by screaming.

Casagrande β€’ November 18, 2013 9:54 PM

Since we cannot get rid of the government spying machine, can we go the other way and push for it to be built up further?

The only thing needed is to add a feedback mechanism so that it controls your electronic gadgets.

Then you could stand in your living room and control everything through spoken commands. Just speak the words (picked up by existing surveillance systems) and your TV displays the show or information you requested.

Figureitout β€’ November 18, 2013 10:12 PM

Then you could stand in your living room and control everything through spoken commands.
Casagrande
–Aww you’re adorable. You won’t have control over anything you adorable thing. The info you requested will be infected, everything will be infected. No freedom. So your idea is terrible and I’d rather kill myself directly or indirectly.

Casagrande β€’ November 18, 2013 10:42 PM

@Figureitout

So your idea is terrible and I’d rather kill myself directly or indirectly.

Dang. There goes that idea;-P We would not want you to kill yourself either way.

Nick P β€’ November 18, 2013 10:46 PM

@ figureitout and others wanting small all-in-one solution

If the goal is a safe system easily understood through and through, then I can think of at least three pieces of inspiration:

  1. Niklaus Wirth’s Lilith project
  2. LISP machines
  3. JOP Java Processor

Lilith gets No. 1 because it's exactly the kind of project you refer to: a processor, language/compiler, operating system, and apps that were all both simple and integrated. It was one of the only systems where the hardware was designed to comply with the language's needs rather than vice versa. The various systems were used for real work at ETH Zurich for a long time and were extended by students. Active Oberon and Bluebottle A2 are the most recent versions of this work that I know of.

Lilith Quick Overview with Pictures
http://www.ethistory.ethz.ch/rueckblicke/departemente/dinfk/forschung/weitere_seiten/lilith/index_EN/popupfriendly/

Full Technical Paper (it’s scanned so it looks ancient haha)
http://www.cfbsoftware.com/modula2/Lilith.pdf

Btw, OSes such as Bluebottle, written in type-safe, higher-level code, are a nearly ideal bootstrap for whatever open CPU gets designed. Only the few CPU/board-specific routines and the backend of a compiler must be ported. Then any apps are immediately usable, and new ones benefit from language-based security via the type system on top of other methods.

LISP machines

The reason I bring them up is that you can build anything with LISP. It's ugly, but it's the most powerful language ever designed as far as I know. So flexible that everything from OOP to theorem proving to concurrency was as easy as writing a library, and then the language had it. The problem with any interpreted language, though, is the native code underneath and the interactions with unsafe features. Unlike LISP interpreters, the LISP machines had processors that executed it natively. I could imagine building some basic integrity or security features into a LISP processor, then putting something like the Scheme security kernel and/or VLISP on it.

K-machine chip description:
http://fare.tunes.org/tmp/emergent/kmachine.htm

(“IGOR” is a modern prototype LISP chip available on open core site)

Heck, most programming environments still don’t match all of LISP Machine’s features:
http://www.symbolics-dks.com/Genera-why-1.htm

Java Processor

A Java processor makes for a good goal due to the abundance of Java software and developers. The easy way is to use the safe subset of Java, either without native code or with a chip that isolates it. HISC is the best way to do the latter; however, it's patented, so designs like JOP are a nice place to start. Their site has many more educational links. Combine the JOP processor, JX OS, the SpecialJ compiler, an external IO-handling chip, and a crypto coprocessor for a nice solution.

Voices In My Head β€’ November 18, 2013 10:49 PM

There Is No NSA.

The Government Loves You.

You Will Be Assimilated.

Resistance Is Futile.

Figureitout β€’ November 18, 2013 11:04 PM

nick P
–So you're basically saying you want a large, expanded "system-everywhere" solution; are you trying to succeed or fail? Thanks for the links (I'll check them out later), but I still don't really like it. I don't want OOP, I don't like it. And I don't want Java. I want binary or ASM. I'm not trying to be a dick; I want a true minimal computer, and too much security is too much complexity, which means more backdoors.

Figureitout β€’ November 18, 2013 11:09 PM

nick P
–Stanislav is going to have to convince me on a Lisp machine; I’m not buying it now though.

RobertT β€’ November 18, 2013 11:32 PM

@Petrobras: "@RobertT: 'Subtly flawed = DEFINITELY'

Yes, but with 30,000 transistors it would be more difficult to subvert than a modern processor with a billion transistors."

An upper limit of 30K transistors is a good starting point for minimizing the hardware attack surface, but let's be realistic: there is very little one can do with just 30K transistors. Probably a more important point is that a chip like the one you are suggesting would definitely be pad-bound.

Pad-bound means that the ring or two of pads around the outside of the chip sets the chip's minimum dimensions.

As an example, if a chip must have 100 pads (required for IO and external memory), then it will have minimum dimensions of, say:

80u minimum pad + 50u pad-to-pad space + corner set-back (say 200u)

so the pad-to-pad pitch is 130u. For one row of pads the minimum side length is (400 + 130 × 25) microns, or about 3.7mm; even with two rows of pads it'll be about 2mm. (BTW, double rows of pads complicate assembly and therefore increase costs.)

So from a pad-bound perspective, a 100-pad cheap chip gives you at least 4mm² of silicon.

Ask yourself: how many gates fit into a 4mm² die area on a 22nm process?

At, say, 4 transistors per gate and over 2M gates per mm², the answer is 8M gates, or 32M transistors.

That's right: the die area defined by your likely pad minimum can contain 1000 processors with 32K transistors EACH.

Now what was it you were saying about a minimized design providing nowhere for things to hide?

Sorry, I'm not trying to be mean; the problem is that many people are still conceptualizing device-minimized circuits when the true minimum is set by the pad-bound area of the die.

Before anyone tries to educate me on technology advancements in pad-over-active-area concepts, let me assure everyone that I'm well up to speed on all these technologies, so go ahead and divide my transistor counts by 4 or even 8. You still have adequate area to implement 100 processors with 32K transistors per processor. This is the technology reality today.

Next gen will only get denser and denser; look at the last 30 years for some idea of how fast chips will continue to shrink. IMHO any worthwhile project starting today should be thinking about being optimally sized for the technology available in 10 years' time.
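
For anyone who wants to replay the arithmetic, a quick sketch using RobertT's example numbers (all of them his stated assumptions, not process data):

#include <stdio.h>

int main(void)
{
    /* example numbers from the comment above, in microns */
    const double pad = 80, space = 50, setback = 200;
    const double pitch = pad + space;          /* 130u pad-to-pad   */
    const int pads = 100, per_side = pads / 4; /* one row, 4 sides  */

    double side1 = (2 * setback + per_side * pitch) / 1000.0;
    printf("single-row side : %.2f mm\n", side1);   /* ~3.65 mm */

    /* two staggered rows shrink the side to roughly 2 mm, giving
       the "at least 4 mm^2 of silicon" floor */
    double area    = 2.0 * 2.0;  /* mm^2                         */
    double gates   = 2e6 * area; /* ~2M gates per mm^2 at 22nm   */
    double xistors = 4 * gates;  /* 4 transistors per gate       */

    printf("pad-bound area  : %.0f mm^2\n", area);
    printf("capacity        : %.0fM transistors = %.0f processors of 32K\n",
           xistors / 1e6, xistors / 32e3);
    return 0;
}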

Bryan β€’ November 19, 2013 1:50 AM

@RobertT on chip size and transistors:

That is why I was going for a repeated core and cache memory. I'm also figuring on having more IO lines; just the memory bus will chew up a few hundred.

I want the IO hardware on-chip because then it is known, and I just load the drivers from the primary BIOS image or operating system: no BIOS extensions to hide exploits in. As for linking IO hardware to a core, it then runs the same core/instruction set as the rest of the system, so there's one less assembler/compiler to write and verify. I'll admit I'm thinking about how to make the architecture good for anything from cell phones to servers.

Petrobras β€’ November 19, 2013 2:48 AM

Thanks, Nick P, for the links; I will check later whether these processors have enough computing power, and how long they could run on car batteries.

Thank you very much, @RobertT, for the realistic details about dies and processors.

Where do you put the JTAG interface? Can it use that free room on the die?

How much does it cost to check 30,000 transistors on a given physical processor? To check that there is no connection between two different circuits on the same die? What is the initial investment needed to be able to make such checks?

@RobertT: "there is very little that one can do with just 30K transistors."

The ARM6 has only 35,000 transistors according to http://en.wikipedia.org/wiki/ARM_architecture and it had no RAM cache (= hiding places).

If there is still room on the die, the free room could be used to host RAM with its own independent connectors, not connected to the 35,000 transistors.

@Clive Robinson: "For various reasons DSP chips are not likely to be "backdoored"."

Can you please detail your reasons?

Clive Robinson β€’ November 19, 2013 2:52 AM

@ Figureitout, Nick P, Petrobras,

Have a look at,

http://www.colorforth.com/

It might be of interest. Charles Moore is an interesting and colourful character who over many years has ruffled a few feathers along the way.

Many years ago, when people thought the 68K "was neat" and the 8088 was just in their price range, he came up with a rather interesting solution for more computational power. He took Forth (the language he invented), stripped it down to 29 primitive instructions, and put it onto a TMS DSP chip as a quite effective RISC processor, which got called the "Forth Machine" (quite a common naming idea; see the "P Machine" for another). A little while later this stripped-down dialect of Forth became known as "Machine Forth".

As Nick P has noted, you can do anything with a Lisp machine; the same is true of a Forth Machine. The difference is that the LISP version is huge in hardware and looks more like a CISC system, whilst the Forth version stays very RISC in nature.

In fact so small that the pad-bound issue @RobertT mentions was not just a small problem but a large empty one, and the solution (the S40 multi-CPU chip) is an idea I've mentioned and discussed on this blog before: putting multiple CPUs on a chip. However, good as it might have been, the chip got axed.

I can't say why it got axed, but I suspect it was, as with many good ideas, ahead of its time: a solution to a problem that had not yet become sufficiently acute, so messy (insecure from this thread's POV) workarounds with current technology still hang in by their fingernails. Another issue is "mind set" born of "custom and practice", which puts blinkers on technical solutions due to the investment in becoming proficient (about 10K hours).

You can see both in effect in this commentary,

http://www.yosefk.com/blog/my-history-with-forth-stack-machines.html

So of the two (Lisp/Forth) I'm technically agnostic and see they both have their place (as I've indirectly indicated in C-v-P), but all the code cutters out there have billions of man-hours invested in maintaining the messy status quo that GCHQ, the NSA, et al have so happily exploited.

And with "Rich Content" being the current aim of by far the majority of market-cornering wannabes, I can see people still discussing these issues long after most of us on this thread have retired or died.

Nick P β€’ November 19, 2013 9:12 AM

@ Clive Robinson

I did consider including ColorForth and Forth chips. They're in my bookmarks from some prior conversation, maybe with you. Nice pieces of work with ridiculously small code sizes. I specifically excluded them from the list, though.

The reason is that Forth is untyped. I recall from my quick reading of the tutorials that Forth programmers don't worry about such things; they just compose stack operations, basically. I think this makes it easier for them to shoot themselves in the foot, and conceptually harder for lower-grade programmers as well. In comparison, Java, Oberon and Scheme have all been used to teach programming for years.

So, not being a Forth programmer, I can't say for sure whether it's an unsafe language or whether one must simply use a lot of care to avoid problems. It just seemed that way from my reading.

Bryan β€’ November 19, 2013 9:44 AM

Personally I want to use my FLEX language and the GUI system I dreamed up, but then FLEX was specifically designed so PhD types could shoot themselves in the foot with a tac nuke. Imagine a language where you can extend the syntax with an object definition. :} LOL :} LOL :} LOL :} Oh yeah, the language core only has the structure for defining objects and for translating new syntax into simpler, already-defined syntax. Even A = B + C; needs an object library. 8) Objects are data types plus the syntax that can operate on them, the simplest syntax being machine language. I got stalled on the compiler a decade or so ago, but it was at least handling a simple C variant.

Time to move more wagons of corn…

Clive Robinson β€’ November 19, 2013 12:01 PM

@ Nick P,

Hmm, funny you should mention BrainFk; I was thinking likewise. For a language with only eight instructions it is surprisingly Turing complete, and from experience I know you can write an interpreter for it that runs on an eight-pin 12-bit PIC with naff-all RAM and a K of ROM (main memory was a RAM chip with a bigger PIC acting as a bidirectional serialiser and dual UART). So you could implement a more complex instruction set on top of it, and another on top of that… just like Russian dolls, upwards, until it gets to the point that J.A. Sixpack "code cutter" can get a grip on things without his skull imploding from the vacuum created by his downward-spiral thinking on contemplating BrainFk 😉
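
For anyone who hasn't met it, a minimal hosted sketch of the eight-instruction interpreter; the PIC version Clive describes would swap getchar/putchar for UART routines, and the hard-coded program here is just a demo that prints 'A':

#include <stdio.h>

int main(void)
{
    static const char *prog = "++++++++[>++++++++<-]>+.";
    static unsigned char tape[4096];
    unsigned char *p = tape;
    const char *ip = prog;

    while (*ip) {
        switch (*ip) {
        case '>': ++p; break;
        case '<': --p; break;
        case '+': ++*p; break;
        case '-': --*p; break;
        case '.': putchar(*p); break;
        case ',': { int c = getchar(); *p = (c == EOF) ? 0 : (unsigned char)c; } break;
        case '[': /* on zero, jump forward past the matching ']' */
            if (!*p) {
                int depth = 1;
                while (depth) { ++ip; if (*ip == '[') ++depth; if (*ip == ']') --depth; }
            }
            break;
        case ']': /* on nonzero, jump back to the matching '[' */
            if (*p) {
                int depth = 1;
                while (depth) { --ip; if (*ip == ']') ++depth; if (*ip == '[') --depth; }
            }
            break;
        }
        ++ip;
    }
    return 0;
}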

With regard to Forth: no, it's not a typed language; your data container either goes on the stack or it does not… If it does not, then you have to design a word either to put it at the top of the heap or to push/pop/manipulate it on the stack… It sounds tough, but actually it's not. Importantly, unlike halfway-house languages like C, all data side effects are your fault, not the interpreter's. Put bluntly, even C has way too many data types to be safe for J.A. Sixpack, and the amount of code required to keep J.A. Sixpack on track is way, way bigger than the basic data-manipulation code for one data container (stack top).

If you think back to the many C-v-P chats, you will remember I was advocating many small processors with simple OSes, jailed by a hierarchical hypervisor using the MMU. The MMU would be minimal in design, and I was thinking of an 8051/2-type CPU (which has signature advantages), so down in the low thousands of transistors per core. The two-a-penny code-cutting J.A. Sixpacks would not get their hands on this directly, but on a high-level scripting language whose interpreter would be developed by secure-coding experts (who are scarcer than honest politicians).

The question is "which scripting language?" I was thinking of an extensible "shell script", but a limited version of Lisp would probably work as well.

Thus think of the 20,000ft overview as a very CISC Lisp Machine implemented by an interpretive system sitting on top of an X-RISC Forth engine running on the likes of multiple straight-line DSP cores.

@ Petrobras,

Why do I think DSP cores are not backdoored…

Well, I'd be the first to admit the argument is almost circular 😉

DSP cores are mainly seen as "data filters", not the "data mills" of general CPU cores. Although DSP cores are generally Turing complete, they are very rarely used as such; the data passing through is processed not on the value of the data but by a data-invariant algorithm. General CPUs, on the other hand, are expected to run data-dependent algorithms that change behaviour on the data values; they can be used to do signal processing but rarely are.

Thus the amount of data interesting to an intel agency that is seen to be going through DSP cores is, from their point of view, vanishingly small. When you then consider that the NSA is actually "resource limited", you have to ask how high a priority backdooring a DSP core would be.

There is, however, a flaw in this argument, which revolves around how a state-level agency of the likes of the NSA, GCHQ, et al would actually backdoor a general CPU. If it's by getting into a CPU macro, then the argument remains fairly solid. However, if it's by messing with the foundry process that puts in test extras and the like, then as this would in effect affect all chips irrespective of function, the argument may be on shifting sands.

Finally there is the SoC consideration. As has been pointed out, pad-bound chips are a waste of resources; a chip foundry could in theory put two different customers' designs on the same chip and bond them both out with some kind of MUX system whereby you "blow" a programmable link to disable functionality. This two/three/four-for-one is seen when academic designs are made in small-scale foundries, because it spreads the entire manufacturing cost across all designs. It is also known that chip manufacturers will do the same, because making a high-end chip costs about the same as a medium or low-end one; the saving comes in volume production. So when you buy a chip there may well be a lot more functionality in there than you might expect. Thus a DSP chip may actually be a mobile-phone chip with the extra functionality turned off by a "soft link" that could be re-enabled by software…

That being said, high-end DSP chips are specialised beasts needing more specialised designs and designers, and thus may not end up in SoC designs…

You'd need to get inside the fab business in an in-depth way to make a current evaluation of the state of play.

Clive Robinson β€’ November 19, 2013 5:43 PM

OFF Topic :

Bruce gets mentioned in this article about OpSec for hackers and others whom the IC/LEOs see as "roadkill to be" on the "information superhighway":

http://blogsofwar.com/2013/11/11/interview-hacker-opsec-with-the-grugq/

There is much I would agree with in the article, though it misses some bits, and other bits are, depending on your POV, inaccurate.

As the article notes, "human trust" not "technical trust" is the real problem; after all, you can only be betrayed by those you have incorrectly trusted, and it's our nearest and dearest who can betray us the most because of the confidences we tell them. In the UK armed forces they have an expression about this, "don't leave ammunition for the enemy", and another on types of trust, "I'd trust him with my life, but not my wallet or wife".

I was also given some "employee management" advice by somebody well practiced in black-bag work, which was "never trust or depend on someone you don't have enough dirt on to bury them with, and then only when you've made sure they know it"; history occasionally gives up an example or two of this, i.e. J. Edgar Hoover. Another piece of advice from the same source was not to have a family, because "loved ones will bring you real pain at the hands of others"; another cheery thought from them was "the only thing pigs can't digest is teeth, so remember to sieve the slurry"…

As for SigInt, it has different meanings in different places; in the UK, SigInt is a broad catchall for all forms of intel about communications, not just "traffic content". Traffic content was the aim in WWI, and technical solutions such as encryption are effective against it. What came into being in WWII was intel on "traffic routing", in the form of "traffic analysis", and it was quickly discovered that this gave an accurate portrayal of intent and was not affected by encryption… The partial technical solution to traffic analysis is "channel stuffing"; others are wrapping point-to-point signals in "link encryption", and "delayed/out-of-order delivery" such that messages cannot be tracked through a signals net (points that TOR should take on board).
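
A toy demonstration of channel stuffing (frame format, tick count and queue contents are all invented for illustration): the link emits one fixed-size frame per tick whether or not real data is queued, so an observer doing traffic analysis sees a constant rate either way.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define FRAME_BYTES 32
#define TICKS 8

/* Toy queue: at most one pending message per tick. */
static const char *pending[TICKS] = {
    NULL, "hello", NULL, NULL, "world", NULL, NULL, NULL
};

/* Stand-in for the link encryptor and transmitter. In a real system
   the whole frame, real/dummy flag included, is link-encrypted, so
   the wire shows only FRAME_BYTES of ciphertext per tick. */
static void link_send(const uint8_t *frame)
{
    printf("tick: flag=%d first payload bytes=%02x %02x %02x\n",
           frame[0], frame[1], frame[2], frame[3]);
}

int main(void)
{
    for (int t = 0; t < TICKS; t++) {
        uint8_t frame[FRAME_BYTES] = {0};
        if (pending[t]) {                       /* real traffic  */
            frame[0] = 1;
            strncpy((char *)frame + 1, pending[t], FRAME_BYTES - 2);
        } else {                                /* dummy padding */
            frame[0] = 0;
            for (int i = 1; i < FRAME_BYTES; i++)
                frame[i] = (uint8_t)rand();     /* CSPRNG in real life */
        }
        link_send(frame);                       /* same size, same rate */
    }
    return 0;
}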

Figureitout β€’ November 19, 2013 11:15 PM

Clive Robinson
–Hmm, Mr. Moore lives near Lake Tahoe; should've known that when I visited (gorgeous place). Even more to think about; the language kind of blows the mind.

Speaking of blown minds, mine was blown when I first heard of brainf*ck, of course on hackaday. Here's what I envision my little computer to be like.

OT
More healthcare.gov: http://www.thefiscaltimes.com/Articles/2013/11/19/Tech-Experts-Healthcaregov-Shut-It-Down

http://www.techdirt.com/articles/20131010/01484924821/ (the comment section is also interesting, with more stories of major failings at these contractors; and oh, healthcare.gov requires javascript, surprise surprise…)

Figureitout β€’ November 20, 2013 12:14 AM

Clive Robinson
–The blogsofwar site was hilarious. Dude's looking to get beheaded trying to infiltrate real drug gangs; you really can't even "trust your bros" in that business until at least 2 years of doing business. I succeeded by shielding and by having Grade-A trustworthy people who would never work for the feds against you; I got a little lucky too. First off, opsec is overrated in a police state or where surveillance is everywhere; by practicing good opsec you in fact make yourself more noticeable…

Also, after enough time "playing the game" out of curiosity, you realize that if you step back and think, it really is a waste of time. Eventually you just have people looking at people and watching what they do, not doing anything by themselves.

The one point I agree w/ is that the state has a very big legal edge: a legal spot to meet, and some legal protections from murder and crime for state agents, unlike us citizens. This also leaves room for false flags, and for gov'ts attacking each other while falsely blaming citizens.

Agents are very easy to spot after playing the game long enough; really you just have to laugh and put on a fake smile while the gov't wastes money on its failed infiltration attempts. I also tested making agents move around and do things on my command, and it worked b/c they let themselves be controlled by me.

Petrobras β€’ November 20, 2013 4:09 AM

@Clive Robinson: "stripped it down to 29 primitive instructions and put it onto a TMS DSP chip as a quite effective RISC processor"

Neat. How many transistors would it need, assuming the stack stays in memory?

That processor would need two more primitives to enable real and user modes:

(1) a marker to put on the stack, to protect the stack of the real mode;
(2) a sort of long jump (to the code of a sleeping process) that:
(2.1) loads the current stack of that process from storage;
(2.2) lets that process execute natively for up to 10,000 instructions (evaluating that loaded stack, "user mode"), stopping on
(2.2.1) an instruction trying to access the marker (1) or the protected stack;
(2.2.2) a heap access attempted outside of its share;
(2.2.3) an access to the PCI bus attempted outside of its allowances;
(2.3) moves the modified stack of that process back to storage;
(2.4) and deletes the marker (1).

(2.1) and (2.3) may be implemented with other Forth primitives, in which case (2.2) becomes the primitive in place of (2).

The eventual presence of a syscall would be recorded just before (2.3).

That computer would need a compiler into Forth for programs in languages without a garbage collector (C++? ParaSail?). It would also need a disassembler of Forth sequences whose output compiles back to the same Forth sequence.

Then it would be usable.
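
A hedged C sketch of primitive (2)'s inner loop, with a three-op toy ISA standing in for real Forth words; every name, opcode and limit here is invented to show the shape of the checks, not an actual design:

#include <stdint.h>
#include <stddef.h>

enum stop { OK_DONE, HIT_MARKER, HEAP_OOB, STACK_OOB, STEP_LIMIT };

enum op { OP_PUSH, OP_ADD, OP_STORE, OP_HALT };

struct proc {
    int32_t  stack[256];
    size_t   sp;         /* first free slot                      */
    size_t   marker;     /* slots below this belong to real mode */
    uint8_t *heap;
    size_t   heap_len;   /* this process's share                 */
};

/* Run at most max_steps instructions in user mode, stopping on any
   access below the marker or outside the heap share (2.2.1, 2.2.2).
   PCI-bus checks (2.2.3) would hang off OP_STORE in the same way. */
enum stop run_user(struct proc *p, const int32_t *code, size_t max_steps)
{
    size_t pc = 0;
    for (size_t step = 0; step < max_steps; step++) {
        switch (code[pc++]) {
        case OP_PUSH:
            if (p->sp >= 256) return STACK_OOB;
            p->stack[p->sp++] = code[pc++];   /* immediate operand */
            break;
        case OP_ADD:      /* ( a b -- a+b ) */
            if (p->sp < p->marker + 2) return HIT_MARKER;
            p->sp--;
            p->stack[p->sp - 1] += p->stack[p->sp];
            break;
        case OP_STORE: {  /* ( value addr -- ) */
            if (p->sp < p->marker + 2) return HIT_MARKER;
            size_t  addr = (size_t)p->stack[--p->sp];
            int32_t val  = p->stack[--p->sp];
            if (addr >= p->heap_len) return HEAP_OOB;
            p->heap[addr] = (uint8_t)val;
            break;
        }
        case OP_HALT:
            return OK_DONE;
        }
    }
    return STEP_LIMIT;    /* the proposal's 10,000-instruction cap */
}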

@Nick P: “Well, at least your language was more sensible than Brainf**k.”

There are uglier languages: https://en.wikipedia.org/wiki/Unlambda needs only eight instructions (.cdiksv) to be Turing-equivalent, and four more (|?@e) to be able to write a parser.
A hello world program:
r“““““`.H.e.l.l.o. .w.o.r.l.di

Wesley Parish β€’ November 20, 2013 4:14 AM

@Hello

And then there’s Dihydrogen Monoxide, one of the deadliest substances known to man.

And I quote:

What are some of the dangers associated with DHMO?
Each year, Dihydrogen Monoxide is a known causative component in many thousands of deaths and is a major contributor to millions upon millions of dollars in damage to property and the environment. Some of the known perils of Dihydrogen Monoxide are:

Death due to accidental inhalation of DHMO, even in small quantities.
Prolonged exposure to solid DHMO causes severe tissue damage.
Excessive ingestion produces a number of unpleasant though not typically life-threatening side-effects.
DHMO is a major component of acid rain.
Gaseous DHMO can cause severe burns.
Contributes to soil erosion.
Leads to corrosion and oxidation of many metals.
Contamination of electrical systems often causes short-circuits.
Exposure decreases effectiveness of automobile brakes.
Found in biopsies of pre-cancerous tumors and lesions.
Given to vicious dogs involved in recent deadly attacks.
Often associated with killer cyclones in the U.S. Midwest and elsewhere, and in hurricanes including deadly storms in Florida, New Orleans and other areas of the southeastern U.S.
Thermal variations in DHMO are a suspected contributor to the El Nino weather effect.

end quote

Z.Lozinski β€’ November 20, 2013 11:26 AM

There is a chap in the UK reporting that LG Smart TVs appear to be transmitting data on viewing habits, specifically all channel changes, and the filenames on attached USB drives, to LG's servers. What's more, turning OFF the preference that allows user reporting has no effect. The story has been picked up by GigaOm. The original story, complete with wireshark traces, is here:

http://doctorbeet.blogspot.co.uk/2013/11/lg-smart-tvs-logging-usb-filenames-and.html

His complaint to LG got a vacuous answer, part of which reads:

“The advice we have been given is that unfortunately as you accepted the Terms and Conditions on your TV, your concerns would be best directed to the retailer. We understand you feel you should have been made aware of these T’s and C’s at the point of sale, and for obvious reasons LG are unable to pass comment on their actions.”

Nick P β€’ November 20, 2013 1:00 PM

@ Petrobras

Here are a bunch of them:
http://www.ultratechnology.com/chips.htm

They certainly can be implemented with few resources and old technology. The point I made to Clive, which is VERY IMPORTANT for you to understand, is that they are not safe compared to some other designs. The language, if used carefully by smart people, can let you build a working system with minimal code on old chips. The problem is that people developing systems software regularly shoot themselves in the foot with existing languages; I can only imagine how they'd fare with a low-level, stack-oriented, typeless language like Forth.

That's why I promoted chips that have safety built in. The Modula/Oberon projects show particularly that you can use a safer, readable language with good performance and usability. The Vega chips do that with enterprise Java, while designs like JOP and SHARP do it in embedded Java. Point is, there are options for both old and new chips to be safe from the ground up without using an unsafe language.

(Of course, we should keep Forth chips/environments in the back of our heads just in case there's a situation where the preferred options can't be used but Forth can, with some benefit. I'm not prejudiced against it; I just think we have safer options for many use cases.)

Figureitout β€’ November 20, 2013 10:06 PM

Bruce Re: Finally realizing a solution isn’t political
And techies can only fix it if government stays out of the way.
–First off, make sure they don't take a crappy picture of you lol. I mean, let me just put it this way: a "political" solution to a technical problem would be the disaster that is healthcare.gov right now. It's hard to envision more failure than that.

Enough w/ the bloated crap we've had to put up w/ for years now. We need level heads and clear minds to implement this security for the internet, even though "security" and "internet" seem to be oxymorons in my view. I think the recommendation should be to have "internet-connected" devices and other devices that you use for whatever else you do. Simple air gaps for now, but you should know they can be breached easily. So we need a "home base" w/ undeniably verifiable test points. Bits coming from possibly many places doesn't inspire a lot of confidence in me…

Sounds nice, but implementing it will be hell. It has to be done (or at least attempted), though. For instance, on my dumb phone I experienced a very advanced attack that I couldn't figure out whatsoever. I don't trust the device, but still, it can't do jack compared to what any smartphone can do now, and those are completely vulnerable attack surfaces; unacceptable attack surfaces.

Petrobras β€’ November 21, 2013 12:39 AM

@Petrobras: "How much does it cost to check 30,000 transistors on a given physical processor? To check that there is no connection between two different circuits on the same die? What is the initial investment needed to be able to make such checks?"

I got no answer on that line: does that mean no one here has ever checked a processor? If so, then there is no point in making verifiable processors.

@Nick P: "low-level, stack-oriented, typeless language"

What is the problem with typeless? With low-level?

Is this problem the reason no one here has talked about replicating the well-documented i386 or ARMv6?

Nick P β€’ November 21, 2013 9:11 AM

@ Petrobras

“What is the problem with typeless ? Low level ?”

I already explained that in both posts. It’s about readbility and safety. Forth isn’t really either unless in the hands of top notch Forth talent. So, it can be used, but it defeats the old principle of making “bad things hard to do and good things easy to do.” It actually does that on purpose according to its designer.

Devil’s Advocate Against Myself πŸ˜‰

Now, one may try to compile a safe language to Forth code. The thing about Forth's stack language is that many languages are stack-oriented when they get compiled. So the runtime would be an easily analyzed Forth interpreter or chip, while the implementation language lacked its risks. Might be doable. It should also be easy to write an interpreter for a safe or powerful language in Forth. This, along with its advantages for unsafe low-level stuff, is why I said I wouldn't throw it out entirely, although I'm advising people not to focus on it.

Let’s Say You Do a Forth System

If you want to use Forth, which can be done on ancient chips, you will be going back to a DOS-type interface with no program safety (a la kernel vs user mode), and you must get everything right. This will require you to thoroughly learn Forth, as it's a counter-intuitive language if you come from a different background. You will be thinking in stacks and stack manipulation almost non-stop (thinking like a computer?). You will also need to design/code the app with perfection, as Forth code will happily mangle security properties if you tell it to. If you achieve the perfect design/implementation, then others who thoroughly understand Forth can see it has no backdoors. See the troubles? 😉

“Is this problem the reason no one here talked about replicating the well-documented i386 or arm-v6 ?”

It’s already been done commercially. The commercial knockoffs of x86 have all failed with the exception of AMD and VIA. Both are US companies, so both might be backdoored. There are many ARM variants. As far as open designs (32 bits +) go, there are quite a few CPUs out there, including an ARM-compatible one. Look at opencores.org to see some. The Chinese Loongson MIPS processor also has x86 emulation at the chip level to help run legacy stuff. The problem with copying x86 or ARM is that you get all their strengths with all their [security] weaknesses.

Honestly, I think the best from a subversion-defence standpoint is MIPS. MIPS processors’ performance ranges from embedded to servers depending on the specific chip. It’s a very simple ISA that can be extended easily, and many “secure from the chip up” projects are leveraging MIPS. There are MIPS chips coming from US and Chinese companies and everywhere in between. Write the OS, crypto and security to be portable between MIPS machines, then mix and match suppliers. If you’re worried about a specific superpower, pick a chip from the other one. Easy way to keep risk low. Plus, OpenBSD and Linux are already on MIPS. 😉

“I got no answer on that line: does it mean no one here has ever checked a processor? If that is true, then there is no point in making verifiable processors.”

There used to be quite a few ways to check processors in the old days. All of them took experts and some equipment. Thing is, there’s not much you can do cost-effectively after a certain point of complexity or with certain manufacturing techniques. You’re gonna trust the fab and/or hardware tools even for a verified hardware spec. So, the real security measure is to incentivize the fab and its management to ensure you’re getting untampered chips. That can be quite difficult but is easier than a technical solution. That is, the technical solution is still an open research problem, while the other is a business/financial problem.

Bryan • November 21, 2013 9:14 AM

@Petrobras: “How much does it cost to check 30000 transistors on a given physical processor? To check that there is no connection between two different circuits on the same die? What is the initial investment needed to be able to make such checks?”
I got no answer on that line: does it mean no one here has ever checked a processor? If that is true, then there is no point in making verifiable processors.

I’ve never done semiconductor work, but I have kept up some with the industry over the years. I originally went to college for computer engineering. Yes, chip design, but that was in the 80s. I’d bet a small team of laser-machining, electron-microscope and optical-analysis guys could make a machine to tear apart a chip layer by layer to discover its internal structure. Then somebody who knows pattern matching could use that data to match it to the design data. After these guys have the system up and running, I could automate it. I’ve been able to automate a lot of processes over the years that people thought were impossible. Samples could be pulled from production batches and tested every so often. The machinery to do this analysis will be very expensive due to the size of the features being analyzed.

My best guess is that analyzing the actual design before chip production can be done by hand, and will take close to half as many man-hours per checking group as the original design took to do. The best thing to do is keep the design segregated into small blocks that can be individually checked, then check the interaction of the blocks separately. This makes adding in features like a large bank of RAM or an MMU much easier to verify. Keep all the small blocks simple in design, and keep their interfaces as simple as possible too.

RelativeNewbie • November 21, 2013 9:55 AM

It seems that backdooring hardware has two approaches: 1) leaking information or 2) recognizing key sequences in code and subverting the logic, as in the Thompson backdoored C-compiler example. Won’t 1) eventually be discovered if one sets up a “reverse honeypot” and monitors the output of a machine whose input one controls? And doesn’t 2) require that the code to be backdoored be recognizable? Perhaps the answer to 2) is a polymorphic approach that transforms baseline code into a wide range of equivalent forms, different in each installation, making the recognition of key code sequences (say a login sequence) impossible?

Wael • November 21, 2013 11:08 AM

@RelativeNewbie

It seems that backdooring hardware has two approaches:

Seems you are describing two goals, not approaches! You can subvert HW in several ways. One is extra functionality that’s not documented (NV changes, configuration files, etc.); this one is normal and, strictly speaking, not considered Subversion, even if it’s used for “unethical”, “stealth”, or “harmful” activities without the user’s consent, broadly speaking. There is also Microcode subversion and Firmware subversion. I chose to include Microcode and Firmware under HW…

eventually be discovered if one sets up a “reverse honeypot” and monitors the output of a machine whose input one controls

Depends on the nature of the subversion. Like Thompson said, a well-installed Microcode “bug” is almost impossible to detect. Also, subversion may simply cause slight miscalculations to undermine “some device” under control of the subverted HW.

is a polymorphic approach that transforms baseline code into a wide range of equivalent forms, different in each installation, making the recognition of key code sequences (say a login sequence) impossible?

You aren’t talking about HW subversion now! This is software. And that depends on where in the stack this polymorphism took place. It’s not uncommon for viruses to use polymorphism (not in the OOP sense, but meaning varying forms and signatures with equivalent functionality). They use this not only to evade signature detection, but to make the code difficult to reverse engineer and profile. If they combine that with stealth techniques, it becomes more difficult still to detect.

Anura • November 21, 2013 11:35 AM

I was thinking: say you do store passwords securely, and you know that after a certain period of inactivity, say 3 months, 90% of your customers never return. If you keep track of their last login time, then after 3 months replace their password hash with random bits and reset their last login to a random time in the last 20 days or so, it would really waste the time of anyone trying to crack your database.
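Roughly, a sketch in C (the struct, field names and the 90-day cutoff are just illustrative; a real implementation would pull the random bits from the OS CSPRNG rather than rand()):

    /* Decoy-hash sketch: stale accounts get uncrackable random "hashes"
     * and plausible recent login times, to waste a cracker's effort. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    #define HASH_LEN 32
    #define INACTIVE_SECS (90L * 24 * 60 * 60)   /* ~3 months */

    struct account {
        uint8_t hash[HASH_LEN];
        time_t  last_login;
    };

    /* stand-in for a real CSPRNG call; rand() is NOT cryptographic */
    static void random_bytes(uint8_t *buf, size_t n) {
        for (size_t i = 0; i < n; i++)
            buf[i] = (uint8_t)(rand() & 0xff);
    }

    static void decoy_stale_account(struct account *a, time_t now) {
        if (now - a->last_login < INACTIVE_SECS)
            return;                          /* still plausibly active */
        random_bytes(a->hash, HASH_LEN);     /* random bits, no preimage */
        /* back-date last_login to a random point in the past 20 days so
         * decoy rows look just like live ones */
        a->last_login = now - (time_t)(rand() % (20L * 24 * 60 * 60));
    }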

RelativeNewbie • November 21, 2013 12:15 PM

@Wael I was trying to be concise but ended up being unclear. Yes, I am talking about two possible goals, two ends to which hardware/software might be subverted. (And now I’ve added a third goal below.) And ultimately that subversion may take place in the CPU chip design, in a peripheral chip design, in the motherboard design, in the microcode, in the BIOS, in the OS, in the application, or at some other level in the stack.

Useful subversion would seem to require 1) leaking data, 2) allowing unauthorized activity or 3) preventing the system from performing its function. I guess 3) would find it easiest to escape detection: if a chip were to fritz on a certain random input, that input may never be discovered. (Could the trigger be successfully hidden as a specific random 64-bit integer value? Is every 64-bit value a valid float? If not, maybe the trigger could be an invalid float unlikely to hit the arithmetic unit by mistake.)

But for an exploit of type 2) on the chip, or in a compiler a la Thompson, to function, it seems to me there would have to be logic to identify the code sequence whose function is to be subverted. That logic would have to make the change in a very specific situation to avoid accidental detection as an unexplained bug. If the machine code or source code (depending on the level at which the exploit lives) were polymorphic, it would be largely impossible for the exploit to decide when to act. At the IP level, perhaps web sites should change their IP addresses frequently and randomly transform their html/css/js to further confuse exploits that are targeted at a specific site.

What if the exploit were fired on a certain random value? Certainly, anybody could write an HTTP server that dumps the disk when it receives a specific string. But this would be identifiable in code review. The deeper in the stack such a trigger resides, the more it relies on the higher levels being built in a certain manner, and the more sensitive its function becomes to polymorphic changes in the upper levels. The trigger may no longer even be in the execution path of the function it intends to subvert. If there are routines that must always be run for critical operations, they are a small fraction of the code base and can be targeted specifically for verification.

Yes, this is co-opting the techniques of cybercriminals. Pretty appropriate, since the intelligence agencies are now targeting law-abiding citizens as well as criminals. We are all criminals in their eyes.

It seems to me easier to make the exploit writer’s job many orders of magnitude harder than to take on the seemingly much more difficult job of building systems verifiable from the chip on up.

Wael • November 21, 2013 12:44 PM

@RelativeNewbie

But for an exploit of type 2) on the chip, or in a compiler a la Thompson, to function, it seems to me there would have to be logic to identify the code sequence whose function is to be subverted.

True, but not necessarily the only way. That would be a simple subversion attack. Other, more compound attacks may not require detectable logic. Remember, highly competent attackers don’t only exploit weaknesses; they are also capable of creating weaknesses. It could be sufficient to create a minor weakness as a first step in a compound subversion strategy to achieve their goals.

But this would be identifiable in code review.

Not if you don’t control the whole process; Thompson said that in his article. And this assumes the code is open source, too.

If the machine code or source code (depending on the level at which the exploit lives) were polymorphic, it would be largely impossible for the exploit to decide when to act.

I’m not sure about that! If your subversion lives at the sub-assembly level or even the assembler level, there is no need to understand the logic of the program to exploit it. Like I said, sometimes it’s sufficient to introduce minor weaknesses as part of a compound attack. Then, how do we know or verify that certain instances of our obfuscated or polymorphed code are immune? I think you raise good questions that need more research to answer!

What if the exploit were fired on a certain random value?

I’m sure that has happened. There were some discussions on this blog where Kaspersky’s labs asked the public to help them with certain malware that triggered on either OS font characteristics or a certain directory file tree structure (or installed programs)…

Bryan • November 21, 2013 3:03 PM

@RelativeNewbie

Perhaps the answer to 2) is a polymorphic approach that transforms baseline code into a wide range of equivalent forms, different in each installation, making the recognition of key code sequences (say a login sequence) impossible?

Too simple to get around. While it would get around a simplistic pattern recognizer, a complex pattern recognizer like the one I wrote for reading newspaper articles would see through it fast. Sorry…

@Wael

It seems to me easier to make the exploit writer’s job many orders of magnitude harder than to take on the seemingly much more difficult job of building systems verifiable from the chip on up.

Making the hardware, OS, and development languages tight will make exploiting them much, much harder. I could see using a highly audited base kernel with a fully type-checked language like OCaml as the application language. The combination would be hard to subvert if non-OCaml libraries are not allowed. You could change the language to any that provides full type checking and range checking while not allowing the programmer to manipulate pointers. My experience with ML back in the mid ’80s told me that programming could be so much nicer than what C, Modula, Fortran, Lisp, etc. provided. My independent-study partner and I wrote a relational DB with triggers in ML in a semester as we learned the language. Only 5000-some lines of code, and over half were comments. Working with C, Lisp, and others after that felt like being smothered in senseless details. Make the design of the hardware such that as many types of exploits as possible will never work on it. Then use languages that make writing correct code easy.

Here is one example of making the OS tighter by design. When a process causes a privilege violation, or needs a privilege change, the hardware halts it, and the request is put into the interrupt queue for the right kernel process. That kernel process always checks the request for permissions before gathering the request data and checking it for correctness. The system can be designed so that only processes with a specific permission set can do certain operations on hardware. An IO-handling task like the USB code can be given rights to talk to the USB hardware, but still be excluded from accessing the MMU and privilege-setting hardware. A process that can change process privileges could be excluded from accessing hardware IO or manipulating the MMU registers. The process that can manipulate MMU registers and set up DMA transfers to bring off-board page memory in could be prevented from changing process privileges and accessing hardware IO. The thing is, there must be hardware on the chip for enforcing these protections.
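A minimal sketch of those mutually exclusive permission sets, in C (the bit names and the deny-by-default check are invented for illustration; the actual enforcement would live in the chip hardware, as said above):

    /* Mutually exclusive permission sets, deny-by-default. */
    #include <stdint.h>
    #include <stdbool.h>

    enum perm {
        PERM_HW_IO    = 1u << 0,   /* talk to device hardware (e.g. USB)  */
        PERM_SET_PRIV = 1u << 1,   /* change other processes' privileges  */
        PERM_MMU      = 1u << 2,   /* manipulate MMU registers / DMA paging */
    };

    struct process {
        uint32_t perms;
    };

    /* a request is honored only if every required bit is present */
    static bool allowed(const struct process *p, uint32_t required) {
        return (p->perms & required) == required;
    }

    /* Policy from the text: the USB task gets HW IO only, so even if
     * compromised it can never touch the MMU or privilege hardware. */
    static const struct process usb_task  = { PERM_HW_IO };
    static const struct process priv_task = { PERM_SET_PRIV };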

I’m now dithering on whether instruction restart is even needed. Give the CPU a hundred-plus identical CPU/MMU/cache cores, and a few cores attached to hardware IO. Only run one process at a time per core. If it memory-faults on a swapped-out page, halt it until the page is loaded. Other processes on other cores can use the internal memory buses in its place. Any violation, like asking to use a non-allocated page, causes the process to be killed.

Here is something to think about that is bouncing around in my head. It is a possibility for how to have encrypted external RAM and FLASH memory. Have a large bank of on-board memory, 128 MBytes or more. Use external DDR3 memory chips, like on a DIMM, in their large-block burst mode as a very high speed “swap disk”. Between the on-board memory and the very high speed “swap disk”, have a few paralleled cypher units that can keep up with the DDR3 memory transfer rate. The steps of the transfer would be something along the lines of: allocate and set up a cypher group with a key; set up and start a DMA block transfer from memory into the cypher group’s input queue. Each word goes to the next available unit in the group, likely one sub-unit per word of memory transferred. As the results come out of the sub-units, they are transferred to a block of internal memory. For FLASH memory, a nonvolatile store for a decryption key would be needed. When the chip comes out of reset, have it automatically set up the DMA transfer and cypher group for reading in the contents of the first few blocks of the FLASH memory. When they are read in, core #1 is told to start executing them. The cypher groups’ key stores will need to be write-only. There likely should be at least one per process, possibly best to have 4 or more: one for code and one for data. The spare ones could be used for files, etc. Not all processes will be running unique code, so some may be shared.

For the internal RAM, an emergency switch could be put across its VCC that disconnects it from external VCC and shorts the internal RAM’s VCC to ground. That will kill the stored data quickly. The only allowed reset for the switch should be a full power-off reset. A sufficiently large capacitor could be used to keep the memory bank grounded so it would take a few minutes of total power-off before a restart could happen.

If the chip has dedicated read and write pins for the FLASH memory, the write one could have a jumper installed on it.

I’d better submit this before it becomes a book.

Clive Robinson • November 21, 2013 3:11 PM

@ RelativeNewbie,

You are starting from the wrong point; you need to step back a bit and think like an attacker, not a defender.

Start by saying “What purpose is the attack/subversion going to serve?”.

Whilst it is not possible to come up with an exhaustive set of answers, they will all fall into broad classes in a simple hierarchy. The top node is the decision to attack or not, with the “yes” feeding into “passive or active”, and it is the output from the active branch that is of interest to your thoughts.

Broadly this feeds into three choices,

1, Denial of use/functionality,
2, Injection of data,
3, Extraction of data,

Which I’ve listed in order of difficulty, and as a general rule of thumb you need to be able to do the preceding levels to achieve the required level of functionality.

That is, to get desired data out you need to be able to formulate and inject an appropriate request/command, and to do this you have to be able to deny the system you are attacking the use or functionality of the preventative mechanisms designed to stop such attacks.

When it comes to hardware systems, denying functionality requires energy to be added to or subtracted from the normal functional baseline of the system. To do this requires a control mechanism that can be operated either directly or remotely.

The simplest is generally to “kick out the power supply” in some manner, but whilst this will deny functionality to those you are attacking, it generally cannot be used to advance you to the next level; further, it’s usually quickly spotted when used, and almost as quickly identified and prevented or mitigated.

Thus your next approach is to identify an existing control mechanism or, failing that, insert one. Of these two approaches the former is by far the best, for a list of reasons so long it would make a large book (coming to you when the faux gurus/experts are shown up for what they are 😉). Though in general principle the book has been written a couple of times in times long past (The Art of War several millennia ago, and the works of Casanova and Machiavelli several centuries ago).

The way to render control mechanisms ineffective from attack has traditionally been the EmSec/TEMPEST principles behind “Air Gapping”, BUT with the fatal assumptions of “starting from a trusted point”, “preventing emission” and “working anti-subversion mechanisms”.

Over a third of a century ago the assumption of “preventing emissions” was proven false. I for one had worked out how to do “active fault injection attacks” by modulated EM carrier, and demonstrated it, but either people did not want to understand the implications or, as I now suspect, those who could were applying the head-in-the-sand “Golden Goose Principle”. The reason for this latter belief is that one set of people I made aware of it had direct connections back through the DWS at “Pounden” –since moved to Hanslope Park– to the likes of MI5/6. Subsequent conversations with Tony Sale (assistant to Peter Wright, author of “Spy Catcher”), who put Bletchley Park back on the map, confirmed in a chat I had with him one day that my name had crossed his desk with regard to such attacks. And in doing historical research, I suspect the principle was well known to the GRU/KGB via the works of somebody better known for the strange musical instrument named after him: the theremin.

And in more recent times, considerably before Stuxnet became known, I’d posted the principle ideas behind Air-Gap crossing here and in other places when working out how to rig electronic voting machines. Which invalidated the “working anti-subversion mechanisms” assumption.

That leaves the assumption of “starting from a trusted point”, which formed the basis of the Rainbow Books on secure systems going back nearly half a century. It was known and stated back then in the open literature that this could only work if you could assure that the manufacturing and supply side were correctly secured. Which, back in the “money no object” “cold war” with a sufficient “technology gap”, was not much of an issue. But with the supposed “peace dividend” it became a real issue as the “technology gap” reversed itself in favour of Asia and COTS purchasing, and it is worsening with time and political kickbacks.

The simplest control mechanism is to produce a “matched filter”, which is little more than a delay line and adder, and the delay line can be made with a simple shift register; these can be found in nearly all moderately complex logic blocks, if for no other reason than to alleviate metastability issues. When the right bit pattern gets shifted in, the matching output goes from false to true, and this signal triggers an action. One such action could be to “crowbar the supply” briefly, causing random soft faults in other parts of the chip.
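As a toy model in C, the whole trigger is about this big (the 32-bit width and pattern value are arbitrary choices for illustration; on silicon it would be a shift register and comparator, not code):

    /* Matched-filter trigger: a shift register that fires on the cycle a
     * chosen bit pattern has been clocked in. */
    #include <stdint.h>
    #include <stdbool.h>

    #define TRIGGER_PATTERN 0xDEADBEEFu

    static uint32_t shift_reg = 0;

    /* clock one bit in; returns true when the pattern matches */
    static bool clock_bit(unsigned bit) {
        shift_reg = (shift_reg << 1) | (bit & 1u);
        return shift_reg == TRIGGER_PATTERN;  /* match output: false -> true */
    }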

But this limits it to the denial layer; using the match output to push serial data into the system elsewhere would be more desirable, especially if gated with another matched filter in either the time or sequence planes. Depending on where the data is pushed, it might well trigger the output of a covert channel that leaks a few bits of key etc.

The hard part is getting the signal from one part of the chip to another. And I’ve been told that there are very many non-obvious ways this could be done, which would make it at best very difficult to find.

For instance, “grey beard” engineers will tell you that the likes of latches and logic gates draw different amounts of current depending on their state, which was (and still is) a TEMPEST/EMC issue in designs. Further, if you arrange for your delay elements to be fed from the same supply trace, it will produce an analog signal, which could be cross-coupled to another line, that correlates to known states of the elements. Such things are not obvious in either circuit diagrams or layouts of PCBs, where you can use your “Mark 1 eyeball” to try and spot them. Think how much harder this is on a modern chip layout, where imaging systems are at best limited…

Wael • November 21, 2013 3:15 PM

@Bryan

@Wael It seems to me easier to make the exploit writer’s job many orders of…

You are aware these are not my words you are quoting, correct?

Bryan • November 21, 2013 6:25 PM

@Wael, Oops, sorry about that. Should have been RelativeNewbie.

@Clive Robinson

Broadly this feeds into three choices,
1, Denial of use/functionality,
2, Injection of data,
3, Extraction of data,

I’d add:
4. Use system and/or network resources. Think spamming, password cracking, and data pass-through to hide behind.

While #4 is a combination of #2 and #3, it is a common occurrence. I’ve seen home network routers compromised to work as pass-through devices.

Bryan • November 21, 2013 7:04 PM

Brain dump #2:

For those interested, I’m basing my hardware ideas on a design that can be connected to the net and have a good chance of staying secure; and, if it doesn’t stay secure, on being fully restorable to a secure state. That may take a full memory wipe, but that will be considered an OK solution for restoring the secure state. The hardware could still be operated in an air-gapped situation, and as such would be that much less likely to be compromised.

I’m currently seeing a tablet or phone system that has:
1: CPU/etc. chip.
2: FLASH memory bank for BIOS, and possibly OS storage. Can’t be written to if the jumper is removed.
3: Possible second FLASH memory bank for data storage. There would be hardware support for a second FLASH memory bank that could have a second hardware write-protection jumper. The OS and user data could also be stored on a card plugged into a USB device like the Raspberry Pi uses. Using on-board FLASH has the advantage that the “controller” is the OS kernel and its file system driver.
4: Large bank of RAM for process memory, likely DDR3. 128MB is maybe not enough. When you subtract kernel memory and application memory, there is only enough space for one full 1080p screen at 10 bits per color, 32 bits/pixel. The OS kernel will also need to be resident 100% of the time. Not much left for applications. 4k video needs 254MB per frame, and 8k needs 1GB. Just looking at possible future requirements. Video memory may need to be relegated to external memory, and that would require more IO pins. As video memory is displayed on the screen, I feel it doesn’t need to be encrypted. As video processing is compute-intensive, more than one core will need to be able to access it. All cores could have the hardware, but bits set up by the privilege-setting process could enable/disable it. Without a specialized MMU, I’m not sure how to restrict an application’s output to only one window of the screen. It would be possible, but the hardware would need fine-grained address comparators and as such would take up a lot of die space. The alternative is to have the video output hardware stitch together the screen from various window panes stored here and there in video memory. That gets complex, but can be done.
5: Multiple USB3 buses. For greater external data storage, ethernet controllers, keyboard, mouse, phone network, etc.
6: LCD/video interface. The cores that are given access to the video interface are expected to handle all graphical output, including video codec work.
7: Touch-screen input interface? Is there enough commonality to do this, or should it be relegated to a USB/SPI/I2C device?
8: Sound IO. Think DACs and ADCs with DMA only. The core(s) attached to them is/are expected to process the data using code from the BIOS/OS/application program.
9: USART, SPI and I2C ports for near-and-dear simple devices like hardware clocks, keyboard controllers, LED controllers, accelerometers, GPS, etc.
10: Timing controller for keeping stuff like video and audio synced up. This will be a bunch of clock timers and counter comparators that can be used to provide common timed interrupts to groups of cores.
11: If the system encrypts FLASH and RAM, then a very high throughput cypher engine linked to DMA.

Each of these devices will be connected to a specific core. There could be a crossbar switch that allows devices to be connected to any core, but that is added complexity that isn’t needed.

Yes, secure-capable hardware, drivers and internal firmware will be needed for various USB devices, but they can be designed, made, written and audited as needed. That’s life. They could use a subset of the core system used in the main processor to piggyback on the verification done for it.

Putting some device hardware onto the chip will cause regulatory headaches, so I evicted the phone network and WiFi hardware. Relegate them to one of the USB buses.

I also aimed at what is minimal for a standalone device like a tablet or phone. I considered a way to get data onto and off the device a minimum requirement, hence the USB interfaces. I have multiples of them for better throughput if desired. Once one USB hardware interface is there, ten is only more IO pins, die real estate, and dedicated cores. Remember cores can be used for general processing if the attached hardware isn’t needed. A tablet or phone may only use a couple of the USB interfaces and have the others disabled. On the other hand, a server may make use of all the USB cores to make disk IO much faster.

Bryan • November 21, 2013 7:08 PM

A USB device sans internal FLASH: build into its USB interface a simple block-transfer ability to load RAM with code from the host system. The code could be stored in the BIOS or OS of the host. The device needs to be set up so that after a hard reset the code must be reloaded. Once the code is loaded, the host triggers the CPU to start.

RelativeNewbie • November 21, 2013 8:40 PM

@Bryan, you write

“Too simple to get around. While it would get around a simplistic pattern recognizer, a complex pattern recognizer like the one I wrote for reading newspaper articles would see through it fast. Sorry…”

Perhaps we don’t understand each other, but to detect a login process you want to change while compiling C code, as in the Thompson paper, you would need a pattern matcher that matched just that code and very little other code, or the hack would be detected. There are many, many ways of transforming C code into functionally identical code: introducing dummy code, introducing new functions, new variables, changing control structures. I believe that the problem of deciding that C code sample 1 is equivalent to C code sample 2 is certainly intractable and probably undecidable in the Turing sense. Even a heuristic process would fail more and more as the transformations became more complex.
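For instance, here are two C functions that are semantically identical, yet a matcher keyed to the first form’s exact shape has no easy purchase on the second (the function and variable names are purely illustrative):

    #include <string.h>

    /* Form 1: the "baseline" a Thompson-style backdoor was written
     * to recognize */
    int check_pw_a(const char *entered, const char *stored) {
        return strcmp(entered, stored) == 0;
    }

    /* Form 2: equivalent code after mechanical transformation --
     * a helper function, a dummy counter, inverted control flow */
    static int neq(char x, char y) { return x != y; }

    int check_pw_b(const char *entered, const char *stored) {
        int deny_count = 0;
        while (*entered || *stored) {
            deny_count += neq(*entered, *stored);
            if (*entered) entered++;
            if (*stored)  stored++;
        }
        return deny_count == 0;     /* same result as form 1 */
    }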

I guess an optimizing-compiler-like process might come close, since they seem good at identifying dead code and inlining functions and the like. Perhaps the mal-C compiler that Thompson envisions can enlist itself in searching for the code to change. But I doubt it would have a high enough level of accuracy to do close-to-perfect matching, because I suspect that a solution to this matching problem implies a solution to the Turing halting problem, which is proven to have no solution.

RelativeNewbie • November 21, 2013 8:51 PM

@Clive Robinson, you wrote

You are starting from the wrong point; you need to step back a bit and think like an attacker, not a defender.

Start by saying “What purpose is the attack/subversion going to serve?”.

I am sorry for my dense, unscannable blocks of text, but if you look carefully at my last post you’ll see a list of goals pretty equivalent to yours!

Figureitout • November 21, 2013 9:53 PM

Bryan
–While your idea of a secure computer will be aimed at actually making some money in the market, I’m thinking anything much more than something like a graphing calculator will have too much complexity to be considered secure: more crossed fingers, less thorough understanding of the entire computer.

Maybe we can reverse my thinking and make a “semi-secure” computer, to use for developing this hardened dream I can’t get out of my head (it’s an obsession). Of course it will have features I can’t even think of now; some other clever person will come up w/ them.

First things first, we need to start getting in contact w/ owners and engineers at the fabs and start getting access to resources most of us simply cannot come up with; maybe getting a “courtesy” or reduced-price build, w/ someone like myself ensuring the chips get from A to B untampered with.

Oh, BTW, be thankful you have corn to harvest. ’Coons and corn smut got most of my crop last year, and this year more ’coons again. I’m putting in an electric fence next year.

Petrobras • November 22, 2013 2:42 AM

@Bryan: “The alternative is to have the video output hardware stitch together the screen from various window panes stored here and there in video memory. That gets complex, but can be done.”

It exists: wayland.freedesktop.org

“5: Multiple USB3 buses.”

Yes, but with USB drivers documented not to work on all ports
(one or two ports that are storage-only, one or two that are keyboard/mouse-only, …).

“I evicted the phone network and WiFi hardware.”

Add one or three Ethernet connectors (think of SATA-like connectors, or better, a slice of an RJ45 port that is only 2mm thick).

Or make an Ethernet-over-USB dongle based on your chip.

Bryan • November 22, 2013 4:35 AM

@Figureitout

While your idea of a secure computer will be aimed at actually making some money in the market,

Actually, I’m aiming at restoring privacy for the majority. To do that it must not cost more, or it won’t be competitive in the marketplace. Sure, early adopters who value privacy will pay more, so initially it will cost a bit more, but in the long run it needs to cost the same or less to get a vast majority to buy it. It also needs to be competitive in features.

As I’ve been thinking on this task, I’ve realized the whole computer, operating system, programming language, and applications need to be integrated together as a whole. There need to be many layers to the defenses; if one fails, hopefully another will contain the breach. Even with large amounts of auditing, bugs can still slip through. Humans are far from perfect. So the system needs to be redundant in its protections.

Nick P • November 22, 2013 10:32 AM

@ Bryan

“The system can be designed so that only processes with a specific permission set can do certain operations on hardware.”

You’re really just describing a hardware capability system with a specific permission setup. I recommend you look at older ones, as they will have explored some tradeoffs. I’ve thought about reimplementing one of them myself.

There have been software (e.g. EROS) and hardware capability systems. The hardware systems ended up being a lot slower and more expensive, as Moore’s Law didn’t benefit them much. There are some decent modern designs and even FPGA prototypes out there. None of the modern ones are in production. I’ve always encouraged projects leveraging capability hardware or OSs, as they’ve proven to be quite robust foundations for security.

“It is a possibility for how to have encrypted external RAM and FLASH memory.”

Already been done, from design to implementation. Look at the Aegis secure processor, the SP/SecureCore architecture, CODESEAL, and SecureME. Best not to reinvent the wheel, as the IT industry wastes insane amounts of manpower doing that already. Look at those, look at the problems/solutions they uncovered, and pick what fits your use case. Then do your own implementation of it if there’s a copyright issue. 😉

“For the internal RAM, an emergency switch could be put across its VCC that disconnects it from external VCC and shorts the internal RAM’s VCC to ground. That will kill the stored data quickly. The only allowed reset for the switch should be a full power-off reset. A sufficiently large capacitor could be used to keep the memory bank grounded so it would take a few minutes of total power-off before a restart could happen.”

The cool thing about having all RAM encrypted, as in the projects I described, is that only the key and a tiny on-chip memory must be wiped. Destroying the key takes just cycles and effectively wipes the vast majority of data; the rest of memory will be gone in a fraction of a fraction of a second. It can all be done with software after it detects a switch flipping, or as a routine built into the chip itself. Either way, very simple.

(Note: any peripheral cards or devices often have their own memory and so will have state in them. They can be designed perhaps to wipe their own state on shutdown.)
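To give a feel for how little code the key wipe itself needs, a sketch in C (KEY_WORDS and key_reg are illustrative; on real silicon the key would sit in a dedicated register file, not ordinary SRAM):

    /* Destroy the memory-encryption key: once it is gone, every
     * ciphertext word in external RAM is unrecoverable noise. */
    #include <stdint.h>

    #define KEY_WORDS 8                   /* 256-bit key, as an example */

    static volatile uint32_t key_reg[KEY_WORDS];

    /* volatile keeps the compiler from optimizing the stores away */
    static void zeroize_key(void) {
        for (int i = 0; i < KEY_WORDS; i++)
            key_reg[i] = 0;
    }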

“If the chip has dedicated read and write pins for the FLASH memory, the write one could have a jumper installed on it.”

I’m a big fan of hardware write protect. It’s a good idea. If possible, it should also be actual hardware that prevents the write rather than software that merely says no after seeing a switch.

“I’m basing my hardware ideas on a design that can be connected to the net and have a good chance of staying secure; and, if it doesn’t stay secure, on being fully restorable to a secure state.”

It’s a good requirement. I think you’re focusing too much on the memory wipe stuff, though. The real focus areas here are:

  1. Non-volatile memory holding trusted state or boot code must be protected from modification.
  2. The system should have methods to prevent malicious attacks on any privileged software. This is because a system that’s getting compromised repeatedly is usually not so useful if the work requires confidentiality.
  3. The recovery mechanism must be highly assured to work despite the actions of sophisticated, well-funded adversaries, who will target it as well.

Because of (1), you’re better off using two BIOS-type memories rather than one. The first is a ROM that can’t be changed once written. The second is flash that can be changed with a write-protect switch. The ROM has an extremely robust initial loader that performs the most basic tests (“does the MMU work? do the crypto instructions work?”), loads the flash into some kind of memory, verifies it, and then passes control to it. This ensures that the initial loader always runs during startup and can be guaranteed to perform certain checks.
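As a sketch, the whole ROM stage can be on the order of this C fragment (every helper name, the flash address and the signature check are hypothetical placeholders, not any real firmware API):

    /* ROM-stage loader: self-test, verify the mutable flash stage,
     * then hand over control. */
    #include <stdint.h>
    #include <stdbool.h>

    extern bool mmu_self_test(void);
    extern bool crypto_self_test(void);
    extern bool verify_signature(const void *img, uint32_t len,
                                 const uint8_t pubkey[32]);
    extern void fatal(const char *msg);    /* halts; never returns */

    #define FLASH_BASE 0x08000000u         /* hypothetical flash address */
    #define FLASH_LEN  (512u * 1024u)
    extern const uint8_t rom_pubkey[32];   /* burned into the ROM mask */

    void rom_stage(void) {
        /* the most basic tests run before anything else */
        if (!mmu_self_test() || !crypto_self_test())
            fatal("self-test failed");

        /* check the flash stage against the ROM-pinned public key, so
         * legitimate updates still verify */
        if (!verify_signature((const void *)FLASH_BASE, FLASH_LEN,
                              rom_pubkey))
            fatal("flash image fails verification");

        ((void (*)(void))FLASH_BASE)();    /* jump to verified stage */
    }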

The next security feature I devised was a different image for updates than for production. Do the update of firmware/drivers/core OS in an entirely different running mode. The updates are downloaded in production mode and checked, an update flag is set, and the system reboots. Upon reboot, after ROM, an update program is loaded and verified from flash. It can access the filesystem, although it does so with extremely safe, minimalist code with all checks on. Everything about this mode is full of safety/security checks. It only temporarily creates a filesystem service, pulls the update into memory, checks its hash/signature, and then destroys the filesystem service/privileges. It then prepares for the flash write, ensures the write-protect switch is off, and writes it. It then loads the image back up, checks integrity to ensure the write happened correctly, and tells the user to turn the write-protect switch back on. It then reboots, whereby the new flash loads into production.

Orange Book guidelines suspended all applications before opening a trusted path for the user into the security kernel. I took this further in my trusted boot and update designs: nothing should be running when these happen except the trusted software itself. It should also be developed with the highest assurance techniques we have, as it’s the most important part of the software.

I’m off to work now so I’ll try to address other aspects of your posts later.

Anura • November 22, 2013 11:06 AM

I don’t see any real way to get the majority to adopt more secure hardware without significant government backing. Security isn’t that much of a concern for the public; they want Windows or they want Mac, both closed source and, in the latter case, closed hardware; in the former, you are limited to x86-64. On top of that, how long were people running a known-insecure OS with a known-insecure browser on an administrator account? If Intel couldn’t get the masses off of x86, then I don’t see any individual, group, or organization smaller than the US government being able to do so.

I really, really wish I saw a way to get the US government and other world governments to get together and plan Computing 2: This time, it’s secure. An open project, built for security, from the hardware to the programming languages, to the firmware, to the software, to the protocols. I don’t see anything really working for the masses any other way.

Petrobras • November 22, 2013 11:33 AM

@Anura: “Security isn’t that much of a concern for the public”

Yes, @Nick P and I agree.

@Anura: “I really, really wish I saw a way to get the US government”

The ones in power like what NSA does. No hope from the government.

My hope is in the private sector, with its deep-pocketed investors that might want to actively protect their IP after what the NSA did to Petrobras, Belgacom, … https://www.schneier.com/blog/archives/2013/11/another_quantum.html

Petrobras • December 18, 2013 3:00 AM

Here is what it would take to make a verifiable processor.

  • crowdfund the first three steps below.
  • buy the licence to produce the 6502-D (or the Forth processor, or the Inmos T212, or the K-machine, or the Novix RTX2000, or …)
  • get the graphs of all transistors (visual6502.org for the 6502-D; for the others it is still an open question, but you may want to monitor http://visual6502.org/wiki/index.php for updates).
  • make a new layout, optimized by graphviz, with
    — one layer of metal plates
    — one layer of connections
    — one layer of vias
  • crowdfund the cost of a pilot plant.
  • produce the chip at the desired fab process (22nm?) with, in this order:
    — a thick, stiff and flat support which can be removed by a documented dissolution method (acid, water, heat, …).
    — one thin layer of metal plates
    — one thin layer of connections that will not dissolve
    — one thin layer of vias that will not dissolve
    — above all or some of the metal plates, leave an uncoated hole. Make these holes big enough to be able to come into contact with an atomic force microscope.
    — a thick layer of air
    — a thick protection against dust, easy to remove temporarily.
  • crowdfund the cost of a real plant.

Here are non-destructive ways to audit that chip, after removing the dust protection:

  • To audit the vias: use deep-UV light microscopy.
  • To audit the metal, or to remove a possibly subverted support:
    — add a thick layer of electrical insulator (for stiffness and cooling) above the vias, then remove the support with the dissolution method (this should be possible for anyone with a clean room).
    — look at the layer of metal plates with deep-UV light microscopy.
  • To audit the voltage of metal plates that have uncoated holes: use an atomic force microscope, slow down the processor clock, then use specific assembly code to thoroughly test the processor.
  • To audit the number of layers:
    — use deep-UV light microscopy on the uncoated holes above the metal plates to assess the thickness.
    — use X-rays to count layers? Or destructively make a transversal cut with a microtome or your favorite destructive ion method.

It is not possible to audit the connections directly. But I hope that the room left by the above auditing methods will not let the NSA insert useful stealthy circuits.

To moderator: I reposted my comment because ident was lost. Please delete previous post.

Nick P • December 18, 2013 4:44 PM

@ Petrobras

Nice that you mention the Inmos Transputer chip. Their design, esp. the use of Occam, was quite radical. Good that your mind is open enough to catch such a radical possibility. 🙂

With regard to your process node choices, it’s all interesting but our specialist here has dimmed my hope of such stuff working.

Of course, the conversations also pointed toward certain specialized or embedded chips likely being safe because there’s no reason to exploit them. There are the chips designed to be as cheap as can be, the DSPs Clive mentioned, the boatload of uncommon chips I mentioned, and RobertT even cleverly suggested GPUs. I decided to disregard GPUs at the time, but remembered them for later. Now there’s crypto, general-purpose, and security-type code running on extremely powerful GPU systems. It’s time to reconsider all of these as components of a secure system.

Fab problem: an end-run around production?

So, this leads me to a shortcut with fabs. Both new and existing ventures have a decent risk of being compromised. As I’ve outlined, there are many useful types of chips that are probably not compromised. These chips’ specs, logic, masks, etc. have already been made. So, a shortcut occurred to me: simply evaluate (under NDA) an existing design, work with (or pay off) fab personnel to validate that it’s the design that’s been in production, and just keep using (and replacing) those same cells/masks.

Our conversations, along with my looking at older systems, show that many current chips can be combined in interesting ways to make secure [enough] systems. Many types. So, they can continue to sell for their existing use cases (with new safety/security benefits) and be used in ventures that need subversion resistance. Such a plan, based on reuse of inherently marketable and existing designs, might shave off tons of cost compared to other options. Not to mention I’m not thinking inspection-based routes are viable anyway…

@ RobertT

Back on the topic of chip validation, studying how chips are made gave me an idea. The design process eventually produces hardware, like masks, that is used to put the logic on silicon. I’m wondering if a similar process can be used to produce a layer of hardware that can verify chip functionality instead. Regular reversing would strip layers off of audited chips to get them down to silicon. Then they’d be put on this material or process to show a probabilistic match. If the production was changed at some point, the patterns on the chip would start to change.

Think of it as a checksum for hardware. Rather than inspecting every bit, a function is run across many points to come up with a number that can perform the probabilistic check. The algorithm might be trained on the first batch of chips, then used for the rest. This is for process-node tech that’s not super tiny. You’ve shown us how difficult such stuff is to work with.
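A sketch of the idea in C, with measure_point() standing in for whatever the imaging rig reports at a die coordinate (the fold function and the coarse quantization are placeholders; the hard part is the measurement, not the arithmetic):

    /* "Checksum for hardware": fold coarsely quantized measurements from
     * many die coordinates into one number to compare across a batch. */
    #include <stdint.h>
    #include <stddef.h>

    struct point { uint32_t x, y; };

    /* stand-in for the imaging rig's reading at a die coordinate */
    extern uint32_t measure_point(uint32_t x, uint32_t y);

    static uint64_t die_checksum(const struct point *pts, size_t n) {
        uint64_t h = 0xcbf29ce484222325u;   /* FNV-1a style fold */
        for (size_t i = 0; i < n; i++) {
            /* drop low bits so ordinary process noise cancels out */
            h ^= measure_point(pts[i].x, pts[i].y) >> 4;
            h *= 0x100000001b3u;
        }
        return h;
    }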

RobertT • December 18, 2013 6:16 PM

@Nick P
As I’ve said many times before, IF I wanted to change the functionality of a chip I’d focus on getting the desired functionality into the design database AND then use the mask or fabbing stages to connect that which shouldn’t be connected, or disconnect something that should be connected, or I’d simply rely on parasitic connection of disparate blocks to create the information path. I’d think of a way to turn off a TRNG by disconnecting a single via (something I could do at a processing stage). Or maybe a way to inject errors into the operation of the TRNG noise-extracting circuit by intentionally creating local substrate injection in a neighboring circuit. (For instance: if I wanted substrate injection, I’d focus on triggering a local parasitic vertical PNP; the easiest way to do this is with a poorly designed voltage multiplier circuit. Even if someone simulated the circuit, they would never discover what its real intention was… namely, to interfere with the correct operation of a neighboring circuit.) If I wanted to read a signal level, I’d focus on using parasitic capacitance between adjacent signals to couple the state of a signal of interest into an unrelated circuit. If it was EVER discovered, I could simply claim it was an innocent mistake.

I’m not the only person with this level of understanding of chip function, so I’d expect no less an attack from any state-level adversary. So if you intend to prevent subversion at this level, you need no less an understanding of the whole chip design/creation system. Sorry, I can’t see any exception.

RE “functional verification”: it implies digital one-zero logic, so it implies that the only circuits you’re looking at are digital cells. On a modern SOC chip I can easily implement analog, digital and RF circuits; as a matter of fact, they have to be able to do all these functions to remain competitive. So functional verification will ignore an analog circuit because it is not a logic function. This makes analog and interface blocks the ideal spot to hide security vulnerabilities. Not only is it impossible to functionally analyze analog, it is also incomprehensible to the majority of top-down Verilog “code cutters” that create modern digital systems. Even the simulation of analog circuits uses a completely different set of tools from the typical digital flow. So whereas functional verification might get used at the top level for digital blocks, it will be specifically excluded on blocks labeled as analog (even if they contain digital/logic cells). I can’t say for sure that this is the design verification flow at all companies, but I’d expect this flow or something very similar.

When it comes to finding suspicious circuits, you also need to think in terms of physical proximity on the chip (layout) rather than design database arrangement. It is possible that two circuits that are at the same hierarchy level of the schematic (or equivalent Verilog function) are actually located on opposite sides of the chip. Similarly, it is possible that completely unrelated circuits are right next to one another (on the actual layout). So if I’m adding function to a chip database (for later hook-up or parasitic coupling), I’ll want layout proximity rather than design database proximity (basically, one or two short/local routes are much easier to make than ten routes that go halfway across the chip).

I’m not sure that this answers your question. What I’m trying to point out is that even complete logical equivalence (chip to design database and functional description) is no guarantee that the circuit is working correctly and not leaking information.

RobertT • December 18, 2013 7:11 PM

@Nick P
I just reread my reply and realized it probably does not make much sense. Let me summarize:

IF you are assuming your adversary intends to subvert security by attacking the chip hardware, THEN you must assume they have a very good understanding of the whole chip design, masking and fabrication process. This implies an ability to create subversion at a level beyond your ability to check for it (beyond functional equivalence checking). This is the classic security problem of the attacker having the advantage: the attacker only needs to find one attack vector that works, whereas the defender needs to defend against hundreds of possible but improbable attacks. The TSA is an excellent example of this asymmetry in action.

I feel the only real defense against this subversion of the chip creation process is to create a chip where information is intentionally distributed across hundreds of cores rather than all being visible if you can access one secret point. You then multiply the minimum magnitude of the required leak by the CPU array size. If you further encrypt local memory addressing and local memory data paths, then you multiply the search for the right bit by another factor of, say, 100. Combine these approaches and you create a situation where the covert comms return channel needs 10000 to 100000 times the bandwidth required today.

This means that a 1bps acoustic channel that might be useful for key-logger return would need 100Kbps of BW (which is hardly a covert channel).

Nick P • December 18, 2013 7:45 PM

@ RobertT

“I just reread my reply and realized it probably does not make much sense.”

You said it first. 😛 Nah, it was pretty understandable.

“I feel the only real defense against this subversion of the chip creation process is to create a chip where information is intentionally distributed across hundreds of cores rather than all being visible if you can access one secret point. You then multiply the minimum magnitude of the required leak by the CPU array size. If you further encrypt local memory addressing and local memory data paths, then you multiply the search for the right bit by another factor of, say, 100. Combine these approaches and you create a situation where the covert comms return channel needs 10000 to 100000 times the bandwidth required today.”

The purpose of my looking into this stuff is preventing or detecting chip subversions. If they can subvert it in the ways you described, including the design database, then how does this chip design prevent that? Can’t they just swap out the design for one that emulates it but is easy to crack?

Petrobras • December 20, 2013 8:54 AM

Thanks RobertT and Nick P for your nice and long replies.

@Nick P: “Nice that you mention Inmos Transputer chip.”

I especially liked the idea of native communication channels between these chips. But I am not sure the current owner of that chip would agree to sell a licence (or to apply the plans himself:
https://www.schneier.com/blog/archives/2013/11/friday_squid_bl_400.html#c2986171).

A former worker is collecting some factual information about these Transputer chips:
http://www.cs.bris.ac.uk/~dave/transputer.html
I tried the e-mail address included there, but got no answer.

@Nick P: “With regard to your process node choices, it’s all interesting but our specialist here has dimmed my hope of such stuff working.”

Are there any observations from your specialist that you could publicly disclose here?

@Nick P: “simply evaluate (under NDA) an existing design”

In security, the more something is open to audit, the better. There are better options than an NDA-based audit.

@Nick P: “As I’ve outlined, there are many useful types of chips that are probably not compromised.”

Schneier said in this video http://www.youtube.com/watch?v=Skr-jIqISO0 that if the NSA had two methods, A or B, to compromise something, they would do both (this is unfortunately not an exact citation). So from now on I assume they have tried to compromise everything, and since very old times (remember they asked to cap passwords at eight characters?).

And the fact that processor layouts are often reused for economic reasons (as discussed previously in a comment on schneier.com) makes it even more probable that any common processor is compromised.

@RobertT: “IF I wanted to change the functionality of a chip I’d focus on getting the desired functionality into the design database AND then use the mask or fabbing stages to connect that which shouldn’t be connected, or disconnect something that should be connected, or I’d simply rely on parasitic connection of disparate blocks to create the information path. […] parasitic capacitance […]”

This is why I prefer processors with a very low number of transistors (the ones I quoted just above have fewer than 35000 transistors on the die), to make these types of subversion more difficult, and I think inspection-based audit is necessary even if it makes the processor less commercially viable.

@Nick P: “Not to mention I’m not thinking inspection-based routes are viable anyway…”

People on visual6502.org have made an analysis of what happens if any given transistor of the MOS 6502-D fries.

@RobertT: “analog, digital and RF circuits; as a matter of fact, they have to be able to do all these functions to remain competitive.”

I don’t want a competitive product. Only a core less likely to be subverted, with more FLOPS than the old microprocessors that already exist.

@RobertT: “to prevent subversion at this level you need no less an understanding of the whole chip design/creation system. […] I’ll want layout proximity rather than design database proximity (basically, one or two short/local routes are much easier to make than ten routes that go halfway across the chip).”

I think that with only 35000 transistors, the analog simulation can be done with Simulink or free alternatives (Scilab, Octave, …).

@Nick P: “If they can subvert it in the ways you described, including the design database, then how does this chip design prevent that?”
This is one of the advantages of redesigning the chip layout ourselves (I mentioned graphviz).

But if all the auditing methods I set out above cannot detect a given subversion, then that subversion cannot influence many transistors, and hence cannot do anything complex or interact stealthily with the normal execution of an OS.

@Nick P: “Nah, it was pretty understandable.”

Ditto.

Thanks to the moderator for taking care of the comment feed.

Nick P • December 21, 2013 10:04 AM

@ Petrobras

“I especially liked the idea of native communication channels between these chips. But I am not sure the current owner of that chip would agree to sell a licence (or to apply the plans himself ”

Well, I’m glad you gave me the link b/c it’s been very interesting. There were certainly some clever concepts in there. I think in the end they were too radical for widespread adoption; a lesson new projects should remember, as it’s happened many times. That’s why I keep looking at projects similar to current tech (on the surface). However, my reading shows the Transputer lives on in some form today: Firewire’s data protocol is the Transputer’s data protocol. How neat!

As far as owners go, remember that there are many types of high-performance interconnects and multiprocessor systems. The vast majority of players in that market failed. There’s plenty of intellectual property out there to license or imitate. Then there are open standards like SCI, Firewire, etc. that might be reusable for our purposes.

“Are there any observations from your specialist that you could publicly disclose here?”

RobertT has disclosed plenty here. 😉

“In security, the more something is open to audit, the better. There are better options than an NDA-based audit.”

I was being realistic. Companies spend millions to tens of millions to make these chips. Most have no intention of putting their “secret sauce” in the open. So, full openness for an existing commercial design is unlikely. The next best thing is a group of people that can be trusted to vet it for us.

“And the fact that processor layouts are often reused for economic reasons (as discussed previously in a comment on schneier.com) makes it even more probable that any common processor is compromised.”

I don’t think that follows. Their subversion activities are costly and risky. They want results from these activities. So, they’ll go where the results are. It’s doubtful that they’ve subverted every reusable design or anything like that. It’s probably just a few things that are used by their targets. Hence my looking into obscure chips, the market’s also-rans, foreign chips, and chips that aren’t intended for general-purpose computing (but can be used as such).

“This is why I prefer processors with a very low number of transistors (the one I quoted just above has fewer than 35,000 transistors on the die), to make these types of subversion more difficult, and I think inspection-based audit is necessary even if it may make the processor less commercially viable.”

RobertT’s point was that they could put a state-of-the-art chip inside your low-tech chip that would be totally invisible to your inspection methods. He also pointed out that the low-tech fabs cost about as much to operate and have similar high-tech insider threats. So, while it seemed like a good idea at first, a bit of domain knowledge shows that it might not be beneficial for security.

Clive Robinson β€’ December 21, 2013 11:59 AM

@ Nick P,

    RobertT’s point was that they could put a state-of-the-art chip inside your low-tech chip that would be totally invisible to your inspection methods…

It’s a point that troubles me greatly and I don’t think there is any point in building a “TTL Chip Computer” for security reasons…

If you take a look at the 74LS181 4-bit ALU slice, it originally had about an 80-gate-equivalent area. As far as I can tell it’s not actually made any more, but there are still quite a lot floating around on the market if you want to buy one or more…

However, as far as physical silicon real estate goes, the die area used originally for the 181’s MSI chip would these days hold a whole 8-bit processor and several K of ROM/RAM, or even a dinky little microwave transmitter…
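
A quick back-of-the-envelope supporting that (both process geometries below are assumed round numbers, not measurements of any particular die):

```python
# Rough density scaling from a 1970s MSI process to a modern node.
# Feature sizes below are assumed round numbers for illustration.
OLD_FEATURE_NM = 8000   # ~8 um, typical of early-1970s TTL MSI
NEW_FEATURE_NM = 90     # a mature modern node
OLD_GATES = 80          # approximate gate count of the 74LS181

scale = (OLD_FEATURE_NM / NEW_FEATURE_NM) ** 2   # area shrinks quadratically
print("density gain: ~%dx" % scale)                        # ~7900x
print("same die area now holds ~%d gate equivalents"
      % (OLD_GATES * scale))                               # ~630,000
# Easily enough for an 8-bit CPU plus several KB of RAM/ROM.
```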

As RobertT also noted, at the ALU level you really only need to look at the “carry bit” to get sufficient usable intel.

Now if you consider the 181’s maximum clock rate (originally around 10MHz), that is way less than 1% bandwidth on a fairly easily realisable 6GHz oscillator driving into the power-supply trace (where any expected decoupling cap has become inductive and is not going to reduce 6GHz noticeably). Such a signal is invisible to most standard lab equipment such as oscilloscopes or logic analysers, and even if seen would probably be put down to some form of chip-internal metastability issue…
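
Putting rough numbers on that (the capacitor and lead-inductance values below are assumed, not measured):

```python
# Back-of-the-envelope check: a 10 MHz ALU clock riding on a 6 GHz
# carrier is a tiny fractional bandwidth, and a typical decoupling
# capacitor with a few nH of lead inductance stops being capacitive
# long before 6 GHz.  All component values are assumptions.
import math

f_clock = 10e6      # 74LS181-era maximum clock rate [Hz]
f_carrier = 6e9     # hypothetical on-die oscillator [Hz]
print("fractional bandwidth: %.2f%%" % (100 * f_clock / f_carrier))  # 0.17%

C = 100e-9          # decoupling cap [F]
L = 5e-9            # assumed lead/trace inductance [H]
f_res = 1 / (2 * math.pi * math.sqrt(L * C))
print("cap self-resonance: %.1f MHz" % (f_res / 1e6))        # ~7 MHz
print("impedance at 6 GHz: %.0f ohm (inductive)" %
      (2 * math.pi * f_carrier * L))                         # ~188 ohm
```

So above a few MHz the “decoupling” part is doing nothing, and the 6GHz signal rides out of the package on the supply pins essentially unattenuated.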

You start to think maybe these 181s I’ve sourced recently are not old stock that’s been sitting on some shelf for thirty years, but possibly a “Trojan Chip”.

Which means that,

    …So, while it seemed like a good idea at first, a bit of domain knowledge shows that it might not be beneficial for security.

Could equally apply to TTL designs, or anything else in the supply line…

Which begs the question “Where do we go from here?”…

Well, as far as I can tell that just leaves us with a “mitigation strategy” as a viable way forward.

One such route I did try was some very cheap DSP chips of EU design from an EU foundry, but I find myself wondering if even they have been “got at”…

As once observed “Paranoia can destroy yer” πŸ˜‰

Nick P β€’ December 21, 2013 1:04 PM

Reading this quick article on how DSPs work…

http://www.dspguide.com/ch28/3.htm

…got me thinking with the last line: “move data in, perform the calculations, then get data out.”

If we use a RISC- or MISC-like secure ISA, then it could be implemented in that fashion. Think of a functional style where internal state and the next instruction are passed as arguments. The DSP performs very simple instructions before the next state is entered. Complex instructions would simply be broken up and output as next operations to perform. IO events might be handled similarly. And there could always be more than one DSP in the system (transputer or MPMD style) to handle specific functions efficiently with implied POLA.
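
A minimal sketch of what I mean, with an invented three-instruction MISC-style ISA; the opcodes and state layout here are mine, purely for illustration:

```python
# Each step is a pure function: (state, instruction) -> next state.
# "Move data in, perform the calculations, then get data out."
from typing import NamedTuple, Tuple

class State(NamedTuple):
    acc: int               # single accumulator
    mem: Tuple[int, ...]   # small flat memory

def step(state: State, instr) -> State:
    """Execute one simple instruction; complex operations would be
    decomposed into sequences of these before reaching the DSP."""
    op, arg = instr
    if op == "LOAD":                        # move data in
        return state._replace(acc=state.mem[arg])
    if op == "ADD":                         # perform the calculation
        return state._replace(acc=state.acc + state.mem[arg])
    if op == "STORE":                       # get data out
        mem = list(state.mem)
        mem[arg] = state.acc
        return state._replace(mem=tuple(mem))
    raise ValueError("unknown opcode: %r" % (op,))

# Tiny program: mem[2] = mem[0] + mem[1]
s = State(acc=0, mem=(3, 4, 0))
for instr in [("LOAD", 0), ("ADD", 1), ("STORE", 2)]:
    s = step(s, instr)
print(s)   # State(acc=7, mem=(3, 4, 7))
```

Because every step is a pure state transition, each DSP in the system can be handed only the state it needs, which is where the implied POLA comes from.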

At a minimum, I’m seeing these processors: keyboard, mouse, video output, main control CPU, IO processor, IOMMU, capability/tag/MMU unit, and something comparable to DMA. A few of these could be easy to implement on a variety of processors. Others have requirements that dictate some things. Maybe throw in a chip directing the trusted boot process too.

Petrobras β€’ March 13, 2014 9:03 AM

@Petrobras: “I especially liked the idea of native communication channels between these chips. But I am not sure the current owner of that chip would accept to sell a licence (or accept to apply himself the plans https://www.schneier.com/blog/archives/2013/11/friday_squid_bl_400.html#c2986171 ).”

Inmos accepted a modified version of these plans!

500 MHz, 64 kB of memory, 1 W, a deep-sleep instruction πŸ™‚

1 cm², 8 kB of One Time Programmable (OTP) ROM, with JTAG access, and a compiler taking a variant of C named XC πŸ™

And I do not know how many transistors they have put inside; it is made at 90 nm by fab contractors, so auditing is not possible πŸ™

http://www.xmos.com/news/24-feb-2014/xmos-launches-xcore-analog-development-kit
http://www.datasheetlib.com/datasheet/1128503/xs1-l02a-qf124-i4_xmos/download.html

@RobertT: “double rows of pads complicates assembly and there increases costs”

Yep, the processor “xCORE-Analog” has two double rows of pads πŸ™

Nick P β€’ March 13, 2014 10:28 AM

@ Petrobras

“Inmos accepted a modified version of these plans!”

What parts did they accept? And to be used in what?
