TomAd December 13, 2013 5:19 PM

This is something I came up with today, when thinking about making crypto easier for the general public. Comments are welcome.

Title: Lazy man’s server key storage

The whole idea behind this key storage is to store someone’s private(!) and public keys on a server in a secure and easily retrievable way.

Generally, people are lazy when it comes to remembering passwords and backing up private and public keys. Here a possible solution is presented.

When someone creates an account, he or she creates an ordinary login name and (simple) password as usual, like in any other mail/chat application. After this, a public/private key pair for secure communication is also generated. When key pair generation is finished, a 100-bit random key and a 256-bit random salt are created. The random 100-bit key, in combination with hash-salt-stretching (2^20 iterations), yields an effective 120-bit key. This 120-bit key is used to encrypt the private key. The encrypted private key, the salt, and the public key can now be stored on the server. The 100-bit key itself is presented to the user as a serial number, which he or she can/must write down, using a simple 5-bit coding scheme.
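The stretching step might look like the following sketch. The construction (PBKDF2-HMAC-SHA256) and all names are my own illustrative choices; the comment above only specifies a 100-bit key, a 256-bit salt, and 2^20 iterations:

```python
import hashlib
import secrets

# Illustrative sketch of the proposal's stretching step.
serial_key = secrets.randbits(100).to_bytes(13, "big")  # 100-bit random key (padded to 13 bytes)
salt = secrets.token_bytes(32)                          # 256-bit random salt

# 2^20 salted hash iterations add ~20 bits of work per guess, which is the
# sense in which the 100-bit key becomes an "effective 120-bit" key.
wrap_key = hashlib.pbkdf2_hmac("sha256", serial_key, salt, 2**20, dklen=32)

# wrap_key would encrypt the private key; the encrypted private key, salt,
# and public key go to the server, while serial_key stays only with the user.
```

An attacker who steals the server’s database still has to guess the 100-bit serial and pay 2^20 hashes per guess, since the serial never leaves the client.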

A 100-bit random key with a 5-bit coding scheme results in 20 characters, which must be written down or stored locally; it must NEVER be sent to the server.

e.g. a 100-bit key with 5-bit coding, presented as a serial key:


Coding scheme:
5-bit values 0–31 map, in order, to the 32 characters:
ABCDEFGHIJKLMNOPRSTUVWXYZ2345678
(or other, less ambiguous characters; the coding is independent of upper/lower case.)
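The 5-bit coding could be implemented along these lines (a sketch using the 32-character alphabet above; the function names are mine):

```python
import secrets

# The 32-symbol alphabet from the scheme above; each symbol encodes 5 bits,
# so 20 symbols carry exactly 100 bits.
ALPHABET = "ABCDEFGHIJKLMNOPRSTUVWXYZ2345678"

def encode_serial(key_bits: int) -> str:
    """Encode a 100-bit integer as a 20-character serial number."""
    return "".join(
        ALPHABET[(key_bits >> (5 * (19 - i))) & 0x1F] for i in range(20)
    )

def decode_serial(serial: str) -> int:
    """Recover the 100-bit integer from its 20-character serial."""
    value = 0
    for ch in serial.upper():       # case-insensitive, per the scheme
        value = (value << 5) | ALPHABET.index(ch)
    return value

key = secrets.randbits(100)
serial = encode_serial(key)         # 20 characters to write down
assert decode_serial(serial) == key
```

A real implementation would probably also add a check character or two so typos in the written-down serial are caught before a decryption attempt.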

So whenever a person reinstalls, or installs an account on another device, he or she only needs to log in and retrieve the encrypted private key (and salt) from the server. The serial number only needs to be used once, to decrypt the private key. After this, he or she can continue communicating as usual, using only the default login name and password. This lazy man’s server key storage solves a lot of problems, among them having one account on several devices.

How to store the private key locally on the client is beyond the scope of this comment. (A combination of an application key, a random salt, and the password?)

I don’t know if this server key storage protocol already exists; if not, it is free to use.

Maybe something for Darkmail?

Thx for reading.

Buck December 13, 2013 5:27 PM

Teenage boy convinced to tattoo squid on his neck by an undercover ATF agent known as “Squid” to promote their sting operation’s front store (called “Squid’s”).

Plus plenty more examples of abuses of power, abuses of the mentally challenged, and abuses of police resources/taxpayers’ money via failure to communicate (read: national security), for your (possibly painful) perusal:

Warning: the linked content could be considered highly disturbing by some of the more idealistic readers out there…

unimportant December 13, 2013 6:07 PM

I’d like to know whether any cryptologists have ever thought about the necessity of uncertainty. To me, it seems to be an essential element of being human. Humanity should be more than just being part of the exponential growth of technology. Liberty is already lost; we are about to lose our privacy, and finally we’ll become humanoid robots.

Jonathan Wilson December 13, 2013 6:48 PM

NSA asks for a way to do its job without wholesale data collection:

My 10-point plan:
1. Stop all the things (demanding private SSL keys, breaking into private networks, weakening the security of software/protocols/algorithms/websites/etc.) being done in order to carry out data collection. Disclose ALL instances where security has been deliberately weakened by the US government or by someone working for/on behalf of/with/at the direction of the US government.

2. Stop the practice of keeping exploits in software secret so they can be abused to gain access to systems. Disclose all exploits the NSA knows of as soon as it becomes aware of them, so they can be fixed by the vendor.

3. Stop using spreadable malware of any kind. Spying software directly installed on a specific machine known to belong to a genuine threat (and that can’t replicate or spread to any other machine) is acceptable; it’s no different from the Cold-War-era photocopiers and typewriters that duplicated everything copied/typed, or from a telephone wiretap placed on the telephone of a known criminal.

4. Only collect information on people known to be bad guys (or information on people who can be 100% confirmed as having made contact with people known to be bad guys).

5. Only collect information after a proper warrant has been signed off on by a proper independent judge who can properly evaluate the evidence concerning the individual in question and determine whether that individual is a threat. If all they have is an email account, phone # or other credentials, the judge would need to be presented with the evidence showing why whoever owns this account is a threat (even where they don’t have the person’s actual identity, they may have communications that person made, obtained through other sources, showing why that person is a threat).

6. Swing the pendulum away from the “spy on everyone else” part of the NSA mission and back to the “stop everyone from spying on us” part.

7. Spend a lot less money/time/effort collecting data and a lot more money/time/effort turning that data into information. With all the advances these days in looking for needles in electronic haystacks (Google PageRank for searching, IBM Watson for parsing natural language and figuring out what’s really being asked, Siri for speech recognition, and others), coming up with ways to find that one useful piece of information in all that data should be getting easier.

8. Allow internet companies to disclose in aggregate how many warrants (and of which types) they received from government agencies in a given period. Since, per #5 above, the NSA would no longer be carrying out warrantless surveillance, this would cover all government requests to these companies.

9. End any NSA support for (or actions in support of) export controls on cryptography, and lobby to have them removed (with narrow exceptions prohibiting/restricting the export of cryptography to states/groups/individuals on the “bad people” list, and for “military” cryptography).

10. As part of the “stop everyone from spying on us” mission, work to increase the security of computer systems and networks (including cryptographic protocols and algorithms).

RR December 13, 2013 7:39 PM

@unimportant: “I’d like to know whether any cryptologists have ever thought about the necessity of uncertainty. To me, it seems to be an essential element of being human.”

This is an example of the stolen concept fallacy.

The concepts ‘necessity’ and ‘essential’ both presuppose and logically depend on the validity of certainty.

Nick P December 13, 2013 8:25 PM

How Johnny can safely program Satan’s computer: Survey of hardware/software architectures that protect software

Bruce Schneier called for ways for people to take back control of the Internet and their computers. I’ve often claimed most solutions won’t work because the endpoints aren’t safe. Dealing with that requires handling so many issues in hardware, firmware, software, etc. that most people don’t know where to begin. That’s where I come in: I’ve kept track of the work in this area. These papers mainly focus on methods of making systems secure/safe/correct from the ground up, in a pervasive way, or against many classes of attack. Extra links are occasionally thrown in for inspiration or historical value.

I hope these keep everyone busy for months building new hardware/software architectures and other things I can’t imagine at the moment. 🙂

DARPA Clean Slate Program projects

OpenFlow: Networking redefined. Already deployed in a cloud provider if I recall correctly.

SAFE – A Clean-slate architecture for secure systems by Chiricescu et al.

“the goal of SAFE is to create a secure computing system from the ground up. SAFE hardware provides memory safety, dynamic type checking, and native support for dynamic information flow control. The Breeze programming language leverages the security features of the underlying machine, and the “zero kernel” operating system avoids relying on any single privileged component for overall system security. The SAFE project is working towards formally verifying security properties of the runtime software.”

(Nick’s note: I’m more interested in the SAFE hardware than in the rest of the system. It’s so radical that adoption might be hard even for geeks. So I’ve been thinking of combining it with a type-safe imperative language as soon as the hardware is beyond the alpha stage.)

CHERI: a research platform deconflating hardware virtualization and protection 2012 Watson et al

“CHERI’s hybrid capability model provides fine-grained compartmentalisation within address spaces while maintaining software backward compatibility, which will allow the incremental deployment of fine-grained compartmentalisation in both our most trusted and least trustworthy C-language software stacks. We have implemented a 64-bit MIPS research soft core, BERI, as well as a capability coprocessor, and begun adapting commodity software packages (FreeBSD and Chromium) to execute on the platform.”

(Nick’s note: Ross Anderson’s Cambridge people and Neumann’s SRI people are teaming up on this one. Like SAFE, they’ve already got prototypes and open works you can toy with. Unlike SAFE, this one is more like the old segmented/capability architectures, which gives them a route to legacy compatibility and low developer training time.)

End of DARPA Clean Slate list

Papers from DARPA’s earlier OASIS program for intrusion-tolerant apps, systems and networks

Boeing Phantom Works’ OASIS Contribution

“This document summarizes the results of the Boeing team’s survivable JBI design effort. The objective of the effort was to develop the design for a JBI system that can operate through sophisticated adversary attack, and yet scale well to wide-area networks. The system design approach ensures that the intrusion tolerance mechanisms do not significantly degrade performance, while still tolerating compromised components. For the system to be deployable and the results to transition to real systems, the performance cost of providing intrusion tolerance must be contained. The design trades opted for solutions that preserve system performance. The final report includes a section on the system design, a summary of system requirements and a concept of operations, design details and a summary of project accomplishments, planning documentation, lessons learned and a conclusion.”

(Nick’s note: My early favorite, because it uses many Orange Book era tricks with modern components like Spread. Its Byzantine fault tolerance tradeoff is particularly clever, and so is its survivability grammar auto-config trick.)

Intrusion Tolerant Software Architectures 2001 Stavridou et al.

“Intrusion tolerance is a new concept, a new design paradigm, and potentially a new capability for dealing with residual security vulnerabilities. In this article we describe our initial exploration of the hypothesis that intrusion tolerance is best designed and enforced at the software architecture level.”

Intrusion-Tolerant Architectures:
Concepts and Design 2003 Verissimo et al.

“There is a significant body of research on distributed computing architectures, methodologies and algorithms, both in the fields of fault tolerance and security. Whilst they have taken separate paths until recently, the problems to be solved are of similar nature. In classical dependability, fault tolerance has been the workhorse of many solutions. Classical security-related work has on the other hand privileged, with few exceptions, intrusion prevention. Intrusion tolerance (IT) is a new approach that has slowly emerged during the past decade, and gained impressive momentum recently. Instead of trying to prevent every single intrusion, these are allowed, but tolerated: the system triggers mechanisms that prevent the intrusion from generating a system security failure. The paper describes the fundamental concepts behind IT, tracing their connection with classical fault tolerance and security. We discuss the main strategies and mechanisms for architecting IT systems, and report on recent advances on distributed IT system architectures.”

An architecture for an adaptive intrusion-tolerant server 2002 Valdes et al.

“We describe a general architecture for intrusion-tolerant enterprise systems and the implementation of an intrusion-tolerant Web server as a specific instance. The architecture comprises functionally redundant COTS servers running on diverse operating systems and platforms, hardened intrusion-tolerance proxies that mediate client requests and verify the behavior of servers and other proxies, and monitoring and alert management components based on the EMERALD intrusion-detection framework. Integrity and availability are maintained by dynamically adapting the system configuration in response to intrusions or other faults. The dynamic configuration specifies the servers assigned to each client request, the agreement protocol used to validate server replies, and the resources spent on monitoring and detection. Alerts trigger increasingly strict regimes to ensure continued service, with graceful degradation of performance, even if some servers or proxies are compromised or faulty. The system returns to less stringent regimes as threats diminish. Servers and proxies can be isolated, repaired, and reinserted without interrupting service.”

Intrusion tolerant server infrastructure 2001 O’Brien

“Uses a network layer enforcement mechanism to reduce intrusions, prevent propagation of intrusions that occur, provide automated load shifting upon detection, and support automated server recovery.”

Malicious Code Detection for Open Firmware Adelstein et al

“Malicious boot firmware is a largely unrecognized but significant security risk to our global information infrastructure. Since boot firmware executes before the operating system is loaded, it can easily circumvent any operating system-based security mechanism. Boot firmware programs are typically written by third-party device manufacturers and may come from various suppliers of unknown origin. In this paper we describe an approach to this problem based on load-time verification of onboard device drivers against a standard security policy designed to limit access to system resources. We also describe our ongoing effort to construct a prototype of this technique for Open Firmware boot platforms.”

(Nick’s note: Macs use Open Firmware. In another thread, many of us decided to get old hardware to reduce the odds of it being subverted. Tech like this might be useful for such machines.)

Integrity through mediated interfaces by Balzer

Intrusion Tolerance by Unpredictable Adaptation 2002 Pal and Sanders

(Nick’s note: I provide a link to their publication list rather than a paper. They have a ton of stuff that has superseded the original paper.)

Recovery Oriented Computing (ROC):
Motivation, Definition, Techniques, and Case Studies Patterson et al.

(Nick’s note: This is NOT part of OASIS. I included the ROC paper because its techniques can be used for similar purposes if modified a bit.)

Updated presentation on ROC 2012

End of OASIS and ROC list

Uncategorized papers on higher security architectures

Hardware Architectures for Software Security

(Nick’s note: I haven’t read this yet. I saw it when I was nearing publishing this list. However, it had references to several things on my list, so I figured it might have decent ideas in it.)

Secure Virtual Architecture: A Safe Execution Environment for Commodity Operating Systems Criswell et al 2007

“This paper describes an efficient and robust approach to provide a safe execution environment for an entire operating system, such as Linux, and all its applications. The approach, which we call Secure Virtual Architecture (SVA), defines a virtual, low-level, typed instruction set suitable for executing all code on a system, including kernel and application code. SVA code is translated for execution by a virtual machine transparently, offline or online. SVA aims to enforce fine-grained (object level) memory safety, control-flow integrity, type safety for a subset of objects, and sound analysis. A virtual machine implementing SVA achieves these goals by using a novel approach that exploits properties of existing memory pools in the kernel and by preserving the kernel’s explicit control over memory, including custom allocators and explicit deallocation. Furthermore, the safety properties can be encoded compactly as extensions to the SVA type system, allowing the (complex) safety checking compiler to be outside the trusted computing base. SVA also defines a set of OS interface operations that abstract all privileged hardware instructions, allowing the virtual machine to monitor all privileged operations and control the physical resources on a given hardware platform. We have ported the Linux kernel to SVA, treating it as a new architecture, and made only minimal code changes (less than 300 lines of code) to the machine-independent parts of the kernel and device drivers. SVA is able to prevent 4 out of 5 memory safety exploits previously reported for the Linux 2.4.22 kernel for which exploit code is available, and would prevent the fifth one simply by compiling an additional kernel library.”

(Nick’s Note: A good interim solution. Could be even stronger combined with a safe coded OS.)

Memory Safety for Low-Level Software/Hardware Interactions
2009 Criswell et al SVAOS

“We have added these techniques to a compiler-based virtual machine called Secure Virtual Architecture (SVA), to which the standard Linux kernel has been ported previously. Our design changes to SVA required only an additional 100 lines of code to be changed in this kernel. Our experimental results show that our techniques prevent reported memory safety violations due to low-level Linux operations and that these violations are not prevented by SVA without our techniques. Moreover, the new techniques in this paper introduce very little overhead over and above the existing overheads of SVA. Taken together, these results indicate that it is clearly worthwhile to add these techniques to an existing memory safety system.”

(Nick’s note: Follow-up paper I just got hold of on Thursday. It seems they cover “processor state manipulation, stack management, memory-mapped I/O, MMU updates, and self-modifying code” threats to safety, with benefits to control flow integrity too.)

Safe to the Last Instruction: Automated
Verification of a Type-Safe Operating System (Verve) Yang and Hawblitzel

“Typed assembly language (TAL) and Hoare logic can verify the absence of many kinds of errors in low-level code. We use TAL and Hoare logic to achieve highly automated, static verification of the safety of a new operating system called Verve. Our techniques and tools mechanically verify the safety of every assembly language instruction in the operating system, run-time system, drivers, and applications (in fact, every part of the system software except the boot loader). Verve consists of a “Nucleus” that provides primitive access to hardware and memory, a kernel that builds services on top of the Nucleus, and applications that run on top of the kernel. The Nucleus, written in verified assembly language, implements allocation, garbage collection, multiple stacks, interrupt handling, and device access. The kernel, written in C# and compiled to TAL, builds higher-level services, such as preemptive threads, on top of the Nucleus. A TAL checker verifies the safety of the kernel and applications. A Hoare-style verifier with an automated theorem prover verifies both the safety and correctness of the Nucleus. Verve is, to the best of our knowledge, the first operating system mechanically verified to guarantee both type and memory safety. More generally, Verve’s approach demonstrates a practical way to mix high-level typed code with low-level untyped code in a verifiably safe manner.”

(Nick’s note: another advance for formal verification. They verify type safety down to the assembly code for most of it. Work like this would be much easier if the processor itself supported more secure or safe execution; see other papers in this collection for that. Mix and match ideas, people.)

A Java Operating System as the Foundation of a Secure Network Operating System 2002 Golm et al (JX)

“We present the architecture of the JX operating system, which avoids two categories of these errors. First, there are implementation errors, such as buffer overflows, dangling pointers, and memory leaks, caused by the use of unsafe languages. We eliminate these errors by using Java—a type-safe language with automatic memory management—for the implementation of the complete operating system. Second, there are architectural errors caused by complex system architectures, poorly understood interdependencies between system components, and minimal modularization. JX addresses these errors by following well-known principles, such as least-privilege and separation-of-privilege, and by using a minimal security kernel, which, for example, excludes the filesystem.

Java security problems, such as the huge trusted class library and reliance on stack inspection, are avoided. Code of different trustworthiness or code that belongs to different principals is separated into isolated domains. These domains represent independent virtual machines. Sharing of information or resources between domains can be completely controlled by the security kernel.”

(Nick’s note: Its TCB is quite small, and it can leverage existing Java tools/talent. I’ve previously mentioned combining it with a Java processor for a hell of a TCB. Maybe a tool like Jython to make it even more usable/productive on top of that.)

More JX links:

Extensibility, Safety and Performance in the SPIN Operating System Bershad et al

“This paper describes the motivation, architecture and performance of SPIN, an extensible operating system. SPIN provides an extension infrastructure, together with a core set of extensible services, that allow applications to safely change the operating system’s interface and implementation. Extensions allow an application to specialize the underlying operating system in order to achieve a particular level of performance and functionality. SPIN uses language and link-time mechanisms to inexpensively export fine-grained interfaces to operating system services. Extensions are written in a type safe language, and are dynamically linked into the operating system kernel. This approach offers extensions rapid access to system services, while protecting the operating system code executing within the kernel address space.”

(Nick’s note: The OS, casts, and linking mechanism are made type safe through using a variant of Modula 3. Like JX, this is both a strategy and possible code to draw on. It should be combined with other efforts.)

Gemini Trusted Network Processor – Final Evaluation Report (mid-1990’s)

(Nick’s note: A nice piece of history. This is the A1 evaluation of GTNP, built on GEMSOS security kernel. The assurance, design and evaluator comments are particularly interesting. The effort solved plenty of tough problems. GEMSOS is still available through Aesec. I could imagine an updated form of it being combined with the type and memory safe CPU/language efforts. That might be badass.)

Karger’s Retrospective on VAX Security Kernel 1991

(Nick’s note: Just a bonus link, as it was the first high-assurance virtualization solution. They even modified the processor microcode to make it easier to virtualize. Their “assurance” activities are informative.)

The KeyKOS® Nanokernel Architecture 1992 Shapiro et al

“The KeyKOS nanokernel is a capability-based object-oriented operating system that has been in production use since 1983. Its original implementation was motivated by the need to provide security, reliability, and 24-hour availability for applications on the Tymnet® hosts. Requirements included the ability to run multiple instantiations of several operating systems on a single hardware system. KeyKOS was implemented on the System/370, and has since been ported to the 680x0 and 88x00 processor families. Implementations of EDX, RPS, VM, MVS, and UNIX® have been constructed. The nanokernel is approximately 20,000 lines of C code, including capability, checkpoint, and virtual memory support. The nanokernel itself can run in less than 100 Kilobytes of memory.
KeyKOS is characterized by a small set of powerful and highly optimized primitives that allow it to achieve performance competitive with the macrokernel operating systems that it replaces. Objects are exclusively invoked through protected capabilities, supporting high levels of security and intervals between failures in excess of one year. Messages between agents may contain both capabilities and data. Checkpoints at tunable intervals provide system-wide backup, fail-over support, and system restart times typically less than 30 seconds. In addition, a journaling mechanism provides support for high-performance transaction processing. On restart, all processes are restored to their exact state at the time of checkpoint, including registers and virtual memory.”

(Nick’s note: I can’t mention something like GEMSOS without mentioning KeyKOS, the capability-based competition. KeyKOS implemented incredible degrees of both POLA and availability. Like GEMSOS, its design is still more secure than what most modern systems came up with. Both were also in production use for some time. That KeyKOS is worth updating/imitating is obvious in that Shapiro’s EROS and COYOTOS projects worked to do exactly that, with EROS work also giving us nice networking and GUI deliverables.)

More KeyKOS links:

Randomized Instruction Set Emulation to Disrupt Binary Code Injection Attacks by Barrantes et al.

“A unique and private machine instruction set for each executing program would make it difficult for an outsider to design binary attack code against that program and impossible to use the same binary attack code against multiple machines. As a proof of concept, we describe a randomized instruction set emulator (RISE), based on the open-source Valgrind x86-to-x86 binary translator. The prototype disrupts binary code injection attacks against a program without requiring its recompilation, linking, or access to source code. The paper describes the RISE implementation and its limitations, gives evidence demonstrating that RISE defeats common attacks, considers how the dense x86 instruction set affects the method, and discusses potential extensions of the idea.”

(Nick’s note: It would be nice to have a capability like this that could be used to generate the microcode, assembler and debugger for a given machine with a unique instruction set. The instruction set could be the same, but with different labels. If each installation did its own, an adversary could face a different ISA on each target, even though the underlying hardware is the same. That might make attacks extremely difficult. I know it’s doable, because Transmeta Crusoes could do essentially this, although for a different use case.)
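To make the note above concrete, here is a toy sketch of the “same ISA, different labels” idea; the opcode table, seed, and names are all invented for illustration:

```python
import random

# Toy model: each installation draws a secret permutation of opcode
# encodings, so injected binary code built against the standard encoding
# decodes to garbage on that machine.
STANDARD_OPCODES = {"LOAD": 0x01, "STORE": 0x02, "ADD": 0x03, "JMP": 0x04}

def make_private_encoding(seed: int) -> dict:
    """Derive this installation's private opcode assignment from a secret seed."""
    rng = random.Random(seed)
    codes = list(STANDARD_OPCODES.values())
    rng.shuffle(codes)
    return dict(zip(STANDARD_OPCODES, codes))

# The per-machine "assembler" emits private encodings; the matching
# "microcode" table inverts them at execution time.
private = make_private_encoding(seed=0xC0FFEE)
decode = {v: k for k, v in private.items()}

program = [private["LOAD"], private["ADD"], private["STORE"]]
assert [decode[b] for b in program] == ["LOAD", "ADD", "STORE"]
```

A real system would of course permute far more than four opcodes and protect the seed in hardware; the point is only that the hardware stays identical while the visible encoding differs per target.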

Binary Stirring: Self-randomizing Instruction Addresses of
Legacy x86 Binary Code by Wartell et al 2012

“This paper introduces binary stirring, a new technique that imbues x86 native code with the ability to self-randomize its instruction addresses each time it is launched. The input to STIR is only the application binary code without any source code, debug symbols, or relocation information. The output is a new binary whose basic block addresses are dynamically determined at load-time. Therefore, even if an attacker can find code gadgets in one instance of the binary, the instruction addresses in other instances are unpredictable. An array of binary transformation techniques enable STIR to transparently protect large, realistic applications that cannot be perfectly disassembled due to computed jumps, code-data interleaving, OS callbacks, dynamic linking and a variety of other difficult binary features. Evaluation of STIR for both Windows and Linux platforms shows that stirring introduces about 1.6% overhead on average to application runtimes.”

(Nick’s note: Black box application, works on Windows + Linux, and provides protection against major attack vector.)

Efficient Validation of Control Flow Integrity for
Enhancing Computer System Security by Park 2010 (hardware)

“It is hard, if not impossible, for two programs to expose identical control flows during program execution. Therefore, control flow integrity checking can effectively prevent malicious code implants from executing. This thesis proposes three new protection schemes based on control flow integrity checking, each with its own assumptions of hardware and/or application scenarios. The first study, IBMON (Indirect Branch MONitor), utilizes existing hardware features to efficiently observe unusual control flow transfers and check them for any abnormality. Prototype systems for proof of concept have been successfully implemented on three different system platforms to demonstrate its efficacy. By using the hardware features, IBMON can effectively protect a system from malicious control flow modification transparently to the target applications. We have successfully built prototype systems on real machines using several processors. Although the prototype system exhibits the best performance among other control flow validation mechanisms, it still incurs moderate performance overhead. We further propose IBF-Cache, an enhanced IBMON system with special hardware support, to minimize the performance overhead associated with IBMON. Although it requires an extension of existing processors, the cost is negligible and the run-time of IBMON is reduced to virtually zero.”

(Nick’s note: If it passes peer review, I’m going to love this one too. Control flow integrity has already established itself as the No. 1 concern. IBMON is a very simple solution that leaves plenty of chip space and CPU time for solving other problems.)
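The core of the IBMON idea, watching indirect branches and flagging transfers to unexpected targets, can be caricatured in a few lines. This is only a toy model of the concept; the addresses and the branch trace are invented:

```python
# Toy model of indirect-branch monitoring: record the program's legitimate
# indirect-branch targets ahead of time (e.g. function entry points), then
# flag any observed transfer that lands anywhere else.
valid_targets = {0x401000, 0x401080, 0x402200}  # invented entry-point addresses

def check_branch(target: int) -> bool:
    """Return True if an indirect branch lands on a known-good target."""
    return target in valid_targets

# Invented trace: two legitimate calls, then a jump into injected code.
trace = [0x401000, 0x402200, 0x7FFE1234]
alerts = [t for t in trace if not check_branch(t)]
assert alerts == [0x7FFE1234]
```

The real work in IBMON is doing this check with existing hardware branch-tracing features so the monitored program never sees the monitor; the policy itself stays this simple.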

CryptoPage: an Efficient Secure Architecture with Memory Encryption, Integrity and Information Leakage Protection by Duc and Keryell

“We propose the CRYPTOPAGE architecture, which implements memory encryption, memory integrity protection checking and information leakage protection together with a low performance penalty (3% slowdown on average) by combining the counter mode of operation, local authentication values and Merkle trees.”

(Nick’s note: I think it’s designed to trust nothing but the main CPU chip. Btw, ACSAC conference always has good papers. Best to keep up with them and go through the archives. Might find a pleasant surprise.)
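For readers unfamiliar with the Merkle tree part, here is a minimal sketch (my own illustration, not CryptoPage’s actual layout) of how a single trusted root hash detects tampering with off-chip memory blocks:

```python
import hashlib

# Only the root must live in trusted on-chip storage; any modification of
# an off-chip block changes the recomputed root. Block contents invented.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list) -> bytes:
    """Hash each block, then pairwise-hash levels up to a single root."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

memory = [b"block0", b"block1", b"block2", b"block3"]
trusted_root = merkle_root(memory)          # kept on-chip

memory[2] = b"tampered"                     # off-chip modification
assert merkle_root(memory) != trusted_root  # detected via root mismatch
```

The engineering in papers like CryptoPage is making this check cheap, caching tree nodes and combining it with counter-mode encryption, rather than recomputing the whole tree as this sketch does.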

AEGIS: Architecture for Tamper-Evident and
Tamper-Resistant Processing by Suh et al

“Our architecture assumes that all components external to the processor, such as memory, are untrusted. We show two different implementations. In the first case, the core functionality of the operating system is trusted and implemented in a security kernel. We also describe a variant implementation assuming an untrusted operating system. AEGIS provides users with tamper-evident, authenticated environments in which any physical or software tampering by an adversary is guaranteed to be detected, and private and authenticated tamper-resistant environments where additionally the adversary is unable to obtain any information about software or data by tampering with, or otherwise observing, system operation. AEGIS enables many applications, such as commercial grid computing, secure mobile agents, software licensing, and digital rights management. Preliminary simulation results indicate that the overhead of security mechanisms in AEGIS is reasonable.”

(Nick’s note: AEGIS is one of the original, total-package offerings in this category. Many others built on its features. It might be a burden on programmers compared to some schemes, though. I can’t be sure. I included it in case it’s useful and b/c they deserve credit for their contribution.)

SecureME: A Hardware-Software Approach to
Full System Security∗ by Chhabra et al

“We propose SecureME, a hardware-software mechanism that provides such a secure computing environment. SecureME protects an application from hardware attacks by using a secure processor substrate, and also from the Operating System (OS) through memory cloaking, permission paging, and system call protection. Memory cloaking hides data from the OS but allows the OS to perform regular virtual memory management functions, such as page initialization, copying, and swapping. Permission paging extends the OS paging mechanism to provide a secure way for two applications to establish shared pages for inter-process communication. Finally, system call protection applies spatio-temporal protection for arguments that are passed between the application and the OS. Based on our performance evaluation using microbenchmarks, single-program workloads, and multi-programmed workloads, we found that SecureME only adds a small execution time overhead compared to a fully unprotected system. Roughly half of the overheads are contributed by the secure processor substrate. SecureME also incurs a negligible additional storage overhead over the secure processor substrate.”

(Note: An improvement in the category, and it has nice references to many other projects/tech. Unfortunately, you need ACM membership to get it. University libraries often have it.)

Enforcing Executing-Implies-Verified with the
Integrity-Aware Processor by LeMay and Gunter 2011

“IAP is a processor technology that is specifically designed to efficiently support a variety of integrity kernels. It provides high performance, hardware-enforced isolation, high compatibility with target systems and flexible invocation options to ensure visibility into the target system. We demonstrated the utility of IAP by developing XIVE, a code integrity enforcement service with a client component that fits entirely within IAP’s protected space, containing 859 instructions. XIVE verifies all the code that ever executes on the target system against a network-hosted whitelist, even in the presence of DMA-capable attackers.”

Kernel and Application Integrity Assurance: Ensuring Freedom from Rootkits
and Malware in a Computer System by Wang and Dasgupta 2007

“This paper has presented our hardware-software co-design for software integrity assurance. Attesting software integrity is a difficult but critical task to build a secure computing environment. Running on separate hardware, the SecCore and its verification functionality remains in place even when the host kernel is thoroughly compromised. The SecCore is time-driven, independent, and self-sufficient, and it activates other software checkers on a regular basis to build up a hierarchical trusted chain. The SecIO is exploited to interact with the owner for an authentic message. This out-of-band I/O device along with the SecCore provide a flexible and robust security solution on software integrity assurance. We also demonstrate a prototype implementation of our design approach. The security hardware is simulated by PCI SBC device, and the software on SecCore and host computer are a patched Linux kernel. We added functionalities to support PCI interconnection, and we also developed routines to build an initial software integrity and compare it with the running software in memory. Our prototype demonstrates both feasibility and efficiency with the hierarchical checking.”

(Note: It has at least one serious limitation, so it’s a work in progress. However, I often promote both PCI coprocessor and trusted path via dedicated hardware approaches on Bruce’s blog. Interesting to see that they use both to good effect. At least people are thinking in a good direction. Unfortunately it’s IEEE, so you need a membership to get it.)

CODESSEAL – compiler FPGA approach to secure applications by Gelbart et al post-2004

“This paper proposes a joint compiler/hardware infrastructure for software protection for fully encrypted execution in which both program and data are in encrypted form in memory. The processor is supplemented with an FPGA-based secure hardware component that is capable of fast encryption and decryption, and performs code integrity verification, authentication, and provides protection of the execution control flow.”

(Nick’s note: One aspect of this design maintains control flow integrity by analysing the code and creating a control flow graph of it to spot valid jump targets. It then enforces them. I like the simplicity of the approach although other links I’ve posted might make it obsolete.)

(Nick’s edit: I couldn’t find a link to this exact paper probably because it’s been turned into a product. Microsemi sells CodeSEAL for use with many OS’s and RTOS’s. Google them if interested.)
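The valid-jump-target idea is simple enough to sketch. This toy Python fragment (my illustration, not CODESSEAL's actual mechanism; the site and function names are made up, and real enforcement happens on machine code) checks every indirect transfer against targets derived from the control flow graph:

```python
# A static pass over the control flow graph yields, per indirect-branch
# site, the set of legal targets; the check below runs before every
# indirect transfer.  All site and function names here are hypothetical.

ALLOWED_TARGETS = {
    "site_A": {"func_open", "func_close"},
    "site_B": {"func_read"},
}

class CFIViolation(Exception):
    pass

def indirect_branch(site: str, target: str) -> str:
    """Allow the transfer only if the CFG marked it as a valid edge."""
    if target not in ALLOWED_TARGETS.get(site, set()):
        raise CFIViolation(f"{site} -> {target}: not a CFG edge")
    return target
```

An attacker who corrupts a function pointer can then only redirect execution to targets the analysis already approved for that site.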

Architectural support for Securing Application Data
in Embedded Systems
by Gelbart et al

“Encrypted execution and data (EED) platforms, where instructions and data are stored in encrypted form in memory, while incurring overheads of encryption have proven to be attractive because they offer strong security against information leakage and tampering. However, several attacks are still possible on EED systems when the adversary gains physical access to the system. In this paper we present an architectural approach to address a class of memory spoofing attacks, in which an attacker can control the address bus and spoof memory blocks as they are loaded into the processor. In this paper we focus on the integrity of the application data to prevent the attacker from tampering, injecting or replaying the data. We make use of an on-chip FPGA, an architecture that is now commonly available on many processor chips, to build a secure on-chip hardware component that verifies the integrity of application data at run-time. By implementing all our security primitives on the FPGA we do not require changes to the processor architecture. We present the data protection techniques, and a performance analysis is provided through simulation on a number of benchmarks. Our experimental results show that a high level of security can be achieved with low performance overhead.”

UNSAFE LANGUAGES by Dhurjati 2006

“This thesis presents a new compiler and a run-time system called SAFECode (Static Analysis For safe Execution of Code) that addresses these two problems. First, SAFECode guarantees memory safety for programs in unsafe languages with very low overhead. Second, SAFECode provides a platform for reliable static analyses by ensuring that an aggressive interprocedural pointer analysis, type information, and call graph are never invalidated at run-time due to memory errors. Finally, SAFECode can detect some of the hard-to-detect memory errors like dangling pointer errors with low overhead for some class of applications and can be used not only during development but also during deployment. SAFECode requires no source code changes, allows memory to be managed explicitly and does not use metadata on pointers or individual tag bits for memory (avoiding any external library compatibility issues).
This thesis describes the main ideas, insights, and the approach that the SAFECode system uses to achieve the goal of providing safety guarantees to software written in unsafe languages. This thesis also evaluates the SAFECode approach on several benchmarks and server applications and shows that the performance overhead is lower than any of the other existing approaches.”

(Nick’s note: Decent work. Provides a comparison to methods at the time including Ccured, SFI, and Cyclone.)

Ironclad C++: A Library-Augmented Type-Safe Subset of C++ by DeLozier et al 2013

“Ironclad C++ is, in essence, a library-augmented type-safe subset of C++. All Ironclad C++ programs are valid C++ programs, and thus Ironclad C++ programs can be compiled using standard, off-the-shelf C++ compilers…. brings type safety to C++ at a runtime overhead of 12%.”

(Nick’s note: making existing C and C++ code safe isn’t just about protecting native apps. Remember that there’s a ton of useful code out there that just happened to be coded in a very unsafe language. Having a way to safely solve a problem with C/C++ code is nice. Ironclad is particularly nice due to its low overhead.)

Securing Untrusted Code via Compiler-Agnostic
Binary Rewriting by Wartell et al 2012

“Binary code from untrusted sources remains one of the primary vehicles for malicious software attacks. This paper presents REINS, a new, more general, and lighter-weight binary rewriting and in-lining system to tame and secure untrusted binary programs. Unlike traditional monitors, REINS requires no cooperation from code-producers in the form of source code or debugging symbols, requires no client-side support infrastructure (e.g., a virtual machine or hypervisor), and preserves the behavior of even complex, event-driven, x86 native COTS binaries generated by aggressively optimizing compilers. This makes it exceptionally easy to deploy. The safety of programs rewritten by REINS is independently machine-verifiable, allowing rewriting to be deployed as an untrusted third-party service.
An implementation of REINS for Microsoft Windows demonstrates that it is effective and practical for a real-world OS and architecture, introducing only about 2.4% runtime overhead to rewritten binaries.”

(Nick’s note: works on Windows binaries with no help from the vendor or debugger. That’s what I’m talking about! Sounds like a good way to deal with legacy windows apps in corporate settings. In conjunction with whitelisting of course…)

Complete Translation of Unsafe Native Code to Safe Bytecode by Alliet and Megacz 2004

(Nick’s note: the abstract wouldn’t cut n paste and I didn’t feel like typing it. So, summary: they compile native libraries to MIPS, then translate that into Java bytecodes. Clever idea. Might be useful for deploying existing libraries on JX, high level ISA’s, etc. There are more modern approaches for native to Java, while this can be targeted to many open runtimes and toolsets. This has also been done to run native code in Javascript, I think it’s called asm.js.)

Control-Flow Integrity
Principles, Implementations, and Applications by Abadi et al 2007

“The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.”

(Nick’s note: Great stuff.)
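The protected shadow call stack they mention is easy to picture. Here's a toy Python sketch of the concept (my illustration, not the paper's implementation; a real shadow stack lives in memory the protected program cannot write):

```python
# On every call, the return address is also pushed onto a protected
# "shadow" stack.  On return, the address taken from the normal
# (attacker-corruptible) stack must match the shadow copy.

class ReturnHijack(Exception):
    pass

class ShadowStack:
    def __init__(self):
        self._shadow = []                      # the protected copy

    def on_call(self, return_addr: int):
        self._shadow.append(return_addr)

    def on_return(self, return_addr: int) -> int:
        expected = self._shadow.pop()
        if return_addr != expected:
            raise ReturnHijack(
                f"return to {return_addr:#x}, expected {expected:#x}")
        return return_addr
```

A stack-smashing attack that overwrites the saved return address on the normal stack then trips the mismatch instead of redirecting control.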

Practical Control Flow Integrity & Randomization for Binary Executables by Zhang et al 2013

“CFI has not seen wide industrial adoption. CFI suffers from a perception of complexity and inefficiency: reported overheads (average/max) have been as high as 7.7%/26.8% [13] and 15%/46% [9]. Many CFI systems require debug information that is not available in COTS applications, and cannot be deployed incrementally because hardened modules cannot inter-operate with un-hardened modules. We propose a new practical and realistic protection method called CCFIR (Compact Control Flow Integrity and Randomization), which addresses the main barriers to CFI adoption. CCFIR collects all legal targets of indirect control-transfer instructions, puts them into a dedicated “Springboard section” in a random order, and then limits indirect transfers to flow only to them. Using the Springboard section for targets, CCFIR can validate a target more simply and faster than traditional CFI, and provide support for on-site target-randomization as well as better compatibility. Based on these approaches, CCFIR can stop control-flow hijacking attacks including ROP and return-into-libc. Results show that ROP gadgets are all eliminated. We observe that with the wide deployment of ASLR, Windows/x86 PE executables contain enough information in relocation tables which CCFIR can use to find all legal instructions and jump targets reliably, without source code or symbol information. We evaluate our prototype implementation on common web browsers and the SPEC CPU2000 suite: CCFIR protects large applications such as GCC and Firefox completely automatically, and has low performance overhead of about 3.6%/8.6% (average/max) using SPECint2000. Experiments on real-world exploits also show that CCFIR-hardened versions of IE6, Firefox 3.6 and other applications are protected effectively.”

(Nick’s note: More awesome work via CFI-based techniques. They’re on a roll, yeah?)

Language-Independent Sandboxing of
Just-In-Time Compilation and Self-Modifying Code by Ansel et al 2011

“This paper introduces general mechanisms for safely and efficiently sandboxing software, such as dynamic language runtimes, that make use of advanced, low-level techniques like runtime code modification. Our language-independent sandboxing builds on Software-based Fault Isolation (SFI), a traditionally static technique. We provide a more flexible form of SFI by adding new constraints and mechanisms that allow safety to be guaranteed despite runtime code modifications.
We have added our extensions to both the x86-32 and x86-64 variants of a production-quality, SFI-based sandboxing platform; on those two architectures SFI mechanisms face different challenges. We have also ported two representative language platforms to our extended sandbox: the Mono common language runtime and the V8 JavaScript engine. In detailed evaluations, we find that sandboxing slowdown varies between different benchmarks, languages, and hardware platforms. Overheads are generally moderate and they are close to zero for some important benchmark/platform combinations.”

(Nick’s note: Native Client was one of the best things to happen in the security field in a while. This work expands it to handle dynamic and JIT-compiled languages, and demonstrates that on Mono and the V8 JavaScript engine. If that doesn’t prove practical usefulness, I don’t know what does.)
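The core SFI trick is address masking. A toy sketch of it (my illustration; the constants are hypothetical, and real SFI inlines the mask into the rewritten machine code rather than calling a function):

```python
# Software-based Fault Isolation, reduced to its essence: every sandboxed
# memory access is forced into a power-of-two region by masking the
# address, so even a fully attacker-controlled pointer cannot reach
# memory outside the sandbox.  Constants below are hypothetical.

SANDBOX_BASE = 0x2000_0000      # region start (aligned to its size)
SANDBOX_SIZE = 0x0100_0000      # 16 MiB, a power of two

def confine(addr: int) -> int:
    """Mask an address into [SANDBOX_BASE, SANDBOX_BASE + SANDBOX_SIZE)."""
    return SANDBOX_BASE | (addr & (SANDBOX_SIZE - 1))
```

Because the mask is applied unconditionally, there is nothing to bypass: a corrupted pointer simply wraps to a harmless in-sandbox address. The paper's contribution is keeping that guarantee intact while the sandboxed code rewrites itself at runtime.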

Retaining Sandbox Containment Despite Bugs in
Privileged Memory-Safe Code by Cappos et al 2010

“In this work, we construct a Python-based sandbox that has a small, security-isolated kernel. Using a mechanism called a security layer, we migrate privileged functionality into memory-safe code on top of the sandbox kernel while retaining isolation. For example, significant portions of module import, file I/O, serialization, and network communication routines can be provided in security layers. By moving these routines out of the kernel, we prevent attackers from leveraging bugs in these routines to evade sandbox containment. We demonstrate the effectiveness of our approach by studying past bugs in Java’s standard libraries and show that most of these bugs would likely be contained in our sandbox.”

(Nick’s note: We know that language sandboxes can solve a ton of problems. This work protects the unsafe aspects of those sandboxes. It’s nice that they built it on Python: a recent Coverity report showed Python code having the lowest defect level of all the code they scanned, and you know its language decisions had a big role in that. And it’s fun, popular and easy to learn.)
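The security-layer idea (policy enforced in memory-safe code that wraps a privileged routine) can be sketched like this toy Python fragment of mine, not the paper's actual API; the sandbox root is hypothetical:

```python
# A "security layer" for file I/O: the memory-safe wrapper resolves the
# requested path and refuses anything that would escape the sandbox,
# before delegating to the privileged open() primitive.

import posixpath

SANDBOX_ROOT = "/tmp/sandbox"   # hypothetical confinement directory

class SandboxEscape(Exception):
    pass

def resolve(path: str) -> str:
    """Resolve a sandbox-relative path; reject absolute paths and
    '..' tricks that land outside the sandbox root."""
    resolved = posixpath.normpath(posixpath.join(SANDBOX_ROOT, path))
    if not resolved.startswith(SANDBOX_ROOT + "/"):
        raise SandboxEscape(path)
    return resolved

def safe_open(path: str, mode: str = "r"):
    """The privileged routine, reachable only through the policy check."""
    return open(resolve(path), mode)
```

The point of the paper is that a bug in this wrapper code is still contained by the sandbox kernel underneath it, instead of being a full escape.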

Leveraging Legacy Code to Deploy Desktop Applications on the Web by Douceur et al 2007 (Xax architecture)

“Xax is a browser plugin model that enables developers to leverage existing tools, libraries, and entire programs to deliver feature-rich applications on the web. Xax employs a novel combination of mechanisms that collectively provide security, OS-independence, performance, and support for legacy code. These mechanisms include memory-isolated native code execution behind a narrow syscall interface, an abstraction layer that provides a consistent binary interface across operating systems, system services via hooks to existing browser mechanisms, and lightweight modifications to existing tool chains and code bases. We demonstrate a variety of applications and libraries from existing code bases, in several languages, produced with various tool chains, running in multiple browsers on multiple operating systems. With roughly two person-weeks of effort, we ported 3.3 million lines of code to Xax, including a PDF viewer, a Python interpreter, a speech synthesizer, and an OpenGL pipeline.”

(Nick’s note: This is a great piece of work with plenty of potential beyond browser plugins. Their approach to dealing with system calls is awesome. The numbers alone show it. Love or hate Microsoft, you gotta admit there are some great minds at Microsoft Research.)

InkTag: Secure Applications on an Untrusted Operating System by Hofmann et al 2013

“InkTag is a virtualization-based architecture that gives strong safety guarantees to high-assurance processes even in the presence of a malicious operating system. InkTag advances the state of the art in untrusted operating systems in both the design of its hypervisor and in the ability to run useful applications without trusting the operating system. We introduce paraverification, a technique that simplifies the InkTag hypervisor by forcing the untrusted operating system to participate in its own verification. Attribute-based access control allows trusted applications to create decentralized access control policies. InkTag is also the first system of its kind to ensure consistency between secure data and metadata, ensuring recoverability in the face of system crashes.”

(Nick’s note: It does a whole lot. I read the paper recently and I’m still trying to let it all sink in. It should be evaluated as an interim solution to use until ground-up secure solutions are ready.)

A Nitpicker’s guide to a minimal-complexity secure GUI Feske and Helmuth

“We present the design and implementation of Nitpicker—an extremely minimized secure graphical user interface that addresses these problems while retaining compatibility to legacy operating systems. We describe our approach of kernelizing the window server and present the deployed security mechanisms and protocols. Our implementation comprises only 1,500 lines of code while supporting commodity software such as X11 applications alongside protected graphical security applications. We discuss key techniques such as client-side window handling, a new floating-labels mechanism, drag-and-drop, and denial-of-service-preventing resource management. Furthermore, we present an application scenario to evaluate the feasibility, performance, and usability of our approach.”

(Nick’s note: secure GUI done right. It’s minimal, usable, and securable. Enough said.)

Architectures for MLS Database Management Systems Notargiacomo

“This essay presents a survey of the basic architectures that have been used in the research and development of trusted relational database management systems (DBMSs). While various approaches have been tried for special-purpose systems, the architectures presented here are those developed for general-purpose trusted DBMS products. In addition, this essay presents research approaches proposed for new trusted DBMS architectures, although worked examples of these approaches may not exist in all cases.”

(Nick’s note: This essay summarizes different approaches for MLS databases. Certain design aspects, or outright designs, might be useful for someone looking to make high assurance databases.)

jVPFS: Adding Robustness to a Secure Stacked File System
with Untrusted Local Storage Components 2011 Weinhold and Hartig

“The Virtual Private File System (VPFS) [1] was built to protect confidentiality and integrity of application data against strong attacks. To minimize the trusted computing base (i.e., the attack surface) it was built as a stacked file system, where a small isolated component in a microkernel-based system reuses a potentially large and complex untrusted file system; for example, as provided by a more vulnerable guest OS in a separate virtual machine. However, its design ignores robustness issues that come with sudden power loss or crashes of the untrusted file system.
This paper addresses these issues. To minimize damage caused by an unclean shutdown, jVPFS carefully splits a journaling mechanism between a trusted core and the untrusted file system. The journaling approach minimizes the number of writes needed to maintain consistent information in a Merkle hash tree, which is stored in the untrusted file system to detect attacks on integrity. The commonly very complex and error-prone recovery functionality of legacy file systems (in the order of thousands of lines of code) can be reused with little increase of complexity in the trusted core: less than 350 lines of code deal with the security-critical aspects of crash recovery. jVPFS shows acceptable performance better than its predecessor VPFS, while providing much better protection against data loss.”

Iris: A Scalable Cloud File System with
Efficient Integrity Checks by Stefanov et al 2012

“We present Iris, a practical, authenticated file system designed to support workloads from large enterprises storing data in the cloud and be resilient against potentially untrustworthy service providers. As a transparent layer enforcing strong integrity guarantees, Iris lets an enterprise tenant maintain a large file system in the cloud. In Iris, tenants obtain strong assurance not just on data integrity, but also on data freshness, as well as data retrievability in case of accidental or adversarial cloud failures.
Iris offers an architecture scalable to many clients (on the order of hundreds or even thousands) issuing operations on the file system in parallel. Iris includes new optimization and enterprise-side caching techniques specifically designed to overcome the high network latency typically experienced when accessing cloud storage. Iris also includes novel erasure coding techniques for the first efficient construction of a dynamic Proofs of Retrievability (PoR) protocol over the entire file system.
We describe our architecture and experimental results on a prototype version of Iris. Iris achieves end-to-end throughput of up to 260MB per second for 100 clients issuing simultaneous requests on the file system. (This limit is dictated by the available network bandwidth and maximum hard drive throughput.) We demonstrate that strong integrity protection in the cloud can be achieved with minimal performance degradation.”

Poly2 Paradigm: A Secure Network Service Architecture Bryant et al (CERIAS)

“The Poly2 approach is to separate network services onto different systems, to use application-specific (minimized) operating systems, and to isolate specific types of network traffic. Trust in the entire architecture comes from the separation of untrusted systems and services. The separation of network services helps contain successful attacks against individual systems and services. Therefore no single compromised system can bring down the entire architecture. The minimized operating systems only provide the services required by a specific network service. Removal of all other services reduces the functionality of the system to a bare minimum. Specific types of network traffic such as administrative, security-specific, and application-specific traffic are isolated onto special sub-networks. Because the nature of the traffic on each sub-network is specific and known in advance, deviations in normal traffic patterns are more easily detected.”

(Nick’s note: This is more akin to best practices. They noticed what I noticed about physical separation being much cheaper than in the past. They use this to their advantage. )

CodeShield: Towards Personalized Application
Whitelisting by Gates et al. 2012

“In this paper we propose the concept of Personalized Application Whitelisting (PAW) to block all unsolicited foreign code from executing on a system. We introduce CodeShield, an approach to implement PAW on Windows hosts. CodeShield uses a simple and novel security model, and a new user interaction approach for obtaining security-critical decisions from users. We have implemented CodeShield, demonstrated its security effectiveness, and conducted a user study, having 38 participants run CodeShield on their laptops for 6 weeks. Results from the data demonstrate the usability and promises of our design.”

(Nick’s note: really usable whitelisting might have stopped many of the so-called APT attacks that just trick people into executing programs.)
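The core of hash-based whitelisting is a small amount of logic. Here is a toy sketch (my illustration, not CodeShield's design, which also covers the user-interaction model and kernel-level enforcement):

```python
# Toy application whitelist: only binaries whose SHA-256 digest was
# explicitly approved are allowed to execute.  Everything unsolicited
# is denied by default.

import hashlib

class ExecutionDenied(Exception):
    pass

class Whitelist:
    def __init__(self):
        self._approved = set()

    def approve(self, binary: bytes):
        """Record the digest of a binary the user has vouched for."""
        self._approved.add(hashlib.sha256(binary).hexdigest())

    def check(self, binary: bytes) -> bool:
        """Gate executed before loading any binary."""
        if hashlib.sha256(binary).hexdigest() not in self._approved:
            raise ExecutionDenied("unapproved code")
        return True
```

The default-deny stance is what stops the trick-the-user attacks: a freshly downloaded payload has no approved digest, so it simply never runs.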

SHIELDSTRAP: Making Secure Processors Truly Secure 2009 Chhabra et al.

“Many of these attacks target a system during booting before any employed security measures can take effect. In this paper, we propose SHIELDSTRAP, a security architecture capable of booting a system securely in the face of hardware and software attacks targeting the boot phase. SHIELDSTRAP bridges the gap between the vulnerable initialization of the system and the secure steady state execution environment provided by the secure processor. We present an analysis of the security of SHIELDSTRAP against several common boot time attacks. We also show that SHIELDSTRAP requires an on-chip area overhead of only 0.012% and incurs negligible boot time overhead of 0.37 seconds.”

(Nick’s note: ACM again…)

Project Oberon The Design of an Operating System and Compiler by Wirth and Gutknecht

“This book presents the results of Project Oberon, namely an entire software environment for a modern workstation. The project was undertaken by the authors in the years 1986-89, and its primary goal was to design and implement an entire system from scratch, and to structure it in such a way that it can be described, explained, and understood as a whole. In order to become confronted with all aspects, problems, design decisions and details, the authors not only conceived but also programmed the entire system described in this book, and more. ”

(Nick’s note: It’s a great piece of work to look at. It lives on in modernized form in the A2 Bluebottle OS, which is downloadable off the net, source and all. The book/paper outlines the details of the design of their OS, compiler, etc. in a way that could be quite easily reproduced. I’m including it mainly as a starting point for people trying to figure out how to design an OS that is easy to analyze for subversion and that can run on minimal hardware. Their OS already accomplishes both: the former just because it’s easy enough to comprehend, so you can see that everything is there for a good reason.)

A list of capability-based computer systems and their details:

Knockoffs of those will be easier to secure than knockoffs of vanilla hardware. Flex was a similar type of machine, which Ten15 was to run on. I’m not sure I’ve seen anything like Ten15 even today, but it’s worth remembering for any wisdom to be gleaned from it.


HISC – Object-oriented instruction set processor w/ built-in GC

Java processors, VM’s, realtime, etc. plenty good material and even working chip

List of Forth and stack chips for people into that sort of thing

LISP machines, since almost anything can be done with LISP. Its code-as-data property can be restricted.

green382746282 December 13, 2013 8:58 PM

@RR Your “stolen-concept” critique presupposes certainty as the only basis for discourse. You are discovering your “certainty”, not unimportant’s.

Figureitout December 14, 2013 1:18 AM

Nick P
–Just finished reading your essay, goddamn! That’s a lot… Thanks though; you need to do one on FABs and the chip-fabrication process. Agree the endpoint security is more important b/c crypto doesn’t matter on a keylogged device; and the memory/execution encryption paper even stated physical access renders the security mostly worthless. Also, I don’t believe those papers hiding behind ACM and IEEE paywalls will remain so for very long. Poor people who can’t afford membership fees can and should have secure computing too, or at least access to the information to implement it. For those keeping score at home, we need to find three papers:
–SecureME: a hardware-software approach to full system security
–Kernel and Application Integrity Assurance: Ensuring Freedom from Rootkits and Malware in a Computer System
–SHIELDSTRAP: making secure processors truly secure

Jenny Juno && Buck
–Typical agent behavior, taking advantage of weak and vulnerable people. Once people experience it for themselves, they will see why I get so mad about it. Also breaking laws to catch criminals lol, why not just arrest themselves. Most agents do not know how to catch the real criminals b/c they broadcast the fact that they’re agents. Drugs were so prevalent when I was in high school, deals used to go down in class. And I don’t understand why a sawed-off shotgun is illegal when it’s a simple modification to a gun. Anyway I know a career Air Force man that told me about his legit sawed-off shotgun, and he used to do computer security for nuclear labs.

uair01 December 14, 2013 3:26 AM

This is a nice study analyzing abuse of threat inflation to spread technological fear. It is not very recent, but quite relevant:

Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle

Adam D. Thierer – George Mason University – Mercatus Center

Fear is an extremely powerful motivating force, especially in public policy debates where it is used in an attempt to sway opinion or bolster the case for action. Often, this action involves preemptive regulation based on false assumptions and evidence.

This paper considers the structure of fear appeal arguments in technology policy debates and then outlines how those arguments can be deconstructed and refuted in both cultural and economic contexts. Several examples of fear appeal arguments are offered with a particular focus on online child safety, digital privacy, and cybersecurity. The various factors contributing to ‘fear cycles’ in these policy areas are documented.

Link here

Aspie December 14, 2013 4:38 AM

According to the Guardian, the man behind the curtain is going to adjust some of the mirrors, but the puppets and the manuscript are going to stay the same.

When a dirty wound is freshly re-dressed it usually doesn’t help the patient though it can look good on the outside.

Mike the goat December 14, 2013 8:32 AM

Nick P: as you well know, I have been an advocate of the “clean slate” – in order to build a fully trustworthy machine you must control the construction of the hardware from fab onwards, all the way to secure delivery to the consumer. This is not an easy task, and the likely reason we don’t hear as much about subverted hardware is that the software most people run (Windows, ahem) is so perforated it could be used as a sieve. That said, I think TEMPEST emanations have without a doubt been ignored in the COTS marketplace, and you can see certain design decisions – e.g. the use of LVDS in laptops – that are ostensibly driven by cost but without a doubt make emanations far more detectable and viewable from a much longer distance. I saw a demo of a Van Eck snoop on a commodity laptop three rooms away through six layers of drywall, and the display rendered by the unit was clearly and unambiguously readable (albeit in black and white).

Technologies like vPro’s AMT and IPMI are worrying, but I suggest that the intelligence agencies are putting a lot of effort into cellular devices. They are ubiquitous, always online, and people do a lot of their internet activity through them, including social media, banking and email. While Android can call itself open source as much as it likes, there is the shady baseband, which can do who knows what (well, we know for a fact that it can send your GPS-derived network location as per E911 if it receives an interrogation ping, and this cannot be disabled. We also know that the baseband can remotely answer a call without ringing and can turn on the mike, effectively turning a cellphone into a bug). The Android layer itself is pretty terrible too, with most “security” features mere toys.

In the meantime, old hardware is our only defense against shady hardware. I would love to see us pool some resources and get some kind of fab going. We needn’t aim for high clock speeds or dramatic performance – and I am sure that a design could be licensed from the Chinese. They’ll sell anything at the right price.

My 2c.

Aspie December 14, 2013 8:35 AM

Nobody will notice the Sunday delivery

Sneaky indeed but what else do they know? Still, since it’ll probably add up to “The NSA will be curtailed in that it can no longer monitor 6th degree connections without a court-order
… or unless they really want to.” it probably won’t be much of an announcement.

Expect more details to be delivered either 2013-12-25 or 2014-02-29. I believe the latter date will remain free for bookings and Satan is safety-checking his snow plough. 😉

Figureitout December 14, 2013 10:53 AM

Mike the goat
//From your blog: As an aside I am planning somewhat of an exposé (including some disassembly) of some of the cellular baseband firmwares as I believe the baseband is likely the biggest threat we have to our privacy.
–Any time frame on this? I experienced an attack and I have zero clue how it works…

Nick P December 14, 2013 12:08 PM

@ Figureitout, Mike re my paper release

This paper release is specifically for removing the low-hanging fruit: software issues. That it’s nearly impossible to put programs on a machine without vulnerabilities undermines every use case before hardware-level issues are even considered. So, I’ve always advocated that we try to solve the software problem first, then solve the next one. If they can be solved simultaneously, even better. I’m focusing on the major problem first, though.

re paywall

An unfortunate reality of modern academia. Thing is, the IEEE and ACM memberships are worth the annual price if one wants all the cutting-edge and/or arcane tech hidden in there. There’s a bunch I’ve referenced here plenty of times, some of which solved big problems while still being relatively unknown. I actually have the three papers on my HD, but there are certain serious copyright laws in my country, along with 250,000+ potential witnesses on this blog. 😉

Just check your local and college library to see if they have free access to ACM/IEEE. Many do. Then, just mass download everything onto flash drives. Good keywords include “secure,” “high assurance,” “NUMA,” “networking,” “MLS,” “protocols,” “software engineering,” and “TCB.” Of the three I listed, SecureME improves on SecureCore and Aegis. It’s the most useful. (Btw, Google SecureCore or the SP architecture, as I should’ve included them.) SHIELDSTRAP does just one thing, but its chip surface is tiny, so maybe another tool in the toolbox. The other I just threw in b/c it fit the theme.

@ figureitout re chip fabrication

Our resident expert on this (RobertT) and I just had a debate on various aspects of it. We got the discussion down to these critical points:

  1. Even a free fab costs millions to operate and customers are hard to keep. It’s why there are many for sale right now: even experienced chip companies can’t afford to keep them open.
  2. Even a fab using old technology subject to optical inspections can be subverted by insiders using cutting edge nanoscale chip technology. 20nm attack structures might be used against a micron process chip. You can’t “uninvent the chip advances” once they happen.
  3. Chips, even simple ones, actually get cheaper as the fab gets more expensive so there’s no economic reason to invest in lower grade fab technology.
  4. Biggest point was that 20nm process node tech is so exotic that experts can barely get chips to function at all, much less build them in insidious ways. This extra difficulty might make it the safest technology.

So, I’ve decided the fab issue can’t be managed economically for the mean time. The best way to do it is a vetted hardware design, vetted macros/cells, vetted translation, and a trustworthy distribution method to the fab. From that point on, it’s all a matter of faith anyway, so why waste money on inferior process node tech? The 20nm process node seems to be a good default choice.

A newcomer to the blog also gave me an interesting idea. That person posted a link to new tech where a company had developed a processor that could be fabbed by a memory tech fab. Processors usually aren’t done at those fabs for various technical reasons. The poster was interested in it because memory chip tech costs many fold less (100x?) than CPU fab tech and the innovating company already had a core ready to go.

However, I noticed ANOTHER benefit: the odds of subversion are extremely low. This is a fab process that was believed incapable of producing usable CPUs and is currently producing memory chips en masse. Although there have been proposals to hide backdoors in memory chips, there’s no logical reason to believe the memory fab is subverted in a CPU-defeating way. So, our choices are a vetted design on 20nm CPU process node tech with vendor-supplied logic, or on the memory tech licensing that one company’s logic cells. A side benefit is that 20nm gives you plenty of room for performance or security enhancements, too.

“and I am sure that a design could be licensed from the Chinese. They’ll sell anything at the right price.”

The Elbrus processors from Russia are SPARC compatible & hence worth considering. They have small ones, dual cores (w/ fault resilience support), quad core, and so on. The reason I bring up SPARC is there’s at least one ground up enhancement that builds on SPARC cores. The Russians also have MIPS chips too. The CHERI project (with many others) uses MIPS. Matter of fact, NonStop used MIPS for a long time for five 9’s with high throughput. They’re pretty versatile. Then, there’s China’s MIPS Loongson as you pointed out.

Finally, the Japanese companies have knockoffs of SPARC and the like. One of them is used in mainframe style machines so it’s probably reliable as hell. The Russian one had a ccNUMA feature in its spec so it’s probably used in large SMP machines. Might imply reliability. All in all, getting a core licensed from another country under NDA, taking out risky parts, adding security enhancements, and sending that to a 20nm fab might be a decent route.

I still think getting some CPLDs or FPGAs vetted at 20nm might be a nice idea. Could just get an endless supply of reconfigurable chips and keep putting open designs on them, building the real ASIC only that one time. Maybe use write-once FPGAs too to keep the security risks down. The prototyping can be done on untrusted COTS FPGAs that are truly reprogrammable, then it’s ported to the write-once FPGAs that are already being fabbed. Might keep costs down.

@ mike re TEMPEST

TEMPEST is no doubt a concern. The best way to deal with it right now is to run the systems in a shielded room on battery or generator. The soundproofing I used to recommend also just became a better idea due to the audio-based malware research. As per old DOD rules, either no cellphones or they have batteries removed. Any data lines have appropriate shielding. Use of rural areas is preferred, as you can then create a minimum 100-yard zone around the site with cameras.

In parallel and long-term, I’d like to see more academic research into it. The governments have several decades experience in defence and offence. Academia has a few years basically. We need really smart people with good funding looking into everything in COTS hardware, power cords, active emanation attacks, etc along with shielding options for rooms and individual products. Essentially, they need to recreate all that locked away classified research from scratch and put it in the open with their recommendations. At that point, regular manufacturers and regular equipment will be able to produce cheap TEMPEST shielded workstations.

(Note: I’ve already seen books on shielding that might serve that purpose. Anyone trying at EMSEC defence should look at about every available work on dealing with regular EM/RF noise as I’m sure that will help in preventing passive leaks.)

re cell phones

Yeah, they can’t be trusted. I was working on that problem again during the paper release. I think the safe execution architectures could really help with that. Here’s the basic proposal:

  1. Safe (tagged/capability/bytecode/whatever) chip with almost all phone software running on top of it.
  2. Dedicated chip for baseband stack running a modified version of whatever they normally run.

  3. Careful interface for the two where (a) GSM stack only accesses restricted part of memory which is hardware mediated and (b) safe chip decides when to cooperate with main GSM functions.

For instance, the GSM chip might activate its hidden silent answer feature, but safe chip isn’t in call mode so doesn’t send any microphone data. The safe chip actually has to move data into the buffer (or over some IO line) before GSM chip can physically see it so the GSM chip at best can ask for permission.
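The mediation above can be sketched in code. This is a toy model only, with made-up class and method names (SharedBuffer, SafeChip, etc. are illustrative, not any real phone architecture): the baseband can only ever read the shared buffer, and the safe chip decides whether microphone data is ever placed there.

```python
class SharedBuffer:
    """Toy model of a hardware-mediated memory region visible to both chips."""
    def __init__(self):
        self._data = b""

    def write(self, data: bytes):
        # In the real design only the safe chip's side of the bus can write.
        self._data = data

    def read(self) -> bytes:
        # All the baseband chip can ever physically see.
        return self._data


class SafeChip:
    def __init__(self, buffer: SharedBuffer):
        self.buffer = buffer
        self.in_call = False          # user-visible call state

    def mic_sample(self) -> bytes:
        return b"\x01\x02\x03\x04"    # stand-in for real audio samples

    def service_baseband(self):
        # Data moves into the buffer only when the user is actually in a call;
        # a hidden "silent answer" request from the baseband gets nothing.
        self.buffer.write(self.mic_sample() if self.in_call else b"")


buf = SharedBuffer()
chip = SafeChip(buf)

chip.service_baseband()               # baseband silently "answers" a call
assert buf.read() == b""              # no audio leaks

chip.in_call = True                   # user really places a call
chip.service_baseband()
assert buf.read() != b""              # audio flows only now
```

At best the GSM chip can ask; it cannot take.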

The one big risk I still see with this architecture is EMSEC attacks. Cellphones weren’t allowed around secure phones for a long time. It was even noted that having a Nextel phone near a STU-III caused a key-compromising leak almost immediately due to EM interactions. It took a while before mobile phones got secure enough for NSA approval (e.g. Sectéra Edge). They’re no doubt designed to minimize emanations.

So, the attack vector that comes to my mind is that the attacker-controlled baseband stack causes the radio to leak (a) the trusted phone’s secrets or (b) some other device’s secrets nearby. The easiest route will probably be to separate the radio from the protocol interpreter, then somehow ensure they both behave a certain way. Yet another safer chip design… In any case, it seems my ad hoc architecture protects against the majority of attacks and the radio attack is somewhat exotic. Might still be beneficial.

@ Winter

Thanks for the link. I look forward to his articles when he gets them online.

Alan Kaminsky December 14, 2013 2:28 PM

The Russian Technical Committee for Standardization has announced an open research competition on the GOST R 34.11-2012 hash function (the Russian counterpart of the U.S. Secure Hash Algorithm).

They are offering actual monetary prizes for winning research papers showing weaknesses or breakages of GOST: RUB 500,000 first prize, RUB 300,000 second prize. (US$15,215 and US$9,129 at today’s exchange rate.) More than NIST ever offered for SHA-3.

Bauke Jan Douma December 14, 2013 3:05 PM


He’s a white collar criminal himself, so what did you expect? (The blood was on the hands, now probably well cleansed – but that’s for another Christian holiday.)

Fab! December 14, 2013 3:23 PM

“Even a fab using old technology subject to optical inspections can be subverted by insiders using cutting edge nanoscale chip technology.”

Now, this isn’t possible using the fab you’re using. So if you can verify your chips and the tools used to create them, you can at least be reasonably sure that the chips do not contain such 20nm features, no (modulo being able to ensure the chip was really made by this verified fab)?

That leaves the possibility that such a chip may be altered later to add those in. I do not know whether that is plausible.

Additionally, would it be possible to design the (micron) chip so that it is resistant to a maximum amperage that no 20nm circuit could possibly take? If so, then adding in the ability to “flood” the chip with that much current (or close to it) could ensure any uninvited small features get fried. I don’t know if that’s plausible either.

Zaphod December 14, 2013 3:42 PM


Invaluable post.

Many thanks for posting. I would advise all to copy & paste & print the post


Figureitout December 14, 2013 3:46 PM

Nick P
–Ok, but I really don’t want a Bluetooth stack in the chips no matter how it’s “disabled”… That was Mike the goat talking about sourcing from China (still untrustworthy). If the tech is so exotic, then it probably has lots of holes, and it’s unverifiable, which makes me really nervous. Even if I can confirm secure delivery from a “trusted” source, I still feel like we can do better…

It’d be a really nerve-wracking project too b/c you know you’re going to be targeted…

Oh and those papers, let me know if these are it:
SecureME: a hardware-software approach to full system security
Kernel and Application Integrity Assurance: Ensuring Freedom from Rootkits and Malware in a Computer System
SHIELDSTRAP: making secure processors truly secure

jackson December 14, 2013 5:05 PM

If Snowden took 1.7 million documents, as is now being reported, there is no way he could have known the contents of all those documents. The implications of such a compromise go far beyond the arrest of any NSA overreach.

This reminds me of the mindless introduction of non-native species, done with the intent of eradicating a certain pest otherwise flourishing because of an absence of predators. The inadvertent consequences were worse.

Go ahead, keep laughing.

RR December 14, 2013 5:34 PM

@green382746282 “@RR Your “stolen-concept” critique presupposes certainty as the only basis for discourse. You are discovering your “certainty”, not unimportant’s.”

From context, it appears that unimportant and I are both talking about certainty as such, an abstraction subsuming all instances of certainty.

You are conflating the concept of certainty with two particular instances, his and mine.

anonymous December 14, 2013 5:42 PM

Compare working health exchange websites to HealthCare.gov.

Washington state, Kentucky, and Connecticut have state health insurance websites that are said to be working. HealthCare.gov has many problems.

When HTML and CSS errors are checked, WA, KY, and CT have few or no errors; HealthCare.gov, many errors.

Comparing results, WA, KY, and CT have performance enhancements; HealthCare.gov does not.

See the pattern?

Nick P December 14, 2013 6:24 PM

@ figureitout

Yes those are the papers. Thanks for getting their links.

re bluetooth

Yeah, I’d leave it out too, or modify it similarly to the GSM chip to operate untrusted.

Oh Vey December 14, 2013 6:53 PM


I do not believe that Snowden took more than one million documents. What I believe is that the NSA does not know what he took. Since they do not know what he took, the only thing they can do is assume he took everything he could possibly access…the worst case scenario. That is just smart security practice but it tends to make the situation sound much worse than it actually is.

Figureitout December 14, 2013 8:41 PM

Nick P
–No problemo. And the little warning about publishing the papers (so spooky, much scare) didn’t really apply as I’m not profiting from it and it’s academic. Took less than 10 minutes to find all 3 on google, so maybe take more precautions if they want to keep some knowledge secret.

GSM is worrying to me too. Kind of get that feeling that everything you thought was real or secure was a lie…

Nick P December 14, 2013 11:39 PM

@ Figureitout

“And the little warning about publishing the papers (so spooky, much scare) didn’t really apply as I’m not profiting from it and it’s academic.”

I was mocking our copyright system. And it always applies. You’re unlikely to be prosecuted, though.

re GSM

It was never safe or about security. That’s the thing: most stuff wasn’t designed for security so nobody should have trusted its security. Most things are designed for convenience, low cost, integration, backward compatibility, speed, “coolness,” etc. Most of this hurts security. GSM, POTS telephones and commercial VOIP are all insecure by default. Two of them had dedicated sections in spy catalogs. The other was torn up by black hats.

Here’s a little principle that will help you spot certain cases: if govt contributes to it and pushes it, but uses something different internally, the ‘it’ here might be subverted/weak. It applied with IPsec and VOIP security as NSA picked a different standard for each (HAIPE & SCIP) instead of what they pushed on the public. I couldn’t know that the principle meant something serious in practice but the Snowden leaks later showed they subverted the public offering. So, this tells us (a) truly secure solutions are often different by design with tough tradeoffs and (b) what NSA trusts is more indicative than what they tell us to trust.

That’s also why I posted the SCIP spec documents here. It’s their preferred secure calling system. Hopefully a cryptographer will review it at some point. Now if only I could get a HAIPE encryptor’s source code. I’d compare it to the weakened IPsec to see exactly what changes they made. Then we’d have secure Layer 2 and 3 link encryption, too.

I’m dreaming, though. Those devices are controlled COMSEC items. If someone sent me one, I’d probably throw it as far as I could before someone kicked in my door. I don’t have the tech to reverse engineer an actual device anyway. (sighs)

Nick P December 14, 2013 11:58 PM

@ Zaphod

You’re welcome! My only hope is it benefits people building The Next Secure Thing. (And they actually build a secure thing this time.)

@ Fab!

“Now, this isn’t possible using the fab you’re using. So, at least, if you can verify your chip and verify the tools used to create them, then you can be reasonably sure that the chips do not contain such 20nm features, no (modulo being able to ensure the chip was really done by this verified fab) ?”

Are you being sarcastic or asking me a question? Regardless, the end result of the discussion with RobertT was that none of them were trustworthy unless I could ensure the workers didn’t betray me (without being able to verify it). The cornerstone of my scheme was certain ideas for handling the personnel security part plus building a fab with controls on key areas of manufacturing. RobertT thought both were nonsense and I’ll just quote him rather than say it myself.

“BTW ALL subversion of product at the Fab stage is probably done by motivated enterprising individuals without any cooperation from fab management. WHY would this be any different at a fab that you own?” (RobertT)

“You dont seem to understand that 20nm is about 1/50th the size of the minimum structure that a typical optical inspection microscope can resolve even at 130nm your dealing with structures that are about 1/4 the wavelength of Green light (about 500nm). You might be able to sort-off see 130nm cells but you have no idea what the edges of the structure really look like its all just a diffracted blurry mess. ” (RobertT on optical inspection of nm chips)

And the whopper:

“To understand what is possible with a modern fab you’ll need to understand cutting edge Lithography, Advanced directional etching , Organic Chemistry and Physics that’s not even nearly mature enough to be printed in any text book. These skills are all combined to repeatedly create structures at 1/10th the wavelength of the light being used. Go back just 10 or 15 years and you’ll find any number of real experts (with appropriate Phd qualifications) that were willing to publicly tell you just how impossible the task of creating 20nm structures was, yet here we are!

Not sure why you believe that owning the fab will suddenly give you these extremely rear technical skills. If you dont have the skills, and I mean really have the skills (really be someone that knows the subject and is capable of leading edge innovation) then you must accept everything that your technologists tell you, even when they’re intentionally lying. I cant see why this is any better then simply trusting someone else to properly run their fab and not intentionally subvert the chip creation process.” (RobertT)

Everything he said makes sense. Although I indicated I had a method to get a bunch of that under control, the truth is it doesn’t contradict his belief that the tech is simply too complicated and exotic for security types to get a handle on. It essentially comes down to two major possibilities:

  1. They’re too busy doing all the hard science making chips work and have no time for the subversion shit at cutting-edge process levels.
  2. They can be paid to subvert a chip using their esoteric knowledge despite whatever security precautions are put into place.

For a good process node tech, these seem to be true. I’d also say that even the older fabs could probably be subverted by subverting the [still custom] equipment. I’ve pushed silicon experts in the past on what aspects of fab I need to focus security on and worked out ways to build both tools and fab assuming untrustworthy parties. Let’s just say it gets so complex, painstaking and uncertain that it would probably be subverted anyway assuming it didn’t fail due to cost.

So, perverse as it seems, this problem might be best solved by staying so close to the cutting edge that subversion itself becomes difficult or unreliable. That along with revamping older machines and using things that aren’t made with electrical engineering. Like typewriters. 😉

Mike the goat December 15, 2013 1:51 AM

Figureitout: I am in the process of doing this as a side project, both through dumping and attempting to reverse compile baseband firmware and more recently through using a BTS emulator and throwing stuff at production phones to see how they behave. Unfortunately my equipment is GSM only and I am certain there are far more interesting things in the WCDMA air interface handling of modern phones.

Nick P: yeah, it just seems EMSEC has been conveniently neglected by the consumer industry, and some innovations – like the LVDS signalling used on laptop LCD displays – have actually made the problem far worse. What makes me believe that it is quasi-deliberate is that some basic shielding would at least attenuate the effects somewhat, and yet they have neglected to even shield the signal wires. Call me a conspiracy theorist, but it just doesn’t add up. External monitors do not fare much better, with the signal lead being the main emanation point on modern monitors (unlike the CRTs of old). I would have thought that seeing as the DVI spec included HDCP, it would have been trivial to set up a simple encryption system between video card and monitor so that at least sniffing emanations from the signal wire could be limited. Of course there are other sources, like the driver circuitry on both sides, but hell, at least they should be making an effort to minimize the problem.

For those interested, you can buy copper wallpaper, which is essentially very thin conductive foil. You then cover the walls and roof of your office with the stuff and use either adhesive conductive bonding tape (inferior) or carefully solder the copper. You can buy a perforated mesh covering for windows and specialized fittings for penetrations (e.g. honeycomb sheet for air vents). It doesn’t cost that much to protect a room, but many neglect to sufficiently isolate the power supply of the room. Unfortunately there are so many places where the average Joe can err.

Clive Robinson December 15, 2013 2:56 AM

@ Nick P, et al,

One further thing to note,

Granted, any fab running four generations back cannot do 20nm, as they don’t have the equipment; but that does not stop the problem, it only moves it to some part of the chain you cannot 100% observe.

Let me put it another way: for some strange reason, “the first off the production line” of any product is often given an artificial value by collectors, over and above “the second” or any subsequent item off the same line (why they do this I have no idea, “it’s a human thing” 😉).

The issue is that if a collector is shown two items off the same line, in most cases there is no way they can tell them apart in a way that indicates their “roll off” order. So they invent a fictitious “trust system” called “provenance”, which is basically one or more bits of paper testifying that an object was “the first” at some point, and they test the provenance instead… which, after a moment’s thought, you will realise is an exercise in futility as well.

The other, more important thing to remember is that the provenance is not reliably and physically linked in a secure way to the object, so all it does is open up another “trust channel” that can be abused… So even if the provenance is genuine, it does not somehow make the object presented with it genuine…

So if I give you a “tape of chips” on a reel or in a box, with paperwork indicating they are from your fab, how do you actually know they are from your fab?

The answer, as any stage magician knows, is that “you can not”, because you personally have not had 100% physical control over the devices 100% of the time…

For instance, a “security seal” only attests to the fact that it has not been activated, nothing else: not that the box has not been opened, not that the box is genuine, not that the reel is genuine, not that the tape is genuine, nor that the chip packages are genuine, and thus not that the chips are genuine.

So you either “take it on trust” or you enter the world of “probability” in some way. For example, you destructively test one or two “samples” from the tape. The best this can do is “attest” to the state of trust of the devices you test, not the rest. Which, if you cannot test reliably – which you cannot – is worth not a lot…
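To put rough numbers on that: the chance that destructive sampling catches a subverted batch is hypergeometric, and it is grim for small samples. A quick sketch (Python; the figures are made up purely for illustration):

```python
from math import comb

def detection_probability(total: int, subverted: int, sampled: int) -> float:
    """P(at least one subverted chip lands in the destructive sample)."""
    if sampled > total - subverted:
        return 1.0  # sample is bigger than the clean population
    # 1 - P(sample drawn entirely from the clean chips)
    return 1 - comb(total - subverted, sampled) / comb(total, sampled)

# A reel of 1000 chips, 10 of them subverted:
print(detection_probability(1000, 10, 2))    # sample 2 chips  -> about 0.02
print(detection_probability(1000, 10, 100))  # sample 100 chips -> about 0.65
```

So testing “one or two samples” gives you a couple of percent chance of noticing, and even burning 10% of the reel leaves a one-in-three chance of missing it, assuming you could even recognise a subverted die when you saw one.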

So at some point, as Alice discovered, you disappear down “the rabbit hole” and end up at “a mad hatter’s tea party”, which only has the illusion of making sense.

As an example, “PUFs” supposedly produce an unalterable “serial number” for a chip… so what? The fact that a chip can produce a unique and unalterable series of bits no more proves it came off your production line than off somebody else’s. So what you do is, as the chip comes out of the process, read the serial number into a logging process and record it as having come out of your fab. All you have done is make a provenance system that can be abused in some way, and will be if the stakes are high enough.

On realising this some time ago, I sat down with several cups of hot brown stuff looking for a way out of the problem, and I’ve come to three main conclusions. The first is that the cost and complexity of provenance rise with a significant power law, and thus at some point become unmanageable in a commercial environment. The second, therefore, is that “There is no way of 100% certifying that an item is as it is presented to be”, just some unknown probability that it might be. The combination of these two gives rise to the unpalatable third, which is: “The probability is at best related to your resources, and at worst related to somebody who effectively has unlimited resources, and over whom you have no control.”

So welcome to the world of “Probabilistic Security”. You cannot avoid it, so you have to learn to live with it, and by so doing you will probably find a nice “sweet spot” where you can have an acceptable level of risk at an acceptable price.

And the route to this sweet spot is “mitigation”, which on the face of it can be quite expensive as in all “high XXX” systems.

However, as I’ve indicated in the past, the bulk of the expense comes from “bolting on”, not from “designing in”. We’ve seen this with safety features in cars. Altering a production line to “bolt on” is hellishly expensive; modifying an existing design prior to production is also expensive compared to making a new design which has it “designed in”. But surprisingly, it can actually be cheaper, where you turn the safety feature into a dual- or multi-use item; crumple zones and side impact bars are examples of this.

So it is possible to come up with a mitigation design that is a lot less expensive and considerably more secure for you, but which as a consequence also makes life considerably more expensive for potential opponents, no matter how well resourced.

One such example is the use of COTS components from different supply sources to make voting systems. Let’s say I pick three different base technologies from five different base types; my choice of low-cost parts might be PIC + MIPS + ARM CPUs. With careful design, the cost of developing the software and hardware to use another CPU will be minimal, and the port fairly rapid. An adversary, however, has to subvert five different CPU core designs, which is not going to be easy or quick to do. I can also use “old stock”, where the devices have been manufactured en masse and are approaching their end of life, and thus sitting in quite a few companies’ inventories waiting in effect to be “disposed of” as storage space costs.
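The diversity trick only pays off if the outputs are actually compared, so a majority vote across the independently sourced CPUs is the natural glue. A minimal sketch (Python, names purely illustrative) of a 2-of-3 voter where one subverted CPU family cannot outvote the other two:

```python
from collections import Counter

def vote(results: dict) -> int:
    """Majority vote over results from independently sourced CPU families.

    One subverted family can corrupt its own answer, but it cannot
    outvote the other two; a split with no majority is flagged instead
    of silently accepted.
    """
    tally = Counter(results.values())
    answer, count = tally.most_common(1)[0]
    if count * 2 <= len(results):
        raise RuntimeError("no majority - possible subversion: %r" % results)
    return answer

# Three independently fabbed boards compute the same ballot total:
print(vote({"PIC": 4217, "MIPS": 4217, "ARM": 4217}))  # -> 4217
print(vote({"PIC": 4217, "MIPS": 4217, "ARM": 9999}))  # ARM outvoted -> 4217
```

Any disagreement is itself valuable evidence: it tells you a supply chain has been tampered with, which no single-vendor design can ever tell you.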

The downside of this, however, is that it might be less expensive for the adversary to attack the “common point”, which is you or your business. We know that both Israel and Russia, for instance, are ethically quite happy with murdering non-combatants if the downstream effect is to remove a potential future weapon / force / leader from a potential future conflict. Likewise, US drone attacks usually take out non-combatants as collateral damage if the chance of getting a target is only possible, not definite. We have also seen the US Government “making examples” of people one way or another by “right striping” legislation, and we suspect that refusals to commit “unlawful acts” by US businessmen have led to them being prosecuted for insider trading or other acts (as you may remember, there was the owner of the company quite legally supplying internet gambling software who got “SWATed” at home for refusing to “back door” his software).

Winter December 15, 2013 3:42 AM

“So welcome to the world of “Probabalistic Security” you can not avoid it so you have to learn to live with it and by so doing you will probably find a nice “sweet spot” where you can have an acceptable level of risk at an acceptable price.”

Isn’t this the definition of real security? If there was one thing that I learned from all my readings in (computer) security, this is it.

But we already know this. If real, absolute security were possible, Darwinian selection would have led some organism to hit on it during the 3+ billion years of life on earth. Yet no organism has found absolute security. Life nevertheless thrives with probabilistic security.

Bryan December 15, 2013 4:10 AM

As I was reading Nick P’s big post and its references, this came to mind:

Plan of attack for Secure Computing

Phase one. Years 1 and on. Set MMU page spaces for heap and stack as data-only; don’t allow code execution out of them. If code breaks, the providers can fix it with new releases. This zaps many exploits dead in their tracks. Also, all firmware on commodity motherboards must have a physical hardware interlock that prevents alteration of the firmware, including VESA BIOS extensions and code for co-processors. After 12 months from enactment, no procurement for government or critical industry is allowed that doesn’t absolutely prevent the execution of code from data space. All commodity computers must also have the hardware firmware-update interlocks. Non-commodity computing systems have 5 years to be updated with the physical firmware interlocks. Yes, many attack methods are still possible, but this is a start. Other than shaving costs to the bone and convenience features, these should already be implemented.
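The page-permission rule in phase one amounts to W^X: a page is writable or executable, never both. A toy model (Python; the Page/MMU classes are purely illustrative, real MMUs enforce this in hardware via page-table bits):

```python
class Page:
    def __init__(self, writable: bool):
        self.writable = writable   # True = data page (heap/stack), False = code page
        self.contents = b""

class MMU:
    """Toy W^X enforcement: writable pages can never be executed."""
    def write(self, page: Page, data: bytes):
        if not page.writable:
            raise PermissionError("write to code page")
        page.contents = data

    def execute(self, page: Page) -> bytes:
        if page.writable:
            # This is the check that kills classic injected shellcode.
            raise PermissionError("execute from data page")
        return page.contents

mmu = MMU()
heap = Page(writable=True)
mmu.write(heap, b"\x90\x90\xcc")   # attacker injects "code" into the heap...
try:
    mmu.execute(heap)              # ...but it can never run
except PermissionError as e:
    print("blocked:", e)
```

On real hardware this is the NX/XD page-table bit plus an OS that refuses to map pages both writable and executable.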

Next phase. Years 2 to 5. Make and introduce commodity processors that implement randomized instruction sets (x86, ARM, etc.), and at the same time implement both recompilers for the randomized instruction sets and binary stirring. It should be possible within a year to make first-generation processors that can handle full reordering of the bit locations and opcodes of the instruction set; it should only take a couple more stages in the instruction decode pipeline, combined with a simple change to the MMU that makes code memory pages and data memory pages exclusive. It should take less than a year to get most major and open-source operating systems up to speed once the hardware and randomizing recompilers are available. The CPUs can default to the standard instruction set, with a load operation that loads the newly computed random set, which is used until power off.
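A minimal sketch of the instruction-set-randomization idea, under stated assumptions: the opcode table, the `make_random_isa` / `recompile` / `decode` names, and the tiny four-instruction ISA are all invented for illustration. The recompiler emits the program under a secret per-boot encoding; code injected by an attacker who assumes the standard encoding decodes to the wrong (usually illegal) instructions.

```python
import random

# Standard (public) encodings an attacker would assume.
STANDARD_OPCODES = {"LOAD": 0x01, "STORE": 0x02, "ADD": 0x03, "JMP": 0x04}

def make_random_isa(seed):
    """Draw a secret encoding for each mnemonic from the full byte range,
    modelling the 'load the newly computed random set' step at boot."""
    rng = random.Random(seed)
    codes = rng.sample(range(256), len(STANDARD_OPCODES))
    return dict(zip(STANDARD_OPCODES, codes))

def recompile(program, isa):
    """The 'randomizing recompiler': emit under the secret encoding."""
    return [isa[mnemonic] for mnemonic in program]

def decode(binary, isa):
    """What the CPU's decode stage does with the secret table loaded."""
    reverse = {v: k for k, v in isa.items()}
    return [reverse.get(b, "ILLEGAL") for b in binary]

isa = make_random_isa(seed=42)
trusted = recompile(["LOAD", "ADD", "STORE"], isa)
print(decode(trusted, isa))   # decodes back to the intended program

# Attacker injects bytes assuming the *standard* encoding:
injected = [STANDARD_OPCODES["JMP"], STANDARD_OPCODES["LOAD"]]
print(decode(injected, isa))  # with high probability: garbage / ILLEGAL
```

In real hardware the remapping would sit in the decode pipeline as the post describes; this sketch only shows why injected code written against the public encoding stops working.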

Next phase. Years 5 and on. Make computer architectures from the ground up that have security at their core. This is where capabilities, data types, etc. are enforced with immutable hardware.

Thoughts on this?

65535 December 15, 2013 4:35 AM

@ Jonathan Wilson

This is a good start. How do we get the NSA to comply?

@ Nick P

Nice collection of documents. I will read them.

I noticed that Sectéra Edge (according to Wikipedia) Uses: “Operating System: Microsoft CE.”

I thought Microsoft was insecure. Why did Gen. Dyn. use it?


Thanks for the open links to Nick’s post.

@ reke

Nice link. I believe BT could and would provide a backdoor to GCHQ/NSA. I have noticed that Belkin and other SOHO vendors advertise the ability to use a long “admin” password, such as 64 to 128 characters – yet they truncate the actual password to 10 characters. A ten-character password is weak and probably breakable by the NSA.
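The cost of that truncation is easy to quantify. A rough sketch (the `entropy_bits` helper is invented for the example, and it assumes passwords drawn uniformly from the 95 printable ASCII characters):

```python
import math

def entropy_bits(charset_size, length):
    # entropy of a uniformly random password: length * log2(charset size)
    return length * math.log2(charset_size)

print(entropy_bits(95, 64))  # what the 64-character UI advertises: ~420 bits
print(entropy_bits(95, 10))  # what a truncating router actually keeps: ~66 bits
```

Sixty-six bits is within reach of a well-funded brute-force effort; four hundred is not, which is why silent truncation matters far more than the advertised maximum length.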

@ Mike the Goat

I agree that a “clean slate” is best. Http, software, OS, and firmware (combined) have a very high attack surface.

[To all: I will try to respond when I can]

Iain Moffat December 15, 2013 5:34 AM

As an old-time electronic designer, it is worth reminding those concerned about fab security or device provenance that there is always the option to implement the CPU architecture of your choice using small-scale logic – this is, after all, how the computers that started the Internet (PDP-11, DECSYSTEM-20, …) were built in the 1970s.

Such devices as the TTL 7400 series can be exhaustively tested for correct behaviour at the electrical level for all input and output combinations (16 to 24 pins aren’t that many …). Building a CPU that way involves more work but less advanced materials technology than trying to fabricate a single chip CPU. Googling for “TTL CPU” will find quite a lot of prior art.
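The exhaustive-testing claim is worth making concrete. A 7400 is four independent 2-input NAND gates, so the whole device has only 2^8 = 256 input combinations. The sketch below (the `verify_7400` harness and the simulated chips are invented for illustration; a real test would drive the pins electrically) shows that a full sweep catches even a single-pattern misbehaviour:

```python
from itertools import product

def nand(a, b):
    """Reference behaviour of one 2-input NAND gate."""
    return 0 if (a and b) else 1

def verify_7400(chip):
    """Exhaustively check all 256 input combinations of a quad NAND chip.
    `chip` maps an 8-tuple of input bits to a 4-tuple of gate outputs."""
    for inputs in product((0, 1), repeat=8):
        expected = tuple(nand(inputs[i], inputs[i + 1]) for i in range(0, 8, 2))
        if chip(inputs) != expected:
            return False
    return True

# a correctly behaving (simulated) chip
good_chip = lambda ins: tuple(nand(ins[i], ins[i + 1]) for i in range(0, 8, 2))

# a subverted chip whose fourth gate misbehaves on exactly one pattern
bad_chip = lambda ins: (good_chip(ins)[:3] + (1,)) if ins == (1,) * 8 else good_chip(ins)

print(verify_7400(good_chip))  # True
print(verify_7400(bad_chip))   # False: the sweep finds the one bad pattern
```

This is the crucial contrast with a billion-transistor CPU: at the 7400 scale, trust can be established by brute force rather than by faith in the fab.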

The explosion of minicomputers from small vendors during the 1970s demonstrates that such a project is within the resources of a community project. The availability of really small surface mount packages and improvements in PCB technology mean that performance of such a CPU today should be significantly better than the 1970s version even if not quite at Pentium levels – certainly enough to run an IP stack and IDS/Firewall at useful data rates or provide a secure “gatekeeper” to mass storage for critical data.

For those with a slightly different trust versus effort tradeoff the bit slice processors (AMD 2900 series) offer larger building blocks which can be programmed at the microcode level. As an example Brad Rodriguez’s PISC is a non-trivial 16-bit 5MHz machine built that way for educational use.

Another piece of history worth looking at is the 1970s DECSYSTEM-20. This machine had a fast 36-bit back end running user processes – more or less a fast virtual clone of the 1960s PDP-10 – and a dedicated PDP-11 front end handling I/O. One could imagine a modern version of that architecture: a fairly PC-compatible secure machine running a modern operating system on a commercial processor, in which the “BIOS” or “HAL” is really a front-end processor running a fixed program from ROM (real ROM, or at least one-time programmable, to limit the risk of external persistent threats surviving a reset) rather than Flash, with responsibility for network IDS and file system access control as well as low-level I/O. This is really only inserting real hardware where the likes of VMware and VirtualBox use a software shim, I think. It reduces the engineering challenge from building the whole machine to building the I/O processor and its software in a secure way, while allowing a wider choice of software on the back end with some confidence that malware will be caught reading what it shouldn’t or phoning home. The key to doing that for security is that the I/O processor needs to trust the back end at least as little as the external network ports.

One should probably also start any DIY CPU project with software tools from that era, as the code is small enough to be printed, read, and inspected for threats (think of the small C compiler in Berry and Meekings’ “A Book on C” as an example). We know a lot better how to code for security in C now than in the 1980s, so I don’t see that it’s inherently insecure – what matters more is that the hardware prevents bad code leading to data being executed. This can be significantly helped by a ROM-based architecture, hardware restrictions to prevent user-mode writes to kernel data, and avoiding any use of RAM-resident tables to modify OS functionality.

I hope this helps


Fab! December 15, 2013 5:44 AM

I wasn’t being sarcastic, merely not knowledgeable about the matter, so my “ideas” were just of the quality you’d expect, sorry 😛

(I hadn’t seen that discussion about trusting the fab operators, and any chain of custody).

I’ll go back to lurking now…

65535 December 15, 2013 6:18 AM


I agree. I am getting suspicious of 11th hour (or ambush) announcements. I would guess that the memo is leaked to the right “news outlets” for spit polish and spin.

Clive Robinson December 15, 2013 7:24 AM

@ Fab!,

    … not knowledgeable about the matter, so my “ideas” were just of the quality you’d expect,… …I’ll go back to lurking now…

Don’t go back to lurking; it does not help you or other people learn new things by re-evaluating old things.

The human world improves more by experimentation than it does by thought alone. Virtually nothing is learnt when things go right, and too much going right means people don’t learn to cope; they stagnate and can’t solve issues when things go wrong, as they always do. More is learnt when things go wrong: an unexpected result or failure opens opportunities to change and do things differently, and that in turn can only happen if people are learning from failure with an open mind.

An open mind only arrives by questioning the accepted and finding for yourself the truth or falsehood of an accepted notion. A person’s view should be based on reasoned results, not guesses or hunches – though hunches do tend to give new ideas to test with reasoned thought and the experiments that arise from both. Importantly, you should always be able to answer questions by way of explanation in a manner that is understood by the questioner.

If I fail to answer a question in a way the questioner can comprehend, then I have wasted both their time and mine, which benefits neither of us. Importantly though, answering questions, no matter what they are, improves not just your ability to answer questions but opens fresh avenues of enquiry to you. To this effect it has been noted by others in the past that “The right question is often more important than the right answer”.

green382746282 December 15, 2013 9:09 AM

@RR unimportant wrote: “I like to know whether any cryptologist have ever thought about the necessity of uncertainty. For me it seems to be an essential element of being human. Humanity should be more than just being part of the exponential growth of technology. Liberty is already lost, we are about to loose our privacy and finally we’ll become humanoid robots.”

Then you responded:

“The concepts ‘necessity’ and ‘essential’ both presuppose and logically depend on the validity of certainty.”

Can’t you understand that you are replying in an analytical mode to a statement that is not intended to be analytical, but poetic or existential?

If your comment were a significant critique, rather than a nit-pick, unimportant’s statement would be unintelligible, yet it conveys a clear meaning, at least to me.

And, of course, uncertainty persists even in pure analytical environments, as in Heisenberg’s uncertainty principle, which could be one way of understanding unimportant while remaining in an analytical mode. Uncertainty manages to be both necessary and essential there, does it not?

I think unimportant said something very important and that your dismissive response comes nowhere near to an interesting critique of it. I certainly will not live in the desert of your analysis, and I doubt you do either, except when it serves you for purposes I can only guess at: politics, perhaps, or simply bad digestion.

Or perhaps I missed the joke and your point was to ironically respond in the mode of the “humanoid robots” that unimportant warned of.

Figureitout December 15, 2013 12:54 PM

Now if only I could get a HAIPE encryptor’s source code.
Nick P
–Hmm, sounds like a challenge… I’ve got way too many other things to do first though (review math, study transistors/circuits, FORTH, a new BeagleBone, Arduino, radio, messing around w/ old computers and OS’s).

Those devices are controlled COMSEC items. If someone sent me one, I’d probably throw it as far as I could before someone kicked in my door.
–Yeah, I’m sick and tired of military and mafia types trying to scare people away from having a secure PC and comms. If you touch it, you’ve got fingerprints on it; and having drugs shipped to a mayor somewhere, which obviously weren’t his, didn’t stop the coppers from busting in and shooting his dog. I’ve left my coat out, and I’m just waiting for some agents to plant child porn or drugs on me, falsely arrest me, and put me in a private for-profit prison; maybe even better to lock me up w/o books so I can just rot and go crazy.

Lynx December 15, 2013 1:01 PM

@Iain Moffat “Such devices as the TTL 7400 series can be exhaustively tested for correct behaviour at the electrical level for all input and output combinations (16 to 24 pins aren’t that many …). Building a CPU that way involves more work but less advanced materials technology than trying to fabricate a single chip CPU.”

This approach is actually less work than one might imagine.

Based on personal experience, a motivated software engineer armed with a copy of Don Lancaster’s “CMOS Cookbook” can design and build a simple 4-bit CPU with about thirty 4000-series CMOS devices together with a 256-byte non-volatile memory chip for microcode storage. That design had an ALU supporting addition, subtraction, and logical operations. It had branch and conditional branch instructions.

That 4-bit CPU module could function as a bit-slice component to make a 32-bit machine, but I never got past the design stage on that bigger machine.

So the complexity of a simple 32-bit CPU using discrete logic chips could be as little as 30 x 8 = 240 IC’s.
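The bit-slice composition described above can be sketched in software. This is an illustrative model, not Lynx’s actual design: the `alu_slice` / `alu32` names, the three operations, and the nibble-wide slicing are all assumptions made for the example. The point is that a 4-bit slice with a carry-in and carry-out composes into a 32-bit ALU simply by rippling the carry through eight slices.

```python
MASK4 = 0xF  # one slice handles a 4-bit nibble

def alu_slice(a, b, carry_in, op):
    """One 4-bit ALU slice: returns (4-bit result, carry_out)."""
    if op == "ADD":
        total = a + b + carry_in
        return total & MASK4, total >> 4
    if op == "SUB":  # a - b via two's complement: a + ~b + 1
        total = a + ((~b) & MASK4) + carry_in
        return total & MASK4, total >> 4
    if op == "AND":
        return a & b, 0
    raise ValueError(op)

def alu32(a, b, op):
    """Chain eight 4-bit slices into a 32-bit ALU, carry rippling upward."""
    carry = 1 if op == "SUB" else 0   # SUB needs an initial carry-in of 1
    result = 0
    for i in range(8):                # slice i handles bits 4i..4i+3
        nib_a = (a >> (4 * i)) & MASK4
        nib_b = (b >> (4 * i)) & MASK4
        out, carry = alu_slice(nib_a, nib_b, carry, op)
        result |= out << (4 * i)
    return result & 0xFFFFFFFF

print(hex(alu32(0x12345678, 0x11111111, "ADD")))  # 0x23456789
print(hex(alu32(0x10, 0x1, "SUB")))               # 0xf
```

In hardware the eight slices run in parallel with a ripple (or look-ahead) carry chain; the loop here just serializes the same dataflow.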

saucymugwump December 15, 2013 1:41 PM

@Figureitout “I don’t understand why a sawed-off shotgun is illegal when it’s a simple modification to a gun.”

For the same reason that adults having sex with children, people killing each other, driving while drunk, and rape are illegal. Marginal personalities need to be constantly reminded that anti-social activities are not acceptable. That, and the fact that sawed-off shotguns are relatively easy to conceal and very destructive to living things.

Wow, the content here is always the same story: NSA is the devil, but allowing China to manufacture almost all PC hardware, Google to heavily monitor the Internet activity of its users, Glassholes to surreptitiously walk around and record faces in real-time, and Facebook to collect and collate the most intimate personal data is just hunky-dory.

Have any of you noticed that Intel does not offer many motherboards for sale anymore (Newegg only lists six Haswell/LGA1150 motherboards), with the only alternatives being ones designed by Chinese companies and manufactured in China? Did you hear that Google wants to start manufacturing its own CPUs for its smartphones? When was the last time you saw an add-on board made somewhere other than China?

Some people here believe that fabbing (if that’s a word) in China or Russia is safer than in the USA, yet the former has declared that large parts of the Pacific Ocean are its private domain and is the top source for industrial espionage, and the latter is the top source for online theft. If the fab is in the USA, you can easily and quietly travel to it to check on things, but if the fab is in China or Russia, they will know you are coming long before you arrive. Not to mention the language problem.

If you are worried about the NSA, then you had also better be worried about certain companies and countries. The best locations for a fab are probably Germany or Finland, because of their anti-NSA attitudes mixed with high intelligence, top-level honesty, and a good work ethic. AMD already has one in Dresden.

Painfully gone December 15, 2013 2:33 PM

Interesting article about the current events unfolding in Sweden.

Using an exploit, autonomous far-left activists, in cooperation with mainstream media, have gotten their hands on account details for a lot of user accounts on Disqus.

Mainstream/state media have used those details to link unwanted people’s comments on both dissident forums and state media television forums to actual people. As we speak, media representatives are hunting down ordinary people, stalking them, and displaying their personal information in the media, linked to several non-politically-correct comments (which essentially will kill you in Sweden).

People have already lost their jobs because of this, and I fear that we’ll soon see a pickup in suicides and in targeted far-left attacks.

This is an ongoing part of the power struggle in Sweden but it’s very interesting to witness the effects of a large scale “defacement” using meta data.

Essentially this is a reminder that no one is anonymous on the internet unless you take very strong measures. It will be a painful reminder for those people who are now being targeted.

No comment yet from Swedish politicians, but this action goes straight back to the government through the henchmen who do their dirty work (not very clever ones: Facebook junkies and Twitter megalomaniacs).

Schoolteachers, scientists and intellectuals have already been forced out of work because of this.

It sickens me that we are so rapidly losing faith in the principles of democracy here, where “unparliamentary” measures to keep people in check are increasing.

Expect more from our part of the world in the times ahead.

Uhu December 15, 2013 3:07 PM

How could surveillance be made in an acceptable way?

Many people (including me) are outraged about all the spying being done by the secret services (not just the US!). But any good critic should also suggest alternatives. So what could be done to help intelligence services with their legitimate goals and at the same time protect the privacy of ordinary people (not just Americans)?

Here are some key ideas:

  • Data (including meta-data) not originating inside a given agency is not mass-collected by government agencies
  • Basic principle: Data is retained by the originating organization (so for instance by the cell phone operator) according to specific law
  • All the operating procedures (but not the details of cases) are public. So no secret laws and no secret courts
  • Access to data needs a warrant
  • Once a warrant is issued, the agency can access all required data. Every access is logged with an ID for the warrant (traceability). Automatic access should already be possible with programs similar to PRISM
  • Once a case is closed (warrant expires), the access logs are verified to confirm that the access was reasonable
  • There can be a procedure to issue warrants quickly in an emergency (for instance an ongoing terrorist attack or similar emergency). In this case, the warrant will be checked at a later time
  • There can also be open warrants that check for certain conditions, such as a combination of key words or the like. The important thing is that these automatic checks are periodically validated by a judge and that the requesting agency does not receive the raw data but has to send a filter to the origin of the data
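The traceability idea in the list above (data stays at the origin; every access is logged against a warrant ID; warrantless access is refused) can be sketched as follows. This is a toy model: the `DataOrigin` class, the record format, and the warrant IDs are all invented for illustration.

```python
from datetime import datetime, timezone

class DataOrigin:
    """Data is retained by the originating organization (e.g. a telco);
    every access is logged with a warrant ID for later audit."""
    def __init__(self, records):
        self._records = records
        self.access_log = []   # (timestamp, warrant_id, subject) tuples

    def query(self, warrant_id, subject):
        if warrant_id is None:
            raise PermissionError("no warrant, no data")
        # the log is what the post-case verification step audits
        self.access_log.append((datetime.now(timezone.utc), warrant_id, subject))
        return self._records.get(subject, [])

telco = DataOrigin({"alice": ["call:bob 12:01", "call:carol 12:30"]})

print(telco.query("W-2013-0042", "alice"))   # access allowed, and logged
print(telco.access_log[0][1])                # audit trail: W-2013-0042

try:
    telco.query(None, "alice")               # mass collection is refused
except PermissionError as e:
    print(e)
```

The agency never holds the raw bulk data; it only sees query results tied to a warrant, and the origin keeps the log that makes the later reasonableness check possible.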

With this procedure, it is ensured that agencies do not collect random (or all) data and then profile people (or abuse their access to data). At the same time, agencies can access data when they need it. In particular, this protects the privacy of people because:

  • While data is still retained, no single party has access to data from a multitude of sources
  • Investigations are traceable and accountable

If this is done right, one could maybe even negotiate an international treaty that would allow to access data in other countries. This might work as follows:

  • An international warrant is issued with requests to other countries to validate it
  • A local judge validates the warrant
  • Once the warrant is validated, the requesting agency has access to the data covered by the warrant in the countries that have validated the warrant

An international agreement could be negotiated together with a “no spy” agreement, i.e., signatories to the treaty grant each other simplified access to relevant data within a well specified framework (as outlined above) and at the same time agree not to spy on each other’s citizens. In my opinion this would seriously calm and reassure many people. If the US took such an initiative, it might even help them to substantially improve their image.

What do you think?

PS: I do not claim any rights on this idea, and to the best of my knowledge, it is an original idea. Whoever can make good use of it, please do so!

saucymugwump December 15, 2013 3:51 PM

@Painfully gone — from that infosecurity article you referenced:
“Members of the Research Group quickly realized, however,” reports The Local, “that the data they received also came with metadata that included the email addresses tied to anonymous Disqus accounts.”

Are people so naive that they use an email account containing their real name? Why didn’t they obtain an untrackable email account for use with their Disqus account, e.g.

Another interesting article regarding uber-politically-correct Swedes is:
Swedish parenting has created nation of brats

“We live in a culture where so-called experts say that children are ‘competent’ and the conclusion is that children should decide what to eat, what to wear, and when to go to bed. If you have a dinner party, they never sit quietly. They interrupt. They’re always in the centre, and the problem is that when they become young adults, they take with them the expectation that everything is centred around them, which makes them very disappointed,” said David Eberhard, prominent psychiatrist and author of How Children Took Power.

RobertT December 15, 2013 5:37 PM


You are correct in saying that you can’t produce 20nm structures using a 1um fab’s equipment, no matter how hard you try, and it does not matter how smart you are.

The assumption inherent here is that fabs are static; the truth is they’re not. Old equipment wears out or is simply replaced by better, but now affordable/available, equipment. So a 1um chip probably originally used a G-line stepper; now I doubt there is a fab operating a G-line stepper anywhere in the world today – I doubt you can even buy one. So an I-line stepper would probably be used, because it delivers better resolution than what is required and would be cheaper to operate.

Old 1um technology was not “planarized”. Planarization is an essential step in any process below about 250nm: it involves CMP (“chemical mechanical polishing”) of the wafer between process steps. Basically, any time a step grows some structure (depositing poly for the gates, first metal, …) we polish the wafer surface so that it is flat, because the advanced lithography requires this to achieve controlled resolution over the exposed wafer surface. So you might say that without CMP all structures must be larger than 250nm. However, that is also not really true, because CMP only tries to level the wafer, making the focus controlled across a wider area. There are other ways to achieve small areas of level surface with known, controlled thickness, and this is all that is really required to get you all the way to 180nm.

Here’s the problem: to subvert a fab you don’t really need to add active structures (circuits/gates/RAM…); you only need to be able to connect and disconnect several signals that should not connect. The truth is a subverted chip probably does not have a key-logger built in; rather, it simply communicates the state of certain bits in the CPU to the outside world. Controlling bits like the carry flag is essential to the security of all crypto algorithms (techniques like DPA and timing attacks try to discover this information by observing the operation of the CPU). If you have a hardware way to transfer just this ONE bit, then most crypto available today is useless.
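To see why a single leaked carry flag is so damaging, consider this toy demonstration (the `carry_out` / `recover_byte` names and the chosen-operand scenario are invented for illustration, not a description of a specific real attack). If an attacker can observe the carry-out of 8-bit additions involving a secret byte, a binary search recovers the byte in eight observations:

```python
def carry_out(secret, chosen):
    """The single leaked bit: carry flag of an 8-bit add, secret + chosen."""
    return (secret + chosen) > 0xFF

def recover_byte(leak):
    """Binary-search a secret byte using only the leaked carry flag.
    Key fact: carry_out(secret, 0xFF - mid) is set exactly when secret > mid."""
    lo, hi = 0, 0xFF
    while lo < hi:
        mid = (lo + hi) // 2
        if leak(0xFF - mid):   # carry set  =>  secret > mid
            lo = mid + 1
        else:                  # carry clear =>  secret <= mid
            hi = mid
    return lo

secret = 0xA7
print(hex(recover_byte(lambda c: carry_out(secret, c))))  # 0xa7
```

Each key byte falls in log2(256) = 8 leaked bits, so a hardware channel that exfiltrates nothing but the carry flag during crypto operations strips the key out wholesale.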

This is why I say that a very skilled technician could take a 1um process way, way beyond what was considered the limit at the time. Some of this would be the result of better equipment, but most is the result of a better understanding of the limitations, and a deep understanding of the techniques that were developed to overcome those physical limitations.

Nick P December 15, 2013 6:26 PM

@ Fab!

“(I hadn’t seen that discussion about trusting the fab operators, and any chain of custody). I’ll go back to lurking now…”

No need for lurking or anything. I was just trying to get clarity on where you were coming from. It’s all good. 😉

@ 65535

“I thought Microsoft was insecure. Why did Gen. Dyn. use it?”

The main strategy of most devices for processing classified information is domain separation. They try to use crypto, physical separation, virtualization, etc. to keep the low and high sensitivity material/operations totally separate. I know the device has two buttons representing sensitivity levels allowing you to switch between them. They probably use two different instances of Windows CE and put some strong isolation/crypto in to protect the sensitive one.

@ Bryan

It’s a decent plan. I’m entirely for a physical switch for firmware. For instruction set randomization, one of my schemes was to simply create a tool that changes the instruction names in the microcode, then do a microcode update. I don’t code chips at that level so I don’t know if this is feasible but it seems like a simple idea. It’s also consistent with my A1-inspired lifecycle security requirements like on-site generation of the system from trusted tools, source, etc.

The main gripe I have is that there are stronger methods in the links I have that are immediately usable. At least read all the abstracts of the hardware security mechanisms and let them really sink into your mind. The techs on segmentation and legacy code protection in particular have very strong methods. The main goals of security in our case should be controlling what states the system can be in with its execution such that none lead to a compromise. And of course detection and recovery where possible. I’m sure there are ways to combine these techs for thorough protection that we’re just not seeing yet, although my mind is brimming.

The random-ISA technique might also carry a risk when legacy software like JITs helps enemies execute code on your machine that they use to extract your instruction set or otherwise attack your system. CFI and CPU type controls have the most promise for stopping the largest classes of attacks with little programmer skill.

Note: One potential risk for your firmware idea is it will be subverted by govts directly or indirectly through industry. It’s been claimed that this has happened already with UEFI.

@ Iain Moffat
(Clive Robinson part of my reply to you is in here)

Thanks for joining the discussion, as we need more people in it who know what they’re talking about hardware-wise. 😉 And thanks for the suggestion, as I totally forgot about those! I actually posted a link here before to Magic-1, an amazing homebrew system. It used TTLs with 8086-class performance, ran Minix 2, and acted as a web server for a while. So Magic-1 definitely validates your idea that it can be done today and be useful.

Funny thing about you mentioning old mainframes and minicomputers: I was up all last night looking at them. I was reading manuals on Burroughs, GCOS, and Tandem, looking at the specific technical approaches they used for robustness. IMHO, they’re still ahead of many modern PCs in those regards: Burroughs’ protected procedures and high-level language for the OS; GCOS using segments for fine-grained memory isolation in addition to its paging (similar to Native Client); Tandem designing the system for linear scalability and five nines. I was also considering capability machines like Cambridge’s CAP and the Secure Ada Target.

So that’s my initial reaction. What’s your opinion on simply making a duplicate of one of these older machines and developing on it using modern tools? I mean, people used to do networking, accounting, secure messaging, etc. on this old technology. Especially with the TTL route, it seems like it might be easier to come up with non-subverted tech if we just create a modern version of a capability or segmented machine wired together with TTLs. The prototypes for hardware might be made quickly using modern tools, then the output carefully checked in an old-school way.

A few other uses of the lower-resourced TTL machine come to mind:

  1. Use it as the voter in a triple modular redundancy type system where the three workhorse systems are COTS for performance.
  2. Use it as a guard for air gapped networks with simplified protocols at the interface rather than TCP/IP.
  3. Use it as a root of trust containing the private key, source code hashes, directories, etc. Any service that might be justified in offloading to secure, dedicated server or device.
  4. Use it for private messaging and collaboration of many kinds, possibly with a gateway connected to it to translate from standard to primitive interface protocols.
  5. Full disk encryption for a SAN. Ok, I know this sounds ridiculous, but it popped in my head that the secure machine could front end to COTS storage machines. The secure machine would take in the plaintext, apply proper crypto, and store cipher text on untrusted storage nodes. It might maintain an index. Confidentiality, integrity and quick destruction of data would all be easier to do if the key security component was highly robust.
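Use 1 in the list above, the trusted voter in a triple-modular-redundancy arrangement, needs very little compute, which is what makes it a good fit for a low-resource TTL machine. A minimal sketch (the `vote` function is invented for the example; real TMR voters also handle timeouts and per-bit voting):

```python
def vote(a, b, c):
    """Majority voter for triple modular redundancy: any single faulty
    or compromised COTS replica is outvoted by the two that agree."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: more than one replica disagrees")

print(vote(42, 42, 42))   # all three COTS workhorses agree
print(vote(42, 13, 42))   # one subverted/faulty node is outvoted
```

The security argument is that the three workhorse systems can be fast untrusted COTS hardware, while only this tiny comparison logic, small enough to build from inspectable parts, has to be trusted.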

Each of these doesn’t seem like it would take too much CPU or storage for the trusted device. Each is quite useful. Each function in one form or another was performed by such machines in the past. So, there’s a precedent.

More Musings on Developing the Hardware

Another thing that comes to mind is that all the hardware might not need to be TTL or old. I’m sure that certain chips have simple enough function that COTS variety could be ordered. Heck, if the TTL chip does IOMMU, entire devices might be able to work untrusted without compromising system operation. Question is how much functionality really needs to be on the TTL’s or custom silicon for a general purpose system to work securely?

I could see using a mix of COTS components, FPGA’s, and TTL’s where each was selected based on risk. Tricky to decide which is which. For instance, last night I was thinking of combining a trusted memory interface with regular COTS RAM. An attack based on subverting the RAM to do more than read/write bits immediately jumped into my mind. The more of the system that can be untrusted the easier it will be to build.

It’s why I love designs where all one has to trust is the main chip functions. Makes things so much easier. Most of them take a lot of transistors though… (sighs)

@ Lynx

Thanks for the mention of the Cookbook. Might come in handy. Btw, what kind of effort do you think it would take to build a knock off of one of the segmented/capability/mainframe machines I mentioned using today’s TTL’s or other easily inspected chips? Also, 32-bit is my preference as well but it doesn’t have to be 32 bit. I’d say 16 at a minimum but something between 16 and 32 might be acceptable.

@ saucymugwump

“Some people here believe that fabbing (if that’s a word) in China or Russia is safer than in the USA, yet the former has declared that large parts of the Pacific Ocean are its private domain and is the top source for industrial espionage, and the latter is the top source for online theft. If the fab is in the USA, you can easily and quietly travel to it to check on things, but if the fab is in China or Russia, they will know you are coming long before you arrive. Not to mention the language problem.”

I wasn’t aware of any fabs in the US that let you thoroughly inspect each aspect of how they build your chips. Far as I recall, they don’t exist. US fabs must be trusted just as one trusts those overseas. Seeing as I’m the guy who posted the China and Russia solution I’ll explain the logic behind it (along with the critical context you left off).

  1. DOD-certified fabs in the US like Freescale’s, or US companies like Intel, are more likely to be subverted by the NSA than any foreign company.
  2. The Chinese and Russian chips are more likely to be subverted by them than NSA.

  3. My solution was to separate work onto different hardware based on who one worried about: (a) Chinese/Russian chips if worried about Big Five countries and/or Israel; (b) US chips if worried about Russia, China, Japan, and France. There’s other potential combinations.

  4. Use strong controls at interfaces, and preferably air gaps.

Combined with the “use old hardware” idea, it seemed a halfway decent interim solution until a better one was found. What I didn’t say was put any kind of blind trust into chips made in Russia or China which, as you pointed out, are high risk espionage countries. My strategy was more playing opponents against each other to avoid risks on each side.

re germany and finland

Germany’s government has been spying on its citizens too, I hear. That doesn’t sound safe from subversion, or honest, especially as they gripe about NSA revelations. The Finns, on the other hand, have been repeatedly spied on by both the US and Russia. If they haven’t been infiltrated by now I’d be surprised, especially seeing what the NSA did to Belgium. I have considered Finland for my operations, as I actually agree with you about people over there being less risky. As with chips, though, that would only be a tiny piece of my security strategy.

” AMD already has one in Dresden.”

That other chipmaker that’s headquartered in the United States and makes all kinds of important chips? Oh I’m sure the NSA wasn’t interested in them. 😉

Clive Robinson December 16, 2013 2:56 AM

@ Nick P et al,

    It’s why I love designs where all one has to trust is the main chip functions. Makes things so much easier.

Oddly perhaps, the first step on that is to select an architecture-agnostic system bus (of which there are a few from the old days).

Basically, you design your CPU blocks to work correctly with the agnostic bus, and likewise other blocks such as memory and I/O. The resulting upside is “Plug-n-Play” hardware; the downside is glue logic that can slow performance, such as dropping in “wait states” to deskew otherwise convoluted and edge-sensitive signals.

For various reasons you would need two buses: one for basic stream-type I/O and the other for the likes of memory and “block” type devices (ie those in the past that would potentially use DMA). Again this is not unusual for old-style systems such as the PDP11, various IBM systems, ICL systems and quite a few others; more modern ones being the likes of the modified IBM PC buses such as ISA, which resulted in the PC104 and similar buses.

The second thing to look at in a similar vein would be device-independent code, such that you can just swap the CPU block and, lo and behold, the code you have in your non/semi-mutable memory works fine at powerup, even though the CPU is from a different family (say from x86 to ARM).

And before people start thinking I’m “on the meds” etc, go have a look at byte-code interpreters such as the original UCSD P-Code, Java bytecode and the OpenFirmware F-Code (aimed specifically at PCI cards).
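
For a concrete feel of how such a byte-code layer decouples code from the CPU family, here is a minimal interpreter sketch. The opcodes are invented for illustration (this is not real P-Code or F-Code); the point is only that the same byte-code image runs unchanged on any host that can run the interpreter.

```python
# Minimal byte-code interpreter sketch with invented opcodes.
# Porting to a new CPU family means recompiling only this interpreter;
# the byte-code image in non/semi-mutable memory is untouched.

PUSH, ADD, MUL, HALT = range(4)

def run(image):
    """Execute a byte-code image and return the value left on the stack."""
    stack, pc = [], 0
    while True:
        op = image[pc]
        pc += 1
        if op == PUSH:              # next cell holds a literal operand
            stack.append(image[pc])
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()

# (2 + 3) * 4
program = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]
```
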

I’ve seen designs where you would have what you might call “firewall cards” that would bridge between a trusted system bus and an untrusted system bus. These have been used to build devices connecting systems at different secrecy levels, which act as high-end Network Pumps and Network Sluices (completely independently of the US Naval Research Lab).

Whilst this might seem overly complicated, it has the advantage of not just standardisation but faster development times for new components and significant component re-use, in effect following the Agile/Lean RAD “code refactoring” ideas but in hardware design (which oddly enough is where the ideas started, with the likes of the New York Telephone organisation).

65535 December 16, 2013 3:13 AM

@Nick P
I understand the concept of separating domains in an OS. We used MS CE cores for various hand-held scanning, character input, storage and transmission auditing terminals. I asked the actual manufacturing team why they used CE over Linux. Their explanation was simple: CE accommodated any binary from assembly to C++ and C# in a flexible fashion. My understanding is that CE is not that secure (Win 98 level of security).

Why would the President of the USA communicate extremely sensitive and top secret information with a Microsoft product or hybrid MS products?

To turn it around, if the President uses MS CE for encryption, why can’t your project incorporate it? It is small, light and flexible, and GD C4/AES/Type 1 encryption approved – and used by the President. Btw, I’m not a fan of MS!

Clive Robinson December 16, 2013 5:53 AM

@ 65535,

    To turn it around, if the President uses MS CE for encryption why can’t your project incorporate it?

First off and most importantly, we don’t know if he is using MS CE or even just some part of it. All we know is what he is alleged to use.

If you think back to the “Crackberry” craze a few years ago, when it looked like Blackberries were the new fix for geeks and Blackberry’s competitors “couldn’t get you any higher”: Obama was a “Crackberry addict”, and this caused no end of problems for the Whitehouse et al. In the end –so the story goes– the NSA “hardened it” and we had the Obamaberry…

Now I don’t know what the NSA did or did not do –or even if they had already done it considerably earlier when breaking into Blackberry code and systems– but if the NSA could do it for one mainstream mobile OS, I assume they have done it for the others (even though Win CE has never, even remotely, been a contender in the mobile phone market, even when given away for nothing, with MS later effectively taking over Nokia and turning it into an MS sub).

And if you think about it, those of considerably less technical ability and suspicious outlook than those on this blog might very well –because a sound bite that is in effect a product-placement advert says “The Prez uses Win CE”– quite incorrectly assume that vanilla stock WinCE is high-grade secure (which we know it’s not). Thus making the lives of the NSA et al very much easier…

Curious December 16, 2013 7:50 AM

I suppose the world of number theory is known for all kinds of odd things, but this “Sacks spiral”, a graphical representation of prime numbers, looked so funny I wanted to post about it here.

I did a quick search for the Sacks spiral and the following is one article that seems to explain it:

I initially heard of it on youtube:

According to Wikipedia, Robert Sacks devised this variant of the Ulam spiral in 1994 and the Ulam spiral was discovered by the mathematician Stanislaw Ulam in 1963.

Clive Robinson December 16, 2013 7:58 AM

@ Iain Moffet, Lynx, et al,

With regards to 40-year-old documentation about CPU design with TTL/ECL logic: terms change, and this can cause confusion (as I found out the other day 😉

Back in the 1950s a bod in the UK realised that building a CPU where the instructions were “hardware coded” was quite quickly suffering from the law of diminishing returns. So a proposal was put forward to make a simple state machine and what we would now call a “Diode ROM” to implement what we would these days call “Microcode”.

That is, you wrapped a simplified and optimised ALU and register set in a hardware-based “interpreter” to give what we now call an “instruction decode unit”.

Thus “microcoding” consists of two parts: defining the underlying ALU and register set, and then using what was eventually called Register Transfer Language (RTL) to define the actions of the “outer loop” interpreter created by the state machine and ROM, which caused the register set control lines to be sequenced correctly in response to a higher-level and more complex assembler instruction.
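
As a toy illustration of that state-machine-plus-ROM arrangement (all instruction and register names invented, not any historical machine): the “ROM” maps each macro-instruction to a sequence of micro-operations, and the “state machine” simply steps through them.

```python
# Toy microcode sketch: in real hardware the micro-ops would be control-line
# bit patterns driven by a diode ROM; here they are Python lambdas over a
# single accumulator register. Everything named here is illustrative.

def run(program):
    state = {"A": 0}  # one accumulator register

    micro_ops = {
        "CLR_A": lambda s, arg: s.update(A=0),
        "ADD_A": lambda s, arg: s.update(A=s["A"] + arg),
    }

    # The "diode ROM": macro-instruction -> sequence of (micro-op, uses_operand).
    rom = {
        "CLEAR":       [("CLR_A", False)],
        "ADDI":        [("ADD_A", True)],
        "DOUBLE_ADDI": [("ADD_A", True), ("ADD_A", True)],
    }

    # The "outer loop" interpreter sequencing the control lines.
    for instr, operand in program:
        for uop, uses_operand in rom[instr]:
            micro_ops[uop](state, operand if uses_operand else None)
    return state["A"]
```

Note how a more complex assembler instruction (`DOUBLE_ADDI`) costs nothing extra in the “decode” logic: it is just a longer row in the ROM.
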

One other technique that was being used to get better performance out of ALUs and other sections of logic was “pipelining”; back then it had a much, much simpler meaning than it does today.

Back then pipelining was to look at your logic design in terms of gate delays and natural choke points and put in latches. This had a number of positive effects; however the original one was simply to deskew control signals with regard to data signals and eliminate some of the metastability issues. It was initially looked on as bad news, as it increased both the gate count and consequently the overall instruction execution time. However, one advantage was that the reduction in gate delay, if taken far enough, allowed individual parts to have higher clock speeds. It was then noticed you could use “multi-phase clocks”, so that whilst the external memory clock might be a few tens of kilohertz the internal CPU clock could be four or more times this, but only with considerably more complex RTL.

So what we now call pipelining was unknown back then; old-style pipelining with RTL, and later multi-phase clocks, were just the first two steps along the road. Since then many other ideas, some seen as major/radical by the designers of CPU internals, have been added, but all of them are seen by “CPU-external observers” as just another addition to pipelining…

Thus whilst original pipelining added only marginal improvements, the now very many additions and now very complex state machines form a number of inner loops to another outer loop that does the nifty stuff we now see with modern pipelining, which in a number of cases sees multiple state machines, ALUs and register sets running in parallel, in effect running multiple instructions in RISC machines and multi-part instructions in CISC machines.

Also important has been the rise of data-source elements of assembler instructions. Back in the day you had one internal data storage register –the accumulator– on the output of the ALU, which “fed back” to one of the ALU inputs, and one memory-to-ALU transfer register on the other ALU input. Thus the number of data sources was minimal and in effect required no data-source decoding separate from instruction decoding.

Since then register usage has multiplied and the way data is fetched from memory into those registers has changed significantly. This has necessitated an extra “data source decode” stage in our pipeline steps, adding considerable complexity to CPU internal design.

Although rarely mentioned compared to RISC/CISC CPUs, we also have Reduced and Complex Operand Set Computers (ROSC/COSC) [1]. Stack-based CPUs have an implicit assumption that data is stored on the stack: you have the PUSH/POP instructions for getting data onto and off of the stack from secondary memory, and all instructions operating on data use the Top Of Stack (TOS) and Next On Stack (NOS) as inputs and the TOS for output. This significantly simplifies the assembler instruction set and removes the need for data-source decoding, which significantly reduces the state machine size as well as increasing throughput. Thus stack-based designs can have much simplified CPU designs for the equivalent performance of RISC-based systems but the memory usage of CISC-based CPUs, which, provided they are implemented correctly, provides a win-win advantage. Further, it often means that the L1 cache for the CPU does not need to be multiway associative, and thus simple high-speed RAM can be used.
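
The implicit-operand point can be made concrete with a toy word executor. The word names follow common Forth usage, but the machine itself is invented: every arithmetic word takes NOS and TOS as inputs and leaves its result on TOS, so instructions carry no operand fields and hence need no data-source decode stage.

```python
# Toy stack-machine (ROSC-style) executor: operands are always implicit
# (TOS and NOS), so "instructions" are just bare words or literals.

def execute(words):
    s = []  # the data stack
    for w in words:
        if isinstance(w, int):
            s.append(w)                    # literal: push onto stack
        elif w == "DUP":
            s.append(s[-1])                # duplicate TOS
        elif w == "SWAP":
            s[-1], s[-2] = s[-2], s[-1]    # exchange TOS and NOS
        elif w == "+":
            tos, nos = s.pop(), s.pop()
            s.append(nos + tos)            # result replaces both operands
        elif w == "*":
            tos, nos = s.pop(), s.pop()
            s.append(nos * tos)
        else:
            raise ValueError("unknown word: %r" % (w,))
    return s

# 3 DUP * 4 +   ->   3*3 + 4
```
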

However, if the underlying architecture is Harvard then instead of having an “instruction cache” the RAM can be used to act as a threaded-word cache, which in essence means the most frequently used instruction words end up effectively inside the CPU, acting as a changeable extension to the microcode… Thus the design can be further reduced in that RAM can replace the microcode ROM, with the state machine designed to read the contents of the RAM in, in response to a reset. From a security aspect this gives an opportunity to actually use a microcontroller to halt the CPU and load in a highly customised microcode that allows for all sorts of custom words and word-label swaps, such that a program in the usual system mutable or semi-mutable storage has little or no meaning outside of the system.

I’ve been involved with a specialised embedded design that does this –though instead of using bit-slice components it uses over thirty very cheap DSP chips– and the embedded program in each unit is in effect “code book” ciphered at the word level, which makes malware attack difficult at best and likewise makes forensic examination of “frozen RAM” difficult. Further, the code that goes into the microcontroller is itself encrypted with a modern cipher and uses public key techniques to protect it (similar to a PGP system), with the unique private key stored in the microcontroller. Whilst not impossible to crack, it does make reverse engineering and code theft difficult, but allows fairly easy upgrades or customer-specific custom upgrades.


Winter December 16, 2013 8:36 AM

I have some difficulty understanding what you intend.

If I summarize, you take a “simple” processor and emulate another processor on it (in microcode or otherwise). This emulated processor has randomized (obfuscated?) instruction codes and a randomized (obfuscated?) encoding for other values (randomized ASCII/unicode bytes?).

That is, every symbolic value, from instructions to unicode, gets randomized. Possibly, you encrypt memory too?

When you want to calculate something, you “obfuscate” code and data, and then decode the results again?

vas pup December 16, 2013 9:46 AM

@Uhu: “If this is done right, one could maybe even negotiate an international treaty that would allow to access data in other countries”.
International treaty as a source of international law assumes reciprocity, equality (no exceptionalism for any of the signatory countries), clear definition of the jurisdiction of each participating country, and civilized dispute resolution (not based on “muscles” – number of submarines/ICBMs/aircraft carriers, etc.). That is my vision of doing it right, plus transparency.
I support many of the other suggestions provided by Uhu, this one in particular: “All the operating procedures (but not the details on cases) is public. So no secret laws and no secret courts”.

Mike the goat December 16, 2013 9:59 AM

Figureitout: re cellphone privacy betrayals. I have just put an article up on my blog about Google deliberately removing the excellent appops (which allowed somewhat granular permissions modification on already installed apps) from 4.2.2 and my slightly conspiratorial reasoning behind it. I figured you’d be interested.

CallMeLateForSupper December 16, 2013 11:35 AM

Anyone else see the “60 Minutes” interview of the NSA Director last night? Anyone else throw their TV remote when he coughed up his tired but tried-and-true we-don’t-TARGET-Americans answer to the interviewer’s pointed question re: NSA’s collection of Americans’ private info?

Dammit, General! The question was not did/does NSA target Americans’ private data. (“Target” implies doing knowingly, intentionally and with forethought.) Again, words matter. Let’s say my Hummer runs down a pedestrian while I’m driving it. A cop asks me, “Did you run down that person?”. I reply, “I never target a pedestrian.” That reply does not speak to the question that was asked; it answers a question that was not asked. Sadly, the 60 Minutes interviewer, who absolutely is not a stranger to TLA-speak, let the general get away with not answering.

CallMeLateForSupper December 16, 2013 11:49 AM

@ Tim#3

I’m kinda confused. The leaked memo from Mark Hughes says, “Bruce Schneier to leave..” but immediately says, “I would like to announce that Bruce Schneier is leaving…” So will BT lose Bruce or is that merely what Mark Hughes “would like”?

(Hughes also says “I’d like to thank [Bruce] … and wish him success…”, but Hughes doesn’t actually do either one in the memo.)

Figureitout December 16, 2013 12:04 PM

Clive Robinson
–Pretty neat design…But where is the microcontroller flashed from?

Mike the goat
–Just read it; yeah, it’s stories like that that make me happy I gave up my “smart”phone almost 2 years ago. Funny that one of the first apps I downloaded was a flashlight app, and I was perplexed why it needed access to call history, contacts, location, etc. I also experienced malware on that phone: Google Talk was on all the time and I couldn’t turn it off within the OS of the phone; I felt ZERO control over the computer-phone out of the box, and it would connect to wifis when I clearly told it not to. So a lot of settings by the user were overridden by something. It’s just like Bruce’s feudalism analogy: you’re not given control of your device; screw that.

Still, I’ve experienced bluetooth attacks (grrr) and another one that I don’t know about. So I’d be interested in what you find regarding the Hayes command set and the E911 pinging your phone whenever it wants.

CallMeLateForSupper
–If there was ever a propaganda piece, that was it. And I was just “on the edge of my seat” when they gave “exclusive access” to the agency. Then 3 analysts w/ their fake names described a social engineering attack (gasp!), basically phishing. Boring; I don’t believe a word, and they continue to lie.

Clive Robinson December 16, 2013 12:47 PM

@ Tim#3,

When I think back to the various places I’ve worked at, people have left for a whole variety of personal reasons, sometimes on short notice due to non-work-related, quite personal issues (I myself left one job due to the fact it’s now medically inadvisable for me to fly, and some time before that I left another job due to a significant opportunity coming up through somebody else’s misfortune).

Bruce recently announced he had accepted an academic role in the US, and prior to that his desire to work further in the area of research which gave rise to his previous book. So he has quite a few irons in the fire, some of which may be a lot hotter than others.

So unlike ‘El Reg’ I’m not going to speculate it’s “cloak and dagger” related for the sake of a “pot boiler” line, I’m just going to wait for Bruce to say what his reasons are in his own good time.

Nick P December 16, 2013 1:08 PM

re Schneier leaving BT

I’m with Clive in that I’m waiting to hear from him on that. However, Bruce has often been dodgy about BT questions and my interpretation was that he was simply playing it smart by keeping his employer out of his posts. I’d do the same thing.

The other aspect is how much did he know about BT’s cooperation with the UK or other matters like that? Being their main guy, you’d think they’d ask his opinion about aspects of it. I know too little to accuse anything and Bruce has certainly put things on his blog BT wouldn’t be so happy about so he’s independent enough already. Separating from BT gives the bonus attribute of him having no conflict of interest with any companies involved in surveillance state.

If he’s doing academia, he can also inspire and even contribute to solutions coming from there. As my Friday post shows, plenty of the best technical solutions are coming out of colleges. All these younger players need veterans to help them refine their ideas and catch problem areas as they appear. I’m not sure what academic position people here are referencing, but I’m sure the amateurs there will find invaluable Bruce’s experience in both security engineering and how things get done in a big business.

Anura December 16, 2013 2:02 PM

A federal judge has ruled that the bulk collection of telephone metadata by the National Security Agency is likely to be in violation of the US constitution.

The ruling, by a US district judge in Washington, is the first legal setback for the NSA since the revelations prompted by the former agency contractor Edward Snowden.

The judge, Richard Leon, ruled that the mass collection of the metadata of Americans’ phone calls was likely to be in breach of the fourth amendment of the constitution, relating to unreasonable searches and seizures.

I have some hope, and a lot of pessimism, that this ruling will help to limit the scope of the NSAs data collection.

Bauke Jan Douma December 16, 2013 2:16 PM

@time#3, @Bruce, @all concerned

The question now is:
What does that mean for future comments and articles that relate to BT, or future comments and articles that expressly do NOT relate to BT?
In other words, what does the separation contract say about one man’s free speech?

kashmarek December 16, 2013 4:31 PM


Judge: NSA Phone Program Likely Unconstitutional
Posted by samzenpus on Monday December 16, 2013 @04:13PM
from the stop-listening dept.

schwit1 writes in with the latest on a U.S. District Court ruling over NSA spying.

“A federal judge ruled Monday that the National Security Agency’s phone surveillance program is likely unconstitutional, Politico reports. U.S. District Court Judge Richard Leon said that the agency’s controversial program, first unveiled by former government contractor Edward Snowden earlier this year, appears to violate the Constitution’s Fourth Amendment, which states that the ‘right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated.’ ‘I cannot imagine a more “indiscriminate” and “arbitrary invasion” than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying it and analyzing it without judicial approval,’ Leon wrote in the ruling. The federal ruling came down after activist Larry Klayman filed a lawsuit in June over the program. The suit claimed that the NSA’s surveillance ‘violates the U.S. Constitution and also federal laws, including, but not limited to, the outrageous breach of privacy, freedom of speech, freedom of association, and the due process rights of American citizens.'”

kashmarek December 16, 2013 4:35 PM

Already referenced, but also at

Is Bruce Schneier Leaving His Job At BT?
Posted by samzenpus on Monday December 16, 2013 @12:55PM
from the parting-ways dept.

hawkinspeter writes

“The Register is hosting an exclusive that Bruce Schneier will be leaving his position at BT as security futurologist. From the article: ‘News of the parting of the ways reached El Reg via a leaked internal email. Our source suggested that Schneier was shown the door because of his recent comments about the NSA and GCHQ’s mass surveillance activities.'”

Dirk Praet December 16, 2013 7:59 PM

@ Anura

I have some hope, and a lot of pessimism, that this ruling will help to limit the scope of the NSAs data collection.

Only in an indirect way as a first step to moving the issue through the system and hopefully all the way up to SCOTUS.

Figureitout December 16, 2013 8:21 PM

//Joke RE: Bruce leaving BT
Well well well…Looks like the reason now comes out. Bruce’s default internet setup for ungodly freaky squid porn would be hindered…

False News was able to get a false quote from Mr. Schneier, “They can make the internet a surveillance state, but mess w/ my squid porn; no, that’s crossing the line. I’m done.”

Buck December 16, 2013 8:50 PM

@Bauke Jan Douma

I 2nd (or 3rd?) that question!


I wonder if you would now (or in the near future?) be willing to comment on the validity (or plausibility?) of a certain paper that posits BT has been quietly installing secretly snitching switches (thanks to a hidden DMZ VLAN) in the homes of their paying internet service customers.

The document was linked above by @renke, so I feel no need to post it here again.

Uhu December 17, 2013 2:00 AM

@vas pup
Good point. I thought reciprocity would be obvious, but this of course has to be one of the conditions. I think there are similar treaties on which this could be modeled, maybe for instance extradition treaties.

name.withheld.for.pbvious.reasons December 17, 2013 8:58 AM

Thoughts on Bruce’s employment status…

First let me preface by explaining my small company’s quandary:

  1. Aug 2012, received funding to propel small research company
  2. Sept 2012, Review Strategic and Business Plan, emphasis on investment efficiency
  3. Analysis of the technology landscape exposed some serious issues with potential short and long term impacts on business.
  4. Discovered active programs at major vendors/ISPs/Portals
  5. Oct 2013, Discovery compels analysis of business process exposure (intellectual property, integrity, reliability, etc.) and forces a complete rewrite of the strategic plan and a new business operations plan.
  6. Feb. 2013, the issues surrounding the FBI/NSA start to become apparent (independent research on my part links the Prism vendors to the US government).
  7. May 2013, Contact a national organization about vestibule process/contact–indicate to DC NGO leader the issue of the FED back-dooring a series of companies.
  8. June 2013, SNOWDEN–I am now concerned. It is obvious he is in great danger. He acted within the law, though as a contractor he might not be under the umbrella of the Federal personnel oath–“to protect and defend the constitution”. More than likely though, as the NSA is under the DoD, you are required to report illegal activity and not to obey unlawful orders.
  9. July 2013, Concerned about the business prospects, as the fallout is likely to have many consequences that will negatively impact the Tech community
  10. Dec 2013, Colleague recommends I look at doing something other than tech…

Using the described events as constructive, the result is a negative effect on citizens – period. And, understanding that Bruce is a citizen (I use the term in the most formal form: informed, involved, and robust), it is understandable that he’d be under extreme pressure.

BP December 17, 2013 5:59 PM

Bauke Jan Douma
Maybe the separation contract wouldn’t survive legal scrutiny if it purports to require the parties not to disclose illegal activity. Traditional contract law would indicate that it would not survive.

Just sayin’

Mike the goat December 18, 2013 6:41 AM

Name withheld: haven’t heard from you in a while. Can you confirm receipt of my last email, approx 14 days ago? Can understand you are busy.

Nick P: again, we are just speculating and I don’t want either of us to run afoul of the mod, but – hypothetically speaking – I would think that if Bruce discovered the conduct of his employer conflicted with his own very well defined, publicly known ethical persona then I imagine he would hand in his resignation in a heartbeat. I know I would. You could counter-argue that perhaps BT’s board was angered about having someone like Bruce – who thanks to the Guardian and his analysis of the Snowden material now has “controversial” emblazoned on his back – on their payroll when their own hands aren’t exactly clean (allegedly, anyway, if you believe the British press). I understand that there are almost certainly NDAs in place and that neither Bruce nor BT could likely comment in depth on this issue.

Suffice to say, I think it removes a big question mark that has been hanging over Schneier’s head regarding his potential conflict of interest. I can only speculate that Schneier has a diverse schedule and a comparably diverse income stream from things like book sales, signings, endorsements and speaking appearances so I suspect (and hope, as he is one of the “good guys”) that this will not cause him any financial distress.

If (and I expect this to be neither confirmed nor vociferously denied) Schneier’s departure has even a shred of connection to his work on the Snowden material, then I congratulate Bruce on standing up for what’s right and hope that he continues to be a bastion for free speech, limited government and internet security. On behalf of everyone who has read your work over the years – thank you.

anonymous coward December 18, 2013 10:20 AM

holy shit!

Acoustic cryptanalysis:

“Here, we describe a new acoustic cryptanalysis key extraction attack, applicable to GnuPG’s current implementation of RSA. The attack can extract full 4096-bit RSA decryption keys from laptop computers (of various models), within an hour, using the sound generated by the computer during the decryption of some chosen ciphertexts. We experimentally demonstrate that such attacks can be carried out, using either a plain mobile phone placed next to the computer, or a more sensitive microphone placed 4 meters away.”

“The acoustic signal of interest is generated by vibration of electronic components (capacitors and coils) in the voltage regulation circuit, as it struggles to maintain a constant voltage to the CPU despite the large fluctuations in power consumption caused by different patterns of CPU operations. The relevant signal is not caused by mechanical components such as the fan or hard disk, nor by the laptop’s internal speaker.”

“… in some cases, a regular mobile phone is good enough. We have used a mobile phone to acoustically extract keys from a laptop at a distance of 30cm… Using a sensitive parabolic microphone, we surpassed 4 meters.”

RobertT December 18, 2013 3:42 PM

@anonymous coward
Acoustic cryptanalysis:

Hardly news; Clive and I had a discussion about this very topic about 2 years ago. Most ceramic caps used to filter SMPS are efficient ultrasound transducers, and so the pattern of acoustic emanations from the filter cap varies with the CPU loading.

BTW this pattern also shows up on the mains wiring as a conducted EMI emission at around 100 kHz to 140 kHz. If you want to observe this, buy a PRIME-compliant (smart grid) electricity meter and just use the AFE (analog front end) plus mains coupling transformers.

Blue Dingbat December 18, 2013 3:59 PM

NSA Claims It Foiled Nonsensical Plot to Destroy US Economy
With more and more scandals emerging, the NSA is looking for silver linings anywhere it can get them. Today, it comes in the form of foiling a claimed plot to destroy the entire American economy – with computers.

The putative plot was based on the idea that there are “nation-states” who could invent a computer virus capable of forcing a false firmware update on every computer BIOS in the country…

The crazy thing is that the NSA itself is probably capable of exactly that. And may already have been involved in something like it.

Nick P December 18, 2013 9:45 PM

Recently, I’ve been pushing several different lines of inquiry into our hardware and software security. These included protected architectures to support safe/secure software, foreign made chips for lower [local] subversion probability, and typesafe languages to eliminate certain classes of errors. I cited quite a few examples of each.

Then, I read the Burroughs B5000 Wikipedia article to get specifics (in English) on how it handled protection.

I was pleasantly surprised to find references to several of the specific systems others and I have mentioned. The article claims they all tie into the B5000 in some way. Examples follow.

Forth: “The B5000 stack architecture inspired Chuck Moore, the designer of the programming language Forth, who encountered the B5500 while at MIT. In Forth – The Early Years, Moore described the influence, noting that Forth’s DUP, DROP and SWAP came from the corresponding B5500 instructions (DUPL, DLET, EXCH).”

Russian Elbrus: “their stack-based architecture and tagged memory also heavily influenced the Soviet Elbrus series of mainframes and supercomputers. ”

(Confirms what I thought about Elbrus being a mainframe-like chip.)

Tandem: “The NonStop systems designed by Tandem Computers in the late 1970s and early 1980s were also 16-bit stack machines, influenced by the B5000 indirectly through the HP 3000 connection, as several of the early Tandem engineers were formerly with HP.”

Language security: ” Kay was also impressed by the data-driven tagged architecture of the B5000 and this influenced his thinking in his developments in object-oriented programming and Smalltalk.”

I’m not sure what impression I should have of all this except to say my mind might be on the right track. The B5000 was a somewhat secure architecture whose concepts were reused in all of these. That means the improvements/advantages of each of its descendants could, in theory, be combined into a new secure architecture. This will probably just make me even more indecisive about copying/refining a secure old system vs designing a new one from best components.

Just thought I’d share that link and piece for everyone that’s been in the discussion on hardware.

Figureitout December 19, 2013 1:57 AM

This will probably just make me even more indecisive about copying/refining a secure old system vs designing a new one from best components.
Nick P
–Put your ideas on the market and try to make some money, or for others to tear them apart and/or build on. My ideas personally are way too much on the secure side and won’t sell any product in the market.

Also, I still think there are undiscovered ways of computing that will just add to the knowledge attackers have to learn. I find that the best defense is making attackers learn new things; make it too hard and most of the worst ones will just give up, as I believe the smartest attackers aren’t really all that malicious.

Figureitout December 19, 2013 2:06 AM

Ah, Nick P || Clive Robinson
–Something’s been bugging me about Forth as I keep reading about it. Here’s supposedly a quote from Mr. Moore: ” I remain adamant that local variables are not only useless, they are harmful. If you are writing code that needs them you are writing non-optimal code. Don’t use local variables. Don’t come up with new syntax for describing them and new schemes for implementing them. You can make local variables very efficient especially if you have local registers to store them in, but don’t. It’s bad. It’s wrong. It is necessary to have [global] variables. … I don’t see any use for [local] variables which are accessed instantaneously.”

–I’ve been warned to “take my traditional programming hat off” and think logically. I was previously told that global variables are very dangerous and to cautiously use them, do you think Mr. Moore is making a mistake here or does he have a point?

Figureitout December 19, 2013 2:22 AM

//Shout out to Aspie
–I’m still looking forward to when (not if 🙂 you get your computer working and to seeing you featured on hackaday. 🙂 I’d like to know where you’re at and any problems you’re having where maybe someone, or I, can help, b/c I have some time now even though I’ve got other stuff.

Nick P December 19, 2013 9:23 AM

@ Figureitout

I think in discussing this we need to remember that Mr Moore always wears the Forth hat. His focus in programming language discipline is so narrow and consistent that he sees everything in that light. So, he may make general statements that seem true in his experience but would be dead wrong in a mainstream language. I think this is one of them.

Local variables can only be accessed within a function, loop, class, etc. They’re allocated when the function is called and disappear afterward. Global variables can be accessed anywhere and don’t disappear. This means global variables tend to take up more memory right off the bat. They might also need to go on the heap depending on the size of the program.

More relevant to our field is information hiding and POLA. POLA says to give each execution unit as little capability as it needs. For code, this would be access to certain variables, functions or language/platform capabilities. Information hiding (i.e. modularity) says to hide details behind an interface to (a) make things easy to change and (b) enforce a form of POLA. This is so powerful that it’s been used to achieve POLA in highly assured system designs and was one of the fundamentals of running unsafe code in a sandbox a la Java.

(You could even say capability architectures are about “information hiding” behind capabilities or addresses. If true, it only adds more weight to information hiding principle.)

So, Moore is saying this is all BS and we’ll be fine if we just analyse the subtle stack interactions of a ridiculous amount of code and global data. Competing languages give us types, modules, global vs local variables, interfaces, smart compilers and so on to let us think closer to the ideas we’re implementing, and to catch plenty of errors before the user/hacker does. Decades of programming experience shows the latter approach is superior for code security (where it was a goal lol), maintainability, productivity, reliability, and so on.

If Moore achieved these in Forth at comparable levels, it would be more a testament to his own skills rather than correctness of his claims. 😉

Clive Robinson December 19, 2013 4:00 PM

@ Figureitout,

In some respects Charles Moore is right, because he’s coming from the bottom up, not top down.

If you think about it, without some nifty but inefficient coding in the high level language compiler there is no local just global at the assembler level.

Most high level languages that are type safe etc do this by hiding a lot of checking code from the user. It is possible, by ‘tagging’ registers and memory, to be type safe at the hardware level, but the work required at the logic level to do this is without doubt very large, and it’s not extensible. The same is true for more complex types defined in software: you have to write one heck of a load of code and have various support features not just in the compiler but built into the run time.

These hidden “safe guards” which protect “code cutters” from themselves are actually a hindrance to both the run time and to the “code cutter’s” development towards being a programmer or even a software engineer/scientist.

The run time is weighed down by all the checking and is thus inefficient, and the code cutter fails to build the required skills.

I’ve occasionally said on this blog and other places that the average ‘computer science graduate’ is not employable for serious real world software development, and that people should instead look at recruiting engineering, maths or other hard science graduates that have at least two years of assembler or other “bare metal” “real time” programming on their C.V.

It’s harsh, but it comes about from having to re-train CS grads to ‘ride without stabilisers’, and it’s a painful process. For instance, try using C to develop OO code but only using byte or hardware word pointers, to start getting the feel of what you are doing… if you can’t do this then you certainly can’t write hardware drivers or OS & compiler code that is going to be even remotely fit for purpose (hark, is that the sound of matches being struck and lighters being sparked to light a pyre I hear 😉

But without going to that length, sit back and think what local variables are –or should be– used for. Firstly they are used to pass vals/refs into subs/funcs, secondly as temporary value holders, thirdly as counters/etc for flow control.

In a stack based language you pass on the data stack, likewise temp data goes on the stack, and with most threaded languages the flow control is done via the language and stack(s). So the need for ‘local variables’ is obviated, unless of course you are using local variables for other purposes… and this brings up the issue of “re-entrant code”, which can be really abused by recursion, not least because behind the simplicity of the high level code there is one heck of a lot of inefficient and dangerous behaviour at the assembler level… and I would urge anyone with a yen for recursion to ‘think again’.

Now let’s talk about security: local variables should be stack variables, and most high level languages don’t zero them on exiting a sub; they remain in memory, with a clearly seen method of finding them with just a modicum of pointer abuse. Changing this behaviour so that the run time zeros the variables is language dependent; however, it tends to be easier to implement as an extension with threaded languages. But whatever the language, it means an added inefficiency and a decrease in code execution speed.

However C has the ‘heap’ and Forth has ‘user memory’, either of which can be abused to create local variables. The reality in both cases is that this memory is “global memory”: if you don’t clear it then other parts of the program will see it (if you read Deep C Secrets, you will find a classic example of this, with malloc and free use that leaked the password file).

So when you take a look under the high level language constructs and the inefficient or faux protection, there really is no such thing as local or global memory, just memory that can be accessed if you know how to do it, as any assembler level programmer would tell you.

Figureitout December 19, 2013 4:50 PM

Nick P && Clive Robinson
–Thanks, kind of figured, but thought it never hurts to ask; unless it’s a really stupid question. 🙂 I emailed Mr. Moore; don’t know if I’ll get a response, but think it would be cool if I could get him on here to make his point or share his ideas (maybe implementations) on a secure Forth system. Doubt it, but meh, worth a shot anyway.

Clive Robinson
–Actually I’m going to criticize my school a little (always willing and able to criticize something 🙂: they only have about 2 pure programming classes, and I felt really rushed; I want at least 2 more (just in C) to get all the nuances of C down and to go through code making sure I’m reading it correctly. Maybe it’s like the real world of making money; then it’s no wonder there are so many crappy designs and so much holey code. But I think they aren’t teaching programming fundamentals as well as they could be… You have to have the initiative to learn it on your own and hopefully not pick up bad habits or, even worse, false knowledge.

And yeah I’m going to try some assembler on my graphing calculator and read that book, lots of cheeky jokes in that book and the Forth one too lol. 🙂

RobertT December 19, 2013 5:22 PM

@Clive Robinson

Re: Forth language security

I’m definitely not an expert in this area but I have done some hardware design for a stack based language processor, way in the past.

In general, if you are trying to implement a stack based CPU, you want an independent adder for the stack pointers because these operations occur very frequently, practically every instruction. Typically you also want at least two separate stacks, a Data stack and a Return stack. Keeping the Data and Return stacks separate is very important: most stack processors are Harvard machines, so instruction flow and data flow are very separate entities, which means code can never be injected by manipulation of the data stream.

Example of a stack OP: ADD (in a stack language, takes the Top of Stack (TOS) and ADDs it to TOS-1, with the result returned to TOS). The instruction requires an ADD on the data path and a simultaneous decrement of the DataStack pointer (normally just called the stack pointer).

Note: the new TOS is the old TOS-1, BUT the old data is still present at new TOS+1. There is absolutely nothing in the normal operation of a stack based CPU to clear TOS+1; it contains the original TOS that was passed to the ADD, and will keep this data until it is overwritten. Fortunately, since almost all operations reference the stack, old data is frequently overwritten.

Similarly, when you return from a subroutine, the results are normally passed on the stack as TOS. When the return occurs, these results are POP’ed off the stack; HOWEVER, there is no way to clear the values above the current TOS. They simply continue to exist and can therefore be accessed by other routines.

Secure stack processors usually try to physically segregate memory so that globally stored data (non stack data) is simply not addressable, i.e. Get data (RAM@address) will return an error if the address pointer overlaps the memory allocated to the stack.

The biggest security problems resulted from interrupts. Interrupts simply grew the Return stack; they are implemented as a forced CALL instruction. So if an interrupt simply POP’ed the Data stack, it could access data that should have been local to some other routine. This is very difficult to protect against because different operations require a different number of variables to be transferred over the Data stack (an ADD just uses TOS and TOS-1, but a more complex routine might have 15 variables transferred, so the relevant data is stored in TOS to TOS-15). The only way to protect against abuse of the stack was to have the depth of stack as one of the variables; this was subtracted from the DataStack pointer and copied into a memory management register, which was compared to the DataStack pointer on each operation.

Now, the problem with software clearing of the stack is that the pointers are automatically incremented when stack operations occur. This means a clear would first require getting a suitable value to TOS, say 55H; doing a PUSH (put data on the data stack) increments the SP, so clearing 15 entries requires 15 PUSH instructions, and then, to get the SP back to pointing at the correct location, we need to do 15 POP instructions.

Unfortunately, most of the efficiency of Forth comes from the automatic background handling of the SP+ and SP- operations, but IMHO it is precisely these automatic operations that create many security weaknesses.

I’d suggest you’all think about the hardware memory management necessary to create a secure Forth machine because it is definitely not trivial.

Nick P December 19, 2013 9:06 PM

Partial Rebuttal to Clive’s points on stack architecture

I have to take issue with two points he’s making.

Point 1: Typed, higher-level languages don’t hurt performance or safety to level you suggest

Funny thing is that Forth is the best example to support my point. The machine running the Forth code is probably a register machine. It doesn’t have a stack or user memory: it has high speed local storage (registers), a slower cache, and a much slower/larger global storage (RAM). The amount of registers, cache, and memory might vary. Instruction sizes, timings and amount of parallelism might vary. The Forth code works despite this because it runs in a virtual machine that interprets and executes the code on the real machine.

That brings me to the next point: these languages are just abstractions on what a computer actually does. Abstractions can be designed to do many things, including performance. The C language, for instance, can be compiled to run so fast it took the place of assembler in most applications and its types can catch some basic errors. Fortran’s abstractions beat C on numerical performance. Languages in Pascal/Modula/Oberon family achieved great performance while being typesafe, readable and useful in large software construction.

The past shows us the other side of the coin. We know what a world coded in assembler looks like. Even with smart people doing it, there still were plenty of bugs, vulnerabilities and even occasional performance issues. The abstraction of assembler actually hinders the developer as the program grows because it becomes incomprehensible. The software also had to be recoded on new architectures. Portability, productivity, and robustness all led to the development of systems programming languages that made software easier to read, write, port, and understand.

From there, you choose the abstractions:

Want to never worry about memory management in your code again? Use a language with a GC and probably take a performance hit.

Need to control things down to representation of the bits? Ada is much higher level than C or assembler, but can still do this. Certain ML’s can too.

Want assembler with safety? Typed assembler.

Want dynamic language that processes text files faster than C? Python.

Want OOP that’s cache friendly, has no garbage collector and imposes little to no penalty? C++. People with more sanity might try Component Pascal or OOP-enabled Oberon. 😉

Want the full OOP experience? Smalltalk.

Want a language with referential transparency, and immune to memory, numerical and concurrency errors with good performance? Haskell.

Want a language that trades off some speed for dynamic types, nearly unlimited flexibility, optional static typing for performance, the ability to update/debug while running, and a macro system as powerful as a programming language? It’s called Common LISP.

Many tradeoffs. Some are very fast, some safe, some productive, and so on. The key thing is choosing the abstractions that let you focus your mind on doing what it does best. We don’t need to think like computers: computers are better at it. We just need to think about our problem and have languages that express it well. These tools must be able to produce machine code that executes our requirements. A stack machine doesn’t really follow that workflow unless your domain manages stacks (Fedex?). A high level language supporting user-defined types, modules, simplified memory management, and basic operators does support such a workflow.

My side of things is that plenty of languages offer plenty of abstraction that benefits safety with little to no negative effect on performance. Heavyweights like .NET and Java aren’t representative of the HLL concept in general: they’re just the choice of the mainstream. Here, I agree with Clive that they were a bad direction with many points against them.

Point 2: Stack vs Register machines

That’s debatable. As I said, most machines are register machines which means stack based languages require a form of emulation or indirection. The few studies on VM’s showed that register VM’s kicked the crap out of stack VM’s for several reasons.

  1. Register machines can keep constants in a register. Stack machines often repeatedly reload them.
  2. Although stack machines have better memory density as they don’t have to mention register names, register machines might take fewer operations to do the same thing. Example: addition on a stack machine takes two to three instructions, while on a register machine it might take one. Certain machines might do several in a single cycle.
  3. Stack machines are inflexible in that they force everything to work like a stack whereas many problem domains work better with registers, streams, special purpose logic, etc.
  4. The chips wars showed that register machines were better for performance, although not determinism/security imho, so they kicked the crap out of stack machines. Moore’s Law put the nails in the coffin. Give me a stack machine for Forth and I bet I can find a register chip with similar cost/efficiency that outperforms it even with VM emulation. Even Moore doesn’t make pure stack machines anymore. (wink)

So, the jury is still out on stacks vs other architectures, and the market mainly produces the competing model. That means the stack model can actually lose performance. Many HLLs can create efficient stacks and many register machines can emulate them quite well. So, it’s really… complicated. The best one depends on one’s circumstances, skills, budget and platform for integration. Not the simple answer some readers might be looking for, but I wanted to be clear that there wasn’t a clear answer.

Now, stack machines for security… that’s another discussion entirely. We’re still at the beginning of it. I hope some good results come of it as the machines are simple enough to verify along many lines.

Petter December 20, 2013 3:06 AM

There’s a Swedish company with patents in some countries, incl. the US, regarding secure communication without key distribution.

It’s said to be a blended DNA with a verification server initially authenticating each party.

It seems interesting, but do you really want to place your trust in a third party this way?

Clive Robinson December 20, 2013 3:38 AM

@ Figureitout,

Well you now have more “food for thought” 🙂

It will be interesting to see if Charles Moore does reply either directly or to this blog.

It’s sometimes difficult trying to “second guess” people’s likely reasons for their viewpoint when you don’t have the same information as they do.

@ Nick P,

As I said, much of the issue is due to which direction you are looking: hardware to programmer/user, or user/programmer to hardware.

As we both know, the real issues happen in “the messy middle”, not at either extreme.

There is a lot more ground there and a lot of issues uncovered, just two of which are why “bugs per lines of code” appears to be language independent –and thus favours programming in the highest level language possible for the task in hand– and byte code / assembler level access from high level code –which enables all sorts of end runs around type safety and security– giving a great deal of “uncertainty” as to actual functionality.

As a subject to explore it could fill quite a few squid pages.

With regards to register-v-stack, Charles Moore is certainly aware of this problem, as his later multi core designs show. His solution was to make at least the top two members of each stack registers, with the stacks actually being circular buffers, which works very well in DSP chips as well.

With a small buffer, stale local data on the stack will most likely be overwritten in just one or two instructions.

@ RobertT,

Clearing TOS, NOS and further-down stack entries is possible with a stack decrement (POP), provided the stack is built from latches with clear, used in single port mode. But without care it’s going to pull at least an extra clock cycle per POP, which would not be good.

With respect to having separate adders, yup, they don’t belong in the ALU for either data or instruction pointing or manipulation. In effect the adder is the single slowest logic element because, no matter how you cut and slice it, the carry has to cover the full bus width. Treating adders as separate entities that take extra clock cycles can, with some compound instructions, pay back in parallel operation. Likewise with MUL and DIV and other maths related functions, which should be seen as separate from the ALU. If I remember correctly, it was the 68K that was the first consumer level single chip micro with a separate adder for memory addresses, shortly followed by the x86 family, which had multiple adders for “segment” register use as a “poor man’s” MMU.

With regards to memory management, this is never trivial; even in the simplest cases there are sharp toothed gotchas swimming just below the surface all the time.

With regards to interrupts, yup, they are a problem whatever the design, and are a little like “air bubbles under wallpaper”. There are a number of approaches, conceptually the simplest being an alternative register set looking at different stacks, with data and signals passed via kernel buffering mechanisms. But this has its own problems in setup and response times; however, a proper context switch at this level, not just for hardware but software interrupts, can make multitasking a lot simpler.

BlackAngel December 20, 2013 1:30 PM

@Jonathan Wilson

re: NSA asks for a way to do its job without wholesale data collection

Good list.

I have a few observations, which relate to the entire fiasco and disclosures:

  1. They have broken so many laws, and are defending breaking those laws, that it has sent a message to everyone — in the US and internationally. This is a major failure. When governments become criminals they promote and legitimize crime and undermine any trace of moral authority they previously had.

Take a line from a song, “my dad tells me to quit smoking, but that hypocrite smokes two packs a day”.

  2. Fear is given as justification for lawlessness. This is the old “communist under every bed” thinking. It is paranoid, it is neurotic, it is flawed thinking. This manner of thinking is epidemic in unhealthy nations, tyrannies. It is a sign of serious corruption. When the founding “free” nation turns to the principles of tyranny, it requires just a simple push to collapse.

Saying there is no sign of the data collection being unlawfully used (e.g., political opponents are not arrested, and the like) is flawed reasoning. Harassment and extortion programs are likely already in effect. This means that “democracy” and “freedom” are now simply words covering over tyranny, the exact opposite.

This undermines any legitimate, loyal worker or soldier. They have moved to the side they claim to be fighting against already.

Above all, there simply is not a terrorist under every bed and this is the sort of thinking cowards resort to. It is shameful and a betrayal.

This behavior has to be distanced. It has to be shut down. Even if they ludicrously want to continue it, they at least have to go through the motions of shutting it down. Delaying that inevitability is only useful for political ruin of anyone supporting it.

We can watch and see how behind the curve these guys are, how slow they are to realize this. How they believe they can fight against it and continue to justify what is already very clear to anyone who is objective and not invested.

It is like watching a painful sitcom where a character performs a horribly deceitful and selfish action while the audience bites their nails wondering when and how they will finally end up in the trash.

They, like Nixon, have stepped on the wrong side of history. And like Nixon (and Hitler and Stalin and so very many others) are not quick enough to step back over to the other side. They remain in power, and have gotten away with it for so long. It is the same pattern always repeated in history.

BlackAngel December 20, 2013 1:37 PM

@Wesley Parish
“Anyone else picked up on the latest NSA “Security Diva” performance?
The funny thing is they perform like Tragedy High Drama divas, and I see and hear them as Burlesque Low Comedy starlets.”

Typical cry-wolf nonsense. Do they even try to vet these wild tales before sending them out?

Stealing and manipulating data is far more dangerous than stealing data.

Very basic principle in computer security, offense or defense.

If I thought they were trying to be incompetent to play down their incompetence, this would be extremely impressive.

Sad reality is: these guys are having a field day with funds and cushy, meaningless jobs where they can play pretend spy. Too much Hollywood in their brains.

BlackAngel December 20, 2013 1:39 PM

“If I thought they were trying to be incompetent to play down their incompetence, this would be extremely impressive.”

correction: “If it was they were trying to show themselves as incompetent to play down their competence, this would be extremely impressive.”

Har har har. Ironic mistake.

jdgalt December 22, 2013 1:21 PM

Beginning with this post, Bruce’s RSS feed no longer contains any posts or even their subject lines — just empty posts with “(no subject)”. Please fix!

Moderator December 22, 2013 2:32 PM

What feed URL are you subscribed to, and what are you using to read it? This is the current feed, but the older ones are all working as far as I can tell.

Petrobrass March 13, 2014 8:22 AM

@Figureitout “//Shout out to Aspie –I’m still looking forward to when (not if 🙂 you get your computer working and see you featured on hackaday. 🙂 Would like to know where you’re at and any problems you’re having where maybe someone or me can help b/c I have some time now even though I’ve got other stuff.”

Aspie’s answer is posted here:

And thank you very much, Aspie and Figureitout, for your commitment to making a computer from scratch. I am waiting for your documentation!

