Security Vulnerabilities in US Weapons Systems

The US Government Accountability Office just published a new report: "Weapon Systems Cybersecurity: DOD Just Beginning to Grapple with Scale of Vulnerabilities" (summary here). The upshot won't be a surprise to any of my regular readers: they're vulnerable.

From the summary:

Automation and connectivity are fundamental enablers of DOD's modern military capabilities. However, they make weapon systems more vulnerable to cyber attacks. Although GAO and others have warned of cyber risks for decades, until recently, DOD did not prioritize weapon systems cybersecurity. Finally, DOD is still determining how best to address weapon systems cybersecurity.

In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic. Using relatively simple tools and techniques, testers were able to take control of systems and largely operate undetected, due in part to basic issues such as poor password management and unencrypted communications. In addition, vulnerabilities that DOD is aware of likely represent a fraction of total vulnerabilities due to testing limitations. For example, not all programs have been tested and tests do not reflect the full range of threats.

It is definitely easier, and cheaper, to ignore the problem or pretend it isn't a big deal. But that's probably a mistake in the long run.

Posted on October 10, 2018 at 6:21 AM • 29 Comments

Comments

MarkH • October 10, 2018 10:49 AM

From personal knowledge, I can attest that Global Strike Command (yes, that's what they call it ... used to be called SAC), the arm of the US Air Force responsible for its nuclear arsenal, has been investing in a cybersecurity program for more than 5 years.

I'm not in a position to evaluate how thorough or effective that initiative has been. Given the nature of its responsibilities, it's logical and appropriate that Global Strike has been proactive about information security, and perhaps is "out in front" of other segments of the US military.

For what it's worth, I'm very confident that "the ultimate weapon" is not connected to the public internet. However, the assets and facilities of this Command are vast and sprawling, with many possibilities for vulnerability not directly connected with command and control of nuclear weapons.

Little Lamb • October 10, 2018 11:18 AM

http://sel4.systems - The world's first operating-system kernel with an end-to-end proof of implementation correctness and security enforcement is available as open source.

This sort of thing sounds like the answer to this sort of problem. Open source with a mathematical proof of correctness of the implementation.

Do these people have competitors? I do not believe in "first" and "only" or in the no-bid govt contracts for which US//Israeli MIC is notorious.

Obviously the prevailing culture of "proprietary" and/or "classified" software is inherently hostile to the idea of open source, provably correct software solutions to address security problems.

No response but stone silence from "the biz" and no one "in the know" wants to discuss the idea of mathematical proof as it relates to security-critical computer software anywhere.

Warren • October 10, 2018 11:53 AM

Hey, Little Lamb - why are they (sel4.systems) serving over http and not https?

Sounds pretty unorganized and unprofessional - especially for an org claiming to be focused on "security".

Jim Andrakakis • October 10, 2018 11:59 AM

@Little Lamb don’t forget that the military is still a customer and, like every customer ever, most certainly prioritizes features over security.

It’s just human nature. I’d even argue that, depending on the particular situation, it may not even be wrong.

wiredog • October 10, 2018 12:29 PM

The classified systems I've worked on have been air-gapped and on separate networks. (Of course, the systems attacked by Stuxnet were air-gapped and on separate networks, too.) Some of the systems were designed to be relatively high performance on low-end hardware but were completely un-networked or on a backplane system and thus had no security. Well, other than the men and women with M-4 rifles firing 5.56 mm ammunition.

Nixons Nose • October 10, 2018 12:54 PM

@wiredog

"The classified systems I've worked on have been air-gapped and on separate networks. (Of course, the systems attacked by Stuxnet were air-gapped and on separate networks, too.)"

Yes, and attacking the centrifuges in Iran required recruiting an asset to deliver the malware to the facility on a USB drive... It's not magic.

It doesn't matter whether computers are connected to the network or not if someone can access the hardware.

Nixons Nose • October 10, 2018 12:59 PM

@MarkH

"For what it's worth, I'm very confident that "the ultimate weapon" is not connected to the public internet. However, the assets and facilities of this Command are vast and sprawling, with many possibilities for vulnerability not directly connected with command and control of nuclear weapons."

Thanks, Captain Obvious. SIPRNet and JWICS aren't connected to the public internet either; what's your point?

As if that somehow makes them any more secure.

Any network and communications system can be exploited.

Security Sam • October 10, 2018 1:24 PM

Having poor password security
And unsecured communications
Create a perennial vulnerability
That brings the demise of nations.

Fred P • October 10, 2018 1:44 PM

Page 8: This appears to be possibly negatively useful advice: "Protecting a system also includes administrative processes, such as requiring users to regularly change their passwords"

Little Lamb • October 10, 2018 2:00 PM

@Warren

why are they (sel4.systems) serving over http and not https?

Good question. Even farmers have https these days. https://chicken.coop/

They might have problems with low-level tech staff at the commercial ssl cert shops who (A) fear competition in the security biz on behalf of their bosses, and (B) play dumb with the unusual top-level domain.

They appear to be publishing general information on an open source project. An analogy is that a newspaper publishing company is usually satisfied with low to moderate security on the lock-box coin-op vending machines, and only marginally concerned with their security as such.

They are on GitHub as well, which does serve over https: https://github.com/seL4, although there are plenty of human factors in computer security there which are not subject to mathematical proof.

Clive Robinson • October 10, 2018 3:50 PM

@ Little Lamb,

This sort of thing sounds like the answer to this sort of problem. Open source with a mathematical proof of correctness of the implementation.

It's not a proof of security.

To understand this you have to understand that the proof is "top down" to the CPU ISA at best. It cannot, for instance, stop variations on RowHammer and many other "bubbling up" attacks from below the CPU ISA level in the computing stack.

There are at best partial solutions to bubbling-up attacks, but they require quite extensive hardware, a couple of orders of magnitude beyond "memory parity" or even "memory tagging" (look at CHERI over at Cambridge, for instance; its "capabilities" are not bubbling-up proof):

https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/cheri-faq.html

Bauke Jan Douma • October 10, 2018 4:23 PM

The problem can be fixed by getting rid of those systems.
It's a no-brainer, and therefore probably an illusion.

Little Lamb • October 10, 2018 8:32 PM

variations on RowHammer and many other "bubbling up" attacks from below the CPU ISA level in the computing stack

It is still a very interesting methodology, and there is no reason it cannot be rolled out at the chip fab design level as well as the O/S level, assuming the mathematical proof part is legit and proves what it claims to prove.

To paraphrase the claim: "I can prove mathematically that I did my job correctly, and my implementation of a basic O/S performs correctly assuming the CPU performs as documented."
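In slightly more formal terms, the shape of such a refinement claim (my own schematic of the general idea, not seL4's actual theorem statement) is roughly:

\[
\text{HardwareModel} \;\wedge\; \text{ProofAssumptions} \;\Longrightarrow\; \mathcal{M}_{\mathrm{C}} \sqsubseteq \mathcal{M}_{\mathrm{abstract}}
\]

where the refinement relation ⊑ says that every observable behaviour of the C implementation is also a behaviour of the abstract specification, provided the hardware model and the other proof assumptions hold.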

This is still a significant claim which is not at all invalidated by RowHammer-level issues.

Of course "security" in general involves human and human-interface issues which are not subject to mathematical proof, so there is no danger of putting Bruce and colleagues out of work.

Little Lamb • October 10, 2018 9:03 PM

To my previous question about competition to seL4, I would tend to think of something on the order of GNU Hurd.

https://www.gnu.org/software/hurd/hurd.html

This one is probably on a more advanced ready-for-market level, as it appears that the "Debian" distribution has been more or less successfully ported from Linux to the Hurd.

https://www.debian.org/ports/hurd/

The Hurd actually runs on the Mach microkernel

https://www.gnu.org/software/hurd/microkernel/mach/gnumach.html

So another possibility would be to port the Hurd from Mach to seL4 to take advantage of its mathematical proof of correctness. (Or one might even consider porting the proof mechanisms from seL4 to Mach.)

"The Hurd" as such is intended to fill the gap between the Mach microkernel and what a monolithic kernel such as Linux (or, say, BSD) provides, and to mitigate and alleviate the "monolith" problems that lead to so much cursing and swearing on the lkml.org lists, ("the tangled webs we weave," etc.)

Clive Robinson • October 10, 2018 10:11 PM

@ Little Lamb,

To paraphrase the claim: "I can prove mathematically that I did my job correctly, and my implementation of a basic O/S performs correctly assuming the CPU performs as documented."

That "assumption" is the deal breaker. All attacks below the CPU ISA level invalidates the statment. You can not make those assumptions at all ever.

The big problem is that the computing stack in reality goes all the way down to quantum physics. Thus there will always be a layer below the lowest one to which you can apply your theorem checkers[1].

If you search back in this blog you will find @RobertT explained in some detail to @Nick P and others how you could do this in a way that few would ever be capable of finding...

Digging down the computing stack gets exponentially more expensive and essentially pointless.

The solution is to work out how to mitigate any implant, be it hardware or software.

For instance, let's assume an implant working below the CPU, at the RAM level or lower, can be used to read out any part of the RAM contents across the network. There is nothing you can do in software to stop this. What you can do, however, is put a hardware encryptor inside the CPU chip, at the address and data bus level, that encrypts the RAM. Provided the KeyMat never gets copied to RAM, that implant and all others in the same class of attack are mitigated. However, whilst that stops a below-CPU-ISA implant, it does not stop an implant at or above the CPU ISA.
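A toy model of that idea, purely illustrative and in Python rather than hardware (the XOR keystream and all the names are mine, standing in for whatever cipher and key store real silicon would use):

    import hashlib

    class BusEncryptor:
        """Models CPU-side bus logic that encrypts everything before it reaches RAM."""
        def __init__(self, on_chip_key: bytes):
            self._key = on_chip_key   # assumed to live only inside the CPU package
            self._ram = {}            # address -> ciphertext byte, i.e. what DRAM actually holds

        def _pad(self, addr: int) -> int:
            # one keystream byte per address, derived from the on-chip key
            return hashlib.sha256(self._key + addr.to_bytes(8, "little")).digest()[0]

        def write(self, addr: int, value: int) -> None:
            self._ram[addr] = value ^ self._pad(addr)

        def read(self, addr: int) -> int:
            return self._ram[addr] ^ self._pad(addr)

        def implant_dump(self, addr: int) -> int:
            # what a below-ISA implant reading the DRAM cells would actually see
            return self._ram[addr]

    bus = BusEncryptor(b"keymat-never-copied-to-RAM")
    bus.write(0x1000, 0x41)
    print(hex(bus.read(0x1000)))          # the CPU sees the plaintext 0x41
    print(hex(bus.implant_dump(0x1000)))  # the implant sees only ciphertext

As the comment says, this only mitigates the RAM-level class of implant; anything sitting at or above the CPU ISA still sees plaintext.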

[1] Without going into it again within a week: all systems are made of component parts. These parts need to reach a certain degree of complexity to be secure; below that level they are not. It can be shown that you can make an implant that will work with less complexity than is required to be secure...

Little Lamb • October 10, 2018 11:14 PM

"... assuming the CPU performs as documented." ... That "assumption" is the deal breaker. All attacks below the CPU ISA level invalidates the statment. You can not make those assumptions at all ever.

Granted, you do need to fix the instruction-set level problems in the hardware, but don't let that slow down the O/S work or be a deal-breaker.

Where is the open-source hardware design? A "corner-cutting" choice was made somewhere along the line to sacrifice some integrity and correctness for speed and performance. Just back off a little bit from that surgical scalpel at the "cutting edge" of medicine.

Most CPUs on the market (if they are not wildly overclocked) execute instructions "well enough" to bootstrap a provably correct O/S even if all their chip-level logic is not provably correct or even open-source at this point.

name.withheld.for.obvious.reasons • October 11, 2018 3:43 AM

Back in the day, when missile (not hittile) systems were tipped with thermonuclear MRVs and their control systems were under exploration, there were lots of 68000 (6502/6809/29000) CMOS boards (and others like the Z800, or custom PALs) and other evaluation/development kits; hardware-layer software (linear programming and binhex) was the most common off-the-shelf choice for developers. How many developers uploaded code using standard out to a simple binary (BCD) stream? We just needed to get the die rad-hard or MilSpec'd so we could ship it...

Boards (prototypes and production) had everything from terminal serial ports to one or more JTAG ports or some other debugger (ICE-like) left pinned, enabled, with a nice header. One chap who had worked on early FPGAs was responsible for a platform design; when he described the security profile of a Xilinx system, it turned out he hadn't realized that the tools to provide the runtime-level security had to be implemented. The assumption had been that it came for free (did its own LUT and cryptographic assembly somehow?). I just walked away; this kind of problem would only be resolved with the next generation (or upgrade) of these systems. Just keep the old fingers crossed and hope for the best.

name.withheld.for.obvious.reasons • October 11, 2018 3:55 AM

Not long ago I sat in on a project meeting for an avionics/flight control system where the design team had decided early on (the classic example of the tool defining the design, not the design driving the tools) on a systems platform based on an RT-Java implementation on a separation kernel. It does not matter that the system was EAL-?: the runtime behavior was not completely deterministic (non-profiled code, and JIT is not an excuse), as platform changes would be at script level (several orders of magnitude away from the platform performance domain).

Again, walk away; make sure you are looking at everyone even if you have to go out the door with your back to the door. Don't break eye contact until the exit can be reached.

MarkH • October 11, 2018 4:45 AM

@Nixon:

It's happened before on this forum, that folks made silly assumptions concerning systems they didn't know much about.

I stated the obvious, in the faint hope of forestalling such nonsense.

Signed,

Captain Obvious
__________________________________

The presence or absence of a public internet connection is significant, insofar as it determines the types of attack, required physical access, and specific hardware (and other capabilities) required to take control of a target system.

Some whiz-bang upgrade may break all of this, but at least until recently, nobody sitting in the comfort of their office or home had the capacity to command the unauthorized launch of a US long-range nuclear ballistic missile.
__________________________________

Stuxnet offers a useful example. When a target system has sufficiently limited external communication, "cyber" attacks may require physical access.

The penetrations in Iran, as I understand the matter, reflected multiple failures to secure physical access and to control personnel with sensitive access.

Physical security is hard, and really costly, but if done thoroughly it won't be defeated by a remote attacker, or without the breach becoming evident.

The folks who manage the US strategic nuclear arsenal have a lot of experience maintaining physical security, and an ample budget to fund it. I've seen some of their safeguards up close ... they may seem almost comically extreme, but have the effect of rendering unauthorized access -- at least, certain types of access -- practically unachievable.

Clive Robinson • October 11, 2018 4:50 AM

@ Little Lamb,

... but don't let that slow down the O/S work or be a deal-breaker.

I don't want it to slow down "correct" software development.

The problem is that it's not just "go for the burn" hardware development it's become "shot down in flames" hardware development.

And like it or not, as I indicated, what happens on one side of the CPU ISA level does affect the other side of the CPU ISA layer.

Although it caused a bit of an argument on this blog at the time, the hardware issues with RowHammer were known at the time it became public, and had been known since before DRAM really got going with 8-bit computer chips. Likewise the hardware issues with Meltdown and Spectre were not unexpected in general, because someone had already shown that the addressing logic inside x86 CPUs had become "Turing complete":

https://www.usenix.org/system/files/conference/woot13/woot13-bangert.pdf

If you look in the paper's references you will find that the academic community had been aware of what they had started to call "Weird Machines" since 2009.

Likewise this quite readable paper from the UK's Cambridge Computer Labs,

https://www.cl.cam.ac.uk/~sd601/papers/mov.pdf

shows that the roots of "weird machines" and "single instruction Turing complete engines" go back to before the late 1980s.
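For anyone who hasn't read it, the core trick the mov paper builds on can be sketched in a few lines of Python (my own illustration of the principle only; the real thing is done with x86 addressing modes, and a dict stands in for memory here):

    def equal(a: int, b: int) -> int:
        mem = {a: 0}    # store 0 at "address" a
        mem[b] = 1      # store 1 at "address" b; overwrites slot a only if b == a
        return mem[a]   # load from a: 1 iff the addresses collided, i.e. a == b

    def select(cond: int, if_true, if_false):
        table = [if_false, if_true]
        return table[cond]   # "branch" by computing an address, never by jumping

    x, y = 5, 5
    print(select(equal(x, y), "same", "different"))   # -> same

Comparison and control flow fall out of nothing but stores, loads and address arithmetic, which is why such "weird machines" keep turning up in places nobody intended to put a computer.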

The discovery of Meltdown and Spectre was just an extension of this. In essence the knowledge that they existed went back at least a year or two before then, but developing the Proof of Concept code fell to others who had more time on their hands "to play".

There is an awful lot wrong below the CPU ISA level and, as I've said often on this blog, "this is the Xmas gift that is just going to keep giving".

The fact that what we now call "weird machines" has been not just known about but taught since the late 1980s indicates very strongly that not just Intel but other CPU designers were well aware of the below-CPU problems and were for some reason ignoring them. It might well have something to do with "Management & Marketing's" desire for "specmanship" or any one of many other sins.

Now that it's finally in the more general mindset, more and more young researchers are going to "dig, dig, dig for glory" below the CPU ISA and CPU levels. And I would say that in the majority of cases what they find will have major and completely unavoidable effects on anything above the CPU ISA level.

Thus what we really need is not a top-down or bottom-up "proof" system but a "meet in the middle" proof system, which currently we really do not have.

Oh, and just to reiterate the "complexity" point: to catch or trap "bubbling up" attacks needs more complicated hardware to check that the Core Memory contains the values put in it by the "top down" provably correct methods; tagging alone is not sufficient, in the same way that "parity checking" RAM is insufficient. This involves a lot of extra complexity, and as we know, more complexity often means increased attack surface.

However, as I point out from time to time, "security needs a certain level of complexity to be possible", and currently we don't have the type of complexity we need below the CPU level.

One almost immediate solution that could be implemented is to take "critical functionality" like MMU, interrupt and similar tables out of Core Memory and give them entirely separate "privileged access only" memory inside the CPU chip. However, this has the downside that it would not work with "multiple CPU chip" systems. For those you would need entirely separate external memory that does not share the Core Memory pins or logic, which would be an unacceptable overhead on single-CPU-chip systems, which are still by far the majority of machines. Which illustrates the issues and trade-offs that have to be considered and chosen.

It's why in the past I've looked at an entirely different way to mitigate the problem.

MarkH • October 11, 2018 4:07 PM

Some more ruminations on the security of long-range nuclear ballistic missiles against unauthorized launch ...

[Disclaimer: because much about the relevant systems is (not surprisingly) secret, my knowledge is fragmentary; because they are frequently upgraded, it may also be out-of-date.]

When the command and control architecture for US strategic nuclear missiles was designed, there were grown-ups in the room. They realized that the danger of unauthorized launch was vastly greater than the danger of failure to launch. (An unauthorized launch could, in itself, precipitate the end of civilization; a failure to launch would minutely alter a destruction of civilization that was already underway.)

Accordingly, the US C3 (command, control, communications) system has always been designed so that the negative controls (inhibiting launch) are much stronger than the positive controls (authorizing launch).

The architecture assumes that an attacker (in the cyber-security sense of that term) may inject messages at any point, and provides a variety of cryptographic safeguards to protect against such attacks.

Though the architecture and some of the infrastructure are very old, this isn't really grandpa's encryption, because the systems have been updated at intervals of not many years.

I suppose the worst-case attack would be based on exfiltration of the secret(s) contained in the notorious case (nicknamed "the football") which is always kept handy to the US President. In principle, a possessor of this secret information could then use one of the available communication channels (for example, a satellite link) to inject launch commands.

The generation and protection of "football" secrets is, of course, subject to the highest level of physical safeguards. If unauthorized access to such secrets is discovered (with physical security, undetected access can be extremely difficult), the secrets can be promptly invalidated. Even if you smuggle them out without discovery, the secrets are volatile, because they are changed at intervals of a few weeks. Further, hacking into the command channels is probably a non-trivial challenge.

Then there's the question of negative controls. A civilian investigator of the nuclear C3 system has suggested that at a low alert level (the typical DEFCON 5 or even 4), the negative controls are so strong that even a fully authentic launch command originating from the actual POTUS would not be executed.

US nuclear missile launch might become automatic after a nuclear war had started, but under other conditions always keeps a "person in the loop." This precaution, far too costly for almost all other systems, renders many cyber-attack strategies infeasible. [Note that almost all of those in-the-loop persons have negative authority, but can take no positive action on their own.]

Another level of attack for land-based missiles would be between the Launch Control Centers and the Launch Facilities (silos). These don't depend on the President's "football," because a launch command from an LCC is presumed to have been authenticated. The secrets for the missiles themselves have a bit more shelf life, being updated annually.

Bruce Blair, a respected nuclear security analyst and himself a former Minuteman Launch Control Officer, has proposed that access to the buried cables is a soft spot in the system's security. Such an attack would require digging an access hole to the cable (there are tens of thousands of miles of them, and their routes are publicly known), most conveniently at a splice case, if you know where to find it.

Then you would need to open it up, identify the appropriate conductors, and break the electrical connections (presumably, without setting off any alarms). With the right equipment, and knowledge of the right secrets (again, such secrets being subject to extreme physical security safeguards*) you could then send the commands needed to assign a missile target, and to launch against that target.

But if I understand correctly, you would really need to do this twice, simultaneously. Minuteman is designed to launch immediately after receipt of two identical launch commands received via physically separate pathways. Without the second redundant command, the missile commences an automatic 90 minute delay, and broadcasts by every available means of communication (there are several redundant channels), "I'm counting down to a launch without a verifying command." Hopefully, this situation would draw some appropriate attention.
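If that description is accurate, the decision rule itself is simple to state; here is a toy Python model of the two-channel rule as I've described it above (the channel names, payloads and the 90-minute figure come from this comment, not from any authoritative source):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Command:
        channel: str    # identifier of the physical path the command arrived on
        payload: bytes  # the (already authenticated) command itself

    def evaluate(commands: list) -> str:
        payloads = {c.payload for c in commands}
        channels = {c.channel for c in commands}
        if len(payloads) == 1 and len(channels) >= 2:
            return "execute"   # two identical commands via physically separate pathways
        return "start 90-minute delay and broadcast a warning on every channel"

    print(evaluate([Command("buried cable", b"LAUNCH"), Command("radio", b"LAUNCH")]))
    print(evaluate([Command("buried cable", b"LAUNCH")]))

The point of the second, redundant path is exactly that a single compromised cable splice is not enough on its own.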
____________________________

* Each missile has its own secret "key", which is changed every year.

Wesley Parish • October 12, 2018 4:08 AM

@MarkH

under other conditions always keeps a "person in the loop."

This, iirc, was one of the points raised against The Gypper's SDI: that it effectively took out the "person in the loop" and was thus an even bigger threat than the risk of accidental nuclear war.

Seriously, I think the most dangerous security vulnerabilities would not be those in the Strategic Command, but those in the C3 of the drone warriors, particularly if there are militarized drones operating in the metropolitan state itself. Just imagine, it'd be a SWATter's wet dream. SWAT the LEA from the comfort of your own computer! Bring Drone War home to the Drone Warriors in the Capitol! Etc ...

It's American Exceptionalism, after all! (Somebody's got to be exceptionally stupid! :)

Clive Robinson • October 12, 2018 6:42 AM

@ MarkH,

I suppose the worst-case attack would be based on exfiltration of the secret(s) contained in the notorious case (nicknamed "the football") which is always kept handy to the US President.

Not as much as you would think.

The designers of the system realised from day one that the football was a very significant weak link in both directions.

Therefore there is another layer that will work even without the football but has to be used through the football.

It is rumored to be as simple as the Leslie Groves "random square" method of challenge and response.

Some have said that those on the "list" are given a plastic "snap card" with the random square inside it. The snap card is simply a more robust and tamper-evident version of a sealed paper envelope kept in a safe.
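A rough sketch of how a grid-card challenge-response of that sort could work in principle (my own illustration in Python, not a description of any real system):

    import secrets
    import string

    def make_card(rows: int = 5, cols: int = 5):
        # a grid of random characters; one copy sealed in the "snap card",
        # one copy held by whoever issues the challenge
        return [[secrets.choice(string.ascii_uppercase) for _ in range(cols)]
                for _ in range(rows)]

    def respond(card, row: int, col: int) -> str:
        return card[row][col]

    card = make_card()
    row, col = 2, 3                 # the challenger names a coordinate at random
    print(respond(card, row, col))  # the responder proves possession of the card

The strength is purely in physical custody of the card and the freshness of the challenge; there is no cryptography to break, only an envelope to steal.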

In the UK, for instance, it's fairly well known that one of the first duties a Prime Minister has is to write the "independent action letters" for the commanders of the UK nuclear deterrent, which end up in sealed envelopes in the "commander's safe" on each submarine. Tradition has it that these envelopes are never opened except under "conditions of war" and are thus destroyed unopened when the PM changes (it has been suggested they are actually kept at the National Archive near Kew under a "hundred year rule").

Another thing that seems amusing in these days of mobile phones and instant Internet is that there were arrangements not just with the National Defence organisations but with the Automobile Association (AA) and Royal Automobile Club (RAC) to use their radio communications networks to relay the launch command. Likewise the requirement that the PM's staff all carry at least six penny pieces when away from Downing St / Whitehall, such that the Post Office Telephones could be used without having to liaise with an operator.

But the funniest thing of all is the role that "chickens" nearly played. In the early days of aircraft-carried nuclear weapons in the 1950s they were big, around 7.3 tons, in a very large steel container. These were known as Blue Danube, but plans were made to also use them as land mines in Germany, and this was code-named Blue Peacock. The problem was that the actual physics packages were going to be kept in fairly deep holes in the ground, in woodlands, forests and river plains, for up to a week. However they had to be maintained at a temperature that was higher than would be expected (~57F) in such a deep hole. The solution someone suggested was "a chicken on a nest" heater. Apparently some scientist at Aldermaston worked out that a certain number of chickens inside the bomb case, with sufficient grain and water, would generate enough heat as well as survive for eight days in the hermetically sealed, anti-tamper-device-riddled bomb casing...

MarkH • October 12, 2018 1:00 PM

One useful response to this problem seems painfully obvious.

It could help to have a "red team" do their best to hack into each of these systems (in other words, conduct a penetration study).

And the Pentagon wouldn't have to look far ... the NSA is actually a branch of the Department of Defense, and has a gold mine of expertise on hand.

They could take a little break from illegal domestic spying, and actually carry out their intended function for a change.

SecReport • October 13, 2018 12:08 AM

“Assessing and Strengthening the Manufacturing and Defense Industrial Base and Supply Chain Resiliency of the United States” September 2018
https://s3.amazonaws.com/static.militarytimes.com/assets/eo-13806-report-final.pdf

“Team Trump is Protecting America’s Vital Manufacturing, Defense Industrial Base from Big Risks”
https://www.whitehouse.gov/articles/team-trump-protecting-americas-vital-manufacturing-defense-industrial-base-big-risks/

“Fortunately, President Trump has long recognized that to be strong and secure our nation must be able to rely on U.S. companies to manufacture products needed for our national defense. He understands that we must never become dependent on foreign nations to design, produce and maintain the aircraft, ground combat vehicles, ships, munitions, components of our nuclear arsenal, and space capabilities that are critically important to our nation’s defense…

This landmark report outlines ways to harness the capabilities of industry and government to work together to defend our country effectively and efficiently, ensuring that taxpayer dollars are spent frugally and wisely.”

Wesley Parish • October 15, 2018 5:17 AM

@SecReport

O brave new world, That has such people in ’t!

Keeping the supply chain in-house, so to speak, is one way of making sure that vulnerabilities that exist can't be blamed on the PRC. Instead, courtesy of Edward Snowden, we know that the NSA et alii will quite happily corrupt the supply chain on their ownsome.

And then, there's that famous quote from both Alan Shepard and John Glenn:

“I guess the question I'm asked the most often is: "When you were sitting in that capsule listening to the count-down, how did you feel?" Well, the answer to that one is easy. I felt exactly how you would feel if you were getting ready to launch and knew you were sitting on top of two million parts -- all built by the lowest bidder on a government contract.”

With USD $600.00 toilet seats - lowest bidder - it's caviar for the bosses' mistresses and tortoises all the way down for the rest. (I leave that for the inquisitive amongst the spooks to figure out: what level tortoise are they?)

Clive Robinson • October 15, 2018 7:24 AM

@ Wesley Parish,

I leave that for the inquisitive amongst the spooks to figure out: what level tortoise are they?

Ever hear the expression "Lower than a snake's belly in a waggon wheel rut"?

Well they still need to keep a digging and a digging ;-)

SecReport • October 15, 2018 9:48 AM

@ Wesley Parish

Your quote from Alan Shepard and John Glenn is a keeper! Nothing quite like realists with a sardonic eye :)

The DoD-led report on the Industrial Base and Supply Chain does have a series of recommendations: 19 in the Executive Summary, expanded into 24 under Section: “VII.A Blueprint for Action.”

Some of those are:

• Creation of a National Advanced Manufacturing Strategy by the White House Office of Science and Technology Policy, focused on opportunities in advanced manufacturing
• Department of Labor’s chairing of a Task Force on Apprenticeship Expansion to identify strategies and proposals to promote apprenticeships, particularly in industries where they are insufficient
• Diversify away from complete dependency on sources of supply in politically unstable countries who may cut off U.S. access; diversification strategies may include reengineering, expanded use of the National Defense Stockpile program, or qualification of new suppliers

The White House's Office of Science and Technology Policy (@WHOTP) released NSTC’s “Strategy for American Leadership in Advanced Manufacturing” in October of 2018; this month, that is.

Goal 3 of the Strategy “Expand the Capabilities of the Domestic Manufacturing Supply Chain” does mention a sub-goal to “Strengthen the Defense Manufacturing Base.”

There was no mention of bathroom seats or dual-use capabilities; well, I take that back, there is :)
