NSA "Red Disk" Data Leak

ZDNet is reporting about another data leak, this one from US Army’s Intelligence and Security Command (INSCOM), which is also within the NSA.

The disk image, when unpacked and loaded, is a snapshot of a hard drive dating back to May 2013 from a Linux-based server that forms part of a cloud-based intelligence sharing system, known as Red Disk. The project, developed by INSCOM’s Futures Directorate, was slated to complement the Army’s so-called distributed common ground system (DCGS), a legacy platform for processing and sharing intelligence, surveillance, and reconnaissance information.

[…]

Red Disk was envisioned as a highly customizable cloud system that could meet the demands of large, complex military operations. The hope was that Red Disk could provide a consistent picture from the Pentagon to deployed soldiers in the Afghan battlefield, including satellite images and video feeds from drones trained on terrorists and enemy fighters, according to a Foreign Policy report.

[…]

Red Disk was a modular, customizable, and scalable system for sharing intelligence across the battlefield, like electronic intercepts, drone footage and satellite imagery, and classified reports, for troops to access with laptops and tablets on the battlefield. Marking files found in several directories imply the disk is “top secret,” and restricted from being shared to foreign intelligence partners.

A couple of points. One, this isn’t particularly sensitive. It’s an intelligence distribution system under development. It’s not raw intelligence. Two, this doesn’t seem to be classified data. Even the article hedges, using the unofficial term of “highly sensitive.” Three, it doesn’t seem that Chris Vickery, the researcher that discovered the data, has published it.

Chris Vickery, director of cyber risk research at security firm UpGuard, found the data and informed the government of the breach in October. The storage server was subsequently secured, though its owner remains unknown.

This doesn’t feel like a big deal to me.

Slashdot thread.

Posted on November 30, 2017 at 6:44 AM27 Comments

Comments

Ollie Jones November 30, 2017 7:23 AM

As you say, Dr. S., this isn’t a big deal. One can probably commit just as much electro-mayhem with the Kali distro of linux.

But it underlines the emerging inconvenient fact of infosec. All secrets leak. The question isn’t whether they will leak, but when, and how much damage the leak causes.

maggotification November 30, 2017 8:11 AM

I suppose the big deal is that this leaked at all. Of all people, something like the NSA should be able to prevent secrets leaking.

Clive Robinson November 30, 2017 9:25 AM

It’s not at all clear how the data got where it did, some sources say it’s a backup, some say the system was in development and turnd into a real turkey that was more of a hinderance than a help when field tested.

Thus this disk could be a “scrapper” that got diverted on route to the great secret dust cloud that over hangs some “recycling plant” with a fifth hand contract from the DoD.

We just don’t know “yet” so there is entertainment to be had out of watching how this little embarrassment gets swept under a carpet, and who gets to be the goat…

Clive Robinson November 30, 2017 9:28 AM

@ Bruce,

which is also within to the NSA.

Is the “to” superfluous, or is smething else missing?

David Rudling November 30, 2017 10:32 AM

@Bruce
What you didn’t highlight was this statement in the report.

“The disk image was left on an unlisted but public Amazon Web Services storage server, without a password, open for anyone to download.”

I would remind you of your other recent post about Amazon being used/trusted for high security storage.

Petre Peter November 30, 2017 11:04 AM

Remember! “If it’s in the news …” is not new: it is precedence: like Miss Cyberevo was preceded by Miss Sarajevo. “In order of some appearance” appeals to order and chaos in the same space but not the same time if to app.ear in the news it’s to become a.live. Dreams of change without the threat of life is utopia but they can create the trap of nothingness. Therefore, the biggest threat to life is sleeping because that’s when i am most vulnerable; however, that’s exactly when cruiseriders tell me i am on my own. Securing people after the fact secures cruising not people who pay to be secure. A peaceful pro.tester is the best test i have for security since without peace i cannot sleep to get about 16 hours of wake time. The cost of living becomes the cost of sleeping when the ones paid to protect the peace disturb it for recruitment purposes. Utopia takes violence out of war and…war becomes warrant instead of a peaceful System Update from under the thumbs of “we the people”.

wumpus November 30, 2017 12:36 PM

@maggotification

Until Snowden the NSA was willing to hand terabytes of top secret data to contractors with the “top secret clearance” blessing. This isn’t formally classified at all.

The NSA simply has too many secrets and this isn’t “secret”. The fact that it was exposed should be seen as the “system worked” (it wasn’t intentionally leaked, but manhours weren’t wasted securing information that doesn’t need to be secured.

Unless someone is willing to claim that this should be formally classified (to at least “secret”, which is effectively handed out to any US citizen who “needs to know”) I can’t see how this is a problem. If you are going to claim it should be secret, just how much more data does the NSA need to keep secrets (don’t ask about the datacenters they already maintain full of all their “superbuttonlipsecret” data).

To me, letting unimportant data be unimportant may be a huge step in the right direction for the agency formerly known as “Never Say Anything”. But I suspect that this is a single datapoint and it is far easier to blame the guy who late non-secret data slip than to be glad the system is working even better.

hmm November 30, 2017 2:33 PM

” Of all people, something like the NSA should be able to prevent secrets leaking.”

Except it’s not really an NSA program. It’s an ARMY program.

If you want to blame the NSA you have a few more layers of abstraction there.

hmm November 30, 2017 2:40 PM

“All secrets leak. The question isn’t whether they will leak, but when, and how much damage the leak causes.”

It’s more a function of how many people have access to something. Humans are humans.
Careless leakers shouldn’t exist at this level of the organization. But they do.

We’ve gotten into this laissez-faire attitude where people don’t get prosecuted for major failures.
Hillary is only one example, this has been going on for decades. Colin Powell had insecure comms.

And then on the flipside as others point out, we’ve got the 180 degree opposite tack on Snowden and whistleblowers.

This whole thing just breeds contempt for the mission, which is as important as ever it was.
Unfortunately I think these incidents will lead to less transparency instead of more.

Who? November 30, 2017 4:33 PM

Let me play devil’s advocate.

It is not Amazon’s fault, it is not a weakness on its cloud service that worked as expected.

It is not NSA staff fault either, they are human beings so it is unreal expecting them not making mistakes.

The problem here is that classified data should not be shared over widely reachable networks, it should not be stored on publicly reachable servers. Truly classified networks should have countermeasures to minimize the impact of unavoidable human mistakes.

Clive Robinson November 30, 2017 4:52 PM

@ Who?,

The problem here is that classified data should not be shared over widely reachable networks

Ahh but the alternative is not going to be popular with current cost cutting encumbrants in the executive.

We started getting into the leaking game when the politicos said that COTS soloutions were the way to go to save money. So consummer equipment might be cheap, but it’s also widely compatable with other cheap easiky available and nigh on impossible to trace consumer devices. Like oh CDRW (manning) and thumbdrives (snowden) and other places such as poping up in middle east markets with secret information on them.

Thus to save money public networks will be used with inadiquate[1] precautions in place.

As they used to say “You reep what you sow”…

[1] I think it’s finally dawning on people that COTS equipment on publicaly accessable networks “IS NOT SECURABLE” without specialised techniques that guess what involves very expensive non COTS equipment and setups…

Sancho_P November 30, 2017 6:49 PM

It seems @Bruce only skimmed through the UpGuard page(s)? [1].

Yup, let’s downplay it because sensitive leaks are just normal nowadays.
So it isn’t a big deal, let’s call it transparency (hint, hint: Snowden).
But the bummer is the content, it’s what they feel to be important in the battlefield: Almost a decade of facecrook and shitter postings.

American citizen or just ordinary world human: Take care whatever you post!

Funny is to call “intelligence” what others would call “stupidity”.
Our govs produce fake data (like social media postings “to influence elections”) and then collect them back to gather intelligence.
It’s an increasing whirl of wasting taxpayer’s money, just in a time where we can’t explain to our grandkids why we have pensions but they will have war.

[1] (mind the date)
https://www.upguard.com/breaches/cloud-leak-centcom

Clive Robinson December 1, 2017 12:25 AM

@ Sancho_P,

But the bummer is the content, it’s what they feel to be important in the battlefield: Almost a decade of facecrook and shitter postings.

The Western military has worked on the “total domination” principle for atleast a half century now. But the roots go back into WWII and what became of not just Ultra but the less talked about Traffic Analysis.

A very significant but not much talked about part of breaking the Enigma system was “Known Plaintext” an idea that had come about in Room 40 during WWI. In essence every broken message was analysed for information and that was stored in what was called “the registry” which was millions of file cards. It’s still not publicaly known just what information was kept, but we do know the registry contents went on to form the basis of traffic analysis.

Importantly the notion of “keep everything” as you don’t know when your opponent may change things and old information become vital to gain entry again. Became the idea that any and all daya from “collect it all” would become the basis of new methods. One such from WWII was noticing when things were not quite right, became a further technique for spotting Deception attempts by the opponent for various reasons. The lessons learnt from that assisted the allies in creating a whole fake army designed to fool Germany about when and where the D-Day landings would be.

And at this point you might just see the subtal trap that military thinking fell into. We give it fancy names such as “The Big Data issue” or the “Information haystack problem” in essence it’s “Information overload”.

The likes of the NSA have always relied on the idea that “Information is key”. Whilst it is to some functions it can also induce the phenomena of “Paralysis by Analysis” where any attempt to sift the vast never ending haystacks of data becomes impossible due to lack of analytical resources.

As far as we can tell currently the way these information stores are used is as “time machines”. That is the assumption that any event has prior events leading up to it or “cause and effect”. The thinking is augmented by the idea that events are not issolated but related or even coordinated. Which thus becomes a giant game of “join the dots”. When an event happens you go backwards in time looking at the players previous contacts and then move forward on the contacts contacts etc. This zig-zaging enables lists of personnel to be drawn up and the links between them.

In theory these links enable any coordination to come out which is in effect the opponents “order of battle”. You can then work out “Kingpins and lynchpins” in the opposition and work out who to watch and who to drop a hellfire or blackhawk on.

The problem is the difference between “mystic and reality” SigInt and IC agencies “cherry pick” successes and bury failures. Thus they look or successful than they realy are. The problem with success is it breeds expectations. We saw this after 9/11 with the “Why did they not warn us” questions that gave way to the largest shake up of the US IC.

Thus the desire to maintain the myth, which means any and all electronic communications are considered to be of use for such analysis, even though we can not yet do the analysis due to resource limitations…

And that is what this is realy telling us, not the breach of privacy we’ve known that has gone on for fifty years or so. No it gives us a glimer of where they want to go. We know about AI in drones turning them into “killer-bots” etc but people are not taking a step backwards and asking where does the intel for such drones/bots come from. We where told remember that “We Kill By Metadata” line? And the TAO “Find Fix and Finish” equipment… Well they are developing AI analysts to try to unblock that resource bottle neck. Thus the human analysts are looking for rule sets for Soft AI so technology can hunt for targets and their metadata.

The thing is it’s not going to solve the problem and we already know why. It did not take long for the opposition to realise that electronic communications is a liability with a lethal penalty. So there are a number of things they have done.

1, Where Possible don’t use it.
2, If it has to be used mitigate it.

And they have been reasonably successful. We’ve seen two mitigations, the first is the use of disposable cell phones but with a twist. Rather than “burn the burner” they pass it on to somebody innocent. This not only mucks up the contact analysis it also creates realy bad “collateral damage” a hellfire on a weading party sends out lots of politically bad messages for the “Kill by metadat warriors” sitting in airconditioned boxes in Nevada. And writes large around the world not just “Uncle Sam is a screw up” but also bangs the recruitment drum. Which brings us to the second mitigation, such bad news brings in lots of new interest from young people, all using electronic communications. Thus “new noise” to pollute any analysis database, and as a result alow some electronic communications by the opponents “below the grass”.

Back in the 1980’s –it has become clear looking back,– the acceptance of alowing the Ultra secret out was preferable to letting out “the real jewel in the crown” of SigInt tools “traffic analysis”. Unfortunatly the plan backfired due to rank stupidity in the highest levels of English Politics.

Thus a “Red Queens Race” has started between the analysts and opposition opperatives. The analysts have to run flat out to not quite keep up as the opposition evolve their behaviour.

The idea that AI could replace the analysts is predicated on the notion that the analysts will with AI asistance be able to evolve faster than the opposition. The notion is currently flawed in that AI is not yet at the point of finding it’s own rule sets to measure by. Thus the analysts have in effect moved from finding the opposirion, to finding rules that the AI can use to find the opposition or atleast cut down the load on analysts. After a little thought you can see one or two flaws in the plan…

I could go on to make further points but I think I’ve shown enough for others to think on. The game has moved on we are fighting a battle we can not win over privacy of our electronic communications because of the “comfort factor”. That is we know how to make them a lot lot more secure, but to do so is a little bit to much effort for thr majority of users.

The battle we should be stepping up for is the AI on big data one, it will cause not just military conflict but one heck of a lot of social injustice as we are already starting to see.

Cassandra December 1, 2017 3:03 AM

@Clive

As ever, you talk a worrying amount of good sense.

It is obvious that the Security & Intelligence services will want to use AI to search the captured communications data for leads. I’m sure that AI conclusions will be subject to some kind of review process followed by humans, but there are at least two problems with using AI:

1) Non-ruleset based AIs cannot give a traceback of the reasons for decision making – a trained neural network (like AlphaGo Zero) is opaque.
2) Selection bias, which operates on two levels: (a) people will tend to give more credence to AI-supported conclusions, and (b) because of (1) will not be able to determine easily, if at all, if the AI is biased itself (not least because the AI only operates on data it knows about).

It is entirely possible that people will be killed as a result of conclusions drawn by opaque AI from communications data. Now, AlphaGo Zero is a better player of Go than any human player, so that may be a good thing, and AI evaluated conclusions might be higher quality than solely human evaluated data, but on the other hand, nobody dies as part of playing Go.

“We had to kill him because the AI said so.” is a chilling thought.

This is not a new problem, and has been all over the media in the context of customer service and driverless cars e.g. see

https://www.theguardian.com/technology/2017/jan/27/ai-artificial-intelligence-watchdog-needed-to-prevent-discriminatory-automated-decisions
https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/
https://www.wired.com/story/ai-experts-want-to-end-black-box-algorithms-in-government/

AI bias is a problem because the AI itself won’t know that it is biased – an AI is dependent on the learning environment it operates in, and on the completeness and lack of bias in the training data it uses. The learning environment is set up by people who have their own biases – and while the following example is fanciful, it illustrates a meta-problem: how do you know that an opaque AI wont come to the conclusion that a good way to win a game of Go is to assassinate the opponent? ‘Obviously’ AlphaGo Zero can’t do that, but how do you prevent the same class of problem and inappropriate solution appearing in other AIs? If AIs become sophisticated enough to require teaching in human ethics, then who does the teaching, and what ethical system should be taught? If you think that is straying into the realms of Science-Fiction, then I’ll quote from one of the linked articles above: “Already, basic machine-learning techniques are being used in the [American] justice system.” and “…the [Wisconsin] state supreme court ruled …, reasoning that knowledge of the algorithm’s output was a sufficient level of transparency.” So we already have a situation where the length of a gaol sentence is dependant upon the difficult-to-challenge conclusion of a legally opaque AI.

Does anybody think that the Security and Intelligence services are not using AIs?

hmm December 1, 2017 3:59 AM

“AlphaGo Zero is a better player of Go than any human player,”

FWIW they’ve mathematically represented Go and picked the unbeatable position. Unless the computer fails to make the optimal move, there is no beating it. As complex as Go is, it’s a straightforward game with rather simple rules and limited dimensions.

Not so for the calculus of killing people in the name of security. It’s vague, open ended. Often wrong.

What works in one case will surely be the worst decision in another. As they try to ramp up the machine learning on such a thing, it will be a series of failures that they will rationalize as part of a process of perfecting it. The schizophrenic logic that results is not what any of us would want to rely on for our lives or safety ongoing. We would never agree to that. Anyone who would hasn’t thought about it at depth.

As we unaccountably inflict this upon the world in the name of fighting terrorism we inevitably invite it to be used against us also. What we’ve learned is that when it comes down to it, there are no rules that won’t be broken in the name of convenience or a powerful executive desire, and mistakes are more the rule than any well defined rules themselves. Mistakes, and lying about the aftermath for as many years as they can get away with.

http://foreignpolicy.com/2016/07/05/do-not-believe-the-u-s-governments-official-numbers-on-drone-strike-civilian-casualties/

The problem of killer AI is the same as the problem of killer soldiers, just less accountable and easier to use without the weak human element interfering – including morality or critical judgment of orders. Where they eliminate this weakness, they have also eliminated a failsafe.

Just on a practical, feasibility level does anyone believe that the various armies and defense contractors of the world are ALL going to abide some universal moral programming code, and that it would work perfectly and never be subverted for a nefarious purpose?

Even if we make rules, how can we expect them to be enforced 20-30-40 years from now?
We can’t enforce them now, if we even had an idea of how to go about it.

Mike Barno December 1, 2017 7:08 AM

@ Clive Robinson :

and work out who to watch and who to drop a hellfire or blackhawk on.

I understand dropping a Hellfile missile on a terrorist leader [even if it mostly uses horizontal rocket-powered motion rather than gravity drop], but wouldn’t it be ridiculously expensive and cause excessive collateral damage to drop a Blackhawk helicopter on him? Plus we would rapidly deplete the ranks of chopper pilots.

Clive Robinson December 1, 2017 7:37 AM

@ Mike Barno,

… but wouldn’t it be ridiculously expensive and cause excessive collateral damage to drop a Blackhawk helicopter on him?

I was using “on” in the slightly less strict sense as in poker “to lay a card on him”, or as your boss might say to one of your cow orkers who has a tough problem “Go drop it on Mike” or as certain law enforcment persons say “I’m going to lay charges on him, he can not crawl out from under”.

As I suspect you well know 😉

Whilst OBL did not physically have a black hawk land “on” him in 2011. The bullets from the guns of those who were on board certainly “landed on him” and they left one behind… Mind you the “fish food” story still sounds to pat to be realy believable (which is oddly probably why it’s true)…

wumpus December 1, 2017 9:08 AM

@Who: “The problem here is that classified data should not be shared over widely reachable networks, it should not be stored on publicly reachable servers. Truly classified networks should have countermeasures to minimize the impact of unavoidable human mistakes.”

Er, except that no “classified data” was found/leaked/released. If you want to blame NSA for not stamping every little executable they use “TOP SECRET”, I’m sure you find plenty of allies inside the agency, but most outsiders who have looked seem to think they use those stamps far too often and are thus bogged down with too much data looking for needles in haystacks.

“No Such Agency” by now admits it exists. It has to figure out what secrets it wants to keep and how much information isn’t worth the manpower to keep secret. There is little reason it would be worth heaping all the “classified overhead” on top of “government overhead” to keep secret something presumably irrelevant.

supersaurus December 1, 2017 10:55 AM

the problem with a strong AI that can make or modify its own rules is that it might decide we are too dangerous to keep around.

the program Eurisko was able to generate its own heuristics to defeat all human players in the strategy game “Traveller TCS” in 1981 and 1982. in 1982 it discovered a novel strategy destroying its own ships to win. relatively speaking this program was small and it ran on (by today’s standards) rudimentary hardware. in other words 35 years ago there was a successful program that generated its own rules, solved a complex problem using them and defeated all human tournament players. one can argue that real warfare is too complex for software, but that was an example where novel “thinking” cut the Gordian knot. it wouldn’t take a very “smart” AI to notice that the human species presents an existential threat. and please…don’t get the idea that a strong AI could be contained.

Petre Peter December 1, 2017 11:04 AM

@Clive Robinson

We give it fancy names such as “The Big Data issue” or the “Information haystack problem” in essence it’s “Information overload”.

It seems that the information overload has something to do with information overlords. It comes from the paradox of our willingness to protect our names with pseudonyms-also known as the Reset button, or “Click Here To Kill Everyone”. This is how reputation becomes a luxury and i have to start from scratch after the black.out. Yes, “democracy cannot exist without a secret ballot” but living under a pseudonym is the topic of utopia created by fear when i pay to be secure but i am not expected to be engaged in a conversation of equals with those employed by me-security makes them my superiors because they know the meaning of pseudonyms-the main tool of overloading also known as a disinformation campaign. i have seen this type of campaign in the 90s when Romania went through the Berlin wall. The Berlin wall was not collapsing, it was being upgraded to a seawall on whose tide, the West becomes the East and the East becomes the West. Data ownership laws is the best proof of this tide because in the west, i no longer own my data, my data is owned by my …lords.
The cloud must “retain minimal functionality” when it becomes cloudy; otherwise i will have a cloudy mind instead of the promise space. For me, the tool to avoid this unwanted tide is a Smart Bowl. “PP and Kaku drop Smart Bowl. The bowl looks at big data for early detection of cancer cells…” This drop could rebuild our healthcare system which is, after all, part of security. Give me a Birthday Party, and i will give you The Information Age Rap.sody.

Who? December 1, 2017 11:13 AM

@ Clive Robinson

Same happens to the banks on my country. Twenty years ago branch offices were connected by means of circuit switching networks. Expensive (not to say in cases a circuit was built to connect a branch office on a small town, let us say a one thousand citizens town) but secure. Now our central bank is on the last stages of a migration to VPNs built over Internet. Cheap and fast deployment, but open to a wide range of attacks.

Mike Barno December 1, 2017 3:51 PM

@ Clive Robinson :

…”Go drop it on Mike” or as certain law enforcment persons say “I’m going to lay charges on him, he can not crawl out from under”.

The only time I was put in jail, I saw a young guy lay a “Get Out Of Jail Free” card, from the Parker Brothers game Monopoly, on the county sheriff’s deputy who was booking him into the jail’s custody. Of course, the deputy dropped a “Go To Jail” card on the kid, who promptly lost his smile.

vas pup December 2, 2017 10:56 AM

This book is VERY informative on all aspects of cyber domain related to this blog:
“Dark Territory” related to the history of cyber war. Good, informative and enjoyable reading.

Nta December 2, 2017 4:51 PM

Unless the “army” wants to loose the next war, this disk is of primary importance because it has all its known security flaws and all its zero day exploits fixed/patched. Even comparing executables would be very informative.

Leave a comment

Login

Allowed HTML <a href="URL"> • <em> <cite> <i> • <strong> <b> • <sub> <sup> • <ul> <ol> <li> • <blockquote> <pre> Markdown Extra syntax via https://michelf.ca/projects/php-markdown/extra/

Sidebar photo of Bruce Schneier by Joe MacInnis.