Schneier on Security
A blog covering security and security technology.
March 18, 2013
A 1962 Speculative Essay on Computers and Intelligence
From the CIA archives: Orrin Clotworthy, "Some Far-out Thoughts on Computers," Studies in Intelligence v. 6 (1962).
EDITED TO ADD (4/12): A transcript of the original, scanned article.
Posted on March 18, 2013 at 1:00 PM
If this is intended to demonstrate prescience of "Big Data", read Gleick's "Chaos" as a primer on non-linear dynamic processes. Back in 1962, many still thought that with a computer big enough and powerful enough they could predict the weather accurately months into the future.
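A quick sketch of why that hope failed, using the logistic map — the standard toy example of non-linear dynamics from Gleick's book (the map and parameters here are illustrative, not anything from the 1962 article):

```python
# Sensitivity to initial conditions in a simple non-linear system:
# the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
# Even a one-part-in-a-billion measurement error swamps the forecast.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-9)   # tiny perturbation of the start

# The error roughly doubles each step until the two "forecasts"
# disagree completely.
for step in (0, 10, 30, 50):
    print(step, abs(a[step] - b[step]))
```

No amount of raw computing power fixes this; the accuracy of the initial measurement, not the size of the machine, is the binding constraint.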
So basically, the CIA had "Pong" a decade before the rest of us! And many scoff at conspiracies... Ha!
I think it provides a useful reminder that "If we can just grab enough information about everyone, everything will become easy" is not an attitude that arrived brand-new in the last decade or so. It's a natural property of national security agencies.
@ Petréa Mitchell
It's a natural property of national security agencies.
Unfortunately also picked up by big corporations now.
This would have been the same era as McNamara's "whiz kids", eh? That didn't work out so well, thanks to GIGO...
In 1964, I started programming on the first computer that was available in the college I attended, an IBM 1620 Model I with 20,000 characters of memory (6-bit characters). It did not have hardware circuitry for math operations, instead using table lookup. It was called a "CADET" computer: Can't Add, Doesn't Even Try. Input was typewriter or card reader, output was typewriter or card punch (80-column IBM cards, the kind used on tabulating machines as far back as the year I was born). Graduate school moved up to an IBM 1620 Model II, with 100,000 characters of memory, real hardware for arithmetic, and disk drives (IBM 1311 with 1 megabyte of storage space) plus a high-speed printer.
Computer thoughts back then were far out, but achievable.
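For anyone who never met a machine without an adder: the CADET's trick can be sketched in a few lines. This is an illustration of the table-lookup idea, not an emulation of the 1620's actual add tables:

```python
# Adding decimal numbers with no adder circuitry, only table lookup,
# the way the IBM 1620 ("Can't Add, Doesn't Even Try") did it.

# Precompute (carry, sum) for every pair of decimal digits.
ADD_TABLE = {(a, b): divmod(a + b, 10) for a in range(10) for b in range(10)}

def add_decimal(x, y):
    """Add two equal-length digit lists (least-significant digit first)."""
    result, carry = [], 0
    for a, b in zip(x, y):
        carry_ab, s = ADD_TABLE[(a, b)]       # look up digit sum
        carry_in, s = ADD_TABLE[(s, carry)]   # fold in the incoming carry
        result.append(s)
        carry = carry_ab + carry_in
    if carry:
        result.append(carry)
    return result

# 473 + 958 = 1431, digits stored least-significant first
print(add_decimal([3, 7, 4], [8, 5, 9]))
```

Every "addition" is just two memory fetches per digit pair, which is why clobbering the table region in core memory could make the machine forget how to add.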
Oh dear. When the introduction advocates CROSSTABS ALL, you know it won't end well.
(Only... they're probably still trying it, aren't they?)
Interesting. Though he comes across as somewhat naive in believing that we'll be needing to predict election outcomes; these days they're all pretty much fixed from the start and everyone involved knows which way it'll go.
Off-topic, but y'all been following The Adventures of Brian Krebs lately? Hacked, SWATted, and DDoSed.
This thing reads like an early blueprint for Facebook & Google. A nice, rare example of insightful prediction.
One can only speculate what our data collecting government overlords are anticipating in the next 50 years...
"One can only speculate what our data collecting government overlords are anticipating in the next 50 years..."
That's easy. With the NSA center in Utah, they plan to collect EVERYTHING about EVERYBODY in the U.S. and beyond, all supposedly in the name of fighting terrorism. However, such outrageous objectives often struggle long term just to keep up (witness the IRS and its pathetic computing infrastructure, along with those of many other government agencies). The NSA data center is to go operational later this year, and they will have problems getting data to it (10 gigabit connections might not be fast enough), it will have problems storing all that data (25 acres might not be enough space), and it will have problems with CPU processing power just to obtain and store the data, much less process it for any information of value.
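A back-of-envelope check on that bandwidth point — how much a single 10 Gbit/s link can actually deliver (the figures below are my own illustrative arithmetic, not a description of the real facility's links):

```python
# What one 10 Gbit/s connection can carry, as a rough upper bound
# (ignoring protocol overhead, which only makes things worse).

GBIT_PER_SEC = 10
BYTES_PER_SEC = GBIT_PER_SEC * 1e9 / 8          # 1.25 GB/s
PER_DAY_TB = BYTES_PER_SEC * 86_400 / 1e12      # terabytes per day
PER_YEAR_PB = PER_DAY_TB * 365 / 1000           # petabytes per year

print(f"{PER_DAY_TB:.0f} TB/day, {PER_YEAR_PB:.1f} PB/year")
```

Call it roughly 100 TB a day per link. Against "everything about everybody," that fills slowly, which is why the number of links matters as much as the storage acreage.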
Eventually, new technology will address the shortcomings in data collection, storage, and processing (bigger, faster, & cheaper) but then there won't be enough time to convert the old data to the new technology so it can be processed. Hmm. That seems to be what happened to the IRS, not enough time and money to convert the old data to newer technology (if they don't catch you in 7 years, they likely never will).
Furthermore, the overlords will need analysis technologies that don't yet exist to process such huge data volumes in order to get anything of value out of it. Specific selective results perhaps, but not long term overall useful output. It is doubtful the data center will ever achieve its stated purpose, and will eventually go from collecting everything about everybody and degrade to collecting only a sampling of bad stuff that can be used against you (none of that data will EVER be used for you or to protect you).
Shortsighted? Maybe. Realistic? Probably so. Now, what are your thoughts on how this will all turn out?
> IBM 1620 Model II, with 100,000 characters of memory
Don't let your tears drown your memories in mist! Although the 1620 might in principle address that many digits, no 1620 was ever built with more than 60,000! And those were 60,000 (decimal, signed) /digits/. Which would equal 30,000 /characters/ at most, since each alphanumeric char was stored using 2 consecutive digits...
@alanm - If one is to rig an election without being obvious enough to trigger a possible revolution, one needs to know where the "close races" are. Especially in a country like the U.S., where the multi-layer scheme of elections mean some votes are significantly more valuable than others. Knowing the "fair market price" for bribes to election officials, software developers, or polling-place thugs can maximize benefit/cost.
BTW: in 1970 or so, there was an (ultimately unsuccessful) petition to have the source code of the ballot-counting software for the U.C. Berkeley student government elections published. So at least some people were aware of the political uses of information technology by then.
As Simon mentioned, if you are doing non-linear calculations, one part in a million garbage in can quickly give all garbage out. It is also rather surprising just how quickly you need double floats just to maintain accuracy well inside a single float's domain (something like a 10k-point FFT). There were many, many ways to get garbage out in 1962 that weren't known yet, and I'm sure there are plenty still unknown now.
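The single-vs-double point is easy to demonstrate without any FFT machinery. Here is a minimal sketch, emulating 32-bit rounding with the standard `struct` module since Python floats are already doubles:

```python
# How fast single-precision error accumulates: sum 0.1 a million times,
# rounding to 32-bit float after every addition, and compare against
# the same running sum kept in double precision.

import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

n = 1_000_000
term = 0.1

single = 0.0
for _ in range(n):
    single = to_f32(single + term)

double = 0.0
for _ in range(n):
    double += term

exact = n * term  # 100,000 in exact arithmetic
print("single:", single, "error:", abs(single - exact))
print("double:", double, "error:", abs(double - exact))
```

The single-precision running sum drifts by orders of magnitude more than the double, well before you get anywhere near the limits of a float's exponent range — the mantissa runs out long before the magnitude does.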
Short: The real and most relevant question is, "What questions are going to be asked of this vast database?"
The issue is not whether they can collect all of that information, nor if they can retrieve, sort, combine, and manipulate it. The answer to that is so "yes" that it's not funny, and as others have said, it's not limited to government, either. The capability is there, access to it is relatively easy, and the only controls are legislative. Unfortunately, between either membership in the government (law enforcement) or legislative lobbyists (corporate interests) the parties most likely to enact any controls don't want them to be effective, at least not against themselves.
But as mentioned, the real issue is, what questions will be made against the database. In perhaps the worst application, they could make an enemies list, and search for information that could be used to remove those people from society. In another bad scenario, they could find the "buttons to push" to steer elections and retain power.
Personally, I believe that the up-front wish of those doing this is that they want to protect the nation - they're basically well-intentioned, even if a bit off-the-mark as they go about it. I also believe that it doesn't have to be that way, it could be a lot worse, and perhaps the most important thing is to keep enough good apples in there to keep the bad apples from taking over.
But as mentioned, the real issue is, what questions will be made against the database. In perhaps the worst application, they could make an enemies list,
The worst case is that they could try to make an enemies list, and screw it up due to any or all of the following: outdated information, bad information architecture, bad usability, software errors, and the fact that most people are not good at constructing the kind of query today's computers expect.
It's one thing to collect an enormous pile of data, but quite another to be able to structure and manage it (and secure it, as Bruce has written about before) so that it's of any real use.
(Perhaps a good name for this compulsive behavior on the part of security agencies would be "data hoarding", by analogy with people who have hoarding disorders.)
After a quick check around the Web, I see the hoarder analogy has occurred to a few people already.
I always think of Orwell's comment about a boot being on mankind's neck forever. But it is rather humorous.
While they believe that all we have to do is gather all the information about every subject and every person, and cry that Big Data will save us, they do everything possible to ensure that people have every reason to foul up that data. The task is impossible because the data will never have any reliability: a system that inspires everyone to feed it radically imperfect data ends up with corrupted data, never Big Data. So Big Data will never be reliable.
Here is an example of an early "big data" system that, over time, turned out to be not so "big data":
While everyone thought Nielsen was employing all the new data as well, it wasn't, which shows that once implemented, such systems are difficult to keep current. Thus the NSA data center in Utah will probably have the same issues, eventually failing its targeted purpose (the data becoming old or just bad, and with high probability simply unusable).
Aha! With the CIA being taken out of the drone game, it is now attempting to compete with the NSA:
Let the data collection battles begin!
And, since the government claims any data in the "cloud" can be taken for government use, the NSA can usurp this CIA cloud data at any time (not vice versa, because the NSA data center is not in the cloud).
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.