Friday Squid Blogging: Prehistoric Sentient Squid—Or Not

There’s big news in the world of giant squid:

Researchers initially thought that this strange grouping of 45-foot-long marine reptiles had either died en masse from a poisonous plankton bloom or had become stranded in shallow water.

But recent geological analysis of the fossil site indicates that the park was deep underwater when these shonisaurs swam the prehistoric seas. So why were their bones laid in such a bizarre pattern? A new theory suggests that a 100-foot-long cephalopod arranged these bones as a self-portrait after drowning the reptiles.

Here’s a good debunking:

There is no direct evidence for the existence of the animal the McMenamins call “the kraken.” No exceptionally preserved body, no fossilized tentacle hooks, no beak—nothing. The McMenamins’ entire case is based on peculiar inferences about the site.

Another article. And another debunking.

Posted on October 14, 2011 at 4:07 PM


Daniel October 14, 2011 4:41 PM

The US Supreme Court will be hearing a case this term on a federal law known as the Stolen Valor Act. “The core constitutional issue in the case is whether and to what degree the First Amendment protects false statements of fact.” In other words, can the federal government outlaw lying?

I think the interaction of trust and freedom of speech is interesting. How can a society build trust if people are free to lie? But if people are not free to lie, then what does freedom of speech mean, anyway? Who gets to decide what is truth and what is a lie? What if the people who are supposed to safeguard the truth turn out to be liars themselves?

There’s a good discussion at the link above.

Gabriel October 14, 2011 5:05 PM

This isn’t directly security related, but I wanted to give an RIP to someone whose work has greatly affected security, for better and for worse. While the news focuses on the great tech leaders, a more influential man, Dennis Ritchie, inventor of C and co-inventor of Unix, passed away. Perhaps we can look at how his work and the infinite derivatives of it have affected security, from hard lessons learned (buffer overflows, privilege escalation) to advancements including multi-user time sharing with isolated processes.

Petréa Mitchell October 14, 2011 5:11 PM

I saw the “kraken” headlines, didn’t read the articles, and just waited for the Science News article for the real story. Now I see why SN never did an article; they can still recognize crank claims when they see them.

Gabriel October 14, 2011 8:13 PM

Getting back to the kraken, some articles I read referred to it as an octopus-like creature. So according to the researchers, what type of cephalopod is it supposed to be? Out of curiosity, Bruce, what is your take on the squids’ distant cousins?

Gareth Randall October 15, 2011 1:56 AM

A new backup algorithm (Triplyx) that avoids the need for passwords, yet is strongly encrypted. It writes three separate volumes – any one on its own is useless, but any two together reconstruct the entire plaintext.

This avoids the business risk of backups being stolen, or of losing the encryption password or keys. Intended for offsite backups.

As an example, storing a 100 kB input stream (split into two 50 kB halves D1 and D2, with A and B being 50 kB random pads) would result in the following being written to the devices:

Volume 1: 50k of D1^A, striped with 50k of B.
Volume 2: 50k of D2^B, striped with 50k of A.
Volume 3: 50k of D1^D2^A, striped with 50k of D1^D2^B.

Working Java implementation with full unit tests at Sourceforge:
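
For the curious, the 2-of-3 XOR scheme above can be sketched in a few lines of C. This is my own toy illustration of the construction as described (split, join12, and roundtrip_ok are made-up names, not the Triplyx API), and it uses rand() where a real implementation would need a cryptographic RNG:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define HALF 8  /* half the input size; 50k in the example above */

/* Split 2*HALF bytes of data into three volumes, each 2*HALF bytes,
 * using two random pads A and B, per the layout described above:
 *   vol1 = D1^A      || B
 *   vol2 = D2^B      || A
 *   vol3 = D1^D2^A   || D1^D2^B
 * Any single volume is fully masked by the pads; any two recover all.
 */
static void split(const unsigned char *data, unsigned char *v1,
                  unsigned char *v2, unsigned char *v3)
{
    unsigned char a[HALF], b[HALF];
    const unsigned char *d1 = data, *d2 = data + HALF;
    for (int i = 0; i < HALF; i++) {
        a[i] = (unsigned char)rand();  /* toy RNG; use a CSPRNG for real */
        b[i] = (unsigned char)rand();
    }
    for (int i = 0; i < HALF; i++) {
        v1[i] = d1[i] ^ a[i];          v1[HALF + i] = b[i];
        v2[i] = d2[i] ^ b[i];          v2[HALF + i] = a[i];
        v3[i] = d1[i] ^ d2[i] ^ a[i];  v3[HALF + i] = d1[i] ^ d2[i] ^ b[i];
    }
}

/* Reconstruct from volumes 1 and 2 only. (The other pairs work too,
 * e.g. with vols 1 and 3: D2 = (D1^A) ^ (D1^D2^A).) */
static void join12(const unsigned char *v1, const unsigned char *v2,
                   unsigned char *out)
{
    for (int i = 0; i < HALF; i++) {
        out[i]        = v1[i] ^ v2[HALF + i];  /* D1 = (D1^A) ^ A */
        out[HALF + i] = v2[i] ^ v1[HALF + i];  /* D2 = (D2^B) ^ B */
    }
}

int roundtrip_ok(void)
{
    unsigned char data[2 * HALF] = "0123456789abcde";  /* 15 chars + NUL */
    unsigned char v1[2 * HALF], v2[2 * HALF], v3[2 * HALF], out[2 * HALF];
    split(data, v1, v2, v3);
    join12(v1, v2, out);
    return memcmp(data, out, sizeof data) == 0;
}
```

Note the cost: three volumes, each the size of the input, hold one plaintext, so it trades 3x storage for dropping the key-management requirement.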

Nick P October 15, 2011 8:51 AM

@ Gabriel on Ritchie

The man certainly did a lot for the computing industry. I have to say, though, that his contributions to security were mostly negative. The B2-class Multics system was an example of good design:

  1. Minimal TCB in kernel mode (microkernel).
  2. Invented rings of protection.
  3. Invented virtual memory.
  4. Immune to stack overflows thanks to a reverse stack. (UNIX could’ve done that.)
  5. Good access controls and logging.
  6. Hardware-enforced memory protection, I think.
  7. Sensible, easy, OS commands and software.
  8. Plenty of uptime.

UNIX undid many of these good things, even when it could have kept them at virtually no cost. It was a nightmare in terms of security, reliability & usability. See the UNIX Hater’s Handbook for many specifics. (Funny that, even after $1+ billion of invested labor, Linux & the UNIXes still have some of these same problems.)

Additionally, a few small changes could’ve made C much safer to program in, but its design makes it extremely easy to screw things up, whereas languages like Ada or Cyclone make it hard to screw up. So, it was useful, portable, versatile, etc., but very bad from a security or even app-robustness perspective.

I see Ritchie’s work as a contribution to computing and software development in general. I’d rather not think about the literally millions of headaches he caused us IT security folks. He deserves a better memory than that.

Peter E Retep October 15, 2011 10:27 AM

Cross cultural files,
when initially set up at the Ivy League universities and their corresponding institutions,
produced one very curious initial finding:
no matter where the culture was on the surface of the earth,
no matter how far from the sea [i.e., Siberia, Mongolia],
every people had ancient folk tale traditions
about man-eating kraken.

Gabriel October 15, 2011 8:08 PM

@Nick P: The funny thing about criticisms of Unix is that they are all true, and yet often wrong, at the same time. I’m familiar with Multics, although it rode into the sunset before my time. From some research, it was HUGE for its time and very expensive, designed to run on a multi-million-dollar mainframe. Also, I believe it was a monolithic system; the microkernel didn’t really come into play until the 80’s. It also suffered its own share of break-ins and vulnerabilities, and had to evolve into a more secure system as well.

Remember, the archetypical machine to run Unix in the early 70’s was the PDP-11, particularly the 11/45. This was a machine with a couple hundred kilobytes of RAM, and the MMU was a separate optional card rather than something built into the CPU card. There is no way it could have run Multics; the kernel would have taken half the memory. Bell Labs built a system that would run on the closest thing to affordable and commodity hardware, which is why it was able to be ported to the 286 in the 80’s.

While Unix security sucked, everything at its level was worse. Remember, IBM had virtualization, out-of-order execution, speculation, etc. in the 60’s. We didn’t get that in the mini-computer and PC market that replaced it until the 90’s/2000’s. This isn’t security related in itself, but it just shows how “computers for everyone”, such as the mini and micro computers, took decades to catch up to big iron in almost all capabilities. Also, when you look at it, most Unixes were far more secure in design and concept than most other widely used products on equivalent machines. MS didn’t even get “security religion” until the last decade. Most of Unix’s limitations in regards to security are due to 1: a change in the threat model, particularly with the rise of the Internet, and 2: limitations in capability at the time, such as small memory and lack of certain hardware capabilities/protections.

Also, the Unix Hater’s Handbook is so full of hyperbole, half of it could be thrown out. This distracts from the other half that is valid criticism. However, as can be seen, the Unix community has been a strong learner overall, picking up from its mistakes. Remember, any modern Unix, properly configured, will be far stronger than the weakest links at any organization. HBGary Federal fell due to a stupid web vulnerability in some custom software they had written (bad idea), and HBGary fell because a collaborator who had access to a root account fell to a phishing attempt. The weakest links in any organization are the administration of the system, the applications that run on the system, and the people who use the system (weakest of all).

That’s not to say I disagree that we should see security increase in systems, including high-assurance systems that provide even better isolation. This is especially important for applications running in the most sensitive settings. Sadly, the prevalence of Windows XP on so many critical systems makes almost any Unix look very good by comparison. If anything, almost all Unixes are relatively easy to understand at a system level, and much easier to configure. Windows is a tangled rat’s nest below the GUI, and you really can’t tell what’s on there or even what is running.

Gabriel October 15, 2011 8:10 PM

@Peter: Couldn’t the giant squid be enough to explain the legends of the “Kraken”? Even if the Kraken referenced in the articles was real, it was extinct well before the last dinosaurs went extinct. There is no way this one could have shaped the legend, especially since unlike “dragons” (dinosaurs), squid did not leave remains for the ages.

Nick P October 15, 2011 11:20 PM

Your explanations regarding UNIX in the early days make sense. And I was certainly referring to the later incarnation of Multics: it was basically both a developing and a production product at the same time. I read the Multics vulnerability assessment & it was way ahead of its time (the assessment, that is). Many of the problems found there turned up in UNIX & even IBM mainframe OS’s. When I refer to Multics, I’m typically referring to the B2 version. (And yes, it was expensive: $7 million was the figure I found a while back.)

I guess we could say that UNIX was better than competing offerings, but not particularly strong on security. And that this weakness was justified at the time.

“However, as can be seen, the Unix community has been a strong learner over all, picking up from its mistakes. Remember any modern Unix, properly configured, will be far stronger than the weakest links at any organizations”


“If anything, almost all Unixes are relatively easy to understand at a system level, and much more easy to configure.”

I’d agree with the first, but the second point is questionable. Easy configuration, deployment & maintenance has always been a selling point of Microsoft server products. The first point is very important for security, though: it’s hard to say something is safe if you can’t see how it works (and if the docs suck).

Gabriel October 16, 2011 8:42 AM

@Nick P: Regarding MS products, I must disagree with you there. They have a nice shiny (although sometimes much more complex than necessary) UI that can make it easier to initially set something up. But after that, where do you go? The service management snap-in has always been rather cryptic, and the sheer number of services running in just a client OS is staggering. Task Manager? Dozens of SVCHOST processes, each of which corresponds to some service running as a DLL or something. Again, very hard to understand what is actually running on your system. MS configuration settings for hardening the system, particularly against executing off removable storage (and the horrendous, promiscuous autorun)? You have to go into regedit, where a simple mistake can break your system, and you can’t get back in with a rescue CD and just use nano/vim/emacs to save yourself (or a pretty GUI live CD). Go to C:\Windows\System32, and see how long it takes for you to figure out what each DLL, each EXE, and each CPL does. Not to mention the OCXs and other files.

On a Unix system, there is a nice little hierarchy that has been established over the years: /bin, /sbin, /lib, /usr, /usr/local, etc. You can discover everything going on in a Unix system, as sysadmin, using bash and your favorite text editor. Or you could use a fancy Explorer-like shell. You can discover all settings for any service by evaluating its entry under /etc and its init script launch options. I will admit, a number of them can be cryptic at first, but once you learn them, you can quickly get the full picture of how something is configured. And of course, most modern Unix systems have UIs at least as elegant as Windows. In fact, some UI elements are much friendlier than Windows. I find it easier to find out my IP address using NetworkManager GUIs than using the Windows 7 tray icon. Actually, Windows XP used to have a support tab that would reveal that information, but it appears to be gone on my Win 7 workstation at work.

Where MS is strong is in centralized tools for configuring clients. Of course, with many Unix systems, it’s easier to create a base image with the proper configuration and deploy it on all workstations or servers. Also, the package management systems that have been developed for newer Unixes are simpler, yet far more robust, easier to manage, integrate, and evaluate than WSUS. So, what I like about Unixes vs. Windows is that they have a much more transparent system configuration at a much lower level than Windows systems. Once you leave the GUI, you have to work really hard to evaluate and configure a Windows system.

Anyway, getting back to my original “thesis”: DMR’s work was where we all cut our teeth on security. His failures and successes are what taught us some of the most valuable lessons of the past 40 years. And I believe he recognized the limitations of his own system, which is a very important quality. Additionally, no one else had succeeded in developing a portable systems-level language until C came along. There were many other prior attempts that were greatly influential, but that seems to be how the market always works: someone creates a novel system that fails, and then someone else picks up some of the pieces and succeeds. Without a reasonably portable language, it would be much harder to maintain an ASM codebase for multiple architectures. Hard to maintain means hard to detect and fix security vulnerabilities, especially when you have to fix a core vulnerability in the algorithm at a high level across so many different ASM source trees. Having different codebases for every architecture also means the more subtle vulnerabilities in each system could be entirely different. So, in almost every respect, he is one of the giants on whose shoulders we stand, not just in computing, but in security as well.

Clive Robinson October 16, 2011 10:33 AM

@ Gabriel,

You forgot to mention one very, very important difference between *nix and Windoze, and that’s the learning curve.

In *nix it’s definitely a curve, and an individual can work their way up it on their own, as far and as fast as they wish to. The MS OS’s, however, are lamentably different: they do not have a curve but a series of steps, and it is well-nigh impossible to get from a lower step to a higher step on your own. And as MS keep changing things in the UI with just about every iteration of the OS, keeping current is partaking of a Red Queen’s race on the hamster wheel of pain, with the certain knowledge that unless you shell out large quantities of money you will fail, fall (often fatally) off of the endless upgrade cycle, and become out of date faster than last year’s Jimmy Choos.

Indeed, a whole parasitic industry has grown up around MS’s OS and software upgrades, which places a very high cost on ordinary users and individuals. Look at it this way: how long would a car manufacturer last if it not only effectively forced you to buy a new car every three years, but also moved all the controls around and changed their functionality, so that you would have to go back to driving school each time?

What appears to be little known is that although MS change the UI more frequently than some people change their underwear, the MS command line tools do appear to change more slowly, though they often lag behind the latest features. Partly this is to do with admins writing scripts in the various arcane successors to the .BAT file command language, and partly because their own staff just cannot keep up on the hamster wheel of pain.

If you look at the MS command line offerings, it is fairly quick to see that they actually took on, en masse, much of the low-level *nix functionality.

It often strikes me as funny that Bill Gates took on Dave Cutler in 1988 to “develop a better Unix than Unix” and ended up making a shoddy wannabe that is effectively a skunk in lamb’s clothing.
The result was MS “New Technology”, which as you noted has some very real and very significant security failings (such as the inability to find and manage processes in memory unless they choose to notify the kernel).

Oh, and whilst at MS, Dave Cutler’s reputation kind of sank, as much of what he touched turned not to gold but lead, and sank without trace… He is currently working on MS’s cloud offering, which, whilst it might be a commercial success if MS get their act together, currently shows many signs of being a security and training nightmare, and extremely costly to use both in real terms and in training etc.

It’s sad, because Dave Cutler (who is only a year younger than DMR) actually was involved in much of the hardware improvements that you and Nick P mention as being the “improving road of security”.

Daniel October 16, 2011 1:08 PM

@clive and others.

I think you are missing the larger point about product adoption and market penetration. Windows and even MS-DOS weren’t designed to be technologically elegant. They were designed as tools to speed the penetration of computers into American life. You can call the learning curve with MS products “parasitic” if you want to, but Bill Gates has another word for it: “profitable.” He’s not one of the richest men in the world for no reason.

IMO Gates is nothing but a Rockefeller with geeky glasses. I think his technological savvy is vastly overrated but his business acumen is vastly underrated.

BTW, I do think there is a cogent argument that some of the technological compromises that were made in the early days to get the general public, and business in particular, to adopt the PC as a tool have come back to haunt the industry. At the same time, if Gates had gone down the road of Apple in the 1980s, it’s probably true that the WWW would never have been invented and computers in general would have become a historical afterthought. It bugs me that some people treat the history of computing as a series of inevitable steps, as if the present couldn’t look any different than it does now. What a crock of BS. Like Gates or hate him, he oversaw changes that modified the world in profound ways.

Gabriel October 16, 2011 9:49 PM

@Daniel: This is starting to move off into platform trolling, which was not the original intent. Indeed, most of our discussions of other OS’s, including their weaknesses and strengths, are to compare and analyze the role of Unix and C in the evolution of computing and security. I don’t think anyone doubts that Gates was a smart businessman. It’s rather difficult to argue against billions of dollars.

If you look at the 40’s through the 60’s, computing was for the largest organizations. First, for defense purposes during and after WWII, such as breaking enemy crypto (Bombes, Colossus, etc.), creating better atomic bombs (Don’t know what they used), and computing artillery tables/better ballistics (ENIAC). Then for large financial institutions and corporations (Systems by Sperry Rand, Honeywell, IBM, GE, etc.). Big expensive machines, requiring a large room and substantial resources. Even the aforementioned Multics was from an era where the view of “mass” computing was a big expensive computer that everyone could call into via a phone interface in order to perform some processing. So far off the mark from where history went, and where it wanted to go.

Now, at this point, most of the books start talking about the microcomputer, which became computing for every office and, eventually, every home. They’ll talk about Apple, the IBM PC, the rise of Microsoft, etc. While that is true, they ignore the earlier part of this revolution: DEC. The PDP. Know why they called their systems PDPs? Because in the late 60’s, “computer” meant large Univacs, IBM 360s, etc.: the multi-million-dollar machines that required an entire staff to maintain. “Computer” was a turn-off to the smaller corporation or division that couldn’t afford this. Now, with the rise of the mini-computer, you could start with something about the size of a dishwasher. Perhaps a couple of racks, if you wanted a disk pack drive (RK drives), more memory, tape drives, etc. And a nice little console more friendly than blinking lights. And what OS and programming language rose to great heights in that era, particularly on the PDP and later VAX platforms? Unix and C. It started a distributed computing revolution, where an office within a company, or a university department, could afford a robust and relatively powerful multi-user system. In fact, for a lucky few, a PDP became a big personal computer. One could argue this revolution in the 70’s is what naturally evolved into the microcomputer revolution, and now the more ubiquitous “smart devices”. After all, as they got smaller, Unix moved into these smaller systems. You even have to give credit to Steve Jobs for coming up with the idea of producing a mass-appeal system that ran a Unix-like system (the NeXT, running a Mach microkernel with a Unix layer on top). While NeXT was a failure, he did finally achieve this with OS X. But even before OS X, you saw the rise of BSD, Linux, etc. You saw SCO (the original, and respectable, company), who produced a Unix for x86.
So, Unix has always been very much a part of the revolution for the past 40 years, putting its hands into as many and even more places than Windows, which is rather impressive.

Now, what does this have to do with security? Unix led the change in environment that drove new requirements for security. In the 60’s, computer security was locking the door to the mainframe and controlling who could submit jobs. Unix was a multi-user system accessed by all sorts of employees at a company, by many students, etc. Unix was where much of the Internet was first developed, where network security, or the lack of and need for it, became readily apparent. And as we have all discussed, it suffered from many fundamental security problems, as did C. In fact, the greatest weakness still remaining in C is probably the lack of a good and robust strings library: one that won’t let you compile an unbounded scanf. For all its limitations, I would have to argue that it has been able to age more gracefully than most other systems that followed it. I believe that is the legacy that DMR left behind.
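
To make the scanf point concrete, here is a minimal C sketch of the classic unbounded-read trap and the bounded alternative. (bounded_read is my own illustrative helper, not a standard function or anything from the thread.)

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The classic trap: scanf("%s", buf) has no length bound and will write
 * past the end of buf on long input. Bounded alternatives: a maximum
 * field width ("%7s" for an 8-byte buffer), or fgets, which takes the
 * buffer size explicitly.
 */
int bounded_read(const char *input, char *buf, size_t buflen)
{
    /* sscanf(input, "%s", buf);   <-- UNSAFE: overflows buf on long input */
    char fmt[16];
    snprintf(fmt, sizeof fmt, "%%%zus", buflen - 1);  /* builds e.g. "%7s" */
    return sscanf(input, fmt, buf) == 1;  /* reads at most buflen-1 chars */
}
```

The field width has to be built at runtime because, unlike printf, scanf has no `*` mechanism for passing the bound as an argument, which is part of why so much C code just omits it.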

Gabriel October 17, 2011 7:17 AM

@AC2: yeah, more of the same old DRM crap. I love the last paragraph:
With a belief that his L3 technology will address problems like piracy, cost of distribution, monetisation difficulties and global reach, Gupte said that his video protection technology will open up new market segments for independent films, semi-professional and amateur video makers.

Let’s see, cost of distribution? How does paying licensing costs, building faster server farms to support DRM, and spending more on tech support for legit customers who can’t get the player to work due to restrictive DRM lower costs?

More importantly, how does it reduce piracy when the pirated product becomes easier to use than the legit one? Obligatory xkcd:
