LLM Summary of My Book Beyond Fear

Claude (Anthropic’s LLM) was given this prompt: Please summarize the themes and arguments of Bruce Schneier’s book Beyond Fear. I’m particularly interested in a taxonomy of his ethical arguments—please expand on that. Then lay out the most salient criticisms of the book.

Claude’s reply:

Here’s a brief summary of the key themes and arguments made in security expert Bruce Schneier’s book Beyond Fear: Thinking Sensibly About Security in an Uncertain World:

Central Themes:

  • Effective security requires realistic risk assessment, not fear-driven responses. We need to rationally determine actual threats versus perceived ones.
  • Security trade-offs need to be carefully balanced against costs to other values like privacy, freedom, and social/economic impacts.
  • Human psychology and biases lead to exaggerated fears that politicians/marketers can exploit, resulting in wasteful and theatrical “security theater.”
  • Security is a process that requires ongoing flexibility, not total prevention. We can’t anticipate every threat but can build resilience.

Key Arguments:

  • Analyze risks empirically using evidence, statistics, and probability rather than intuition. Avoid underestimating everyday risks while overestimating novel or sensational ones.
  • Reject the temptation to implement oppressive security restrictions after tragic but unlikely events. Focus only on proportional responses.
  • Where possible, favor openness and transparency over aggressive data collection or restrictions which erode civil liberties.
  • Seek security policies and technologies that enhance society broadly rather than benefit special interests.

Schneier argues for rational, ethical approaches to security aimed at genuine protection while avoiding violations of human rights in the name of safety.

Here are some of the key ethical arguments Bruce Schneier makes in Beyond Fear regarding security policy and practices:

  • Utilitarian Ethics – Security measures should aim to provide the greatest societal benefit and well-being for the most people, not just special interests.
  • Individual Autonomy – Restrictions should be minimized to avoid excessive infringements on civil liberties and individual freedoms. Surveillance creep risks violating autonomy.
  • Privacy Rights – Pervasive monitoring and data collection erode privacy rights and dignity. Citizens are entitled to some degree of opacity.
  • Proportional Responses – Reactive policies should weigh costs versus statistically likely benefits. Draconian rules beyond proportionality are unethical.
  • Justice and Fairness – Security should not create racial, religious or other profiling biases that punish groups rather than conduct.
  • Accountability and Transparency – Ethical security requires checks against excessive power and oversight for responsibility. Secrecy enables abuses.
  • Honest Representation – Officials and pundits have an ethical duty not to stoke disproportionate public fears. Security theater manufactures consent.
  • Integrity and Character – Leaders should have the wisdom and courage to make difficult trade-offs rather than exploit fears for authority and control.

Overall Schneier argues security policies grounded in ethics rather than fear promote human well-being while recognizing inherent trade-offs. Prudence, transparency, and proportionality are essential.

Here are some of the most salient criticisms made against the arguments put forth in Bruce Schneier’s book Beyond Fear:

  • Downplays real threats – Some argue Schneier dangerously downplays serious risks like terrorism and cybercrime by calling fears overblown.
  • Too theoretical – His risk analysis approach may rely too much on statistics versus real-world uncertainties that defy probabilistic reasoning.
  • No easy answers – While citing misguided policies, critics say Schneier provides few actionable solutions or concrete policy guidance.
  • Conflicting values – His balancing of security, privacy and liberty involves inherent trade-offs and tough choices he gives little direction on.
  • Politically naive – Suggesting special interests drive policy may under-appreciate legitimate national security concerns of elected officials.
  • Hindsight bias – Assessing past overreactions easier than predicting future threats that justify caution today.
  • No accounting for psychology – Factoring human irrationality into policy is essential, not dismissed as “security theater.”
  • Technical focus – Focuses narrowly on certain technologies while ignoring broader organizational and sociological dimensions.

In summary, critics argue Schneier’s risk analysis orientation is too clinical, abstract, and devoid of political and human realities to transform concrete policy in the ways he intends. His framework may therefore lack sufficient nuance or guidance.

It’s interesting. The summary is pretty accurate, and so are the criticisms. The book was a product of my thinking twenty years ago, and my own thinking has evolved to address those criticisms.

Of course, this only works with older books that the LLM has ingested, and probably works better with books that have been written about.

Posted on September 15, 2023 at 3:12 PM

Comments

vas pup September 15, 2023 5:26 PM

‘Overwhelming consensus’ on AI regulation – Musk
https://www.bbc.com/news/technology-66804996

“Tesla CEO Elon Musk says there was “overwhelming consensus” for regulation on artificial intelligence after tech heavyweights gathered in Washington to discuss AI.

Tech bosses attending the meeting included Meta’s Mark Zuckerberg and Google boss Sundar Pichai.

Microsoft’s former CEO Bill Gates and Microsoft’s current CEO Satya Nadella were also in attendance.

The Wednesday meeting with US lawmakers was held behind closed doors.

The power of artificial intelligence – for both good and bad – has been the subject of keen interest from politicians around the world.

In May, Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee, describing the potential pitfalls of the new technology.

There are fears that the technology could lead to mass layoffs, turbo-charge fraud and make misinformation more convincing.

=>AI companies have also been criticised for training their models on data scraped from the internet without permission or payment to creators.

In April, Mr Musk told the BBC: “I think there should be a regulatory body established for overseeing AI to make sure that it does not present a danger to the public.”

Mr Zuckerberg said that Congress “should engage with AI to support innovation and safeguards”.

He added it was “better that the standard is set by American companies that can work with our government to shape these models on important issues”.

Republican Senator Mike Rounds said it would take time for Congress to act.

=>”Are we ready to go out and write legislation? Absolutely not,” Mr Rounds said. “We’re not there.”

Clive Robinson September 15, 2023 7:25 PM

@ vas pup, ALL,

Re : Permission to raise drawbridge.

“Tesla CEO Elon Musk says there was “overwhelming consensus” for regulation on artificial intelligence after tech heavyweights gathered in Washington to discuss AI.”

I’ve seen this play before…

The “tech heavyweights” have their systems, but most others do not.

So to get a monopoly or cartel they get legislators and regulators to change the rules.

Thus lifting the drawbridge to stop competition entering…

It’s what Uber and similar tried to do…

Clive Robinson September 15, 2023 7:41 PM

@ Bruce, ALL,

Hmmm, my summary of Claude’s summary,

1, It’s a long-winded way to say “You can not please all of the people”.
2, You are at fault for not writing a prescriptive “How-To”.

Also I’m seeing a lot of “A” but very little “I”, and what looks strongly like a high degree of “C&P” for Copy and Plagiarism, with rewording from around three sources.

So did you get Claude to cite its references?

Winter September 16, 2023 7:17 AM

I have seen people struggling to develop automatic “summarizers” for decades. It is rather bizarre to see that suddenly work. Especially as computers do it even better than humans.

Great.

Andy September 16, 2023 12:18 PM

I remember attending a talk by Schneier around 2003 and was initially rather shocked at his view of “one-off” events, e.g. September 11. In hindsight, the past 20 years have proven him more than right. The wars on terror, Iraq, and Afghanistan are illustration enough. Security theater in airports doesn’t need explaining. One should have applied all of his arguments to coronomania, and we wouldn’t have trillions of dollars wasted and people killed by “healthcare”.

Winter September 16, 2023 12:56 PM

@All
Re: Trolls

After the Troll Factory campaigns last year, I see their messages have been updated to a new sneaky level. Still, the messages’ aims have not changed: trying to kill us with misinformation.

After more than 1M dead Americans and 7M worldwide (confirmed cases only), there are still people posting on, e.g., this board trying to increase the death toll.

Only to be expected as there is a genocidal war going on in Europe and the Troll Factory was/is one of the warring parties.

P Coffman September 17, 2023 10:27 AM

This may be a minority opinion, though it might also answer a point or three made by Mr. Bruce Schneier (the formality is because I am not an acolyte in the sense of having read his varied books) in his role as the author reviewed here.

To wit:

Mr. Musk reifies AI because he repeatedly reminds us that all humanity is at peril when AI is unbridled.

For example, after ChatGPT rolled out, all the big players jumped on the bandwagon, putting their pet AI projects out for public comment. Next, I noticed a rush to extend AI into areas such as writing code. As a consumer, I also experienced pathetic answers from this bum rush.

Under-wowed, Mr. Musk.

Some secondary AI answers seemed worse than a skilled hand with an ordinary search engine.

pattimichelle September 17, 2023 11:52 AM

Given the final comments, it seems like it would be extraordinarily useful to compare the level of “contamination” of the LLM’s perceptions of a more recent, but similar, book as disinformation has blossomed on the internet – and the relative amount of contamination by this on a more recent book would be higher than by an older (and more forgotten or overlooked by conspiracy theorists, etc.) book.

Anyone doing these time-based, meta-LLM comparisons to look for machine bias?

pattimichelle September 17, 2023 12:00 PM

P Coffman: Musk is one of the great shysters! A modern (and hurtful) P. T. Barnum (“…a sucker born every minute”). Virtually everything he’s said has been garbage (hyperloop) or illegal (SpaceX).

Bruce Schneier September 17, 2023 3:45 PM

@pattimichelle:

“…it seems like it would be extraordinarily useful to compare the level of ‘contamination’ of the LLM’s perceptions of a more recent, but similar, book as disinformation has blossomed on the internet – and the relative amount of contamination by this on a more recent book would be higher than by an older (and more forgotten or overlooked by conspiracy theorists, etc.) book.

“Anyone doing these time-based, meta-LLM comparisons to look for machine bias?”

That is the first thing I thought about as well. I also wonder whether Claude has my book’s text in its corpus, or if it’s relying on what other people have written about my book over the years. So there are a bunch of confounding variables, but — yes — it would be really interesting to compare these LLMs’ abilities to summarize newer and older, well-known or more obscure, popular or esoteric books.

Gert-Jan September 18, 2023 7:00 AM

Where possible, favor openness and transparency over aggressive data collection or restrictions which erode civil liberties.

A contradiction is implied here that doesn’t exist. I’m not sure Bruce ever argued these things as being contradictory.

More and more, tech corporations are opening up and providing transparency about how they are aggressively collecting data and restricting users when it benefits them.

I have to admit that they don’t open up or provide transparency out of moral consideration, but, I believe, mostly to (legally) get away with said data collection and policies.

Julia Reed September 18, 2023 3:18 PM

As for me, I like the approach to security that aims to provide genuine protection without compromising human rights and privacy.

vas pup September 20, 2023 6:10 PM

Google DeepMind AI speeds up search for disease genes
https://www.bbc.com/news/science-environment-66847977

“Google’s AI firm DeepMind has used artificial intelligence to identify changes in human DNA that might cause diseases.

The researchers believe they have pinpointed 89% of all the key mutations.

The development is expected to speed up diagnosis and help in the search for better treatments.

A leading independent scientist told BBC News that the work was “a big step forward”.

Prof Ewan Birney, deputy director general of the European Molecular Biology Laboratory, said: “It will help clinical researchers prioritise where to look to find areas that could cause disease.”

The technique works by checking the order of the components in human DNA strands.

All living organisms are built from DNA. It is made from four blocks of chemicals called adenine (A), cytosine (C), guanine (G) and thymine (T). In humans, when an embryo is developing, the order of these letters is read to produce proteins, which are the building blocks of the cells and tissues that make up various parts of the body.

=>But if the letters are in the wrong order – perhaps because of an inherited disorder – the body cells and tissues aren’t made properly – and this can lead to disease.

Last year Google DeepMind’s AI worked out the shape of nearly all proteins in the human body.

=>The new system, called AlphaMissense, can tell if the letters in the DNA will produce the correct shape. If not, it is listed as potentially disease-causing.

Currently, genetic disease hunters have fairly limited knowledge of which areas of human DNA can lead to disease. They have classified 0.1% of letter changes, or mutations, as either benign or disease-causing.

!!!Google DeepMind’s Pushmeet Kohli said that the new model pushed that percentage up to 89%.

Prof Birney said he expected AI to become a massive part of molecular biology and life sciences.

“I don’t know where it’s going to end but it’s changing nearly everything we do at the moment,” he said.”

Erik Terwiel October 15, 2023 6:06 AM

Have you all read “The Great Automatic Grammatizator”, a short story by Roald Dahl?
The story was written in 1953.
You can find it in the collection “Someone Like You”.
I found it chilling, astonishing.
Read it.
Now.
And then talk about LLMs.
