Security Risks of Chatbots

Good essay on the security risks -- to democratic discourse -- of chatbots.

Posted on December 5, 2018 at 6:30 AM • 29 Comments

Comments

Iggy • December 5, 2018 7:31 AM

Chatbots? Or aka children? We should be much more concerned with sub-adults. Children love wreaking chaos because it gives them a sense of power. When they impersonate adults online, they are taken seriously, as if they can cast actual votes. Twitter, FB, YT, GGL, all prove the axiom: children should be seen but NOT heard. Because on those lunch counters, we can't see them, but we get a fuckton of their half-baked, emotion-infused opinions. And we've been taking them all far too seriously.

Want to improve the discourse of democracy? Raise the age limit in the TOS. Some will still cheat, but most parents will make their kids comply.

Jon (fD) • December 5, 2018 9:21 AM

I wonder if it includes "Lenny", the old-man chatbot for tying up telephone salesmen.

Jon (fD)

Yet another Bruce • December 5, 2018 9:44 AM

@Denton

I propose we follow the example of computer & computor. An impostor is a human agent and an imposter is a chatbot.


Impossibly Stupid • December 5, 2018 10:39 AM

This is another case where the real danger is social engineering, not the technology.

Who would bother to join a debate where every contribution is ripped to shreds within seconds by a thousand digital adversaries?

Political discourse is not that sophisticated these days, whether it's chatbots or humans that participate. There aren't debates on ideas, but us-vs-them gang-style turf wars. There are seldom thousands, hundreds, or even tens of differing opinions that get voiced, but rather a two party system is advocated. And opponents get "ripped to shreds" not by any sort of complicated reasoning that only a chatbot is fast enough to counter, but by schoolyard insults and name calling.

The answer to all that is not to eliminate the bots, it's to rethink how you socially interact with all "people". Just because some random stranger disagrees with you online, pause for a second and think if they're worth your time (bot or not). Unless you actually know them and can meaningfully get or give some useful insight, just don't interact with them. Block them if they won't leave you alone, or abandon the site if it does a poor job of policing bad actors (sad to say, but you have some work to do in that regard yourself, Bruce). Find genuine people that can enrich your life. It's not hard to do.

albert • December 5, 2018 10:46 AM

@echo, @Denton, @Yet a.B.,

An imposter is a person who posts comments using another person's handle.

. .. . .. --- ....

Thomas • December 5, 2018 1:35 PM

So... looks like not joining twitter was the right decision after all :-)

Reminds me of email spam, and the desperate arms-race between spammers and spam filters.
I wonder how many of the same techniques will be tried in this arena?

Eventually someone will make money offering "filtered-chatbot-free" access to twitter et al.
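Thomas's spam-filter analogy is apt: the workhorse of early spam filtering was the naive Bayes text classifier, and the same idea transfers directly to slogan-heavy bot chatter. A minimal sketch in Python (the training phrases below are invented for illustration, not real bot corpora):

```python
# Minimal naive Bayes text classifier, the technique behind classic spam
# filters, applied to toy "bot" vs "human" phrases (all examples invented).
from collections import Counter
import math

def train(labeled_docs):
    """labeled_docs: list of (label, text). Returns per-label word counts."""
    counts = {}
    for label, text in labeled_docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text, alpha=1.0):
    """Pick the label maximizing the Laplace-smoothed log-likelihood."""
    vocab = set(w for c in counts.values() for w in c)
    best, best_score = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values())
        score = 0.0
        for w in text.lower().split():
            score += math.log((c[w] + alpha) / (total + alpha * len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

counts = train([
    ("bot",   "maga lock her up maga maga"),
    ("bot",   "we all have trust in our leader"),
    ("human", "interesting essay about chatbot policy and law"),
    ("human", "i disagree with the essay on disclosure rules"),
])
print(classify(counts, "maga maga lock her up"))  # prints "bot"
```

The arms race Thomas mentions follows directly: spammers learned to pad messages with "hammy" words to defeat exactly this kind of scoring, and bot authors can do the same.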

bttb • December 5, 2018 2:25 PM

A good thing about this site is its timely posting of relevant issues. For example, I like books, but books are often a year or two out of date. I like journal articles (at least sometimes), and their abstracts, parts of the discussion, or their conclusions. But journal articles are often months out of date.

In this fast-moving world of political rat-fu?king around the world, timely information and decision making may help citizens resist, or try to resist, where or when possible. From the first opinion "essay" link above, about AI chatbots or non-AI chatbots:

“…Some chatbots, like the award-winning Mitsuku, can hold passable levels of conversation. Politics, however, is not Mitsuku’s strong suit. When asked “What do you think of the midterms?” Mitsuku replies, “I have never heard of midterms. Please enlighten me.” Reflecting the imperfect state of the art, Mitsuku will often give answers that are entertainingly weird. Asked, “What do you think of The New York Times?” Mitsuku replies, “I didn’t even know there was a new one.”

Most political bots these days are similarly crude, limited to the repetition of slogans like “#LockHerUp” or “#MAGA.” But a glance at recent political history suggests that chatbots have already begun to have an appreciable impact on political discourse. In the buildup to the midterms, for instance, an estimated 60 percent of the online chatter relating to “the caravan” of Central American migrants was initiated by chatbots.

In the days following the disappearance of the columnist Jamal Khashoggi, Arabic-language social media erupted in support for Crown Prince Mohammed bin Salman, who was widely rumored to have ordered his murder. On a single day in October, the phrase “we all have trust in Mohammed bin Salman” featured in 250,000 tweets. “We have to stand by our leader” was posted more than 60,000 times, along with 100,000 messages imploring Saudis to “Unfollow enemies of the nation.” In all likelihood, the majority of these messages were generated by chatbots.

Chatbots aren’t a recent phenomenon. Two years ago, around a fifth of all tweets discussing the 2016 presidential election are believed to have been the work of chatbots. And a third of all traffic on Twitter before the 2016 referendum on Britain’s membership in the European Union was said to come from chatbots, principally in support of the Leave side.

It’s irrelevant that current bots are not “smart” like we are, or that they have not achieved the consciousness and creativity hoped for by A.I. purists. What matters is their impact.”

[...]

Clive Robinson • December 5, 2018 4:12 PM

@ bttb,

Whilst the essay point of,

    It’s irrelevant that current bots are not “smart” like we are, or that they have not achieved the consciousness and creativity hoped for by A.I. purists. What matters is their impact.

sounds compelling, another question might be,

    Why despite the crude nature of chat bots, are so many gulled by them?

Take Twitter: at one point you had 140 characters to get your message across, which veritably encouraged crude slogans etc., as there was no room for either the niceties of life or subtlety.

Invariably, in these days not just of "fast moving" but of near "instant everything", it's not the quality of a statement but the repetition that appears to persuade those that frequent that type of sniping social media.

Further, the fact that the reader thinks they are smarter than the poster makes the issue worse (it's an old con artist trick: appear to be stupid and people think you do not have the ability to deceive...).

Provided that remains effective there would be no requirement to smarten up chat bots, and the problem would persist until people smarten up...

And I'm not holding my breath on that happening any millennium soon...

John Smith • December 5, 2018 6:50 PM

from Clive Robinson:

"...Sounds compelling another question might be,

Why despite the crude nature of chat bots, are so many gulled by them?"

Bob Altemeyer has answered that:

https://www.theauthoritarians.org/

Authoritarian followers find it hard to think logically, not just on emotionally charged issues (we all do), but on abstract, emotionally neutral, mathematical problems.

Their logic seems to be: if you end up with the right answer, then the reasoning process that got you there, no matter how weird, must be correct - a.k.a. confirmation bias.

This seems like clear evidence of a cognitive defect, but it can be more subtle. I've seen supposedly logical engineers assume that because a prototype passed a test, the prototype must be OK; only later to find that the "test" didn't test what they thought it did. In other words, the engineers got the "right" answer for the wrong reason, but concluded all was good.

I imagine this occurs in security testing as well. To some extent we are all prone to it.

Getting back to chatbots, authoritarian followers are also prone to social proof (Robert Cialdini https://en.wikipedia.org/wiki/Robert_Cialdini) - if "everyone" is saying it then it must be true.

Combine that with a degree of logical impairment, and for this demographic in particular, effective chatbots don't need to be sophisticated. They just have to be ubiquitous and persistent.

gordo • December 5, 2018 7:10 PM

From this thread's linked essay by Mr. Susskind:

A subtler method would involve mandatory identification: requiring all chatbots to be publicly registered and to state at all times the fact that they are chatbots, and the identity of their human owners and controllers. Again, the Bot Disclosure and Accountability Bill would go some way to meeting this aim, requiring the Federal Trade Commission to force social media platforms to introduce policies requiring users to provide “clear and conspicuous notice” of bots “in plain and clear language,” and to police breaches of that rule. The main onus would be on platforms to root out transgressors.

https://www.nytimes.com/2018/12/04/opinion/chatbots-ai-democracy-free-speech.html

California's bot disclosure law, which goes into effect on July 1, 2019, appears to cover chatbots:

(a) “Bot” means an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1001

California Enacts Anti-Bot and IoT Laws
Thursday, October 4, 2018

On September 28, 2018, the Governor of California signed S.B. 1001 into law. In the absence of a clear and conspicuous disclosure, the legislation makes it unlawful "for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election."

https://www.natlawreview.com/article/california-enacts-anti-bot-and-iot-laws

This detail about the bill, however, is disappointing:

BAS [Bulletin of the Atomic Scientists]: Under your law, would big social media platforms have to have systems for identifying undisclosed bots on their platforms, so that the companies could then go shut them down?

RH [California State Senator Robert Hertzberg]: Well, that’s what I want. [The bill] doesn’t have that. I’ve been fighting these guys. I’m debating what to do about this, because that, to me, would make it meaningful.

https://thebulletin.org/2018/08/the-california-lawmaker-who-wants-to-call-a-bot-a-bot/

I'm getting tired of legislative bodies letting platform owners use the "if you see something, say something" approach as a way to avoid accountability. I suppose that the platforms will continue to collect it all, all the bot material generated, occurrences, etc., and it will eventually end up sitting for seven years in an academic archive somewhere—long after it's had its intended effect.

Jon (fD) • December 6, 2018 1:12 AM

So part of the risk of chatbots is forging the identities of legitimate posters.

Anyhow, I'm expecting some moderator hammer coming down soon.

J.

Clive Robinson • December 6, 2018 4:53 AM

@ Jon (fD),

So part of the risk of chatbots is forging the identities of legitimate posters.

It depends on what you mean by "legitimate posters".

That is,

1, A person who does not use the system but whose details are legitimate (i.e. ID theft usage).

2, A person who uses the system but the system does not enforce ID checking (authentication failure).

3, A person who uses the system but has compromised credentials the system accepts from others.

4, A non-existent person for whom an identity that looks legitimate can be forged by others.

As was pointed out by Dame Stella Rimington when she was Director of the UKs MI5, a biological person is not an identity.

Thus the two are entirely separate.

As it turns out you can fairly easily have two sets of DNA at the same time. Blood transfusion is an easy-to-see example, but so is organ transplantation and, importantly, bone marrow transplantation.

In essence an individual's identity is "something they think they are" not "something they physically are".

Thus identity like gender can be quite fluid.

The real issue is those who govern being incapable of dealing with something so fluid, thus they try to "nail the jello to the wall" and unsurprisingly things go wrong for them.

We see a similar issue with web browser developers who don't accept the fact that a single person can have multiple roles in life. So a person who wishes to maintain separation of their roles in life is forced to have multiple identities to get around the problem. But as they don't have multiple "Government issued" identities it again goes horribly wrong.

The odd thing is when it comes to "legal persons" not "natural persons". A "legal person" is an entity recognised in law as having some kind of equivalence with a "natural person". So a limited company is a "legal person", and in many places a "natural person" can hold as many "legal persons" as they have the resources to set up. Thus those with resources can in effect entirely separate their "natural person identity" from any non-physical activity they wish to carry out, be it legal or illegal...

The reason we are in this mess is that all along the path to where we are now those who govern have made the "worst of choices" for "their convenience". Unfortunately, as we can now see, sorting it all out is a very uphill struggle, with those who govern still wanting to make the wrong turn at every step, not just for their convenience but now also for the convenience of "big data Corporates"...

PeaceHead • December 6, 2018 7:25 AM

In addition to "chatbots", I'm concerned about the (already-present) usage of android drones, that is, remote-controlled drones that look and act like real people. I believe that they already exist and that I've even been around a few. They are just walking-talking interfaces disguised as people. But take note if the "person" (not a person) has zero response to extreme temperatures, wind, dust, and highly unhygienic circumstances. They might also show odd characteristics in terms of bodily mannerisms or lack thereof. They also might show almost zero fatigue over long periods of time that would make any ordinary person fatigued. The only good uses of such drone androids are for bomb squads and similar.

Clive Robinson • December 6, 2018 8:08 AM

@ John Smith,

... for this demographic in particular, effective chatbots don't need to be sophisticated. They just have to be ubiquitous and persistent.

And thus there are only two solutions to this particular issue,

1, Remove the chat bots entirely.
2, Remove this demographic from influence.

We know that "the genie is out of the bottle" with regards chat bots, so the first option is probably not achievable "technically".

As for the second option, although some politicians are trying to do something similar to other demographics, many (but by no means all), I suspect, would consider it "immoral".

So flip a coin between the near impossible "technical" solution or the already practiced "immoral" solution... No prizes for guessing which way the penny will drop on that... Money and power usually come out on top.

Impossibly Stupid • December 6, 2018 10:19 AM

@Clive Robinson

As was pointed out by Dame Stella Rimington when she was Director of the UKs MI5, a biological person is not an identity.

Neither are they necessarily not a chatbot. The underlying algorithm a chatbot uses can also be followed by a person rather than a computer. If that's acceptable (either ethically or legally), then it makes little sense to see the chatbots themselves as the problem. It's the underlying behavior that needs to be addressed, along with the targets/victims being educated in how they're being manipulated.

@PeaceHead

remote-controlled drones that look and act like real people. I believe that they already exist

A belief that is not backed by evidence is not scientific and may indicate a disassociation from reality. You could be experiencing a serious brain-related health issue. Please talk with a medical professional about those beliefs.

VRK • December 6, 2018 11:50 AM

In aid of countermeasures, it seems helpful to remember that the in-depth work in linguistics, which has likely expedited bot development, can also contribute much to battling fake text. Especially, IMO, part-of-speech [noun, pronoun, adjective] frequencies, specifically the many good tabulations unlikely to include bot-contaminated text.
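
VRK's idea can be sketched in a toy form. A real system would run a part-of-speech tagger over known-clean tabulations; here, function-word frequencies stand in as a crude proxy for POS frequencies, and all texts, word lists, and comparisons below are invented purely for illustration:

```python
# Toy stylometric check: compare a text's function-word frequency profile
# against a "known human" reference profile. Slogan-style bot output tends
# to use far fewer function words than ordinary prose.
from collections import Counter

FUNCTION_WORDS = ["the", "a", "of", "and", "to", "in", "is", "that", "it", "for"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    n = max(len(words), 1)
    return [counts[w] / n for w in FUNCTION_WORDS]

def distance(p, q):
    """Manhattan distance between two frequency profiles."""
    return sum(abs(a - b) for a, b in zip(p, q))

human_ref = profile(
    "the essay argues that the impact of a bot is a function of the "
    "audience and that it is easy for repetition to stand in for reasoning"
)

slogan = "maga maga lock her up lock her up maga"
essayish = "it is the repetition and not the quality of a statement that persuades"

# The slogan's profile sits much further from the human reference.
print(distance(profile(slogan), human_ref) > distance(profile(essayish), human_ref))
```

As VRK notes below, this is exactly the kind of filter that would also flag savants and non-native writers, so it could only ever be one signal among many.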

But surely, filtering this will also harm savants or those who may write brokenly for other legitimate reasons, and in blocking them we would suffer a greater loss. A global blog can be hugely more fruitful because of their diverse contributions, however elusive.

Also in contrast, even "good fakeness" changes little about the web. Deceptions are unlimited. It's "good with bad" for now.

Perhaps each host can provide a "realness score" and leave the rest to us.

albert • December 6, 2018 1:33 PM

"...Good essay on the security risks -- to democratic discourse -- of chatbots...." - Bruce.

"...easy to miss the longer-term threats to democracy..." - author of essay.

Just what exactly is "democratic discourse"? And how are chatbots "threats to democracy"?

My BS detector is going off the scale.

Chatbots simply multiply the efforts of humans. It pains me to see the comments wander off into the weeds. You guys are missing the point. Who are the people who are actually being influenced by chatbots? Are there cause-and-effect statistics published? Are the effects of chatbots really known, or are they just reinforcing established beliefs?

The problem isn't the system, it's the people. It's easy to forget that most of the US population is clinically addicted to social media. For some strange reason, many folks lack the ability to critically assess claims published online, by anyone in any venue.

It looks like we have reached the point where we can't believe anything we read and half of what we see.

God forbid we should teach our kids critical thinking, instead of the rote regurgitation of vocational education.

. .. . .. --- ....

gordo • December 6, 2018 5:17 PM

My user agents, i.e., my software agents or chatbots, are going off-script and not saying what I want them to say. Sometimes they even espouse ideas to which I'm diametrically opposed. They're learning things that I didn't teach them. I didn't have them register as bots on the various social media platforms that they're running on and I can't turn them off because I've lost all the keys. I now have no C&C over the extensions of my own views. I thought this would be a good way to maximize my engagement in representative democracy, but what I've got now is mis-representative bot-ocracy or botox for short. What's a directing mind to do?

Gabriel • December 7, 2018 12:30 AM

All the examples of bad things in this NYT article are of things done on behalf of conservatives. That's a good way of unnecessarily alienating half of your audience.

echo • December 7, 2018 1:13 AM

I cannot help noticing the author of the article is a lawyer. Lawyers are one of those certain kinds of establishment professions, like doctors, who thought they were above it all. Now that AI or expert systems are, to one degree or another, developing critical diagnosis and now dialogue skills, I cannot help wondering if the author, a lawyer, is using his lawyer's skills to make an argument for the emerging competition to be regulated into oblivion, to save his job and the status and influence which go with it.

In common law a group of people with an accumulated body of knowledge form a "legal authority". This "legal authority" can be anyone and be about anything. Another principle in common law is that the point of view of society is a factor in a court's judgment.

The issue of Brexit and the problems with the validity of the outcome is a "natural experiment" worth examining. "Skilled liars" and "abuse of social platforms" and "dark money" had a short-term effect. As we know from plenty of historical examinations of "unsafe verdicts" and "successful appeals" and later judge-led "authoritative and influential reports", any unsafe or criminal gains may be short-term.

I believe protecting education and social society and combating poverty may be the best defence against theoretical technology and network exploitation. Fair taxation to provide for this also takes money away from "lone wolf" and "out of touch" people with "vested interests", who will then have less to spend on AI-driven chatbot armies.

PeaceHead • December 7, 2018 10:58 AM

@Impossibly Stupid: My claims were substantiated "as-is". Anything more precise than that and I'd be served an NDA or abducted into a federal prison for disclosing state secrets. What I stated is sufficient. Just because you are not familiar with what I describe doesn't make it false nor imaginary nor a manifestation of mental illness.

1) efforts to create a wide variety of drones exist
2) efforts to create a wide variety of androids and anthropomorphic robots exist
3) efforts of both are already very successful
4) remote-control technologies exist
5) defense and spycraft organizations have persistent interests and actual long-term histories in identity faking and camouflaging
6) current state-of-the-art in realistic prosthetics and human-like plastics/polymers is quite high
7) a remote-controlled android drone is NOT a robot, it's an avatar
8) non-realistic yet partially anthropomorphic android telecommunications robot interfaces already exist
9) you are NOT me, and you have NOT experienced what I have experienced; you are NOT an authority on my actual experiences; you have zero credibility about me, my experiences, my perception and my cognition
10) you demonstrate logical fallacies in your response to me
11) these 16 points reinforce my initial point
12) nobody on this site is obligated to "prove" allegations; we all regularly make baseless or slightly-baseless claims or claims which cannot be evaluated or claims which should not be evaluated; even the site author occasionally strays into pure speculation and this is not frowned upon entirely.
13) this nation's military and intelligence and media and government organizations have a long proven history of fraud use as well as Black Operations and wide varieties of spycraft and DoD technologies; such organizations routinely try to accomplish new innovations not yet utilised by others; other nations are similar.
14) it's already published and known and documented that the military uses remote-controlled unmanned aquatic craft for surveillance and warfare; this is not so different from using an android drone remotely; both are probes.
15) Attacks upon a person's alleged sanity due to mere disagreement are a known and documented historical militant hostile technique. I've done several years of study on that and related topics via historical records of things such as project/operations MKSEARCH, MKNAOMI, MKDELTA, MKULTRA, BLUEBIRD, ARTICHOKE, CHATTER, NORTHWOODS, PAPERCLIP. There are several others. All have in common attacks against innocent people in service of extremist ideologies otherwise repugnant to the masses.
16) Also, there is a common thread of technological misdirection.

Thusly, my original claims still stand.
There's a shorthand for this type of exchange: "stay in your lane".

Impossibly Stupid • December 7, 2018 4:12 PM

@PeaceHead

There's a shorthand for this type of exchange: "stay in your lane".

My lane is science, and that's why I can state your claims have no basis. You're, at best, off topic. Possibly trolling, but possibly ill. If you don't recognize people as people, that's a major problem. Please seek medical help.

There is absolutely no objective evidence that demonstrates anyone currently has a technological level that allows androids to pass as humans. Let alone superhumans; just the battery technology that enables "zero fatigue over long periods" would be revolutionary!

Mods should remove all these tangents that have nothing to do with chatbots. If you actually have any real proof of your extraordinary claim, offer it up with Bruce's next squid post. Otherwise, there is no reason anybody but a physician should be listening to you further.

gordo • December 7, 2018 5:30 PM

@ VRK,

But surely, filtering this will also harm savants or those who may write brokenly for other legitimate reasons, and in blocking them we would suffer a greater loss.

My understanding is that bot identification is not single-criterion dependent.
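
gordo's point, that detectors combine several weak signals rather than trusting any single criterion, can be illustrated with a toy scoring function. Every feature name, threshold, and weight here is invented for illustration; real platforms keep their actual criteria secret:

```python
# Toy multi-criterion bot scorer: each signal is weak on its own, so the
# score is a weighted combination. All thresholds and weights are invented.
def bot_score(account):
    """Weighted sum of independent signals; each contributes, none decides."""
    signals = [
        (0.5, account["posts_per_day"] > 500),           # inhuman volume
        (0.25, account["duplicate_post_ratio"] > 0.8),   # sloganeering
        (0.125, account["account_age_days"] < 7),        # throwaway account
        (0.125, account["follower_following_ratio"] < 0.01),
    ]
    return sum(weight for weight, fired in signals if fired)

suspect = {"posts_per_day": 900, "duplicate_post_ratio": 0.95,
           "account_age_days": 2, "follower_following_ratio": 0.001}
regular = {"posts_per_day": 12, "duplicate_post_ratio": 0.9,  # repetitive,
           "account_age_days": 800, "follower_following_ratio": 0.5}  # but human

print(bot_score(suspect))  # 1.0  -> all four signals fire
print(bot_score(regular))  # 0.25 -> one signal alone is not damning
```

This also answers VRK's worry above: a broken-prose writer trips at most one criterion, so a multi-signal detector need not block them.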

Jon (fD) • December 8, 2018 12:48 PM

@ Clive Robinson:

1, A person who does not use the system but whose details are legitimate (i.e. ID theft usage). : Not a poster, therefore legitimacy is not relevant.

2, A person who uses the system but the system does not enforce ID checking (authentication failure). : All posters are seen as legitimate, therefore irrelevant

3, A person who uses the system but has compromised credentials the system accepts from others. : That person would be legitimate; it is through misuse that another uses someone else's credentials (including pseudonym(s)).

4, A non-existent person for whom an identity that looks legitimate can be forged by others. : If a poster does not exist, the matter is irrelevant again. Note that pseudonymous persons exist.

"As was pointed out by Dame Stella Rimington when she was Director of the UKs MI5, a biological person is not an identity."

And this I shall fairly violently disagree with, because as you said,

"In essence an individuals identity is "something they think they are" not "something they physically are"."

I think, therefore I am. Q.E.D.

Moreover, I think I am I, and what that means to my DNA, gender, or type of brain is also entirely irrelevant. This is a fundamental axiom.

"Thus the two are entirely separate."

They are not. And directors of spy agencies are probably not the best sources for philosophy of the individual.

Anyhow, as was pointed out, this is a bit off-topic. And a bit late. Still, have fun,

Jon (fD)

PeaceHead • December 8, 2018 3:30 PM

The burden of proof rests firmly upon the person self-described as "Impossibly Stupid", not on me nor my claims. I already substantiated my claims well enough and provided sound reasoning for the parts which can't be safely substantiated. The sole person attempting to intimidate and ridicule me was unable to rebut any of my 16 bullet points.

I will not be interacting with that person. It's a waste of intellect and emotion and words upon someone who already decided that they don't believe in whatever they aren't already familiar with. The person completely ignored the additional and valid info I provided just to continue to argue for no good reason.

For anybody else with some normal technological curiosity related to the cryptological, I can't point the way entirely, but some additional insights exist within your own continued studies related to these:

1) human gesture recording and playback
2) human biometric mimicry
3) virtual reality interfaces
4) animatronics (already used in movie special effects)
5) broadband data control systems
6) remote automation
7) unconventional drones
8) realistic prosthetics (as I mentioned before)
9) motorized prosthetics
10) mobile surveillance apparatus
11) wearable surveillance apparatus
12) hybrid material electronics (circuitry that is wearable, bendable/flexible, and made of historically atypical substances and mixtures)
13) the most realistic android chassis
14) I'm NOT referring to a robot; I'm referring to a mobile anthropomorphic probe device or literally a cybernetically-enhanced cadaver. Either one is remotely controlled via broadcast technology and electrical signals. The interface of the human control is designed to transduce their gestures and bodily motions into recorded signals which are then sent to the probe to re-enact them distantly.

15) the foundation technologies are already there, even the manipulation of dead or living human or animal muscle tissue via electrical stimulation. It doesn't have to be recreated from scratch. The technique and knowledge goes all the way back to the 1600s. I looked it up.

A lot of the most disturbing stuff was discovered and developed from the 1920s into the 1990s. That's a very long time of focussed study, no matter how disturbing or covert or forgotten.

Take into account the recent breakthroughs with materials sciences related to DNA and the rapid mapping of DNA genomes and phenotypes. Take into account recent published breakthroughs into actual cybernetics and brain-controlled prosthetics whether invasively implanted or not. The science is there to those who follow it or create it or use it.

Anyhow, the point of me talking about this stuff is to get it "on the map".
If nobody ever acknowledges any new technologies, they stay entirely hidden of course, but that benefits none of us who would rather be aware of the changes and to cope with them.

I don't always go looking for weird new stuff. Quite often it just shows up and then I notice it after the fact. If people are attempting some prototypes, it could be that they are testing them in the wild to see if they can be detected before being used in actual espionage or battles or crimes or rescues or whatnot.

Again, if you meet a "person" who is oblivious to extremes of cold, light, wind, heat, and contagious sickness... a "person" who never complains nor winces.... a "person" who hardly blinks... a "person" who mostly just follows others around and stares at the interiors and exteriors of buildings... a "person" who simply goes idle regularly for several contiguous weeks rather than actually sleeping contiguous hours of rest per evening... a "person" who spends most of their time asking questions probing for answers and hardly describing anything personal nor relevant... MAYBE THAT IS NOT A PERSON.

It could be a remote-controlled cadaver with cameras and mics implanted or something slightly more synthetic.

And if that disgusts you, then you won't get very far trying to read about any of the partially declassified projects/operations I referenced in my previous post. Terrible things have been done to involuntary people and animals and unfortunately it still continues. Both Nazis and Americans were the perpetrators, and the results went directly towards weaponizing biology to create covert surveillance and demolition "tools". You can choose to look the other way or blame the victims and the historians, but several decades of irrefable evidence already exist in several forms spanning several continents.

People who only want to know about one type of "secrecy"/crypto to the detriment of some other more prominent type of "secrecy"/crypto don't make any sense to me. The cryptological realms aren't limited according to the preferences of the squeamish or the phobias of those who are too afraid to even complain about the injustices done to the prototypes.

Experimental "medicine" and experimental neurobiology and experimental surveillance may indeed tend to be covert or at least not well understood.

So if somebody shows up to deliver some new angle of info on those topics are you really going to attempt to brow beat them simply because you don't know what they know?

OK, enough of that.
Again, consider how many ideological civil wars we are currently engaged in. There are lots of them. I'm trying to bring some extra data to those who want to know, not to deliver arguments and wasted time to those who don't want to know.

I didn't ask to know about most of the weirder, dangerous, disgusting, bizarre stuff. I was exposed to some of it according to somebody else's situations. My reaction is to first study, then acknowledge, then study again, then finally act. I don't start out making judgments about what I'm not qualified to comprehend during the entire history of my existence. Along the way, for several years, I've been like this, simply learning. What doesn't traumatize me entirely gives me further concern and insight and a foothold into an approach to potential countermeasures for a few things that, conveniently for some, don't even cross anyone's minds.

May All The Covert Suffering Be Permanently Ended One Day Soon

PeaceHead • December 8, 2018 3:39 PM

Ugh, again, my posts are getting modified in transit.
I know it for certain. The word I entered into the form as "irrefutable" showed up as " irrefable". Those are NOT equivalent terms/syntax.

It ONLY happens to me on THIS site, via THIS Google form.

I'm not falling for it, nobody else should either.
It's also a known cryptanalytical technique to "spoil" inputs with the attacker's input material. But you've failed again, because I'm NOT doing cryptography nor steganography here in these forms. To the spoiler: You are wasting YOUR time as well as mine. The joke's on you. Like I said before, you'd get better mileage out of Twitter or just piggyback onto somebody's email and get busted eventually by the authorities. I'm not your bait.

To everyone else, take note.
And of course, Peace be with you.

Eventually I will be reinstating my webpage too, with some new audio/graphics/text contents.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.