Albina Bouse September 22, 2015 2:34 PM

You’ve got to love the ingrained hypocrisy of 21st-century .com businesses:

“Since March, says Facebook, it’s “notified 200,000 Pages that we’ve protected their accounts from fake likes.””

They “protect” companies from using their fake likes. LMAO.

Anura September 22, 2015 2:39 PM

Next you’re going to tell me those commenters that say “Interesting. I really enjoyed hearing about ‘Buying an Online Reputation’ – here’s a link to my website.” are being paid to write that stuff.

Daniel September 22, 2015 2:57 PM

Ah HA!

Now I have the perfect tool to finally make Clive, Nick P, and Dirk respect me.

/end sarcasm.

Bill Stewart September 22, 2015 3:10 PM

Before I saw Kash’s article on it being fake, I was wondering how such a business would work – it seems like an obvious cultural fit for places like San Francisco, but karaoke is heavily dependent on alcohol, and I don’t know how you could get a liquor license for a karaoke truck 🙂

brendafdez September 22, 2015 4:17 PM

It’s easy and cheap = It’s not worth anything. So it’s not surprising at all.

You can’t consider a follower/friend counter on social media as any kind of reputation. Or well, you can, at your own peril.

albert September 22, 2015 5:16 PM


Does anyone take Facebook, Twitter, and Yelp seriously? It’s a load of bull. You can’t trust ANY online review, about ANYTHING. I thought Angie’s List was a joke when it started, and now they advertise on TV. I can see why Yelp is comparatively aggressive; their ‘business model’ depends on it. FB and TWT don’t give a RSA.
When there’s a lack of information, folks will make assumptions based on their personal wishes or inclinations. ‘Respected’ ‘news’ outlets or some random goober, the Internet is the Great Equalizer. Too bad we can’t do something about the signal-to-noise ratio….
. .. . .. oh

Clive Robinson September 22, 2015 6:15 PM

@ Anura,

My recommendation is to feed their comments into MegaHAL.

I’ll have you know MegaHAL taught me everything I don’t know…

More seriously though, the designer of MegaHAL, Jason Hutchens, wrote a little piece on how it works[1]. It contains a paragraph that almost always makes me smile when somebody goes on annoyingly about Hard AI,

    Users have successfully taught the program to respond to sentences in French, Spanish, Greek, German, Italian, Latin, Japanese and Hebrew, amongst others. A clergyman spent hours teaching MegaHAL about the love of Jesus, only to constantly receive blasphemous responses.

I guess you can take a program to water but you can’t make it drink from the well of knowledge 🙂
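For anyone curious how that kind of babbler works: MegaHAL’s core trick is Markov models trained on whatever users type at it. A much cruder order-1 Markov chain, sketched here from scratch (this is an illustration of the general technique, not MegaHAL’s actual algorithm, which also models text backwards and scores candidate replies):

```python
import random

def train(corpus, order=1):
    """Build an order-n Markov model: map each n-word context
    to the list of words observed to follow it."""
    words = corpus.split()
    model = {}
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model.setdefault(context, []).append(words[i + order])
    return model

def babble(model, order=1, length=10, seed=None):
    """Random-walk the model: pick a starting context, then
    repeatedly sample a next word from the observed followers."""
    rng = random.Random(seed)
    context = rng.choice(sorted(model))
    out = list(context)
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this context never appeared mid-corpus
        out.append(rng.choice(followers))
    return " ".join(out)

model = train("the cat sat on the mat and the dog sat on the log")
print(babble(model, seed=7))
```

Feed it a clergyman’s sermons and it will recombine the same words with no regard for the meaning, which is exactly why the blasphemy anecdote above is unsurprising.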


name.withheld.for.obvious.reasons September 22, 2015 10:42 PM

@ Clive Robinson,
You might have guessed I’d chime in here…

It contains a paragraph that almost always makes me smile when somebody goes on annoyingly about Hard AI,

Remember my mathematician friend, the one who caught hell from the government (Argentina) for working with me? Back at the start of the 21st century we’d go pub’ing for hours and talk nothing but AI. My friend is fascinated with the topic, while I have had a chance or two to work in this space. One of the things I expressed was the plasticity of thought (that is, an electro-chemical organic structure capable of remapping synaptic connections to build, or rebuild, a highly non-linear net), and that in fact information acquisition and the integration of disparate acquisition types/classes constitutes a problem space that we have a tendency to elucidate (my observation). Interestingly enough, we would discuss Minsky’s frames (a conceptual model) and Kurzweil’s integration model, but when looking at Chomsky’s early work in linguistics, specifically deep structure, a workable system could be built. And again, it wouldn’t be AI.

Today, Bayesian networks (with Markov chains) and some Monte Carlo modeling can produce interesting results, but it is not AI. Early on I thought genetic programming would provide some muscle when it came to solving some problems, but to no avail. It wasn’t until I looked at the hardware model that I understood where the problem was…
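None of this is AI, as the commenter says, but for readers unfamiliar with the Monte Carlo term, the textbook toy example makes it concrete: estimate a quantity by random sampling rather than analysis. Here, estimating pi from the fraction of random points that land inside a quarter circle (a generic illustration, not tied to any system discussed above):

```python
import random

def estimate_pi(samples=100_000, seed=0):
    """Monte Carlo estimate: the fraction of uniform random points in
    the unit square that fall inside the quarter circle approximates pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())
```

The appeal is the same as in the modeling the commenter mentions: crude random sampling gives usable answers to problems that are awkward to solve in closed form.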

Highly linear systems, fabrics of semiconductor layering on lattices over masks, seem counter-intuitive. Something a bit more “plastic” might be more suitable for building hardware that is capable of “thought,” but I believe consciousness to the degree of a sentient being capable of hypothesis and theoretical conjecture is still a way off.

I believe most efforts can be stated as “Cannot see the forest, or the trees!”

Bill September 22, 2015 11:22 PM

I find this article funny but not very surprising, because even though the internet is a great resource of information it is also loaded with lies. I personally bought Facebook fans for a business of mine years ago, and it definitely helped to at least make the fan page seem legit. Although the “fans” I bought were obviously fake accounts, it at least made my fan page look like sort of a big deal.

Ever since I learned about and took part in buying Facebook fans, it has made me wonder how many popular fan pages have legitimate numbers.

I have yet to use Fiverr because even though it may be legit, it does not seem to offer the best quality work, but who knows!

bruhah September 22, 2015 11:31 PM

It’s a vicious cycle.

Corps get paid to astroturf each other’s astroturf in a never-ending loop. Food for profits.

As all things are relative, we and the Chinese keep printing dough to invest in each other while calling each other out for nefarious acts, ad infinitum.

It’s the cycle of life as we know it on earth.

Clive Robinson September 23, 2015 2:42 AM

@ Name.withheld…,

Yes I do remember your friend, and I trust whatever the problem was it’s now all safely in the past.

As for the “woods and trees” issue of Hard AI, I don’t think we know enough yet to even guess at a model for human learning, let alone how to measure or test it, so we are not yet out of the Aristotle method and at the first stage of the Newtonian method.

For years the argument appeared to be that of the “Horror Mad Doctor” screaming “More power, Igor!”, trying to breathe life into what we know is dead and should be buried. Well, they got their power, but whilst the spin-offs have been good or interesting, we appear to be further away than when we started. Some talked about the use of chaos, whilst others went out even further, with the likes of Roger Penrose and his quantum theory of mind looking for microtubules in the brain’s structure.

I just get the feeling that whatever the next “neat field of endeavor” science or maths opens up, somebody in the Hard AI community will jump on it as “fresh meat for the cadaver”. It’s almost turning into its own space race, where the goal is almost irrelevant compared to what the spin-offs give us, the least of which is the entertainment value of trying to get your head around the latest ideas. Whilst maths has the peculiar habit of turning what was thought unknowable into the almost mundane when you least expect it, I suspect we are in for the long haul on Hard AI.

Clive Robinson September 23, 2015 2:51 AM

@ bruhah,

It’s a vicious cycle

So was the near collapse of the Lloyd’s insurance market with the “LMX spiral”. If the history of that is anything to go by, then “buying reputation” is going to blow up like a supernova, and it will be the late entrants that lose the most, just like any pyramid scheme.

65535 September 23, 2015 6:08 AM

“If the history of that is anything to go by, then “buying reputation” is going to blow up like a super nova, and it will be the late entrants that lose the most, just like any pyramid scheme.” –Clive

It will not be the big corporations like Twitter, Facebook, and YouTube? Darn. It will probably be some poor girl like Michelle Phan… wait, she is not poor.

Nick P September 23, 2015 12:14 PM

@ name.withheld

Brings back memories. I used frames when I was working on AI. I think I got it from a book on agent-oriented programming, bots, etc. from the ’90s whose name I can’t remember for the life of me. It covered AI methods, web scrapers, Telescript, Obliq, and so on. Cool book.

Anyway, frames always reminded me of OOP, if more interesting in usage. It’s good for organizing knowledge and action but has OOP-like limitations, too. It was hard to automate building that, and at one point hard to process. So I left it to explore alternatives that were relational or rule-based, although not necessarily hand-coded. I didn’t get anywhere because all the methods sucked at the time, lol. So I went into automatic programming from there and scored some interesting results maybe worth rebuilding.

“Something a bit more “plastic” might be more suitable for building hardware that is capable of “thought” but I believe consciousness to the degree of sentient being capable of hypothesis and theoretical conjecture is still a way off.”

A network of networks of dense, I/O-oriented FPGA’s. More on that in a second.

@ Clive Robinson

“I don’t think we know enough yet to even guess at a model for human learning, let alone how to measure or test it”

Well, confabulation was one. I’m sure there are others. In any case, we know the brain is a set of neural networks. Over a decade ago, I said the best route is to just put a ton of work into identifying what neural architectures the brain uses, what functions they appear to do, experiment with artificial variants of those, and experiment with integration strategies. I also suggested aiming for the intelligence of a lobster or something as a start. 😉

The renaissance of ANNs in the form of “deep learning” is proving the recommendation out in exciting ways. These things are learning at a level that you can almost call nothing but learning. Visualization is also helpful, given you can see individual concepts or attributes in the layers.

So, my recommendation stands. Further, I think they should do what was done in supercomputing: 3D interconnect of many nodes (ASICs) custom-designed for this. The routing should be local rather than global. That lets us approach a brain-sized number of neurons. Further, if the interconnect is standardized, we can use different neural architectures (or chips) in the same system, like brains do. The system would use FPGAs or a similarly malleable chip for the learning phase, with ASICs or S-ASICs for the production phase.

Jesse September 23, 2015 2:16 PM

@Clive Robinson

I think the correct approach is to study, model, and perfect test cases for behavior in simpler organisms, such as single-celled ones, and then climb the evolutionary tree.

Build tests that describe the abstract behavior of the simplest bacteria we can find, and build simulations in simulated environments that pass those tests. Then keep making the organism we mimic, the tests that confirm we are mimicking properly, and the environment we do our mimicking in (likely filled with all of the previous generations of simulations to compete against for energy and resources) more complicated.

But then we begin weighting the simulations to bias in favor of simpler abstractions. E.g., more lines of code in the simulation carry with them some form of “cost” that has to be optimized away by more efficient code.

That way, we refine and build ever more sophisticated and successful bio-mimetic simulations, climbing the ladder from reaction, to abstract forms of genetic inheritance, to instinct, to rung after rung of genuine learning, ideally while keeping the code base at birth down to DNA levels of complication, or even simpler if we can.
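Jesse’s scheme, a fitness function that rewards matching a target behavior but taxes genome size, can be sketched as a toy genetic algorithm. Everything here (the bit-string genomes, the 0.1-per-bit cost, the flip/insert/delete mutations) is invented purely for illustration:

```python
import random

def fitness(genome, target, length_cost=0.1):
    """Reward matching the target behavior; penalize genome size,
    so simpler genomes that do the same job win out."""
    matches = sum(1 for g, t in zip(genome, target) if g == t)
    return matches - length_cost * len(genome)

def mutate(genome, rng):
    """Flip, insert, or delete one random bit."""
    g = list(genome)
    op = rng.choice(["flip", "insert", "delete"])
    if op == "flip":
        g[rng.randrange(len(g))] ^= 1
    elif op == "insert":
        g.insert(rng.randrange(len(g) + 1), rng.randint(0, 1))
    elif len(g) > 1:
        del g[rng.randrange(len(g))]
    return g

def evolve(target, generations=200, pop_size=20, seed=0):
    """Elitist selection: keep the top half each generation,
    refill with mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(12)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(g, rng) for g in survivors]
    return max(pop, key=lambda g: fitness(g, target))

best = evolve(target=[1, 0, 1, 1])
```

Because an oversized genome that matches the target scores lower than a minimal one, the length penalty pressures the population toward the simplest genome that still passes the “test,” which is the point of Jesse’s complexity cost.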

Film Falm September 23, 2015 4:39 PM

It’s not just the sites mentioned in the article. It’s also the obvious (and not so obvious) astroturfing and sock puppets in Amazon reviews. It’s also the gaming of IMDB reviews.

Bet you didn’t realize IMDB is seriously gamed. Here’s how it works, as best I can determine. The top IMDB reviewers are given extra weight in the score, but top-reviewer status is accorded simply by volume. So what you will find is that some proportion of top reviewers are mercenaries. The mercenary reviewers will score competing movies in the same category (or in some overlapping set) as 1-star, and the movies they deign to bump up as 10-star. I’m not entirely sure how the operation works, but I have observed the dubious 1-star reviews firsthand. The rest of this is conjecture. There are many threads about IMDB’s broken scoring system. Google it if you doubt me.
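The weighting scheme described above is, as the commenter says, conjecture about IMDB’s real formula. But a plain weighted mean is enough to show the mechanism: a handful of heavily weighted accounts can drag a score far from the consensus. All the numbers and weights below are made up for illustration:

```python
def weighted_score(ratings):
    """ratings: list of (stars, weight) pairs. Plain weighted mean."""
    total_weight = sum(w for _, w in ratings)
    return sum(stars * w for stars, w in ratings) / total_weight

# 100 ordinary users rate a film 8/10 at weight 1 ...
honest = [(8, 1)] * 100
# ... plus 5 hypothetical mercenary "top reviewers" voting
# 1-star at an assumed weight of 20 each.
gamed = honest + [(1, 20)] * 5

print(weighted_score(honest))  # 8.0
print(weighted_score(gamed))   # 4.5
```

Five accounts outvote a hundred once their combined weight matches the crowd’s, which is why earning weight by sheer review volume (if that is indeed how it works) would be such an attractive target for mercenaries.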

Caveat Emptor applies even more today than it did millennia ago.

tyr September 23, 2015 10:57 PM

@Clive, et al,

One real problem for AI is, or was, that there is no clear definition of what the goal is. Are you looking for an expert system to supplement a human? Are you misreading the field’s whole idea, like Searle with his Chinese Room? The functional elements of any model of intelligence and consciousness have to be in the Chinese Room mode.

Rod Brooks made the definitive breakthrough years ago with his robotics modelled on animal nervous systems. If you apply it to the human mind it is immediately seen as heresy, because you have to discard all of the past models used to explain the mind and start over. Eventually the evidence will pile up high enough to overwhelm the morass of incorrect speculations that have been gospel for thousands of years.

Currently the cutting edge is that consciousness is a side effect of an inadequately understood network of processes. If you set out to model it thinking it is the main event or holy grail of AI, you are going to have severe and frequent failure. That’s like trying to build a movie by making a film emulator, without making a projector.

Eventually there may be a confluence of biological facts and computer science and engineering which might come up with something. So far both sides are crippled by basic ignorance. That makes purpose-built AI that achieves a form of consciousness impossible. It is possible to do, but it requires a level of discarding of past assumptions that few dare even contemplate.

Couple that with the unsolved problems of the human system and it becomes even harder to think about any emulation. Here’s one: where is your memory? Obviously it is stored somewhere. It can even be retrieved by electrical stimulation of the brain, but it cannot be pointed to exactly enough to recover the mechanisms.

Now I’m sure they will build things called AIs in the near future, and they will pass the Turing test for ordinary folk. But that isn’t what I think of as AI.

In 1976 we had a new system which had a telephone on the console. A visiting PhD asked, “Can you talk to it?” We said yes, it had a keyboard attached. He grabbed the phone and said, “Hello, computer.” That was a level of rising expectations that no one expected from an educated human being. A couple of years later we would have put ELIZA on the system to dazzle the credulous type.

One thing you will notice if you dig into the details of the human nervous system is the incredible levels of just-barely-enough-to-do-the-job that the evolutionary process has produced. The feedback systems are rudimentary, or a cobbling together of other inputs for whatever task exists. Warwick has done some nice preliminary work (connector implants into the nervous system), but as he says, it opens more new questions than it answered.

The level of ignorance of ordinary people about themselves is enough to inspire Lovecraftian fiction depicting the horrors which study will present to society as each new revelation erodes their pretty picture of themselves.

I can’t wait for some bureaucracy to decide an expert AI system is the way to remove human biases from security matters. There’s even an RPG called Paranoia, so you can practice beforehand.

Anonymous Cow September 24, 2015 2:36 PM

…I thought Angie’s List was a joke when it started…

Any review site that requires registration just to read the reviews is a joke. I can understand registration to write and post, but just to read???

Just think if Mr. Schneier required registration just to read these comments. Would he still have as many readers and followers as he does today?

tyr September 24, 2015 3:29 PM

@Anonymous Cow

The last thing a fish notices is water: the culture forces the average person into devaluing anything that lacks a clear price. By paying for reviews you reinforce the idea that they are valuable to you. That has nothing to do with true worth and everything to do with the invisible cultural water of “value”.

The major squabbles about open sourcing and whether the product has value revolve around this point. Commercial software in the early microcomputer days was unbelievable crap; open source, being a hobbyist labour of love, was completely superior to it. The debate still goes on over the same flawed premise that no price tag means no real value.

Bruce could easily screen out the passionate here by asking for money, making them pay for his collecting their opinions. That is the sterile business model of a dying power floundering in the morass of its past mistakes. It makes it almost impossible for them to correct the mistakes imposed by bad ideas.
