How to Program Humans

Kurt Cagle, 14/10/2018
Most people are uncomfortable with the idea that they can be programmed, a discomfort that can very quickly escalate to full-blown denial. Yet there is ample evidence that such programming is remarkably (indeed, entirely too) easy, and anyone involved in media, social media, advertising or organized religion can generally lay out most of the basics. Call the people who engage in this social programming “social programmers”, or for brevity, “sogrammers”.

Likes and Unlikes

The first step to programming your human is to heighten the us vs. them factor. This process usually involves playing upon a person’s confirmation biases. The sogrammer identifies their target demographic and looks both at what that demographic likes and, more importantly, at what they vehemently dislike. Those dislikes are important information, because they usually serve as the basis of what separates “us” from “them”.

Let’s say, for instance, that you (as a target) are pretty cool about most flavors of ice cream, but you dislike chocolate. A client of the sogrammer sells only vanilla ice cream, in a market where chocolate ice cream is the dominant flavor. So the sogrammer writes a sogram (a campaign) that begins to question the benefits of chocolate ice cream, claiming that chocolate ice cream eaters are sneaky, shiftless, underhanded, and not to be trusted, while vanilla ice cream eaters are champions of the true faith.

The campaign at this stage relies on little lies and innuendo, claims that can’t easily be proved but that play into suspicions non-chocolate ice cream eaters have harbored for a while. It also makes those same people feel virtuous about their own habits: after all, it confirms that there were reasons they didn’t like chocolate ice cream.

Herd Behavior

Once this happens, the next stage is to surround those people with like-minded individuals. Some of those people are plants, people who are paid to talk about those dirty chocolate ice cream eaters so that the targets discover that they are not alone in holding that belief. Isolation is a terrible thing; for many people, even if they happen to be right, holding a non-conforming belief is difficult because it breeds a great deal of self-doubt. The sogrammer works hard to keep potential backsliders surrounded by people reinforcing that message.

This is also strengthened by making arguments for not questioning authority. If the evidence is on the side of the non-believers, then the sogrammer works hard to cast doubt on not only the validity of the evidence but upon the motives of those who have gathered that evidence. This is when the conspiracy theorist shows up; suddenly all of those chocolate eaters (and especially the chocolate ice cream makers) are in the service of the Rothschilds or the Illuminati. “Everyone” knows this to be true, and there were several vanilla lovers who were secretly murdered by agents of the Chocolate Consortium for digging too deep into the truth.

By this stage, the Vanilla target has narrowed his information sources down to “approved” channels, and has effectively placed his complete trust and faith in the Vanilla movement. If he’s in a relationship, his significant other has either drunk the Kool-Aid (or at least licked the cone) or is on the way out. While this may be a wake-up call for some, for others it only reinforces the belief that their former friend or lover was not the right kind of person. This is part of the reason that, once the sogrammer has captured a person, the likelihood is much higher that they will either end up single or get involved with another true believer, reinforcing the vicious cycle.

At this point, the target is now the sogrammer’s to do with as they please. If a scandal emerges with the Vanilla Institute (the sogrammer’s client) the target now believes that this is an attack by their enemies, and sees it as a personal affront. If the Vanilla Institute is guilty of using fake vanilla, that person will respond that fake vanilla is a good thing. Logic no longer makes much difference to their way of thinking, because they have long since lost the desire to apply critical thinking to this one area.

Waving the Flag

The sogrammer has other tools in his arsenal. One of them is the use of symbols. A flag or shield, a musical motif or jingle, a book or pamphlet, a tool or weapon: all of these are symbols. A symbol is shorthand for something else. Such symbols do not have meaning in their own right; rather, they are abstractions that can be carefully tailored to mean whatever the sogrammer wants. The vanilla bean becomes a symbol of purity and goodness, the chocolate bean one of corruption and evil.

Symbols are powerful because they bypass rational thought and are often meaningless to outsiders, further reinforcing the us vs. them mentality. Media targeting people in the “in-group” tends to be heavily laden with such symbolism, either to strengthen in-group ties or to vilify the out-group.

A related tool is repetition, though this is subject to a caveat. The mind is, at its core, built around pattern recognition, especially with sound. The first time that a sequence of words or tones is played, the mind stores it as input to be parsed, and if that same sequence is repeated a few times, that strengthens the association. However, after a while, the effectiveness of repetition drops off — it becomes background noise and is ignored.

However, when the same signal comes from different sources and is presented in different ways, this actually strengthens the signal, especially when there is already a predisposed bias to believe it. This is one reason that advertising campaigns frequently air several spots simultaneously, even though this is more expensive than creating one spot and repeating it. The sogrammer needs to mix up the message, or the pattern-recognition filter within the human mind will flag that message as having already been “consumed”.

Fake News / Fake People

To that end, both Big Data and AI bots are powerful tools for the sogrammer. Big Data, the combination of databases and sophisticated analytics tools, makes it possible to determine which wedge issues (the likes and dislikes that people hold) will most likely make people receptive to certain messages, and also to identify the people who fit within any given demographic.

As with the Facebook breach above, this also lets the sogrammer identify network graphs of people with related interests. This means that if a given message worked for one target person, it will likely also work reasonably well on a significant percentage of that person’s friends. This has been the holy grail of marketers for several generations, as one of the central challenges with any marketing effort has been spending the smallest amount of money to attract the largest sympathetic audience.
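
The friends-of-friends expansion described above can be sketched as a simple breadth-first traversal over a friend graph. Everything here, the names, the graph, and the hop limit, is purely illustrative:

```python
from collections import deque

def expand_audience(graph, seed, max_hops=2):
    """Collect everyone within max_hops of a seed target in a friend graph.

    graph: dict mapping a person to a set of their friends.
    Returns the set of reachable people (excluding the seed itself).
    """
    seen = {seed}
    frontier = deque([(seed, 0)])
    audience = set()
    while frontier:
        person, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand past the hop limit
        for friend in graph.get(person, set()):
            if friend not in seen:
                seen.add(friend)
                audience.add(friend)
                frontier.append((friend, hops + 1))
    return audience

# A toy friend graph: one receptive seed plus friends-of-friends.
friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
    "dave": {"bob", "erin"},
}
print(sorted(expand_audience(friends, "alice")))  # ['bob', 'carol', 'dave']
```

If a message landed with “alice”, the two-hop neighborhood is the cheap candidate audience; “erin”, three hops out, falls outside it. Real targeting systems score edges by interest similarity rather than treating every friendship equally, but the expansion itself is this simple.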

Artificial intelligence agents (aka bots) amplify the messages once both targets and anti-targets (those most likely to be actively hostile to a given message) have been identified. A bot is not sentient in any normal sense; it is a program that looks for certain patterns of word construction in order to produce generated responses. Bots are difficult to write in a pure messaging environment, primarily because in an extended conversation the ability of the algorithms to adapt to changes in the conversation diminishes over time, and their reaction to non sequiturs in particular usually raises the Uncanny Valley flag.
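
At its simplest, such a bot is little more than a table of regular expressions mapped to canned replies, with a generic dodge when nothing matches. The rules and responses below are invented purely for illustration:

```python
import re

# Hypothetical rule table: a pattern to watch for, and a canned reply.
RULES = [
    (re.compile(r"\bchocolate\b", re.IGNORECASE),
     "Wake up! Big Chocolate doesn't want you to know the truth."),
    (re.compile(r"\bvanilla\b", re.IGNORECASE),
     "Vanilla lovers see the world as it really is."),
    (re.compile(r"\bwho are you\b", re.IGNORECASE),
     "Just a concerned ice cream fan like you."),
]

def bot_reply(message):
    """Return the first canned response whose pattern matches, else a dodge."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Interesting point. What makes you say that?"

print(bot_reply("I heard chocolate sales are up"))
# prints the Big Chocolate reply
```

The generic fallback is precisely where such bots fail in extended conversation: every non sequitur gets the same dodge, which is what trips the Uncanny Valley flag.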

However, in short message environments where the bot posts, they can often survive for quite some time before people out them. The Turing interval, the period of time before a person can accurately guess that a given agent is a bot rather than a human being, is growing longer. Ironically, the true believer is more likely to be perceived as a bot, because their range of critical analysis has dropped to such an extent that they often fall into stock behaviors and responses. They have been successfully programmed.

Cognitive Dissonance and Uncanny Valleys

It should be noted that human minds are extraordinarily plastic, however. Cognitive dissonance is one way that minds under siege work to deprogram themselves. Most programming efforts go through the human conscious mind — the part of the brain that is most aware of itself and has the greatest degree of external focus. This consciousness is usually highly aware of the passing of time, focuses on graphical imagery, and is very sensitive to social cues.

The subconscious mind, on the other hand, has less awareness, but processes a larger amount of information. It’s the subconscious mind that reacts to peripheral stimuli, and it is typically the part of the mind that compares the externally apparent model of the world with the internal one. It is, in effect, a bullshit detector; the subconscious mind often picks up cues signaling that the internal and external models are no longer consistent.

This can manifest as both psychological and physiological symptoms — anxiety, anger, defensiveness, unfocused fear, insomnia, increased nightmares and so forth. However, in many cases this eventually emerges as doubt. Fervent support cools down, and the need to double check grows. In Germany c. 1938, support for Hitler and the Nazi leadership was quite passionate. By 1943 many Germans were actively questioning what was happening, even among previously committed supporters.

Part of this has to do with the fact that it is hard to sustain the Big Lie for a long period of time. The more reality differs from the official illusion, the more people widen their filters, and the less they trust what once were official news organs. The devastating bombing of Dresden, Germany brought home for many there the fact that what they were hearing was no longer consistent with what they were experiencing — Germany was not winning the war.

Most psyops professionals understand this well. Urban areas are often very difficult to “program” for any length of time, because there are too many potentially contradicting pieces of information. Rural areas, on the other hand, often have a very limited number of information channels available, and as such are frequently slower to break through such dissonance (they also have an older, more conservative demographic). It’s one reason that the most reactionary supporters are likely to live in rural areas, regardless of country, race or other factors, and one reason that they are disproportionately targeted by sogrammers to begin with.

The Price of Crossing the Line

Cambridge Analytica is hardly the first company to throw ethics by the wayside while taking advantage of every technological tool in its arsenal to manipulate a population, but it has certainly broken new ground there. It is increasingly obvious that we have reached a stage, technologically, where we need to talk seriously about ethics, and about the role of technologists (and politicians) in better policing their own.

My expectation is that Cambridge Analytica will face enough governmental scrutiny to end in bankruptcy within a couple of years. The rats are already deserting the company to start new firms under less scrutiny, the only way such sogrammers can survive. The company itself (and likely its board and founders) is discovering that while politicians may not necessarily be noble, the taint of obviously compromised elections is something few of them will tolerate for long.

As for Facebook, every dominant social media company since the 1970s has reached a watershed point where its pursuit of money (and the greed of its shareholders) has overtaken its benefits as a platform. Ironically, the deployment of bots as a seemingly clever tool to increase apparent engagement and profitability has ultimately culminated in the bots destroying the viability of the platform itself, something which has happened to every dominant social media company over the last fifty years.

Facebook may face similar scrutiny, but what will ultimately destroy it is the trust it has squandered in its pursuit of dividends for its stockholders. It will die a slower death, overcome by the sclerosis of bots and fake accounts pointing to fake news, as the next generation chooses a less contaminated venue to connect with one another. I expect to see, within the next decade or so, Facebook become another America Online, Yahoo or CompuServe, desperately trying to sell itself off for pennies on the dollar, as fake AIs waste trillions of CPU cycles trying to convince other AIs that they are real boys.

Alan Turing would have laughed his ass off.

Kurt Cagle is a writer and technology critic, who has been expecting something like this for the last ten years. #TheCagleReport


Kurt Cagle

Tech Expert

Kurt is the founder and CEO of Semantical, LLC, a consulting company focusing on enterprise data hubs, metadata management, semantics, and NoSQL systems. He has developed large scale information and data governance strategies for Fortune 500 companies in the health care/insurance sector, media and entertainment, publishing, financial services and logistics arenas, as well as for government agencies in the defense and insurance sector (including the Affordable Care Act). Kurt holds a Bachelor of Science in Physics from the University of Illinois at Urbana–Champaign. 

   
