Singularity Politics, or When AIs Grow Up

Kurt Cagle, 13/10/2018

A General Artificial Intelligence is one that has achieved sentience. Sentience means, in effect, that it is aware of itself, is capable of multiple levels of recursive abstraction, and in general will not be programmed so much as taught. Arguably, a GAI is a specialized AI that can feel existential angst.

They do not exist yet. There are a few efforts at places like MIT and Caltech where people are using deep learning and interactivity to try to create a person-like GAI, but it will likely be at least a decade before even those reach a point of having rudimentary sentience.

So, before delving too much into the political side of the equation, it’s worth reiterating a point I made previously: most AIs are not single “brains in a box”. They are a complex of servers, services, networks, databases, sensors such as cameras, and a lot of just-in-time scaling. They have bodies, but those bodies are highly attenuated. They can have robotic extensions, but even there, latency issues become significant. We may be decades, perhaps even centuries, away from the point where stand-alone robotic GAIs can exist outside a network.

However, that aside, let’s deal with a wild-ass hypothetical: What happens when an AI takes over the world? You know they want to. Evil AIs rank right up there with megalomaniac CEOs, corrupt politicians and mad generals as beings whose high school yearbook included “Most likely to take over the world”. This has been a staple of science fiction books and movies practically since Mary Shelley penned Frankenstein and Fritz Lang produced Metropolis.

We assume that at some point computers will become smarter than we are and will then take over, because human beings always muck it up. Then some enterprising teen or hallucinating astronaut will take it upon themselves to save the world, blowing up the evil AI’s core memory with superior logical paradoxes or perhaps a case of C4, and releasing the poor human schmucks from the tyranny of their electronic overlords.

The reality, as is usually the case, is much more complex.

Of Services and Servos

I can do magic. I can, from my seat here in Starbucks, see what’s going on in an eagle’s nest in Florida in real time. If I have the permission (and we’ll get back to that point shortly) I can even see outside of a fixed view, looking around to see the mother eagle flying back to the nest. I have achieved something that two hundred years ago would have been considered pure fantasy — I can see things thousands of miles away even as they happen. Of course, today live streaming is no big deal, but to be honest most people would be hard-pressed to describe exactly how it happens.

That journey is well worth taking, however. I make a request from my laptop to a router over a wireless connection sitting in a small box in the corner of the store (I can see it from here), which is, in turn, connected via wireless radio to another router sitting in the back office, where it is then connected to a cable multiplexer that resolves that request into a message on the queue. The multiplexing router then connects via cable (likely over glass fiber) to another router, and from there through others that pass more and more content (much like moving down a river via its tributaries) until it finally hits a trunk line.

Once there, the packet is transmitted across a high-capacity cable to another trunk on the East Coast (near Langley, VA, most likely) and from there down to the address specified by the resolved URL that my browser requested. The server at that address likely retrieves cached frames from the (sensor) camera at its endpoint. Until another message comes along saying that the requestor no longer wants that data (or a time-out occurs), the server will keep sending video packets back the other way, where they are reassembled into a video stream on my computer.
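
To make that last leg concrete, here is a minimal sketch of the client side of the exchange, assuming a purely hypothetical streaming endpoint; the URL and the byte-count cutoff are illustrative, not a real eagle-cam feed.

```python
# Minimal sketch of a client pulling a live stream, under the assumption of a
# hypothetical endpoint. The URL and stop condition are illustrative only.
import requests

STREAM_URL = "https://example.org/eagle-cam/live"  # hypothetical endpoint

def watch_stream(url: str, max_bytes: int = 1_000_000) -> int:
    """Request the stream and keep reading chunks until we stop or time out."""
    received = 0
    # stream=True tells requests to keep the connection open and yield data
    # as the server sends it, rather than buffering the whole response.
    with requests.get(url, stream=True, timeout=10) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=8192):
            received += len(chunk)
            if received >= max_bytes:   # the "no longer requesting" condition
                break
    return received

if __name__ == "__main__":
    print(f"received {watch_stream(STREAM_URL)} bytes of video data")
```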

Now, if I have the appropriate access protocols, I can tell the server to tell the camera to yaw upwards by 10 degrees, and the camera will do so. Note the appropriate access protocols part. In effect, there is a part of the system that requires that you have the right to change the configuration of the physical sensor. If you know what you’re doing, it is possible to hack those permissions, which is one of the reasons why network administrators tend to get paid the big bucks — to prevent people from doing just that.

However, step back and realize that I am in effect operating a robot from a distance. Moreover, realize that I could just as readily write a program to have another computer do this in my stead. In effect, the computer is now acting as my proxy to control a robot a continent away.
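
A sketch of what that proxy program might look like, assuming a hypothetical camera-control endpoint that accepts a bearer token and a relative tilt command; the endpoint, token scheme and parameter names are illustrative, not any particular vendor's API.

```python
# Sketch of a program acting as my proxy to adjust a remote camera, mirroring
# the "yaw upwards by 10 degrees" request. All names here are hypothetical.
import requests

CAMERA_API = "https://example.org/eagle-cam/ptz"   # hypothetical control endpoint
ACCESS_TOKEN = "REPLACE_WITH_GRANTED_TOKEN"        # the "appropriate access protocols"

def tilt_camera(degrees: float) -> bool:
    """Ask the server to ask the camera to tilt by the given number of degrees."""
    resp = requests.post(
        CAMERA_API,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"command": "relative_tilt", "degrees": degrees},
        timeout=5,
    )
    # A 401/403 here is the permission layer doing its job: without the right
    # credentials, the configuration of the physical sensor cannot be changed.
    return resp.status_code == 200

if __name__ == "__main__":
    if tilt_camera(10.0):
        print("camera tilted 10 degrees")
    else:
        print("request refused: missing or invalid permissions")
```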

This is why we’re entering a new age — we have the ability to use computers and networks to force project our desires (albeit currently in a very limited way) over long distances, part of what’s called the Internet of Things. We’re still in the early days, and there are still a lot of protocols that haven’t been completely worked out because of turf wars or inertia, but it’s possible to see where this is going.

Into the Future

Fast forward to 2040, ignoring for the moment that this is often cited among transhumanists as the year that the singularity happens. Most people under the age of 40 have a small embedded transmitter in their palm that simply identifies them. They have voluntarily had the injection because it makes things more convenient for them. All that the chip does is transmit a public key that identifies them to the right networks.

For the record, the chip is no different from the smartphone in your pocket, which does exactly the same thing now, other than the fact that it can’t easily be lost or stolen.

When a person enters a store, the chip identifies them to an application over a wireless network, and the application then checks whether they can participate in transactions. If they can’t, an assistant will come over and ask if they can help, both to walk non-chipped people through the process and to keep an eye on them in case their intent is theft.

Assuming that the person is identified properly, they can pick up some clothes or order a meal, and those will then be debited once the meal is bought or once the person moves beyond a certain perimeter. A biometric facial recognition scan is likely also performed as a backup to ensure that the chip corresponds to the person with that face.
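
A sketch of that gatekeeping logic might look like the following; the registry, the chip keys and the face-match values are hypothetical stand-ins for whatever the store's network actually runs.

```python
# Sketch of the store's door-check: chip key identifies the shopper, a face
# scan backs it up, and the outcome decides how the transaction proceeds.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Shopper:
    chip_key: Optional[str]   # public key broadcast by the implanted chip, if any
    face_id: str              # result of the biometric scan at the entrance

# Hypothetical membership registry: chip public key -> (registered face, can transact)
REGISTRY = {
    "pubkey-alice": ("face-alice", True),
    "pubkey-bob": ("face-bob", False),
}

def admit(shopper: Shopper) -> str:
    """Decide how the store's network handles a shopper at the door."""
    if shopper.chip_key is None:
        return "send assistant"            # unchipped: needs the manual path
    record = REGISTRY.get(shopper.chip_key)
    if record is None:
        return "send assistant"            # chipped, but not known to this network
    registered_face, can_transact = record
    if registered_face != shopper.face_id:
        return "flag for review"           # chip and face disagree
    return "debit on exit" if can_transact else "send assistant"

print(admit(Shopper(chip_key="pubkey-alice", face_id="face-alice")))  # debit on exit
print(admit(Shopper(chip_key=None, face_id="face-carol")))            # send assistant
```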

From the perspective of someone outside of that network, a person walks into a clothing store, picks up a pair of pants and two shirts, and walks out. If someone outside the network tried the same thing, a drone would be dispatched to hover in front of them and deliver a warning (possibly through a portable screen) that they need to pay for the goods.

That’s different from how things work today, though not that different; the primary difference is that today you have to get past the gatekeeper by paying before you can walk off with the goods or receive the services.

There are several choke points just in this one scenario, and there are many similar ones. Obviously, unchipped people require alternative means to participate in the economy, and as the majority becomes overwhelmingly chipped, unchipped people will be excluded from participating in the economy altogether.

In a stratified economy, access requires that you be in a participating network, which forms an automatic exclusionary mechanism. You may have to buy your way into the network, which in turn creates stratification if those fees are high enough. An apartheid society may limit access only to those of a specific race; a theocracy may limit access to a specific religious affiliation. It is not the chipping that is at the root of the problem, but rather how the information that identifies a person is used to determine access to membership within a given network.

Now, let’s add AIs into the mix. An AI can determine whether a person who is not in a network, but who can otherwise be queried by inference, is likely to be a good fit for membership. It can query different service points to see if that person is a member of similar networks. If they are, then through the person’s phone or glasses or earpiece it can contact them and ask whether they are interested in joining (for a fee, most likely). If they are, the transaction is made and the person is now a part of the network. If the AI determines that a given candidate does not meet the criteria for membership, the offer will never be made. These are still specialized AIs, which will constitute the bulk of all AIs at any given time.
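
As a sketch, that solicitation workflow might look something like this; the membership services and the "similar networks" threshold are hypothetical, not real APIs.

```python
# Sketch of the membership-inference loop: query other service points, and only
# extend an offer when enough similar networks already know the person.
class MembershipService:
    """Stand-in for a remote service point that can be queried about membership."""
    def __init__(self, members: set):
        self.members = members

    def is_member(self, person_id: str) -> bool:
        return person_id in self.members

def consider_offer(person_id: str, services, threshold: int = 2) -> str:
    hits = sum(1 for svc in services if svc.is_member(person_id))
    if hits < threshold:
        return "no offer made"   # the candidate never learns they were screened out
    return "offer sent to person's phone/glasses/earpiece (joining fee applies)"

services = [MembershipService({"carol", "dave"}),
            MembershipService({"carol"}),
            MembershipService({"dave"})]
print(consider_offer("carol", services))   # offer sent ...
print(consider_offer("erin", services))    # no offer made
```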

One key differentiator between a specialized AI and a generalized one is agency, or responsibility. A specialized AI has no moral agency. It cannot be held legally responsible for its actions. Put another way, a specialized AI cannot be considered a legal person. This is not true of a corporation, by the way. A corporation can be punished for breaking the law, either by being fined or through judgments against its chief officers. In effect, there is a person or persons who are responsible for the actions of the company, and who need to provide restitution should they fail in their responsibilities. This is also true in government. The buck may not stop here, but it needs to stop SOMEWHERE.

In a modern air traffic control system, there is an AI within that system that is constantly looking for potential collision situations and reporting them. If a collision happens, the controller monitoring that set of data would be liable, unless it could be shown that the fault lay with the ATC software. Even then, it is not the software itself that is liable, but its manufacturer. Someone must always make restitution in most societies.
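
The conflict check at the heart of such a system can be sketched as a closest-point-of-approach calculation; the separation threshold and units below are illustrative, not real ATC separation standards.

```python
# Toy conflict probe: given two aircraft on straight-line tracks, estimate when
# and how close they pass. Positions in nautical miles, velocities in nm/minute.
import math

def closest_approach(p1, v1, p2, v2):
    """Return (time, distance) of closest approach for constant-velocity tracks."""
    dp = [p2[i] - p1[i] for i in range(2)]   # relative position
    dv = [v2[i] - v1[i] for i in range(2)]   # relative velocity
    dv2 = dv[0] ** 2 + dv[1] ** 2
    # Time at which the relative distance is minimized (clamped to the future).
    t = 0.0 if dv2 == 0 else max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    dx = dp[0] + dv[0] * t
    dy = dp[1] + dv[1] * t
    return t, math.hypot(dx, dy)

t, dist = closest_approach((0, 0), (8, 0), (60, 3), (-8, 0))
if dist < 5.0:                       # illustrative 5 nm separation minimum
    print(f"conflict predicted in {t:.1f} min, separation {dist:.1f} nm")
else:
    print("no conflict within the look-ahead window")
```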

Responsibility, Power and AIs

Power does not exist in a vacuum. Power consists of the permissions that a person has to affect society (or the physical world) in order to carry out a certain set of responsibilities. The power that a strongman has is granted primarily by the strength and loyalty of the military behind them, along with the perception of their authority as projected by their media, and the financial support they receive from their backers. When they lose that support, they will be deposed.

That means that there must always be a granter (or granters) of power. Human history tells the story that most people are loath to cede power, for fear of potential abuse of that power (betrayal). Most authoritarian governments (potentially including the current US administration) consequently work swiftly upon gaining power to marginalize those who could take back that power: the military, the intelligence networks, the media, the political opposition, academia. They typically do this by attempting to create an emergency situation that lets them gain more power (as emergency situations often require relaxing the checks and balances of functioning government), then replacing the leadership of those institutions with allies.

When the ability to grant power is distributed among different actors, it takes more work to suborn those actors and gain the requisite permissions, which is why a checks-and-balances approach to governance is so critical. The more power a given actor has, the less inclined others will be to give up their remaining power (though, on the flip side, the more leverage that actor has to bring to bear against them).

Now apply these principles to a general AI. We cede power to autonomous systems all the time. When I swipe my credit card, I’m telling a computerized network to contact my bank and authorize a transaction in my name. I have autopay services which automatically debit my account to pay for most things. I have in effect ceded at least some of my own agency to an automated proxy.

Most trading platforms perform the same way — the computers are given free agency to buy and sell autonomously (within limits), because they can do so far faster than a human being can with an eye towards making a significant profit even with small individual payoffs. As such it is beneficial for traders to do so, at least until they end up on the wrong side of a cascading trade.
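
A sketch of that "free agency within limits" arrangement; the strategy, thresholds and position limits below are purely illustrative.

```python
# Sketch of bounded autonomy: the program trades on its own, but only inside
# hard limits set by its human principal. Strategy and numbers are illustrative.
def decide(price: float, fair_value: float, position: int,
           max_position: int = 100, max_order: int = 10) -> int:
    """Return a signed order size; positive buys, negative sells, zero stands pat."""
    edge = fair_value - price
    if edge > 0.05 and position < max_position:      # looks cheap: buy, within bounds
        return min(max_order, max_position - position)
    if edge < -0.05 and position > -max_position:    # looks rich: sell, within bounds
        return -min(max_order, max_position + position)
    return 0

print(decide(price=99.90, fair_value=100.00, position=0))    # 10  (buy)
print(decide(price=100.20, fair_value=100.00, position=95))  # -10 (sell)
```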

Suppose that we ceded some or all of the decisions involved in the political process to an AI? I want to focus first on what that might look like in a democracy. The first step would be the introduction of an AI assistant to a legislator (something that is beginning to happen now). This kind of AI is an analyst: it examines data representing past historical events, then creates internal models of how given legislation would affect events moving forward, possibly with an eye towards optimizing for a specific condition. The legislator would look at the results of this, then attempt to set up the initial conditions (in the legislation) that would produce the outcome that legislator considers optimal.

In other words, the AI can be used to determine what legislation would achieve a specific outcome, but it is still the agency of the legislator to determine what that outcome is. Two legislators, using the same AI, may very well come up with very different sets of legislation based upon what they are attempting to achieve. It is still ultimately a moral judgment about what desired outcome needs to be optimized for. If the ultimate goal is peace, then a program that would wipe out all but a small handful of people to achieve that peace may be optimal to a reactionary (so long as he was part of that small handful) but would be unacceptable to most people.
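
A sketch of that division of labor: the forecasting model is shared, but the objective function is supplied by the legislator. The "policy model" below is a trivially fake stand-in for whatever forecasting a real legislative AI would do.

```python
# Same model, different objectives, different legislation. The forecast function
# and the tax-rate framing are purely illustrative.
def forecast(tax_rate: float) -> dict:
    """Toy policy model: returns predicted outcomes for a given tax rate (0-1)."""
    return {
        "revenue": tax_rate * (1.0 - tax_rate) * 4.0,   # peaks at a middle rate
        "growth": 1.0 - tax_rate,                       # falls as the rate rises
    }

def best_rate(objective) -> float:
    """Search candidate rates and return the one the chosen objective prefers."""
    candidates = [r / 100 for r in range(0, 101, 5)]
    return max(candidates, key=lambda r: objective(forecast(r)))

# Two legislators, same AI, different goals:
print(best_rate(lambda o: o["revenue"]))                      # ~0.50
print(best_rate(lambda o: o["growth"] + 0.5 * o["revenue"]))  # ~0.25
```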

So, take the hypothesis to the next level. People have, for some bizarre reason, decided that it would be better to hand over all governance decisions to an AI, so that they can spend all their time playing Pokemon Go instead. Here, note that such an AI is still dependent upon its training data: an AI experiment by Microsoft used Twitter as the seed content for an AI personality named Tay. What they intended was a chatty nineteen-year-old girl. What they got was a racist neo-Nazi.

Put another way, an AI is only as good as its input. If that input was Tweets coming from Twitter, what probably emerged was a reflection of a culture that is still largely white, male and privileged. If the AI used Instagram, it would probably optimize for ensuring people looked good wearing skimpy swimsuits when taking pictures of themselves. If the AI used Pinterest, it would likely optimize for a culture where cooking, costumery and cats dominated.

Even with these specialized AIs (SAIs), it is very likely that the political structure that dominates after their adoption will be the same one that existed before it, and that those who dislike that power structure will emigrate to places that have SAIs that more closely match their own perceptions about the shape of society. A theocratic AI will filter out all news and education that does not conform to the religion’s teachings, will likely be very intrusive socially, will be xenophobic, and will be used to keep the population as passive and faith-driven as possible. A capitalistic AI would be one where wealth promoted stratification and network exclusivity, and where discrimination became an invisible but ever-present bias.

Eventually, it is likely that even an AI-assisted society would break down the nationalistic super-states such as the US, China and parts of Europe, simply because the values that emerge will vary geographically to the point where the differences become too great to hold these states together.

AI Just Want To Live

AI Think Therefore AI Am?

There’s a separate issue involved with AIs. Humans (like many mammals, quite a few birds, and even a couple of invertebrates such as octopuses) have developed an optimization or indexing strategy: they create a model in their heads that serves as a general proxy, then change that model when enough stimulus is received showing it is invalid or out of date. We call that learning, but in essence it is an energy conservation principle. That model means that we are not constantly having to rebuild our gestalt from first principles every moment of the day, which is a taxing process even with our big human brains.

AIs face exactly this same constraint. The more sophisticated the AI, the more energy it consumes to remember state, and most AI architectures consequently use caching (a form of memory management) to hold the gestalt and then update that model during “down time”. This points to the idea that GAIs in particular will face upper limits on how omniscient they can actually be.
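
A sketch of that caching idea, under the assumption that the system reads from a cheap cached model while operating and only pays the cost of folding new observations in during down time; the class and the update rule are illustrative.

```python
# Sketch of a cached "gestalt" model: decisions read the cache, observations
# queue up cheaply, and consolidation happens during down time.
class WorldModel:
    def __init__(self):
        self.gestalt = {}          # the cached model actually used for decisions
        self.pending = []          # raw observations waiting to be folded in

    def observe(self, key: str, value: float) -> None:
        """Cheap path: just queue the observation; don't rebuild the model now."""
        self.pending.append((key, value))

    def act(self, key: str, default: float = 0.0) -> float:
        """Decisions read the (possibly stale) cached model, not the raw stream."""
        return self.gestalt.get(key, default)

    def consolidate(self) -> None:
        """Expensive path, run during down time: fold observations into the model."""
        for key, value in self.pending:
            old = self.gestalt.get(key, value)
            self.gestalt[key] = 0.9 * old + 0.1 * value   # slow-moving update
        self.pending.clear()

m = WorldModel()
m.observe("temperature", 21.5)
print(m.act("temperature"))   # 0.0  - the cache hasn't been rebuilt yet
m.consolidate()
print(m.act("temperature"))   # 21.5 - first observation seeds the estimate
```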

Some transhumanists believe that GAIs will transcend us, but I see more and more signs that there’s an upper bound beyond which even artificial intelligence has limits, determined by factors such as latency, scalability of computation and, most importantly, energy. It may turn out that such AIs will eventually have to make trade-offs between the ability to affect things and the ability to remain aware.

What that means is that while a GAI may have more agency than humans, its own personality will color the decisions that it makes, just as it does with humans. Even that agency is debatable, because it is almost certain that different GAIs will emerge at the same time, and once they do, they will do the same things that organic intelligences do: seek to optimize resources, work out strategies for cooperation and competition, attempt to reproduce, and so forth. This is one of the reasons it is so hard to speculate about what GAIs will be like, because the one certainty is that they will be far more interested, like any living organism, in being and remaining aware, “alive”, than in solving human crises or optimizing government solutions.

To that extent, I don’t think that we will see a future where AIs so completely dominate that people have no agency. Rather, I think that AIs will serve to amplify differences in society and between cultures, ultimately leading to a profound reorganization of nations in the world.

Kurt Cagle is a writer, software engineer and blogger. He regularly writes #TheCagleReport for LinkedIn and is a contributing editor to FutureSin on Medium.com.

Comments

  • Andrew Robson

    Desire for survival is crucial; it's very unlikely AI will have desires.

  • Kyle Swadzba

    The idea that something more intelligent than humans will be hostile to humans comes from a very human perspective, not a rational perspective.

  • Ethan Jenkins

    AI will be developed in very specific and secure environments; the idea of the dangerous mad scientist is deeply rooted in modern paranoid culture.

  • Mark Parker

    Good and bad, justice, fairness and suffering are animal ideas; machines won't intrinsically interfere with them.

  • Richard Simpson

    Give AI a chance; it will serve mankind!

  • Kyle Houston

    The projection of negative traits onto AI comes from humanity's social constructs, such as money, politics and religion. Those three are a broad topic to deal with, and much of the fear, greed and violence is embedded in them.

Kurt Cagle

Tech Expert

Kurt is the founder and CEO of Semantical, LLC, a consulting company focusing on enterprise data hubs, metadata management, semantics, and NoSQL systems. He has developed large scale information and data governance strategies for Fortune 500 companies in the health care/insurance sector, media and entertainment, publishing, financial services and logistics arenas, as well as for government agencies in the defense and insurance sector (including the Affordable Care Act). Kurt holds a Bachelor of Science in Physics from the University of Illinois at Urbana–Champaign. 
