Thursday, 21 November 2019

The field of AI suffers from false anthropomorphism

Imagine acting like a fleck of paint for a few hours. Pointless? Try rolling left and right at random, pretending that you are an irregular ball of cloth. But why, you ask? OK, maybe try acting like a loose thread hanging from the hem of a T-shirt? That makes no sense.
These silly pretend activities are examples of the millions of things that you could be doing in your daily life, but aren't, because they are pointless, boring and uninteresting. They are hard even to come up with as examples, because they are so far from what people find interesting to think about.
They also illustrate an overlooked truth about human thinking: thinking serves evolutionary goals. Everything the brain does, it does because it helped the survival of the genome; the brain is a problem-solving machine with never-ending unconscious goals. These are well-being, reproduction and, in the case of human brains, exploration and the gathering and systematizing of knowledge to allow better problem-solving.
All things and activities that people find interesting to think about, or to do, can be traced to those evolutionary goals. Some interests may be atavistic and less suited to the modern world, but their origin is evolutionary nonetheless. Music? Builds community and signals the ability to make new sounds. Painting? Explores the field of visual perception. Mathematics? Systematizes quantitative knowledge. Acting like a table for ten hours? No evolutionary benefit. For every interesting activity there are millions of possible uninteresting ones, like memorizing a phone book.
This means that the field of AI, as well as popular perceptions of what AI ought to be able to do, suffers from false anthropomorphism in attempting to create a machine that "thinks". People don't "think"; people chase fixed thought-interests and solve fixed unconscious evolutionarily beneficial goals. Rationalization, in the Freudian sense, is an added problem that hinders self-awareness. People are not only unaware of their true motivations, but they are also very good at coming up with false "rational" reasons for thinking the things they do. In doing so, people generally cling to a false rational and "enlightened" vision of themselves, which is more akin to Plato's featherless biped with broad flat nails (the plucked chicken Diogenes presented in mockery of that definition of man).
A machine that can solve any general problem that is describable with words (or symbols) is all that is needed to emulate human intelligence. Exactly what problems that machine would solve, and what goals it should have, is an entirely separate problem, a problem of motivation. It could be argued that the two are also separable in humans; catatonia patients can be viewed as fully conscious and aware, yet disconnected from all motivation to "think" and "do" anything, as best illustrated in popular culture by the movie "Awakenings" starring Robert De Niro. Likewise, a machine that can solve any general problem would simply sit and do nothing until given a problem. To create a machine that behaves like a biological organism, we then need a general problem-solving machine with a single fixed goal, or problem: "work towards reproducing yourself or other things like yourself".
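As a rough sketch of that separation (the class and method names below are mine, purely illustrative, not an established design), the solver and the motivation module can be pictured as distinct components, with a "catatonic" agent being one whose motivation is simply absent:

```python
class GeneralProblemSolver:
    def solve(self, problem):
        # Stand-in for whatever machinery builds a causal chain to the goal.
        return f"plan for: {problem}"

class Agent:
    def __init__(self, solver, motivation=None):
        self.solver = solver
        self.motivation = motivation  # None: able to solve, but unmotivated

    def step(self):
        if self.motivation is None:
            return None  # sits and does nothing until given a problem
        return self.solver.solve(self.motivation())

# A biological-style agent is the same solver wired to one fixed goal.
organism = Agent(GeneralProblemSolver(),
                 motivation=lambda: "reproduce yourself or things like yourself")
print(organism.step())  # plan for: reproduce yourself or things like yourself
```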
This take on intelligence leads to the logical question: what, then, is "general problem-solving"? General problem-solving is the ability to create a chain of causes and effects that reaches the defined goal as a real-world end-state, through real-world actions. This truism helps distinguish between the essence of human thinking and its properties. Induction, deduction, inference and so on are properties of conscious thought that the classical field of AI obsesses over too much; they are just the flat nails of Plato's chicken. The true essence of human intelligence is the ability to predict and shape the future. How this is achieved in an artificial intelligence agent, and what exactly constitutes a desirable future (a goal), is a problem of implementation and ethics.
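A minimal toy sketch of this definition, under my own assumptions (integer states and two hand-picked actions, nothing more), is a search for a chain of actions whose effects carry the current state to the goal end-state:

```python
from collections import deque

def find_causal_chain(start, goal, actions):
    """Breadth-first search for a chain of actions (causes) whose
    effects carry the start state to the goal end-state."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, chain = frontier.popleft()
        if state == goal:
            return chain  # the cause-and-effect chain that reaches the goal
        for name, effect in actions.items():
            nxt = effect(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, chain + [name]))
    return None  # no chain of known causes reaches the goal

# Toy world: states are integers, actions are simple causal effects.
actions = {"add_one": lambda s: s + 1, "double": lambda s: s * 2}
print(find_causal_chain(3, 14, actions))  # ['double', 'add_one', 'double']
```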
My personal take on ethics is that human ethics are a byproduct of solving evolutionary goals in the presence of other entities (human and animal) chasing the same goals. This may give rise to a hard-wired subconscious interest in cooperation. More importantly, though, it gives rise to a subconscious interest in not interfering negatively with those other entities' attempts at solving those goals, which is a complicated way to say "be nice".
In the context of AI, ethics can be viewed as forbidden real-world end-states. Many action plans and causal chains can lead to a desired goal. A psychopathic AI would choose the most efficient causal chain to the goal, irrespective of consequences. An ethical AI would assess all foreseeable consequences of a plan of action and only act on a causal chain that does not give rise to undesirable end-states other than the goal.
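Continuing the toy sketch from above (the states and actions are again my own illustrative choices), the difference between the two planners can be expressed as pruning any causal chain whose foreseeable consequences include a forbidden state:

```python
from collections import deque

def find_ethical_chain(start, goal, actions, forbidden):
    """Like find_causal_chain above, but prunes any causal chain whose
    foreseeable consequences include a forbidden end-state."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, chain = frontier.popleft()
        if state == goal:
            return chain
        for name, effect in actions.items():
            nxt = effect(state)
            if nxt in forbidden:
                continue  # an undesirable end-state: never act on this chain
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, chain + [name]))
    return None  # every chain to the goal was unethical; refuse to act

actions = {"add_one": lambda s: s + 1, "double": lambda s: s * 2}
# The "psychopathic" shortest chain from 3 to 14 passes through state 7
# (double, add_one, double); forbidding 7 forces a longer, acceptable route.
print(find_ethical_chain(3, 14, actions, forbidden={7}))
# ['double', 'double', 'add_one', 'add_one']
```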
This approach then handles the common sci-fi trope in which a rogue AI decides to achieve its otherwise lofty goals in a way that leads to a nonsensical or bad final situation, as in Arthur C. Clarke's book "2001: A Space Odyssey".
We should not, however, be too demanding of AI; humans make those common-sense mistakes as well, best epitomized by the phrase “We had to destroy the village in order to save it”.