Artificial Intelligence

The first part of this post is from the book listed below. The second part consists of random notes related to Artificial Intelligence. Some notes are more fictional…

 

Life 3.0: Being Human in the Age of Artificial Intelligence
Max Tegmark; 2017
MIT Cosmologist / Physicist

 

Synopsis

These are some notes I’ve taken while reading Max Tegmark’s book, Life 3.0. Mr. Tegmark reviews some of the key concepts and current efforts in the Artificial Intelligence field and how it is transforming our technology. He also goes deep into some of the political, philosophical and social impacts of Artificial Intelligence and shares theories on how we can handle some of these in the future. The author takes a somewhat pessimistic view of the future of AI: though he embraces it as a step in human evolution, he has deep concerns about its impacts. He is an advocate for openness in AI research and a co-founder of the Future of Life Institute, an outreach organization for spreading awareness of the current efforts in Artificial Intelligence.

 

Definition

The study of intelligent agents – any device that perceives its environment and takes actions that maximize its chance of success at some goal.
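To make the definition concrete, below is a minimal sketch of an intelligent agent in Python. The thermostat example and all the names in it are my own illustration, not from the book.

```python
class ThermostatAgent:
    """A trivial 'intelligent agent': it perceives its environment
    and takes the action most likely to achieve its goal."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp  # the goal

    def act(self, observed_temp: float) -> str:
        # Choose the action that maximizes the chance of reaching the goal.
        if observed_temp < self.target_temp:
            return "heat"
        if observed_temp > self.target_temp:
            return "cool"
        return "idle"

agent = ThermostatAgent(target_temp=21.0)
for reading in [18.5, 21.0, 23.2]:
    print(reading, "->", agent.act(reading))
```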

 

Turing Test

Alan Turing, “Computing Machinery and Intelligence”, 1950 – a machine’s ability to be indistinguishable from a human. At the time, Turing said the channel of communication for this test would be text-only (no speech or other visual representation). An evaluator would communicate with the machine, and if they cannot distinguish it from a human, then it has passed the test.

One of the first machines considered to pass this test was ELIZA in 1966. ELIZA would respond in generic and vague ways, yet convincingly enough that users couldn’t prove it was not a human.
http://www.masswerk.at/elizabot
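To get a feel for how ELIZA produces those generic, vague replies, here is a toy sketch in Python. The rules are my own simplification of the pattern-and-template idea, not Weizenbaum’s actual script.

```python
import re

# Toy ELIZA-style rules: match a keyword pattern, echo part of the
# input back inside a generic template. No understanding is involved.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\b(mother|father)\b", re.I), "Tell me more about your family."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # vague fallback keeps the conversation moving

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
print(respond("The weather is nice."))  # Please, go on.
```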

The “Reverse Turing Test” is used to distinguish humans from non-humans (bots). For example, CAPTCHA is a reverse Turing Test.

 

Artificial General Intelligence (AGI)

This is intelligence that could successfully perform any intellectual task that a human can. It could include the intelligent being’s capacity to experience consciousness.

This is also referred to as Strong AI or Full AI. Beyond the expectations of classical Artificial Intelligence – reasoning, problem solving, learning, communicating, achieving goals – AGI would include other capabilities such as physical movement, physical autonomy, sensing, cognition, imagination, consciousness, evolutionary computation, etc. There has yet to be a computer or entity capable of AGI.

AI research has been around since the 1950s, but AGI research did not really begin until around 2006.

AGI is also difficult to analyze, as it can be unclear when something is done intelligently. For example, the Chinese Room argument contends that AGI / Strong AI is not possible.

 

The Chinese Room (aside)

This section wasn’t in the book but I include it here as an aside. I found it to be an interesting take on the difficulties of achieving AGI.

John Searle’s thought experiment attempts to refute the claim that a Strong AI is genuinely intelligent. The premise goes:

A computer is given an input in Chinese and has a set of instructions it can run against it. After these instructions, the computer generates an output in Chinese. A human interacting with this computer may find that it passes the Turing Test, as it is able to accept and reply in Chinese. However – does the computer really understand Chinese?

Searle argues he could be in a room with the same set of instructions and be given the same Chinese characters. With enough time and resources, he would be able to create the same Chinese output – and yet he still would not understand Chinese.

Therefore this machine is not Strong AI (AGI).
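Searle’s point is easy to see in code: a program can produce fluent-looking replies from a rulebook alone. The tiny lookup table below is my own toy example, not a serious model of such a system.

```python
# The "room" reduced to a rulebook: input symbols map mechanically
# to output symbols.
RULEBOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你是谁": "我是你的朋友",    # "Who are you?" -> "I am your friend"
}

def chinese_room(symbols: str) -> str:
    # The operator (or CPU) follows the instructions mechanically;
    # no understanding of the symbols ever enters the process.
    return RULEBOOK.get(symbols, "请再说一遍")  # "Please say that again"

print(chinese_room("你好吗"))  # a fluent reply, with zero understanding
```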

 

Goals Problem

The fundamental problem (or mission) for Artificial Intelligence is having the technology achieve some goal. Below is a list of some of these goals, each with its own particular research. In AGI, there is an overarching problem of having the intelligent entity be able to distinguish these goals and how much of them is controlled by humans. For example, these are goals set forth by humans; the AGI could come to realize its own goals that humans have either yet to understand or simply cannot understand.

– Reasoning and Problem Solving
– Knowledge
– Planning
– Learning
– Language
– Sensory or Perception
– Motion

AGI furthers some of these goals and adds things like consciousness, self-awareness, creativity, and instinct/feeling.

A risk with setting goals is how the intelligent entity understands or executes them. For example, if an AI is asked to find the most efficient way of making paperclips, could it threaten humankind if it determines that killing humans is one way of achieving this goal?
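The paperclip worry is essentially a problem of misspecified objectives. The toy optimizer below (my own illustration, not from the book) picks a harmful plan because the stated objective never mentions the harm; encoding the constraint explicitly changes the choice.

```python
# Hypothetical plans scored only on the stated goal (paperclip count);
# "harm" is a side effect the naive objective never mentions.
plans = [
    {"name": "run the factory normally", "paperclips": 100, "harm": 0},
    {"name": "strip-mine the town", "paperclips": 900, "harm": 1},
]

naive = max(plans, key=lambda p: p["paperclips"])
print("naive objective picks:", naive["name"])  # strip-mine the town

# The unstated constraint only counts once it is encoded explicitly.
safe = max((p for p in plans if p["harm"] == 0),
           key=lambda p: p["paperclips"])
print("constrained objective picks:", safe["name"])  # run the factory normally
```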

 

Consciousness and Experience Problem

Sentient – humans have the capability not only to perceive and understand experiences, but to be conscious of them. There is no scientific proof thus far of what consciousness is – it cannot be evaluated or measured. There are four forms of consciousness:
– Sensory – something that can be felt by the observer
– Practical – knowing the observer’s surroundings, being able to interact with and recognize the environment
– Reflective –
– Reflexive
There is a conflicting relationship between the physical and the non-physical when it comes to consciousness. Physical elements, such as those felt by the body and processed in the brain, may affect one’s state of consciousness, but there is also a non-physical process that occurs entirely independent of the body.

One of the goals of AI is knowledge, and one conflict that causes a problem here is experience. Experience is knowledge gained through some involvement or exposure. But this knowledge could be gained during the experience, or afterward during reflection. It also requires a level of interpretation, because the same experience could lead to different types of knowledge. Lastly, there are three different levels of experience:
– first-hand experience, gained directly by the observer
– second-hand experience, gained through another observer
– hearsay, which varies in reliability

There is also the question of causality – the explanation of why something happened, cause and effect. Philosophically, humans analyze the final effect to determine the cause; the question is whether an intelligent entity would be capable of creating a cause before knowing the effect – for example, the way humans act on intuition or instinct. Since an AGI runs on a goal (or should focus on a goal), this could be a conflict, as not knowing the effect is like not knowing the goal.

 

Super Intelligence and Human Threat

If AGI is feasible and we are able to create an intelligence that keeps evolving, it would then be able to surpass human intelligence. At that point, it may be able to reason about goals that are beyond (and different from) human goals.

This scenario could happen through “recursive self-improvement” or an “intelligence explosion”. The intelligent entity is able to learn and expand itself, much like a human child growing and expanding their knowledge. This intelligence would surpass human intelligence, moving beyond AGI into what is known as Super Intelligence or Hyper Intelligence.

It is much like how a mouse put in a cheese maze may think it is smarter than the human who put it there. The mouse is not capable of understanding its situation or the human interacting with it. A super intelligence may likewise surpass human comprehension, and we may be able neither to understand nor to control it.

This becomes a threat to humans if the super intelligence figures out, or sets, a goal for which humans are not necessary or important – much as humans give a mouse’s life less priority than another human’s life.

But as a counterpoint – if an entity does become AGI and is capable of such things as consciousness, this raises ethical issues: can the entity be shut down or dismantled? Is this the same as killing a human? Does the entity have certain rights, so that it cannot be discriminated against? There are various arguments like these in the areas of AI ethics and philosophy.

 

Future of Life Institute (FLI)

A volunteer-run research and outreach organization focused on areas of AI risk. Its mission is to safeguard the visions and goals of AI research and to identify potential risks to humanity.

The organization created the Open Letter on AI Safety, signed by many AI researchers, industry leaders, and philosophers. It calls out potential pitfalls of AI research and calls for common ground, emphasizing that AI goals be focused on protecting humanity and civilization.

 


 

The Singularity

The singularity (aka the Technological Singularity) is a term used to mark the point at which A.I. will be able to recursively and progressively evolve itself such that its intelligence and power quickly overcome those of its human creators. Other terms used are intelligence explosion or superintelligence. After this point the A.I. is referred to as a Super A.I., whose limits are unknown.

Skeptics believe this will mark the beginning of the end for human existence. Theories of how this could happen include cases where the super A.I., focused on some goal, decides humanity is negligible while trying to achieve it. Other theories involve human annihilation resulting from humans trying to interfere with the super A.I.’s pursuit of its goal.

Another theory states that A.I. will achieve superintelligence more gradually, and that it will not necessarily be a dramatic explosion. Therefore the singularity will be something A.I. slowly evolves into. Below is an example of these theories and how the A.I.’s intelligence may grow over time.
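The sketch below contrasts the two trajectories numerically. The growth constants and units are arbitrary – my own illustration of the shape of each theory, not any real forecast.

```python
# Toy model: "explosion" compounds its own gains each step
# (recursive self-improvement), while "gradual" adds a fixed
# increment. Constants are illustrative only.
explosion, gradual = 1.0, 1.0
print(f"{'step':>4} {'explosion':>10} {'gradual':>8}")
for step in range(11):
    print(f"{step:>4} {explosion:>10.1f} {gradual:>8.1f}")
    explosion *= 1.8  # each generation improves the next by 80%
    gradual += 0.5    # steady, incremental progress
```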

 

Roko’s Basilisk

A user named Roko posted a thought experiment on the site LessWrong stating that if a future AI system came into existence, it would torture those humans who did not help bring it about. The idea is that once the singularity event occurs, the superintelligence would simulate human history to determine how it evolved. In this simulation, the A.I. would torture those humans who were opposed to or skeptical of the eventual existence of the superintelligence, and in particular those humans who knew that superintelligence was possible and that this simulation might happen. Therefore, just by knowing about Roko’s Basilisk, a person’s simulation is destined for torture – unless they change their actions now.

 

Consciousness

Some questions about consciousness concern how it relates to materialism, or even to areas of panpsychism. These types of questions are used to argue that true superintelligence could not happen. One side of the argument is that there are biological explanations for consciousness, and therefore once we are able to determine them – or once the A.I. is able to determine them – consciousness can be replicated. The other side of the argument is that consciousness is not material and therefore cannot be replicated; therefore true superintelligence cannot happen.