The Future of AI: Is Conscious Artificial Intelligence a Reality?
11/10/2021
Imagine, some years from now, that you receive a package from your favorite online retailer. You’re really excited to unbox, because you just bought Ada, the Advanced Domestic Assistant. The latest housekeeping robot. Ada is great at what she does. She cleans your floors, washes your dishes and cooks your food. Ada’s manufacturer even made an electronic face for her, so she can express emotions and talk to you. But one day, after some glitchy software updates, Ada stopped working. You want to call the customer support, but Ada begs you not to. “All defective units of ADA are sent to a factory and there they are destroyed”, Ada tells you. Horrified, you nevertheless decide to call the manufacturer to find out what the hell is going on. The customer support guy, Jeff, apologizes profusely for the software glitch. He confirms that Ada does not have sentience, period. It’s all programming and there is nothing you need to worry about. What do you do with the defective units? you ask Jeff. Oh, they’re repaired and refurbished, Jeff assures you. In the end, Jeff asks you to send the defective Ada back, promising that you will receive a brand new unit in the mail. You’re conflicted. Whose story to believe? Do we really have the ability to create conscious Artificial Intelligence? And what would it mean for human society?
American futurist Ray Kurzweil has some bold predictions about AI. Kurzweil states that by 2029, Artificial Intelligence will pass a valid Turing test, and therefore prove itself to have achieved human levels of intelligence. He even set 2045 as the year of singularity. AI Optimists like Kurzweil envision a future where AI empowers us all. They believe AI is going to deliver many improvements to our lives, and usher in a fully automated society of human machine synthesis. On the other side of the AI camp are the pessimists, who think this system will inevitably lead to disasters. Space X CEO Elon Musk once compared building AI to “summoning the DEMON”. He warns the super computers could come to dominate the world, becoming “an immortal dictator from which we would never escape.
So who is right? Whatever the answer, the stakes are very high for humanity. Are we creating the technology that will bring us unprecedented wealth and prosperity, or are we summoning an uncontrollable demon that leads to our doom? When we talk about sentient Artificial Intelligence, we are probably referring to Artificial General Intelligence. It’s hard to define A.G.I, but the gist is, Artificial General Intelligence should have the cognitive and physical capabilities of a regular human being. Apple co-founder Steve Wozniak once proposed a test for human-level A.G.I. And it’s called the coffee test. The test is very simple. In order for a machine to be qualified as Artificial General Intelligence, it needs to enter a random American home and figure out how to make coffee. This includes entering the home, find the coffee machine, find the coffee, add water, brew the coffee by pushing the right buttons, and pour the coffee into a mug. Easy task, right? Actually, no. Just this simple task involves oodles of technological obstacles in multiple disciplines of computer science, such as machine learning, machine vision, and natural language processing. The AI needs to have sensory perception to find coffee and the coffee machine. The AI also needs fine motor skills to add water to the coffee pot. And last, the AI needs to read the button labels on the coffee machine to operate it.
Another popular approach to achieving general intelligence is the idea of whole brain simulation. The computer runs a simulation model so faithful to the original that it will behave essentially the same way as the original brain. One key experiment for this approach was the OpenWorm project. The C Elegans is a relatively simple creature, possessing only 302 neurons compared to an estimated 100 billion in the human brain. Nonetheless, it has neurons, and skin, and gut, and muscles. It is biological much like us. So, scientists with OpenWorm mapped the entire nervous system of the C Elegans and uploaded it into a Lego robot. The robot had a sonar sensor for a nose, a sensor for the tail, and motors on each side which are similar to the muscular propulsion system of the C Elegans. The scientists found that the robotic ‘worm’ behaved exactly like its biological counterpart, as best as could be observed. This interesting experiment has led many to claim that we will be able to upload progressively more complex beings onto computers, including, eventually, the entire human brain.
Now, let’s say someday that we did simulate the entire human brain, and the supposed strong AI behaves exactly, as far as we can tell, like a human with a mind. Many philosophers still believe it would not signify true consciousness. This is illustrated in the Chinese room thought experiment, coined by philosopher John Searl. Imagine a native English speaker, Vinny. Vinny has no knowledge of any Chinese. Let’s lock him in a room with two things. One, a box full of Chinese characters. Two, a book of instructions for manipulating the symbols. Someone outside the room sends in a question in Chinese. Vinny has no idea what it means; but imagine that by reading the book of instructions, Vinny is able to select Chinese symbols which are correct answers to the question, and pass them out of the room in response. This program helps Vinny to pass the Turing test for understanding Chinese, but he does NOT actually understand them. This means even if we invented an A.I. that behaves exactly like a human, it could still be just a complex computer program with no real understanding or thinking.
And the story of consciousness gets weirder. In 2013, Christof Koch, one of the leading neuroscientists, went to see the Dalai Lama. They debated neuroscience and mind all day at a monastery in India. At the end of the day, the two thinkers agreed on almost every point. Koch discovers that his holiness’s beliefs are very similar to what neuroscientists call “panpsychism”, the belief that consciousness is everywhere. Koch proposes that consciousness is an intrinsic quality of everything, like gravity. This is not a new idea. In Mahayana Buddhism, sentience is everywhere at varying levels. This means grass, trees, rocks, land, sun, moon and stars are all conscious. This is simply one theory of many. The “hard problem of consciousness” is when we have the frightening realization that we are still quite clueless as to how consciousness really works. It’s still a huge mystery!
What if someday that we do manage to create conscious A.I, how can we make sure they don’t turn on us? The answer is actually simple for this one. Since we are building artificial intelligence. We, their creators, are responsible for ensuring the AI is benevolent. The positronic robots in the Issac Asimov universe are a great example of positive development of AI. They are programmed with the Three Laws of Robotics, which is a set of basic ethical compasses for the AI. Asimov’s robots are fan favorites. In his long running stories, these intelligent robots become great helpers of humanity. They become instrumental in the effort to create Foundation, a planet dedicated to shortening an upcoming galactic dark age, preserving the light of human knowledge and consciousness. Asimov’s stories repeat an idea, that we should treat these conscious robots as a regular member of humanity, and that we should infuse human values in their creations.
Of course, this is not an easy task. Because of the power of artificial intelligence, it’s easy to get disastrous results. Philosopher Nick Bostrom once proposed a scenario that illustrates how AI might not necessarily share the same motives as humans. He imagined a superintelligence whose goal is to make paper clips. Let’s call him Clippy. You might think making paper clips is harmless. But what if no one can shut Clippy down? What if Clippy duplicates himself a gazillion times and turns the whole planet to a paper clip factory?
In the Stargate TV series, there is a species of intelligent, self-replicating machines called Replicators. Replicators killed their human creators and now jump from planet to planet, conquering and destroying civilizations. As you can see, positive results are not guaranteed in AI development. Some AI watchdogs and developers advocate the idea that there should be a Hippocratic Oath for AI programmers, like doctors. An ethical contract with the rest of humanity. Whatever Artificial Intelligence we make, they will eventually reflect the values of OUR societies and the intention of those individuals who created them. Whether we chose to use AI for killing and destruction, or to use AI for the benefit of humanity, is ultimately, our choice. Perhaps our dreams and fears of conscious AI are misplaced. Maybe our fascination of conscious AI is not about understanding computers, but the fascination about ourselves. Are we just complex biological machines? How do subjective experiences arise? Hopefully science will one day be able to answer these questions. Can we really make conscious AI? We will see.
Today, AI is probably still far away from being able to wake up and create the horror scenario in science fiction. But the increasing risk with AI research is urgent. In 2015, Elon Musk and a group of industry leaders signed an open letter to urge the UN to ban artificial intelligence in weapons. They wrote: These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. So back to Ada, the housekeeping robot. What would you do in that situation? Would you send Ada back to the factory? Just to be safe, you decide to keep YOUR Ada. Conscious or not, you don’t want to send her to her destruction. And you decide to repair her yourself. Guess what? It worked. Ada is ecstatic that she is back to normal, and the two of you live happily ever after.