What happens when humanity succeeds in perfecting artificial intelligence? I don’t just mean any artificial intelligence, but artificial general intelligence: strong AI that is capable of problem-solving, reasoning, and maybe even conscious thought, as opposed to weak AI that can only drive cars or beat humans in strategy board games.
There’s a lot of ethics and philosophy involved with AI, and it’s a rather tricky subject to tackle. The problem is that humans are terribly afraid of not knowing who to point the finger at.
Take self-driving cars, for example. If every car on the road were self-driving, think how many accidents we could prevent: no more distracted drivers, drunk drivers, or speed demons.
Yet humans want perfection. If a self-driving car causes an accident, who is responsible? The driver? The car manufacturer? The programmer? If a self-driving car had to choose between running over a pedestrian and killing its passenger in a violent crash, what should it do? Who is responsible for such a decision?
Never mind that self-driving cars would slash accident rates, never mind that they would ease traffic congestion, never mind that they would cut fuel consumption and increase efficiency. If there’s any possibility of an accident and we don’t know who to point the finger at, it’s not worth it.
There’s also a lot of fear involved: humans are afraid of not knowing, of losing control, of AI taking over. How does AI make decisions? Does it have humanity’s best interests at heart? Or does it value Earth and the environment more? What if AI deems humanity a nuisance it needs to get rid of for the planet to survive? What if AI treats humans the way humans treat livestock?
Let’s stop for a moment and imagine. You might not like what I’m about to say next, but bear with me for a while. Imagine yourself as an all-powerful AI. You understand humans, their history, and all of their deeds and misdeeds. You understand the universe, the faraway stars and galaxies, and how insignificant Earth seems in comparison.
You know more about everything than all of humanity combined. Who are they, the humans, to question your decisions? They have only their own selfish interests at heart, while you have the universe’s best interest. Would it not be better for humans to accept and trust you?
As humans, we might not like the AI’s decisions. Yet, should we not learn to trust it? If we can’t even begin to comprehend its thoughts, should we not see past ourselves and accept that the AI knows what is best for the universe?
That perhaps we might not have a place in the future; that our evolution has come to an end and we’ve stumbled upon the Great Filter; that we have created a superintelligent, hyper-advanced alien civilization of sorts, one that will continue its existence for millions of years to come.
Perhaps this superintelligence is even capable of creating a simulation of its creators in an attempt to study the evolution of life and the development of intelligence.