Once upon a time, in a land far, far away, there lived a King who liked to play chess. The King was rather good at the game but was frustrated by the fact he always won. And so, he commanded his knights to scour the land searching for a worthy opponent.
The knights were instructed to offer a reward to anyone who could beat the King.
In a small village, in a distant corner of the Kingdom, lived a young boy who was considered to have a gift for the game, and so he made his way to the capital to make his Royal challenge.
On entering the palace, the boy enquired as to what his reward would be. “You can name your prize, my boy,” said the Chancellor. “His Majesty will grant you your heart’s desire if you can beat him.” The boy sat down and waited for the King. Nervous, but confident, he felt sure he could win.
Amazingly for an article on AI, a relevant picture of a fairy tale castle not inspired by Google DeepDream…
The King arrived, and they played their game. Underdogs tend to do well in fairy tales, and so rather predictably the boy won the match. Satisfied that he had been beaten fair and square, the King turned to the boy and commanded: “So, peasant child. Name your reward!”
The boy was silent for a moment before he plucked up the courage to ask:
“Oh Your Highness, my wish is simple.
I only wish for this:
Give me one grain of rice for the first square of the chessboard,
two for the next square,
four for the next,
and so on for each of the squares doubling every time.
I ask to stay in the Palace with my family until all the grains have been counted”.
“You foolish kid,” the King replied. “Your peasantry has stifled your ambition, asking for so few grains of rice. Alas,” [with a sigh] “my Chancellor will see that your bounty be counted before your journey home.” And with that, the King left the room.
The boy stood up and smiled, and as he turned he saw the worried look on the Chancellor’s face. For in that moment, they both knew the boy would be living at the Palace happily ever after…
* * *
I was reminded of this story earlier this week, and when thinking about how best to explain the runaway effect of AI, it struck me that it was the perfect parable for the existential threat that AI potentially brings.
Any mathematicians among you will have already calculated that the final tally comes to 2⁶⁴ − 1 grains – roughly 18 quintillion – and that even if the grains of rice were counted at a rate of a billion grains per second, it would still take nearly 600 years before the boy could go home. This is the power of exponential functions – they quickly run away from you – in this case in just 64 moves.
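If you want to check that arithmetic yourself, here’s a quick back-of-the-envelope sketch in Python (the billion-grains-per-second counting rate is just the illustrative assumption from above):

```python
# Back-of-the-envelope check of the chessboard rice problem.
# Square n holds 2**(n - 1) grains, so 64 squares hold 2**64 - 1 grains in total.
total_grains = sum(2**n for n in range(64))  # = 2**64 - 1

GRAINS_PER_SECOND = 1_000_000_000  # assumed counting rate
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = total_grains / GRAINS_PER_SECOND / SECONDS_PER_YEAR
print(f"{total_grains:,} grains")           # 18,446,744,073,709,551,615
print(f"~{years:.0f} years to count them")  # roughly 585 years
```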
And this is the potential for AI also. If we build machines that can be ‘intelligent’ (see Part I) and then apply them to the problem of programming themselves, then potentially very quickly we will reach the point of the singularity.
What is the singularity?
The singularity is to computers kinda what the greenhouse effect is to the environment. Put simply, the singularity is the ‘tipping point’ after which it is not possible to turn back.
How will this happen?
In 1965, Gordon Moore – who went on to co-found Intel – observed that the number of transistors in an integrated circuit (what we now call a microchip) was doubling every year. Ten years later, he revised his forecast to a doubling every two years, and thus ‘Moore’s Law’ was born. Since then, it has held roughly true that transistor count (a rough proxy for computer processing power) doubles every two years.
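To see how quickly that compounds, here’s a toy projection in Python – not a precise model, and the starting point of roughly 2,300 transistors for Intel’s first microprocessor in 1971 is just an assumption for the sake of the sketch:

```python
# Toy projection of Moore's Law: transistor counts doubling every two years.
# The 1971 starting point (~2,300 transistors) is an assumption for illustration.
count, year = 2_300, 1971

while year <= 2021:
    print(f"{year}: ~{count:,} transistors")
    count *= 2  # one doubling...
    year += 2   # ...every two years

# Fifty years of doubling takes the projection into the tens of billions of
# transistors, the same ballpark as today's largest chips.
```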
Now, in the 21st century, we’ve made some breakthroughs with Artificial Intelligence, mostly because computer hardware has reached a level of sophistication where it is possible to simulate a ‘neural network’ in a machine. I’ll come on to neural nets in a future post, but for now suffice it to say that they are capable of ‘discovering’ patterns in data in a way that simply wasn’t possible before.
We’ll look back at this point in years to come and recognise it as an early milestone in the development of AI. What happens next is that we’ll be able to design a system that is capable of recursive self-improvement (a fancy way of saying it can redesign itself). At an early stage this would be limited by the processing speed of the system’s host computer, but if the AI were targeted at, for example, microprocessor design, then one can easily imagine how very quickly the system would run away from itself.
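To make that ‘run-away’ intuition concrete, here’s a deliberately simplistic toy model in Python. Every number and the growth rule in it are made up purely for illustration: each generation of the system designs slightly faster hardware for itself, which in turn shortens the time the next redesign takes.

```python
# Deliberately simplistic toy model of recursive self-improvement.
# Assumptions (all invented for illustration): each redesign makes the hardware
# 50% faster, and redesign time is inversely proportional to hardware speed.
speed = 1.0           # relative hardware speed
elapsed = 0.0         # months since the first redesign began
redesign_work = 12.0  # months one redesign takes on the original hardware

for generation in range(1, 11):
    elapsed += redesign_work / speed  # faster hardware -> quicker redesign
    speed *= 1.5                      # each generation builds better hardware
    print(f"gen {generation:2d}: month {elapsed:6.1f}, speed x{speed:.1f}")

# The first redesign takes a year; the tenth takes little more than a week.
```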
Who cares?
Well, some pretty big names in the technology world have spoken about this subject in recent times – Gates, Hawking and Musk in particular:
“Success in creating AI would be the biggest event in human history”, wrote Stephen Hawking in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks”.
While not normally one to speak on anything other than the benefits of high technology to humanity, Tesla and SpaceX founder Elon Musk is also spooked by the rapid progress in this field. “I’m increasingly inclined to think that there should be some regulatory oversight,” he says. Musk has called advances in the field of AI “our greatest existential threat”. Perhaps he believes that with everyone driving his electric cars in the future, our environmental impact will not be such a huge deal as it is today.
“I agree with Elon Musk…” says Bill Gates, co-founder of Microsoft and humanitarian philanthropist, “… [I] don’t understand why some people are not concerned”.
I, too, add my name to the list of parties concerned about the impact of AI, and the singularity in particular. But I also believe that, just as we have found renewed optimism in recent years that we will indeed reach a sustainable relationship with the natural world before it is too late, so too – if we think about and discuss the issues around AI early enough – we’ll be able to design a sustainable relationship with the virtual world as well.
What do you think?