Last evening I had the great pleasure of attending a panel discussion on Machine Learning hosted by The Royal Society. Earlier this year, at an evidence-gathering session, I was asked to provide my own thoughts on the impact Machine Learning is likely to have on the world and the ethical challenges it poses, so it was great to be invited to the first event where the issues surrounding this exciting area of computing were publicly aired.
The chairman for the evening was Marcus du Sautoy, best known for popularising mathematics and, most recently, for his myth-busting 2015 BBC documentary on algorithms. Enthused by Google’s recent triumph with AlphaGo – the first computer programme to beat a professional human player at the ancient and complex game of Go – he only half joked about the prospect of the mathematics profession one day being made obsolete by machines…
The first speaker to present was Dr Sabine Hauert, lecturer in Robotics at the University of Bristol. Dr Hauert provided a fascinating tour of applications of Machine Learning in diverse fields such as mimicking the swarm behaviour of insects and birds in drones, as well as how Machine Learning can be used in medical nano-technology in order to optimise the attachment of nano-devices to cancerous cells.
She was followed by Professor Chris Bishop, Head of Research at Microsoft and Fellow of Darwin College, Cambridge. Professor Bishop offered a wonderful overview of the background of Machine Learning, starting with Frank Rosenblatt’s work at Cornell on the first neural networks, which resulted in the creation of the “perceptron algorithm” in 1957. He described how this research was built upon nearly three decades later, when the first ‘deep’ neural networks suddenly allowed computer scientists to solve problems previously thought impossible.
The real breakthrough, as Professor Bishop explained, has come in the last three to four years thanks to the perfect storm created by Big Data, significant computing power at low cost, and advances in algorithmic design (see my earlier article explaining algorithms). Deep Learning has dramatically surpassed the successes achieved by Expert Systems in fields such as Speech Recognition. For years we were unable to progress past a level of about 80% accuracy, and improvements were only infinitesimally incremental; Deep Learning has taken the accuracy of Speech Recognition to near perfection in very short order. Problems such as visual scene interpretation, previously thought impossible, are now achievable with Deep Learning at near-human levels of performance.
The tonic to the unbridled enthusiasm came from Professor Maja Pantic, from Imperial College London. Professor Pantic’s rather candid account of not just the limitations of Deep Learning but also the fact that we understand how Deep Learning actually works about as well as we understand the workings of the human brain, provided a clear picture of just how far we have yet to progress in this field before machines can truly master tasks that are for humans very simple, such as reading facial expressions. Particularly fascinating was her example of how working with children with autism has allowed researchers to better design algorithms that can analyse facial expressions both holistically and in their component parts, and in doing so, they have been able to create machines that autistic children can interact with more successfully than they can with other people.
The brief of The Royal Society in its Machine Learning initiative is to help educate the public about recent advances in the field and the potential opportunities they create, while also highlighting the areas of concern and risk. It is particularly validating to me to see how much of the debate centres on the ethics of the use of Machine Learning, and in particular the philosophical questions that are raised as a result. The need for greater inter-disciplinary involvement, particularly from the social sciences, was acknowledged on multiple occasions throughout the session. Despite this nod to philosophers, it’s still disappointing that events such as this major strongly on the technological advances and their potential applications, as opposed to the informed debate on the philosophical questions that is necessary as we implement the technology in society.
Particularly alarming was the Q&A session at the end of the debate. The most sensible questions came via the online feed (although one can presume these are easier to moderate than the show of eager hands in the audience). From Twitter came questions about the likely impact on employment of the introduction of Machine Learning in the commercial world, as well as the dangers of a power imbalance between the companies who own the technology and those of us who are mere consumers of it. To me the greatest sadness here is that people tend to focus on how Machine Learning might affect their employability and productivity, as opposed to the concentration of capital wealth that is likely to occur if there isn’t sufficient state and regulatory involvement in inward investment to this industry.
Leaving aside these big issues, perhaps it’s telling that all the other questions were variations on the theme of “how do we stop robots from killing us?” It seems only the academics have the optimism that their creations won’t be abused by their fellow man, or that there won’t be unintended side effects of their development. Professor Bishop was clear that the ‘singularity’ is so distant that it’s not something we need to conjecture over. Professor Pantic was unequivocal in her view that the ‘Terminator’ scenario is merely the domain of science fiction, although she did highlight her concern that more thought needs to go into the development of self-driving cars, particularly with regard to the question of human control. It was Dr Hauert, however, who made the most poignant comment on the topic of human/machine co-existence. Despite the very clear potential for her research in Machine Learning-enabled swarms of drones or nano-devices to be used to cause significant harm, she offered the viewpoint that we should have respect for all things, living and mechanical: if we adopt an approach where it’s acceptable to take a hammer to a robot but not to a creature, we are likely to develop robots that show equal indifference to anything not in the same form as themselves.
Only 9% of respondents polled by The Royal Society had even heard of Machine Learning, but it’s clear from events such as this that 100% of us have a view on its development and impact. The conclusion? With the advancement of computer science, we all need to be a little more philosophical. Roll over, Socrates!