It was a characteristically foggy February evening in Oxford as I walked towards the University’s Mathematical Institute in the Radcliffe Observatory Quarter. Luckily the snow had melted, so I could admire the Penrose tiling outside the main entrance on my way to a lecture hosted by the Oxford AI Society and given by David Wood (remember Symbian OS?). The title of the talk was “Why AGI Deserves Immediate Serious Attention”, and I already disagreed on several points. AGI, or Artificial General Intelligence, is the term for AI that can match or exceed human-level intelligence in every task, as commonly portrayed in science fiction. We are currently unsure whether it is even possible to create, and I do not think it deserves our immediate or serious attention. The talk began by comparing AGI to the nuclear bomb.
This is strong rhetoric, and rhetoric that is quite popular in contemporary media. From newspapers to popular science, many authors worry about the implications of AI and robots and how they could change or even destroy our society. The talk concluded with general remarks about the importance of validation and verification, and a call to formalize the still relatively young field of software engineering and to include more safety guidelines. I can’t help but agree with these conclusions, although I feel Alan Winfield managed to make the same point in his recent, very enjoyable and insightful talk on Robot Ethics without mentioning the doomsday scenario.
Also, quite apart from preaching caution, both experts encourage continuing research in this field and even ramping it up further. Let’s start with why that is.
It is not ethical to stop research into AI or robotics
Globally we live longer, and are richer, better educated and healthier than ever before. Most sources agree this is largely due to technology. There are also more people than ever, and, regardless of the impact on our environment, this is a human success story that began with the industrial revolution. But because of our impact on the environment it is not enough. If we want to keep up this positive trend in a way that is sustainable, there is much more work to be done. It doesn’t, however, need to be our work.

Assuming we can produce energy sustainably, then in all other critical aspects of life, such as agriculture, mining, construction, medicine, education and manufacturing, robotics and AI can create a world of plenty for all humanity, and do so sustainably. Economists are already considering what people will do when work is no longer necessary to sustain humanity. One idea is universal basic income; others suggest new jobs will be created, and in fact we are already seeing this: neither Facebook, Twitter, Netflix nor even the Internet as a whole is strictly necessary for human prosperity. For example, people argue that Blockbuster went out of business because of Netflix, which fulfils the same service with far fewer staff and the help of automation. However, Blockbuster could only exist in the first place because we were able to mass-produce VHS/DVD players cheaply enough for a population (already well fed and living in houses with TVs) to afford them. But I digress.

The bottom line is that more working capacity globally, regardless of whether it comes from humans, robots or algorithms, positively impacts our lives and has the capacity to do so sustainably. Going back is not ethical; not exploiting the potential of robotics and AI is therefore not ethical either, and the only choice is to press on.
Pressing on (how to create AI)
So, to the foreseeable future. Improvement will be incremental, driven largely by faster and more efficient computing power and better, more accessible sensors. Many individual problems, such as perception, navigation and natural language processing, are being actively worked on, in many cases in isolation. But nothing great has ever been achieved in isolation, and big leaps in robotics and AI can be achieved only through efficient system integration. There are currently many parallel initiatives trying to formalize middleware (aka software glue) in order to efficiently integrate existing software components into robots. In addition, people are looking at model-based systems engineering to allow designers to build complex systems top-down more efficiently.
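To make the “software glue” idea concrete, here is a minimal sketch of the publish/subscribe pattern that most robotics middleware is built around: components exchange messages over named topics without knowing about each other. All names here (MessageBus, the “pose” topic) are illustrative, not the API of any real framework.

```python
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    """A toy publish/subscribe bus that decouples robot components."""

    def __init__(self) -> None:
        # Map each topic name to the callbacks subscribed to it.
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver the message to every subscriber of this topic.
        for callback in self._subscribers[topic]:
            callback(message)

# A perception component publishes poses; a navigation component
# consumes them -- neither needs to import or even know about the other.
bus = MessageBus()
received = []
bus.subscribe("pose", received.append)
bus.publish("pose", {"x": 1.0, "y": 2.0})
print(received)  # [{'x': 1.0, 'y': 2.0}]
```

The point of the pattern is exactly the integration argument above: components developed in isolation can still be composed into a working system, because the only contract between them is the topic and message format.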
GMV SPS is leading the way in Europe in middleware (HRAF, ESROCOS), model-based systems engineering (ESROCOS, HRAF) and embedded autonomy (ERGO, ADE). I believe these system-integration activities will be key to bringing AI to real-world applications quickly and efficiently.
In addition to the above, I propose three rules we can follow to speed up the real-world impact of intelligent robots:
RULE 1: There is no mind without body.
As discussed above, we cannot separate software from hardware. Any AI will require advances in hardware including computing architectures and sensors. The biggest impact towards a sustainable future will be achieved by AI that is inside robots capable of moving around and interacting with the real world and collaborating with one another. For developments in the near future we should not try to separate the two, as there is much to be gained in advancing both.
RULE 2: Intelligence is in the eye of the observer.
This rule means we will soon have systems that appear intelligent in certain contexts, to certain observers and in limited scenarios. I believe the Turing test is taken too literally; what the test really means, simply, is that if humans consistently observe intelligent behavior in a robot then we can call it intelligent. We may know that such systems are not truly intelligent, just sophisticated, and that they will fail in certain edge cases that may be obvious to humans. Nevertheless, they can automate real-world tasks with a high enough success rate to be economically viable, and these systems will be extremely useful; in fact most if not all of the revolution towards a sustainable and prosperous future can be accomplished by them. It is OK to fake it.
RULE 3: Never lose curiosity.
Learning is the most important part of AI, and our goal should be to create self-improving systems. This shouldn’t be limited to a training phase, as in current machine learning systems, but should include learning during task execution. Even more important is experimenting beyond the original task parameters. These systems should be able to deal with long-term deployment. Currently, robotic systems are limited to one-off deployments or training via simulation. Humans are capable of 100 years or so of continuous autonomous deployment; with our robots we should aim for the capability to learn continuously and cumulatively over 1,000 years or more of real-world deployment.
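The distinction between a fixed training phase and learning during execution can be sketched with a toy online estimator: it refines its estimate with every new measurement that arrives during deployment, rather than being frozen after training. The names (OnlineEstimator, observe) are illustrative, not any real library’s API.

```python
class OnlineEstimator:
    """Incrementally estimates the mean of a stream of measurements."""

    def __init__(self) -> None:
        self.count = 0
        self.estimate = 0.0

    def observe(self, measurement: float) -> float:
        """Fold one new measurement into the running estimate.

        Uses the incremental-mean update, so no past data is stored --
        the system can keep learning indefinitely during deployment.
        """
        self.count += 1
        self.estimate += (measurement - self.estimate) / self.count
        return self.estimate

est = OnlineEstimator()
for reading in [2.0, 4.0, 6.0]:  # measurements arriving during deployment
    est.observe(reading)
print(est.estimate)  # 4.0
```

A real continually learning robot would update far richer models than a running mean, but the shape is the same: the update loop never ends, and each deployment adds to what the system knows.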
Also, as designers we should sometimes look beyond the mainstream and give radical ideas a chance to succeed or fail.
All of the above is simply Artificial Intelligence as defined by the recent report by the House of Lords:
Technologies with the ability to perform tasks that would otherwise require human intelligence and the ability to learn or adapt to new experiences or stimuli.
Such a definition may produce sophisticated AGI that we can talk and interact with seamlessly. But intelligence itself is not well defined and intelligence is not necessarily consciousness. Currently we do not know of any creature or system that we could call intelligent but not conscious and similarly we do not know of any intelligence that exists outside of consciousness (or a body). It remains to be seen if we will call our greatest creation, AI, a system or our friend. For now maybe we can try to aim for children.
As I walked back through the foggy streets of Oxford I stopped by a street performer juggling fire. Watching his display, which any leading robotics lab would struggle to reproduce on its best robot platforms, I reflected on the talk I had just heard. I do think what we call AGI is possible. I also think we are a long way off, but, looking ahead to the longer term, or perhaps to a more immediate unforeseen breakthrough in AI, we might well ask: are we juggling with fire? One mistake, one overambitious throw, and will we get burned? Possibly. But where would humanity be without fire?
Author’s notes: the hyperlinks are not to definitions, nor even to the most important topic of the highlighted text; I do hope, however, that they point to the most thought-provoking related subject. While I have discussed the implications and direction of AI and robotics, I did not give an overview of the subject. For those interested, I recommend the following reading (as a start):
Margaret A. Boden, Artificial Intelligence: A Very Short Introduction (Paperback), 23 Aug 2018; ISBN: 9780199602919
Alan Winfield, Robotics: A Very Short Introduction (Paperback), 27 Sep 2012; ISBN: 9780199695980
Author: Aron Kisdi
The author’s views are entirely his own and may not reflect the views of GMV