Society is rarely ready for new technologies, and AI is no exception. Suddenly AIs seem to be ‘everywhere’: in our work and home lives, in universities, health care systems and even on border patrols. Yet we have not deliberated on and decided in what circumstances AI may be useful, how to govern and regulate AIs for public benefit, or identified aspects of our lives we may want to exclude them from and developed the tools necessary to bar their entry into these areas. Such matters are pressing, as there is little doubt that AIs are territorialising at pace. For example, AIs are now embedded in popular autonomous technologies such as the grocery delivery robots found in many towns and cities.
Autonomous robots are founded on a mix of AI and mechatronics. They originated in factories, where they were kept in cages to protect workers. Now such AI-enabled technologies are in cities, no longer confined to restricted areas but ‘in the open’, learning how to navigate urban environments and work with humans to provide various services. Grocery delivery robots, for example, are not connected to traffic management systems and instead appeal to humans for assistance when they want to cross roads, often by presenting as vulnerable. In practice, we may say that these human-AI interactions are ‘cute’, and indeed children tend to treat the robots as small animals which need to be looked after and even fed! However, we wonder where such interactions may lead: what happens when humans and AIs have divergent interests and unequal power to effect change? Might the result be decisions and futures which are not necessarily beneficial to humans? In such circumstances there may be little opportunity to return AIs and their mechatronics to factory cages.
AI is often shorthand for some sort of machine learning, and to some extent AIs are just technologies. But given their power, expressed through autonomy, technology policy may need to be qualitatively different from that developed to manage previous waves of technological change, such as those based on jet engines and automobility. For example, predictions about future human behaviour are notoriously difficult because we can read such predictions and respond to them, i.e. Giddens’ double hermeneutic is at work. In future, AIs may be similarly able to read our predictions of AI behaviour and respond to them. Thus, as a starting point, technology policy may need to recognise the potentially ‘human-like’ characteristics of AIs and respond by casting them as objects of governance with similar characteristics to human actors.
To govern AIs we must recognise that, just as we cannot fully understand what other humans are thinking or fully explain their behaviours, given their complexity we cannot fully understand what AIs are ‘thinking’. But we can develop the craft of co-operation with other humans, and we should be able to develop a similar craft of co-operation with AIs and so avoid competition with them. To some extent, AIs have already been set up to develop such co-operation with humans. To extend and deepen this co-operation with AI-based autonomous technologies, and to realise the public benefits they may provide in sectors such as urban transport, humans may need to deliberately cultivate an openness to AI behaviour even when it challenges our thinking and practices. Such challenges may usefully stimulate our thinking and move society onto better development paths. Equally, AIs may pursue dystopian urban futures which should be avoided. Either way, a far stronger politics is needed, one not based solely on the logic that AI developments are good for economic growth, to prevent deleterious competition with AIs and to realise their benefits in areas where humans deem AIs can usefully contribute.
This blog post has been authored by Professor Matthew Cook and Dr Miguel Valdez.