How can we optimise the advances and revolutions that are made in the field of AI Technology, while ensuring Human Rights are kept at the forefront of such developments?
We were delighted to co-organise this event, a collaboration between the Queen Mary Global Policy Institute (QMGPI) and the Society for Computers and Law (SCL). The event featured a host of respected international subject experts and took place in collaboration with Queen Mary's Centre for Commercial Law Studies (CCLS), Queen Mary's Digital Environments Research Institute (DERI), the International Bar Association (IBA) and speakers from The Alan Turing Institute.
The regulatory and policy environment of Artificial Intelligence (AI) is in a state of flux and is attracting increasing attention from policymakers and business leaders on a global scale.
Advances in AI technology are clearly having an impact on most aspects of government activity, business operations and people's lives. Proposals to let the technology advance unregulated are now under question, as the potential pitfalls of such unhindered development become increasingly apparent.
While there is a tension between developing the technology to harness its potential benefits and the harm its use may cause to different aspects of people's lives, there seems to be a tacit consensus that Human Rights mark the limit at which the potential harm takes precedence over the assumed benefits of technological development.
Our panel of international experts came together just two days before International Human Rights Day on 10 December. The event set the stage for further discussion on the topic, with the panel engaging with a number of themes, including the current state of AI development and use in both public and private entities, and the actual and potential repercussions of that use on people's enjoyment of Human Rights.
AI touches upon our daily lives, often unnoticed
Our panel event began with an informative overview of the current landscape of AI and technology from one of our moderators, Dr Theodora Christou, Convenor of Transnational Law and Governance at the Centre for Commercial Law Studies, Queen Mary University of London, who clearly defined the scale and scope of the discussion.
AI has had its ups and downs, but is currently booming due to Big Data
A powerful foreword to set the scene for this event came from Professor Greg Slabaugh, Director of Queen Mary's Digital Environments Research Institute (DERI) and Professor of Computer Vision and AI. Professor Slabaugh set out the opportunities that lie within the field today, tracing the challenges AI has previously encountered with Human Rights issues through to the modern day. Following the introduction of our panel, the roundtable discussion began in earnest.
AI represents a great opportunity for society and the protection of Human Rights
Maria Pia Sacco, Senior Project Lawyer at the International Bar Association, provided the panel with a range of recent and practical examples illustrating why advances in AI technology can strengthen the ways in which we protect and promote Human Rights on a global scale.
She also offered a poised response to the tension between positive uses of AI and instances where the experience and outcome have been negative.
Three focal issues of AI Risk
The next panellist to share his views on AI and technology, Dr Florian Ostmann, Policy Theme Lead and Policy Fellow at The Alan Turing Institute, summarised several key issues that he felt provide a strong foundation for understanding the challenges that can arise from AI.
These included:
Unintended Bias and AI Tools
Initial thoughts regarding the rise of AI as a commercial technological tool came from Professor Elspeth Guild, Jean Monnet Professor ad personam at Queen Mary University of London.
While AI tools can enable an organisation or individual to tap into the automation and enhanced capabilities that AI can unleash, a challenge lies in their use: how do we ensure that decisions made by AI align with human ethics, and what safeguards should be in place so that a 'brake' can be applied if the AI inadvertently makes an unjust decision?
The use of facial recognition technology and Human Rights Law
Minesh Tanna, Solicitor Advocate and AI Lead at Simmons & Simmons and Chair of the AI Group of the Society for Computers and Law (SCL), offered a recent example of how the use of such technology in New South Wales, although the decision in question was reached on a technicality, may serve as a catalyst for further challenges to the use of AI technology in the commercial world.
What are the solutions to these challenges?
Leading us into the next part of the event, Fernando Barrio, SCL Trustee, Senior Lecturer in Business Law at the School of Business and Management, Queen Mary University of London, and Academic Lead for Resilience and Sustainability at the Queen Mary Global Policy Institute, felt that, with the challenges now clear, we needed to identify practical solutions.
A proactive approach: pre-emption rather than reaction
Maria Pia Sacco, Senior Project Lawyer at the International Bar Association, was concise in her insight, calling for an approach that takes the challenges head-on, ensuring that each challenge is turned into an actionable lesson that can be incorporated into our understanding and preparatory work on AI.
Breaking down the challenges to find a solution
Dr Florian Ostmann, Policy Theme Lead and Policy Fellow at The Alan Turing Institute, set out two levels of analysis for the panel in order to make the solutions clearer.
These two levels consisted of:
The priority for me is organisations learning more about how AI is implemented
According to Minesh Tanna, Solicitor Advocate and AI Lead at Simmons & Simmons and Chair of the AI Group of the Society for Computers and Law (SCL), a better understanding of how AI affects consumers and human beings, both directly and indirectly, requires a more holistic organisational approach to its use.
While some departments have a profound knowledge of such issues, a more uniform and consistent approach is needed throughout companies; organisations can achieve this by examining their business models to identify where the gaps are and better mitigate risk.
How can the legal system deal with AI Biases?
The next question for the panel came from our moderator, Dr Theodora Christou, Convenor of Transnational Law and Governance at the Centre for Commercial Law Studies, Queen Mary University of London, who prompted the panellists to dissect the issue of bias within AI technology.
Better regulations and assessment of risk
Potential solutions were discussed and set out by the panel, with Maria Pia Sacco, Senior Project Lawyer at the International Bar Association, calling for an enhanced set of regulations, refined in such a way that they can be implemented by different sectors and stakeholders on a global scale.
All of us are increasingly dependent on AI working well
To close, the panel reviewed the discussions and shared their conclusions and evaluations of the debates that took place during the event.
Professor Elspeth Guild, Jean Monnet Professor ad personam at Queen Mary University of London brought a fitting end to the discussion by highlighting that our need as human beings for technology that enhances our lives will not diminish.
Watch the event again