AI, Ethics and Society

Future Shock: the generative AI policy dilemma

Published:

During the reception hour, we presented our group's poster.

CREAATIF, a collaboration between QMUL and The Alan Turing Institute, also presented its poster.


This event marked the launch of the Policy Forum of the Harvard Data Science Review’s special issue, Future Shock: Grappling with the Generative AI Revolution.

The Policy Forum, co-edited by Professor David Leslie (Professor at Queen Mary, Director of Ethics and Responsible Innovation Research at The Alan Turing Institute, and AIESG Steering Committee member), collects short, op-ed-style position papers and policy analyses by leading thinkers from around the world, including Yoshua Bengio, Rachel Coldicutt, Jacob Metcalf, and Shmyla Khan.

Our panel discussion, featuring Professor Leslie and other contributors to the Policy Forum, explored how the explosive rise of generative AI has brought both unforeseen challenges and immense potential, delving into the complex policy and governance questions surrounding this transformative technology.

Professor David Leslie and researchers from The Alan Turing Institute’s Ethics & Responsible Innovation team also launched a new interactive platform and set of workbooks within the Turing’s AI Ethics & Governance in Practice programme, which aims to help the public sector apply AI ethics and safety to the design, development and deployment of algorithmic systems.

 

Below is a list of the speakers and some insights from their talks:

 

  • Prof. Yoshua Bengio (Professor, Université de Montréal; Founder and Scientific Director, Mila, Canada) (via video)

    Yoshua Bengio discussed self-preservation goals in future AI systems and the actions we should take in response. He called for safeguards, implemented through code or conditioning, that ensure the following:

    • International treaties ensuring that companies only develop and showcase AI systems that are safe.
    • Preventing these systems from being used in harmful ways and ensuring that they don't fall into the hands of bad actors who could abuse them.
    • Developing methods to design and detect both safe and unsafe AI systems.
  • Dr. Ranjit Singh (Senior Researcher, Data & Society, US)
  • Dr. Jean Louis Fendji (Research Director, AfroLeadership, Cameroon)
  • Shmyla Khan (former Research and Policy Director, Digital Rights Foundation, Pakistan)
  • Rachel Coldicutt OBE (Founder & Executive Director, Careful Industries, UK)
  • Nicola Solomon (Chair, Creators Rights Alliance, UK)
    • Nicola raised questions such as: why are we creating machines that are more intelligent than we are, and what are the concerns of creatives? Those who build these machines treat creative works as mere data, but each is a piece of human work, and companies have taken creative work without proper compensation. Policies need to focus on transparency, licensing, proper payment for use, and openness about how and when creative work is being used. Do people really want machines to produce creative work? There is something special about the human touch and human-created work. Creators are already struggling and incomes are falling: the Society of Authors recently conducted a survey on how many translators and illustrators have lost work or are being paid less due to AI. We need an ecosystem that allows creatives to make a living.

  • Antonella Maia Perini (Research Associate, The Alan Turing Institute, UK)
  • Smera Jayadeva (Researcher, The Alan Turing Institute, UK)


Author: Elona Shatri
