
Terminology in AI and Ethics

On February 16, 2024, academics from various disciplines held a panel discussion at the Empire House for the workshop "AI Ethics Terminology—Toward an Interdisciplinary Glossary." This event aimed to delve into the complexities and challenges of terminology within the realm of AI ethics, fostering a dialogue that cuts across disciplinary boundaries to enhance our collective understanding and possibly create a multidisciplinary glossary.

The workshop was driven by a need to address the fluidity and variability of terminology as it moves between different academic and operational fields. Its primary goal was to spark an inter- and transdisciplinary conversation: to build a more nuanced understanding of how specific terms are perceived and used across disciplines, and to explore the potential development of a unified glossary that could serve as a foundational resource for the study and application of AI ethics.

Key Discussions and Insights

Part 1: Diverse Disciplinary Perspectives on AI Ethics

Panelists discussed several foundational questions related to the impact of AI within their specific fields, how AI ethics is approached, the terminological challenges they face, and the terms that necessitate clearer definitions. This discussion highlighted that while terms like "trust," "fairness," and "privacy" are frequently discussed in AI ethics, they often suffer from a lack of precise definition, leading to potential misunderstandings and misapplications.

From governance and transparency to accountability, the discourse underscored the necessity for frameworks that enable the fair use of AI tools and promote user responsibility. Furthermore, the dialogue revealed a significant tension between conceptual utility and technological utility, emphasizing the need to balance operational practicability with ethical imperatives.

Part 2: In-depth Examination of 'Fairness'

The term 'fairness' was examined in depth, revealing its multi-dimensional and contested nature. The discussion began with formal fairness metrics from the technical literature (such as equalized odds) and has since evolved to address broader issues of allocative and representational harms. The insights gathered indicated that fairness cannot merely be quantified mathematically but involves deep ethical considerations about justice, equality, and representation.
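To make the formal side of this discussion concrete, here is a minimal Python sketch (not taken from the workshop; the function names and example data are hypothetical) of how equalized odds could be checked for a binary classifier. The metric requires that true-positive and false-positive rates match across groups, so the sketch simply reports the largest gap in each rate.

    # Illustrative sketch only: checks the "equalized odds" criterion,
    # i.e. whether true-positive and false-positive rates match across groups.
    # All names and data below are hypothetical.

    def rates(y_true, y_pred):
        """Return (true-positive rate, false-positive rate) for binary labels."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        tpr = tp / (tp + fn) if (tp + fn) else 0.0
        fpr = fp / (fp + tn) if (fp + tn) else 0.0
        return tpr, fpr

    def equalized_odds_gaps(y_true, y_pred, groups):
        """Largest TPR and FPR differences between any two groups (0 = parity)."""
        per_group = {}
        for g in set(groups):
            idx = [i for i, gi in enumerate(groups) if gi == g]
            per_group[g] = rates([y_true[i] for i in idx],
                                 [y_pred[i] for i in idx])
        tprs = [r[0] for r in per_group.values()]
        fprs = [r[1] for r in per_group.values()]
        return max(tprs) - min(tprs), max(fprs) - min(fprs)

    # Hypothetical labels, predictions, and a binary group attribute.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    tpr_gap, fpr_gap = equalized_odds_gaps(y_true, y_pred, groups)
    print(f"TPR gap: {tpr_gap:.2f}, FPR gap: {fpr_gap:.2f}")

Even when both gaps are close to zero, as the panelists emphasized, such numbers capture only one narrow formalization of fairness and say nothing about representational harms or about whether that formalization was appropriate in the first place.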

Challenges and Opportunities

The workshop highlighted several critical challenges, including the varying semantics of AI ethics terms across disciplines, the political implications of terminological ambiguity, and the need for ongoing reflexivity in how these terms are used and understood. The dynamic nature of terms like "fairness" requires continuous scrutiny and adaptation, underscoring the importance of not merely settling for static definitions but fostering an ongoing dialogue that accommodates evolving understandings and contexts.

Concluding Thoughts

The event made evident that creating a multidisciplinary glossary of AI ethics is not only about defining terms but also about understanding their implications, their uses, and the contexts in which they are employed. As AI continues to integrate into various aspects of human life, the clarity of the terms we use to discuss ethical considerations becomes paramount. This workshop was a step toward building a common language that supports clearer and more effective discussions about AI ethics across different sectors and disciplines.

In conclusion, the AI Ethics Terminology workshop provided valuable insights into the complexities of terminology in AI ethics, offering a platform for interdisciplinary dialogue and the potential for more structured and unified approaches to understanding and applying ethical concepts in the realm of artificial intelligence.

Author: Elona Shatri
