"If we want to leverage this technology to advance social justice ... we need to stop simply wondering what the AI revolution will do to us and start thinking collectively about how we can produce data and AI models differently" writes Dr Isadora Cruxên, Lecturer in Business and Society
Every other day there are now headlines about some kind of artificial intelligence (AI) revolution that is taking place. If you read the news or check social media regularly, you have probably come across these too: flashy pieces either trumpeting or warning against AI’s transformative potential.
Scrolling through these headlines, it is easy to feel like the ‘AI revolution’ is happening to us—or perhaps blowing past us at speed—while we are enticed to take the backseat and let AI-powered chatbots like ChatGPT do the work. But the reality is that we need to take the driver’s seat.
If we want to leverage this technology to advance social justice and confront the intersecting socio-ecological challenges before us, we need to stop simply wondering what the AI revolution will do to us and start thinking collectively about how we can produce data and AI models differently. As Mimi Ọnụọha and Mother Cyborg put it in A People’s Guide to AI, “the path to a fair future starts with the humans behind the machines, not the machines themselves.”
For the last four years, I have been involved in a collaborative research project called Data Against Feminicide, which was originally developed by Catherine D’Ignazio, Helena Suárez Val, and Silvana Fumega, and which I now co-lead with them. This work is supported by a number of partners and students, is inspired by lineages of feminist activism against gender-related violence and speaks to various ongoing efforts to explore possibilities for agency, mobilisation, and resistance through data and AI.
Our work makes clear that the politics of data and AI is, at heart, a politics of knowledge production. We can start to dispute and transform these spaces by asking seemingly simple questions: who and what is this for, who is involved and how, and towards what ends?
Thinking about AI and data production differently necessarily requires asking what we want to achieve and whose work, knowledge, concerns, needs, and aspirations we want to support and uplift.
The Data Against Feminicide project did not start with the goal of building AI-based tools. At heart, it is an action-oriented and collaborative project that works with activists and civil society groups who monitor and produce data about gender-related violence and feminicide across various contexts.
Feminicide is the gender-related killing of cisgender and transgender women and girls, a form of violence that reflects structural and intersectional forms of inequality. We know this is a global challenge: around 89,000 women and girls were intentionally killed in 2022, according to United Nations’ estimates.
But we also know that existing statistics underestimate the problem due to underreporting and incomplete or inaccurate data. In Brazil, where I am from, the Laboratório de Estudos de Feminicídios (LESFEM), one of our project’s collaborators, counted at least 1,706 feminicides in the country in 2023, 16% more than official statistics. Addressing such data gaps, making the structural nature of this violence visible, and holding institutions to account are some of the reasons that many activists begin to produce data of their own.
For us, the goal is to support this activist work. We seek both to understand activists’ data-gathering practices and technological challenges, and to work with them to co-design and develop digital tools that facilitate the labour of feminicide data production. This labour, as we have learned in our research, is emotionally draining, time intensive, and often volunteer-based and unremunerated.
Inspired by the data feminism principles developed by Catherine D’Ignazio and Lauren Klein, we seek to make this labour visible and facilitate it—rather than automate and replace it. This perspective contrasts with prevailing approaches to labour in mainstream, corporate-driven data and AI production, which both mask the extractive nature of data labelling work and raise concerns about labour replacement and the future of workers across industries.
While reflecting on where we want to go with data production and AI is a good starting point, the questions of how we get anywhere, whom we bring along on the ride, and how much say they have in the journey are equally important.
Also grounded in data feminism principles, we believe in centring collaboration and participation as ways of including different lived experiences, perspectives, and contexts, and of moving towards technological design and development that work for diverse communities.
To illustrate, through this project we have developed a tailored email alerts system, the Data Against Feminicide Email Alerts, which relies on machine learning—a form of AI—to identify news articles that are highly likely to concern a feminicide and forward them to activists based on the geographies they monitor. From our qualitative work and co-design sessions with activists, we learned that many use news articles to monitor feminicide cases, meaning they need to read through a lot of violent—and often not directly relevant—content to identify cases in their contexts.
The email alerts platform aims to mitigate this issue by surfacing news articles that are more directly relevant. Through collaborative data annotation of news articles and participatory development and testing, we sought to engage activists throughout the process of developing this tool such that the system would be useful to them and reflect, as much as possible, their own understandings of feminicide. As our team has written elsewhere, “a pluralistic process helps enable conceptual pluralism.”
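To make the idea concrete, here is a minimal sketch of how an alerts pipeline like the one described might be structured: score each incoming article for likely relevance, then route it to the monitors who cover its geography. This is purely illustrative and not the project's actual code; the keyword-based scorer stands in for a trained machine-learning classifier, and all names, fields, and thresholds are hypothetical.

```python
def relevance_score(text, keywords):
    """Toy stand-in for a trained classifier: the fraction of
    indicator keywords that appear in the article text."""
    text = text.lower()
    hits = sum(1 for kw in keywords if kw in text)
    return hits / len(keywords)

def route_alerts(articles, monitors, keywords, threshold=0.5):
    """Forward each likely-relevant article to the monitors
    whose regions include the article's region."""
    alerts = []
    for art in articles:
        if relevance_score(art["text"], keywords) >= threshold:
            for m in monitors:
                if art["region"] in m["regions"]:
                    alerts.append((m["email"], art["title"]))
    return alerts

# Hypothetical example data
articles = [
    {"title": "Case reported", "text": "A femicide case was reported in the city.", "region": "BR"},
    {"title": "Sports news", "text": "The match ended in a draw.", "region": "BR"},
]
monitors = [{"email": "group@example.org", "regions": {"BR"}}]
keywords = ["femicide", "feminicide"]

print(route_alerts(articles, monitors, keywords))
```

In the real system, the scoring step is a machine-learning model trained with activists through collaborative data annotation, which is precisely what lets the classifier reflect their own understandings of feminicide rather than a generic definition.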
Creating and taking part in spaces for continuous dialogue and reflection about data and AI production is part of approaching or (re)claiming these sites as spaces of resistance. As Audre Lorde reminds us in The Master’s Tools Will Never Dismantle the Master’s House, revolution is not “a one-time event” and “change is the immediate responsibility of each of us.”
That is the foundational thinking behind our work across the globe. Together, we are using AI for the benefit of society by ensuring it works for us, not against us.
Through collaboration and communication with activist communities, we’re working towards a world in which no woman suffers violence or is intentionally killed – ever.
Want to learn more about how AI can be used for the public good?
Watch the full webinar recording today.