For universities, digital accessibility is no longer a luxury but a necessity. It is a legal requirement under regulations such as the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018, and a moral obligation besides. With the recent rise of artificial intelligence (AI), it is exciting to see its potential to bridge the digital divide and ensure that every student, regardless of ability, has access to the same learning opportunities.
Historically, creating accessible content has been a time-consuming process for academics and support staff, requiring a deep understanding of accessibility standards. AI could offer a way to streamline this process, automating many of the tedious tasks and allowing staff to focus on teaching and support.
Matthew Deeprose, an Accessible Solutions Architect from the University of Southampton, has been leveraging AI tools to make digital content more accessible. Cat (TELT at QM) joined a Digital Education webinar to find out what Matthew has been doing.
Matthew knows better than anyone how important digital accessibility is, and he was keen to answer the question: "Can we use AI to reduce effort or workload, or otherwise increase efficiency with any of the digital accessibility workflows we might expect our colleagues to complete?"
While he mentioned other tools, he focused mainly on Microsoft Copilot. (It's useful to know that QM is a Microsoft institution, so we can access Copilot using our University credentials.) Accessing Copilot this way gives us a level of data protection: our prompts aren't saved and won't be used to train AI models.
Matthew described four scenarios where he found Copilot helpful.
In the first scenario, he explored generating readable transcripts from video captions. Using a tool called Subtitle Edit, he and a colleague had created captions for a series of educational videos, but turning those captions into a coherent, readable transcript proved laborious. This is where Microsoft Copilot stepped in: Matthew crafted a prompt asking Copilot to format the caption text into a more readable transcript, using headings, paragraph breaks, and bullet points. The process was faster and far less monotonous than manual editing, and the resulting transcript met accessibility standards, making the content accessible to individuals who prefer reading over watching videos.
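Matthew did this step with a Copilot prompt rather than code, but the underlying transformation is easy to picture. The sketch below is purely illustrative (the caption fragments are invented, and this is not Matthew's actual workflow): it shows the simplest part of the job, stitching caption cues back into flowing prose before an editor, human or AI, adds headings and paragraph breaks.

```python
# Illustrative sketch only: joining SRT-style caption fragments
# back into a single block of readable prose.

def captions_to_paragraph(cues: list[str]) -> str:
    """Join caption fragments into one readable block of text."""
    # Drop empty cues, trim each fragment, then join with single spaces.
    text = " ".join(cue.strip() for cue in cues if cue.strip())
    # Collapse any double spaces left behind by the stitching.
    return " ".join(text.split())

cues = [
    "Welcome to this short video on",
    "digital accessibility.",
    "Today we will look at captions,",
    "transcripts and alt text.",
]

print(captions_to_paragraph(cues))
# -> Welcome to this short video on digital accessibility. Today we will look at captions, transcripts and alt text.
```

The harder parts, choosing sensible headings and paragraph boundaries, are exactly what Matthew delegated to Copilot.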
In the second scenario, Matthew tackled the challenge of writing alternative text (alt text) for images—a vital component of web accessibility. Alt text provides descriptions of images for those using screen readers or other assistive technologies. Matthew demonstrated how AI tools like Microsoft Copilot and the Arizona State University’s Image Accessibility Creator can generate alt text. By inputting prompts that specify the image’s context, audience, and purpose, Matthew was able to create detailed, meaningful descriptions. However, he stressed the importance of reviewing AI-generated alt text to ensure it accurately conveys the image’s message. This method significantly reduces the workload while still ensuring high-quality, accessible content.
The third scenario focused on transcription, this time for audio content. Matthew worked with a podcast from the University of Southampton’s Sustainability and Resilience Institute, where a colleague in the communications team, who is profoundly deaf, needed a transcript. Matthew used the Whisper Desktop tool, built on OpenAI’s open-source Whisper speech-recognition model, to transcribe the podcast. Although the transcript was accurate, it lacked speaker attributions and formatting, making it difficult to follow, so he turned to Microsoft Copilot to add speaker names and improve the overall structure. This two-step process produced a readable, user-friendly transcript, demonstrating AI's potential to streamline otherwise time-consuming tasks.
In his final scenario, Matthew showcased how AI can generate accessible sample content for training purposes. He often conducts digital accessibility awareness sessions, and instead of using real-world content, he asks AI to create fictitious but meaningful project documents. These documents contain intentional accessibility issues, such as poor contrast or missing table headers, which participants are tasked with identifying and remediating. This approach provides a safe, non-judgmental environment for learning, while also saving time in creating bespoke training materials. Matthew even used Copilot to generate images that fit the content, underscoring how AI can facilitate a comprehensive and engaging training experience.
Matthew's use of AI to automate repetitive tasks such as transcribing, formatting, and generating alt text is helping his colleagues and his institution meet their accessibility obligations more efficiently.
AI is not a silver bullet, but it represents a significant step forward in our ability to create digital learning experiences that work for all students. As educators, we have a responsibility to advocate for and implement these tools in ways that enhance accessibility.
In the end, it's not just about compliance: it's about creating opportunities for all students to succeed, regardless of their circumstances. And AI might just be the key to unlocking that potential.