Inclusive AI for Digital Accessibility

Challenge

Despite decades of accessibility standards, many AI-enabled digital products are still released with significant barriers, and everyday content such as websites remains hard to use for many people with disabilities. Tools like overlays, auto-alt-text, and automated audits are frequently designed without the input of people with lived experience (PLE), leading to:

  • Conflicts with assistive technologies

  • Biased outcomes from non-representative data

  • Low real-world usability and adoption

My research response

This research aims to move digital accessibility from checkbox compliance to active inclusion by co-developing practical, testable recommendations for inclusive AI. Grounded in participatory design and action research, the study engages developers, accessibility experts, researchers, and PLEs to explore how inclusion can be embedded meaningfully throughout the AI development lifecycle.

How I am doing it

I use a mix of methods so that the guidance is both human-centred and technically realistic:

  1. Listening and co-design

    • A co-design focus group with nine participants, including people with lived experience and practitioners.

    • Semi-structured interviews with stakeholders across design, development, auditing, advocacy, and policy.

  2. Real technical trials

    • Case Study 1: teaching a computer to help people navigate websites (a technique called reinforcement learning); a minimal illustrative sketch follows this list.

    • Case Study 2: teaching a model to recognise different parts of a webpage (like “navigation” or “main content”); a second sketch after this list shows the general idea.

    • Case Study 3: a study of web accessibility overlays (with RIT’s CAIR Lab) using a survey and interviews with blind/low-vision participants.

  3. Industry engagement

    • National Disability Authority (NDA) internship to observe real auditing workflows and tools.

    • Collaboration with RIT CAIR Lab to run the overlays study and reach the right participants.
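To give a concrete flavour of what Case Study 1 involves, the sketch below shows the general shape of reinforcement learning applied to a toy navigation task: an agent learns, by trial and error, which link to follow to reach a target page. The site graph, reward values, and hyperparameters here are assumptions made purely for illustration; they are not the environment or setup used in the actual case study.

```python
# Minimal Q-learning sketch of the idea behind Case Study 1: an agent learns
# which link to follow to reach a target page. The page graph, rewards, and
# hyperparameters below are illustrative assumptions, not the study's setup.
import random

# Toy site: each page maps to the pages its links lead to (assumed structure).
SITE = {
    "home": ["products", "contact"],
    "products": ["home", "checkout"],
    "contact": ["home"],
    "checkout": [],  # goal page
}
GOAL = "checkout"

ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.2, 500
q = {(page, link): 0.0 for page, links in SITE.items() for link in links}

def choose(page):
    """Epsilon-greedy choice among the links available on this page."""
    links = SITE[page]
    if random.random() < EPSILON:
        return random.choice(links)
    return max(links, key=lambda l: q[(page, l)])

for _ in range(EPISODES):
    page = "home"
    while page != GOAL:
        link = choose(page)
        reward = 1.0 if link == GOAL else -0.1  # assumed reward shaping
        future = max((q[(link, l)] for l in SITE[link]), default=0.0)
        q[(page, link)] += ALPHA * (reward + GAMMA * future - q[(page, link)])
        page = link

# After training, the agent's preferred first click from the home page.
print(max(SITE["home"], key=lambda l: q[("home", l)]))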
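Case Study 2 can be pictured in a similarly simplified way: a small supervised classifier that labels blocks of page text as “navigation” or “main content”. The toy training examples, the TF-IDF features, and the choice of logistic regression (via scikit-learn) below are illustrative assumptions, not the study’s actual data, labels, or model.

```python
# Illustrative sketch for Case Study 2: training a small text classifier to
# label blocks of a webpage as "navigation" or "main content". The labelled
# examples here are invented; the real study's data and model may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed toy training data: the visible text of page regions plus a label.
blocks = [
    "Home Products About Contact",            # link-heavy menu text
    "Sitemap Privacy Terms Careers",
    "Our latest report examines how overlay tools interact with screen readers.",
    "The study recruited blind and low-vision participants for usability sessions.",
]
labels = ["navigation", "navigation", "main content", "main content"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(blocks, labels)

# Predict the region type of an unseen block of page text.
print(model.predict(["Blog Pricing Support Login"])[0])
```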

Read more here: