Tips and Traps When Considering Accessibility and Diversity in AI Adoption

AI adoption may well be the way of the future, yet if we’re not careful, it could very easily replicate and amplify the inequalities of the past. We’ve already seen algorithms misidentify people of colour, AI-powered tools overlook people with disabilities, and recruitment platforms quietly sideline candidates based on gender or educational background.

The problem is not that AI is inherently biased—it’s that humans are. And if we’re the ones training the models, then our blind spots are being baked in, whether we realise it or not.

So if you’re exploring how AI can benefit your organisation (and you should be), here’s how to make sure that benefit extends to everyone—and not just a select few.


1. Embed inclusion, don’t bolt it on

Too often, accessibility and diversity are treated like a final check before deployment: something you fix in QA or address once someone raises a red flag. But if inclusion isn’t considered from the start, you’re not building inclusive AI; you’re retrofitting it, which costs more time, energy and resources.

What this looks like in practice:

  • Involve people with lived experience in the design phase.
  • Budget for accessibility like it’s non-negotiable (because it is), so the funds are there when they’re needed rather than fought for later.
  • Make inclusive testing standard, not special.

Inclusive AI isn’t a nice-to-have; it’s fundamental. It affects who gets heard, who gets hired, and who gets help.


2. Don’t let “data” be your excuse

We hear it all the time: “We’d love to build more inclusive AI, but the data just isn’t there.” Waiting for perfect data is a luxury marginalised communities don’t have. Instead of waiting, proactively address the gaps by asking:

  • Whose data is missing?
  • What historical biases are we carrying forward?
  • How can we build responsibly even with imperfect information?

Collect data more consciously and audit it with diverse teams. Be transparent when gaps exist—end users don’t expect perfection, but they appreciate honesty.
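Auditing for gaps doesn’t have to wait for a full fairness review. As a minimal sketch, the check below compares how often each group appears in a dataset against a reference share and flags under-represented groups. The field name, reference shares, and threshold are all illustrative assumptions, not a standard; a real audit would be designed with the diverse teams mentioned above.

```python
from collections import Counter

def representation_gaps(records, field, reference, threshold=0.05):
    """Flag groups whose share of `records` falls short of their
    reference share by more than `threshold`.
    Illustrative only: thresholds and categories are assumptions."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if expected - observed > threshold:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Hypothetical training set where some accents are under-represented
records = [{"accent": "US"}] * 80 + [{"accent": "AU"}] * 15 + [{"accent": "NZ"}] * 5
reference = {"US": 0.4, "AU": 0.4, "NZ": 0.2}
print(representation_gaps(records, "accent", reference))
```

A check like this only surfaces the gap; deciding what to do about it, and being transparent with users about it, is the human part of the work.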


3. Think beyond compliance

Following guidelines like WCAG or using fairness audits is essential—but it’s not the end goal. Too many organisations tick the box on technical accessibility while ignoring how people actually experience the product.

Let’s go deeper:

  • Is your AI equally useful across languages, literacy levels, and cultures?
  • Can a user with a screen reader navigate it with ease?
  • Does your model explain itself clearly, or does it confuse and exclude?

Accessibility is not just a legal requirement; it’s a human one. If it’s not baked into your user experience, you’re not meeting the bar, no matter what the checklist says.
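The screen-reader question above has a mechanical floor you can automate, and that floor is exactly the kind of checkbox this section warns against stopping at. As a sketch using only Python’s standard library, the scan below finds images with no alt text (the sample HTML is made up); passing it says nothing about whether a screen-reader user can actually navigate with ease.

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect <img> tags that lack a meaningful alt attribute.
    Passing this audit is a floor, not proof of accessibility."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            alt = attr_map.get("alt")
            if not alt or not alt.strip():
                self.missing.append(attr_map.get("src", "<unknown>"))

audit = AltTextAudit()
audit.feed('<img src="chart.png"><img src="logo.png" alt="Company logo">')
print(audit.missing)  # images a screen reader cannot describe
```

Automated checks like this catch omissions; only testing with real assistive-technology users tells you whether the experience works.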


4. Be cautious of “one-size-fits-all” fixes

It’s tempting to reach for off-the-shelf AI solutions with flashy fairness claims. But these tools weren’t built with your users, your workforce, or your local context in mind.

Think of it this way:

  • A sentiment model trained in the US won’t read Australian sarcasm the same way.
  • Voice recognition tools that struggle with non-Western accents are not “minor bugs”—they’re barriers.

So before you adopt, assess: whose worldview is baked into this tool? And what might it be missing?


5. Avoid the illusion of neutrality

AI is not neutral. It reflects who made it, what they prioritised, and what they ignored. Believing AI is “just the math” is one of the fastest ways to perpetuate harm under the guise of objectivity.

Take this seriously:

  • Build diverse teams, not just diverse user personas.
  • Question the framing of your problem, not just the solution.
  • Use AI ethics as a framework, not a fire extinguisher.

AI is one of the biggest shifts we’ll see in our lifetimes. But as with all big shifts, it can either widen the gap or close it.

This is our opportunity to hardwire accessibility and diversity into the future. Not just because it’s ethical, but because it’s smart, sustainable, and frankly, non-negotiable.

We’re past the point of performative DEI in AI. Now is the time to evolve from intention to integration. Because when tech changes the world, we need to make sure it changes it for everyone.


Written By Angelica Hunt, Senior Consultant at TDC Global.

@tdcglobal_

Check out what we’ve been up to and follow us on LinkedIn, Facebook and Instagram