
Disability & AI Part II - Cyber Curb-Cut, meet Cyber Speedbump

5 days ago

4 min read


Digital silhouettes navigate a virtual grid, with one figure using a seeing-eye cane. Image generated with Google Gemini.

In Disability & AI Part I - AI Breaking Barriers, we looked at the “Cyber Curb Cut”—how AI tools built for the disability community end up making life easier for everyone. But a curb cut made of concrete is just a predictable dip in the sidewalk. A curb cut made of code is different. If that code is built on biased data, the “dip in the sidewalk” can become an insurmountable bump. Inclusion’s a great goal to drive toward, but reality can put a speedbump on that road. Here’s how:


1. The “Averages” Trap

AI doesn’t “see” the world like a human; it predicts the world based on patterns. Most AI models are trained on what developers call the “average” user. The problem? Nobody in the disability community is “average.”

When an AI is trained on millions of photos of people walking, it learns that a “pedestrian” has two legs and a specific gait. If that AI is powering a self-driving car, it might fail to recognize a person in a wheelchair or someone using a walker as a “human” in its path. Research conducted by the Disability Rights Education & Defense Fund (DREDF), and summarized by Nonprofit Quarterly, highlights a chilling example: when a researcher tested a pedestrian-detection model on footage of a friend who propels her wheelchair backward with her feet, the system failed to recognize her as a person. Instead, it indicated that the vehicle should proceed through the intersection—essentially signaling that it was safe to collide with her. Further, in a letter to the Department of Transportation, DREDF noted that “an advocate in the Bay Area who uses a service animal walked in front of a Waymo. The Waymo sped up instead of stopping. The advocate’s daughter was, thankfully, able to intervene.”


2. The Automated “Screen-Out”

One of the most invisible barriers today is in the hiring process. Many companies now use AI to sort through thousands of resumes or even conduct “video interviews” where an algorithm analyzes a candidate’s facial expressions and tone of voice.

For a neurodivergent candidate or someone with a speech impairment, these algorithmic blind spots can be devastating:

  • Resume Gaps: AI might automatically reject a resume with a two-year gap, where a human would ask for further information and learn that the gap was for medical recovery or rehabilitation.

  • The Vibe Check: Oftentimes, video AI will look for things like “standard” eye contact or “enthusiastic” tone. If a candidate is autistic or has facial palsy (paralysis), the AI may incorrectly identify them as “uninterested” or “dishonest” just because they don't fit the algorithm's programmed idea of “professional.”


3. Confidently Wrong: The Danger of Hallucinations

As we discussed in Part I, apps like Be My AI are life-changing for blind users. However, AI has a habit of “hallucinating”—making up facts with 100% confidence.

While a hallucination in a chatbot might be funny, a hallucination in an accessibility tool can be life-threatening. If an AI-written foraging guide gives incorrect advice on mushroom identification, or an AI used for medical transcription hallucinates fictional “hyperactivated antibiotics”, the consequences are real.


4. Mitigating the Growing Pains

The solution isn't to “throw the baby out with the bathwater” by getting rid of AI wholesale; it’s to make sure the disability community is involved, so that assistive AI is developed with them rather than for them.

  • Inclusive Data: Projects like Google’s Project Euphonia are working to specifically collect samples of “non-standard” speech to ensure voice assistants understand everyone, not just the “average” speaker.

  • Human-in-the-Loop: The most successful tools today use AI as a first step but keep a human on standby. For example, an AI can describe a room, but a human volunteer can be called in via the app to double-check a medication label.

As we’ve seen, the “Cyber Curb Cut” is only as reliable as the data used to build it. When we rely on algorithms that favor the “average” user, we risk digitizing the same barriers we’ve spent decades trying to tear down in the physical world.

Inclusion’s not just a byproduct of tech—it has to be a deliberate design choice. The danger of applying AI here is that it lacks the human nuance to understand that there is no such thing as an average human. But the story doesn’t end with a “speedbump.” In the final part of this series, we’ll look at how the disability community is fighting back—reclaiming their data and building a future where the code finally recognizes us all.


About the Author:

Born with cerebral palsy, Mitch Blatt has been working as The Understanding’s Editor-in-Chief since 2019. He knows how tough it can be to navigate a world that wasn’t built for you. When work is done, he’s an avid gamer and world-builder, currently working on a thriller anthology that uses a logically-consistent constructed world to explore the complexities of life with mental illness.


Transparency Disclaimer: I used an AI assistant for research and drafting. I verified all external sources (correcting for those hallucinations!) and incorporated my own voice, and the article was iteratively reviewed and polished by our human team.

