Disability & AI III - Building a Better Digital World
- Mitch Blatt

When it comes to world-building (creating immersive fictional worlds, like Middle Earth or Dune’s Arrakis), there’s a golden rule: the world is only as believable as
the logic you build into its foundation. If the physics are inconsistent or the social rules are shaky, the whole story collapses.
For the last two articles, we’ve looked at the current “logic” of AI. We’ve seen the triumphs of the “Cyber Curb Cut” and the dangerous “Averages Trap” that occurs when developers build for a world that doesn’t actually exist—a world where every pedestrian walks with the same gait and every voice speaks with the same inflection.
But as any world-builder knows, when the foundation is flawed, you don't just abandon the project; you rewrite the code.
Building a better digital world isn’t about making AI “smarter”—it’s about making it more observant. It’s about moving away from the “Black Box” of secretive algorithms and toward a model of Digital Sovereignty, where the disability community isn’t just a group of test subjects, but the architects of their own digital future.
To turn the “bumps in the code” back into “curb cuts,” we need to focus on three structural pillars: Inclusive Data that actually represents people with disabilities, Human-in-the-Loop design, and the simple, radical idea of “Nothing About Us Without Us.”
The Landlord Problem
If you’ve never heard of Digital Sovereignty, let me explain the concept: it’s the difference between owning a house and renting an apartment. We hand our voices, faces, and other biometric patterns over to big tech companies in order to use their products, but we don’t control what they do with that data. Like renters, we can’t make permanent changes to the place, and it’s up to the landlord’s discretion whether we get to stay. A lot of AI is trained on data that’s “scraped” from all over the Internet and removed from its context, which is like an architect building a house for someone they’ve never met. To an AI that “sees” images or “hears” speech, a wheelchair equates to a medical “emergency” or a “tragedy”; an accent or a speech impediment is simply “unintelligible.”
The most effective way to exercise this sovereignty is through intentional, community-led data collection. If the “average” model doesn't recognize your voice or your gait, the solution is to overwhelm the algorithm with the truth.
Fortunately, there are tools and projects aimed at correcting the “averages” issue.
Project Understood: A partnership between Google and the Canadian Down Syndrome Society encouraged people with Down syndrome to “donate” their voice samples to the Project Euphonia database. By intentionally providing thousands of hours of “non-standard” speech, the community didn’t just ask for a better voice assistant—they built the foundation for one.
The Voice Keeper: This technology allows individuals with progressive conditions to bank their own voices. Instead of a generic, robotic text-to-speech voice, the AI creates a digital twin that belongs to the user. This is sovereignty in action: using AI to preserve a person’s identity rather than replacing it with a “standardized” version.
Inclusive Data
An AI algorithm is only as good as the data it’s trained on. Most AI today is trained on data “scraped” from all over the Internet and averaged together, ignoring the nuances of disability. Inclusive Data is a practice that counters this by giving AI access to the full spectrum of human diversity: people with speech impediments donating samples of their voices, or people with non-standard gaits contributing movement data.
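As a minimal sketch of what this can look like inside a training pipeline (the dataset fields and group labels below are hypothetical, not from any real project), one common technique is to oversample underrepresented voices so the model hears them as often as the “standard” ones:

```python
import random
from collections import defaultdict

def rebalance(samples, group_key="speech_profile", seed=42):
    """Oversample underrepresented groups so every group appears
    as often as the largest one during training."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for sample in samples:
        groups[sample[group_key]].append(sample)

    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Top up smaller groups by resampling with replacement.
        balanced.extend(rng.choices(members, k=target - len(members)))
    rng.shuffle(balanced)
    return balanced

# Hypothetical corpus: three "standard" clips for every "dysarthric" clip.
corpus = (
    [{"speech_profile": "standard", "clip": f"std_{i}.wav"} for i in range(3000)]
    + [{"speech_profile": "dysarthric", "clip": f"dys_{i}.wav"} for i in range(1000)]
)
training_set = rebalance(corpus)  # now roughly 3,000 of each group
```

Resampling is the blunt instrument, of course; the deeper fix is collecting more real samples in the first place, which is exactly what community-led efforts like Project Understood set out to do.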
Human-in-the-Loop
In world-building, you can’t just set a bunch of rules in motion and walk away; you have to playtest. You have to be the “Game Master” who steps in when the system produces a result that doesn't make sense.
In the AI world, this is known as Human-in-the-Loop. It is the intentional practice of requiring a human expert to verify an AI’s “final call,” especially in high-stakes areas like healthcare, legal rights, and employment.
As discussed in the previous article, AI is prone to the “Averages Trap.” If there’s no human in the loop, a hiring algorithm might automatically reject a candidate because their speech pattern doesn't match a “standard” corporate profile. A medical AI could misinterpret a symptom because it hasn't been trained on the specific physiology of someone with cerebral palsy.
To run with the house vs. apartment analogy, having a human in the loop is like having a skilled building manager. While the AI handles the small stuff, like sorting mail or keeping the lights on, the human manager makes the bigger decisions that can affect a person’s life and health. That’s exactly what you want, because humans understand context, nuance, and empathy. This is how we use AI as an assistant, not a judge. We need engineers, doctors, and hiring managers who are trained to look at an AI’s “red flag” and ask: “Is this an actual problem, or is the algorithm just failing to understand a different way of being?”
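To make the pattern concrete, here’s a minimal sketch of a human-in-the-loop gate in a hiring pipeline (the function names, score threshold, and fields are all hypothetical): the model is allowed to advance a candidate on its own, but it can never reject one without a trained reviewer signing off.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    candidate_id: str
    outcome: str  # "advance" or "reject"
    reason: str

def screen_candidate(candidate_id: str, model_score: float,
                     human_review: Callable[[str, float], bool]) -> Decision:
    """Human-in-the-loop gate: the AI may say yes on its own,
    but every 'no' must be confirmed by a person."""
    if model_score >= 0.5:
        return Decision(candidate_id, "advance", "model approved")

    # The model flagged this candidate. Instead of auto-rejecting,
    # route them to a trained reviewer who can ask: is this an actual
    # problem, or a speech pattern the model wasn't trained on?
    if human_review(candidate_id, model_score):
        return Decision(candidate_id, "reject", "human upheld the flag")
    return Decision(candidate_id, "advance", "human overruled the model")

# A reviewer who overrules the model keeps a low score from
# becoming an automatic rejection.
print(screen_candidate("cand_042", 0.31, lambda cid, score: False))
```

The design choice that matters is the asymmetry: the algorithm can open doors on its own, but it can’t close them without a person in the room.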
Nothing About Us Without Us
Internal consistency is everything in world-building. It’s the commitment to the rules you’ve established for your universe. If you tell the reader that magic comes at a steep price in one chapter, but then let your hero use it for free in the next, the world breaks. The reader stops believing in the story.
Currently, the AI world is suffering from a massive internal consistency error. The “rule” we are told is that AI is a universal tool designed to assist humanity. But when that same AI fails to recognize a non-standard voice or reflexively filters out a disabled job applicant, the system is breaking its own internal logic. It claims to be for “everyone,” but its code only accounts for the “average.”
The only way to fix a world that is internally inconsistent is to bring in the people who understand where the rules are breaking.
For decades, the disability community has used the rallying cry: “Nothing About Us Without Us.” In the AI era, this means tech companies must move beyond using disabled people as “test subjects” and start hiring them as architects.
A developer who uses a screen reader knows the “rules” of digital navigation better than anyone; they can ensure the AI’s logic remains consistent for blind users.
An engineer who uses a mobility aid understands the variables of a “logically-consistent” physical world, ensuring that navigation AI doesn’t skip over the “nuances” of a broken curb or a steep grade.
When disabled people are the ones writing the code, the “rules” of the digital world finally apply to everyone. We stop being an “exception to the rule” and start being part of the foundation.
Building a world—whether it’s for a thriller anthology or a global tech platform—is an act of responsibility. As an editor, I know that you can’t fix a broken story just by polishing the prose; you have to go back to the bones of the plot.
AI is currently the most ambitious world-building project in human history. It’s a story we are writing in real-time. If we continue to let “average” be the only rule, we build a world where millions of us are written out of the narrative.
But we have the tools to change the script. By demanding Digital Sovereignty, keeping Humans-in-the-Loop, and ensuring that disabled architects are the ones holding the pen, we can build a digital world that is finally, truly, internally consistent.
We’ve built the ramp. Now, let’s make sure the world it leads to is one where everyone is invited to stay.
About the Author:
Born with cerebral palsy, Mitch Blatt has been working as The Understanding’s Editor-in-Chief since 2019. He knows how tough it can be to navigate a world that wasn’t built for you. When work is done, he’s an avid gamer and world-builder, currently working on a thriller anthology that uses a logically-consistent constructed world to explore the complexities of life with mental illness.
Transparency Disclaimer: I used an AI assistant for research and drafting. I verified all external sources, incorporated my own voice, and the article was iteratively reviewed and polished by our human team.



