There was a cyber café near my school in Kathmandu where I used to go after classes. Twenty rupees an hour. The computers were old, the keyboards had the letters worn off half the keys, and load shedding knocked the power out at random intervals. You learned to save constantly, or you lost your work. Most days the connection was slow enough that images would load one strip at a time, top to bottom, like a curtain being raised. I didn't think of it as a bad experience. It was just access. And access was something you scheduled.
I think about that place often when I hear people talk about AI closing the digital divide. There's a version of the story that's genuinely hopeful. There's also a version that papers over real problems with enthusiasm about technology. I've spent enough time working at this intersection to know the difference matters.
The assumption problem
Every design decision has a user assumption baked in. What device they're on. How fast their connection is. Whether they can read Latin script. Whether they have a street address. Whether they're likely to use an app every day or once a year. These assumptions don't usually show up as explicit choices. They show up as defaults, as the shape of an interface, as what gets optimized and what gets treated as an edge case.
This isn't malicious. It's structural. The people building most AI systems have reliable broadband, modern phones, and live in places with consistent electricity. They build for their own context first because that's the context they understand. The result is that the communities that most need these tools are the ones the tools assume least about. That's a design problem, not a technology problem. Technology is just where it becomes visible.
What AI actually gets right
Healthcare is the strongest case. In parts of rural India and sub-Saharan Africa, AI-assisted diagnostics are catching things that would otherwise go undetected until they became serious. Aravind Eye Care in India has deployed AI screening for diabetic retinopathy that performs comparably to ophthalmologists, reaching patients in areas an ophthalmologist rarely visits. In Nigeria, tools like Ubenwa have used ML to analyze infant cry patterns as an early indicator of birth asphyxia. These aren't pilots or demos. They're real systems with real outcomes.
Language translation is another genuine win. When I was in school, almost everything online was in English. That's still largely true, but the quality of translation tools has improved to the point where a student reading a textbook written for a Western curriculum has a materially better experience than I did. For a lot of learners, that's not a small thing.
Both of those are real. But they come with a caveat I don't want to bury: the wins are real in specific, well-resourced deployments. The Aravind program works partly because Aravind is an exceptional institution that's been building operational capacity for decades. A government ministry deploying the same AI tool without that infrastructure won't get the same results. Technology is not sufficient. It never is.
What it doesn't fix
Connectivity is still the actual problem. AI tools that run in the cloud don't help anyone 40 kilometers from a tower. A diagnostic model that requires uploading a high-resolution retinal image is not accessible to a clinic with a 2G connection that drops when it rains. There are attempts to address this with edge computing and offline-capable models, and some of them are promising. But the honest current state is that most AI applications assume connectivity that a significant part of the world doesn't have.
There's also a failure mode that gets less attention: systems built without community involvement. Not bad systems. Technically sound systems. Systems that were designed by smart people with good intentions who flew in, assessed needs, built something thoughtful, and deployed it. And then watched it go unused because the workflow didn't match how people at the clinic actually worked. Or because the person who was supposed to use it didn't have a phone with enough storage. Or because the training happened once and the staff who attended have since moved on. You can make something technically correct and still have it fail completely because the design didn't account for the actual humans in the loop.
This is one of the core lessons from participatory design research, and it keeps being relearned. Imposing a solution, even a good solution, without involving the people it's meant to serve tends not to work. The involvement isn't just consultation. It means building with feedback cycles long enough to actually catch the places where your assumptions broke.
Data and who owns it
This one doesn't have a clean answer, but it deserves to be named. Communities with less infrastructure tend to have less legal protection for their data. When a healthcare company deploys a diagnostic tool in a rural district and collects patient data, the regulatory framework governing what happens to that data is often weak or nonexistent. The company is not local. The servers are not local. And the people whose data is being collected frequently have no visibility into what's being stored or what it's used for.
AI systems improve by learning from data. Data from underserved communities is training data for systems that will be sold back to better-resourced markets. That's a real dynamic. I don't think it means AI shouldn't be deployed in those contexts. But I do think it means the people building these systems need to take data governance seriously in places where the law doesn't require them to.
How this shapes the work I actually do
The FCGO Revenue Platform I worked on is a government tax portal used by district civil servants across Nepal. The people using it had variable digital literacy. Some were comfortable with computers; others used a keyboard primarily for this application. Connectivity varied by region. There was no assumption of consistent, fast internet.
That context forced specific design choices. Free-text inputs were replaced with cascading dropdowns wherever possible because free text creates error states that are hard to recover from at low familiarity. Every state change had a visual confirmation so users could tell something had happened, even on a slow connection. Error messages were written to be specific and actionable, not technical. The mental model for every feature was the most uncertain user, not the most capable one.
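The dropdown choice is worth making concrete. The idea is that every input is drawn from a known set, so an invalid value can't enter the system, and when something does go wrong the message names the problem and the fix. A minimal sketch of that pattern, with hypothetical district and office names standing in for the real data:

```typescript
// Sketch of the cascading-selection idea: each choice constrains the
// next, so users pick from valid options instead of typing free text.
// District and office names are hypothetical placeholders, not the
// actual FCGO data.

type OfficeCatalog = Record<string, string[]>;

const catalog: OfficeCatalog = {
  Kathmandu: ["Revenue Office A", "Revenue Office B"],
  Kaski: ["Revenue Office C"],
};

// Returns the valid second-level options for a district. An unknown
// district produces a specific, actionable message rather than a
// silent failure or a technical error state.
function officesFor(district: string, cat: OfficeCatalog): string[] {
  const offices = cat[district];
  if (!offices) {
    const valid = Object.keys(cat).join(", ");
    throw new Error(`Unknown district "${district}". Choose one of: ${valid}.`);
  }
  return offices;
}

console.log(officesFor("Kathmandu", catalog));
```

The point of structuring it this way is that recovery is built into the failure path: the error message itself lists the valid choices, so the most uncertain user is never stuck guessing what the system wanted.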
That last point is the one I'd generalize: design for the hardest user, not the easiest. In most commercial software, the hardest user is treated as an outlier. In tools built for high-variance populations, the hardest user is the median. Getting that inversion right changes every decision you make.
The honest conclusion
AI can help in specific, well-designed applications. The healthcare work is real. The translation work is real. There are meaningful things technology can do for people who've historically been underserved by it. But the question I keep coming back to is whether we're honest about who we're assuming as a user when we build.
The cyber café where I learned to use a computer didn't have reliable power. It had keyboards with no letters on them. The people using it were resourceful and adaptive, not in spite of those constraints, but because of them. The tools that actually serve communities like that one are built by people who take those constraints seriously from the start, not as edge cases to address later. That's the design problem. Everything else follows from whether we decide to solve it.
