The cyber café in Kathmandu charged 20 rupees an hour. This was 2005. The keyboards had been used so heavily that most of the letters had worn off, leaving just smooth gray plastic where the characters used to be. If you didn't already know where the keys were, you had a problem. The connection was dial-up grade, the monitors were bulky CRTs that hummed, and the room smelled like a combination of incense from the street outside and something electrical that was probably overheating.
I loved it.
Nepal had load shedding back then, scheduled power cuts that ran 12 to 14 hours a day during the dry season when the hydroelectric reservoirs ran low. Most of the city lived around the outage schedule: you planned meals, errands, and homework around when the electricity would actually be on. The cyber café had a generator. That was actually the point. It was one of the few places in the neighborhood where you could count on being online when you wanted to be, not just when the grid happened to be running. You paid for connectivity, but you were also paying for reliability. That combination was neither cheap nor common.
What I didn't have language for then, but can name now: this was an experience of infrastructure as a condition you work around rather than a thing you ignore. Systems have assumptions built into them. The power grid assumed a stability that Nepal's grid didn't have. Software assumed connectivity that wasn't guaranteed. Those assumptions weren't visible to whoever built the systems, because the people who built them didn't have to notice them. I did.
The divide inside a single country
By the time I was in my mid-teens, friends in Kathmandu were getting smartphones. Facebook had arrived. The conversations, the social world, the access to information were all migrating online in ways that felt fast and irreversible. Two hours by bus, in the hills where my relatives lived, there was no cell signal at all. Not slow signal. No signal.
This matters because the way people talk about the digital divide often makes it about geography at a global scale, developed world versus developing world, a problem of countries rather than communities. What I watched was the divide operating within a single country, within a single family. My cousins and I had different access to information, to opportunities, to whatever was being discussed online that week. The gap wasn't theoretical. It was visible in what each of us knew and didn't know, what we could find out and couldn't. Access determined which opportunities were even legible to you.
What constrained access actually teaches you
There's a particular kind of attention you develop when you can't take infrastructure for granted. You notice things that people with unlimited access have no reason to notice.
Apps that don't work offline. I noticed this before I had a word for it. If your connection was intermittent, you learned very quickly which tools were usable and which required you to be continuously online. Forms that lost all your input when the connection dropped. Error messages that explained what had failed but gave you no instruction for what to do next. Mobile interfaces designed by someone sitting at a MacBook, never once tested on a lower-end Android device over a slow connection. These aren't edge cases for most people in the world. They're the standard conditions under which hundreds of millions of people use software.
The failure modes are invisible if you've never been in them. If you've been in them, they're obvious.
Arriving in the US
The infrastructure shock of arriving in the United States is real in a specific way. Fiber everywhere. Charging outlets built into airport waiting areas as a matter of course. Connectivity as an assumed baseline rather than a condition you work around. It's a lot.
But I didn't have the reaction I was supposed to have, which was something like pure amazement. What I had was more like clarity. Seeing a high-connectivity environment after living in a constrained one made the assumptions visible in sharp relief, because I'd been on the other side of them. I could see what the system was built for. I could see who it assumed its users were.
That's not a common thing to be able to see, and I don't think you can learn it purely from reading about inclusive design, though that helps. It comes from having actually been the person the assumptions excluded.
How this shapes how I work
Every project I work on, I run what I think of as an assumption audit. Who does this work for, as the design currently stands? Who does it fail? Which conditions does the design assume that don't actually hold for some users?
On the earthquake damage predictor I built at GWU, the most striking finding was that geographic location identifiers dominated feature importance, outweighing structural features like building age and construction material by a significant margin. The problem was that those geo-level IDs had been anonymized and couldn't be linked back to actual spatial data. We hit a performance ceiling not because the model itself was weak but because the data couldn't capture what actually mattered: specific local geology, proximity to fault lines, regional building practice patterns. Recognizing the ceiling for what it was required being honest about what the system genuinely couldn't know. The data had a boundary, and the model was performing near it. That's a different diagnosis than "the model needs improvement," and it leads to different decisions about what to do next.
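That kind of diagnosis is easy to reproduce. Here's a minimal sketch of the check involved, assuming a scikit-learn random forest and illustrative column names in the style of public earthquake-damage datasets; the file path, feature list, and model choice are placeholders, not my original pipeline:

```python
# Sketch: ranking feature importances to spot a data ceiling.
# Column names and the CSV path are illustrative, not the actual
# project schema.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("earthquake_damage.csv")  # placeholder path

features = ["geo_level_1_id", "geo_level_2_id", "geo_level_3_id",
            "age", "foundation_type", "roof_type"]
X = pd.get_dummies(df[features], columns=["foundation_type", "roof_type"])
y = df["damage_grade"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# If the anonymized geo IDs sit at the top of this ranking, the model
# is leaning on location proxies it can never explain or extend:
# a data boundary, not a modeling weakness.
importances = (pd.Series(model.feature_importances_, index=X.columns)
                 .sort_values(ascending=False))
print(importances.head(10))
```

When the top of that list is all anonymized identifiers, more model tuning won't move the ceiling. Only richer spatial data would.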
On the revenue management platform I designed for Nepal's Financial Comptroller General's Office, the constraint was different. The users were civil servants with highly variable levels of digital experience. The design had to work for the person with the least experience, not the most. Cascading dropdowns instead of free-text entry. Automatic data retrieval instead of requiring users to remember code numbers. Visual confirmation at every state change so users always knew what the system had registered. Every one of these decisions was about reducing the assumption that the user already knew how the system worked. Designing for the hardest user made it better for everyone. That's almost always true.
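To make the cascade concrete, here's a minimal sketch of the idea in plain Python, with invented office and revenue-head codes; the real platform's data and interface layer were different, this only shows the shape of the constraint:

```python
# Sketch of cascading selection with automatic retrieval. All codes
# and names here are invented for illustration.

OFFICES = {
    "101": "District Treasury Office, Kathmandu",
    "102": "District Treasury Office, Lalitpur",
}

# Revenue heads valid for each office; the second dropdown offers
# only these, so an invalid combination is impossible to enter.
REVENUE_HEADS = {
    "101": {"11111": "Income tax", "14151": "Vehicle registration fee"},
    "102": {"11111": "Income tax"},
}

def options_for_office(office_code: str) -> dict:
    """Return only the revenue heads valid for the chosen office."""
    return REVENUE_HEADS.get(office_code, {})

def confirm_selection(office_code: str, head_code: str) -> str:
    """Build the visual confirmation shown after each state change,
    so the user always knows what the system registered."""
    office = OFFICES[office_code]
    head = REVENUE_HEADS[office_code][head_code]
    return f"Recorded: {head} ({head_code}) at {office}"

# The user picks from names; the codes travel underneath.
print(options_for_office("101"))
print(confirm_selection("101", "14151"))
```

The point of the structure is that the user never types a code from memory and never sees a silent state change: the system narrows the choices and echoes every one back.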
The honest version
The most dangerous design assumptions are the ones that feel like obvious baseline facts. That users have reliable connectivity. That they have a recent device. That they know what a 404 error means. That they can read the error message in English. These assumptions feel obvious if you've always been able to make them. They're not obvious at all if you haven't.
Noticing these assumptions requires deliberately trying to see from outside the conditions you work in. It requires asking who isn't in the room when decisions get made, who the design fails, and whether that failure is being registered as a signal or ignored as an acceptable error rate. I couldn't have gotten this from a course on inclusive design alone. The course would have given me a framework, and the framework is genuinely useful. But understanding it at the level where it changes how you read a design spec came from having grown up in the gap.
