Women in STEM: Advocacy Through Action

The stat is real: women are underrepresented in STEM. I'm less interested in that conversation and more interested in what actually helped me stay.

Published

January 1, 2026

7 min read

Topics

Women in STEM, Advocacy, Mentorship, Inclusion, Diversity

Most content about women in STEM exists to perform concern about the problem rather than to solve it. I want to be honest about that because I've read a lot of it and most of it follows the same structure: a statistic, an expression of dismay, an abstract call for mentorship, a hopeful ending. The statistic is real. The dismay is probably genuine. The mentorship call is useless without specifics. And the hopeful ending is just a way of closing the piece without having committed to anything.

I'm not writing this to be harsh about people who care about the problem. The problem is real and the people writing about it usually do care. But there's a significant gap between announcing that you support women in STEM and doing something that actually changes anything. Most advocacy lives in that gap and doesn't cross it.

So I want to be specific about what actually helped me, what I've watched help others, and where I think the well-intentioned interventions fall short.

The nomination gap

There is a well-documented pattern in how men and women get recommended for opportunities. Men get nominated proactively, by name, for positions and programs they're partially qualified for. Women get encouraged to apply, generally, after meeting most of the listed qualifications. The framing is usually supportive on both sides, but the outcomes are different. Encouragement keeps the burden of action on the woman. A direct nomination transfers some of that burden to the person doing the nominating.

The best mentor I had understood this without my having to explain it. She didn't say "keep your eyes open for fellowships." She said "you should apply to this specific fellowship, here's the link, here's why I think you'd be a strong candidate." She named a specific thing. That introduction led to an opportunity I genuinely didn't know existed, from a program I wouldn't have known to look for.

That's the structural intervention, and it's simple enough that it's embarrassing how often it doesn't happen: when you're in a position to recommend someone, recommend women by name for specific opportunities, without waiting to be asked. Not "let me know if you want me to connect you to anyone." An actual email. An actual name attached to an actual opening. This costs maybe 20 minutes and changes the math for the person on the other end.

The problem with generic programs

As a woman from Nepal working in tech in the US, I occupy an intersection that makes generic "women in STEM" programs a fairly poor fit, and I think this is worth saying plainly.

Many programs designed to support women in STEM were built around a US-default experience. They assume background knowledge I didn't have: how to navigate American academic institutions, what professional norms look like in US tech companies, how to read and write a US-style CV, what the unwritten rules are around negotiation and self-promotion in an American professional context. These things are knowable, but they require a separate layer of translation work that the program never addresses because it doesn't know it needs to.

Programs designed for "women in STEM" as an undifferentiated category are also implicitly designed for whoever's closest to the center of that category. The more an immigrant woman's experience diverges from the assumed baseline, the more she's doing invisible translation work to adapt general advice to her actual situation. The work exists. The program just doesn't see it.

This is a design problem. The more precisely you can answer "who exactly are we serving, with what specific background, facing what specific barriers," the more likely the program is to actually work for those people. Vague beneficiary definitions produce programs that feel broadly supportive but deliver specific results only for a narrow slice of the people they claim to serve.

Visibility versus encouragement

I've mentored students who were technically strong but genuinely uncertain whether they belonged in this field. Encouragement is worth something. Telling someone directly that they're good at this is true and should be said. But in my experience, encouragement alone doesn't change much for someone who's uncertain about whether the path is open to them.

What moves the needle is visibility. Proof that people who look something like them are doing the work they want to do, that the path exists and isn't theoretical. An introduction to someone working in the area they want to work in does more than a conversation about believing in yourself. The introduction makes the future concrete. It gives them a person to email, a name to mention, a door that isn't just metaphorical.

This is a small thing. It takes an hour to make a meaningful introduction. I've watched it change someone's trajectory in a way that six months of encouraging check-ins hadn't.

The work itself as advocacy

The most direct thing I can do is build systems that don't carry the same biases I've had to navigate. This is concrete in a way that a lot of advocacy isn't.

AI systems trained on non-representative data perform worse for underrepresented groups. That's documented, not speculative. On the earthquake damage predictor I worked on, rural and less-accessible buildings were systematically less well-represented in the training data. This isn't unique to that project. It's a recurring pattern: the data reflects who collected it and what they had access to. The model reflects the data. The people underrepresented in the data get a worse product.

Fixing this requires people who notice it in the first place. You have to know what to look for. You have to treat the model's performance on underrepresented subgroups as a signal rather than as an acceptable error rate. I run this check deliberately: testing on data that represents more than the people I already know, treating edge cases as important information about what the system can't do. When a model performs worse on a particular demographic slice, that's a finding, not a rounding error.
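The check described above can be sketched in a few lines. This is a minimal illustration, not the pipeline from the earthquake project: the group labels, the rural/urban split, and the tolerance threshold are all hypothetical, chosen only to show the shape of a disaggregated evaluation.

```python
# Minimal sketch of a per-subgroup performance check: compute accuracy
# separately for each group, then flag groups that trail the overall rate.
# All names and the rural/urban split below are illustrative assumptions.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy computed separately for each subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_underperforming(per_group, overall, tolerance=0.05):
    """Flag groups whose accuracy trails the overall rate by more than tolerance."""
    return [g for g, acc in per_group.items() if overall - acc > tolerance]

# Toy data where the model happens to do worse on the 'rural' slice.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["urban", "urban", "urban", "rural", "rural", "rural", "urban", "rural"]

per_group = accuracy_by_group(y_true, y_pred, groups)
overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(per_group)                                   # per-slice accuracy
print(flag_underperforming(per_group, overall))    # slices treated as findings
```

The point of the threshold is the framing from the paragraph above: a gap on a demographic slice is a finding to investigate, not noise to average away.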

This is a concrete argument for representation in technical roles that sits alongside the fairness argument. Fairness matters. But there's also a quality argument: systems built by people who've been on the wrong side of bad assumptions catch more of the bad assumptions. The two things aren't separate.

What I don't have a clean answer for

I don't have a tidy solution for systemic underrepresentation in STEM. The mentorship helps, but it helps at the margins and it doesn't scale cleanly. Visibility helps. Building better systems helps. None of it is fast, and most of it isn't measurable in a way that satisfies anyone looking for a quarterly metric that proves the problem is being addressed.

What I do know: every woman I've watched stay in this field had at least one person who told her directly that she was good at this and then did something specific. Not general support. Not an open door policy. An introduction. A nomination. An email sent before being asked. Something with a name and an action attached.

That is a genuinely low bar. The gap between expressing support and doing something specific is large, and most well-intentioned advocacy lives in it. Crossing it doesn't require a program or a budget. It requires deciding that when you're in a position to nominate someone by name, you do it. That's the whole thing.