Ebikara Spiff

Whose Fairness Are You Coding Into Your AI System?

As developers, we’re often taught to treat fairness like a math problem.

Balance the dataset.

Reduce bias.

Optimize the outcome.

But here’s a question I’ve been wrestling with:

Is fairness universal — or is it cultural?


Growing up, I saw fairness differently

I’m from Nigeria.

Where I grew up, fairness didn’t mean “treat everyone exactly the same.”

It often meant “consider people’s different realities.”

Think about it:

If two students show up late to class, one because of traffic, the other because they were helping their parents in the market, do we punish them both the same way?

Same rule, different context.

And maybe fairness means acknowledging that.


AI doesn't always do that

Most AI systems we build today encode a Western interpretation of fairness:

🧮 Group fairness

⚖️ Individual fairness

📊 Statistical parity

These are important, but they’re not always enough.
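To make the first of these concrete, here is a minimal sketch of a statistical-parity check, one common group-fairness metric. The function name and the toy loan data are illustrative, not from any real system:

```python
# Statistical parity asks: do two groups receive the positive outcome
# (e.g. loan approval) at the same rate? A difference near 0 means parity.

def statistical_parity_difference(outcomes, groups, positive=1):
    """Difference in positive-outcome rates between groups "A" and "B".

    outcomes: list of model decisions (1 = approved, 0 = denied)
    groups:   parallel list of group labels ("A" or "B")
    """
    def rate(g):
        picked = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(1 for o in picked if o == positive) / len(picked)

    return rate("A") - rate("B")

# Toy decisions: group A approved 3/4 of the time, group B only 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(outcomes, groups))  # 0.5
```

Notice what this metric *doesn't* see: why each decision was made, or whether the two groups faced the same circumstances in the first place. That blind spot is exactly the problem.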

Take this real case:

A credit scoring algorithm in Kenya failed to recognize community-based lending traditions, like rotating savings and credit associations (ROSCAs).

As a result, reliable borrowers were marked “high-risk” because the system didn’t understand local context.

Fair model?

Accurate data?

Maybe.

Fair outcome?

Not really.
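Here is a hypothetical illustration of that failure mode (not the actual Kenyan system, and the feature names are made up): a scoring rule that only sees formal bank history marks a reliable ROSCA saver as high-risk, because her repayment record lives outside the data the model was built on.

```python
# A context-blind rule: only formal credit history counts as evidence.
def risk_score(applicant):
    if applicant.get("formal_loans_repaid", 0) >= 2:
        return "low-risk"
    return "high-risk"

# The same threshold, but completed ROSCA cycles also count as a
# repayment track record.
def risk_score_with_context(applicant):
    evidence = (applicant.get("formal_loans_repaid", 0)
                + applicant.get("rosca_cycles_completed", 0))
    if evidence >= 2:
        return "low-risk"
    return "high-risk"

# A saver with no bank loans but five completed ROSCA cycles.
saver = {"formal_loans_repaid": 0, "rosca_cycles_completed": 5}
print(risk_score(saver))               # high-risk
print(risk_score_with_context(saver))  # low-risk
```

The point isn't that one extra feature fixes fairness. It's that the definition of "creditworthy evidence" was a cultural choice baked in before any metric was ever computed.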


Developers, we need to ask harder questions

If you’re working on AI, especially models that affect people’s lives, I urge you to consider:

  • Whose values are we embedding into our models?
  • Are we treating fairness as a checklist or a conversation?
  • Can our systems adapt to different cultural realities, not just datasets?

As I prepare for a PhD, I’m committed to asking these questions, and building models that reflect local values, not just global assumptions.

Because true fairness in AI might not come from the top down, but from the Global South outward.


💬 What’s your take?

Have you ever worked on an AI project where fairness was tricky to define?

Do you think AI models should adapt to different cultures, or aim for universal rules?

Let’s talk in the comments 👇🏽
