<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hemanta Nandi</title>
    <description>The latest articles on Forem by Hemanta Nandi (@hemantanandi).</description>
    <link>https://forem.com/hemantanandi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3811017%2F325ace6a-2606-454f-b456-eebd133f1de1.jpeg</url>
      <title>Forem: Hemanta Nandi</title>
      <link>https://forem.com/hemantanandi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hemantanandi"/>
    <language>en</language>
    <item>
      <title>Social Media Doesn’t Run on Algorithms. It Runs on People.</title>
      <dc:creator>Hemanta Nandi</dc:creator>
      <pubDate>Sat, 07 Mar 2026 05:48:34 +0000</pubDate>
      <link>https://forem.com/hemantanandi/social-media-doesnt-run-on-algorithms-it-runs-on-people-1198</link>
      <guid>https://forem.com/hemantanandi/social-media-doesnt-run-on-algorithms-it-runs-on-people-1198</guid>
      <description>&lt;p&gt;Most people think social media runs on algorithms.&lt;/p&gt;

&lt;p&gt;It doesn’t.&lt;/p&gt;

&lt;p&gt;It runs on people.&lt;/p&gt;

&lt;p&gt;Real people who review content, make hard calls, apply policies, and sometimes sit quietly after a shift thinking about what they just saw.&lt;/p&gt;

&lt;p&gt;I work in Trust &amp;amp; Safety. And no, it’s not just “removing bad posts.”&lt;/p&gt;

&lt;p&gt;Let me take you behind the screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf9bb6c65q858nad861x.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf9bb6c65q858nad861x.webp" alt=" " width="720" height="1080"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Internet Is Messy&lt;/strong&gt;&lt;br&gt;
On any given day, a platform can receive:&lt;/p&gt;

&lt;p&gt;Sexual content framed as entertainment&lt;br&gt;
Political debates that escalate within minutes&lt;br&gt;
Identity-based discussions that sit right on the edge&lt;br&gt;
Content that looks harmless until you examine the context&lt;/p&gt;

&lt;p&gt;Now imagine reviewing hundreds of these in a single shift.&lt;/p&gt;

&lt;p&gt;Your job is not to react emotionally.&lt;br&gt;
Your job is to apply policy consistently.&lt;/p&gt;

&lt;p&gt;That sounds simple.&lt;/p&gt;

&lt;p&gt;It isn’t.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s Not About What You Think&lt;/strong&gt;&lt;br&gt;
One of the biggest misconceptions about content moderation is that decisions are based on personal belief.&lt;/p&gt;

&lt;p&gt;They’re not.&lt;/p&gt;

&lt;p&gt;Every action must map to written guidelines. If something is removed, labeled, restricted, or escalated, there must be a clear policy reference behind it.&lt;/p&gt;

&lt;p&gt;There’s no room for “I don’t like this.”&lt;/p&gt;

&lt;p&gt;Instead, the questions look like this:&lt;/p&gt;

&lt;p&gt;Does it violate sexual content policy?&lt;br&gt;
Does it require a political disclosure label?&lt;br&gt;
Is it identity-based and potentially inflammatory?&lt;br&gt;
Does it cross into harassment or hate speech?&lt;/p&gt;

&lt;p&gt;Precision matters.&lt;br&gt;
Documentation matters.&lt;br&gt;
Consistency matters.&lt;/p&gt;

&lt;p&gt;Without those, enforcement becomes arbitrary. And arbitrary enforcement destroys trust.&lt;/p&gt;
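
&lt;p&gt;If you want the shape of that rule in code, here’s a toy sketch in Python. Everything in it is invented for illustration, including the policy codes; no real platform’s taxonomy looks like this. The point is only the constraint: a decision record that cannot exist without a written policy reference behind it.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Toy sketch only: policy codes and fields are invented for illustration.
from dataclasses import dataclass

POLICY_BOOK = {
    "SEX-01": "Sexual content framed as entertainment",
    "POL-02": "Political content requiring a disclosure label",
    "IDT-03": "Identity-based content that may be inflammatory",
    "HAR-04": "Harassment or hate speech",
}

@dataclass
class Decision:
    action: str      # "remove", "label", "restrict", or "escalate"
    policy_ref: str  # every action must cite a written policy
    rationale: str   # documented reasoning, never a personal opinion

    def __post_init__(self):
        # Enforcement without a policy reference is arbitrary,
        # so this record refuses to exist without one.
        if self.policy_ref not in POLICY_BOOK:
            raise ValueError("no written policy backs this action")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Notice what’s missing: there is no field for “I don’t like this.”&lt;/p&gt;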

&lt;p&gt;&lt;strong&gt;Context Is Everything&lt;/strong&gt;&lt;br&gt;
A kiss in a movie scene might require a label.&lt;/p&gt;

&lt;p&gt;A discussion about gender might need careful classification.&lt;/p&gt;

&lt;p&gt;A political clip may be allowed, but only with proper disclosure.&lt;/p&gt;

&lt;p&gt;The same piece of content can be:&lt;/p&gt;

&lt;p&gt;Educational&lt;br&gt;
Satirical&lt;br&gt;
Exploitative&lt;br&gt;
Inflammatory&lt;/p&gt;

&lt;p&gt;Your job is to tell the difference.&lt;/p&gt;

&lt;p&gt;Trust &amp;amp; Safety isn’t black and white. It’s gray. And gray requires judgment, experience, and discipline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Mental Discipline No One Talks About&lt;/strong&gt;&lt;br&gt;
You train yourself to focus on structure.&lt;/p&gt;

&lt;p&gt;Review.&lt;br&gt;
Assess.&lt;br&gt;
Classify.&lt;br&gt;
Document.&lt;/p&gt;
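
&lt;p&gt;If that loop were code, it would be nothing more than four functions called in a fixed order. A purely hypothetical sketch, with every name invented:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical shape of the loop, not any real tooling.
def review(item):
    # Look at the content itself, in full.
    return {"item": item}

def assess(case):
    # Weigh context: intent, framing, audience.
    case["context"] = "assessed"
    return case

def classify(case):
    # Map the case to a policy category, not to an opinion.
    case["category"] = "POL-02"  # invented code from the sketch above
    return case

def document(case):
    # Record the call so it can be audited later.
    case["documented"] = True
    return case

def moderate(item):
    # The discipline is the order: every step, every time.
    return document(classify(assess(review(item))))
&lt;/code&gt;&lt;/pre&gt;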

&lt;p&gt;But behind that structure, there’s constant cognitive pressure.&lt;/p&gt;

&lt;p&gt;Every decision affects:&lt;/p&gt;

&lt;p&gt;A creator’s reach&lt;br&gt;
A viewer’s experience&lt;br&gt;
A platform’s reputation&lt;/p&gt;

&lt;p&gt;You are making high-impact decisions at scale, often within minutes.&lt;/p&gt;

&lt;p&gt;Burnout is real in this field.&lt;/p&gt;

&lt;p&gt;So is resilience.&lt;/p&gt;

&lt;p&gt;The part people don’t see is the emotional control required to stay objective while reviewing sensitive or disturbing material repeatedly.&lt;/p&gt;

&lt;p&gt;That’s not something an algorithm carries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s Not Just Moderation. It’s Risk Control.&lt;/strong&gt;&lt;br&gt;
Trust &amp;amp; Safety protects more than users.&lt;/p&gt;

&lt;p&gt;It protects:&lt;/p&gt;

&lt;p&gt;Communities from toxic escalation&lt;br&gt;
Brands from reputational damage&lt;br&gt;
Platforms from regulatory scrutiny&lt;br&gt;
Public discourse from manipulation&lt;/p&gt;

&lt;p&gt;Yes, it’s policy enforcement.&lt;/p&gt;

&lt;p&gt;But it’s also behavioral analysis, trend monitoring, escalation management, and governance.&lt;/p&gt;

&lt;p&gt;Automation helps. AI assists. But in complex, high-risk cases, humans still make the final call.&lt;/p&gt;

&lt;p&gt;And those calls are rarely casual.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Work Matters More Than Ever&lt;/strong&gt;&lt;br&gt;
Content is growing faster than policies evolve.&lt;/p&gt;

&lt;p&gt;AI-generated media is increasing.&lt;br&gt;
Political polarization is intensifying.&lt;br&gt;
Identity-based conversations are more sensitive and more visible.&lt;/p&gt;

&lt;p&gt;Trust &amp;amp; Safety teams are the quiet layer holding it together.&lt;/p&gt;

&lt;p&gt;When we do our job well, nothing dramatic happens.&lt;/p&gt;

&lt;p&gt;No headline.&lt;br&gt;
No crisis.&lt;br&gt;
No viral outrage.&lt;/p&gt;

&lt;p&gt;And that’s exactly the point.&lt;/p&gt;

&lt;p&gt;Because the safest platforms are the ones where chaos never reaches the surface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Closing Reflection&lt;/strong&gt;&lt;br&gt;
If you’ve ever wondered why your feed feels relatively balanced, it’s not just code.&lt;/p&gt;

&lt;p&gt;It’s people.&lt;/p&gt;

&lt;p&gt;People trained to interpret policy, manage risk, and make accountable decisions under pressure.&lt;/p&gt;

&lt;p&gt;Trust &amp;amp; Safety isn’t glamorous work.&lt;/p&gt;

&lt;p&gt;But it’s foundational.&lt;/p&gt;

&lt;p&gt;And the internet would look very different without it.&lt;/p&gt;

</description>
      <category>career</category>
      <category>discuss</category>
      <category>mentalhealth</category>
      <category>watercooler</category>
    </item>
  </channel>
</rss>
