<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Irtiqa Hub</title>
    <description>The latest articles on Forem by Irtiqa Hub (@irtiqa_hub).</description>
    <link>https://forem.com/irtiqa_hub</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3690209%2F3c9dc089-4319-47e0-b19c-4728c0016fe7.jpg</url>
      <title>Forem: Irtiqa Hub</title>
      <link>https://forem.com/irtiqa_hub</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/irtiqa_hub"/>
    <language>en</language>
    <item>
      <title>How we built an AI to beat the "Resume Bots" (ATS)</title>
      <dc:creator>Irtiqa Hub</dc:creator>
      <pubDate>Fri, 02 Jan 2026 16:25:32 +0000</pubDate>
      <link>https://forem.com/irtiqa_hub/how-we-built-an-ai-to-beat-the-resume-bots-ats-3e44</link>
      <guid>https://forem.com/irtiqa_hub/how-we-built-an-ai-to-beat-the-resume-bots-ats-3e44</guid>
      <description>&lt;p&gt;We've all been there: you spend hours crafting the perfect resume, hit "Apply," and... silence.&lt;/p&gt;

&lt;p&gt;When we started digging into why this happens, we realized the problem usually isn't the candidate's skills; it's the parsability of the document. Most modern hiring runs on Applicant Tracking Systems (ATS) that act as gatekeepers. If your PDF has complex columns or invisible tables, or lacks specific semantic keywords, the bot rejects you before a human ever sees your name.&lt;/p&gt;

&lt;p&gt;As developers, we realized this wasn't a "writing" problem. It was a data structure problem. So, we decided to build a tool to fix it.&lt;/p&gt;

&lt;p&gt;The Challenge: Reverse Engineering the Parser&lt;br&gt;
We wanted to build an engine that "sees" a resume exactly how an ATS sees it. We broke the problem down into three technical steps:&lt;/p&gt;

&lt;p&gt;Text Extraction: We moved away from simple PDF-to-Text converters. We needed to preserve the structure (headers vs. body text) to understand context.&lt;/p&gt;
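&lt;p&gt;As a rough illustration of what "preserving structure" means (this is a hypothetical sketch, not our production extractor), you can keep each text run's font size during extraction and treat runs well above the page's median size as section headers. The (text, size) tuples below stand in for the output of a PDF library such as pdfplumber:&lt;/p&gt;

```python
# Hypothetical sketch: classify extracted text runs as header vs. body
# using font size relative to the page median. Real extraction would
# come from a PDF library; the tuples here are illustrative stand-ins.
from statistics import median

def classify_runs(runs):
    """runs: list of (text, font_size) tuples -> list of (label, text)."""
    body_size = median(size for _, size in runs)
    labeled = []
    for text, size in runs:
        # Runs noticeably larger than the median are likely headers.
        label = "header" if size >= body_size * 1.2 else "body"
        labeled.append((label, text))
    return labeled

runs = [
    ("EXPERIENCE", 16.0),
    ("Software Engineer, Acme Corp", 10.5),
    ("Built internal tooling in Python.", 10.5),
    ("EDUCATION", 16.0),
    ("B.Tech, Computer Science", 10.5),
]
print(classify_runs(runs))
```

&lt;p&gt;Keeping the header/body distinction lets the later stages know which section (Experience, Education, Skills) a keyword was found in.&lt;/p&gt;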

&lt;p&gt;Keyword Density Analysis (NLP): We used Natural Language Processing to scan Job Descriptions (JDs) and separate "hard skills" (like React, Python, SQL) from "soft skills."&lt;/p&gt;
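&lt;p&gt;In its simplest form (a sketch under assumptions, not our actual pipeline), the JD scan can be a lexicon match: tokenize the posting and intersect the tokens with hard-skill and soft-skill vocabularies. The lexicons and JD text below are illustrative:&lt;/p&gt;

```python
# Hypothetical sketch of the JD scan: lexicon-based skill extraction.
# The skill sets here are tiny illustrative samples, not production data.
import re

HARD_SKILLS = {"react", "python", "sql", "docker", "aws"}
SOFT_SKILLS = {"communication", "leadership", "teamwork"}

def extract_skills(jd_text):
    # Lowercase and tokenize; keep chars common in tech names (c++, c#, .net).
    tokens = set(re.findall(r"[a-z+#.]+", jd_text.lower()))
    return {
        "hard": sorted(tokens & HARD_SKILLS),
        "soft": sorted(tokens & SOFT_SKILLS),
    }

jd = ("Looking for a Python developer with SQL and React experience; "
      "strong communication skills a plus.")
print(extract_skills(jd))
# {'hard': ['python', 'react', 'sql'], 'soft': ['communication']}
```

&lt;p&gt;A real system would add stemming, multi-word phrases ("machine learning"), and a much larger lexicon, but the core idea is the same.&lt;/p&gt;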

&lt;p&gt;Gap Analysis: The core logic had to compare the two datasets (Resume vs. JD) and return a "match score" based on vector similarity, not just simple word counts.&lt;/p&gt;
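&lt;p&gt;The gap analysis can be sketched with plain term-frequency vectors and cosine similarity (our production scoring uses weighted vectors, but the shape of the computation is the same; everything below is illustrative):&lt;/p&gt;

```python
# Minimal sketch of the gap analysis: vectorize resume and JD as term
# counts, score cosine similarity, and report JD terms missing from the
# resume. A real system would use weighted or embedded vectors.
import math
import re
from collections import Counter

def vectorize(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def match_score(resume, jd):
    a, b = vectorize(resume), vectorize(jd)
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    score = dot / norm if norm else 0.0
    missing = sorted(set(b) - set(a))  # JD terms the resume never mentions
    return round(score, 3), missing

score, missing = match_score(
    "Python developer experienced with SQL and Docker",
    "Seeking Python developer with SQL and React",
)
print(score, missing)
```

&lt;p&gt;The "missing" list is what makes the score actionable: it tells the candidate exactly which JD terms their resume never mentions.&lt;/p&gt;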

&lt;p&gt;The Result&lt;br&gt;
It took us a few months of tweaking the weighting algorithms, especially to handle the specific formats used in the Indian market (like Naukri profiles), but we finally cracked it.&lt;/p&gt;

&lt;p&gt;We packaged this engine into &lt;a href="https://irtiqahub.com/careerlift" rel="noopener noreferrer"&gt;CareerLift&lt;/a&gt;, a tool that now helps candidates "debug" their resumes. Instead of just guessing, users can see exactly which keywords are missing and fix their formatting so the parsers can actually read it.&lt;/p&gt;

&lt;p&gt;What we learned&lt;br&gt;
The biggest takeaway? Simplicity wins. Complex designs break parsers. The most "boring" resumes often have the highest success rates because the data is clean.&lt;/p&gt;

&lt;p&gt;If you're working on any NLP or parsing projects, we'd love to hear how you handle unstructured PDF data! It was definitely the hardest part of this build.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>webdev</category>
      <category>startup</category>
    </item>
  </channel>
</rss>
