<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dmitriy Dmitriy</title>
    <description>The latest articles on Forem by Dmitriy Dmitriy (@dmitriy_dmitriy_d50839940).</description>
    <link>https://forem.com/dmitriy_dmitriy_d50839940</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3840942%2F0eb94570-af39-4cb5-b236-b4d3b1cee488.png</url>
      <title>Forem: Dmitriy Dmitriy</title>
      <link>https://forem.com/dmitriy_dmitriy_d50839940</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dmitriy_dmitriy_d50839940"/>
    <language>en</language>
    <item>
      <title>When an API stopped returning JSON, I switched to Selenium and added AI summaries</title>
      <dc:creator>Dmitriy Dmitriy</dc:creator>
      <pubDate>Sat, 11 Apr 2026 02:24:55 +0000</pubDate>
      <link>https://forem.com/dmitriy_dmitriy_d50839940/when-an-api-stopped-returning-json-i-switched-to-selenium-and-added-ai-summaries-1kfl</link>
      <guid>https://forem.com/dmitriy_dmitriy_d50839940/when-an-api-stopped-returning-json-i-switched-to-selenium-and-added-ai-summaries-1kfl</guid>
      <description>&lt;p&gt;I built a parser around the DNB Business Directory API. At first, everything worked fine — simple requests, JSON responses, clean and fast.&lt;/p&gt;

&lt;p&gt;Then it suddenly stopped working.&lt;/p&gt;

&lt;p&gt;My script started getting empty or unusable responses, even though the same requests still worked perfectly in the browser. Status codes were often 200, but the data was missing or incomplete.&lt;/p&gt;

&lt;p&gt;After trying different headers, sessions, retries, and delays, it became clear that this wasn’t a normal API issue: most likely anti-bot filtering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I changed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of trying to bypass it at the HTTP level, I switched to Selenium.&lt;/p&gt;

&lt;p&gt;The new approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;open the site in a real browser&lt;/li&gt;
&lt;li&gt;search companies by keyword + country&lt;/li&gt;
&lt;li&gt;paginate through results&lt;/li&gt;
&lt;li&gt;collect company profile links&lt;/li&gt;
&lt;li&gt;parse data directly from rendered pages&lt;/li&gt;
&lt;/ul&gt;
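&lt;p&gt;The browser flow above can be sketched roughly like this. The URL pattern, query parameters, and the &lt;code&gt;a.company-link&lt;/code&gt; selector are hypothetical placeholders, not the real DNB markup:&lt;/p&gt;

```python
from urllib.parse import urlencode

def build_search_url(base_url, keyword, country, page):
    # Assemble the search URL for one results page (parameter names are guesses).
    query = urlencode({"q": keyword, "countryIso": country, "page": page})
    return base_url + "/search?" + query

def collect_profile_links(base_url, keyword, country, max_pages=5):
    # Drive a real browser so the session behaves like a normal user.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    links = []
    try:
        for page in range(1, max_pages + 1):
            driver.get(build_search_url(base_url, keyword, country, page))
            anchors = driver.find_elements(By.CSS_SELECTOR, "a.company-link")
            if not anchors:
                break  # no more result pages
            links.extend(a.get_attribute("href") for a in anchors)
    finally:
        driver.quit()
    return links
```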

&lt;p&gt;This worked immediately because it behaves like a real user session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then I added AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After collecting company data, I wanted to understand what these companies actually do.&lt;/p&gt;

&lt;p&gt;So I added a second stage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;scrape the company website (home + a few internal pages)&lt;/li&gt;
&lt;li&gt;clean the text&lt;/li&gt;
&lt;li&gt;send it to Groq&lt;/li&gt;
&lt;li&gt;generate a short summary + list of services&lt;/li&gt;
&lt;/ul&gt;
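&lt;p&gt;A minimal sketch of that second stage, assuming the page text has already been extracted from HTML; the model name is just an example, not necessarily the one used here:&lt;/p&gt;

```python
import re

def clean_text(text, max_chars=6000):
    # Collapse whitespace and cap length so the prompt stays small.
    return re.sub(r"\s+", " ", text).strip()[:max_chars]

def build_prompt(company, site_text):
    return (
        f"Summarize what {company} does in 2-3 sentences, "
        "then list its main services as bullet points.\n\n" + site_text
    )

def summarize(company, site_text):
    # Requires the groq package and a GROQ_API_KEY in the environment.
    from groq import Groq
    client = Groq()
    response = client.chat.completions.create(
        model="llama-3.1-8b-instant",  # example model name
        messages=[
            {"role": "user", "content": build_prompt(company, clean_text(site_text))}
        ],
    )
    return response.choices[0].message.content
```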

&lt;p&gt;I also added a simple keyword-based filter to detect risky content (gambling, adult, etc.) before sending data to the LLM.&lt;/p&gt;
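&lt;p&gt;The pre-filter is nothing fancy; something along these lines, where the keyword list is purely illustrative:&lt;/p&gt;

```python
import re

# Illustrative subset; a real list would be longer and curated.
RISKY_KEYWORDS = {"casino", "poker", "betting", "adult", "escort"}

def looks_risky(text):
    # Flag pages whose vocabulary overlaps the risky-keyword list.
    words = set(re.findall(r"[a-z]+", text.lower()))
    return not words.isdisjoint(RISKY_KEYWORDS)
```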

&lt;p&gt;&lt;strong&gt;Final pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Selenium → company profiles → websites → multi-page scraping → AI summaries → Excel&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Interestingly, the workaround ended up being more useful than the original solution.&lt;/p&gt;

&lt;p&gt;Instead of just collecting structured data, I now also get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a quick description of each company&lt;/li&gt;
&lt;li&gt;basic content classification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Demo / project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dibara512.github.io/my-site/" rel="noopener noreferrer"&gt;https://dibara512.github.io/my-site/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>webdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>I built a company data pipeline — here’s what broke in real-world data</title>
      <dc:creator>Dmitriy Dmitriy</dc:creator>
      <pubDate>Sat, 04 Apr 2026 17:55:37 +0000</pubDate>
      <link>https://forem.com/dmitriy_dmitriy_d50839940/i-built-a-company-data-pipeline-heres-what-broke-in-real-world-data-3m7e</link>
      <guid>https://forem.com/dmitriy_dmitriy_d50839940/i-built-a-company-data-pipeline-heres-what-broke-in-real-world-data-3m7e</guid>
      <description>&lt;p&gt;I built a Python pipeline that automates company data collection:&lt;br&gt;
searching registries, extracting websites, scraping content, finding phone numbers, and generating summaries using AI.&lt;/p&gt;

&lt;p&gt;At first, I thought scraping would be the hard part.&lt;/p&gt;

&lt;p&gt;It turned out to be the easiest.&lt;/p&gt;

&lt;h2&gt;What the pipeline does&lt;/h2&gt;

&lt;p&gt;The system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;finds companies via API (DNB)&lt;/li&gt;
&lt;li&gt;extracts website and metadata&lt;/li&gt;
&lt;li&gt;visits multiple pages of each site&lt;/li&gt;
&lt;li&gt;extracts phone numbers from HTML, links and structured data&lt;/li&gt;
&lt;li&gt;generates summaries using LLMs&lt;/li&gt;
&lt;li&gt;exports everything into a structured Excel dataset&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What actually broke&lt;/h2&gt;

&lt;h3&gt;1. APIs are unstable and rate-limited&lt;/h3&gt;

&lt;p&gt;The DNB API would randomly return 429 and 503 errors.&lt;/p&gt;

&lt;p&gt;Without retries, the pipeline would fail after a few requests.&lt;/p&gt;

&lt;p&gt;I ended up implementing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;retry logic&lt;/li&gt;
&lt;li&gt;exponential backoff&lt;/li&gt;
&lt;li&gt;random delays between requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even then, stability is never guaranteed.&lt;/p&gt;
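&lt;p&gt;The retry-with-backoff logic amounts to a few lines; a sketch, not the exact code from the pipeline:&lt;/p&gt;

```python
import random
import time

def fetch_with_retries(fetch, max_attempts=5, base_delay=1.0):
    # Retry a flaky call with exponential backoff plus random jitter.
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```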

&lt;h3&gt;2. Phone numbers are surprisingly hard&lt;/h3&gt;

&lt;p&gt;Phone extraction turned out to be one of the hardest parts.&lt;/p&gt;

&lt;p&gt;Numbers appear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;in different formats (+1, brackets, spaces, dashes)&lt;/li&gt;
&lt;li&gt;mixed with dates or IDs&lt;/li&gt;
&lt;li&gt;inside HTML text, links (tel:) and JSON-LD&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I had to build logic to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;normalize formats&lt;/li&gt;
&lt;li&gt;filter invalid matches&lt;/li&gt;
&lt;li&gt;classify numbers by reliability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without this step, the output was unusable.&lt;/p&gt;
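&lt;p&gt;A simplified sketch of the kind of normalization involved; the real validation would be stricter:&lt;/p&gt;

```python
import re

def normalize_phone(raw):
    # Keep digits and a leading plus; convert the international 00 prefix to +.
    digits = re.sub(r"[^0-9+]", "", raw)
    if digits.startswith("00"):
        digits = "+" + digits[2:]
    # Plausible numbers have 7 to 15 digits (E.164 allows at most 15).
    if len(digits.lstrip("+")) not in range(7, 16):
        return None
    return digits
```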

&lt;h3&gt;3. Websites are inconsistent&lt;/h3&gt;

&lt;p&gt;Every site is different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;different HTML structures&lt;/li&gt;
&lt;li&gt;missing data&lt;/li&gt;
&lt;li&gt;broken markup&lt;/li&gt;
&lt;li&gt;content hidden in scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is no universal parser.&lt;/p&gt;

&lt;p&gt;Even simple tasks like extracting clean text require handling multiple edge cases.&lt;/p&gt;

&lt;h3&gt;4. Matching company data is unreliable&lt;/h3&gt;

&lt;p&gt;Company names don’t always match exactly across sources.&lt;/p&gt;

&lt;p&gt;Small differences (spacing, symbols, legal forms) break naive matching.&lt;/p&gt;

&lt;p&gt;This forced me to implement stricter matching logic and fallbacks.&lt;/p&gt;
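&lt;p&gt;The core of stricter matching is normalizing both names before comparing them, roughly like this; the legal-form list here is illustrative:&lt;/p&gt;

```python
import re

# Illustrative subset of legal-form tokens to ignore when comparing.
LEGAL_FORMS = {"inc", "llc", "ltd", "gmbh", "sa", "oy", "ab", "corp", "co"}

def normalize_name(name):
    # Lowercase, drop punctuation, strip common legal-form suffixes.
    tokens = re.findall(r"[a-z0-9]+", name.lower())
    return " ".join(t for t in tokens if t not in LEGAL_FORMS)

def names_match(a, b):
    return normalize_name(a) == normalize_name(b)
```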

&lt;h3&gt;5. AI is helpful, but not deterministic&lt;/h3&gt;

&lt;p&gt;I used LLMs to generate company summaries from scraped text.&lt;/p&gt;

&lt;p&gt;But even with the same input:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;outputs vary&lt;/li&gt;
&lt;li&gt;rate limits happen&lt;/li&gt;
&lt;li&gt;some responses fail&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To make it usable, I had to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;control prompts carefully&lt;/li&gt;
&lt;li&gt;limit output length&lt;/li&gt;
&lt;li&gt;add fallback between models&lt;/li&gt;
&lt;/ul&gt;
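&lt;p&gt;The fallback between models is a plain loop; sketched here with &lt;code&gt;call_model&lt;/code&gt; as a stand-in for the actual API call:&lt;/p&gt;

```python
def summarize_with_fallback(prompt, models, call_model):
    # Try each model in order and return the first successful answer.
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except Exception as err:
            last_error = err  # remember why this model failed
    raise RuntimeError("all models failed") from last_error
```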

&lt;h2&gt;Results&lt;/h2&gt;

&lt;p&gt;The pipeline can process hundreds of companies per run, replacing hours of manual work.&lt;/p&gt;

&lt;p&gt;But the main takeaway:&lt;/p&gt;

&lt;p&gt;Scraping is only a small part of the system.&lt;/p&gt;

&lt;p&gt;Most of the effort goes into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cleaning data&lt;/li&gt;
&lt;li&gt;validating results&lt;/li&gt;
&lt;li&gt;handling edge cases&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;If you’re starting with web scraping, the hardest part isn’t extracting data.&lt;/p&gt;

&lt;p&gt;It’s making that data reliable.&lt;/p&gt;

&lt;p&gt;Real-world data is messy, inconsistent and unpredictable.&lt;/p&gt;

&lt;p&gt;Building a pipeline is not about scraping —&lt;br&gt;
it’s about turning raw data into something usable.&lt;/p&gt;

&lt;p&gt;Happy to share more details if there’s interest.&lt;/p&gt;

</description>
      <category>api</category>
      <category>dataengineering</category>
      <category>python</category>
      <category>webscraping</category>
    </item>
    <item>
      <title>How I Automated Data Collection with Python (Web Scraping, AI, Excel)</title>
      <dc:creator>Dmitriy Dmitriy</dc:creator>
      <pubDate>Tue, 24 Mar 2026 01:59:10 +0000</pubDate>
      <link>https://forem.com/dmitriy_dmitriy_d50839940/-how-i-build-python-automation-tools-for-real-business-tasks-scraping-ai-excel-30j9</link>
      <guid>https://forem.com/dmitriy_dmitriy_d50839940/-how-i-build-python-automation-tools-for-real-business-tasks-scraping-ai-excel-30j9</guid>
      <description>&lt;p&gt;I recently built a set of Python tools that automate data collection, extract company information, and generate summaries using AI.&lt;/p&gt;

&lt;p&gt;Instead of spending hours manually collecting data, everything is processed automatically.&lt;/p&gt;

&lt;p&gt;Here’s a quick breakdown of how it works 👇&lt;/p&gt;

&lt;p&gt;I’m a Python developer focused on automation, data extraction, and building practical tools for business workflows.&lt;/p&gt;

&lt;p&gt;Instead of writing simple scripts, I build solutions that actually save time — collecting company data, automating repetitive processes, and integrating AI into real tasks.&lt;/p&gt;

&lt;p&gt;If you want to see real examples of what I build, you can check my work here:&lt;br&gt;&lt;br&gt;
👉 &lt;a href="https://dibara512.github.io/my-site/" rel="noopener noreferrer"&gt;https://dibara512.github.io/my-site/&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;What I build&lt;/h2&gt;

&lt;p&gt;Here are the main types of solutions I work on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web scraping tools (registries, directories, company databases)&lt;/li&gt;
&lt;li&gt;Browser automation (forms, dashboards, repetitive workflows)&lt;/li&gt;
&lt;li&gt;Excel and database processing tools&lt;/li&gt;
&lt;li&gt;AI-powered data analysis (LLMs, summaries, classification)&lt;/li&gt;
&lt;li&gt;Internal tools with GUI for teams&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;Real examples&lt;/h2&gt;

&lt;h3&gt;1. Company data extraction from registries&lt;/h3&gt;

&lt;p&gt;I’ve built parsers for multiple national registries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finland
&lt;/li&gt;
&lt;li&gt;France (SIREN extraction)
&lt;/li&gt;
&lt;li&gt;Belgium, Austria, UK, Poland
&lt;/li&gt;
&lt;li&gt;Japan, Iceland
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Typical workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read company list from Excel
&lt;/li&gt;
&lt;li&gt;Automatically search registry
&lt;/li&gt;
&lt;li&gt;Match exact company names
&lt;/li&gt;
&lt;li&gt;Extract registration numbers
&lt;/li&gt;
&lt;li&gt;Export results back to Excel
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows processing hundreds of companies in minutes instead of hours.&lt;/p&gt;


&lt;h3&gt;2. Advanced contact scraper&lt;/h3&gt;

&lt;p&gt;One of my tools focuses on extracting contact data from websites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Phone numbers (tel:, JSON-LD, raw HTML)
&lt;/li&gt;
&lt;li&gt;Company websites
&lt;/li&gt;
&lt;li&gt;Structured data
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It includes filtering and validation, so the output is clean and ready to use.&lt;/p&gt;
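&lt;p&gt;For the tel: part specifically, the extraction can be sketched like this, assuming the href values were already collected (e.g. via BeautifulSoup):&lt;/p&gt;

```python
import re

def phones_from_hrefs(hrefs):
    # Keep tel: links and strip everything except digits and a plus sign.
    phones = []
    for href in hrefs:
        if href.lower().startswith("tel:"):
            number = re.sub(r"[^0-9+]", "", href[4:])
            if number:
                phones.append(number)
    return phones
```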


&lt;h3&gt;3. AI-powered website analysis&lt;/h3&gt;

&lt;p&gt;I built a system that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loads multiple pages from a website
&lt;/li&gt;
&lt;li&gt;Extracts and aggregates content
&lt;/li&gt;
&lt;li&gt;Generates summaries using AI (Groq API)
&lt;/li&gt;
&lt;li&gt;Identifies business activity
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is especially useful when working with large datasets of unknown websites.&lt;/p&gt;
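&lt;p&gt;The extract-and-aggregate step is essentially concatenation with a length cap; a simplified sketch:&lt;/p&gt;

```python
def aggregate_pages(pages, max_chars=8000):
    # pages: list of (url, text) pairs; join with separators, cap total length.
    parts = []
    total = 0
    for url, text in pages:
        chunk = "[" + url + "]\n" + text.strip()
        parts.append(chunk)
        total += len(chunk)
        if total > max_chars:
            break  # enough context for one summary
    return "\n\n".join(parts)[:max_chars]
```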


&lt;h3&gt;4. Full automation pipeline&lt;/h3&gt;

&lt;p&gt;One of the most advanced tools I developed combines everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-page company search (up to 20 pages)
&lt;/li&gt;
&lt;li&gt;Matching results with Excel datasets
&lt;/li&gt;
&lt;li&gt;Collecting website, phone, industry, DUNS
&lt;/li&gt;
&lt;li&gt;AI-generated summaries
&lt;/li&gt;
&lt;li&gt;Structured export to Excel
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This replaces hours of manual research and data entry.&lt;/p&gt;

&lt;p&gt;You can find similar solutions here:&lt;br&gt;&lt;br&gt;
👉 &lt;a href="https://dibara512.github.io/my-site/" rel="noopener noreferrer"&gt;https://dibara512.github.io/my-site/&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;Example: extracting frequent keywords from company data&lt;/h2&gt;

&lt;p&gt;Sometimes I need to analyze large datasets of company names or descriptions.&lt;/p&gt;

&lt;p&gt;Here’s a simple example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;collections&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Counter&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;top_word_frequencies&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;min_len&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;top_n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[a-z0-9]+(?:-[a-z0-9]+)*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;tokens&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;min_len&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;counts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Counter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tokens&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;counts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;most_common&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;top_n&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="n"&gt;text_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Apple Inc Google LLC Microsoft Corporation Apple Google Apple
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="n"&gt;top_words&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;top_word_frequencies&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;word&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;freq&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;top_words&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;word&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;freq&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach helps me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;identify patterns in company data&lt;/li&gt;
&lt;li&gt;generate better keywords&lt;/li&gt;
&lt;li&gt;improve search and matching logic&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Technologies I use&lt;/h2&gt;

&lt;p&gt;In most of my projects, I work with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;Selenium / BeautifulSoup&lt;/li&gt;
&lt;li&gt;Excel (openpyxl, pandas)&lt;/li&gt;
&lt;li&gt;SQL (SQLite, Firebird)&lt;/li&gt;
&lt;li&gt;APIs (Groq, OpenStreetMap)&lt;/li&gt;
&lt;li&gt;Tkinter (for internal tools)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How I approach projects&lt;/h2&gt;

&lt;p&gt;My workflow is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You describe the task&lt;/li&gt;
&lt;li&gt;I propose a practical solution&lt;/li&gt;
&lt;li&gt;I build a working tool&lt;/li&gt;
&lt;li&gt;You get a ready-to-use result&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No unnecessary complexity — only tools that solve the problem.&lt;/p&gt;

&lt;h2&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;Automation is not about writing scripts — it’s about saving time and reducing manual work.&lt;/p&gt;

&lt;p&gt;If you're working with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;web scraping&lt;/li&gt;
&lt;li&gt;automation&lt;/li&gt;
&lt;li&gt;data collection&lt;/li&gt;
&lt;li&gt;AI workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;feel free to explore my work here:&lt;br&gt;
👉 &lt;a href="https://dibara512.github.io/my-site/" rel="noopener noreferrer"&gt;https://dibara512.github.io/my-site/&lt;/a&gt;   &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
