<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Krishnan R</title>
    <description>The latest articles on Forem by Krishnan R (@krish2305).</description>
    <link>https://forem.com/krish2305</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3350644%2F6f3a4145-2fb8-4d86-87cc-b63459b747b1.jpg</url>
      <title>Forem: Krishnan R</title>
      <link>https://forem.com/krish2305</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/krish2305"/>
    <language>en</language>
    <item>
      <title>AWS Cloud Practitioner Essentials Certificate Achieved!</title>
      <dc:creator>Krishnan R</dc:creator>
      <pubDate>Tue, 12 Aug 2025 12:25:57 +0000</pubDate>
      <link>https://forem.com/krish2305/aws-cloud-practitioner-essentials-certificate-achieved-4k0p</link>
      <guid>https://forem.com/krish2305/aws-cloud-practitioner-essentials-certificate-achieved-4k0p</guid>
      <description>&lt;h1&gt;
  
  
  🚀 Achieved the AWS Cloud Practitioner Essentials Certificate!
&lt;/h1&gt;

&lt;p&gt;I successfully completed the &lt;strong&gt;AWS Cloud Practitioner Essentials&lt;/strong&gt; certificate from AWS! This achievement was made possible by an excellent free course that helped me solidify my understanding of AWS Cloud concepts, including core services, security, pricing, and architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jbfaevrxpuuz4wlcb84.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jbfaevrxpuuz4wlcb84.webp" alt="AWS Cloud Practitioner Essentials certificate" width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>python</category>
      <category>career</category>
      <category>news</category>
    </item>
    <item>
      <title>Computer Vision Algorithms Led AI — Until Transformers Took Over</title>
      <dc:creator>Krishnan R</dc:creator>
      <pubDate>Thu, 31 Jul 2025 08:10:32 +0000</pubDate>
      <link>https://forem.com/krish2305/computer-vision-algorithms-led-ai-until-transformers-took-over-4l3h</link>
      <guid>https://forem.com/krish2305/computer-vision-algorithms-led-ai-until-transformers-took-over-4l3h</guid>
      <description>&lt;h1&gt;
  
  
  Computer Vision Algorithms Led AI — Until Transformers Took Over
&lt;/h1&gt;

&lt;p&gt;Until 2017, most AI advancements were driven by breakthroughs in &lt;strong&gt;computer vision&lt;/strong&gt;, largely powered by &lt;strong&gt;Convolutional Neural Networks (CNNs)&lt;/strong&gt;. Models like &lt;strong&gt;&lt;a href="https://arxiv.org/abs/1512.03385" rel="noopener noreferrer"&gt;ResNet&lt;/a&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;a href="https://arxiv.org/abs/1506.02640" rel="noopener noreferrer"&gt;YOLO&lt;/a&gt;&lt;/strong&gt;, and &lt;strong&gt;&lt;a href="https://arxiv.org/abs/1506.01497" rel="noopener noreferrer"&gt;Faster R-CNN&lt;/a&gt;&lt;/strong&gt; enabled significant progress in tasks such as image classification, object detection, and segmentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Turning Point: Transformers in 2017
&lt;/h2&gt;

&lt;p&gt;In 2017, the introduction of the &lt;strong&gt;Transformer architecture&lt;/strong&gt; through the paper &lt;em&gt;&lt;a href="https://arxiv.org/abs/1706.03762" rel="noopener noreferrer"&gt;"Attention is All You Need"&lt;/a&gt;&lt;/em&gt; marked a major shift in the AI landscape.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Originally designed for &lt;strong&gt;Natural Language Processing (NLP)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Led to models like:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://arxiv.org/abs/1810.04805" rel="noopener noreferrer"&gt;BERT&lt;/a&gt;&lt;/strong&gt; (Bidirectional Encoder Representations from Transformers)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://openai.com/research/gpt" rel="noopener noreferrer"&gt;GPT&lt;/a&gt;&lt;/strong&gt; (Generative Pretrained Transformer)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://arxiv.org/abs/1910.10683" rel="noopener noreferrer"&gt;T5&lt;/a&gt;&lt;/strong&gt; (Text-To-Text Transfer Transformer)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;These models achieved state-of-the-art performance in many NLP benchmarks and brought &lt;strong&gt;language models to the center of AI research&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transformers Expand Beyond Text
&lt;/h2&gt;

&lt;p&gt;Over time, the impact of Transformers extended beyond NLP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Computer Vision&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://arxiv.org/abs/2010.11929" rel="noopener noreferrer"&gt;ViT (Vision Transformer)&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://segment-anything.com/" rel="noopener noreferrer"&gt;SAM (Segment Anything)&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Multi-modal Models&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://openai.com/research/clip" rel="noopener noreferrer"&gt;CLIP&lt;/a&gt;&lt;/strong&gt; (connects text and images)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://openai.com/dall-e" rel="noopener noreferrer"&gt;DALL·E&lt;/a&gt;&lt;/strong&gt; (text-to-image generation)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;These models demonstrate the &lt;strong&gt;flexibility and scalability&lt;/strong&gt; of the Transformer architecture across vision, language, and beyond.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Paradigm Shift in AI
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The shift from CNN-dominated pipelines to Transformer-based architectures represents one of the most significant transitions in the history of AI.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  What do you think?
&lt;/h3&gt;

&lt;p&gt;Let me know your thoughts in the comments below.&lt;/p&gt;




&lt;p&gt;#AI #DeepLearning #Transformers #NLP #ComputerVision #BERT #GPT #ViT #CLIP #TechTrends&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>community</category>
      <category>architecture</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Instance Segmentation with Mask R-CNN (ResNet-50 + FPN) using Detectron2</title>
      <dc:creator>Krishnan R</dc:creator>
      <pubDate>Sun, 20 Jul 2025 18:53:30 +0000</pubDate>
      <link>https://forem.com/krish2305/instance-segmentation-with-mask-r-cnn-resnet-50-fpn-using-detectron2-3691</link>
      <guid>https://forem.com/krish2305/instance-segmentation-with-mask-r-cnn-resnet-50-fpn-using-detectron2-3691</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5merdp532b3atjo6vv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5merdp532b3atjo6vv1.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;# 🖼️ Instance Segmentation with Mask R-CNN (ResNet-50 + FPN) using Detectron2&lt;/p&gt;

&lt;p&gt;Today, I successfully ran an instance segmentation model using &lt;strong&gt;Mask R-CNN&lt;/strong&gt; with the &lt;code&gt;ResNet-50&lt;/code&gt; backbone and &lt;strong&gt;Feature Pyramid Network (FPN)&lt;/strong&gt;, based on the config file:&lt;br&gt;&lt;br&gt;
&lt;code&gt;mask_rcnn_R_50_FPN_3x.yaml&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔍 Model Architecture Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ResNet-50&lt;/strong&gt;: Backbone network to extract rich feature representations from the image.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FPN (Feature Pyramid Network)&lt;/strong&gt;: Improves feature maps at multiple scales for better detection of small and large objects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mask R-CNN&lt;/strong&gt;: Builds on top of Faster R-CNN by adding a segmentation branch to predict masks at the pixel level.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ✅ Key Learnings &amp;amp; Workflow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Understood how to use and modify model config files in Detectron2.&lt;/li&gt;
&lt;li&gt;Explored the model loading process from pretrained checkpoints.&lt;/li&gt;
&lt;li&gt;Ran inference successfully on a sample input and verified the output.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>python</category>
      <category>career</category>
    </item>
    <item>
      <title>Learning MLOps by Building a Real-World Salary Prediction Pipeline (MLflow + FastAPI + Docker)</title>
      <dc:creator>Krishnan R</dc:creator>
      <pubDate>Mon, 14 Jul 2025 16:58:08 +0000</pubDate>
      <link>https://forem.com/krish2305/learning-mlops-by-building-a-real-world-salary-prediction-pipeline-mlflow-fastapi-docker-3dn1</link>
      <guid>https://forem.com/krish2305/learning-mlops-by-building-a-real-world-salary-prediction-pipeline-mlflow-fastapi-docker-3dn1</guid>
      <description>&lt;h1&gt;
  
  
  🚀 Hi, I'm Building a Real-World Salary Prediction MLOps Pipeline with MLflow
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Learn experiment tracking, model versioning, and deployment by building an ML pipeline that predicts employee salaries.&lt;br&gt;&lt;br&gt;
This project is the foundation of my &lt;strong&gt;15-day MLOps learning sprint&lt;/strong&gt; — and I’m building it &lt;strong&gt;in public&lt;/strong&gt; on LinkedIn &amp;amp; GitHub!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  👋 Why I Built This
&lt;/h2&gt;

&lt;p&gt;I wanted to learn MLOps practically, so I decided to build a real, end-to-end project that involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Tracking models &amp;amp; metrics with &lt;strong&gt;MLflow&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;✅ Serving predictions via &lt;strong&gt;FastAPI&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;✅ Adding a UI using &lt;strong&gt;Streamlit&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;✅ Preparing for production with &lt;strong&gt;Docker &amp;amp; CI/CD&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;✅ Building in public to grow with the community
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I chose a practical use case — &lt;strong&gt;predicting employee salaries&lt;/strong&gt; — and turned it into a full &lt;strong&gt;MLOps pipeline&lt;/strong&gt; that simulates real-world AI workflows.&lt;/p&gt;
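
&lt;p&gt;As a minimal sketch of the model at the core of such a pipeline (synthetic data and scikit-learn's &lt;code&gt;LinearRegression&lt;/code&gt;; the features and coefficients here are made up for illustration, not the project's real dataset):&lt;/p&gt;

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data: salary grows roughly linearly with years of experience
rng = np.random.default_rng(0)
years = rng.uniform(0, 20, size=(200, 1))
salary = 30_000 + 4_000 * years[:, 0] + rng.normal(0, 2_000, size=200)

X_train, X_test, y_train, y_test = train_test_split(years, salary, random_state=0)
model = LinearRegression().fit(X_train, y_train)
r2 = model.score(X_test, y_test)  # held-out R^2, the metric worth tracking per run
```

&lt;p&gt;The model itself is deliberately simple — the point of the project is everything around it: tracking, serving, packaging, and automation.&lt;/p&gt;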




&lt;p&gt;This project kicks off my &lt;strong&gt;15-day hands-on MLOps learning challenge&lt;/strong&gt;, where I’ll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build and track experiments with MLflow&lt;/li&gt;
&lt;li&gt;Serve the model via an API&lt;/li&gt;
&lt;li&gt;Add a clean UI for users&lt;/li&gt;
&lt;li&gt;Package and deploy the app to the cloud&lt;/li&gt;
&lt;li&gt;Learn CI/CD automation for model updates&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  💼 &lt;a href="https://www.linkedin.com/in/krish2305/?originalSubdomain=in" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;If you're learning MLOps too — &lt;strong&gt;join me&lt;/strong&gt;! Let’s grow together.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>docker</category>
      <category>learning</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
