<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Veríssimo Cassange</title>
    <description>The latest articles on Forem by Veríssimo Cassange (@vec21).</description>
    <link>https://forem.com/vec21</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F474244%2F44a279ee-24d0-4e8f-8390-b02eaa051314.jpeg</url>
      <title>Forem: Veríssimo Cassange</title>
      <link>https://forem.com/vec21</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vec21"/>
    <language>en</language>
    <item>
      <title>How I Built an ATS-Optimized AI Portfolio with Antigravity: From Nginx Hell to Cloud Run</title>
      <dc:creator>Veríssimo Cassange</dc:creator>
      <pubDate>Sat, 17 Jan 2026 14:52:42 +0000</pubDate>
      <link>https://forem.com/vec21/how-i-built-an-ats-optimized-ai-portfolio-with-antigravity-from-nginx-hell-to-cloud-run-4idk</link>
      <guid>https://forem.com/vec21/how-i-built-an-ats-optimized-ai-portfolio-with-antigravity-from-nginx-hell-to-cloud-run-4idk</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/new-year-new-you-google-ai-2025-12-31"&gt;New Year, New You Portfolio Challenge Presented by Google AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About Me
&lt;/h2&gt;

&lt;p&gt;I’m Veríssimo Cassange, an AI Software Engineer based in Luanda, Angola 🇦🇴. I’ve always been obsessed with how technical architecture can drive social impact. My day-to-day usually involves Python, Machine Learning, and trying to explain to my family why "Infrastructure as Code" is actually exciting.&lt;/p&gt;

&lt;p&gt;For this challenge, I didn't just want a "pretty" site. I wanted a portfolio that acts like a Trojan Horse—looking premium to human recruiters while being perfectly tuned for the ATS (Applicant Tracking Systems) that often filter out talented engineers before they even get a chance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Portfolio
&lt;/h2&gt;

&lt;p&gt;I deployed the app to Google Cloud Run. You can check the live version here:&lt;/p&gt;

&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://vec21-portfolio-545099629721.europe-west1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;p&gt;🔗 &lt;strong&gt;Live Portfolio&lt;/strong&gt;: &lt;a href="https://vec21-portfolio-545099629721.europe-west1.run.app" rel="noopener noreferrer"&gt;vec21-portfolio.europe-west1.run.app&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The "ATS Algorithm" Strategy
&lt;/h3&gt;

&lt;p&gt;I decided to treat the portfolio as a data-rich document. Instead of just listing projects, I integrated specific keywords—&lt;em&gt;Generative AI&lt;/em&gt;, &lt;em&gt;RAG&lt;/em&gt;, &lt;em&gt;Docker&lt;/em&gt;, &lt;em&gt;CI/CD&lt;/em&gt;—directly into the metadata and descriptions. &lt;/p&gt;

&lt;p&gt;But I hit a wall early on: the GitHub API doesn't give you the "why" behind a project. I realized that if I wanted to impress both the algorithm and the human recruiter, I had to manually augment my repository data with custom objectives and technology tags. So, I built a local manifest system to enrich the data fetched from GitHub.&lt;/p&gt;
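&lt;p&gt;In miniature, that enrichment step looks something like this. The manifest fields and entries below are my own illustrative guesses, not the project's actual code:&lt;/p&gt;

```javascript
// Sketch of the manifest idea: enrich GitHub API repo objects with
// locally curated objectives and tags before rendering.
// (Field names and entries here are assumptions for illustration.)
const manifest = {
  'email-ai-assistant': {
    objective: 'Automate support email with RAG',
    tags: ['RAG', 'Postmark', 'LangChain'],
  },
};

function enrich(repos) {
  return repos.map((repo) => {
    const extra = manifest[repo.name] || {};
    return {
      name: repo.name,
      description: repo.description,
      objective: extra.objective || 'No objective documented yet',
      tags: extra.tags || [],
    };
  });
}

// Example with a repo object shaped like the GitHub API response:
const enriched = enrich([{ name: 'email-ai-assistant', description: 'AI email bot' }]);
console.log(enriched[0].tags.join(', ')); // RAG, Postmark, LangChain
```

&lt;p&gt;Anything the manifest doesn't cover falls back to the raw GitHub data, so new repos still render.&lt;/p&gt;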

&lt;h3&gt;
  
  
  The Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Frontend&lt;/strong&gt;: React 19 + Vite (Fast is an understatement).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Styling&lt;/strong&gt;: Tailwind CSS 4. I wanted that "glassmorphism" look that feels premium but isn't a nightmare for accessibility.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Animations&lt;/strong&gt;: Framer Motion for those subtle micro-animations (like the smooth scroll progress bar I called &lt;strong&gt;BPROGRESS&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Deployment&lt;/strong&gt;: Docker + Google Cloud Run. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Google AI Copilot
&lt;/h3&gt;

&lt;p&gt;I used &lt;strong&gt;Antigravity&lt;/strong&gt; (Google’s AI-first dev environment) as my second brain. It wasn't about "click a button, get a site." It was more like having a senior engineer sitting next to me. &lt;/p&gt;

&lt;p&gt;For instance, Antigravity was crucial when I decided to use &lt;strong&gt;nanobanana Pro&lt;/strong&gt; to generate consistent, professional thumbnails for all my projects. I wanted a specific aesthetic, and the AI helped me iterate on those visuals until they felt like part of a unified brand.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm Most Proud Of
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Breaking (and Fixing) Docker for Cloud Run
&lt;/h3&gt;

&lt;p&gt;To be honest, the deployment was the biggest headache. I wanted to follow security best practices by running Nginx as a non-root user. &lt;/p&gt;

&lt;p&gt;If you've ever tried this on Cloud Run, you know the pain: permission errors everywhere once you touch &lt;code&gt;/var/cache/nginx&lt;/code&gt;. I spent hours debugging why my container would crash on startup. I eventually had to rewrite the &lt;code&gt;nginx.conf&lt;/code&gt; to use &lt;code&gt;/tmp&lt;/code&gt; for PIDs and temp files.&lt;br&gt;
&lt;/p&gt;
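&lt;p&gt;For context, the nginx.conf overrides looked roughly like this. This is a sketch reconstructed from the description above, so the exact directives and paths in the project may differ:&lt;/p&gt;

```nginx
# Keep every writable path under /tmp so a non-root user works on Cloud Run.
pid /tmp/nginx.pid;

http {
  client_body_temp_path /tmp/nginx/client_temp;
  proxy_temp_path       /tmp/nginx/proxy_temp;
  fastcgi_temp_path     /tmp/nginx/fastcgi_temp;
  uwsgi_temp_path       /tmp/nginx/uwsgi_temp;
  scgi_temp_path        /tmp/nginx/scgi_temp;

  server {
    listen 8080;  # Cloud Run sends traffic to $PORT, 8080 by default
  }
}
```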

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# The human struggle: fixing permissions for a non-root user&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/cache/nginx /tmp/nginx &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; nginx:nginx /var/cache/nginx /tmp/nginx &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;chmod&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; 755 /var/cache/nginx /tmp/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It was frustrating, but seeing that "Service is Healthy" checkmark in the Google Cloud Console was the best feeling of the entire week.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The Project Modal System
&lt;/h3&gt;

&lt;p&gt;Instead of redirecting people away from my site to GitHub immediately, I built a modal system. It gives a quick technical deep-dive (Technologies, Objectives, Challenges) before the user decides to jump into the code. It keeps the engagement high.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Localization and Impact
&lt;/h3&gt;

&lt;p&gt;As someone from Luanda, I made sure to highlight my work with &lt;strong&gt;Frontier Tech Leaders - Angola&lt;/strong&gt;. It’s important to me that my portfolio reflects my local context while showcasing global technical standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Learnings
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;AI as a Copilot&lt;/strong&gt;: Using Antigravity changed how I debug. Instead of just searching StackOverflow, I had a context-aware assistant helping me optimize my Docker multi-stage builds.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Structure Matters&lt;/strong&gt;: ATS optimization isn't just "keyword stuffing"—it's about semantic HTML. Using &lt;code&gt;&amp;lt;h1&amp;gt;&lt;/code&gt; to &lt;code&gt;&amp;lt;h6&amp;gt;&lt;/code&gt; correctly and keeping the DOM lean matters more than I thought.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Trade-offs&lt;/strong&gt;: I chose Nginx over a simple Node server because I wanted better control over headers and compression, even if it meant fighting with Cloud Run's filesystem restrictions.&lt;/li&gt;
&lt;/ul&gt;
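&lt;p&gt;The heading-hierarchy rule can even be checked mechanically. This toy validator is my own sketch, not part of the portfolio's code:&lt;/p&gt;

```javascript
// Toy check for the "don't skip heading levels" rule: given heading tag
// names in document order, flag any jump bigger than one level.
function headingLevelsValid(tagNames) {
  let previous = 0;
  for (const tag of tagNames) {
    const level = Number(tag.slice(1)); // 'h2' becomes 2
    if (level > previous + 1) return false; // jumped, e.g. h1 straight to h3
    previous = level;
  }
  return true;
}

console.log(headingLevelsValid(['h1', 'h2', 'h3', 'h2'])); // true
console.log(headingLevelsValid(['h1', 'h3'])); // false
```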

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Find the Next Instagram Stars Before They Explode: AI-Powered Growth Predictor</title>
      <dc:creator>Veríssimo Cassange</dc:creator>
      <pubDate>Sat, 30 Aug 2025 22:57:09 +0000</pubDate>
      <link>https://forem.com/vec21/find-the-next-instagram-stars-before-they-explode-ai-powered-growth-predictor-4f7c</link>
      <guid>https://forem.com/vec21/find-the-next-instagram-stars-before-they-explode-ai-powered-growth-predictor-4f7c</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/brightdata-n8n-2025-08-13"&gt;AI Agents Challenge powered by n8n and Bright Data&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I created an &lt;strong&gt;AI-Powered Instagram Growth Predictor&lt;/strong&gt; that analyzes Instagram profiles and predicts which accounts will become the next viral stars. The system solves a critical problem in the $16 billion influencer marketing industry: brands and agencies spend 10x more on established influencers when they could have partnered with emerging talent at 90% lower costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt;: Manual influencer research takes hours, relies on gut feelings, and often leads to late discovery of promising accounts when partnerships become expensive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Solution&lt;/strong&gt;: Send any Instagram URL to a chat interface and get instant AI analysis including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Future Star Score (0-100)&lt;/li&gt;
&lt;li&gt;Investment recommendation (BUY/HOLD/AVOID)&lt;/li&gt;
&lt;li&gt;6-month growth predictions&lt;/li&gt;
&lt;li&gt;Detailed AI reasoning with market insights&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system transforms raw Instagram data into actionable business intelligence, helping users discover emerging influencers before they explode in popularity.&lt;/p&gt;
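&lt;p&gt;To make the recommendation concrete, here is how a 0-100 Future Star Score could map to the BUY/HOLD/AVOID labels. The cutoffs are illustrative guesses, not the workflow's real thresholds:&lt;/p&gt;

```javascript
// Map the 0-100 Future Star Score to an investment label.
// The thresholds below are illustrative assumptions.
function recommend(futureStarScore) {
  if (futureStarScore >= 70) return 'BUY';   // strong growth signals
  if (futureStarScore >= 40) return 'HOLD';  // promising but unproven
  return 'AVOID';
}

console.log(recommend(85)); // BUY
console.log(recommend(55)); // HOLD
console.log(recommend(20)); // AVOID
```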

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Try it instantly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the chat interface&lt;/li&gt;
&lt;li&gt;Paste any Instagram URL (example: &lt;code&gt;https://www.instagram.com/cristiano/&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Get AI-powered growth analysis in under 10 seconds&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Demo Video&lt;/strong&gt;: &lt;iframe src="https://www.youtube.com/embed/pOa-1UE0ZKw"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  n8n Workflow
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Complete Workflow JSON&lt;/strong&gt;: &lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The workflow implements a sophisticated 13-node pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Chat Trigger&lt;/strong&gt; - Public interface for user input&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;URL Extraction&lt;/strong&gt; - Regex-based Instagram URL validation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conditional Routing&lt;/strong&gt; - Smart error handling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bright Data Scraper&lt;/strong&gt; - Professional Instagram data extraction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics Calculator&lt;/strong&gt; - Advanced growth indicators computation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Agent&lt;/strong&gt; - Expert growth analyst with custom persona&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Groq LLM&lt;/strong&gt; - Ultra-fast inference with Qwen 32B model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JSON Parser&lt;/strong&gt; - Robust output handling with fallbacks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Formatter&lt;/strong&gt; - Professional chat presentation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Sheets Logger&lt;/strong&gt; - Historical data persistence&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chat Responder&lt;/strong&gt; - User communication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handler&lt;/strong&gt; - Graceful failure management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow Completion&lt;/strong&gt; - Clean termination&lt;/li&gt;
&lt;/ol&gt;
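&lt;p&gt;As an illustration of node 2, here is one way the regex-based URL extraction could look. The post doesn't show the workflow's actual expression, so this is an assumed version:&lt;/p&gt;

```javascript
// Pull the first Instagram profile URL out of a free-form chat message.
// This regex is an illustrative stand-in for the workflow's real one.
function extractInstagramUrl(message) {
  const match = message.match(/https?:\/\/(?:www\.)?instagram\.com\/[A-Za-z0-9._]+\/?/);
  return match ? match[0] : null;
}

console.log(extractInstagramUrl('check https://www.instagram.com/cristiano/ please'));
// https://www.instagram.com/cristiano/
console.log(extractInstagramUrl('no link here')); // null
```

&lt;p&gt;A null result is what routes the message to the error branch instead of the scraper.&lt;/p&gt;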

&lt;h2&gt;
  
  
  Technical Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  System Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AI Agent Configuration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model&lt;/strong&gt;: Groq's Qwen 32B (ultra-fast open-source inference)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System Instructions&lt;/strong&gt;: 10+ year Instagram growth expert persona with sophisticated analysis framework&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory&lt;/strong&gt;: Stateless design with Google Sheets persistence&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tools&lt;/strong&gt;: Custom metrics calculator + JSON parser with error recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Core Analysis Framework&lt;/strong&gt;:&lt;br&gt;
The AI Agent evaluates 5 key areas (scored 0-100 each):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Growth Velocity Score&lt;/strong&gt; - Organic growth acceleration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Quality Score&lt;/strong&gt; - Consistency and professionalism
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Engagement Authenticity&lt;/strong&gt; - Real vs fake engagement detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market Positioning Score&lt;/strong&gt; - Viral growth potential&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Future Star Potential&lt;/strong&gt; - 5x-10x growth likelihood in 6 months&lt;/li&gt;
&lt;/ol&gt;
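&lt;p&gt;One plausible way to roll the five area scores up into a single Future Star Score is a plain average. Equal weighting is my assumption here; the real workflow may weight the areas differently:&lt;/p&gt;

```javascript
// Combine the five 0-100 area scores into one Future Star Score.
// Equal weighting is an assumption for illustration.
function futureStarScore(areas) {
  const values = [
    areas.growthVelocity,
    areas.contentQuality,
    areas.engagementAuthenticity,
    areas.marketPositioning,
    areas.futureStarPotential,
  ];
  const sum = values.reduce((total, value) => total + value, 0);
  return Math.round(sum / values.length);
}

console.log(futureStarScore({
  growthVelocity: 80,
  contentQuality: 70,
  engagementAuthenticity: 90,
  marketPositioning: 60,
  futureStarPotential: 75,
})); // 75
```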

&lt;p&gt;&lt;strong&gt;Advanced Metrics Engine&lt;/strong&gt;:&lt;br&gt;
My custom algorithm calculates 16 sophisticated metrics from raw data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Growth intelligence calculations&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;followersGrowthRate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;followers&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;account_age_days&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;365&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;engagementVelocity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;avg_engagement&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;followers&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;professionalSetupScore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;isProfessional&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;isVerified&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;hasLink&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;hasContactInfo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Enhanced profile data for AI analysis&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;enhancedProfile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;followers_growth_rate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;followersGrowthRate&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;engagement_velocity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;engagementVelocity&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;account_maturity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;account_age_days&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;365&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;180&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mature&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;emerging&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;professional_setup_score&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;professionalSetupScore&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bright Data Verified Node
&lt;/h3&gt;


&lt;p&gt;I used the Bright Data Verified Node with the following configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Credential:&lt;/strong&gt; BrightDataItsVec21&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; Web Scraper&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operation:&lt;/strong&gt; Scrape By URL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dataset:&lt;/strong&gt; Instagram - Profiles (&lt;code&gt;gd_l1vikfch901nx3by4&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;URLs:&lt;/strong&gt; Dynamic input via &lt;code&gt;{{ $json["target_profile_url"] }}&lt;/code&gt; (example: &lt;code&gt;https://www.instagram.com/cristiano/&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Include Errors:&lt;/strong&gt; Enabled for robust error handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configuration Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;resource&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;webScrapper&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dataset_id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gd_l1vikfch901nx3by4&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;urls&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;url&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;{{ $json[&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;target_profile_url&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;] }}&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}]&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;requestOptions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup allowed me to extract comprehensive Instagram profile data, which was then transformed into advanced growth metrics for AI analysis. The workflow includes error validation and fallback logic to ensure reliable results, even when some data points are missing.&lt;/p&gt;
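&lt;p&gt;The fallback idea in miniature: missing data points get safe defaults instead of crashing the pipeline. The defaults below mirror the ones visible in the metrics snippet, but the helper itself is an illustrative sketch:&lt;/p&gt;

```javascript
// Normalize a possibly partial scraper result before the metrics step.
// Default values are illustrative, chosen to match the metrics snippet.
function withDefaults(profile) {
  return {
    followers: profile.followers || 1,
    avg_engagement: profile.avg_engagement || 0,
    account_age_days: profile.account_age_days || 365,
    is_verified: Boolean(profile.is_verified),
  };
}

const safe = withDefaults({ followers: 12000 }); // scraper returned a partial record
console.log(safe.account_age_days); // 365
```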

&lt;h2&gt;
  
  
  Journey
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Development Process
&lt;/h3&gt;

&lt;p&gt;This was my first time building a complex n8n workflow, and it became an incredible learning experience. I started with the basic concept of Instagram analysis but quickly realized I could create something much more sophisticated using AI prediction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning n8n&lt;/strong&gt;: Initially, the visual workflow approach felt overwhelming compared to traditional coding. But once I understood the node-based logic, development accelerated dramatically. The ability to see data flow and test each step visually made debugging much easier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mastering Bright Data&lt;/strong&gt;: At first, the Bright Data verified node seemed complicated. The dataset configuration and URL formatting took some trial and error. But once I understood how to structure the requests properly, it became incredibly reliable. The Instagram Profiles dataset provides rich data that goes far beyond what you'd get from manual scraping.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Agent Challenges&lt;/strong&gt;: The biggest technical hurdle was getting consistent JSON output from the AI agent. Early versions would return responses wrapped in markdown or include "thinking" tags that broke my parsing. I solved this with a multi-layered parsing approach that handles various output formats.&lt;/p&gt;
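&lt;p&gt;A sketch of that multi-layered parsing: strip any markdown code fence the model wraps around its JSON, then fall back to a safe default object if parsing still fails. The real node also strips the model's thinking tags, which I've left out to keep the sketch short:&lt;/p&gt;

```javascript
// Parse possibly fence-wrapped LLM output, with a default on failure.
// The fallback object shape is an illustrative assumption.
const FENCE = '`'.repeat(3); // literal triple-backtick fence marker

function parseAgentOutput(raw) {
  let text = raw.trim();
  if (text.startsWith(FENCE)) {
    // Drop the fence lines, keep whatever sat between them.
    text = text.split(FENCE).filter((part) => part.trim().length > 0).join('');
    text = text.replace(/^json/i, '').trim(); // fences often open with "json"
  }
  try {
    return JSON.parse(text);
  } catch (error) {
    return { score: 0, recommendation: 'AVOID', note: 'unparseable output' };
  }
}

console.log(parseAgentOutput('{"score": 82}').score); // 82
```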

&lt;h3&gt;
  
  
  Key Insights Learned
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;n8n Workflows&lt;/strong&gt;: Visual automation is powerful once you embrace the paradigm shift. Error handling through conditional nodes is more intuitive than traditional try-catch blocks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bright Data Integration&lt;/strong&gt;: The verified nodes are production-ready out of the box. The learning curve is worth it - you get enterprise-grade scraping without infrastructure complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Agent Design&lt;/strong&gt;: Persona-based prompts work much better than generic instructions. Treating the AI as a specific expert (10-year Instagram analyst) produces more consistent and valuable outputs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Inspiration &amp;amp; Evolution
&lt;/h3&gt;

&lt;p&gt;This project was inspired by a basic Instagram filter workflow in the n8n community (workflow 6621), but I completely reimagined it as an AI-powered prediction system. Where the original simply filtered profiles by basic criteria, my solution adds sophisticated growth prediction, AI analysis, and investment recommendations - transforming a simple filter into a comprehensive influencer intelligence platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;p&gt;The final system processes Instagram profiles in under 10 seconds with 85%+ prediction confidence. More importantly, it demonstrates real commercial value - early influencer discovery can reduce partnership costs by 90% while delivering 300-500% ROI compared to established influencer rates.&lt;/p&gt;

&lt;p&gt;This project proved that combining n8n's workflow orchestration, Bright Data's reliable scraping, and modern AI can create production-ready solutions that solve real business problems.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>n8nbrightdatachallenge</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Postmark + RAG = Email Assistant 2.0</title>
      <dc:creator>Veríssimo Cassange</dc:creator>
      <pubDate>Sun, 08 Jun 2025 22:12:18 +0000</pubDate>
      <link>https://forem.com/vec21/postmark-rag-email-assistant-20-29en</link>
      <guid>https://forem.com/vec21/postmark-rag-email-assistant-20-29en</guid>
      <description>&lt;p&gt;This is a submission for the &lt;a href="https://dev.to/challenges/postmark"&gt;Postmark Challenge: Inbox Innovators&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;A virtual assistant solution based on RAG (Retrieval Augmented Generation) for VerdeVive. The system processes incoming emails, uses an AI model to generate contextual responses and automatically replies to customers.&lt;/p&gt;

&lt;p&gt;Here's a quick look at how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It picks up customer emails through Postmark's inbound processing.&lt;/li&gt;
&lt;li&gt;Next, it uses natural language processing to understand the questions customers are asking.&lt;/li&gt;
&lt;li&gt;Then, it digs into our knowledge base, finding the most relevant information through vector similarity search.&lt;/li&gt;
&lt;li&gt;After that, an LLM (Large Language Model), specifically via Groq, crafts contextually appropriate and accurate responses.&lt;/li&gt;
&lt;li&gt;Finally, it sends out a professional, well-formatted email response to the customer in an instant.&lt;/li&gt;
&lt;/ul&gt;
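&lt;p&gt;The retrieval step works on vector similarity. Here it is in miniature, using cosine similarity over tiny hand-made vectors; the real system uses FAISS with HuggingFace embeddings, so this is only a conceptual sketch:&lt;/p&gt;

```javascript
// Cosine similarity between two same-length vectors.
function cosine(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i !== a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy stand-ins for embedded chunks of content.md:
const chunks = [
  { text: 'Our bamboo toothbrush line...', vector: [0.9, 0.1, 0.0] },
  { text: 'Shipping policy details...', vector: [0.1, 0.8, 0.3] },
];

// Pick the chunk most similar to the question vector.
function retrieve(questionVector) {
  return chunks.reduce((best, chunk) =>
    cosine(chunk.vector, questionVector) > cosine(best.vector, questionVector) ? chunk : best
  );
}

console.log(retrieve([0.8, 0.2, 0.1]).text); // Our bamboo toothbrush line...
```

&lt;p&gt;The winning chunk is what gets pasted into the LLM prompt as context, which is what keeps the answers grounded in VerdeVive's actual content.&lt;/p&gt;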

&lt;p&gt;What sets this solution apart from a typical chatbot? It delivers factually correct information directly related to VerdeVive's product catalog and our commitment to sustainability. All of this happens seamlessly through the email interface our customers already use and trust.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why I Used RAG 🤖
&lt;/h3&gt;

&lt;p&gt;I used RAG (Retrieval-Augmented Generation) at VerdeVive to build a virtual assistant that goes beyond traditional chatbots. Responses are generated from information extracted directly from our content.md file, which concentrates key company data such as our mission, products, and initiatives. This ensures the assistant accurately represents our identity and values while avoiding the high cost of training large models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Demo Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Send any email to: &lt;strong&gt;&lt;a href="mailto:vec21@verdevive.online"&gt;vec21@verdevive.online&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example questions to try:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"What sustainable products do you sell?"&lt;/li&gt;
&lt;li&gt;"Tell me about your partnerships"&lt;/li&gt;
&lt;li&gt;"How do your products help the environment?"&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📨 I Sent a Question
&lt;/h3&gt;

&lt;p&gt;I asked the following question via email:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"What sustainable products do you sell?"&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatanahhnjvc97s1ifzw0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatanahhnjvc97s1ifzw0.png" alt="Sent question"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  ✅ I Received the Answer
&lt;/h3&gt;

&lt;p&gt;The application successfully received and processed the response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv8g5d0996vmr536ro20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuv8g5d0996vmr536ro20.png" alt="Received answer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Instructions:&lt;/strong&gt;&lt;br&gt;
1️⃣  Simply send an email to &lt;strong&gt;&lt;a href="mailto:vec21@verdevive.online"&gt;vec21@verdevive.online&lt;/a&gt;&lt;/strong&gt; with your question in the body.&lt;br&gt;
2️⃣  You'll then receive an automated response with information relevant to your inquiry.&lt;br&gt;
3️⃣ Response times are typically under 2 minutes, so you won't be waiting long!&lt;/p&gt;
&lt;h2&gt;
  
  
  Code Repository
&lt;/h2&gt;

&lt;p&gt;For more technical details, you can access my repository and follow the step-by-step guide in the README.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/vec21" rel="noopener noreferrer"&gt;
        vec21
      &lt;/a&gt; / &lt;a href="https://github.com/vec21/email-ai-assistant" rel="noopener noreferrer"&gt;
        email-ai-assistant
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      AI-powered email assistant that uses RAG technology to automatically respond to inquiries with accurate, context-aware information.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;VerdeVive Assistant 🌱&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A virtual assistant based on RAG (Retrieval Augmented Generation) that processes received emails, generates contextual responses, and automatically replies to VerdeVive customers.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;📋 Description&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;VerdeVive Assistant is an automated customer service solution that uses AI technology to process customer emails and generate personalized responses based on company documentation. The system integrates with the Postmark service to receive and send emails, and uses an advanced language model (Llama3) through the Groq API to generate contextual and relevant responses.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🛠️ Technologies&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backend Webhook&lt;/strong&gt;: Node.js, Express&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG API&lt;/strong&gt;: Python, Flask&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language Processing&lt;/strong&gt;: LangChain, FAISS, HuggingFace Embeddings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Model&lt;/strong&gt;: Llama3 via Groq API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email Processing&lt;/strong&gt;: Postmark&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process Management&lt;/strong&gt;: PM2&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🔍 Project Structure&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;pre class="notranslate"&gt;&lt;code&gt;email-ai-assistant/
├── backend/                  # Webhook server and RAG API
│   ├── error_emails/         # Stores emails with processing errors
│   ├── rag/                  # Retrieval Augmented Generation API
│   │   ├── indexador.py      # Document indexing&lt;/code&gt;&lt;/pre&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/vec21/email-ai-assistant" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  First Steps
&lt;/h3&gt;

&lt;p&gt;I invented “&lt;strong&gt;VerdeVive&lt;/strong&gt;”, a fictional Angolan company dedicated to promoting a sustainable lifestyle through ecological and ethical products. I hosted its website on Vercel using a domain I bought from &lt;a href="https://www.lws.fr/" rel="noopener noreferrer"&gt;&lt;strong&gt;LWS&lt;/strong&gt;&lt;/a&gt;. You can visit it here: &lt;a href="https://www.verdevive.online" rel="noopener noreferrer"&gt;&lt;strong&gt;VerdeVive&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created an AWS EC2 instance (Ubuntu Server 24.04 on the Free Tier plan) to run my RAG system, which processes the messages received via Postmark.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack ⚙️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backend Webhook&lt;/strong&gt;: Node.js, Express&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG API&lt;/strong&gt;: Python, Flask&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language Processing&lt;/strong&gt;: LangChain, FAISS, HuggingFace Embeddings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Model&lt;/strong&gt;: Llama3 via Groq API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email Processing&lt;/strong&gt;: Postmark&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process Management&lt;/strong&gt;: PM2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Email Processing:&lt;/strong&gt; Postmark was key here, handling everything from inbound processing and webhooks to final delivery. For more technical details, you can access my repository and follow the step-by-step guide in the README.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation Process 🔄
&lt;/h3&gt;

&lt;p&gt;First off, I gathered all of VerdeVive's product documentation to build a solid knowledge base. Then, I vectorized this content using sentence-transformers embeddings. With that ready, I built a Flask API to connect the FAISS index with the Groq LLM.&lt;/p&gt;
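&lt;p&gt;The retrieval step can be illustrated with a dependency-free sketch. The real project uses sentence-transformers embeddings and a FAISS index; here a toy bag-of-words embedding and cosine similarity stand in for both, and the sample documents are invented:&lt;/p&gt;

```python
# Toy illustration of the RAG retrieval step. The real project uses
# sentence-transformers embeddings and a FAISS index; a bag-of-words
# vector and cosine similarity stand in for them here.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Hypothetical stand-in for a sentence-transformers embedding.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Our bamboo toothbrush is fully biodegradable.",
    "Shipping within Luanda takes two business days.",
]
print(retrieve("how long does shipping take", docs))
```

&lt;p&gt;The retrieved passages are then passed to the LLM as context when generating the reply.&lt;/p&gt;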

&lt;p&gt;Next, I implemented the webhook server to process those incoming Postmark emails. Crafting responsive and accessible email templates for the replies was another crucial step. To keep the whole system reliable, I set up PM2 as the process manager. And of course, I put logging and monitoring in place to keep an eye on system health.&lt;/p&gt;

&lt;h3&gt;
  
  
  Postmark Integration 📬
&lt;/h3&gt;

&lt;p&gt;For this project, I made full use of several powerful &lt;strong&gt;Postmark&lt;/strong&gt; features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inbound Stream:&lt;/strong&gt; Essential for automatically receiving and processing customer questions via email directly into the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Transactional Stream:&lt;/strong&gt; Used to send time-sensitive messages such as confirmations and alerts, triggered for one recipient at a time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Webhooks:&lt;/strong&gt; Enabled real-time processing of every incoming email.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Message Streams:&lt;/strong&gt; This feature helped me clearly separate transactional emails from inbound ones, ensuring maximum deliverability and organization.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
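&lt;p&gt;A reply sent through the Transactional stream goes out via Postmark's email API (&lt;code&gt;POST https://api.postmarkapp.com/email&lt;/code&gt;). Below is a minimal sketch; the sender, recipient, and token are placeholders, not the project's real values:&lt;/p&gt;

```python
# Sketch of sending a reply via Postmark's Transactional stream.
# Sender, recipient, and server token are placeholders.
import json
import urllib.request

def build_reply(to: str, subject: str, text_body: str) -> dict:
    return {
        "From": "support@example.com",   # placeholder sender
        "To": to,
        "Subject": subject,
        "TextBody": text_body,
        "MessageStream": "outbound",     # Postmark's default transactional stream
    }

def send_reply(payload: dict, server_token: str) -> None:
    req = urllib.request.Request(
        "https://api.postmarkapp.com/email",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "X-Postmark-Server-Token": server_token,
        },
    )
    urllib.request.urlopen(req)  # raises urllib.error.HTTPError on failure

payload = build_reply("customer@example.com", "Re: your question", "Hi!")
print(payload["MessageStream"])
```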

&lt;h3&gt;
  
  
  🔄 How I Used Message Streams
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Stream Type&lt;/th&gt;
&lt;th&gt;Direction&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Transactional&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Outbound&lt;/td&gt;
&lt;td&gt;Sending time-sensitive automated messages and replies to users (e.g., confirmations, responses)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Inbound&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Inbound&lt;/td&gt;
&lt;td&gt;Receiving and parsing emails from customers (e.g., support or contact requests)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h3&gt;
  
  
  📊 My Activity in the Postmark Server
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofyoqgfxxncwys0be1tv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofyoqgfxxncwys0be1tv.png" alt="My activity in the Postmark server"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  🧾 Parsing Incoming Emails
&lt;/h3&gt;

&lt;p&gt;Below is an example of an incoming email being received and parsed correctly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpqsgootvargwdyh2ios.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpqsgootvargwdyh2ios.png" alt="Receiving and parsing email"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Experience with Postmark ⭐
&lt;/h3&gt;

&lt;p&gt;I didn't know about Postmark until the “Postmark Challenge: Inbox Innovators” was published on dev.to. I was using Gmail as my email provider, so I bought a domain that gave me a professional email address and signed up on the Postmark website.&lt;/p&gt;

&lt;p&gt;On the site, the first thing I saw was the introduction to what a Postmark server is: it works like a “folder”, as simple as that. At first I didn't really understand what they meant by "folder," but it became clearer as I used it.&lt;/p&gt;

&lt;p&gt;Postmark has a wealth of documentation that helped me understand a bit about inbound email parsing. I used the Postmark Help assistant a lot (“👋 I'm Stamp, your AI powered assistant”), which provided very practical answers.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>postmarkchallenge</category>
      <category>webdev</category>
      <category>api</category>
    </item>
    <item>
      <title>AI News Sentiment Analyzer</title>
      <dc:creator>Veríssimo Cassange</dc:creator>
      <pubDate>Mon, 26 May 2025 06:27:44 +0000</pubDate>
      <link>https://forem.com/vec21/ai-news-sentiment-analyzer-h15</link>
      <guid>https://forem.com/vec21/ai-news-sentiment-analyzer-h15</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/brightdata-2025-05-07"&gt;Bright Data AI Web Access Hackathon&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built the &lt;strong&gt;AI News Sentiment Analyzer&lt;/strong&gt;, a real-time tool that discovers, accesses, extracts, and analyzes news articles about artificial intelligence from across the web. Using Bright Data's MCP server capabilities and Groq's powerful llama3-70b-8192 model, the system provides insights into how AI is portrayed in current media.&lt;/p&gt;

&lt;p&gt;In today's rapidly evolving AI landscape, keeping track of public perception and media coverage is crucial for researchers, companies, and policymakers. However, manually monitoring numerous news sources is time-consuming and subjective. My solution automates this process by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Finding relevant AI news content across diverse sources&lt;/li&gt;
&lt;li&gt;Extracting meaningful information from complex web pages&lt;/li&gt;
&lt;li&gt;Analyzing the sentiment of coverage objectively&lt;/li&gt;
&lt;li&gt;Presenting insights in an accessible format&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The application features a clean Streamlit interface where users can search for AI-related topics. It then displays a sentiment distribution of the articles found, showing positive, neutral, and negative coverage. For positive and neutral articles, it provides full details with links to the original sources. For negative articles, it only shows the titles without links, as specified in the requirements.&lt;/p&gt;
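&lt;p&gt;The display rule above can be sketched in a few lines; the article fields and sample data are illustrative:&lt;/p&gt;

```python
# Sketch of the display rule: positive and neutral articles keep their
# links, negative ones show the title only. Fields are illustrative.
def for_display(articles: list[dict]) -> list[dict]:
    shown = []
    for a in articles:
        if a["sentiment"] == "negative":
            shown.append({"title": a["title"]})  # no link for negative coverage
        else:
            shown.append({"title": a["title"], "url": a["url"]})
    return shown

articles = [
    {"title": "AI boosts clinics", "url": "https://example.com/1", "sentiment": "positive"},
    {"title": "AI fears grow", "url": "https://example.com/2", "sentiment": "negative"},
]
print(for_display(articles))
```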

&lt;p&gt;🛠️ For this project, I used &lt;strong&gt;&lt;a href="https://github.com/astral-sh/uv" rel="noopener noreferrer"&gt;uv&lt;/a&gt;&lt;/strong&gt; to install and manage Python dependencies, ensuring a faster and more efficient experience.&lt;br&gt;&lt;br&gt;
🌐 I also used &lt;strong&gt;&lt;a href="https://github.com/mcp-use/mcp-use" rel="noopener noreferrer"&gt;mcp-use&lt;/a&gt;&lt;/strong&gt;, which provides the easiest way to interact with MCP servers using custom agents.&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlw2dds4bkyx5295r7rn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlw2dds4bkyx5295r7rn.png" alt="demo"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How I Used Bright Data's Infrastructure
&lt;/h2&gt;

&lt;p&gt;Bright Data's MCP server was absolutely essential to this project's success. It enabled all four key actions required for effective real-time web interaction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "mcpServers": {
      "Bright Data": {
        "command": "npx",
        "args": ["@brightdata/mcp"],
        "env": {
          "API_TOKEN": "",
          "WEB_UNLOCKER_ZONE": "",
          "BROWSER_AUTH": ""
        }
      }
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  1. Discover
&lt;/h3&gt;

&lt;p&gt;The application uses Bright Data's MCP server to discover relevant AI news content across the web by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Searching for recent news articles about specific AI topics&lt;/li&gt;
&lt;li&gt;Finding content across major news sites like UOL, G1, TechCrunch, CNN, and BBC&lt;/li&gt;
&lt;li&gt;Identifying relevant articles based on content and recency&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  2. Access
&lt;/h3&gt;

&lt;p&gt;The MCP server enables the application to navigate through complex news websites that would typically block automated access, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handling paywalls and cookie consent forms&lt;/li&gt;
&lt;li&gt;Accessing content behind JavaScript rendering&lt;/li&gt;
&lt;li&gt;Navigating multi-page articles&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  3. Extract
&lt;/h3&gt;

&lt;p&gt;Once the MCP server accesses the content, the application extracts structured data including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Article titles&lt;/li&gt;
&lt;li&gt;Publication dates&lt;/li&gt;
&lt;li&gt;URLs for reference&lt;/li&gt;
&lt;li&gt;Article summaries&lt;/li&gt;
&lt;li&gt;Full article content for sentiment analysis&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  4. Interact
&lt;/h3&gt;

&lt;p&gt;The MCP server simulates human-like interaction with websites, enabling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scrolling through infinite-scroll pages&lt;/li&gt;
&lt;li&gt;Clicking on "Read More" buttons&lt;/li&gt;
&lt;li&gt;Navigating pagination&lt;/li&gt;
&lt;li&gt;Handling dynamic content loading&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without Bright Data's MCP capabilities, this project would be limited to analyzing pre-collected datasets or simple RSS feeds, significantly reducing its value for real-time sentiment analysis.&lt;/p&gt;
&lt;h2&gt;
  
  
  Performance Improvements
&lt;/h2&gt;

&lt;p&gt;Using Bright Data's real-time web data access dramatically improved the AI system's performance compared to traditional approaches in several key ways:&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Freshness of Data
&lt;/h3&gt;

&lt;p&gt;Traditional approaches rely on pre-collected datasets or APIs that may be hours or days old. With Bright Data's MCP server, the application accesses the most current news articles available, ensuring that sentiment analysis reflects the very latest media coverage.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Breadth of Sources
&lt;/h3&gt;

&lt;p&gt;Most traditional approaches are limited to sources with accessible APIs or RSS feeds. Bright Data's MCP server allows the application to access any public news website, regardless of its technical implementation, dramatically increasing the diversity of sources analyzed.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Content Depth
&lt;/h3&gt;

&lt;p&gt;Traditional scrapers often struggle with JavaScript-rendered content, paywalls, and complex site structures. Bright Data's MCP server enables extraction of complete article content rather than just headlines or summaries, providing much richer data for sentiment analysis.&lt;/p&gt;
&lt;h3&gt;
  
  
  4. Adaptability
&lt;/h3&gt;

&lt;p&gt;News websites frequently change their structure, breaking traditional scrapers. Bright Data's MCP server handles these changes seamlessly, ensuring consistent data extraction over time without requiring constant maintenance.&lt;/p&gt;
&lt;h3&gt;
  
  
  5. Contextual Understanding
&lt;/h3&gt;

&lt;p&gt;With access to full article content rather than just metadata, the llama3-70b-8192 model can perform much more nuanced sentiment analysis, understanding context, tone, and implications that would be missed by traditional approaches.&lt;/p&gt;
&lt;h3&gt;
  
  
  6. Real-World Applications
&lt;/h3&gt;

&lt;p&gt;The improved performance enables practical applications that wouldn't be possible with traditional approaches, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Corporate reputation management for AI companies&lt;/li&gt;
&lt;li&gt;Investment decision support for AI markets&lt;/li&gt;
&lt;li&gt;Academic research on media portrayal of AI&lt;/li&gt;
&lt;li&gt;Policy development based on public concerns&lt;/li&gt;
&lt;li&gt;Educational content development with current examples&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By combining Bright Data's real-time web access capabilities with advanced AI models, this project demonstrates how AI systems can provide valuable insights into complex topics like AI media coverage, with performance that far exceeds what would be possible using traditional web scraping or API-based approaches.&lt;/p&gt;
&lt;h2&gt;
  
  
  My GitHub Repository
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/vec21" rel="noopener noreferrer"&gt;
        vec21
      &lt;/a&gt; / &lt;a href="https://github.com/vec21/ai-news-sentiment-analyzer" rel="noopener noreferrer"&gt;
        ai-news-sentiment-analyzer
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Real-time sentiment analyzer for AI-related news using Bright Data MCP and Groq's llama3-70b-8192 model
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;AI News Sentiment Analyzer&lt;/h1&gt;
&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Overview&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;This project is a real-time sentiment analyzer for AI-related news, leveraging Bright Data's MCP server capabilities and Groq's powerful llama3-70b-8192 model. The system discovers, accesses, extracts, and analyzes news articles about artificial intelligence, providing insights into how AI is portrayed in current media.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Key Features&lt;/h2&gt;
&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Real-Time Web Data Collection&lt;/h3&gt;

&lt;/div&gt;

&lt;p&gt;The application uses Bright Data's MCP server to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Discover&lt;/strong&gt;: Find relevant AI news content across major news sites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access&lt;/strong&gt;: Navigate through complex news websites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extract&lt;/strong&gt;: Pull structured data including titles, URLs, dates, and summaries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interact&lt;/strong&gt;: Engage with dynamic, JavaScript-rendered pages to extract content&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Advanced Sentiment Analysis&lt;/h3&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Utilizes Groq's llama3-70b-8192 model for nuanced sentiment analysis&lt;/li&gt;
&lt;li&gt;Classifies articles as positive, neutral, or negative based on content&lt;/li&gt;
&lt;li&gt;Provides fallback to NLTK for sentiment analysis when needed&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;User-Friendly Interface&lt;/h3&gt;

&lt;/div&gt;


&lt;ul&gt;

&lt;li&gt;Clean Streamlit interface with search functionality&lt;/li&gt;

&lt;li&gt;Visual representation of sentiment distribution&lt;/li&gt;

&lt;li&gt;Expandable article details with direct links…&lt;/li&gt;

&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/vec21/ai-news-sentiment-analyzer" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>devchallenge</category>
      <category>brightdatachallenge</category>
      <category>ai</category>
      <category>webdata</category>
    </item>
    <item>
      <title>Retro African Safari Dash 🦒🌍</title>
      <dc:creator>Veríssimo Cassange</dc:creator>
      <pubDate>Mon, 12 May 2025 03:00:45 +0000</pubDate>
      <link>https://forem.com/vec21/retro-african-safari-dash-55l1</link>
      <guid>https://forem.com/vec21/retro-african-safari-dash-55l1</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aws-amazon-q-v2025-04-30"&gt;Amazon Q Developer "Quack The Code" Challenge&lt;/a&gt;: That's Entertainment!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built "Retro African Safari Dash," an 8-bit style arcade game where players navigate through an African savanna, collecting cultural artifacts while avoiding obstacles 🦒🏺. The game features a complete serverless backend for leaderboard functionality, allowing players to submit and compare scores 🏆.&lt;/p&gt;

&lt;p&gt;The project demonstrates how Amazon Q Developer can assist in creating both the frontend game mechanics and the serverless AWS infrastructure needed to support it 🚀. The game includes:&lt;/p&gt;

&lt;p&gt;• Retro pixel art graphics with animated characters and objects 🎨✨&lt;br&gt;
• Intuitive controls using arrow keys ⬆️⬇️⬅️➡️&lt;br&gt;
• Progressive difficulty that increases over time ⏩&lt;br&gt;
• Lives system and score tracking ❤️🥇&lt;br&gt;
• Online leaderboard with authentication 🔐&lt;br&gt;
• Responsive design that works across devices 📱💻&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;The game is playable at: &lt;a href="https://d33ejg1jsmvn6g.cloudfront.net" rel="noopener noreferrer"&gt;https://d33ejg1jsmvn6g.cloudfront.net&lt;/a&gt; 🌐&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/Od4wLQwzsJg"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;For testing the leaderboard functionality, please use these credentials:&lt;br&gt;
• Admin: username: admin 👑, password: 2025DEVChallenge 🛡️&lt;br&gt;
• User: username: newuser 🧑, password: 2025DEVChallenge 🛡️&lt;/p&gt;
&lt;h2&gt;
  
  
  Code Repository
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/vec21" rel="noopener noreferrer"&gt;
        vec21
      &lt;/a&gt; / &lt;a href="https://github.com/vec21/retro-african-safari-dash" rel="noopener noreferrer"&gt;
        retro-african-safari-dash
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A retro-style game built with Phaser.js and AWS serverless architecture for the AWS Amazon Q Challenge
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Retro African Safari Dash 🦒🌍&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A retro-style, 8-bit arcade game where players control a character navigating an African savanna, collecting cultural artifacts while avoiding obstacles. Built for the AWS Amazon Q Developer Challenge "That's Entertainment!" 🎮✨.&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/vec21/retro-african-safari-dash/screenshots/gameplay.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fvec21%2Fretro-african-safari-dash%2Fscreenshots%2Fgameplay.png" alt="Game Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Play the Game 🎉&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;The game is available at: &lt;a href="https://d33ejg1jsmvn6g.cloudfront.net" rel="nofollow noopener noreferrer"&gt;https://d33ejg1jsmvn6g.cloudfront.net&lt;/a&gt; 🌐&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Project Overview 📝&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;This project demonstrates how Amazon Q Developer can be used to create an entertaining retro-style game with AWS infrastructure. The game features:&lt;/p&gt;
&lt;p&gt;• 8-bit pixel art aesthetic inspired by 90s arcade games 🎨
• Player movement with arrow keys 🕹️
• Collectible artifacts that increase score 🏺
• Obstacles to avoid 🚧
• Lives system with progressive difficulty ❤️
• Leaderboard system using DynamoDB 🏆
• Serverless backend with AWS Lambda and API Gateway ⚙️&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Architecture 🏗️&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;The project uses the following AWS services:&lt;/p&gt;
&lt;p&gt;• &lt;strong&gt;S3&lt;/strong&gt;: Hosts the static game files (HTML, CSS, JavaScript, assets) 📦
• &lt;strong&gt;CloudFront&lt;/strong&gt;…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/vec21/retro-african-safari-dash" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Used Amazon Q Developer
&lt;/h2&gt;

&lt;p&gt;Amazon Q Developer was instrumental throughout the entire development process:&lt;/p&gt;

&lt;h3&gt;
  
  
  Game Development 🎲
&lt;/h3&gt;

&lt;p&gt;I started with a basic concept and asked Amazon Q to help generate the core game mechanics using Phaser.js. It provided complete code for:&lt;br&gt;
• Player movement and controls 🕹️&lt;br&gt;
• Collision detection between player, artifacts, and obstacles 🚧&lt;br&gt;
• Score tracking and lives system 📊&lt;br&gt;
• Game state management (menu, gameplay, game over) 📋&lt;/p&gt;

&lt;p&gt;When I encountered issues with sprite rendering, Amazon Q helped debug and fix the problems by suggesting proper scaling for my 1024x1024 sprite images 🖼️.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Infrastructure 🏗️
&lt;/h3&gt;

&lt;p&gt;For the backend, Amazon Q helped me:&lt;br&gt;
• Generate Pulumi code to provision all required AWS resources 🛠️&lt;br&gt;
• Create a DynamoDB table with appropriate indexes for the leaderboard 📈&lt;br&gt;
• Develop a Lambda function to handle score submission and retrieval ⚡&lt;br&gt;
• Configure API Gateway with proper CORS settings 🔗&lt;br&gt;
• Set up CloudFront distribution for content delivery 🌍&lt;/p&gt;

&lt;p&gt;When I encountered deployment issues with S3 bucket creation, Amazon Q identified the problem and suggested a solution using random name generation to ensure unique bucket names 🪣.&lt;/p&gt;
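&lt;p&gt;That fix can be sketched as appending a random suffix so the bucket name is globally unique; the prefix here is illustrative, not the project's actual naming scheme:&lt;/p&gt;

```python
# Sketch of the unique-bucket-name fix. S3 bucket names must be
# lowercase and globally unique; a random hex suffix avoids collisions.
import secrets

def unique_bucket_name(prefix: str = "safari-dash") -> str:
    return f"{prefix}-{secrets.token_hex(4)}"  # 8 random hex characters

print(unique_bucket_name())
```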

&lt;h3&gt;
  
  
  Integration and Testing 🧪
&lt;/h3&gt;

&lt;p&gt;Amazon Q also assisted with:&lt;br&gt;
• Connecting the frontend game to the backend API 🌐&lt;br&gt;
• Implementing the authentication system with the required testing credentials 🔐&lt;br&gt;
• Creating comprehensive testing instructions 📝&lt;br&gt;
• Translating all code comments and documentation to English 🌍&lt;br&gt;
• Debugging connectivity issues between components 🐞&lt;/p&gt;

&lt;p&gt;The most impressive aspect was how Amazon Q could understand both the game development aspects and the AWS infrastructure requirements, providing cohesive solutions that worked together seamlessly 🤝.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture 🏛️
&lt;/h2&gt;

&lt;p&gt;The project uses a serverless architecture with:&lt;br&gt;
• S3 for static web hosting 📦&lt;br&gt;
• CloudFront for content delivery 🌐&lt;br&gt;
• API Gateway and Lambda for backend processing ⚙️&lt;br&gt;
• DynamoDB for data storage 📊&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ns5cxplcpz3a2nr5jux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ns5cxplcpz3a2nr5jux.png" alt="Architecture Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This project demonstrates how Amazon Q Developer can help create complete, production-ready applications that combine interactive frontend experiences with scalable cloud backends 🌟.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>awschallenge</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>GitHub PR Analyzer: Automating Code Reviews with Amazon Q Developer</title>
      <dc:creator>Veríssimo Cassange</dc:creator>
      <pubDate>Sun, 11 May 2025 14:26:02 +0000</pubDate>
      <link>https://forem.com/vec21/github-pr-analyzer-automating-code-reviews-with-amazon-q-developer-31</link>
      <guid>https://forem.com/vec21/github-pr-analyzer-automating-code-reviews-with-amazon-q-developer-31</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aws-amazon-q-v2025-04-30"&gt;Amazon Q Developer "Quack The Code" Challenge&lt;/a&gt;: Crushing the Command Line&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I created &lt;strong&gt;GitHub PR Analyzer&lt;/strong&gt; 🚀, a powerful command-line tool that automates code review in GitHub repositories. This tool addresses several pain points that developers face when managing pull requests across multiple repositories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time-consuming manual reviews&lt;/strong&gt; ⏳: Developers often spend hours reviewing PRs, especially in large repositories with many contributors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Difficulty tracking PRs across multiple repositories&lt;/strong&gt; 🌐: When working with microservices or distributed systems, tracking PRs across repositories becomes challenging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of standardized reporting&lt;/strong&gt; 📊: Without a consistent way to document PR reviews, valuable insights get lost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Missing historical data&lt;/strong&gt; 📜: Finding patterns in past PRs is difficult without proper archiving.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GitHub PR Analyzer solves these problems by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Automating PR analysis&lt;/strong&gt; 🤖: Extracts detailed information about PRs (open, closed, or all)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detecting code issues&lt;/strong&gt; 🔍: Identifies TODOs, FIXMEs, and files with excessive changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generating comprehensive reports&lt;/strong&gt; 📄: Creates detailed PDF reports with PR statistics and code analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supporting multiple repositories&lt;/strong&gt; 📚: Analyzes several repositories in a single command&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Providing a web interface&lt;/strong&gt; 🌍: Makes all reports accessible through a user-friendly web interface&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sending email notifications&lt;/strong&gt; 📧: Alerts team members when new reports are generated&lt;/li&gt;
&lt;/ol&gt;
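&lt;p&gt;The code-issue check in point 2 can be sketched as a scan over the added lines of a unified diff; this is an illustration under assumed inputs, not the tool's actual implementation:&lt;/p&gt;

```python
# Sketch of the TODO/FIXME check: flag markers appearing in lines a PR
# adds. The sample diff text below is illustrative.
import re

MARKER = re.compile(r"\b(TODO|FIXME)\b")

def find_markers(diff: str) -> list[tuple[int, str]]:
    """Return (line_number_in_diff, line) for added lines carrying a marker."""
    hits = []
    for i, line in enumerate(diff.splitlines(), start=1):
        # "+" marks an added line; "+++" is the file header, not content.
        if line.startswith("+") and not line.startswith("+++") and MARKER.search(line):
            hits.append((i, line))
    return hits

diff = """\
+++ b/app.py
+def handler(event):
+    # TODO: validate input
+    return event
"""
print(find_markers(diff))
```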

&lt;p&gt;The tool is built with Python 🐍 and integrates with AWS services (S3, Lambda, SNS, CloudWatch) ☁️. I used &lt;strong&gt;Pulumi&lt;/strong&gt; for infrastructure as code (IaC) 🛠️—a practice I adopted and refined during my participation in a previous challenge, the &lt;a href="https://dev.to/challenges/pulumi"&gt;Pulumi Challenge&lt;/a&gt;, where I first explored infrastructure automation in depth.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Web Interface 🌐
&lt;/h3&gt;

&lt;p&gt;The GitHub PR Analyzer provides a web interface to browse and search all generated reports 📊:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0o8inrekohu75ymf098.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0o8inrekohu75ymf098.png" alt="Interface Web"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  PDF Reports 📄
&lt;/h3&gt;

&lt;p&gt;The tool generates detailed PDF reports with PR statistics and code analysis 📈:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxg3m30oxro5p7umw265.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxg3m30oxro5p7umw265.png" alt="PDF Report Example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The image above is just a snapshot extracted from the generated PDF. You can view the full report directly in the PDF file available in the repository.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Email Notifications 📧
&lt;/h3&gt;

&lt;p&gt;When a new report is generated, team members receive email notifications 🚨:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2gd0n99yphigfilpn0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2gd0n99yphigfilpn0n.png" alt="Email Notification"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Live Demo 🎮
&lt;/h3&gt;

&lt;p&gt;You can access the web interface here: &lt;a href="http://vec21-aws-challenge.s3-website-us-east-1.amazonaws.com" rel="noopener noreferrer"&gt;GitHub PR Analyzer Web Interface&lt;/a&gt; 🔗&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Repository
&lt;/h2&gt;

&lt;p&gt;The complete code is available on GitHub:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/vec21" rel="noopener noreferrer"&gt;
        vec21
      &lt;/a&gt; / &lt;a href="https://github.com/vec21/aws-challenge-automation" rel="noopener noreferrer"&gt;
        aws-challenge-automation
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      This repository contains my submission for the Amazon Q Developer Challenge – Quack the Code (April/May 2025), in the 'Crushing the Command Line' category. Details: https://dev.to/challenges/aws-amazon-q-v2025-04-30
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;GitHub PR Analyzer 🌟&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/vec21/aws-challenge-automation/screenshots/banner.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fvec21%2Faws-challenge-automation%2Fscreenshots%2Fbanner.png" alt="GitHub PR Analyzer"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A command-line tool that automates code review in GitHub repositories, generating detailed PDF reports and making them available through a web interface. 🚀&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features 🎯&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pull Request Analysis&lt;/strong&gt; 📋: Extracts detailed information about PRs (open, closed, or all)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Analysis&lt;/strong&gt; 🔍: Detects issues like TODOs, FIXMEs, and files with many changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Date Filtering&lt;/strong&gt; 🗓️: Allows analyzing PRs created in the last N days&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple Repository Support&lt;/strong&gt; 📦: Analyzes multiple repositories in a single report&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PDF Reports&lt;/strong&gt; 📄: Generates detailed reports in PDF format&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web Interface&lt;/strong&gt; 🌐: View all reports in a user-friendly web interface&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email Notifications&lt;/strong&gt; 📧: Receive alerts when new reports are generated&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Prerequisites ✅&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Python 3.9+ 🐍&lt;/li&gt;
&lt;li&gt;AWS account with access to create resources (S3, Lambda, SNS) ☁️&lt;/li&gt;
&lt;li&gt;GitHub personal access token 🔑&lt;/li&gt;
&lt;li&gt;Pulumi CLI installed ⚙️&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Installation 🛠️&lt;/h2&gt;

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Clone the repository:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;git clone https://github.com/vec21/aws-challenge-automation.git
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; aws-challenge-automation&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create and activate a virtual environment:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;python&lt;/pre&gt;…
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/vec21/aws-challenge-automation" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Used Amazon Q Developer
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Amazon Q Developer 🤖
&lt;/h3&gt;

&lt;p&gt;Amazon Q Developer was the secret weapon that helped me build this tool efficiently. I leveraged its specialized commands to accelerate development:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faogtebos13vr2v0ryffy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faogtebos13vr2v0ryffy.png" alt="Amazon Q Developer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;/dev&lt;/code&gt; - Code Development 💻
&lt;/h3&gt;

&lt;p&gt;Amazon Q Developer helped me bootstrap the project by generating the initial CLI structure with GitHub API integration. This saved me hours of boilerplate coding and documentation reading ⏳.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Generated by Amazon Q Developer
&lt;/span&gt;&lt;span class="nd"&gt;@click.command&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nd"&gt;@click.option&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--repo&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;help&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;GitHub repository (user/repo) or comma-separated list&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nd"&gt;@click.option&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--state&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;open&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;click&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Choice&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;open&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;closed&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;all&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt; 
              &lt;span class="n"&gt;help&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;State of PRs to analyze&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;review_code&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Reviews pull requests from a GitHub repository and generates a PDF report.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt; &lt;span class="err"&gt;📄&lt;/span&gt;
    &lt;span class="c1"&gt;# Implementation follows...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Amazon Q also helped me implement the Pulumi infrastructure code, setting up S3, Lambda, and SNS services with proper permissions 🛠️:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Generated by Amazon Q Developer
&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BucketV2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;automation-bucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vec21-aws-challenge&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AutomationBucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;website_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BucketWebsiteConfigurationV2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;website-config&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;index_document&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;suffix&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;error_document&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error.html&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;code&gt;/review&lt;/code&gt; - Code Optimization 🔍
&lt;/h3&gt;

&lt;p&gt;When I encountered rate limiting issues with the GitHub API, Amazon Q suggested implementing retry mechanisms 🔄:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before Amazon Q review
&lt;/span&gt;&lt;span class="n"&gt;repository&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_repo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;pulls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_pulls&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# After Amazon Q review
&lt;/span&gt;&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;repository&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_repo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;pulls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_pulls&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;RateLimitExceededException&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Wait before retrying
&lt;/span&gt;    &lt;span class="n"&gt;repository&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_repo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;pulls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_pulls&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
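
&lt;p&gt;The inline &lt;code&gt;try/except&lt;/code&gt; above retries only once and with a fixed delay. A more reusable pattern (my own sketch, not code from the repo) wraps the call with exponential backoff:&lt;/p&gt;

```python
import time

def with_retries(fn, attempts=3, base_delay=2.0, retriable=(Exception,)):
    """Call fn(), retrying with exponential backoff on retriable errors."""
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))  # back off: 2s, 4s, 8s, ...

# Hypothetical usage with PyGithub (exception name taken from the snippet above):
# pulls = with_retries(lambda: repository.get_pulls(state=state),
#                      retriable=(RateLimitExceededException,))
```

&lt;p&gt;Backoff matters for rate limits specifically: a fixed two-second wait can keep hammering an API whose limit window has not reset yet.&lt;/p&gt;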



&lt;p&gt;It also identified a critical timezone issue when comparing dates ⏰:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before Amazon Q review
&lt;/span&gt;&lt;span class="n"&gt;since_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# After Amazon Q review
&lt;/span&gt;&lt;span class="n"&gt;since_date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
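
&lt;p&gt;The fix matters because the GitHub API reports timestamps in UTC, and once those are parsed as timezone-aware datetimes, Python refuses to compare them with naive ones:&lt;/p&gt;

```python
from datetime import datetime, timezone, timedelta

naive = datetime.now() - timedelta(days=7)   # no tzinfo attached
aware = datetime.now(timezone.utc)           # tzinfo=timezone.utc

try:
    naive < aware
except TypeError as exc:
    # "can't compare offset-naive and offset-aware datetimes"
    print(exc)

# Making both sides timezone-aware, as in the corrected line, works:
since_date = datetime.now(timezone.utc) - timedelta(days=7)
assert aware > since_date
```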



&lt;h3&gt;
  
  
  &lt;code&gt;/test&lt;/code&gt; - Test Generation 🧪
&lt;/h3&gt;

&lt;p&gt;Amazon Q generated comprehensive test cases for my code, including fixtures and mocks ✅:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Generated by Amazon Q Developer
&lt;/span&gt;&lt;span class="nd"&gt;@pytest.fixture&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;mock_pull_request&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;mock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MagicMock&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;mock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;number&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="n"&gt;mock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Test PR&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;mock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;login&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;testuser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;mock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;created_at&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;# More properties...
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;mock&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_analyze_pull_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mock_repository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mock_pull_request&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;analyze_pull_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mock_repository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mock_pull_request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;complexity&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
    &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;issues&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
    &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;languages&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;code&gt;/doc&lt;/code&gt; - Documentation 📝
&lt;/h3&gt;

&lt;p&gt;Amazon Q helped me create clear documentation, including the architecture diagram and usage examples 📚:&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture 🏗️
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0kx9rc110l0gsr33ott.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0kx9rc110l0gsr33ott.png" alt="Architecture Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The architecture diagram was originally generated by Amazon Q in ASCII format, using characters such as &lt;code&gt;-&lt;/code&gt;, &lt;code&gt;&amp;gt;&lt;/code&gt;, &lt;code&gt;|&lt;/code&gt;, and letters to illustrate the system components and their interactions. This ASCII diagram was then converted into a visual image using ChatGPT, based on the same structure, to improve readability and presentation 🎨.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The project uses the following AWS services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3&lt;/strong&gt; ☁️: Storage for PDF reports and web interface hosting
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda&lt;/strong&gt; ⚡: Scheduled execution of code analysis
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SNS&lt;/strong&gt; 📬: Email notification delivery
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudWatch Events&lt;/strong&gt; ⏰: Scheduling of periodic executions&lt;/li&gt;
&lt;/ul&gt;
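
&lt;p&gt;To give a feel for the Lambda-to-SNS hand-off, here is a sketch of the payload a Lambda might publish when a new report lands in S3. The function name and message fields are illustrative, not the project's actual schema; only the bucket URL comes from the live demo above:&lt;/p&gt;

```python
import json

def build_report_notification(repo: str, report_key: str, pr_count: int) -> dict:
    """Build an SNS publish payload announcing a freshly generated PDF report."""
    report_url = (
        "http://vec21-aws-challenge.s3-website-us-east-1.amazonaws.com/"
        + report_key
    )
    return {
        "Subject": f"New PR report: {repo}",
        "Message": json.dumps({
            "repository": repo,
            "pull_requests_analyzed": pr_count,
            "report_url": report_url,
        }),
    }

# The Lambda would pass this straight to boto3:
#   sns.publish(TopicArn=topic_arn, **build_report_notification(...))
```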

&lt;h3&gt;
  
  
  Key Insights from Using Amazon Q Developer 🧠
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start with a clear problem statement&lt;/strong&gt; 🎯: The more specific your request to Amazon Q, the better the generated code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterative development&lt;/strong&gt; 🔄: Use Amazon Q to generate a basic structure, then refine it with more specific requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leverage specialized commands&lt;/strong&gt; 🛠️: The &lt;code&gt;/dev&lt;/code&gt;, &lt;code&gt;/review&lt;/code&gt;, &lt;code&gt;/test&lt;/code&gt;, and &lt;code&gt;/doc&lt;/code&gt; commands are tailored for different development phases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify and understand the code&lt;/strong&gt; ✅: Always review and understand the generated code before implementing it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Amazon Q for learning&lt;/strong&gt; 📖: The explanations provided alongside the code are excellent learning resources.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Future Improvements 🚀
&lt;/h2&gt;

&lt;p&gt;While GitHub PR Analyzer already provides significant value in its current form, I have several exciting enhancements planned for future iterations:&lt;/p&gt;

&lt;h3&gt;
  
  
  Short-term Improvements 🔜
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AI-powered Code Analysis&lt;/strong&gt; 🧠: Integrate with Amazon Bedrock or Amazon CodeGuru to provide deeper code insights, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Potential bugs and security vulnerabilities&lt;/li&gt;
&lt;li&gt;Code quality metrics and suggestions&lt;/li&gt;
&lt;li&gt;Performance optimization recommendations&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Report Templates&lt;/strong&gt; 📊: Allow users to define their own report templates to focus on metrics that matter most to their teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GitHub Actions Integration&lt;/strong&gt; ⚙️: Create a GitHub Action that automatically generates reports on PR creation, updates, or merges.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Slack/Teams Notifications&lt;/strong&gt; 💬: Expand notification options beyond email to include popular team communication platforms.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Medium-term Vision 🔭
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;PR Trend Analysis&lt;/strong&gt; 📈: Implement historical data analysis to identify patterns and trends in your development process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PR velocity over time&lt;/li&gt;
&lt;li&gt;Common issues by repository or contributor&lt;/li&gt;
&lt;li&gt;Code quality trends&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interactive Dashboards&lt;/strong&gt; 📱: Enhance the web interface with interactive charts and filtering capabilities for better data exploration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-platform Support&lt;/strong&gt; 🌐: Extend beyond GitHub to support GitLab, Bitbucket, and other Git hosting services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CI/CD Pipeline Analysis&lt;/strong&gt; 🔄: Correlate PR data with CI/CD pipeline metrics to identify bottlenecks in your development workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Long-term Roadmap 🗺️
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Collaborative Review Features&lt;/strong&gt; 👥: Enable teams to collaborate on PR reviews directly within the tool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Comment and discussion threads&lt;/li&gt;
&lt;li&gt;Review assignments and tracking&lt;/li&gt;
&lt;li&gt;Approval workflows&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Machine Learning Insights&lt;/strong&gt; 🤖: Train models on your PR history to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Predict PR review time and complexity&lt;/li&gt;
&lt;li&gt;Recommend optimal reviewers based on code expertise&lt;/li&gt;
&lt;li&gt;Identify potential merge conflicts before they occur&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Enterprise Integration&lt;/strong&gt; 🏢: Develop enterprise features like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSO authentication&lt;/li&gt;
&lt;li&gt;Role-based access control&lt;/li&gt;
&lt;li&gt;Compliance reporting&lt;/li&gt;
&lt;li&gt;Custom retention policies&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Developer Productivity Metrics&lt;/strong&gt; ⚡: Provide insights into developer productivity while respecting privacy and focusing on team-level metrics rather than individual performance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I'm excited to continue evolving this tool based on user feedback and emerging needs in the development community. If you have suggestions for additional features, please share them in the comments or open an issue in the GitHub repository! 🙌&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion 🎉
&lt;/h2&gt;

&lt;p&gt;Building GitHub PR Analyzer with Amazon Q Developer was a game-changer 🚀. What would have taken weeks to develop was completed in days ⏩, with better code quality and more comprehensive documentation 📚. The tool now helps our team save hours each week on code reviews ⏰ while providing valuable insights into our development process 📊.&lt;/p&gt;

&lt;p&gt;Amazon Q Developer isn't just a code generator—it's a true development partner 🤝 that helps you think through problems 🧠, implement solutions 🛠️, and learn best practices along the way 📖.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>awschallenge</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Pulumi + GitHub Actions: A CI/CD Pipeline for AWS S3 Deployment</title>
      <dc:creator>Veríssimo Cassange</dc:creator>
      <pubDate>Mon, 07 Apr 2025 03:17:58 +0000</pubDate>
      <link>https://forem.com/vec21/pulumi-github-actions-a-cicd-pipeline-for-aws-s3-deployment-klh</link>
      <guid>https://forem.com/vec21/pulumi-github-actions-a-cicd-pipeline-for-aws-s3-deployment-klh</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/pulumi"&gt;Pulumi Deploy and Document Challenge&lt;/a&gt;: Fast Static Website Deployment&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;For the Pulumi challenge, I built a project that provisions an AWS S3 bucket and hosts a static mini-site using Pulumi and Python. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt 1&lt;/strong&gt;: I created an S3 bucket with Pulumi and uploaded a simple website (HTML, CSS, JavaScript) that shares my journey learning Infrastructure as Code (IaC). I deployed it manually using &lt;code&gt;pulumi up&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt 2&lt;/strong&gt;: I automated the deployment process with GitHub Actions. The workflow in &lt;code&gt;.github/workflows/deploy.yml&lt;/code&gt; runs &lt;code&gt;pulumi up&lt;/code&gt; on every &lt;code&gt;push&lt;/code&gt; to the &lt;code&gt;main&lt;/code&gt; branch, updating the S3 bucket automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The site is live at: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Before Prompt 2 (manual deploy):&lt;/strong&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8e4qiyvonh9prki8d0g.png" alt="my site prompt1"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;After Prompt 2 (automated deploy):&lt;/strong&gt;
&lt;a href="https://d33ejg1jsmvn6g.cloudfront.net/index.html" rel="noopener noreferrer"&gt;https://d33ejg1jsmvn6g.cloudfront.net/index.html&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Live Demo Link
&lt;/h2&gt;

&lt;p&gt;Here’s a quick demo video showing my project in action, including the GitHub Actions workflow running and the S3 site updating:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/YMSQLbcjqbY"&gt;
  &lt;/iframe&gt;
&lt;br&gt;
&lt;a href="https://d33ejg1jsmvn6g.cloudfront.net/index.html" rel="noopener noreferrer"&gt;https://d33ejg1jsmvn6g.cloudfront.net/index.html&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Project Repo
&lt;/h2&gt;

&lt;p&gt;Check out the project repository on GitHub, which includes a detailed README with setup instructions:&lt;br&gt;
&lt;a href="https://github.com/vec21/pulumi-prompt2-challenge" rel="noopener noreferrer"&gt;https://github.com/vec21/pulumi-prompt2-challenge&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Journey
&lt;/h2&gt;

&lt;p&gt;This project was an exciting journey into Infrastructure as Code (IaC) and CI/CD, but it came with its share of challenges.&lt;/p&gt;

&lt;h3&gt;
  
  
  Starting with Prompt 1
&lt;/h3&gt;

&lt;p&gt;For Prompt 1, I learned how to use Pulumi to provision an S3 bucket on AWS. I chose Python because I’m comfortable with it, and the Pulumi documentation was super helpful. Writing the &lt;code&gt;__main__.py&lt;/code&gt; to create the bucket and upload my mini-site (HTML, CSS, JavaScript) went smoothly. The biggest challenge was getting the AWS permissions right for static hosting: I had to adjust the bucket policy and public access settings to allow anonymous reads, which took some trial and error. Once I got it working, seeing my site live on the S3 URL was a rewarding moment!&lt;/p&gt;
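
&lt;p&gt;For anyone hitting the same wall: the public-access piece boils down to a bucket policy that allows anonymous &lt;code&gt;s3:GetObject&lt;/code&gt;. A minimal sketch of that policy document (the helper function is mine, not code from the repo):&lt;/p&gt;

```python
import json

def public_read_policy(bucket_name: str) -> str:
    """Return a bucket policy JSON allowing anonymous reads of all objects."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",            # anyone, i.e. the public internet
            "Action": "s3:GetObject",    # read-only access to objects
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    })
```

&lt;p&gt;With Pulumi, this string can be attached to the bucket via &lt;code&gt;aws.s3.BucketPolicy&lt;/code&gt;; the bucket's public access block also has to permit policy-based public reads.&lt;/p&gt;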

&lt;h3&gt;
  
  
  Tackling Prompt 2
&lt;/h3&gt;

&lt;p&gt;Prompt 2 was where things got trickier. Automating the deployment with GitHub Actions sounded straightforward, but I ran into several issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Billing Issue with GitHub Actions&lt;/strong&gt;: Initially, my workflow wouldn’t run because of a payment error in my GitHub account. Once I sorted out the billing settings, runs started triggering normally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pulumi Login Errors&lt;/strong&gt;: The biggest hurdle was authenticating Pulumi in the GitHub Actions workflow. My first attempt at &lt;code&gt;pulumi login&lt;/code&gt; failed with an &lt;code&gt;unknown flag: --token&lt;/code&gt; error because I used an incorrect command. After several tries, I learned that setting the &lt;code&gt;PULUMI_ACCESS_TOKEN&lt;/code&gt; as an environment variable and running &lt;code&gt;pulumi login&lt;/code&gt; without extra flags worked best. Separating the AWS and Pulumi credential steps in the workflow also helped make debugging easier.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazfacw3q1doaliiqds0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazfacw3q1doaliiqds0j.png" alt="Pulumi error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Iterating and Testing&lt;/strong&gt;: I made small changes to my &lt;code&gt;index.html&lt;/code&gt; file to test the CI/CD pipeline. Watching the GitHub Actions workflow run successfully and update the S3 bucket automatically was a huge win! Below is a screenshot of the workflow running smoothly after fixing the issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;To exercise the pipeline, I added these lines to &lt;code&gt;index.html&lt;/code&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;&amp;lt;p&amp;gt;Automatic Deployment with GitHub Actions - 06/04/2025&amp;lt;/p&amp;gt;&lt;/code&gt; and&lt;br&gt;
&lt;code&gt;&amp;lt;p&amp;gt;Automatic Deployment with GitHub Actions - 06/04/2025 - Test2&amp;lt;/p&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqud0g6xjqs9eodai3y7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqud0g6xjqs9eodai3y7.png" alt="Image index.html"&gt;&lt;/a&gt;&lt;br&gt;
then pushed with &lt;code&gt;git push origin main&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj159i5qezq7wvn8vnbqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj159i5qezq7wvn8vnbqs.png" alt="github action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Learned
&lt;/h3&gt;

&lt;p&gt;This project taught me a lot about IaC and CI/CD:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pulumi is Powerful&lt;/strong&gt;: Using Pulumi to manage infrastructure with code is much more intuitive than clicking through the AWS Console.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Saves Time&lt;/strong&gt;: Automating deployments with GitHub Actions makes the process faster and less error-prone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging is Key&lt;/strong&gt;: I got better at reading logs and troubleshooting issues, like authentication errors and billing limits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community Support&lt;/strong&gt;: The Pulumi and GitHub Actions communities (and some AI assistance!) were invaluable in helping me overcome challenges.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, this challenge pushed me to grow as a developer and gave me practical skills I can apply to real-world projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Pulumi
&lt;/h2&gt;

&lt;p&gt;Pulumi was the core tool for managing infrastructure in this project, and I used it in both Prompt 1 and Prompt 2.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I Used Pulumi
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt 1&lt;/strong&gt;: I wrote a Python script (&lt;code&gt;__main__.py&lt;/code&gt;) to provision an AWS S3 bucket and configure it as a static website. Pulumi made it easy to define the bucket, set the &lt;code&gt;index.html&lt;/code&gt; as the default document, and upload my mini-site files (HTML, CSS, JavaScript) with just a few lines of code. I ran &lt;code&gt;pulumi up&lt;/code&gt; to deploy everything manually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt 2&lt;/strong&gt;: I reused the same Pulumi code in my GitHub Actions workflow (&lt;code&gt;.github/workflows/deploy.yml&lt;/code&gt;). The workflow runs &lt;code&gt;pulumi up&lt;/code&gt; automatically on every &lt;code&gt;push&lt;/code&gt; to the &lt;code&gt;main&lt;/code&gt; branch, updating the S3 bucket with any changes to my site. Pulumi handled the infrastructure updates seamlessly, ensuring consistency between manual and automated deploys.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why Pulumi Was Beneficial
&lt;/h3&gt;

&lt;p&gt;Pulumi brought several advantages to this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code-Based Infrastructure&lt;/strong&gt;: Writing infrastructure as code in Python was much more intuitive than clicking through the AWS Console. I could version my infrastructure alongside my site code in Git.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reusability&lt;/strong&gt;: The same Pulumi script worked for both manual and automated deploys, saving me time and reducing errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear Feedback&lt;/strong&gt;: Pulumi’s CLI gave detailed output during &lt;code&gt;pulumi up&lt;/code&gt;, making it easy to understand what changes were being applied to my S3 bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Assistance with GitHub Copilot
&lt;/h3&gt;

&lt;p&gt;While I didn’t use Pulumi Copilot, I did rely on GitHub Copilot to help with some parts of the project. One key prompt I used was tweaking the GitHub Actions workflow to fix a &lt;code&gt;pulumi login&lt;/code&gt; issue. Copilot suggested simplifying the authentication step by removing unnecessary flags, which helped me move forward after several failed attempts. This assistance was crucial in debugging and getting the CI/CD pipeline to work smoothly.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>pulumichallenge</category>
      <category>webdev</category>
      <category>cloud</category>
    </item>
    <item>
      <title>From AWS Console Chaos to a Static Site with Pulumi: My IaC Journey</title>
      <dc:creator>Veríssimo Cassange</dc:creator>
      <pubDate>Sun, 06 Apr 2025 04:55:55 +0000</pubDate>
      <link>https://forem.com/vec21/from-aws-console-chaos-to-a-static-site-with-pulumi-my-iac-journey-2nni</link>
      <guid>https://forem.com/vec21/from-aws-console-chaos-to-a-static-site-with-pulumi-my-iac-journey-2nni</guid>
      <description>

&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/pulumi"&gt;Pulumi Deploy and Document Challenge&lt;/a&gt;: Fast Static Website Deployment&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a fast static website hosted on AWS S3 and served globally via CloudFront. It features two pages: a personal diary (&lt;code&gt;index.html&lt;/code&gt;) and a tutorial page (&lt;code&gt;tutorial.html&lt;/code&gt;) with interactive "Copy" buttons for code snippets. Using Pulumi, I automated the deployment of an S3 bucket, set it up as a public website, and added a CloudFront CDN for speed and security—all coded in Python. This project replaced my old, tedious AWS console workflow with Infrastructure as Code (IaC).&lt;/p&gt;

&lt;h2&gt;
  
  
  Live Demo Link
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Diary:&lt;/strong&gt; &lt;a href="https://d33ejg1jsmvn6g.cloudfront.net/index.html" rel="noopener noreferrer"&gt;https://d33ejg1jsmvn6g.cloudfront.net/index.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tutorial:&lt;/strong&gt; &lt;a href="https://d33ejg1jsmvn6g.cloudfront.net/tutorial.html" rel="noopener noreferrer"&gt;https://d33ejg1jsmvn6g.cloudfront.net/tutorial.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Repo
&lt;/h2&gt;

&lt;p&gt;Check out my full code and a detailed README here:&lt;br&gt;
&lt;a href="https://github.com/vec21/pulumi-s3-challenge" rel="noopener noreferrer"&gt;https://github.com/vec21/pulumi-s3-challenge&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  My Journey
&lt;/h2&gt;

&lt;p&gt;Hey there! I’m Veríssimo, a beginner in IaC, and this is my story of escaping the AWS console chaos. I used to spend 15 minutes per EC2 instance in the console—selecting instance types, networks, SSH keys—repeating it all for every test. It was slow and frustrating. For this challenge, I dove into Pulumi to deploy a static website, and it was a game-changer!&lt;/p&gt;

&lt;p&gt;I started on Ubuntu (more on my Windows woes later). First, I installed Pulumi and AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://get.pulumi.com | sh
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;unzip curl &lt;span class="nt"&gt;-y&lt;/span&gt;
curl &lt;span class="s2"&gt;"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"awscliv2.zip"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; unzip awscliv2.zip &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo&lt;/span&gt; ./aws/install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I configured AWS credentials with &lt;code&gt;aws configure&lt;/code&gt;, using an IAM user with full access to EC2, S3, CloudFront, and IAM.&lt;/p&gt;

&lt;p&gt;Next, I tested Pulumi with the &lt;code&gt;vm-aws-python&lt;/code&gt; template (&lt;code&gt;pulumi new vm-aws-python&lt;/code&gt;). A quick &lt;code&gt;pulumi up&lt;/code&gt;, a "yes," and my first EC2 instance was live in minutes! This hooked me, so I aimed bigger: a static website. Using &lt;code&gt;pulumi new static-website-aws-python&lt;/code&gt;, I set out to host a diary and tutorial page. But my first &lt;code&gt;pulumi up&lt;/code&gt; hit a snag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TypeError: BucketV2._internal_init() got an unexpected keyword argument 'website'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The S3 V2 API had changed! After digging into &lt;a href="https://www.pulumi.com/docs" rel="noopener noreferrer"&gt;Pulumi’s docs&lt;/a&gt;, I updated &lt;code&gt;__main__.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BucketV2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BucketWebsiteConfigurationV2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bucket-website&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index_document&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;suffix&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I added public permissions, and it worked!&lt;/p&gt;
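&lt;p&gt;A note on the "public permissions" part: newer S3 buckets block public policies by default, so you have to relax the public access block before a public-read bucket policy will take effect. A sketch of that step (resource names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json

import pulumi
import pulumi_aws as aws

bucket = aws.s3.BucketV2("bucket")

# Newer S3 buckets refuse public policies by default; relax that first.
access = aws.s3.BucketPublicAccessBlock(
    "bucket-access",
    bucket=bucket.id,
    block_public_policy=False,
    restrict_public_buckets=False,
)

# Then attach a policy allowing anonymous reads of the site objects.
policy = aws.s3.BucketPolicy(
    "bucket-policy",
    bucket=bucket.id,
    policy=bucket.arn.apply(lambda arn: json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": [f"{arn}/*"],
        }],
    })),
    opts=pulumi.ResourceOptions(depends_on=[access]),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;depends_on&lt;/code&gt; matters: applying the policy before the access block is relaxed fails with an access-denied error.&lt;/p&gt;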

&lt;p&gt;To make the site fast and secure, I added CloudFront. My final &lt;code&gt;__main__.py&lt;/code&gt; synced my &lt;code&gt;./www&lt;/code&gt; folder (HTML, CSS, JS, images) to S3 and set up a CDN. Running &lt;code&gt;pulumi up --refresh&lt;/code&gt; gave me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 URL:&lt;/strong&gt; &lt;a href="http://bucket-17bf36b.s3-website-us-west-2.amazonaws.com" rel="noopener noreferrer"&gt;http://bucket-17bf36b.s3-website-us-west-2.amazonaws.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudFront URL:&lt;/strong&gt; &lt;a href="https://d33ejg1jsmvn6g.cloudfront.net" rel="noopener noreferrer"&gt;https://d33ejg1jsmvn6g.cloudfront.net&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
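&lt;p&gt;The CloudFront piece of the final &lt;code&gt;__main__.py&lt;/code&gt; looked roughly like this: a distribution that fronts the S3 website endpoint and redirects visitors to HTTPS. The cache and certificate settings below are sensible defaults, not necessarily my exact values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import pulumi
import pulumi_aws as aws

# Bucket and static-website config, as shown earlier in the post.
bucket = aws.s3.BucketV2("bucket")
website = aws.s3.BucketWebsiteConfigurationV2(
    "bucket-website",
    bucket=bucket.bucket,
    index_document={"suffix": "index.html"},
)

# CDN in front of the S3 website endpoint. The origin is HTTP-only
# because S3 static website hosting does not serve HTTPS itself.
cdn = aws.cloudfront.Distribution(
    "cdn",
    enabled=True,
    default_root_object="index.html",
    origins=[{
        "origin_id": "s3-site",
        "domain_name": website.website_endpoint,
        "custom_origin_config": {
            "http_port": 80,
            "https_port": 443,
            "origin_protocol_policy": "http-only",
            "origin_ssl_protocols": ["TLSv1.2"],
        },
    }],
    default_cache_behavior={
        "target_origin_id": "s3-site",
        "viewer_protocol_policy": "redirect-to-https",
        "allowed_methods": ["GET", "HEAD"],
        "cached_methods": ["GET", "HEAD"],
        "forwarded_values": {
            "query_string": False,
            "cookies": {"forward": "none"},
        },
    },
    restrictions={"geo_restriction": {"restriction_type": "none"}},
    viewer_certificate={"cloudfront_default_certificate": True},
)

pulumi.export("cdn_url", cdn.domain_name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;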

&lt;p&gt;Then came the Windows hiccup. My username "Veríssimo" (with an accent) broke Pulumi’s paths. Switching to "Verissimo" and rebooting locked me out—chaos! Thankfully, my Ubuntu dual-boot saved the day.&lt;/p&gt;

&lt;p&gt;Lessons learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid accents in Windows usernames—trust me!
&lt;/li&gt;
&lt;li&gt;S3 V2 needs separate website config—check the docs.
&lt;/li&gt;
&lt;li&gt;CloudFront is worth it for speed and HTTPS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using Pulumi
&lt;/h2&gt;

&lt;p&gt;Pulumi turned a tedious task into a fun coding project. I used its Python SDK to define my infrastructure—first spinning up an EC2 instance with &lt;code&gt;vm-aws-python&lt;/code&gt;, then building the static site with &lt;code&gt;static-website-aws-python&lt;/code&gt;. In &lt;code&gt;__main__.py&lt;/code&gt;, I configured an S3 bucket, website settings, and CloudFront distribution. It was beneficial because it saved me hours of console clicking, made my setup reproducible, and let me version-control it with Git. The &lt;a href="https://www.pulumi.com/docs" rel="noopener noreferrer"&gt;Pulumi docs&lt;/a&gt; were key for fixing the S3 V2 issue. I didn’t use Pulumi Copilot, but splitting the bucket and website config was a critical manual tweak:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BucketV2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bucket&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BucketWebsiteConfigurationV2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bucket-website&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index_document&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;suffix&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pulumi + Python = beginner-friendly power!&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>pulumichallenge</category>
      <category>webdev</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
