<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Entelligence AI</title>
    <description>The latest articles on Forem by Entelligence AI (@entelligenceai).</description>
    <link>https://forem.com/entelligenceai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F11040%2F2e20993f-f8a5-471c-8b0e-8ec0f6ccc975.jpg</url>
      <title>Forem: Entelligence AI</title>
      <link>https://forem.com/entelligenceai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/entelligenceai"/>
    <language>en</language>
    <item>
      <title>Inside OpenClaw: How a Persistent AI Agent Actually Works</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Thu, 19 Feb 2026 18:28:58 +0000</pubDate>
      <link>https://forem.com/entelligenceai/inside-openclaw-how-a-persistent-ai-agent-actually-works-1mnk</link>
      <guid>https://forem.com/entelligenceai/inside-openclaw-how-a-persistent-ai-agent-actually-works-1mnk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://openclaw.ai/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;, originally called ClawdBot, is trending everywhere. People are building insane things with it: an AI agent that rebuilds an entire website via Telegram, an AI agent platform where humans are only guests, and giving one AI full access to your system that can accidentally delete 6,000 emails because of a prompt injection attack.&lt;/p&gt;

&lt;p&gt;Unlike ChatGPT or Claude sitting behind a web interface, OpenClaw runs as a persistent process on your hardware. You message it through WhatsApp, Telegram, or Slack. It messages you back. It can check things while you sleep. It has access to your filesystem, your terminal, and whatever APIs you give it.&lt;/p&gt;

&lt;p&gt;The possibilities are wild. The security risks are real. The technical architecture behind it explains both, and it's simpler than you'd think. Let's see how it actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Gateway Architecture: The Central Nervous System&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenClaw runs as a single Node.js process on your machine, listening on &lt;code&gt;127.0.0.1:18789&lt;/code&gt; by default. This process, called the Gateway, manages every messaging platform connection simultaneously: WhatsApp, Telegram, Discord, Slack, Signal, and others.&lt;/p&gt;

&lt;p&gt;Think of it as the central nervous system. Every message coming in from any platform passes through the Gateway. Every response your agent generates goes back out through it. All communication happens via WebSocket protocol, which keeps connections open and allows real-time bidirectional messaging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eftt1a4f0b8gtseqn6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eftt1a4f0b8gtseqn6z.png" alt="Clawdbot Architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Session State, Routing, and Security&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Gateway handles three critical functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;session state management,&lt;/li&gt;
&lt;li&gt;message routing, and&lt;/li&gt;
&lt;li&gt;security enforcement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a message arrives from WhatsApp, the Gateway determines which agent session should handle it based on the user, conversation context, or routing rules you've configured. It loads the appropriate session state, passes the message to the agent, waits for the LLM to generate a response, then routes that response back through the correct platform connection.&lt;/p&gt;

&lt;p&gt;This centralized design solves a real technical problem. WhatsApp Web only allows one active session at a time. If you try running multiple instances, they conflict and kick each other off. The Gateway acts as that single session, then manages multiple agent conversations internally. Configure WhatsApp once, and the Gateway handles everything downstream. The same principle applies to every other platform.&lt;/p&gt;
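&lt;p&gt;The multiplexing idea can be sketched in a few lines of Node.js. This is a simplified illustration, not OpenClaw's actual code; names like &lt;code&gt;sessionKey&lt;/code&gt; are invented for the sketch:&lt;/p&gt;

```javascript
// Sketch: one platform connection fans out to many agent sessions.
// Names and structure are illustrative, not OpenClaw's real internals.
class Gateway {
  constructor() {
    this.sessions = new Map(); // sessionKey -> conversation state
  }

  // Derive a stable key from platform + peer so each contact gets its
  // own agent session behind the single platform connection.
  sessionKey(platform, peerId) {
    return `${platform}:${peerId}`;
  }

  route(platform, peerId, text) {
    const key = this.sessionKey(platform, peerId);
    if (!this.sessions.has(key)) {
      this.sessions.set(key, { history: [] });
    }
    this.sessions.get(key).history.push(text);
    return key; // the agent session that will handle this message
  }
}
```

&lt;p&gt;One WhatsApp login, arbitrarily many &lt;code&gt;whatsapp:*&lt;/code&gt; sessions behind it; the platform only ever sees a single client.&lt;/p&gt;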

&lt;h3&gt;
  
  
  &lt;strong&gt;Connection and Authentication Flow&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When a platform wants to connect, it establishes a WebSocket connection and sends a &lt;code&gt;connect&lt;/code&gt; request with its device identity; basically, "&lt;em&gt;I'm WhatsApp running on device XYZ, and I want to talk to your agent.&lt;/em&gt;" The Gateway checks its pairing store. If this device has never connected before, it rejects the connection and waits for explicit approval.&lt;/p&gt;

&lt;p&gt;Once approved, the Gateway issues a device token scoped to specific permissions. That token determines what this device can do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;which users it can message as,&lt;/li&gt;
&lt;li&gt;which agent sessions it can access, and&lt;/li&gt;
&lt;li&gt;what capabilities it has.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Future connections use this token for authentication instead of requiring re-approval every time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Message Routing After Authentication
&lt;/h3&gt;

&lt;p&gt;Once a platform is authenticated, every message it sends goes through routing logic. The Gateway decides where the message goes and whether the agent should respond based on rules you configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Messages from users on your allow list get processed&lt;/li&gt;
&lt;li&gt;Messages from unknown users get dropped before the agent sees them&lt;/li&gt;
&lt;li&gt;DMs route to your personal assistant agent&lt;/li&gt;
&lt;li&gt;Group chats might only trigger responses when someone @mentions the agent directly&lt;/li&gt;
&lt;/ul&gt;
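&lt;p&gt;Those rules boil down to a small predicate. A minimal sketch, with field names assumed for illustration:&lt;/p&gt;

```javascript
// Sketch of the routing rules above: allow list first, then
// DM-vs-group handling. Message and config shapes are invented.
function shouldRespond(msg, config) {
  if (!config.allowList.includes(msg.from)) return false; // unknown sender: dropped
  if (msg.isGroup) return msg.mentionsAgent === true;     // groups: @mention only
  return true;                                            // DMs always reach the agent
}
```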

&lt;h3&gt;
  
  
  &lt;strong&gt;Network Binding: Local by Default&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;All this routing and authentication happens on your local machine. The Gateway binds to &lt;code&gt;127.0.0.1&lt;/code&gt; (localhost) by default, not &lt;code&gt;0.0.0.0&lt;/code&gt; (all network interfaces). This network binding determines who can connect to the Gateway in the first place.&lt;/p&gt;

&lt;p&gt;Binding to &lt;code&gt;127.0.0.1&lt;/code&gt; means only processes running on your machine can reach the Gateway, no external network access. Your agent isn't accessible from outside your machine unless you deliberately reconfigure the binding. This prevents accidental public exposure, a critical consideration given the Gateway has access to your filesystem, terminal, and connected APIs.&lt;/p&gt;

&lt;p&gt;Every message follows the same path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;platform → Gateway authentication → routing logic → agent session load → LLM processing → response generation → Gateway → platform delivery.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One process. All platforms. Centralized control. And everything stays local unless you explicitly decide otherwise.&lt;/p&gt;

&lt;p&gt;Now that we understand how messages reach the agent, let's look at what happens once they get there.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Agent Loop: From Message to Action&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When a message hits the Gateway, it isn't just forwarded blindly to an LLM. There's a processing cycle that turns your "check my calendar" into an actual response with context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6btvwn1trukvp5jjteel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6btvwn1trukvp5jjteel.png" alt="Agent loop diagram" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Gateway routes the message to the appropriate agent session based on who sent it and where it came from. That session loads conversation history from the file system: everything you've said to this agent, not just in this conversation but in previous ones too. This is why your agent remembers you asked about a project last Tuesday.&lt;/p&gt;

&lt;p&gt;The agent passes the message to the LLM along with available tools and skills. The model processes the request, decides if it needs to call a tool (like checking your calendar or sending an email), executes those actions, and generates a response. That response streams back through the Gateway to whichever platform you messaged from.&lt;/p&gt;
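&lt;p&gt;The whole cycle can be sketched as a small async loop. The &lt;code&gt;llm&lt;/code&gt; and &lt;code&gt;tools&lt;/code&gt; interfaces here are invented stand-ins, not OpenClaw's real API:&lt;/p&gt;

```javascript
// Minimal sketch of the agent loop: load history, let the model
// optionally call a tool, then produce a reply.
async function agentLoop(session, message, llm, tools) {
  session.history.push({ role: 'user', content: message });
  let reply = await llm(session.history);
  // If the model asked for a tool, run it and ask the model again.
  while (reply.tool) {
    const result = await tools[reply.tool](reply.args);
    session.history.push({ role: 'tool', content: result });
    reply = await llm(session.history);
  }
  session.history.push({ role: 'assistant', content: reply.text });
  return reply.text;
}
```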

&lt;h3&gt;
  
  
  &lt;strong&gt;Context That Persists&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Unlike a fresh ChatGPT conversation every time, OpenClaw sessions don't reset. The agent knows who you are, what you've asked before, and what's in your workspace. If you told it last week that you're working on Project XYZ, it remembers. If you saved notes in your workspace, it can reference them.&lt;/p&gt;

&lt;p&gt;This persistence happens because everything stays in files on your machine. The agent reloads context every time it processes a message, but that context doesn't disappear when you close the chat. And you're not locked to one LLM: configure Claude for complex reasoning, GPT-4 for creative tasks, or a cheaper model for simple queries. The agent loop works the same regardless.&lt;/p&gt;

&lt;p&gt;This file-based approach to memory is what makes the persistence possible. Let's look at how that actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Persistent Memory: Everything is a File&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenClaw doesn't use a database. Everything is stored in &lt;code&gt;~/clawd/&lt;/code&gt; as Markdown files.&lt;/p&gt;

&lt;p&gt;Your agent's behavior is defined in &lt;code&gt;AGENTS.md&lt;/code&gt;. Its personality and core instructions are stored in &lt;code&gt;SOUL.md&lt;/code&gt;. Available tools are listed in &lt;code&gt;TOOLS.md&lt;/code&gt;. Skills you've installed are saved in &lt;code&gt;~/clawd/skills/&amp;lt;skill&amp;gt;/SKILL.md&lt;/code&gt;. Memory logs are timestamped files with names like &lt;code&gt;2026-02-10-conversation.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Open any text editor, and you see exactly what your agent knows. Want to check what it remembers about your last project discussion? Open the memory log. Want to modify how it responds to calendar requests? Edit &lt;code&gt;AGENTS.md&lt;/code&gt;. Want to see what tools it has access to? Read &lt;code&gt;TOOLS.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi475beutirj2jsnriyrs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi475beutirj2jsnriyrs.png" alt="Persistent memory" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since everything is plain text, version control works without extra setup. Run &lt;code&gt;git init&lt;/code&gt; in &lt;code&gt;~/clawd/&lt;/code&gt; and every change gets tracked. You can see when you added a new skill (we'll cover skills in an upcoming section), when the agent updated its long-term memory, or when you modified its core instructions. If something breaks, roll back to a previous commit. Backups are simple: just copy the directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Memory Organizes Itself&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OpenClaw separates memory into layers. Daily logs capture short-term context: what you talked about today, what tasks are in progress, and what links you shared. These timestamped files accumulate over time.&lt;/p&gt;

&lt;p&gt;Long-term memory is curated by the agent itself. As conversations happen, the agent decides what's important enough to remember permanently. Maybe you told it you prefer concise responses. Maybe you gave it standing instructions about how to handle certain types of requests. That information gets written to long-term memory files and persists across sessions.&lt;/p&gt;

&lt;p&gt;If you're analyzing a dataset and your computer crashes, the agent reloads workspace state when it comes back up. It knows where you left off because that state lives in a file it can read on restart.&lt;/p&gt;

&lt;p&gt;But memory alone doesn't make an agent proactive. For that, OpenClaw needs a mechanism to wake up and check things without you asking. That's where the heartbeat comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Heartbeat: The Proactive Agent&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most AI assistants wait for you to ask a question. OpenClaw doesn't have to.&lt;/p&gt;

&lt;p&gt;A cron job wakes your agent at whatever interval you configure; the default is every 30 minutes. The agent checks &lt;code&gt;HEARTBEAT.md&lt;/code&gt; for instructions, runs a reasoning loop, and decides if it needs to tell you something. No prompt required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyqr7h0pu0mcozp9b0zg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyqr7h0pu0mcozp9b0zg.png" alt="Clawdbot main system" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how you get proactive notifications. Your server goes down at 3 am, and the agent messages you on Telegram. A stock you're monitoring drops 15%; the agent executes a sell order and confirms via WhatsApp. Three urgent emails from a client arrive, and it flags them immediately instead of waiting for you to check.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cheap Checks First&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OpenClaw doesn't call the LLM on every heartbeat (as you can see in the image above). That would burn through API costs fast. Instead, it uses a two-tier approach: cheap checks first, models only when needed.&lt;/p&gt;

&lt;p&gt;The agent runs fast, deterministic scripts first, checking for new emails, calendar changes, or system alerts. These are simple pattern matches or API queries that cost nothing. Only when something significant changes does the agent escalate to the LLM for interpretation and decision-making.&lt;/p&gt;

&lt;p&gt;For example, the cheap check sees "new email from landlord." That's a signal. The agent then calls Claude or GPT-4 to read the email, understand context from previous conversations about your lease, and decide if it needs to notify you or take action. If the heartbeat finds nothing new, no LLM call happens.&lt;/p&gt;
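&lt;p&gt;The two-tier flow can be sketched as a single function. &lt;code&gt;checks&lt;/code&gt; and &lt;code&gt;llm&lt;/code&gt; are illustrative stand-ins for the cheap scripts and the model call:&lt;/p&gt;

```javascript
// Sketch of the two-tier heartbeat: run cheap deterministic checks
// first and only escalate to the expensive LLM when a check fires.
async function heartbeat(checks, llm) {
  const signals = checks.map((check) => check()).filter(Boolean); // cheap tier
  if (signals.length === 0) return null; // nothing new: skip the LLM entirely
  return llm(signals);                   // expensive tier, only when needed
}
```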

&lt;p&gt;This design keeps costs reasonable while maintaining responsiveness. You're not paying for 48 LLM calls per day when nothing important is happening.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Configuration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The configuration for &lt;code&gt;HEARTBEAT.md&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;textevery: &lt;span class="s2"&gt;"30m"&lt;/span&gt;
target: &lt;span class="s2"&gt;"whatsapp:+1234567890"&lt;/span&gt;
active_hours: &lt;span class="s2"&gt;"9am-10pm"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;The &lt;code&gt;active_hours&lt;/code&gt; setting prevents your agent from waking you at 2 am with non-urgent updates.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;target&lt;/code&gt; specifies which platform and contact to send heartbeat messages to.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;every&lt;/code&gt; parameter controls frequency: set it to &lt;code&gt;"1h"&lt;/code&gt; for standard monitoring, &lt;code&gt;"15m"&lt;/code&gt; for tighter checks if you're actively working, or &lt;code&gt;"5m"&lt;/code&gt; if you need near-real-time response.&lt;/li&gt;
&lt;/ol&gt;
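&lt;p&gt;The &lt;code&gt;active_hours&lt;/code&gt; check is simple to reason about. A minimal sketch of one way to evaluate a window like &lt;code&gt;"9am-10pm"&lt;/code&gt; (the parsing rules are assumptions, not OpenClaw's documented behavior):&lt;/p&gt;

```javascript
// Sketch: should a heartbeat be allowed to message the user right now?
function toHour24(s) {
  const n = parseInt(s, 10);
  if (s.endsWith('am')) return n === 12 ? 0 : n; // 12am -> 0
  return n === 12 ? 12 : n + 12;                 // 10pm -> 22
}

function inActiveHours(window, hour) {
  const [start, end] = window.split('-').map(toHour24);
  return hour >= start && hour < end; // quiet outside the window
}
```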

&lt;p&gt;Each heartbeat cycle loads the agent's current context, checks for conditions defined in the heartbeat instructions, and only sends a message if something actually needs attention. It's not spamming you every 30 minutes; it's checking every 30 minutes and speaking up when there's a reason.&lt;/p&gt;

&lt;p&gt;This proactive capability is built in, but you can extend what the agent actually does during those checks. That's where skills come in.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Skills &amp;amp; Execution: Extending Agent Capabilities&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenClaw uses a skill-based architecture where capabilities are defined in Markdown files, not compiled code.&lt;/p&gt;

&lt;p&gt;Each skill lives at &lt;code&gt;~/clawd/skills/&amp;lt;skill-name&amp;gt;/SKILL.md&lt;/code&gt; and contains instructions for interacting with APIs or performing workflows. The agent reads these files at runtime to understand available capabilities. Installation is immediate, with no recompilation or server restarts. Over 100 community skills exist on &lt;a href="https://clawhub.ai/" rel="noopener noreferrer"&gt;ClawHub&lt;/a&gt; for Gmail, browser automation, home control, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6tmnho4wjlasf4vwn8x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6tmnho4wjlasf4vwn8x.png" alt="Clawhub" width="732" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Execution Model&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Skills execute wherever the OpenClaw process runs: your local machine, a VPS, or a managed container. The architecture stays identical: Gateway routes messages, agent loads skills from the filesystem, LLM calls happen directly (not proxied through a vendor), and results write back to local storage.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Aspect&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Cloud AI Tools&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;OpenClaw&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Data storage&lt;/td&gt;
&lt;td&gt;Vendor servers&lt;/td&gt;
&lt;td&gt;Where process runs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Execution&lt;/td&gt;
&lt;td&gt;Vendor infrastructure&lt;/td&gt;
&lt;td&gt;Your hardware/VPS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API calls&lt;/td&gt;
&lt;td&gt;Proxied through vendor&lt;/td&gt;
&lt;td&gt;Direct from agent&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Tool restrictions are enforced at the Gateway level. You can run the agent in sandboxed mode (restricted capabilities for safety) or full access mode (unrestricted system control). Sandboxed mode blocks filesystem writes and shell access; full access mode allows terminal commands and browser control. If the LLM tries to do something it's not allowed to, the Gateway stops it before it happens.&lt;/p&gt;
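&lt;p&gt;Gateway-level enforcement means the check happens before the tool ever executes. A minimal sketch; the mode and tool names are illustrative, not OpenClaw's actual configuration:&lt;/p&gt;

```javascript
// Sketch: the Gateway gates tool calls by mode before execution.
const SANDBOX_BLOCKED = new Set(['shell', 'fs_write', 'browser']);

function executeTool(mode, tool, run) {
  if (mode === 'sandboxed' && SANDBOX_BLOCKED.has(tool)) {
    // Stopped at the Gateway: the tool callback never runs.
    return { ok: false, reason: `blocked in sandboxed mode: ${tool}` };
  }
  return { ok: true, result: run() };
}
```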

&lt;p&gt;Regardless of where the process runs, connecting to messaging platforms requires authentication and security enforcement. That's where the Gateway's role becomes critical.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Security &amp;amp; Multi-Platform Handling&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Gateway enforces security at the routing layer, not just at connection time.&lt;/p&gt;

&lt;p&gt;Once platforms are authenticated and connected, every message goes through security checks before reaching the agent. Allow lists control which users or groups get responses. If someone not on the list sends a message, the Gateway drops it before the agent sees it. This works across all platforms (WhatsApp, Telegram, Discord, Slack) using the same allow-list configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2poqpmjp4rr26cqjzkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2poqpmjp4rr26cqjzkl.png" alt="Gateway and Security" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How Multi-Platform Routing Works&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Channel Layer sits between platform connections and the agent. WhatsApp messages arrive in one format, Telegram in another, Discord in a third. The Channel Layer adapts these to a common internal structure so the agent doesn't need platform-specific code. It also handles platform events like reactions, typing indicators, and read receipts.&lt;/p&gt;

&lt;p&gt;This abstraction means you can write one routing rule that applies to all platforms. "Only respond to @mentions in group chats" works the same whether the message came from Slack or Discord. The Channel Layer translates platform-specific mention formats into a standard structure that the Gateway understands.&lt;/p&gt;
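&lt;p&gt;A sketch of that normalization step. The incoming payload shapes are invented for illustration; real platform payloads are richer:&lt;/p&gt;

```javascript
// Sketch: the Channel Layer adapts platform-specific payloads to one
// internal shape, so a single rule like "respond only to @mentions in
// groups" works everywhere.
function normalize(platform, payload) {
  switch (platform) {
    case 'slack':
      return {
        text: payload.text,
        isGroup: payload.channel_type === 'channel',
        mentionsAgent: payload.text.includes('<@AGENT>'),
      };
    case 'discord':
      return {
        text: payload.content,
        isGroup: payload.guild_id != null,
        mentionsAgent: payload.mentions.some((m) => m.bot),
      };
    default:
      throw new Error(`unknown platform: ${platform}`);
  }
}
```

&lt;p&gt;Downstream routing rules only ever see the common shape, never the platform-specific one.&lt;/p&gt;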

&lt;h3&gt;
  
  
  &lt;strong&gt;Security Architecture: Layered Restrictions&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OpenClaw's architecture assumes the LLM can be tricked. Prompt injection attacks are real, and the architecture can't prevent them at the LLM level, so it limits damage through multiple enforcement layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tool approval workflows&lt;/strong&gt; gate dangerous operations (file deletion, shell commands, payments) with explicit user confirmation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scoped permissions&lt;/strong&gt; separate read and write access (read emails vs send emails, query database vs modify database)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Device token capabilities&lt;/strong&gt; restrict what each connected device can do (DMs only, no group chats, read-only mode)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One compromised conversation shouldn't give access to everything. These layers don't stop a determined attacker who controls what the LLM reads, but they slow them down enough to notice and intervene. The 6,000-email deletion incident from the intro wasn't a design flaw; it demonstrated why these restrictions matter and why running an AI agent with full system access requires understanding the risks.&lt;/p&gt;

&lt;p&gt;The architecture gives you control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;choose which platforms connect,&lt;/li&gt;
&lt;li&gt;which users get responses,&lt;/li&gt;
&lt;li&gt;which tools are available, and&lt;/li&gt;
&lt;li&gt;which operations require approval.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That control is the tradeoff for running a persistent agent with access to your systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What else?
&lt;/h2&gt;

&lt;p&gt;OpenClaw's architecture is surprisingly simple: a Gateway routes messages, an agent loop processes them with LLM and tools, memory is persisted as files, skills extend capabilities, and a heartbeat runs proactive checks. No database, no microservices, no vendor lock-in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6yngu95osze5tcmooqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6yngu95osze5tcmooqx.png" alt="Reactive Flow" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bngocs9kk3y54hc7chk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bngocs9kk3y54hc7chk.png" alt="Proactive Flow" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The design choices (file-based memory, Markdown skills, local execution, an assumption of compromise) prioritize transparency and control over convenience. You see exactly what your agent knows, what it can do, and where it runs. The tradeoff is that you manage the infrastructure and accept the security risks that come with giving an AI access to your systems.&lt;/p&gt;

&lt;p&gt;What makes OpenClaw interesting isn't revolutionary technology. It's the combination of persistent execution, proactive behavior, multi-platform integration, and modular capabilities in an architecture you can inspect and modify. Whether that's worth running depends on what you're building and how much control you need.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>Ask Ellie : Getting Engineering Visibility Without Adding More Dashboards</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Mon, 26 Jan 2026 18:15:00 +0000</pubDate>
      <link>https://forem.com/entelligenceai/ask-ellie-getting-engineering-visibility-without-adding-more-dashboards-14m6</link>
      <guid>https://forem.com/entelligenceai/ask-ellie-getting-engineering-visibility-without-adding-more-dashboards-14m6</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Why engineering teams lose visibility when data is spread across GitHub, Jira/Linear, CI, and monitoring tools&lt;/li&gt;
&lt;li&gt;How a chat-based interface can answer engineering, delivery, and performance questions using real system data&lt;/li&gt;
&lt;li&gt;How engineers, managers, and leaders use the same interface differently to reduce context switching&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/W6kp00N68U0"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Modern engineering teams depend on multiple systems such as GitHub, Jira or Linear, CI pipelines, monitoring tools, and analytics platforms to ship software. While each tool solves a specific problem, the overall workflow often fragments context across systems.&lt;/p&gt;

&lt;p&gt;According to a recent developer productivity report, over &lt;a href="https://skan.ai/hubfs/Whitepapers/Whitepaper_Developer%20Productivity_Q3%202025.pdf" rel="noopener noreferrer"&gt;31% of engineering teams&lt;/a&gt; identify context switching and the time required to gather context as their number one productivity killer. In engineering teams, this often shows up as time spent searching for information, preparing status updates, or switching between dashboards to answer basic delivery questions.&lt;/p&gt;

&lt;p&gt;Entelligence addresses this problem through &lt;strong&gt;Ask Ellie&lt;/strong&gt;, an engineering intelligence chat interface available in Slack and the Entelligence dashboard. In this article, let us explore how Ask Ellie helps teams query real engineering data, understand ongoing work, and make informed decisions without adding more tools or dashboards.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Visibility Problem in Modern Engineering Teams
&lt;/h2&gt;

&lt;p&gt;Engineering teams generate large amounts of data every day, but that data is spread across many systems. As a result, visibility into what is happening often requires manual effort rather than being readily available.&lt;/p&gt;

&lt;p&gt;Common challenges teams face include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic questions about delivery, code quality, and ownership do not have a single source of truth.&lt;/li&gt;
&lt;li&gt;Work is fragmented across GitHub for code and pull requests, Jira or Linear for tickets, CI tools for build and deployment status, and monitoring and analytics platforms for runtime signals.&lt;/li&gt;
&lt;li&gt;Engineers and managers frequently switch between dashboards to piece together context, which interrupts focus and slows decision-making.&lt;/li&gt;
&lt;li&gt;Important signals are easy to miss when information is distributed and updated at different times.&lt;/li&gt;
&lt;li&gt;As teams grow, the effort required to maintain shared understanding increases faster than the amount of work being delivered.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These gaps in visibility come not from a lack of data, but from the difficulty of accessing and connecting that data when decisions need to be made.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Chat Is Becoming a Control Plane for Engineering Data
&lt;/h2&gt;

&lt;p&gt;Chat has become the most natural place for engineering teams to ask operational questions because it is already where day-to-day coordination happens. Engineers discuss deployments, incidents, and blockers in chat long before they open dashboards or reports. When access to engineering data is available in the same interface, teams can get answers without breaking focus or switching tools.&lt;/p&gt;

&lt;p&gt;There is also an important difference between notification-driven chat and query-driven chat. Notifications push updates that may or may not be relevant at a given moment, which often leads to noise. Query-driven chat allows engineers, managers, and leaders to ask specific questions when they need information, making interactions more intentional and easier to act on.&lt;/p&gt;

&lt;p&gt;Most importantly, engineering decisions depend on context, not just metrics. A build status, a sprint number, or a performance score has limited value without understanding recent code changes, ownership, and delivery state. Chat-based access to engineering data works when it preserves this context, allowing teams to understand what is happening without manually reconstructing information across systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Chat Capabilities for Engineering Teams
&lt;/h2&gt;

&lt;p&gt;Not all AI chat tools are suited for engineering workflows. While generic AI chat systems can answer broad questions, engineering teams need responses that are grounded in real systems and current work. The difference becomes clear when comparing generic AI chat with engineering-aware AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc0zk24d1ixbbfggc875.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc0zk24d1ixbbfggc875.png" alt="Entelligence table" width="667" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ask Ellie as an Engineering Intelligence Interface
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpanx0ye42vzzduxr913u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpanx0ye42vzzduxr913u.png" alt="Ask Ellie" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://entelligence.ai/ask-ellie" rel="noopener noreferrer"&gt;Ask Ellie&lt;/a&gt; is the chat-based interface within Entelligence that allows teams to ask questions about their engineering work using natural language. It is designed to surface answers based on real engineering data rather than summaries or assumptions.&lt;/p&gt;

&lt;p&gt;Ask Ellie is available both inside Slack and within the Entelligence dashboard, allowing teams to access the same information in the environment where they already work.&lt;/p&gt;

&lt;p&gt;Ask Ellie acts as a single interface for querying engineering context without requiring teams to navigate multiple dashboards or manually aggregate information.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Engineers Can Use Ask Ellie in Day-to-Day Development
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Work Prioritization&lt;/strong&gt;: Ask what tickets to work on next, understand current priorities, and see where effort is needed without scanning multiple tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pull Request and Code Change Visibility&lt;/strong&gt;: View open pull requests, review status, and recent changes across repositories to stay aligned with ongoing development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code and Product Understanding&lt;/strong&gt;: Ask questions about specific code, pull requests, or repositories to identify risks, logic issues, architectural concerns, and quality gaps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issue and Ticket Management&lt;/strong&gt;: Create tickets directly from chat and link them to relevant pull requests or code changes, reducing context switching during development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feedback Capture&lt;/strong&gt;: Collect customer or internal feedback directly in chat, with access available both in Slack and the Entelligence dashboard.&lt;/li&gt;
&lt;/ul&gt;
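&lt;p&gt;In practice, these use cases map to plain-English questions typed into Slack or the dashboard. The prompts below are illustrative phrasings for each category (the repository name and PR number are hypothetical), not an exhaustive list of supported queries:&lt;/p&gt;

```text
What should I work on next in the payments repo?
Which of my open PRs are still waiting on review?
Summarize the riskiest changes merged to main this week.
Create a ticket for the flaky checkout test and link it to PR #123.
```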

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv8yqd366fwzlf3q9jug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjv8yqd366fwzlf3q9jug.png" alt="Team view" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ask Ellie for Engineering Managers and Product Managers
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sprint and Delivery Visibility:&lt;/strong&gt; Access sprint reports, progress summaries, and current delivery status to understand how work is moving without relying on manual updates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Current Work Awareness:&lt;/strong&gt; See what the team is actively working on, including in-progress tickets and open pull requests, with visibility into daily progress.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task and Ticket Management:&lt;/strong&gt; Create and assign tickets directly from chat, track ownership, and monitor status across connected planning tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delivery and Planning Alignment:&lt;/strong&gt; Connect delivery data from code and pull requests with planning systems such as Jira and Linear to maintain alignment between execution and roadmap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational Efficiency:&lt;/strong&gt; Reduce the need for status meetings and manual report aggregation by centralizing delivery insights within chat and the Entelligence dashboard.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa3yknmsfeg2buwyyt9s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa3yknmsfeg2buwyyt9s.png" alt="Team metrics" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Capabilities of Ask Ellie for Engineering Leaders
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Team Performance Trends:&lt;/strong&gt; Track how teams perform and improve over time, with visibility into sprint-to-sprint progress and delivery consistency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Goal Setting and Measurement:&lt;/strong&gt; Set engineering goals and monitor how effectively they are being achieved using data from ongoing work and delivery outcomes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-Impact Teams and Individuals:&lt;/strong&gt; Identify high-performing teams and individuals by understanding contribution patterns and impact across projects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Adoption Visibility:&lt;/strong&gt; View the percentage of AI-assisted code across teams and understand how AI usage is influencing development practices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delivery Impact of AI Usage:&lt;/strong&gt; Analyze how AI adoption affects shipping velocity and overall engineering outcomes without relying on manual reporting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fueyxuuugdr93gbwqorv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fueyxuuugdr93gbwqorv8.png" alt="Tickets and alerts" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Ask Ellie Gets Its Data
&lt;/h2&gt;

&lt;p&gt;Ask Ellie does not generate answers in isolation. Every response is grounded in data pulled from the engineering systems teams already use. By connecting these systems, Ask Ellie can reflect the current state of code, delivery, and execution rather than relying on static summaries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connected Engineering Systems
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Source Control and Code Reviews:&lt;/strong&gt; GitHub is used to access repositories, pull requests, review status, recent changes, and code context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Planning and Sprint Management:&lt;/strong&gt; Jira and Linear provide ticket data, sprint information, task ownership, and delivery status.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build and Delivery Signals:&lt;/strong&gt; CI systems contribute information about builds, deployments, and delivery health.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and Analytics:&lt;/strong&gt; PostHog and Sentry surface runtime signals, errors, and usage data that relate engineering work to production behavior.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational Inputs:&lt;/strong&gt; Meetings and other supported operational signals are incorporated to provide additional execution context where available.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbsh6xztd80mttp9mkro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbsh6xztd80mttp9mkro.png" alt="Ask Ellie" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Data Integration Matters
&lt;/h3&gt;

&lt;p&gt;Engineering questions rarely live inside a single system. Understanding delivery health, code risk, or team performance usually requires correlating information across repositories, tickets, and runtime signals. &lt;/p&gt;

&lt;p&gt;By integrating these systems, Ask Ellie eliminates the need for manual data aggregation, ensures answers reflect the current engineering state, and avoids insights based on stale or partial information.&lt;/p&gt;
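&lt;p&gt;To make the aggregation problem concrete, here is a minimal Python sketch, with made-up record shapes and field names rather than the actual Entelligence schema, of the kind of join across pull requests, tickets, and runtime errors that would otherwise be reconstructed by hand:&lt;/p&gt;

```python
# Hypothetical, simplified records of the kind an engineering-intelligence
# layer correlates. Field names are illustrative only.
pull_requests = [
    {"id": 101, "ticket": "ENG-42", "status": "merged", "author": "maria"},
    {"id": 102, "ticket": "ENG-43", "status": "open", "author": "dev"},
]
tickets = {
    "ENG-42": {"sprint": "Sprint 12", "state": "Done"},
    "ENG-43": {"sprint": "Sprint 12", "state": "In Progress"},
}
runtime_errors = {"ENG-42": 0, "ENG-43": 3}  # errors tagged to each ticket

def delivery_view(prs, tickets, errors):
    """Join code, planning, and runtime data into one answerable view."""
    view = []
    for pr in prs:
        key = pr["ticket"]
        view.append({
            "ticket": key,
            "pr_status": pr["status"],
            "sprint_state": tickets[key]["state"],
            "runtime_errors": errors.get(key, 0),
        })
    return view

for row in delivery_view(pull_requests, tickets, runtime_errors):
    print(row)
```

&lt;p&gt;Each row answers a question like "is ENG-43 blocked?" only because the three systems are joined; any single source alone would miss either the review state or the runtime signal.&lt;/p&gt;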

&lt;h3&gt;
  
  
  Using Insights in Engineering Workflows
&lt;/h3&gt;

&lt;p&gt;Ask Ellie’s responses are backed by real repository data and live sprint information. They can be used directly to drive action. Engineers, managers, and leaders can move from understanding a situation to acting on it without leaving chat.&lt;/p&gt;

&lt;p&gt;Ask Ellie supports creating tickets, assigning work, and tracking follow-ups directly from the same interface used to ask questions. This allows chat to function as a starting point for operational workflows rather than just a reporting surface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reducing Context Loss Without Adding New Tools
&lt;/h2&gt;

&lt;p&gt;Most engineering teams already have the tools they need, but the information they rely on is spread across too many places. GitHub, planning tools, CI systems, and monitoring platforms each hold part of the picture. &lt;/p&gt;

&lt;p&gt;Ask Ellie reduces context loss by providing a single chat interface that can surface information from these systems together. Instead of switching dashboards to reconstruct context, teams can ask focused questions and get answers that reflect the current state of code, delivery, and execution.&lt;/p&gt;

&lt;p&gt;The same interface works across roles without requiring separate views or systems. Engineers use it to understand code and prioritize work, managers use it to track delivery and sprint progress, and leaders use it to follow performance and trends over time. &lt;/p&gt;

&lt;p&gt;All of these interactions are grounded in the same underlying data; context is preserved as information moves between roles. This makes it easier for teams to stay aligned without adding new tools or increasing process overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Engineering visibility works best when it is continuous and embedded into daily workflows rather than delivered through periodic reports or disconnected dashboards. Chat-based engineering intelligence allows teams to ask questions, understand context, and act on real engineering data without interrupting how they work. &lt;/p&gt;

&lt;p&gt;By grounding answers in live code, sprint, and delivery systems, teams can spend less time navigating tools and more time building and improving software.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://entelligence.ai/ask-ellie" rel="noopener noreferrer"&gt;Try Ask Ellie&lt;/a&gt; to understand code, delivery, and team performance directly from chat. &lt;a href="https://entelligence.ai/" rel="noopener noreferrer"&gt;Explore Entelligence&lt;/a&gt; to see how engineering data can be accessed and acted on in one place.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How AI Documentation Tools Cut Onboarding Time by 80%</title>
      <dc:creator>Jay Saadana</dc:creator>
      <pubDate>Tue, 09 Dec 2025 17:30:00 +0000</pubDate>
      <link>https://forem.com/entelligenceai/how-ai-documentation-tools-cut-onboarding-time-by-80-15k5</link>
      <guid>https://forem.com/entelligenceai/how-ai-documentation-tools-cut-onboarding-time-by-80-15k5</guid>
      <description>&lt;p&gt;Developer onboarding is one of the most expensive bottlenecks in engineering organizations. The average company spends $954 per new hire on onboarding, with engineers taking 3-4 weeks to become productive. That's 160 hours spent decoding undocumented systems instead of building features. AI documentation tools are changing this equation dramatically.&lt;/p&gt;

&lt;p&gt;These tools automatically generate comprehensive, always-current documentation from your codebase, giving new engineers instant access to architectural insights, component relationships, and workflow explanations. The result? Teams are cutting onboarding time from 4 weeks to just 3 days, an 80% reduction.&lt;/p&gt;

&lt;p&gt;In this article, we'll explore how AI documentation tools achieve these results, why they're becoming essential for engineering teams, and how Entelligence AI Docs is leading this transformation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AI documentation tools eliminate 80% of onboarding time wasted on decoding undocumented systems, reducing new hire ramp-up from 4 weeks to 3 days.&lt;/li&gt;
&lt;li&gt;Automatic generation creates comprehensive docs in 5 minutes covering architecture, workflows, and component relationships that would take weeks manually.&lt;/li&gt;
&lt;li&gt;Auto updates with every pull request keep documentation perfectly synced with code, solving the chronic problem of outdated docs teams stop trusting.&lt;/li&gt;
&lt;li&gt;Engineering teams report 89% documentation accuracy with 100% codebase coverage, compared to 20-30% typical of manual approaches.&lt;/li&gt;
&lt;li&gt;Entelligence AI Docs combines one-click generation, architectural intelligence, and real-time collaboration to transform technical knowledge management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Makes AI Documentation Different?
&lt;/h2&gt;

&lt;p&gt;Traditional documentation requires engineers to manually write and maintain docs, a task that rarely happens. AI documentation tools take a fundamentally different approach by analyzing your entire codebase to automatically generate comprehensive documentation.&lt;/p&gt;

&lt;p&gt;AI documentation tools examine source files, commit history, pull requests, and code relationships to understand what your code does and how components interact. Entelligence AI Docs can generate complete documentation for your codebase in 3-5 minutes, including architectural overviews, module descriptions, and interaction flows.&lt;/p&gt;
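&lt;p&gt;Entelligence's actual analysis pipeline is proprietary, but one of the signals such tools rely on is easy to illustrate. The sketch below uses Python's standard ast module to extract which modules a file imports, the raw material for mapping component relationships:&lt;/p&gt;

```python
import ast

def module_imports(source):
    """List the modules a Python file imports: one simple signal an AI
    documentation tool can use to map component relationships."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module)
    return sorted(found)

sample = "import os\nfrom billing import invoices\nimport json\n"
print(module_imports(sample))  # ['billing', 'json', 'os']
```

&lt;p&gt;Repeating this across every file yields an import graph, which is one way architectural overviews and module interaction flows can be derived automatically rather than written by hand.&lt;/p&gt;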

&lt;p&gt;The game changer is automatic updates. When you merge code changes, documentation regenerates affected sections instantly. Your documentation always matches your current codebase, eliminating the trust issues that plague manual documentation.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/hXR34QhbPT0"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters for Onboarding
&lt;/h2&gt;

&lt;p&gt;New engineers spend their first month in survival mode. They arrive ready to build, but instead spend weeks decoding your architecture. Every question requires tracking down a senior developer. The frustration compounds when senior engineers lose focus answering repetitive questions, new hires lose confidence, and your team loses velocity.&lt;/p&gt;

&lt;p&gt;This isn't a training problem. It's a documentation problem.&lt;br&gt;
When comprehensive documentation exists, everything changes. New engineers immediately understand your system architecture on day one. They see how components interact and why design decisions were made. Questions shift from "What does this do?" to "Should I use pattern X or Y here?" Senior engineers return to building. New hires start contributing in 3-5 days instead of 3-4 weeks.&lt;/p&gt;

&lt;p&gt;The difference compounds over time. Engineers with proper documentation write better code because they understand the full context. Better onboarding means higher retention and happier teams. The 80% time reduction isn't just about efficiency; it's about building teams that work well together from day one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Business Impact
&lt;/h2&gt;

&lt;p&gt;The transformation is backed by data across engineering organizations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Onboarding Velocity&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;46% of developers in large teams report significant documentation issues&lt;/li&gt;
&lt;li&gt;Teams with comprehensive documentation reduce onboarding time by 80-85%&lt;/li&gt;
&lt;li&gt;First meaningful PR: 3-5 days vs 3-4 weeks without proper docs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Senior Engineer Productivity&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Senior developers typically spend 30%+ of time answering architecture questions&lt;/li&gt;
&lt;li&gt;AI documentation reduces this to under 5%, reclaiming 25% of senior capacity&lt;/li&gt;
&lt;li&gt;1,360 hours saved annually for a 50-person team&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Documentation Coverage &amp;amp; Quality&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual documentation covers only 20-30% of a codebase&lt;/li&gt;
&lt;li&gt;AI generated documentation provides 100% coverage&lt;/li&gt;
&lt;li&gt;89% of teams using AI docs report documentation that stays current&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Developer Experience&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;74% of developers say lack of documentation is their biggest frustration when joining teams&lt;/li&gt;
&lt;li&gt;35% higher retention rates with comprehensive docs&lt;/li&gt;
&lt;li&gt;40% increase in developer satisfaction scores&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These metrics show why leading organizations treat documentation as critical infrastructure.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Does Entelligence Compare?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqry15pv6yi9h9a6be32l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqry15pv6yi9h9a6be32l.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The comparison shows why teams choose Entelligence: it's the only tool combining one-click generation, automatic updates with every PR, and complete architectural mapping.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Entelligence AI Docs Delivers Results
&lt;/h2&gt;

&lt;p&gt;Entelligence AI Docs solves the onboarding challenge with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;5-minute setup&lt;/strong&gt;: Connect your repository and generate comprehensive documentation instantly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Always-current docs&lt;/strong&gt;: Documentation regenerates automatically with every pull request, ensuring accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architectural intelligence&lt;/strong&gt;: Maps system architecture, component interactions, and data flows for complete understanding.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-language support&lt;/strong&gt;: Consistent documentation across TypeScript, Python, Go, Java, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Export flexibility&lt;/strong&gt;: Works with Markdown, Notion, and Confluence.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Documentation debt costs organizations hundreds of thousands annually in lost productivity. AI documentation tools like Entelligence AI Docs eliminate this burden by automatically generating and maintaining comprehensive, trustworthy documentation.&lt;/p&gt;

&lt;p&gt;The 80% reduction in onboarding time is real: engineering teams across the US are experiencing these results today. When new engineers access accurate documentation from day one, they start contributing immediately.&lt;/p&gt;

&lt;p&gt;Try &lt;a href="https://entelligence.ai" rel="noopener noreferrer"&gt;Entelligence AI&lt;/a&gt; Docs free for 14 days and transform your onboarding process.&lt;br&gt;
&lt;a href="https://calendly.com/aiswarya-qlnt/1-1-aiswarya-1?month=2025-09" rel="noopener noreferrer"&gt;Book a demo with Entelligence AI today&lt;/a&gt; and start cutting your onboarding time by 80%.&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>ai</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
    <item>
      <title>We Built a Live Scoreboard for Developers: Now 1K+ Devs Are Competing on It🔥🏂</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Thu, 14 Aug 2025 17:32:47 +0000</pubDate>
      <link>https://forem.com/entelligenceai/introducing-entelligence-engineering-leaderboard-a-real-time-scoreboard-for-developers-1f5e</link>
      <guid>https://forem.com/entelligenceai/introducing-entelligence-engineering-leaderboard-a-real-time-scoreboard-for-developers-1f5e</guid>
      <description>&lt;p&gt;Tired of guessing if you're actually getting better at coding?&lt;/p&gt;

&lt;p&gt;Now you don't have to.&lt;/p&gt;

&lt;p&gt;Today we're launching the &lt;strong&gt;Entelligence Leaderboard,&lt;/strong&gt; a real-time scoreboard that ranks developers and teams based on actual code review performance.&lt;/p&gt;

&lt;p&gt;No extra setup. No forms to fill. Just write code like you always do, and watch your name rise (or fall) on the leaderboard, based on your real contributions.&lt;/p&gt;

&lt;p&gt;It's already live across 500+ devs. Now it's your turn to see where you really stand.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/JhuwVXHy_3M"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is the Leaderboard?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Entelligence Leaderboard is a live scoreboard for developers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It ranks real engineers from real companies based on how solid their PRs are.&lt;/p&gt;

&lt;p&gt;Every 5 minutes, it updates your &lt;strong&gt;Impact Score&lt;/strong&gt; based on actual code behavior:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How much quality code you shipped&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How many bugs you avoided&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How helpful your reviews were&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
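&lt;p&gt;The exact Impact Score formula is internal to Entelligence. Purely as an illustration of how signals like these could combine into a single ranked number, here is a minimal sketch with invented weights:&lt;/p&gt;

```python
# Invented weights for illustration only; the real Impact Score
# formula is internal to Entelligence.
WEIGHTS = {"quality_code": 0.5, "bugs_avoided": 0.3, "review_helpfulness": 0.2}

def impact_score(signals):
    """Combine per-developer signals (each normalized to 0-100) into one score."""
    return round(sum(WEIGHTS[name] * value for name, value in signals.items()), 1)

print(impact_score({"quality_code": 80, "bugs_avoided": 90, "review_helpfulness": 70}))  # 81.0
```

&lt;p&gt;Recomputing a score like this every five minutes from fresh PR activity would produce the live ranking behavior described above.&lt;/p&gt;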

&lt;p&gt;You're not competing against bots or theoretical metrics, you're up against engineers from multiple open-source organizations and startups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage1.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage1.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" alt="Leaderboard Rankings Interface"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything is organized into a clean interface:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Engineers&lt;/strong&gt;: Individual developer rankings with Impact Scores and streaks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Companies&lt;/strong&gt;: How your organization stacks up against others&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Groups&lt;/strong&gt;: Team competitions and community challenges&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Contest Archive&lt;/strong&gt;: Historical performance data from past weeks&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What You'll See When You Use the Leaderboard&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You don't need to install or set up anything. Just head over to &lt;a href="https://www.entelligence.ai/leaderboard" rel="noopener noreferrer"&gt;&lt;strong&gt;entelligence.ai/leaderboard&lt;/strong&gt;&lt;/a&gt; in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.entelligence.ai/leaderboard" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;👉 Try the Leaderboard Now!&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Here's what you'll find:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Live rankings&lt;/strong&gt; that refresh every 5 minutes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Your name and Impact Score&lt;/strong&gt;, already there if your work is public&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tabs for Engineers, Companies, and Groups&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;stream of real-time PRs&lt;/strong&gt; from devs pushing code right now&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's all based on actual PR activity and updates automatically every few minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage3.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage3.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" alt="Live Rankings and Real-time Updates"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Create or Join Groups&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Groups let you track performance with a specific team, whether it's your company, a dev community, or just a few friends.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage4.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage4.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" alt="Create or Join Groups Interface"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To set one up, search for GitHub usernames and invite others. Once they accept, they'll appear in the group leaderboard.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Developer Profiles&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Click on any developer to see their full performance profile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage5.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage5.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" alt="Developer Profile Details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each profile includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A 30-day chart showing daily Impact Scores&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A heatmap that tracks consistency over time&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Current global rank and total score&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tech stack based on their recent work&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Recent pull requests and coding activity&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's a simple way to understand how someone codes over time, not just what they've committed.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Live PR Activity&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The leaderboard includes a live feed of pull requests as they're created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage6.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage6.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" alt="Live PR Activity Feed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each entry shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The size and type of the change&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The calculated Impact Score&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which developer made the change&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click on any entry to view more details or open the developer's full profile.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Share Your Wins&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you want to share how you've been doing, you can use the "Share" button on your profile. It creates a clean, readable card that includes your rank, score, and recent work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage7.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fleaderboard-blog%252Fimage7.png%26w%3D1920%26q%3D75%26dpl%3Ddpl_DjdVgFBnpYoMKSYd5B3sJVQDAvVq" alt="Share Your Performance Card"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can share it anywhere, with your team, online, or just save it for your own records.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How to Join&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To get started, just &lt;a href="https://www.entelligence.ai/" rel="noopener noreferrer"&gt;&lt;strong&gt;sign up on Entelligence&lt;/strong&gt;&lt;/a&gt; using your GitHub account.&lt;/p&gt;

&lt;p&gt;When you sign up, we request read-only access to your GitHub activity. This lets us track your public and private pull requests and calculate your scores automatically, no setup or manual steps required.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Get Started with the Leaderboard&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Skip the usual uncertainty about your coding performance. Whether you're curious about your impact or ready for some friendly competition, get clear answers about your development skills in minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.entelligence.ai/leaderboard" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;👉 Try the Leaderboard Now!&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Everything You Need to Know About the Gemini CLI</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Thu, 17 Jul 2025 19:13:06 +0000</pubDate>
      <link>https://forem.com/entelligenceai/everything-you-need-to-know-about-the-gemini-cli-4k5p</link>
      <guid>https://forem.com/entelligenceai/everything-you-need-to-know-about-the-gemini-cli-4k5p</guid>
      <description>&lt;p&gt;So, Google just released a new CLI coding agent, and yes, it's open source, completely free, and powered by Gemini 2.5 Pro with an impressive 1M token context window. 🤯&lt;/p&gt;

&lt;p&gt;In this post, we'll take a quick dive into what Gemini CLI is all about, how it compares to the best in the market, like Claude Code, where it excels (and also where it struggles!), and my opinion on whether it's worth switching to.&lt;/p&gt;

&lt;p&gt;We'll end the post by asking it to build a quick mini-project to get an idea of how good it is.&lt;/p&gt;

&lt;p&gt;Let's jump in!&lt;/p&gt;

&lt;h2&gt;
  
  
  Brief on Gemini CLI
&lt;/h2&gt;

&lt;p&gt;Gemini CLI is Google's free, open-source terminal agent completely &lt;strong&gt;written in TypeScript&lt;/strong&gt; and powered by Gemini 2.5 Pro (a model with a massive 1 million token context window). It is available via Gemini Code Assist (60 requests/min, 1,000/day).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FGeminiCLI%252Fimg1.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_59TUQbEukNkGzVz9NkJdUPf2DFpW%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FGeminiCLI%252Fimg1.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_59TUQbEukNkGzVz9NkJdUPf2DFpW%2520align%3D" alt="Gemini CLI Intro"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It does support Model Context Protocol (MCP) servers and everything you'd expect, like codebase understanding, code/test generation, bug fixes, and all of that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FGeminiCLI%252Fimg2.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_59TUQbEukNkGzVz9NkJdUPf2DFpW%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FGeminiCLI%252Fimg2.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_59TUQbEukNkGzVz9NkJdUPf2DFpW%2520align%3D" alt="Gemini CLI MCP servers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Gemini CLI uses a reason-and-act (ReAct) loop with built-in tools and local or remote MCP servers to handle complex tasks like fixing bugs, adding new features, and improving test coverage.&lt;/p&gt;
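&lt;p&gt;To make that loop concrete, here's a toy Python sketch of the ReAct pattern. This is not Gemini CLI's actual code, and the model and tool here are fakes; it just shows the shape of the loop: ask the model for the next step, execute that step as a tool call, and feed the observation back in.&lt;/p&gt;

```python
# Toy illustration of a reason-and-act (ReAct) loop, NOT Gemini CLI's
# real implementation. The "model" and "tool" below are stand-ins.

def fake_model(history):
    """Stand-in for an LLM: picks the next action from the transcript."""
    if "OBSERVATION: 4 files" in history:
        return {"type": "answer", "text": "The project has 4 files."}
    return {"type": "action", "tool": "count_files", "arg": "."}

def count_files(_path):
    # Stand-in tool; a real agent would list the directory or shell out.
    return "4 files"

TOOLS = {"count_files": count_files}

def react_loop(task, max_steps=5):
    history = f"TASK: {task}"
    for _ in range(max_steps):
        step = fake_model(history)
        if step["type"] == "answer":               # model decided it's done
            return step["text"]
        result = TOOLS[step["tool"]](step["arg"])  # act: run the tool
        history += f"\nOBSERVATION: {result}"      # feed observation back
    return "gave up"

print(react_loop("How many files are in the project?"))
```

&lt;p&gt;The real CLI does the same dance, except the "tools" are things like file edits, shell commands, and MCP servers.&lt;/p&gt;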

&lt;p&gt;In short, it's Google's response to Anthropic's Claude Code.&lt;/p&gt;

&lt;p&gt;Here are some of the key capabilities of Gemini CLI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Editing and Refactoring&lt;/strong&gt;: It can understand your codebase well and can especially help refactor your codebase more easily.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bug Fixing&lt;/strong&gt;: It goes without saying, since it gets auto context on your codebase, it can help you find and fix bugs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test and Documentation Generation&lt;/strong&gt;: It can generate tests for your codebase and can also help you write documentation, especially for wikis and READMEs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manage Conversation History and Memory&lt;/strong&gt;: Much like managing your chats and memory in ChatGPT, you can do the same in Gemini CLI. This lets you keep contexts separated and get much better responses from the AI model.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All you need to use this tool is Node.js, a personal Google account, and, obviously, a terminal. 😉&lt;/p&gt;

&lt;p&gt;The team has shared this cool demo showing how this AI agent can use MCP servers (in this case, Veo and Imagen) to generate a short video of a cat adventure around Australia.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/KovEpdvVI4U"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Look, we're talking 60 model requests per minute and 1,000 requests per day at zero cost with all of these features. I think it's a win-win. 😋&lt;/p&gt;

&lt;p&gt;You get all of these features for free, when comparable access to Claude Code can cost around $200/month, so you know how big of a deal this is.&lt;/p&gt;




&lt;h2&gt;
  
  
  Is this just another Claude Code?
&lt;/h2&gt;

&lt;p&gt;I wouldn't say it is just another Claude Code, but it's definitely &lt;strong&gt;just another CLI coding AI Agent&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So what's new about it? To me, the biggest selling point of Gemini CLI is that it's completely free with generous usage limits and fully open source, unlike the closed-source Claude Code.&lt;/p&gt;

&lt;p&gt;Anything special other than that? &lt;strong&gt;No&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;Gemini's recent models have been amazing with coding. Gemini 2.5 Pro is one of the best yet cheapest models right now when it comes to coding, and Gemini CLI brings all of that right to the terminal.&lt;/p&gt;

&lt;p&gt;You get a massive 1 million token context window model, Gemini 2.5 Pro, all for free? What else can you even ask for, right?&lt;/p&gt;

&lt;p&gt;It seems like the big AI players are fighting for a spot in the terminal, essentially taking on the developer AI ecosystem.&lt;/p&gt;

&lt;p&gt;First, there was Claude Code released in February, then OpenAI Codex in April, and now Gemini CLI in June.&lt;/p&gt;

&lt;p&gt;So basically, you've got a "terminal agent" from all three AI giants! 🥴&lt;/p&gt;




&lt;h2&gt;
  
  
  My experience with it compared to Claude Code
&lt;/h2&gt;

&lt;p&gt;So does it compare? I would say straight up &lt;strong&gt;NO!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Don't get me wrong, it's great, it has a super cool free usage limit, and you can go test it right now. The fact that it's completely open source with the Apache 2.0 License is great.&lt;/p&gt;

&lt;p&gt;But, when compared to Claude Code, it simply does not stand any chance. Anthropic does some real magic when it comes to working with agentic features.&lt;/p&gt;

&lt;p&gt;The tool itself is great, but I've run into quite a few issues with it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extremely slow&lt;/strong&gt;: I'm not exaggerating, it's just too slow. Even the smallest changes can take 5-10 minutes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feels super dumb&lt;/strong&gt;: After two or three days of testing this AI agent, it feels like you're not really talking to Gemini 2.5 Pro but a &lt;strong&gt;dumbed-down&lt;/strong&gt; version of it. I've gotten much better results in Google AI Studio for the same tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lots of errors&lt;/strong&gt;: It's a fairly new tool, so it's likely to have errors. I don't know if it's fair to compare on this just yet, but I've run into a lot of errors where it just crashes (runtime errors!).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll see a new issue raised on GitHub every few minutes, and yes, &lt;strong&gt;every few minutes&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/google-gemini/gemini-cli/issues" rel="noopener noreferrer"&gt;google-gemini/gemini-cli&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick Coding Test
&lt;/h2&gt;

&lt;p&gt;Enough theory, let's see how well and how quickly it can code a simple ping-pong game. I'm doing this quick test just to give you an idea of how well it works and how long it takes to complete a project.&lt;/p&gt;

&lt;p&gt;So many folks are building stuff with Gemini CLI. Take this as a reference: 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://x.com/sawyerhood/status/1912693186474222050" rel="noopener noreferrer"&gt;https://x.com/sawyerhood/status/1912693186474222050&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Ping-Pong Game
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt; Build a cool looking ping pong game in a single file using HTML, CSS and JS. Include dynamic lighting, shadows and smooth paddle/ball animation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I used a slightly updated version of this prompt, asking it to add a few new features on top, and here's how it went.&lt;/p&gt;

&lt;p&gt;Here's the code it generated: &lt;a href="https://gist.github.com/shricodev/0f4558cc59192a71b0e834a80e5d5fe0" rel="noopener noreferrer"&gt;Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's the output of the program:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/lHnC7M2P--0"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This one didn't take much time. I was able to wrap it up within 2–3 minutes. It was just a simple one-file code, and I must say, it did a decent job here.&lt;/p&gt;

&lt;p&gt;However, it didn't implement the lighting, and the implementation isn't completely correct: the ball sometimes spawns randomly in the middle of the game, and the shadows and UI aren't that good-looking, but that's fine.&lt;/p&gt;

&lt;p&gt;Overall, as you can see, it can do a decent job at coding, and for a casual "vibe coding" session, it can be a great help.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Gemini CLI shines in its generous usage limits and open-source license. But you cannot compare it to Claude Code in terms of performance and overall quality (not yet!).&lt;/p&gt;

&lt;p&gt;Still, for a tool that gets you Gemini 2.5 Pro right in the terminal with real-time web context, multi-modal outputs, and complete API/script integration, all completely free and open source, it's hard to beat. This makes terminal AI agents more accessible than ever.&lt;/p&gt;

&lt;p&gt;It may not be the best tool right now, but it's definitely a serious player in the terminal AI ecosystem.&lt;/p&gt;

&lt;p&gt;It's a no-brainer alternative to Claude Code if you are looking for a cheaper yet similar tool, and I'd definitely suggest you give this a shot.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🤔 And by the way, why is every new coding-agent CLI written in TypeScript? Does it really make sense to build a CLI on Node when alternatives like Go or even Rust exist?&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;Entelligence AI improves code quality and reduces developer burnout by automating code reviews to identify potential issues and deliver instant, context-aware PR feedback. If you want to accelerate your productivity and ensure code integrity, check out the &lt;a href="https://www.entelligence.ai/" rel="noopener noreferrer"&gt;tool&lt;/a&gt;.&lt;br&gt;
&lt;a href="https://dub.sh/entelligence" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Install Entelligence AI VS Code Extension⛵&lt;/a&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Claude 3.7 vs Gemini 2.5 pro for Coding</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Mon, 07 Jul 2025 18:35:45 +0000</pubDate>
      <link>https://forem.com/entelligenceai/claude-37-vs-gemini-25-pro-for-coding-olm</link>
      <guid>https://forem.com/entelligenceai/claude-37-vs-gemini-25-pro-for-coding-olm</guid>
      <description>&lt;p&gt;Five months into 2025, upgraded large language models (LLMs) were released into the AI ecosystem, promising advanced coding capabilities for developers and organizations. Two of the most talked-about AI models for coding this quarter are &lt;strong&gt;Claude 3.7 Sonnet&lt;/strong&gt; and &lt;strong&gt;Gemini 2.5 Pro&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Both models are positioning themselves as coding powerhouses, but which one actually delivers on this promise?&lt;/p&gt;

&lt;p&gt;In this article, we will compare Claude 3.7 vs. Gemini 2.5 Pro, analyzing their performance, efficiency, and accuracy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Model Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fimages%252FClaudeVsGemini.jpg%26w%3D1920%26q%3D100%26dpl%3Ddpl_5aydLEkvWk8S6nEzrKVXP45fSysE%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fimages%252FClaudeVsGemini.jpg%26w%3D1920%26q%3D100%26dpl%3Ddpl_5aydLEkvWk8S6nEzrKVXP45fSysE%2520align%3D" alt="Claude 3.7 Sonnet Overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://www.anthropic.com/news/claude-3-7-sonnet" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Anthropic released Claude 3.7 Sonnet in February 2025. It is marketed as their first "hybrid reasoning model" that switches between standard and extended thinking modes. Hence, it can produce quick responses or engage in step-by-step thinking, depending on the user's preference and tier.&lt;/p&gt;

&lt;p&gt;Claude 3.7 scored 62.3% (70.3% with a custom scaffold) on SWE-bench Verified (agentic coding), currently the top score on the benchmark with that scaffold. The model also supports a 200k token context window, enough to serve you for everyday coding tasks.&lt;/p&gt;

&lt;p&gt;You can use Claude 3.7 through a &lt;a href="https://claude.ai/" rel="noopener noreferrer"&gt;Claude&lt;/a&gt; account, &lt;a href="https://www.anthropic.com/api" rel="noopener noreferrer"&gt;Anthropic API&lt;/a&gt;, Vertex AI, and Amazon Bedrock. The model is available on all Claude's plans, but free tier users can't access the extended thinking mode. Anthropic currently charges $3 for every 1 million input tokens and $15 for every 1 million output tokens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fimages%252FClaudeVsGemini%25201.jpg%26w%3D1920%26q%3D100%26dpl%3Ddpl_5aydLEkvWk8S6nEzrKVXP45fSysE%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fimages%252FClaudeVsGemini%25201.jpg%26w%3D1920%26q%3D100%26dpl%3Ddpl_5aydLEkvWk8S6nEzrKVXP45fSysE%2520align%3D" alt="Gemini 2.5 Pro Overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/#enhanced-reasoning" rel="noopener noreferrer"&gt;Google&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Following suit, Google released Gemini 2.5 Pro in March 2025. Google calls it the "thinking model," explicitly designed to handle advanced coding and complex problems through enhanced reasoning. The model supports a 1 million token context window, which is 5 times larger than what Claude 3.7 currently offers. This increased context window means Gemini 2.5 Pro can handle large codebases and complex projects in a single prompt without performing poorly.&lt;/p&gt;

&lt;p&gt;Gemini 2.5 Pro scored 63.8% on SWE-bench Verified, slightly above Claude 3.7's standard score, though below its custom-scaffold result. The model, however, tops the board for many benchmarks, including mathematics, code editing, and visual reasoning, where it scored 86.7%/92%, 74%, and 81.7%, respectively.&lt;/p&gt;

&lt;p&gt;You can access Gemini 2.5 Pro and its API through &lt;a href="https://aistudio.google.com/" rel="noopener noreferrer"&gt;Google AI Studio&lt;/a&gt; or select the model from the dropdown menu in the Gemini app. It is currently free for limited use and then offers token-based pricing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Coding Capabilities
&lt;/h2&gt;

&lt;p&gt;Both Anthropic and Google claim their respective models excel at development tasks. So, let's assess how these competing models perform across different coding metrics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Generation
&lt;/h3&gt;

&lt;p&gt;Both models are great at generating functional code. However, Claude 3.7 provides cleaner and more structured code than Gemini 2.5 Pro, although it might need a few revisions.&lt;/p&gt;

&lt;p&gt;One interesting feature of Claude 3.7 is that if you're using its API, you can specify the number of tokens the model should spend thinking before answering. The output limit is currently set to 128K tokens, which helps you balance speed, cost, and quality based on your specific needs.&lt;/p&gt;
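&lt;p&gt;As a rough sketch, here's what such a request body can look like. The &lt;code&gt;thinking&lt;/code&gt; field shape follows Anthropic's extended-thinking documentation as I understand it, so double-check the official Messages API reference before relying on it.&lt;/p&gt;

```python
import json

# Illustrative Anthropic Messages API request body with an explicit
# thinking budget. The "thinking" field shape follows Anthropic's
# extended-thinking docs as I understand them; verify before use.
payload = {
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 2048,              # cap on the visible answer
    "thinking": {
        "type": "enabled",
        "budget_tokens": 16000,      # tokens Claude may spend reasoning
    },
    "messages": [
        {"role": "user", "content": "Refactor this function for clarity: ..."}
    ],
}

# This body would be POSTed to the Messages endpoint with your API key;
# here we just print it to show the structure.
print(json.dumps(payload, indent=2))
```

&lt;p&gt;Raising &lt;code&gt;budget_tokens&lt;/code&gt; trades latency and cost for deeper reasoning; lowering it gets you faster, cheaper answers.&lt;/p&gt;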

&lt;p&gt;Conversely, Gemini 2.5 Pro is great for efficient, production-ready code and explains the key concepts used within the code. However, you should expect occasional bugs. The model also offers different settings in Google AI Studio, such as temperature (which controls the level of creativity allowed in the response), so you have more control over the output. Its output limit is presently set to 65,536 tokens.&lt;/p&gt;
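&lt;p&gt;For comparison, here's a sketch of a Gemini API &lt;code&gt;generateContent&lt;/code&gt; request body with those settings set explicitly. The &lt;code&gt;generationConfig&lt;/code&gt; field names follow Google's REST documentation as I understand it, so treat this as illustrative.&lt;/p&gt;

```python
import json

# Illustrative Gemini API generateContent request body. Field names
# (contents, generationConfig, temperature, maxOutputTokens) follow
# Google's REST docs as I understand them; verify before use.
payload = {
    "contents": [
        {"parts": [{"text": "Write a Python function that merges two sorted lists."}]}
    ],
    "generationConfig": {
        "temperature": 0.2,        # lower = more deterministic code
        "maxOutputTokens": 1024,   # well under the 65,536-token ceiling
    },
}
print(json.dumps(payload, indent=2))
```

&lt;p&gt;For coding tasks, a low temperature like this tends to give more repeatable output; crank it up when you want more creative suggestions.&lt;/p&gt;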

&lt;h3&gt;
  
  
  Code Completion
&lt;/h3&gt;

&lt;p&gt;Claude 3.7 provides relevant recommendations with various alternatives to complete the code, although the model’s responses can sometimes be padded with fluff. Gemini 2.5 Pro is more concise and produces more creative, out-of-the-box suggestions. Both models excel at understanding the semantics, syntax, and context of different programming languages to predict the next line of code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Debugging and Error Explanation
&lt;/h3&gt;

&lt;p&gt;Claude 3.7 is better at debugging as it provides a more detailed and precise analysis of the problem, especially with its extended thinking mode. This process helps you understand the reasoning behind the model's suggestions.&lt;/p&gt;

&lt;p&gt;Moreover, Claude 3.7 makes safe edits without breaking existing functionality. The model can also be slightly better at handling test cases than Gemini 2.5 Pro. However, Claude 3.7 mostly performs well on small, logic-focused projects.&lt;/p&gt;

&lt;p&gt;If you want deeper, production-level debugging and refactoring, Gemini 2.5 Pro does a better job. Like Claude 3.7, the model also returns step-by-step explanations, although its response can sometimes be unnecessarily verbose. Yet, by leveraging its multimodal capabilities, Gemini 2.5 Pro can better pinpoint specific issues in large projects than Claude 3.7.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-Language Support
&lt;/h3&gt;

&lt;p&gt;Gemini 2.5 Pro and Claude 3.7 support multiple languages, including mainstream programming languages, like JavaScript and Python, and niche languages like Rust and Go. Still, both models perform better with popular languages, likely due to their representation in training data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Context and Prompts
&lt;/h2&gt;

&lt;p&gt;Due to its 1M token context window, Gemini 2.5 Pro can maintain context during long conversations. The model is also great at understanding complex instructions in one prompt, unlike Claude 3.7, which often needs extra tweaks to produce better results.&lt;/p&gt;

&lt;p&gt;Nonetheless, Claude 3.7 is still a worthy contender. The model scored an impressive 93.2% on the IFEval (instruction following) benchmark with extended thinking and 90.8% in standard mode. Hence, Claude 3.7 can also interpret and execute instructions effectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fimages%252FClaudeVsGemini%25202.jpg%26w%3D1920%26q%3D100%26dpl%3Ddpl_5aydLEkvWk8S6nEzrKVXP45fSysE%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fimages%252FClaudeVsGemini%25202.jpg%26w%3D1920%26q%3D100%26dpl%3Ddpl_5aydLEkvWk8S6nEzrKVXP45fSysE%2520align%3D" alt="IFEval Benchmark for Claude 3.7"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://www.anthropic.com/news/claude-3-7-sonnet" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Despite its 200k token context window, Claude 3.7 can maintain context in multi-turn conversations with more nuanced understanding than Gemini 2.5 Pro. The model's chain-of-thought is also powerful, especially when using extended thinking mode.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Quality and Accuracy
&lt;/h2&gt;

&lt;p&gt;Claude 3.7 writes readable code but can sometimes lack robustness. The model can also recognize and correct its own mistakes. Gemini 2.5 Pro, on the other hand, writes maintainable, well-commented code that's easy to modify and update. Its code also functions correctly under most expected conditions. Both models produce reliable code, but you might still have bugs to fix.&lt;/p&gt;

&lt;p&gt;The reality is that no LLM produces 100% accurate code at all times. Therefore, you have to tweak the models' input and output to attain the level of correctness, readability, and efficiency you desire. It's also essential to test and review all code generated by these models to catch quality issues and resolve them promptly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Entelligence AI improves code quality and reduces developer burnout by automating code reviews to identify potential issues and deliver instant, context-aware PR feedback. If you want to accelerate your productivity and ensure code integrity, check out the &lt;a href="https://www.entelligence.ai/" rel="noopener noreferrer"&gt;tool&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://dub.sh/entelligence" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Install Entelligence AI VS Code Extension⛵&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Speed and Responsiveness
&lt;/h2&gt;

&lt;p&gt;Gemini 2.5 Pro has impressive processing speed, even in complex coding scenarios. However, Claude 3.7 is not far behind; its responses are almost instantaneous in standard mode. Even when either model occasionally takes longer to respond, it's usually worth the wait.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations and Common Pitfalls
&lt;/h2&gt;

&lt;p&gt;Both models have their shortcomings. Some developers have noted that Claude 3.7 tends to make simple situations overly complex and to make changes the user didn't request. The model's performance also sometimes degrades on multimodal tasks compared with Gemini 2.5 Pro, and it can struggle with high-volume, computationally intensive requests.&lt;/p&gt;

&lt;p&gt;For Gemini 2.5 Pro, the issue usually lies in missing key details and subtle implications that are important to produce a well-rounded result. So, it's better at broader, more generalized coding tasks.&lt;/p&gt;

&lt;p&gt;Occasionally, both models hallucinate, especially after lengthy conversations or processing large amounts of information. Therefore, it's still crucial that you verify every output, especially in high-stakes situations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Case Recommendations
&lt;/h2&gt;

&lt;p&gt;Gemini 2.5 Pro performs better at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Improving structure and maintainability across large codebases&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multimodal debugging, including diagram analysis and UI inspection&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handling mathematically heavy coding tasks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maintaining context across complex multi-file projects&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handling multi-repository projects&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Claude 3.7 Sonnet is excellent for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;High-level summaries with deep dives into code behavior&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Building and implementing functionality across the frontend, backend, and API layers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creating complex agent workflows with precision&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Superior frontend design&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There's no “overall best model for coding” since both models perform well depending on the particular use case. The best approach is to use one model's strengths to compensate for the other's weaknesses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Each model has its highlights and drawbacks. Thus, your specific project requirements and technological needs will determine which model is the right choice. Gemini 2.5 Pro is best for multimodal tasks, real-time performance, and complex coding challenges, but if you want precision and comprehensive reasoning, then Claude 3.7 will serve you better.&lt;/p&gt;

&lt;p&gt;Ultimately, Claude 3.7 Sonnet and Gemini 2.5 Pro prove that the future of AI in coding will only get more exciting. These models are changing how developers write code and interact with their development environments, so you can expect more innovative advancements that will push the boundaries of what's currently possible.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>javascript</category>
    </item>
    <item>
      <title>5 Tools That Helped Me Catch 70% More Bugs in the Codebase [Important!]</title>
      <dc:creator>Pankaj Singh</dc:creator>
      <pubDate>Mon, 30 Jun 2025 14:49:40 +0000</pubDate>
      <link>https://forem.com/entelligenceai/5-tools-that-helped-me-catch-70-more-bugs-in-the-codebase-important-3phk</link>
      <guid>https://forem.com/entelligenceai/5-tools-that-helped-me-catch-70-more-bugs-in-the-codebase-important-3phk</guid>
      <description>&lt;p&gt;Ever since I joined the enterprise team, I’ve been obsessed with squashing bugs early. It turns out I’m not alone, studies show static analysis tools alone can detect up to 70% of potential code defects. Even more impressively, advanced AI code-review systems claim to catch around 90% of common issues. Intriguing, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjkyu0xzlws1lox0ljlv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjkyu0xzlws1lox0ljlv.gif" alt="bugs" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By combining the right tools, from AI-driven code review to automated tests and monitoring, I managed to boost the number of bugs we catch before release by roughly 70%.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. &lt;a href="https://dub.sh/vceKS9z" rel="noopener noreferrer"&gt;Entelligence AI Code Review&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0o5szqh4i6xfsb3fpsvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0o5szqh4i6xfsb3fpsvg.png" alt="entelligence" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I started embedding Entelligence’s real-time AI reviewer directly in my IDE and immediately saw results. It’s like having a savvy teammate checking my code as I type. In fact, the makers of &lt;a href="https://dub.sh/L1Iq9Jv" rel="noopener noreferrer"&gt;Entelligence&lt;/a&gt; boast that this IDE integration “helps you catch bugs and improve code quality instantly”. The AI flags issues and even suggests fixes before I commit to GitHub. Because it supports dozens of languages, I could use it across our whole stack (&lt;a href="https://www.python.org/" rel="noopener noreferrer"&gt;Python&lt;/a&gt;, &lt;a href="https://www.w3schools.com/js/" rel="noopener noreferrer"&gt;JavaScript&lt;/a&gt;, Java, etc.). Using Entelligence, I routinely caught subtle logic and design flaws early, massively cutting down the number of defects slipping into code reviews or production.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. &lt;a href="https://www.sonarsource.com/products/sonarqube/" rel="noopener noreferrer"&gt;SonarQube (Static Analysis)&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4i9kk77kfx0kscigw1n6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4i9kk77kfx0kscigw1n6.png" alt="SonarQube" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, I set up SonarQube scans as part of our build. SonarQube is a static analysis tool that “detects bugs, vulnerabilities, and code smells” across 29+ languages. Whenever new code is pushed, SonarQube’s automated quality gate kicks in, highlighting issues immediately. This makes clean-up proactive: developers fix unsafe patterns or unused variables before merging. In practice, we found this was powerful – it turns out static analysis can catch roughly 70% of defects before runtime. By addressing these flagged issues in SonarQube early, our team drastically reduced the trivial bugs that used to blow up later, improving overall code reliability and maintainability.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;CI/CD Pipelines &amp;amp; Automated Tests&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1eb3cdjva6ojvw695bc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1eb3cdjva6ojvw695bc7.png" alt="githubactions" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also overhauled our CI/CD pipelines (using Jenkins/GitHub Actions) to run thorough test suites on every commit. Now, each pull request triggers automated unit and integration tests (JUnit, Jest, etc.) along with the static scans. This means catching bugs the moment they appear. Tools like Jenkins and GitHub Actions “trigger automated unit tests after each code commit,” effectively catching software bugs at the early stages of development.&lt;/p&gt;

&lt;p&gt;In my experience, this CI-driven testing caught countless edge cases and regressions right away – issues that otherwise would have reached QA or production. Automating tests in the pipeline has not only stopped obvious bugs (like broken API responses) from merging, but also given me quick feedback so my team can fix defects immediately.&lt;/p&gt;
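&lt;p&gt;For illustration, here's the kind of small unit test such a pipeline runs on every commit. The &lt;code&gt;slugify&lt;/code&gt; function is made up and defined right here for the demo; the point is that cheap, fast checks like these fail a pull request before a regression ever reaches QA.&lt;/p&gt;

```python
# Hypothetical example of a unit test a CI pipeline would run on every
# commit; `slugify` is a made-up function defined here for the demo.
import re

def slugify(title: str) -> str:
    """Turn an article title into a URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    assert slugify("AI --- Agents!!") == "ai-agents"

if __name__ == "__main__":
    test_slugify_basic()
    test_slugify_collapses_separators()
    print("all tests passed")
```

&lt;p&gt;In a real pipeline a runner like pytest collects and executes these automatically on each push, so a failing assertion blocks the merge.&lt;/p&gt;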

&lt;h2&gt;
  
  
  4. &lt;a href="https://sentry.io/welcome/" rel="noopener noreferrer"&gt;Sentry (Error Monitoring)&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frznh45k3jb5f3lp7k1hz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frznh45k3jb5f3lp7k1hz.png" alt="sentry" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Despite all the up-front checks, some bugs inevitably slipped through – that’s where Sentry came in. Sentry is an application monitoring and error-tracking tool that automatically captures exceptions, crashes, and slowdowns in real time. In practice it was a lifesaver: once Sentry was integrated, I began seeing every production and staging error with full context. &lt;/p&gt;

&lt;p&gt;As one summary puts it, “Sentry helps engineering teams identify and fix bugs faster by automatically capturing exceptions, crashes, and … performance transactions”. Using Sentry, whenever an error popped up in our distributed services, I got notified immediately with stack traces. This meant catching user-impacting bugs instantly (often before customers even noticed) and reducing downtime. Today Sentry is used by 100,000+ organizations, and it’s been a huge help in making sure no runtime bug goes unnoticed.&lt;/p&gt;
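
&lt;p&gt;Hooking a Node service into Sentry takes only a few lines. This is a setup sketch using the standard &lt;code&gt;@sentry/node&lt;/code&gt; API – the DSN value is a placeholder and &lt;code&gt;riskyOperation&lt;/code&gt; is a hypothetical function standing in for your own code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Illustrative setup sketch; the DSN below is a placeholder
const Sentry = require('@sentry/node');

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // your project DSN
  tracesSampleRate: 1.0, // also capture performance transactions
});

try {
  riskyOperation(); // hypothetical: any code that might throw
} catch (err) {
  Sentry.captureException(err); // ships the error, with stack trace, to Sentry
  throw err;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;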

&lt;h2&gt;
  
  
  5. &lt;a href="https://eslint.org/" rel="noopener noreferrer"&gt;Linters &amp;amp; Static Type Checking&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0phvn8nfktju96mwoxk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0phvn8nfktju96mwoxk2.png" alt="ESlint" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, I wouldn’t ignore the basics: linting and type-checking tools. Linters like ESLint (for JavaScript) or Pylint (for Python) automatically scan code for common mistakes or style issues as you write. These tools “automate checking of source code for programmatic errors”. In fact, using lint tools can “reduce errors and improve overall quality” by forcing developers to fix mistakes earlier. We also gradually converted key modules to TypeScript and enabled strict mode. The result was that trivial bugs (like undefined variables or wrong function calls) were caught by the compiler or linter before testing even began. By treating linter warnings as errors in CI, I eliminated a huge number of small bugs and inconsistencies up-front.&lt;/p&gt;
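
&lt;p&gt;A few lines of ESLint config are enough to turn these checks on. This is an illustrative &lt;code&gt;.eslintrc.json&lt;/code&gt;, not our exact ruleset:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "root": true,
  "extends": ["eslint:recommended"],
  "parserOptions": { "ecmaVersion": 2022, "sourceType": "module" },
  "rules": {
    "no-unused-vars": "error",
    "no-undef": "error"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To treat warnings as errors in CI, you can run &lt;code&gt;eslint . --max-warnings 0&lt;/code&gt; so any warning fails the build.&lt;/p&gt;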

&lt;p&gt;Each of these tools tackles bugs at different stages – from writing code to shipping it – and together they formed a safety net across our entire stack. The combined effect was clear: our bug count dropped dramatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvvsj70h26iuyrhafbsg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvvsj70h26iuyrhafbsg.gif" alt="done" width="245" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;In the enterprise world, delivering quality code is non-negotiable, and skipping any of these tools leaves gaps.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Don’t miss out on Entelligence for instant AI feedback, SonarQube for deep static scans, CI pipelines with automated tests for early regression checks, Sentry for runtime visibility, or good old linters/type checking for first-line defense. Adopting all of them means catching issues at every step. I’ve seen it personally. Ready to up your quality game? Start integrating these tools today and watch those elusive bugs vanish.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Let me know in the comments below if you use any tools in the same space!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>productivity</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Cursor BugBot vs Entelligence</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Sat, 28 Jun 2025 11:58:06 +0000</pubDate>
      <link>https://forem.com/entelligenceai/cursor-bugbot-vs-entelligence-37d9</link>
      <guid>https://forem.com/entelligenceai/cursor-bugbot-vs-entelligence-37d9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The launch of the &lt;a href="https://x.com/EntelligenceAI/status/1930304370643779867" rel="noopener noreferrer"&gt;&lt;strong&gt;Entelligence AI extension&lt;/strong&gt;&lt;/a&gt; for &lt;strong&gt;VS Code, Cursor, and Windsurf&lt;/strong&gt; introduces an in-IDE code reviewer that provides immediate feedback before you even open a pull request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25201.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25201.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" alt="Entelligence AI Extension"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, we'll walk through how &lt;strong&gt;Entelligence AI&lt;/strong&gt; stacks up against &lt;strong&gt;Cursor (BugBot).&lt;/strong&gt; Whether you're focused on deep code reviews, quick fixes, or streamlined workflows, you'll see which tool fits your style and why Entelligence AI might be just what you need.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Entelligence AI?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="http://entelligence.ai/" rel="noopener noreferrer"&gt;Entelligence.AI&lt;/a&gt; is your team's AI-powered engineering intelligence platform that streamlines development, enhances collaboration, and accelerates engineering productivity. It works as a quiet companion around your codebase, helping your team stay aligned without changing how you work.&lt;/p&gt;

&lt;p&gt;Instead of asking you to follow new processes, it supports everyday tasks like reviewing pull requests, onboarding, and tracking team performance. It's built to handle the important things that often get missed.&lt;/p&gt;

&lt;p&gt;It also respects your privacy: your code is never used for training, and you can self-host it if needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dub.sh/entelligence" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Entelligence AI VS Code Extension⛵&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is BugBot?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.cursor.com/bugbot" rel="noopener noreferrer"&gt;&lt;strong&gt;BugBot&lt;/strong&gt;&lt;/a&gt; is Cursor's built-in tool for reviewing pull requests on GitHub. Once installed, it runs automatically (or when you ask via &lt;code&gt;bugbot run&lt;/code&gt;) and scans your PRs for potential bugs or issues.&lt;/p&gt;

&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pull request checks:&lt;/strong&gt; Every time you open or update a PR, BugBot reviews the changed code and leaves comments on any possible mistakes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Quick fix flow:&lt;/strong&gt; If BugBot finds something, it adds a "Fix in Cursor" link; clicking it opens your editor with the problem context pre-loaded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexible setup:&lt;/strong&gt; You can set it to run all the time or only when called, and decide whether to show outcomes when no issues are found.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Team settings:&lt;/strong&gt; Repo admins can enable or disable it per repo, set cost limits, and manage permissions across teams.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;BugBot is part of Cursor's version 1.0 release and comes with a 7-day free trial. After that, it requires a subscription to Cursor's Max mode.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25202.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25202.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" alt="BugBot Interface"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison: Entelligence AI vs Cursor's BugBot
&lt;/h2&gt;

&lt;p&gt;Choosing a code review tool that works inside your IDE can be tricky, especially when multiple tools feel similar at first. To make it easier, we tested both &lt;strong&gt;Entelligence AI&lt;/strong&gt; and &lt;strong&gt;Cursor's BugBot&lt;/strong&gt; in a simple React app called &lt;em&gt;Should I Do It?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It uses an open API and basic async logic, so we could check how each tool handles real-world code: fetch requests, error handling, component structure, and async bugs.&lt;/p&gt;

&lt;p&gt;Instead of going broad, we focused on things that matter during actual development, not just what's on a landing page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Review
&lt;/h3&gt;

&lt;p&gt;One major difference between &lt;strong&gt;Entelligence AI&lt;/strong&gt; and &lt;strong&gt;Cursor's BugBot&lt;/strong&gt; is &lt;em&gt;when&lt;/em&gt; they let you review your code.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Entelligence AI&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;With Entelligence, you don't have to wait to raise a pull request. It reviews your changes directly in the editor, so you can get suggestions as you go, before your code even leaves your branch.&lt;/p&gt;

&lt;p&gt;We tested this on our intentionally badly written &lt;code&gt;fetchAnswer.js&lt;/code&gt; function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;fetchAnswer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://yesno.wtf/api&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;GET&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Accept&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*/*&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Cache-Control&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;no-cache&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Pragma&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;no-cache&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="n"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;follow&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="n"&gt;referrerPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;no-referrer&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="n"&gt;let&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nf"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;networkErr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Maybe the internet is down? Or maybe not.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;networkErr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Some error happened&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Fetch result is empty or undefined or null or broken&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;maybe&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://placekitten.com/200/200&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt; &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;placeholder&lt;/span&gt; &lt;span class="n"&gt;nonsense&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;204&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nf"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jsonError&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;JSON might be corrupted or evil&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jsonError&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;error-parsing-json&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="n"&gt;typeof&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;object&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Data is not what we expected, but let&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="n"&gt;just&lt;/span&gt; &lt;span class="n"&gt;go&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;);
        return { answer: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="err"&gt;¯&lt;/span&gt;\&lt;span class="nf"&gt;_&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ツ&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="err"&gt;¯&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, image: &lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="s"&gt; };
      }

      if (data &amp;amp;&amp;amp; Object.keys(data).length &amp;gt; 0 &amp;amp;&amp;amp; data.answer &amp;amp;&amp;amp; data.image) {
        return {
          answer: `${data.answer}`,
          image: `${data.image}`
        };
      } else {
        console.log(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="n"&gt;Something&lt;/span&gt; &lt;span class="n"&gt;was&lt;/span&gt; &lt;span class="n"&gt;missing&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;but&lt;/span&gt; &lt;span class="n"&gt;let&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s not worry too much&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;almost&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://http.cat/404&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Status was weird: &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;uncertain&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://http.cat/500&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nf"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Global meltdown&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;panic&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://http.cat/418&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="n"&gt;export&lt;/span&gt; &lt;span class="n"&gt;default&lt;/span&gt; &lt;span class="n"&gt;fetchAnswer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what Entelligence pointed out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unnecessary console logs&lt;/strong&gt; clutter the code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use of &lt;code&gt;let&lt;/code&gt; for &lt;code&gt;url&lt;/code&gt;&lt;/strong&gt; when &lt;code&gt;const&lt;/code&gt; would be more appropriate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Overuse of template literals&lt;/strong&gt; like &lt;code&gt;${data.answer}&lt;/code&gt; when &lt;code&gt;data.answer&lt;/code&gt; would work fine&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Incorrect handling of HTTP status codes&lt;/strong&gt;, like treating &lt;code&gt;204&lt;/code&gt; (No Content) the same as &lt;code&gt;200&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
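
&lt;p&gt;The last point deserves a concrete sketch: a 204 response has no body, so calling &lt;code&gt;json()&lt;/code&gt; on it is a mistake. This is our own illustration of the fix (&lt;code&gt;readAnswer&lt;/code&gt; is a hypothetical helper, not code suggested by either tool):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Hypothetical fix sketch: handle 204 (No Content) separately and use
// response.ok instead of listing success status codes by hand.
async function readAnswer(response) {
  if (response.status === 204) {
    // 204 has no body, so do not call response.json()
    return { answer: 'no-content', image: '' };
  }
  if (!response.ok) {
    // response.ok covers the whole 2xx range; anything else is an error
    return { answer: 'uncertain', image: 'https://http.cat/500' };
  }
  const data = await response.json();
  return { answer: data.answer, image: data.image };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;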

&lt;p&gt;Not only did it highlight these problems, it gave inline suggestions to fix them. You could accept changes right there, no extra steps, no separate review window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25203.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25203.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" alt="Entelligence AI Inline Suggestions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even &lt;em&gt;after&lt;/em&gt; raising a PR, Entelligence doesn't stop helping.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It &lt;strong&gt;summarizes&lt;/strong&gt; the pull request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provides a &lt;strong&gt;walkthrough&lt;/strong&gt; of what the PR contains, including a helpful &lt;strong&gt;sequence diagram.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can give feedback using 👍 / 👎 emojis to help it learn your review preferences.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the PR looks good, it auto-comments with &lt;strong&gt;LGTM 👍&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It shows which review settings are enabled and lets you customize them directly in the Entelligence AI dashboard&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can even track analytics inside your dashboard, like how many PRs are open, merged, or in review, and the overall quality of your team's contributions.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;BugBot&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;With Cursor's BugBot, you need to raise a pull request first. BugBot then auto-reviews the code (if enabled), or you manually run it by commenting &lt;code&gt;bugbot run&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;On running it against the same file, here's what BugBot flagged:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Redundant data validation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Casual and inconsistent console messages&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use of unnecessary string interpolation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confusing HTTP status logic&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25204.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25204.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" alt="BugBot Review"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BugBot gave detailed, structured feedback, and included a "Fix in Cursor" button that opened Cursor with the changes ready to apply. It worked well, but the extra step of needing a PR or a comment makes its feedback loop slightly slower.&lt;/p&gt;

&lt;p&gt;In short:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Entelligence AI&lt;/strong&gt; gives you feedback &lt;em&gt;while coding&lt;/em&gt; and continues to assist &lt;em&gt;after&lt;/em&gt; a pull request is raised, with summaries, diagrams, and customizable reviews. It's built to stay with you throughout the entire workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;BugBot&lt;/strong&gt; gives good suggestions too, but only kicks in &lt;em&gt;after&lt;/em&gt; you raise a pull request or trigger it manually.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Bug Detection
&lt;/h3&gt;

&lt;p&gt;After code review, the next big test is &lt;strong&gt;bug detection,&lt;/strong&gt; especially how quickly and deeply these tools can catch small issues that often slip through until runtime or production.&lt;/p&gt;

&lt;p&gt;To test this, we created a simple but buggy React component: &lt;code&gt;AnswerBox.jsx&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;react&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;AnswerBox&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt; &lt;span class="n"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;textAlign&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;center&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fontFamily&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sans&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;Your&lt;/span&gt; &lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;No answer available yet&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt; &lt;span class="err"&gt;?&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;
          &lt;span class="n"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
          &lt;span class="n"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;answer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
          &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;300px&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
          &lt;span class="n"&gt;height&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;auto&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
          &lt;span class="n"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;
            &lt;span class="n"&gt;marginTop&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3px dashed purple&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;borderRadius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;boxShadow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0px 0px 20px rgba(0,0,0,0.2)&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;objectFit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;coverd&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
          &lt;span class="p"&gt;}}&lt;/span&gt;
        &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="n"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#888&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;No&lt;/span&gt; &lt;span class="n"&gt;image&lt;/span&gt; &lt;span class="n"&gt;provided&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;)}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="n"&gt;export&lt;/span&gt; &lt;span class="n"&gt;default&lt;/span&gt; &lt;span class="n"&gt;AnswerBox&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It looks harmless, but it's filled with small logic flaws, accessibility issues, and style bugs that are easy to miss.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Entelligence AI's Detection&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Entelligence AI gave real-time suggestions as we wrote the file, &lt;strong&gt;without waiting for a pull request&lt;/strong&gt;. It immediately pointed out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Incorrect CSS Property&lt;/strong&gt; - Spotted &lt;code&gt;objectFit: 'coverd'&lt;/code&gt; and suggested &lt;code&gt;'cover'&lt;/code&gt;, a common but easy-to-miss typo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Invalid Font Family&lt;/strong&gt; - Caught &lt;code&gt;fontFamily: 'sans'&lt;/code&gt; and correctly recommended &lt;code&gt;'sans-serif'&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Accessibility Concerns&lt;/strong&gt; - Flagged the &lt;code&gt;alt="answer"&lt;/code&gt; as too vague and suggested more meaningful alt text for screen readers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance Suggestions&lt;/strong&gt; - Highlighted that the image is not lazy-loaded, which could impact performance on slower connections.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inline Styling Feedback&lt;/strong&gt; - Recommended switching from inline styles to reusable CSS modules or styled-components for better maintainability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Handling&lt;/strong&gt; - Mentioned the absence of fallback behavior when the image fails to load.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Responsive Design Gaps&lt;/strong&gt; - Warned that fixed &lt;code&gt;width: "300px"&lt;/code&gt; might break responsiveness across screen sizes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prop Safety&lt;/strong&gt; - Noted the component didn't validate props using PropTypes, which can cause runtime issues in large apps.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These suggestions came up &lt;strong&gt;before raising any PR&lt;/strong&gt;, saving review time and making it easier to fix issues as they arise.&lt;/p&gt;
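The prop-safety warning above is easy to reproduce outside React. Here is a minimal sketch of our own (not output from either tool): the component reads `answer.answer` with no guard, so rendering it without the prop throws, while an optional-chaining guard falls back cleanly.

```javascript
// Hedged sketch (ours, not either tool's output): why the missing
// prop guard in AnswerBox matters at runtime.
function renderAnswerText(answer) {
  // Mirrors the component: reads answer.answer with no guard.
  return answer.answer || 'No answer available yet';
}

function renderAnswerTextSafe(answer) {
  // Optional chaining turns the crash into the fallback string.
  return answer?.answer || 'No answer available yet';
}

try {
  renderAnswerText(undefined);
} catch (e) {
  console.log('unguarded version throws:', e instanceof TypeError); // true
}
console.log(renderAnswerTextSafe(undefined)); // No answer available yet
```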

&lt;h4&gt;
  
  
  &lt;strong&gt;Cursor (BugBot) Review&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;To get suggestions from BugBot, we had to first raise a PR. Once active, BugBot analyzed the diff and left a helpful review with several suggestions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Caught the same &lt;code&gt;objectFit: 'coverd'&lt;/code&gt; typo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Noticed the &lt;strong&gt;invalid font&lt;/strong&gt; and corrected it to &lt;code&gt;'sans-serif'&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flagged &lt;strong&gt;missing PropTypes&lt;/strong&gt; and even shared how to define them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Recommended avoiding &lt;strong&gt;inline styles&lt;/strong&gt; and suggested externalizing CSS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Warned about potential crashes from &lt;strong&gt;missing&lt;/strong&gt; &lt;code&gt;answer&lt;/code&gt; props and suggested fallback handling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Offered a neat refactored version of the component using better structure, error handling, and accessibility.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both tools flagged key issues, but what really sets them apart is &lt;strong&gt;when and how&lt;/strong&gt; they do it.&lt;/p&gt;

&lt;p&gt;If you're someone who likes catching mistakes before they go anywhere, Entelligence AI fits more naturally into your day-to-day. Cursor, meanwhile, is a solid safety net for teams focused on structured code review checkpoints.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Generation &amp;amp; Fixes
&lt;/h3&gt;

&lt;p&gt;As part of the PR process, we tried something different. Instead of making the changes ourselves, we added this placeholder to the file to see whether each tool understood what needed to be added:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Add a dropdown with 'yes', 'no', and 'maybe' options. The answer and image should only display if the user selection matches the fetched API response.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This was the perfect opportunity to observe how both &lt;strong&gt;Entelligence AI&lt;/strong&gt; and &lt;strong&gt;Cursor&lt;/strong&gt; behave when reviewing and contributing to &lt;strong&gt;live code changes&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Cursor&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Cursor didn't just edit. It &lt;strong&gt;wrote the whole feature&lt;/strong&gt; from scratch, fetching the API response, managing user selection, handling loading and error states, and displaying the answer/image only when they matched.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25205.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25205.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" alt="Cursor Code Generation"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;apiResponse&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;userSelection&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;userSelection&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="n"&gt;apiResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;h3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;API&lt;/span&gt; &lt;span class="n"&gt;Answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;apiResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;h3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="n"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;apiResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="n"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;apiResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;)}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It even wrapped everything with clean error boundaries and a proper loading experience. This wasn't a tweak, it was &lt;strong&gt;a production-ready implementation&lt;/strong&gt; that respected UI flow, UX states, and code style.&lt;/p&gt;

&lt;p&gt;Cursor handled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;API integration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dropdown state logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Matching condition&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Loading + error boundaries&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clean inline styling and accessibility&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
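The core of the feature both tools were asked to build is a single predicate: show the answer and image only when the user's dropdown choice equals the fetched response. A hedged sketch of that condition (the function name is ours):

```javascript
// Hypothetical sketch of the matching condition the prompt asked for:
// render only when the dropdown selection equals the fetched API answer.
function shouldDisplay(apiResponse, userSelection) {
  if (!apiResponse || !userSelection) return false; // nothing fetched or selected yet
  return userSelection === apiResponse.answer;
}

console.log(shouldDisplay({ answer: 'yes' }, 'yes')); // true
console.log(shouldDisplay({ answer: 'no' }, 'yes'));  // false
console.log(shouldDisplay(null, 'yes'));              // false
```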

&lt;h4&gt;
  
  
  &lt;strong&gt;Entelligence AI&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Entelligence AI took a more incremental approach. Instead of building the feature end-to-end, it scanned the existing component and inserted &lt;strong&gt;just the logic&lt;/strong&gt; needed to satisfy the new condition, in diff-style.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25206.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252FBugBot%252Fimg%25206.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_HA46P67KzYE19V3HLwZbNadbcKQ3%2520align%3D" alt="Entelligence AI Code Generation"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;selectedOption&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;setSelectedOption&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="n"&gt;selectedOption&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toUpperCase&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="n"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="n"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;/&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;)}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It worked quickly, but it lacked the full user-flow awareness Cursor showed: there was no API fetching, no user feedback for loading or errors, and no structured fallback.&lt;/p&gt;

&lt;p&gt;Entelligence AI handled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Local dropdown logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Conditional rendering&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Minimal context awareness&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Diff-first suggestion mode&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
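One detail worth noting in Entelligence's diff: comparing `answer.answer.toLowerCase()` against the selection makes the match case-insensitive, so an API answer of "Yes" still matches the dropdown's "yes". A small sketch of that comparison, with an optional-chaining guard the diff itself omits:

```javascript
// Sketch of the case-insensitive comparison in Entelligence's diff, plus an
// optional-chaining guard (answer.answer would throw if answer were missing).
function matchesSelection(answer, selectedOption) {
  return answer?.answer?.toLowerCase() === selectedOption;
}

console.log(matchesSelection({ answer: 'Yes' }, 'yes')); // true
console.log(matchesSelection({ answer: 'No' }, 'yes'));  // false
console.log(matchesSelection(undefined, 'yes'));         // false
```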

&lt;h3&gt;
  
  
  Documentation Generation
&lt;/h3&gt;

&lt;p&gt;When it comes to keeping documentation up-to-date, &lt;strong&gt;Entelligence AI&lt;/strong&gt; takes the lead and does it quietly in the background.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Automatic &amp;amp; Inline Updates&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;As soon as a PR merges or changes happen in the codebase, Entelligence auto-updates relevant documentation. Whether it's a function, a component, or even a newly added file, the tool reads the code, understands the context, and updates the associated docs in real-time.&lt;/p&gt;

&lt;p&gt;No need to switch tabs or open a separate tool. You can also trigger updates manually from the IDE using a simple command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;updateDocs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The best part? It's not locked. You can easily modify the generated docs to suit your tone, add notes, or expand on context, all without writing from scratch.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Cursor's Limitation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Cursor currently doesn't offer automatic or assisted documentation generation. While it can help you write a comment if you explicitly ask it to, it &lt;strong&gt;does not track changes or maintain up-to-date documentation&lt;/strong&gt; as your project evolves. You're still on your own for writing and managing docs, which can lead to outdated, inconsistent, or missing documentation over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which One Should You Choose?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Entelligence AI&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Cursor (BugBot)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code Review Timing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Instant, in-editor while coding&lt;/td&gt;
&lt;td&gt;After PR is raised or manually triggered&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bug Detection&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Real-time, catches bugs as you type&lt;/td&gt;
&lt;td&gt;Post-PR, helpful but delayed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code Fixes &amp;amp; Suggestions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Diff-style, quick edits with context&lt;/td&gt;
&lt;td&gt;Full implementations, inline and clean&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context Awareness&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High, understands component structure, flags accessibility &amp;amp; styling&lt;/td&gt;
&lt;td&gt;Moderate, catches key issues but not deeply integrated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Documentation Generation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Auto-updates docs with Markdown support (&lt;code&gt;/updateDocs&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;No built-in documentation support&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ease of Use&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Seamless, minimal setup, always on&lt;/td&gt;
&lt;td&gt;Good, but PR-dependent for most actions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best For&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Developers/Teams who want fast, continuous feedback and tight documentation&lt;/td&gt;
&lt;td&gt;Teams/Developers that prefer structured, post-PR code review flows&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Goes Beyond Code Reviews&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Handles docs, onboarding, team insights, and much more&lt;/td&gt;
&lt;td&gt;Limited to code suggestions and reviews&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you prefer a &lt;strong&gt;tight feedback loop&lt;/strong&gt;, catch bugs &lt;em&gt;before&lt;/em&gt; PRs, and want &lt;strong&gt;auto-generated documentation&lt;/strong&gt;, &lt;strong&gt;Entelligence AI&lt;/strong&gt; is a clear win.&lt;/p&gt;

&lt;p&gt;If your team has a &lt;strong&gt;PR-first workflow&lt;/strong&gt; and you want full code rewrites inside your editor, &lt;strong&gt;Cursor&lt;/strong&gt; (and BugBot) are still a powerful choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn more about the Entelligence AI code review extension:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.entelligence.ai/IDE" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Check Documentation&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Both &lt;strong&gt;Entelligence AI&lt;/strong&gt; and &lt;strong&gt;Cursor&lt;/strong&gt; bring serious AI firepower into your coding workflow, but in very different ways.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Entelligence AI&lt;/strong&gt; acts like a quiet, senior engineer on your shoulder, helping you review, fix, and document your code &lt;em&gt;as you write it&lt;/em&gt;. It's perfect for developers who want to &lt;em&gt;stay in flow&lt;/em&gt;, catch bugs early, and keep their project healthy with minimal overhead.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cursor (BugBot)&lt;/strong&gt; is like a structured reviewer that steps in &lt;em&gt;after&lt;/em&gt; you're done. It's reactive, helpful, and writes great code, but you'll need to raise PRs or trigger it manually to benefit from its insights.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a world where code is moving fast, having a tool that grows with your thought process, not just your diffs, makes a big difference.&lt;/p&gt;

&lt;p&gt;If you're building daily, Entelligence feels like a partner, while Cursor feels like a reviewer.&lt;/p&gt;

&lt;p&gt;Pick what fits your team's rhythm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dub.sh/entelligence" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Install Entelligence AI VS Code Extension⛵&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>javascript</category>
      <category>ai</category>
    </item>
    <item>
      <title>Entelligence vs CodeRabbit</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Thu, 26 Jun 2025 12:56:55 +0000</pubDate>
      <link>https://forem.com/entelligenceai/entelligence-vs-coderabbit-4289</link>
      <guid>https://forem.com/entelligenceai/entelligence-vs-coderabbit-4289</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;AI-powered code review is quickly becoming an important part of a developer's workflow. Instead of manually finding bugs and improving code quality, teams prefer to use AI code review tools that not only review diffs but also understand their codebase and team goals.&lt;/p&gt;

&lt;p&gt;That's where AI code review tools are useful. In this post, we’re looking at two popular options developers are using right inside their editors: &lt;a href="https://www.entelligence.ai/" rel="noopener noreferrer"&gt;&lt;strong&gt;Entelligence AI&lt;/strong&gt;&lt;/a&gt; and &lt;a href="https://www.coderabbit.ai/" rel="noopener noreferrer"&gt;&lt;strong&gt;CodeRabbit&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Entelligence AI?
&lt;/h2&gt;

&lt;p&gt;Entelligence AI is a developer tool that helps you review code directly inside your editor, without waiting for a pull request. It gives feedback while your changes are still local, points out potential issues, and suggests improvements in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="Entelligence AI in editor"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It works with VS Code, Cursor, and Windsurf, and runs quietly in the background. When you make changes, Entelligence reviews the code and leaves helpful inline comments. These could be about logic errors, formatting, naming, or even missing edge cases. You can apply the suggestions with one click.&lt;/p&gt;

&lt;p&gt;Once you raise a pull request, Entelligence continues to help. It adds a summary of what’s changed, leaves comments in the diff, and even includes diagrams when needed. You can react to its feedback to guide how it reviews in the future.&lt;/p&gt;

&lt;p&gt;It also updates documentation automatically when your code changes and shows an overview of all PRs in a dashboard, so you can track what’s open, what’s been merged, and what still needs attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is CodeRabbit?
&lt;/h2&gt;

&lt;p&gt;CodeRabbit is an AI code review tool that works inside your development workflow. Once installed in your GitHub repo, it automatically reviews pull requests using AI and leaves suggestions as comments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage2.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage2.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="CodeRabbit in editor"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also use CodeRabbit inside your editor (like VS Code, Cursor, and Windsurf) through its extension. CodeRabbit reviews can highlight issues, suggest improvements, and explain parts of the code you select. It supports both real-time editing feedback and Git-aware reviews, so you can use it while coding or after changes are pushed.&lt;/p&gt;

&lt;p&gt;It’s a helpful tool when you want quick feedback without waiting for a teammate to review your pull request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before we start our comparison, let's Install:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dub.sh/entelligence" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Entelligence AI VS Code Extension⛵&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison
&lt;/h2&gt;

&lt;p&gt;Now that we've covered both tools, it's time to compare them. We have a file called &lt;code&gt;Ask.js&lt;/code&gt;; we'll make several changes to it and test both tools across different scenarios.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;react&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;export&lt;/span&gt; &lt;span class="n"&gt;default&lt;/span&gt; &lt;span class="n"&gt;function&lt;/span&gt; &lt;span class="nc"&gt;Ask&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;setQuestion&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;setAnswer&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;handleAsk&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;https://yesno.wtf/api&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="nf"&gt;setAnswer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;localStorage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;askaway-history&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
      &lt;span class="n"&gt;localStorage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;askaway-history&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;([{&lt;/span&gt; &lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;]));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nf"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="nf"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt; &lt;span class="n"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1rem&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;Ask&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;Question&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;input&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="n"&gt;onChange&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setQuestion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt; &lt;span class="n"&gt;placeholder&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Type your question...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;button&lt;/span&gt; &lt;span class="n"&gt;onClick&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;handleAsk&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;Ask&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;button&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt; &lt;span class="n"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;marginTop&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;1rem&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;Answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="n"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="n"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="n"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;maxWidth&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;200px&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;)}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Local Changes Review
&lt;/h3&gt;

&lt;p&gt;Let's begin by testing how well both tools review local changes, that is, changes that haven't reached a PR or even been committed. To test them, we wrote some deliberately buggy code containing issues like memory leaks, poor error handling, and repeated API calls to the same endpoint, then checked whether each tool caught these problems.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;react&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;export&lt;/span&gt; &lt;span class="n"&gt;default&lt;/span&gt; &lt;span class="n"&gt;function&lt;/span&gt; &lt;span class="nc"&gt;Ask&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;setQuestion&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;setAnswer&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;handleAsk&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;No&lt;/span&gt; &lt;span class="n"&gt;error&lt;/span&gt; &lt;span class="n"&gt;handling&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;fetch&lt;/span&gt;
    &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://yesno.wtf/api&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;setAnswer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Synchronous&lt;/span&gt; &lt;span class="n"&gt;operation&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;blocks&lt;/span&gt; &lt;span class="n"&gt;UI&lt;/span&gt;
    &lt;span class="nf"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;let&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;100000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;random&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Multiple&lt;/span&gt; &lt;span class="n"&gt;API&lt;/span&gt; &lt;span class="n"&gt;calls&lt;/span&gt; &lt;span class="n"&gt;without&lt;/span&gt; &lt;span class="n"&gt;batching&lt;/span&gt;
    &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://yesno.wtf/api&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://yesno.wtf/api&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://yesno.wtf/api&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Not&lt;/span&gt; &lt;span class="n"&gt;checking&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;
    &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;badRes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://nonexistent-api.com/data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;badData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;badRes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;This&lt;/span&gt; &lt;span class="n"&gt;will&lt;/span&gt; &lt;span class="n"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;OK&lt;/span&gt;

    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Unhandled&lt;/span&gt; &lt;span class="n"&gt;promise&lt;/span&gt;
    &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://yesno.wtf/api&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;localStorage&lt;/span&gt; &lt;span class="n"&gt;operations&lt;/span&gt; &lt;span class="n"&gt;without&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;catch&lt;/span&gt;
    &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;localStorage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;askaway-history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="n"&gt;localStorage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;askaway-history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;([{&lt;/span&gt; &lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Memory&lt;/span&gt; &lt;span class="n"&gt;leak&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;cleaning&lt;/span&gt; &lt;span class="n"&gt;up&lt;/span&gt;
    &lt;span class="nf"&gt;setInterval&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://yesno.wtf/api&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Function&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;too&lt;/span&gt; &lt;span class="n"&gt;many&lt;/span&gt; &lt;span class="n"&gt;responsibilities&lt;/span&gt;
  &lt;span class="n"&gt;const&lt;/span&gt; &lt;span class="n"&gt;fetchDataBadly&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Using&lt;/span&gt; &lt;span class="n"&gt;var&lt;/span&gt; &lt;span class="n"&gt;instead&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;const&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;let&lt;/span&gt;
    &lt;span class="n"&gt;var&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://yesno.wtf/api&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Nested&lt;/span&gt; &lt;span class="nf"&gt;callbacks &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;callback&lt;/span&gt; &lt;span class="n"&gt;hell&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;?retry=1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;retryResponse&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;?retry=2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;finalResponse&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;finalResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="nf"&gt;setAnswer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
              &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Mutating&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="n"&gt;directly&lt;/span&gt;
              &lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;extraData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;modified&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;});&lt;/span&gt;
          &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Empty&lt;/span&gt; &lt;span class="n"&gt;catch&lt;/span&gt; &lt;span class="n"&gt;block&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;practice&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Not&lt;/span&gt; &lt;span class="n"&gt;returning&lt;/span&gt; &lt;span class="n"&gt;anything&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;operation&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="nf"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt; &lt;span class="n"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1rem&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;Ask&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;Question&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;input&lt;/span&gt;
        &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;onChange&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setQuestion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
        &lt;span class="n"&gt;placeholder&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Type your question...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
      &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;button&lt;/span&gt; &lt;span class="n"&gt;onClick&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;handleAsk&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;Ask&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;button&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;button&lt;/span&gt; &lt;span class="n"&gt;onClick&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;fetchDataBadly&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;Bad&lt;/span&gt; &lt;span class="n"&gt;Fetch&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;button&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt; &lt;span class="n"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;marginTop&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1rem&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;Answer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;
            &lt;span class="n"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="n"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="n"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="n"&gt;maxWidth&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200px&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;
          &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;)}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s start with Entelligence.&lt;/p&gt;

&lt;h4&gt;
  
  
  Entelligence AI:
&lt;/h4&gt;

&lt;p&gt;With Entelligence AI, you don’t need to raise a PR or even commit anything. It starts reviewing your code directly in the editor as you make changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage3.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage3.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="Entelligence AI inline suggestions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That makes it especially helpful when you’re working on rough drafts or fixing logic before things ever reach GitHub or Bitbucket.&lt;/p&gt;

&lt;p&gt;On the same &lt;code&gt;Ask.jsx&lt;/code&gt; file, Entelligence flagged a bunch of real issues early:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;UI-blocking loop&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It pointed out that the loop &lt;code&gt;for (let i = 0; i &amp;lt; 100000; i++) { Math.random(); }&lt;/code&gt; was unnecessarily CPU-intensive and could freeze the UI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unbatched fetch calls&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It flagged that firing several identical &lt;code&gt;fetch()&lt;/code&gt; calls in a row without awaiting or batching them was wasteful and likely to hit API rate limits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Missing response status checks&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It caught that &lt;code&gt;await badRes.json()&lt;/code&gt; was being called without verifying &lt;code&gt;badRes.ok&lt;/code&gt;, which could crash the app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No cleanup on intervals&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Warned about the &lt;code&gt;setInterval()&lt;/code&gt; running continuously without any cleanup, which could lead to memory leaks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LocalStorage risks&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It noted that the &lt;code&gt;localStorage&lt;/code&gt; logic lacked error handling, and flagged that the &lt;code&gt;question&lt;/code&gt; was saved without sanitization, something that could lead to XSS if used later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multiple concerns in one function&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It didn’t just stop at bugs. Entelligence also flagged architectural concerns, like how &lt;code&gt;handleAsk()&lt;/code&gt; was doing too many unrelated things (fetching data, updating localStorage, looping, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unsafe rendering from the API&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Finally, it warned that rendering &lt;code&gt;answer.answer&lt;/code&gt; and &lt;code&gt;answer.image&lt;/code&gt; directly could be risky if the external API ever got compromised.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
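&lt;p&gt;To make those flags concrete, here is a condensed, hypothetical sketch of the kind of code that triggers them (the real &lt;code&gt;Ask.jsx&lt;/code&gt; differs; the network and storage are stubbed so the snippet runs on its own):&lt;/p&gt;

```javascript
// Hypothetical condensation of the flagged patterns; the real Ask.jsx
// differs. Network and storage are stubbed so this runs in plain Node.
const storage = new Map(); // stand-in for localStorage

async function handleAskBadly(question, fetchFn) {
  // UI-blocking loop: pointless CPU work that would freeze the main thread.
  let i = 100000;
  while (i--) Math.random();

  // Unbatched fetch calls: sequential round trips instead of Promise.all.
  await fetchFn("/api/ask");
  await fetchFn("/api/related");
  const res = await fetchFn("/api/answer");

  // Missing status check: .json() runs without verifying res.ok, so an
  // error response crashes the whole handler.
  const answer = await res.json();

  // Unsanitized persistence: raw user input saved as-is.
  storage.set("lastQuestion", question);
  return answer;
}

// A stub that mimics a failing API: ok is false and the body is not JSON.
let calls = 0;
const stubFetch = async () => {
  calls++;
  return { ok: false, json: async () => { throw new Error("bad body"); } };
};

const demo = handleAskBadly("why?", stubFetch).catch(
  (e) => `made ${calls} requests, then crashed: ${e.message}`
);
demo.then(console.log); // "made 3 requests, then crashed: bad body"
```

&lt;p&gt;The stub makes the failure mode visible: three round trips happen, then the unchecked &lt;code&gt;.json()&lt;/code&gt; call takes down the handler.&lt;/p&gt;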

&lt;p&gt;Each issue was highlighted inline, along with a suggested fix that you could accept with one click.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage4.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage4.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="CodeRabbit review"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodeRabbit:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CodeRabbit also starts reviewing your code as soon as you make changes; you don't need to commit them first. You can choose to review committed changes, uncommitted ones, or both, then click &lt;strong&gt;Review&lt;/strong&gt; to start the review.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage5.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage5.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="CodeRabbit PR comments"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our &lt;code&gt;Ask.jsx&lt;/code&gt; file, CodeRabbit caught several issues right away:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No error handling around fetch&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It rewrote the entire logic using a try-catch block and added proper checks for &lt;code&gt;.ok&lt;/code&gt; status.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unprotected localStorage operations&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It pointed out that using &lt;code&gt;localStorage.getItem()&lt;/code&gt; and &lt;code&gt;setItem()&lt;/code&gt; without a try-catch could fail in certain environments or when invalid JSON is stored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Uncleaned&lt;/strong&gt; &lt;code&gt;setInterval&lt;/code&gt;&lt;br&gt;&lt;br&gt;
It warned us about a memory leak due to the missing &lt;code&gt;clearInterval()&lt;/code&gt; in the cleanup phase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unhelpful catch block and deep callback nesting&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It spotted an empty catch block in &lt;code&gt;fetchDataBadly&lt;/code&gt; and suggested better error handling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Incorrect use of&lt;/strong&gt; &lt;code&gt;var&lt;/code&gt; and directly mutating React state&lt;br&gt;&lt;br&gt;
Flagged usage of &lt;code&gt;var&lt;/code&gt; instead of &lt;code&gt;let&lt;/code&gt;/&lt;code&gt;const&lt;/code&gt;, and the unsafe direct mutation of the &lt;code&gt;answer&lt;/code&gt; state.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
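&lt;p&gt;The direction of those fixes can be sketched like this (a hypothetical rewrite in the spirit of the suggestions, not CodeRabbit's literal output; storage and fetch are stubbed so the snippet is self-contained):&lt;/p&gt;

```javascript
// Sketch of the fixes the suggestions point toward (hypothetical rewrite).
const storage = new Map(); // stand-in for localStorage

// Guarded storage access: bad JSON or a blocked storage API can't crash the UI.
function readSaved(key) {
  try {
    const raw = storage.get(key);
    return raw ? JSON.parse(raw) : null;
  } catch (err) {
    return null;
  }
}

// fetch wrapped in try/catch, with an explicit res.ok check before .json().
async function fetchAnswer(question, fetchFn) {
  try {
    const res = await fetchFn("/api/ask?q=" + encodeURIComponent(question));
    if (!res.ok) throw new Error("HTTP " + res.status);
    return await res.json();
  } catch (err) {
    // Not an empty catch: surface a usable error to the caller.
    return { error: err.message };
  }
}

// Interval with cleanup, mirroring a useEffect teardown in React.
function startPolling(tick, ms) {
  const id = setInterval(tick, ms);
  return () => clearInterval(id); // caller invokes this on unmount
}
```

&lt;p&gt;With the &lt;code&gt;res.ok&lt;/code&gt; guard in place, a 500 response becomes a handled error object instead of a crash, and the returned cleanup function stops the interval so nothing leaks.&lt;/p&gt;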

&lt;p&gt;All of these issues appeared right after running a review, even before raising a PR. You could click a checkmark next to each suggestion, and the fix would be applied directly to the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage6.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage6.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="Entelligence AI PR summary"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s a pretty fast way to clean things up, especially if you like reviewing changes in batches.&lt;/p&gt;

&lt;p&gt;Both are good, but Entelligence surfaces more suggestions. Beyond the sheer number, the difference is in the depth of reasoning behind them: Entelligence explains &lt;em&gt;why&lt;/em&gt; something matters and what could happen if it’s ignored. All of this happens directly in the editor, without switching tools or breaking your flow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Post-PR Code Review
&lt;/h3&gt;

&lt;p&gt;Once a pull request is raised, both CodeRabbit and Entelligence AI jump in to review your changes. But they approach it a bit differently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodeRabbit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After raising a PR, CodeRabbit leaves a series of comments throughout the code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage7.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage7.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="Entelligence AI cross-file awareness"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our test, it posted around 6 suggestions across the file. These included everything from removing redundant fetch calls to rewriting deeply nested functions using modern async/await patterns.&lt;/p&gt;

&lt;p&gt;The suggestions are clear and directly tied to the lines of code they refer to. It also includes committable diffs, so you can apply fixes with a single click. The comments are categorized (e.g., "⚠️ Potential issue", "🛠️ Refactor suggestion") and easy to understand.&lt;/p&gt;

&lt;p&gt;The full review took about 1–3 &lt;strong&gt;minutes&lt;/strong&gt; to finish. After that, you can track your PRs in CodeRabbit’s dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entelligence AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Entelligence reviewed the same PR much faster, taking about 10 to 30 seconds. But more than speed, the structure of its feedback stood out.&lt;/p&gt;

&lt;p&gt;Rather than just adding line-by-line comments, Entelligence started with a PR summary that explained the purpose of the changes. It also broke down the logic step-by-step and even included a sequence diagram to show how different parts of the code interacted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage8.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage8.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="Entelligence AI code diff"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You could react to suggestions with a 👍 or 👎 to fine-tune future reviews. It also showed the current review settings, like what types of issues it checks for, which can be updated right from the dashboard.&lt;/p&gt;

&lt;p&gt;The dashboard view itself goes beyond just listing PRs. You can track review activity across repos, see the number of comments per PR, and adjust organization-wide settings. It’s designed for teams who want visibility and consistency without doing any extra work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Awareness
&lt;/h3&gt;

&lt;p&gt;To test context awareness, I intentionally included an import mismatch in one of the test PRs. I used a function called &lt;code&gt;cleanInput&lt;/code&gt; in the main component, but the actual exported function from &lt;code&gt;helpers.js&lt;/code&gt; was named &lt;code&gt;sanitizeInput&lt;/code&gt;. Let’s see how both perform.&lt;/p&gt;
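&lt;p&gt;A minimal reconstruction of the mismatch (file contents are illustrative): &lt;code&gt;helpers.js&lt;/code&gt; exports &lt;code&gt;sanitizeInput&lt;/code&gt;, while the component imports &lt;code&gt;cleanInput&lt;/code&gt;. Depending on the bundler, that either fails at build time or leaves the binding undefined at runtime:&lt;/p&gt;

```javascript
// helpers.js (illustrative) would contain:
//   export function sanitizeInput(s) { return s.trim(); }
// while Ask.jsx did the equivalent of:
//   import { cleanInput } from "./helpers";

// Simulated with a plain object to show the runtime failure mode under
// CommonJS-style interop, where a missing export is simply undefined:
const helpers = { sanitizeInput: (s) => s.trim() };

const { cleanInput } = helpers; // destructuring a name that was never exported
console.log(typeof cleanInput); // "undefined" -- calling it throws a TypeError
```

&lt;p&gt;Strict ES modules would reject the import at build time, which is why a reviewer that resolves imports across files can catch this before it ships.&lt;/p&gt;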

&lt;p&gt;&lt;strong&gt;CodeRabbit:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CodeRabbit caught this and flagged it with a suggestion to fix the import. It also recommended input validation and consistent usage of sanitization logic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage9.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage9.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="CodeRabbit code diff"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That was a good sign: it understood how the function was used and what it was supposed to do from the file's context. However, when I accepted the fix, it did not rename the import; instead, it removed all the imports and even eliminated the usages of those functions from the file entirely. That was a bit strange.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage10.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage10.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="Entelligence AI dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But here’s where Entelligence AI went a step further.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entelligence AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While CodeRabbit focused on the diff, Entelligence looked beyond it. In another file that wasn’t included in the pull request diff but was still using the same incorrect &lt;code&gt;cleanInput&lt;/code&gt; function, Entelligence flagged that as well. It suggested aligning the function usage across the codebase, even though that file wasn’t modified in the current PR.&lt;/p&gt;

&lt;p&gt;It identified the mismatch &lt;em&gt;and&lt;/em&gt; updated the import name from &lt;code&gt;cleanInput&lt;/code&gt; to &lt;code&gt;sanitizeInput&lt;/code&gt;, &lt;strong&gt;preserving the structure&lt;/strong&gt; of the file and only changing what was necessary.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage11.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage11.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="CodeRabbit dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where Entelligence takes the lead. It looks at more than just the current changes: it sees how those changes impact the whole codebase, understanding patterns, connections between files, and past team decisions.&lt;/p&gt;

&lt;p&gt;While both tools help find immediate problems, Entelligence is special because it has a wider view. It considers the entire project to make sure nothing else breaks quietly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Fixes &amp;amp; Suggestions
&lt;/h3&gt;

&lt;p&gt;To test how both tools handle real-world code improvements, I added a few intentional issues in the &lt;code&gt;Ask.jsx&lt;/code&gt; file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Mutated the &lt;code&gt;answer&lt;/code&gt; object directly: &lt;code&gt;answer &amp;amp;&amp;amp; (answer.extra = "bad mutation");&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added an empty &lt;code&gt;catch&lt;/code&gt; block&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Used &lt;code&gt;alert()&lt;/code&gt; to display validation errors&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Removed the &lt;code&gt;type&lt;/code&gt; from the &lt;code&gt;&amp;lt;button&amp;gt;&lt;/code&gt; element&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
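&lt;p&gt;Condensed into plain JavaScript, the seeded issues look roughly like this (hypothetical reconstruction; React state and &lt;code&gt;alert()&lt;/code&gt; are stubbed so the sketch runs on its own):&lt;/p&gt;

```javascript
// Stand-ins so the sketch runs outside React and a browser:
let answer = { text: "42" };      // pretend React state
const alerts = [];                // captures what alert() would show
const alertStub = (msg) => alerts.push(msg);

function handleSubmitBadly(question) {
  // 1. Direct state mutation, written in the original as a short-circuit
  //    assignment on `answer`; React never sees this change.
  if (answer) {
    answer.extra = "bad mutation";
  }

  // 2. Empty catch block: errors vanish silently.
  try {
    JSON.parse(question);
  } catch (e) {}

  // 3. alert() for validation errors: blocks the UI thread.
  if (!question) alertStub("Question required");
}

// 4. The JSX button also lost its type attribute, so it falls back to
//    type="submit" and can submit an enclosing form by accident.

handleSubmitBadly(""); // empty question: mutates state and fires the alert
```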

&lt;p&gt;These are small, but realistic, examples of what a junior dev might miss, or what slips into quick prototypes. Here’s how both tools responded:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entelligence AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Entelligence pointed out the issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Flagged the &lt;strong&gt;React state mutation&lt;/strong&gt; and clearly explained why it’s a problem: modifying state directly leads to unpredictable UI behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identified the &lt;strong&gt;missing&lt;/strong&gt; &lt;code&gt;type="button"&lt;/code&gt; on the &lt;code&gt;&amp;lt;button&amp;gt;&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Highlighted the &lt;strong&gt;empty catch block&lt;/strong&gt;, suggesting that it makes debugging harder and hurts resilience.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage12.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage12.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="Entelligence AI team growth"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each suggestion came with optional inline diffs, and in some cases, Entelligence explained the downstream risks, like potential form issues due to the missing button type. That extra context made it more than just a linter-like fix.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodeRabbit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CodeRabbit also quickly flagged each of the changes with clear, actionable suggestions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;State Mutation&lt;/strong&gt;: It pointed out that directly modifying the state object (&lt;code&gt;answer.extra = ...&lt;/code&gt;) violates React’s immutability principle, and recommended removing it entirely.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Empty Catch Block&lt;/strong&gt;: It advised against suppressing all errors and suggested proper error handling for better debugging and visibility.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage13.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.entelligence.ai%2F_next%2Fimage%3Furl%3D%252Fassets%252Flanding%252Fcoderabbit%252Fimage13.png%26w%3D1920%26q%3D100%26dpl%3Ddpl_99pMaA8XMjwKsz5aCUEJq4YfUp9Z%2520align%3D" alt="Entelligence AI and CodeRabbit conclusion"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blocking&lt;/strong&gt; &lt;code&gt;alert()&lt;/code&gt;: It recognized &lt;code&gt;alert()&lt;/code&gt; as a poor UX choice and recommended using inline feedback or toast messages instead.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these suggestions was tied to specific lines and could be applied with one click using CodeRabbit's interface.&lt;/p&gt;

&lt;p&gt;While both tools surfaced the right issues, &lt;strong&gt;Entelligence added a bit more reasoning behind each suggestion&lt;/strong&gt;, which can be helpful when teaching juniors or trying to avoid similar bugs later.&lt;/p&gt;
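&lt;p&gt;Put together, the fixes both tools converge on look something like this (a sketch, not either tool's exact output; &lt;code&gt;state&lt;/code&gt; stands in for React state):&lt;/p&gt;

```javascript
// Immutable state updates, a real catch block, and inline validation
// feedback instead of alert(). `state` stands in for React state.
let state = { answer: { text: "42" }, error: null };

// Replace objects instead of mutating them, so React can detect the change.
function setAnswerExtra(extra) {
  state = { ...state, answer: { ...state.answer, extra } };
}

function validate(question) {
  if (!question.trim()) {
    // Inline feedback the UI can render, instead of a blocking alert().
    state = { ...state, error: "Question required" };
    return false;
  }
  try {
    JSON.parse(question);
    return true;
  } catch (err) {
    // Not an empty catch: record the failure so it stays debuggable.
    state = { ...state, error: "Invalid input: " + err.message };
    return false;
  }
}

// In JSX, the button also gets an explicit type="button" attribute so it
// no longer defaults to submitting an enclosing form.
```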

&lt;h3&gt;
  
  
  Tracking &amp;amp; Analyzing Pull Requests Across the Organization
&lt;/h3&gt;

&lt;p&gt;When it comes to visibility across engineering work, both CodeRabbit and Entelligence AI offer dashboards and deeper insights, but they differ in depth, flexibility, and how much context they surface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entelligence AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Entelligence AI is more comprehensive and better suited for scaling across teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It gives you a &lt;strong&gt;centralized overview&lt;/strong&gt; of all PRs across projects&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can track who authored, reviewed, and merged PRs, along with auto-generated summaries&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Slack integration&lt;/strong&gt; provides real-time updates for every review, PR status, and sprint summary&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auto-updating documentation&lt;/strong&gt; based on PR changes (or manually via the dashboard)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect multiple repos and track sprint performance across them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deep &lt;strong&gt;Team Insights&lt;/strong&gt;: performance reviews, contribution patterns, and sprint assessments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define &lt;strong&gt;custom guidelines&lt;/strong&gt; so reviews match your team’s standards&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Entelligence supports many tools that engineering teams already use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Communication&lt;/strong&gt;: Slack, Discord&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Documentation&lt;/strong&gt;: Notion, Google Docs, Confluence&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Project Management&lt;/strong&gt;: Jira, Linear, Asana&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;: Sentry, Datadog, PagerDuty&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These integrations help Entelligence pull in relevant context, enrich reviews, and automate workflows, like syncing sprint data from Jira, pushing updates to Slack, or linking changes to a Notion doc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodeRabbit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CodeRabbit offers a straightforward dashboard for tracking only PR activity. It also integrates with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Jira&lt;/strong&gt; – to connect reviews with tickets&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Linear&lt;/strong&gt; – to tie reviews to sprint planning&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CircleCI&lt;/strong&gt; – to link CI builds with pull requests&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There’s a Reports tab where you can create summaries, and a Learnings tab that tracks bot interactions across repositories, though these feel lightweight and dependent on manual use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which One Should You Choose?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature / Capability&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;CodeRabbit&lt;/strong&gt; 🐇&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Entelligence AI&lt;/strong&gt; 🧠&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Local Code Review&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Reviews uncommitted/committed code with inline comments&lt;/td&gt;
&lt;td&gt;Reviews uncommitted/committed code with inline comments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pull Request Review&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Accurate and helpful comments on PR diffs&lt;/td&gt;
&lt;td&gt;Includes PR summaries, walkthroughs, and diagrams&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context Awareness&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Limited to diff-based suggestions&lt;/td&gt;
&lt;td&gt;Understands full codebase and cross-file logic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Fix Suggestions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clear suggestions with 1-click apply&lt;/td&gt;
&lt;td&gt;Context-rich suggestions with inline diffs and risk analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Dashboard&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Basic dashboard to track PRs&lt;/td&gt;
&lt;td&gt;Full dashboard with PR summaries, team insights, and auto-docs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Slower review time (4–5 minutes per PR)&lt;/td&gt;
&lt;td&gt;Fast review turnaround (usually &amp;lt;1 minute)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Customization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Some config options, limited flexibility&lt;/td&gt;
&lt;td&gt;Custom review guidelines, learning-based improvements&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Integrations&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GitHub, Jira, Linear, CircleCI&lt;/td&gt;
&lt;td&gt;Slack, Discord, Jira, Linear, Asana, Confluence, Notion, Sentry, Datadog, PagerDuty, and more&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Documentation Updates&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not supported&lt;/td&gt;
&lt;td&gt;Automatically syncs documentation with code changes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning &amp;amp; Improvement&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Stores previous comments for learning&lt;/td&gt;
&lt;td&gt;Uses past reviews, reactions, and team patterns to adapt continuously&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Why Entelligence AI is a Better Fit
&lt;/h2&gt;

&lt;p&gt;After using both tools across different scenarios (editing locally, raising PRs, and tracking reviews), it becomes clear that Entelligence AI does a bit more at every step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Less setup, more value early&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Entelligence starts reviewing the moment you make changes. No more context switching. It flags issues as you work, which helps prevent problems before they’re even committed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reviews that explain, not just comment&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Instead of just saying &lt;em&gt;what’s wrong&lt;/em&gt;, Entelligence explains &lt;em&gt;why&lt;/em&gt;, whether it’s state mutation, architectural issues, or hidden risks like missing cleanup functions or unsafe rendering. This kind of feedback is especially helpful when you’re trying to learn or working with larger teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Understands the bigger picture&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Where most tools focus on the lines that changed, Entelligence steps back to see how the new code fits into everything else. It notices function mismatches, duplicated logic, or cross-file inconsistencies, even when those files weren’t touched in the PR.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;One tool for everything&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
PR summaries, team insights, documentation updates, performance reviews, Slack, and other workflow tool integrations all come from the same dashboard. This means fewer tabs, fewer integrations to manage, and a simpler workflow for teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It grows with your team&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The tool learns from past reviews and adapts based on team preferences. So over time, feedback gets more tailored, not just to the code, but to how your team likes to build.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So while CodeRabbit is a solid helper for PRs, Entelligence AI ends up being more than a reviewer: it becomes part of how the team writes, shares, and improves code every day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn more about the Entelligence AI code review extension:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.entelligence.ai/IDE" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Check Documentation&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Both &lt;strong&gt;Entelligence AI&lt;/strong&gt; and &lt;strong&gt;CodeRabbit&lt;/strong&gt; offer valuable support for AI-assisted code review, but they operate at different levels of depth.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Entelligence AI&lt;/strong&gt; is like a smart teammate in your development process. It doesn't just look at code changes: it understands the entire codebase, follows architectural patterns, and works well with the tools your team already uses. It provides real-time code feedback, creates automatic documentation, and gives insights into sprints, making it ideal for teams focused on quality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CodeRabbit&lt;/strong&gt; gives clear and useful feedback on pull requests. It's quick to set up, simple to use, and great for developers who want helpful suggestions during or after coding. Its integrations with GitHub and code editors make it a practical choice for teams or individual developers who want to automate basic reviews.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're looking for a tool that grows with your codebase, fits naturally into daily work, improves more than just the diff, and delivers full-context, long-term code quality with scalable insights, &lt;strong&gt;Entelligence AI&lt;/strong&gt; is the better choice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dub.sh/entelligence" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Install Entelligence AI VS Code Extension⛵&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>ai</category>
      <category>javascript</category>
    </item>
    <item>
      <title>10 Game-Changing Platforms &amp; Assistants Every Engineering Team Needs in 2025</title>
      <dc:creator>Pankaj Singh</dc:creator>
      <pubDate>Tue, 24 Jun 2025 06:11:47 +0000</pubDate>
      <link>https://forem.com/entelligenceai/10-game-changing-platforms-assistants-every-engineering-team-needs-in-2025-2ig4</link>
      <guid>https://forem.com/entelligenceai/10-game-changing-platforms-assistants-every-engineering-team-needs-in-2025-2ig4</guid>
      <description>&lt;p&gt;Engineering in 2025 is fast, complex, and relentless. You're expected to ship faster, stay secure, and keep everything running smoothly all at once.&lt;/p&gt;

&lt;p&gt;The secret weapon? &lt;strong&gt;&lt;em&gt;The right tools&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4ktpjoxb4wgnj3imctc.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4ktpjoxb4wgnj3imctc.gif" alt="ohreally" width="500" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve rounded up 10 essential tools that top engineering teams are using to work smarter, automate more, and stay ahead. From AI-powered code reviews to seamless CI/CD and collaboration, this list cuts through the noise.&lt;/p&gt;

&lt;p&gt;&lt;a id="1-entelligence-ai-ai-code-review-developer-productivity"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. &lt;a href="https://dub.sh/Zc2ppmU" rel="noopener noreferrer"&gt;Entelligence AI – AI Code Review &amp;amp; Developer Productivity&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27p7iixqtumf1dcxwpe4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27p7iixqtumf1dcxwpe4.gif" alt="entelligenceai" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An AI-driven code review and documentation assistant that analyzes entire codebases. Entelligence automates pull-request reviews, generates contextual comments, and keeps documentation in sync with code. Its agents catch complex bugs early (claiming ~70% more bugs found) and accelerate merges (up to 80% faster) by understanding cross-file context. In 2025, as AI matures, tools like Entelligence are critical for reducing manual review overhead and ensuring up-to-date docs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Deep codebase analysis for context-aware reviews; PR summaries and smart comments; quick-fix suggestions; self-updating documentation (Entelligence “turns code into clear docs” on each commit); performance dashboards (cycle time, team output); high security (SOC2-compliant, optional self-hosting, no code training). By automating reviews and docs, teams ship features faster and spend less time chasing context.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; enterprise engineering teams with large or complex codebases who need to scale code reviews and maintain live documentation using AI.&lt;/p&gt;

&lt;p&gt;For more information, visit the official &lt;a href="https://dub.sh/zGT7qYZ" rel="noopener noreferrer"&gt;docs&lt;/a&gt;, and for even more complex examples, see the &lt;a href="https://dub.sh/Zc2ppmU" rel="noopener noreferrer"&gt;repository's&lt;/a&gt; example sections.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Feel free to star and contribute to the repositories.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="2-sonarqube-code-quality-security-analysis"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. &lt;a href="https://www.sonarsource.com/products/sonarqube/" rel="noopener noreferrer"&gt;SonarQube – Code Quality &amp;amp; Security Analysis&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3y4ieizqehgijv60zuj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3y4ieizqehgijv60zuj.gif" alt="sonarqube" width="245" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An open-source platform for continuous static code analysis. SonarQube automatically scans codebases (20+ languages) to find bugs, security vulnerabilities, and code smells. It enforces quality gates before merges to ensure standards are met. In 2025, with security and clean code more important than ever, SonarQube helps teams integrate rigorous quality checks into CI/CD.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Static code analysis for bugs, vulnerabilities, and maintainability issues; Quality Gates that block builds if thresholds fail; multi-language support; customizable rule sets; CI/CD integration (Jenkins, GitHub Actions, etc.); and historical trend tracking for technical debt. It gives teams clear dashboards of code health, reducing long-term maintenance costs by catching issues early.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DevOps teams needing automated code-review enforcement and security checks in their build pipelines to maintain clean, reliable code.&lt;/p&gt;

&lt;p&gt;Check the documentation here: &lt;a href="https://docs.sonarsource.com/sonarqube-server/latest/" rel="noopener noreferrer"&gt;SonarQube&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="3-linearb-engineering-productivity-intelligence"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. &lt;a href="https://linearb.io/" rel="noopener noreferrer"&gt;LinearB – Engineering Productivity Intelligence&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9enlguxexc8goznro0u.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9enlguxexc8goznro0u.gif" alt="Linearb" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A DevOps intelligence platform (“engineering productivity platform”) that aggregates data from repos, CI/CD, and project tools. LinearB provides real-time visibility into metrics like cycle time, PR size, merge frequency, and bug ratio. It also embeds AI-driven automations in workflows. For example, customers have applied LinearB bots to ~35% of PRs, saving 321 developer-hours per month. In 2025, engineering teams use LinearB to optimize processes and measure team performance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Real-time dashboards with industry-standard metrics (DORA/SPACE); workflow automations (auto-merge policies, PR bots, compliance checks); developer experience surveys; resource forecasting; and AI governance controls. Built-in reports highlight bottlenecks, predict delivery dates, and quantify impact. By automating repetitive tasks and providing data-driven insights, LinearB boosts velocity and developer satisfaction.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Engineering managers and DevOps teams at large enterprises who want to monitor team health, enforce standards, and reduce manual process overhead through analytics and automation.&lt;/p&gt;

&lt;p&gt;For Developer Experience: &lt;a href="https://linearb.io/platform/developer-experience" rel="noopener noreferrer"&gt;LinearB&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="4-jira-atlassian-agile-project-management"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. &lt;a href="https://www.atlassian.com/software/jira/agile" rel="noopener noreferrer"&gt;Jira (Atlassian) – Agile Project Management&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdjqq059c09thtb26h9b.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdjqq059c09thtb26h9b.gif" alt="jira" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The industry-leading agile project tracker for software teams. Jira lets teams plan and track work with backlogs, Scrum/Kanban boards, and sprint planning. It keeps all tasks in one place so stakeholders see who’s doing what. In 2025, Jira remains central to enterprise development, aligning engineering tasks with business goals.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Customizable Scrum and Kanban boards; backlog management for sprint planning; dependency/timeline (roadmap) views; issue and task tracking with rich metadata; cross-team release calendars; and advanced reporting (burn-down, velocity, capacity). Jira also supports automation rules (e.g. issue transitions) and thousands of integrations (Slack, GitHub, CI tools, etc.). These features streamline coordination, improve visibility, and speed delivery.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any software development team practicing Agile/Scrum at scale, needing a single system to manage backlogs, sprints, and releases across multiple teams.&lt;/p&gt;

&lt;p&gt;Click Here for Documentation: &lt;a href="https://confluence.atlassian.com/jira" rel="noopener noreferrer"&gt;Jira Atlassian&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="5-github-actions-cicd-automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. &lt;a href="https://github.blog/enterprise-software/ci-cd/build-ci-cd-pipeline-github-actions-four-steps/" rel="noopener noreferrer"&gt;GitHub Actions – CI/CD &amp;amp; Automation&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenpii6qr3ofqgdsh8wjm.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenpii6qr3ofqgdsh8wjm.gif" alt="githubactions" width="500" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub’s native CI/CD platform for automating software workflows. It allows developers to define build, test, and deployment pipelines as YAML files in their repos. Because it’s fully integrated with GitHub, teams can trigger workflows on commits, PRs, and releases. In 2025, GitHub Actions is a go-to tool as more projects live on GitHub and demand seamless end-to-end automation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Workflows-as-code (YAML) in GitHub repositories; extensive Action marketplace of community tools; matrix builds and Docker/container support; secret and environment management; and self-hosted runner support for on-prem hardware. Actions simplify CI/CD setup and enable reuse of workflows across projects. Continuous testing and deployment become frictionless, improving release velocity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Development teams using GitHub for code hosting who want built-in CI/CD without leaving GitHub, enabling fast, reliable builds and deployments.&lt;/p&gt;

&lt;p&gt;Click Here to Know More: &lt;a href="https://docs.github.com/en/actions/about-github-actions/understanding-github-actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="6-jenkins-x-cloud-native-cicd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. &lt;a href="https://jenkins-x.io/" rel="noopener noreferrer"&gt;Jenkins X – Cloud-Native CI/CD&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquj80n0xqgprcpelyxns.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquj80n0xqgprcpelyxns.gif" alt="cloudnative" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An open-source Kubernetes-native CI/CD platform. Jenkins X automates CI/CD pipelines using modern GitOps principles. It generates Tekton pipelines for builds and promotions, and manages environments via Git repositories. In 2025, Jenkins X is ideal for teams running containerized microservices, as it handles complex Kubernetes deployments out of the box.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: GitOps-based pipeline management (complete CI/CD defined in Git); Automated Preview Environments spun up for each pull request; built-in support for multi-cluster deployments; automated pipeline generation (no deep pipeline coding required); and ChatOps feedback (Jenkins X comments on PRs/issues). These features accelerate testing and review cycles in cloud-native workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Teams operating on Kubernetes who need an opinionated CI/CD system with GitOps flows (e.g. development teams deploying microservices with preview environments and automated promotions).&lt;/p&gt;

&lt;p&gt;Click Here for Documentation: &lt;a href="https://jenkins-x.io/v3/about/" rel="noopener noreferrer"&gt;Jenkins X&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="7-cypress-front-end-test-automation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. &lt;a href="https://www.cypress.io/" rel="noopener noreferrer"&gt;Cypress – Front-end Test Automation&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydxxytganjen5mt1aumr.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydxxytganjen5mt1aumr.gif" alt="cypress" width="480" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A modern end-to-end testing framework for web apps. Cypress runs tests directly in the browser, giving developers fast, reliable feedback while they code. Its real-time reload and visual debugging (“time travel”) make test authoring intuitive. In 2025, Cypress continues to be widely adopted by enterprises for testing modern JavaScript front-ends and ensuring high-quality user experiences.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Real-time browser execution with automatic waiting (reducing flakiness); powerful, promise-free API for writing tests; snapshot time-travel for debugging; network stubbing; and test parallelization via Cypress Cloud. It integrates easily into CI pipelines and provides detailed failure logs. These capabilities help teams quickly catch regressions in complex UIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Front-end development teams who need robust, developer-friendly end-to-end and component testing for web applications and want to integrate tests into their CI/CD.&lt;/p&gt;

&lt;p&gt;Click Here for Documentation: &lt;a href="https://docs.cypress.io/app/get-started/why-cypress" rel="noopener noreferrer"&gt;Cypress&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="8-confluence-documentation-knowledge-base"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  8. &lt;a href="https://confluence.atlassian.com/doc/use-confluence-as-a-knowledge-base-218275154.html" rel="noopener noreferrer"&gt;Confluence – Documentation &amp;amp; Knowledge Base&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj8qs2kl52s1jgufroqo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj8qs2kl52s1jgufroqo.gif" alt="confluence" width="400" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A collaborative wiki and documentation platform. Confluence lets teams create, organize, and share rich content (text, code snippets, images, diagrams) in pages and spaces. It maintains page histories and comments so information stays current. In 2025, Confluence is key for centralizing documentation (design docs, runbooks, meeting notes) and capturing institutional knowledge across globally distributed teams.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Intuitive WYSIWYG editor with macros (to embed code, diagrams, etc.); pre-built templates (how-tos, architectures); page versioning and inline comments; live collaborative editing; and AI-augmented search (auto-suggests answers, defines acronyms, summarizes pages). Confluence integrates seamlessly with Jira and Slack, making it easy to link documentation to tasks. It keeps information discoverable and up-to-date, reducing onboarding time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Engineering and cross-functional teams needing a scalable intranet to author specs, policies, and knowledge repositories in a single searchable space.&lt;/p&gt;

&lt;p&gt;Click Here for Documentation: &lt;a href="https://confluence.atlassian.com/doc/use-confluence-as-a-knowledge-base-218275154.html" rel="noopener noreferrer"&gt;Confluence&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="9-slack-team-collaboration-communication"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  9. &lt;a href="https://slack.com/resources/collections/slack-for-team-collaboration" rel="noopener noreferrer"&gt;Slack – Team Collaboration &amp;amp; Communication&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32zj4249upmezx94s96b.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32zj4249upmezx94s96b.gif" alt="team collaboration" width="480" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A real-time collaboration hub that organizes conversations into channels. Slack lets teams chat, call, and share files in context. It acts as a “digital HQ” where projects stay in sync without email. In 2025, Slack remains a central tool for developer collaboration, connecting people and tools (over 2,600 app integrations) and surfacing answers via powerful search.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Persistent channels (group or project-based) and threads for organized dialogue; direct messaging and huddles for ad-hoc voice/video calls; file and code snippet sharing; workflow automations (custom bots and Slack’s Workflow Builder); and enterprise search across all messages/files. Its searchable history and smart AI search make information easy to retrieve. Slack Connect and guest channels also enable secure collaboration with external partners. These features speed up decision-making and keep teams aligned.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Software teams needing instant collaboration and integration with development tools. Slack is used to replace email and meetings with chat-driven workflows (e.g. CI build notifications, quick troubleshooting discussions, etc.).&lt;/p&gt;

&lt;p&gt;&lt;a id="10-stack-overflow-for-teams-knowledge-sharing-platform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  10. &lt;a href="https://stackoverflow.com/questions" rel="noopener noreferrer"&gt;Stack Overflow for Teams – Knowledge Sharing Platform&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nathci5apdk05iivpfs.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nathci5apdk05iivpfs.gif" alt="Stack Overflow for Teams" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A private Q&amp;amp;A knowledge base for organizations. It brings the familiar Stack Overflow interface inside the company so developers can post questions and get expert answers. The platform surfaces human-verified solutions and insights. In 2025, many enterprises use it to capture tribal knowledge (best practices, architecture decisions) that might otherwise be lost.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Structured Q&amp;amp;A (questions, answers, upvotes) ensuring high signal-to-noise; Articles for longer-form documentation; full-text search with tags and filters; content health monitoring (alerting on outdated posts); and gamification to encourage participation. It also integrates with Slack, IDEs (e.g. VS Code), and other tools, so teams can search or post questions without context-switching. This centralizes team wisdom and reduces repeated “how-to” queries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Engineering and DevOps teams that want to retain and share specialized technical knowledge internally, building a living Q&amp;amp;A repository to onboard new hires faster and solve problems collectively.&lt;/p&gt;

&lt;p&gt;Click Here for Documentation: &lt;a href="https://stackoverflow.com/documentation" rel="noopener noreferrer"&gt;Stack Overflow&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="11-gitlab-complete-devsecops-platform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  11. &lt;a href="https://about.gitlab.com/" rel="noopener noreferrer"&gt;GitLab – Complete DevSecOps Platform&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa51mlz4xukcijsfvuzx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa51mlz4xukcijsfvuzx.gif" alt="gitlab" width="480" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitLab is an all-in-one DevSecOps platform that brings source control, CI/CD, security scanning, planning, and monitoring into a single application. Unlike traditional toolchains where teams juggle multiple platforms, GitLab simplifies development by unifying the software delivery lifecycle. In 2025, GitLab stands out for its tight integration of development and security, making it a powerful choice for enterprise-scale teams looking to streamline collaboration, automation, and governance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key features and benefits: Integrated Git-based source control; built-in CI/CD pipelines with auto-scaling runners; issue tracking and agile planning boards; code review and merge request workflows; static/dynamic security scans (SAST, DAST, container scanning, etc.); and value stream analytics. GitLab also supports Infrastructure as Code (IaC), GitOps, and Kubernetes deployments out of the box. By keeping the entire lifecycle—from ideation to production—on a single platform, it reduces tool fragmentation, enhances visibility, and accelerates delivery.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enterprise engineering teams seeking an end-to-end DevSecOps platform to manage everything from project planning to secure deployments, especially those operating in regulated or fast-moving environments who need scalability and compliance built-in.&lt;/p&gt;

&lt;p&gt;Click Here for Documentation: &lt;a href="https://docs.gitlab.com/" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In 2025, the best engineering teams aren’t just moving fast; they’re moving smart. The right tools don’t just save time; they enhance code quality, boost developer happiness, and future-proof your workflows. Whether you're optimizing CI/CD, automating reviews, managing documentation, or enabling cross-team collaboration, each tool on this list helps you build better software, faster.&lt;/p&gt;

&lt;p&gt;👉 Explore these tools, level up your stack, and give your team the edge it needs to thrive in a high-velocity world. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Let me know if you use another awesome platform or assistant for your development team!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>devops</category>
      <category>javascript</category>
    </item>
    <item>
      <title>6 Ways AI Can Improve Your Python Code(Tested!)</title>
      <dc:creator>Pankaj Singh</dc:creator>
      <pubDate>Tue, 17 Jun 2025 10:35:37 +0000</pubDate>
      <link>https://forem.com/entelligenceai/6-ways-ai-can-improve-your-python-codetested-336p</link>
      <guid>https://forem.com/entelligenceai/6-ways-ai-can-improve-your-python-codetested-336p</guid>
      <description>&lt;p&gt;Let’s face it, today’s enterprise dev teams are expected to move fast and write flawless Python code. Isn't it? I know the struggle.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;That’s a tough combo.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But here’s the good news: AI isn’t just hype anymore, it’s quietly transforming how we build and maintain software. I’ve seen it firsthand. With the right tools, you can automate the boring stuff, catch bugs before they bite, and even tighten up your code reviews without burning out your team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9pmiqutxbh4clu7nrlr.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9pmiqutxbh4clu7nrlr.gif" alt="Python code" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, I’ll walk you through six powerful ways AI can instantly boost your Python code quality. From &lt;a href="https://dub.sh/PyTyY3V" rel="noopener noreferrer"&gt;AI-powered review agents&lt;/a&gt; to smarter test generation, these techniques are already helping top teams ship cleaner, more reliable code, faster.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Automated AI-Powered Code Reviews
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifnv4dr3pfy33hcdcp9l.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifnv4dr3pfy33hcdcp9l.gif" alt="ai review" width="480" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Manual code reviews can be slow and inconsistent, and minor errors are easy to overlook, especially at enterprise scale. AI changes this by automating large parts of the review process. For example, intelligent code review agents can scan an entire pull request in seconds and flag issues like style deviations, potential bugs, or missing error checks. I personally use Entelligence AI’s PR Review agent and recommend trying it if you want more consistent reviews. Tools like &lt;a href="https://dub.sh/PyTyY3V" rel="noopener noreferrer"&gt;Entelligence AI&lt;/a&gt; run inside your development workflow (e.g. VS Code) to give real-time feedback on Python code, making reviews faster and more thorough. Studies of AI code review show that these systems complete reviews “in a fraction of the time,” analyzing vast amounts of code quickly and making actionable recommendations.&lt;/p&gt;

&lt;p&gt;Critically, AI-driven reviews are also much more consistent than purely human ones. An AI agent never gets tired or distracted, so it applies the same coding rules uniformly across every file. This means common errors (like missing null checks or inconsistent naming) get flagged reliably. Many enterprise tools use machine learning to automatically identify defects, security vulnerabilities, and performance issues in Python and other languages. By integrating such AI code review services into your &lt;a href="https://www.ibm.com/think/topics/ci-cd-pipeline" rel="noopener noreferrer"&gt;CI/CD pipeline&lt;/a&gt;, every pull request can be scanned for problems before it’s merged. In short, automated AI reviews speed up the feedback loop and help enforce team coding standards, leading to cleaner Python code with less manual effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Advanced Static Analysis and Bug Detection
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk73oqcycmhstna2obrp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk73oqcycmhstna2obrp.gif" alt="bug detection" width="480" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Beyond high-level reviews, AI excels at deep static analysis of code. Modern AI analysis tools use machine learning to detect subtle bugs and security flaws that traditional linters or human reviewers might miss. For example, an AI model can trace through complex code paths and identify edge-case errors or race conditions. One research analysis of AI-based code review notes that these tools are “highly effective at detecting errors that are difficult to spot through manual review.” In practice, integrating AI-based scanners into your workflow means every commit is checked for hard-to-find issues. These tools have been trained on millions of code examples, so they catch patterns of bugs (like SQL injection risks or memory leaks) even in unfamiliar code.&lt;/p&gt;

&lt;p&gt;Enterprise surveys confirm this benefit: developers report that AI tools help them deliver more secure software with higher quality. For instance, the &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;/&lt;a href="https://www.accenture.com/us-en" rel="noopener noreferrer"&gt;Accenture&lt;/a&gt; research found 90% of developers saw an improvement in code security and quality when using AI-assisted coding tools. In practice, you might configure an AI scanner to run on every pull request, ensuring that even minor security or reliability issues (say, unchecked exceptions or unused variables) are caught immediately. By detecting these bugs early and automatically, AI-driven static analysis significantly reduces the risk of defects slipping into production, making your Python applications more robust.&lt;/p&gt;
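
&lt;p&gt;To make the idea concrete, here is a toy sketch of rule-based static analysis in pure Python (the &lt;code&gt;find_bare_excepts&lt;/code&gt; helper is hypothetical, not taken from any tool mentioned above): it parses a module into a syntax tree and flags bare &lt;code&gt;except:&lt;/code&gt; clauses, a classic reliability smell. ML-based scanners go far beyond hand-written rules like this, but the basic workflow (parse, inspect, report) is the same.&lt;/p&gt;

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return the 1-based line numbers of bare except clauses in `source`."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # A bare "except:" has no exception type attached to its handler.
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

snippet = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(snippet))  # [4]: the bare except sits on line 4
```

&lt;p&gt;A check like this drops straight into CI: fail the build whenever the returned list is non-empty.&lt;/p&gt;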

&lt;h2&gt;
  
  
  3. AI-Generated Testing and Coverage
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6biwo6ron3dsx3ny6fg2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6biwo6ron3dsx3ny6fg2.gif" alt="testing" width="480" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Writing comprehensive unit and integration tests is one of the best ways to ensure code quality – but it’s also labor-intensive. AI can help automate this tedious task. New AI assistants (like &lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot&lt;/a&gt;) can analyze &lt;a href="https://www.python.org/" rel="noopener noreferrer"&gt;Python&lt;/a&gt; functions and automatically generate meaningful test cases. As one developer puts it, “progress in AI has opened doors to automated test generation…presenting developers with an innovative method for creating code tests.” With these tools, you can often click a button or prompt the AI with a function, and it will produce a suite of unit tests covering normal and edge-case inputs. The generated tests can be easily reviewed and tweaked, saving developers hours of manual writing.&lt;/p&gt;

&lt;p&gt;The benefits are clear: automated test generation improves coverage and catches bugs early. In fact, respondents in a GitHub survey specifically noted “improved test case generation” as a key advantage of AI coding tools. By letting AI propose tests, teams find and fix hidden logic errors and regressions much sooner. Crucially for enterprises, AI test tools integrate into IDEs and pipelines, so you can automatically generate or update tests as part of development. The result is higher confidence in your code – every new Python module gets thoroughly tested by AI, ensuring defects are caught before they reach production.&lt;/p&gt;
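
&lt;p&gt;As a concrete illustration of the shape of such output: both &lt;code&gt;slugify&lt;/code&gt; and its tests below are hypothetical, but the tests mirror what AI assistants typically generate for a small function: one happy-path case plus edge cases for empty input, extra whitespace, and punctuation.&lt;/p&gt;

```python
import re

def slugify(title: str) -> str:
    """Turn an article title into a URL-safe slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# The kind of test suite an AI assistant might propose for slugify():
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_empty():
    assert slugify("") == ""

def test_slugify_punctuation_and_whitespace():
    assert slugify("  AI & Python: 6 Tips!  ") == "ai-python-6-tips"

# pytest would collect these automatically; here we just run them directly.
for test in (test_slugify_basic, test_slugify_empty,
             test_slugify_punctuation_and_whitespace):
    test()
print("all tests passed")
```

&lt;p&gt;Reviewing a generated suite like this takes minutes; writing it from scratch takes far longer, which is exactly where the coverage gains come from.&lt;/p&gt;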

&lt;h2&gt;
  
  
  4. AI-Driven Documentation and Code Consistency
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4tgni2qum081dvgmz0f.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4tgni2qum081dvgmz0f.gif" alt="documentation" width="480" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clean, maintainable code needs good documentation, and AI is starting to automate that too. LLMs like GPT-4 can analyze a Python function’s code and generate explanatory docstrings or comments in natural language. These models work by understanding the function’s logic and translating it into readable descriptions. This means trivial documentation tasks – like writing the “Args/Returns” in a docstring – can be done in seconds by AI rather than minutes by hand. The payoff is huge: better documentation makes the codebase easier to understand and reduces bugs caused by misusing a function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dub.sh/PyTyY3V" rel="noopener noreferrer"&gt;AI documentation tools&lt;/a&gt; also enforce consistency across the codebase. They apply the same style and naming conventions everywhere, so all docstrings or comments follow unified templates. For example, if your team has a standard format for function descriptions, an AI tool will stick to that format in every file. This uniform approach saves hours of manual editing and makes the code easier to skim and review. Entelligence AI notes that AI generation yields “documentation that would take hours to write manually…in seconds,” and ensures consistent standards across the project. By integrating an AI doc generator into reviews or CI, you can auto-generate or validate docs on each commit. In short, AI-powered commenting and docstring generation keep your Python code self-explanatory and maintainable at enterprise scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. AI-Powered Developer Assistants and Autocompletion
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbaihjwz7kzf4kd8p0uyj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbaihjwz7kzf4kd8p0uyj.gif" alt="AI ASSISTANCE" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI isn’t just for post-checks; it can help you write better code from the get-go. Intelligent coding assistants such as GitHub Copilot, Entelligence AI, &lt;a href="https://www.tabnine.com/" rel="noopener noreferrer"&gt;Tabnine&lt;/a&gt;, or &lt;a href="https://windsurf.com/" rel="noopener noreferrer"&gt;Windsurf&lt;/a&gt; run inside your IDE and suggest code snippets or completions as you type. For example, as you start writing a function, Copilot might auto-complete it with the correct loop or library call. This on-the-fly advice often leads to more idiomatic, error-free code. AIMultiple describes Copilot as “an AI-powered code completion tool that assists developers by suggesting code snippets and entire functions as they type”. By catching simple mistakes (like syntax errors or wrong API usage) instantly, these tools reduce the number of bugs in the code you write.&lt;/p&gt;

&lt;p&gt;More importantly, AI pair programmers accelerate development and boost confidence. In an enterprise study with Accenture, developers using Copilot coded up to 55% faster, and 85% reported feeling more confident in their code quality. In practical terms, this means teams spend less time on mundane coding tasks and more time on design and complex problems. Because these AI assistants learn from millions of open-source examples, they also inject best practices automatically, for instance by suggesting secure coding patterns or efficient data structures. The bottom line is that using an AI coding assistant during development helps enforce quality by preventing issues early and speeding up coding, leading to cleaner Python code right from the first draft.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. AI-Powered Code Refactoring and Maintenance
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9nxiums1xlrrukqkohs.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9nxiums1xlrrukqkohs.gif" alt="Code" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Large enterprise codebases accumulate technical debt over time. Refactoring (cleaning up and reorganizing code) is essential but time-consuming. Here too, AI can help. Advanced “AI coding agents” can analyze your Python code and recommend systematic refactors. For example, they might detect that certain code blocks are duplicated or too complex, and suggest a function to encapsulate that logic. Zencoder’s guide on refactoring notes that AI agents can “analyze vast amounts of code in the blink of an eye” and “quickly identify areas ripe for improvement, saving developers countless hours of manual review.” This efficiency boost means you can safely refactor large sections of code under AI guidance, freeing engineers to focus on high-level design rather than tedious cleanup.&lt;/p&gt;
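&lt;p&gt;To make the idea concrete, here is a toy version of the complexity signal such an agent might use: counting branching statements per function with the standard-library &lt;code&gt;ast&lt;/code&gt; module and flagging anything over a threshold. The function name and the threshold of 3 are arbitrary choices for this sketch:&lt;/p&gt;

```python
import ast

def refactor_candidates(source: str, max_branches: int = 3) -> list:
    # Flag functions whose branching (if/for/while/try) exceeds a
    # threshold: a crude stand-in for the richer complexity metrics
    # an AI refactoring agent would consider.
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            branches = sum(
                isinstance(n, (ast.If, ast.For, ast.While, ast.Try))
                for n in ast.walk(node)
            )
            if branches > max_branches:
                flagged.append((node.name, branches))
    return flagged

sample = """
def tangled(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                while i:
                    i -= 1
    return x

def simple(x):
    return x + 1
"""
print(refactor_candidates(sample))  # prints [('tangled', 4)]
```

&lt;p&gt;An AI agent goes further than this score: it proposes the actual extraction or simplification, but a metric like this is how candidates get surfaced in the first place.&lt;/p&gt;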

&lt;p&gt;AI refactoring tools also ensure consistency and accuracy during large-scale changes. Because the AI applies the same transformation rules everywhere, your code’s style and structure become more uniform. For instance, if your team decides on a new class naming convention or wants to replace a deprecated API, an AI agent can update it across the entire codebase without missing a spot. Importantly, these tools track code dependencies, so they avoid introducing new bugs. As Zencoder explains, “AI agents are less prone to errors…they can meticulously analyze code dependencies and potential impacts, reducing the risk of introducing bugs during the refactoring process.” By periodically running AI-driven refactoring passes, enterprise teams can keep their Python code clean, well-structured, and up-to-date with modern standards.&lt;/p&gt;
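&lt;p&gt;The mechanical core of a “replace a deprecated API everywhere” pass can be sketched with the standard-library &lt;code&gt;ast.NodeTransformer&lt;/code&gt;, which rewrites call sites in the syntax tree rather than doing fragile text search-and-replace. The names &lt;code&gt;legacy_sum&lt;/code&gt; and &lt;code&gt;fast_sum&lt;/code&gt; below are made up for the example:&lt;/p&gt;

```python
import ast

class RenameCall(ast.NodeTransformer):
    # Rewrite calls to a deprecated function name across a module:
    # the kind of mechanical transform an AI refactoring agent
    # applies identically to every file it touches.

    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)  # also rewrite nested calls
        if isinstance(node.func, ast.Name) and node.func.id == self.old:
            node.func.id = self.new
        return node

source = "total = legacy_sum([1, 2, 3])\nprint(legacy_sum([4, 5]))\n"
tree = RenameCall("legacy_sum", "fast_sum").visit(ast.parse(source))
print(ast.unparse(tree))
```

&lt;p&gt;Working on the tree means comments out, only genuine call sites change (not strings or docstrings that happen to mention the old name), which is exactly the consistency argument made above.&lt;/p&gt;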

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu96yvlvih5kjubexvsv6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu96yvlvih5kjubexvsv6.gif" alt="pheww" width="500" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI is no longer just a nice-to-have; it’s quickly becoming a must-have for any team serious about writing high-quality Python code at scale. In this article, we explored six powerful ways AI can elevate your code: from automated code reviews and smart static analysis to AI-assisted testing, documentation, refactoring, and intelligent coding companions.&lt;br&gt;
What’s exciting is that these tools don’t disrupt your workflow; they enhance it. They quietly catch bugs before they reach production, enforce clean architecture, and give your team superpowers without adding extra meetings or manual effort.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;So, what’s next?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Start small. Pick just one area: maybe plug an AI reviewer into your pull requests, or let an &lt;a href="https://dub.sh/PyTyY3V" rel="noopener noreferrer"&gt;AI code agent&lt;/a&gt; assist you as you code. Give it a week. You’ll likely be surprised at how much smoother things get: fewer bugs, faster reviews, more confidence in every release.&lt;br&gt;
AI won’t replace great developers, but it can make every developer better.&lt;/p&gt;

&lt;p&gt;Now’s the time to embrace it. Experiment. Iterate. And let AI take your Python code quality to the next level.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Let me know if I’ve missed something!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>python</category>
      <category>beginners</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
