<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: gauravsoni</title>
    <description>The latest articles on Forem by gauravsoni (@gauravsoni1992).</description>
    <link>https://forem.com/gauravsoni1992</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1037572%2F439220f7-3e45-41fa-9606-fa45244da2df.jpg</url>
      <title>Forem: gauravsoni</title>
      <link>https://forem.com/gauravsoni1992</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gauravsoni1992"/>
    <language>en</language>
    <item>
      <title>Expectations vs Reality: Adopting AI (LLM) in Product Engineering</title>
      <dc:creator>gauravsoni</dc:creator>
      <pubDate>Sat, 05 Apr 2025 11:28:29 +0000</pubDate>
      <link>https://forem.com/gauravsoni1992/expectations-vs-reality-adopting-llms-in-product-engineering-13gc</link>
      <guid>https://forem.com/gauravsoni1992/expectations-vs-reality-adopting-llms-in-product-engineering-13gc</guid>
      <description>&lt;p&gt;As LLM become part of everyday engineering workflows, it's critical to understand what they can and cannot do — especially in product engineering, where business context matters deeply.&lt;/p&gt;

&lt;p&gt;When you ask LLM to generate code for a particular business context (something only you know entirely), LLM don't "know" your business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLM can only predict based on:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Common patterns from the code that was trained on&lt;/li&gt;
&lt;li&gt;Your prompt (whatever hints and context you give to LLM)&lt;/li&gt;
&lt;li&gt;General best practices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;BUT — LLM cannot magically "know" your private business rules unless you clearly describe them.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;LLMs predict code that sounds correct. They don't understand or validate your real-world needs unless you explain them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;How LLMs work when you ask for code&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prediction:&lt;/strong&gt;
The model predicts the most likely next tokens that look like good code for the problem you described.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pattern matching:&lt;/strong&gt;
It copies and adapts patterns from its training data that seem to fit your request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filling gaps:&lt;/strong&gt;
If you leave out details, the model fills them in with plausible guesses, and those guesses might be wrong.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;So, how do you get reliable code from an LLM?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Give very detailed prompts&lt;/strong&gt;
Include your business rules, data structures, constraints, and edge cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review critically after generation&lt;/strong&gt;
After the LLM gives you code, you must review and test it to ensure it fits your business.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up test cases&lt;/strong&gt;
Always ask for code plus tests that verify the logic according to your business needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterative correction&lt;/strong&gt;
You might need to correct, guide, or refine the code a few times by giving feedback ("No, this function also needs to handle XYZ").&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Analogy&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Asking an LLM to write perfect business code without full context is like asking a lawyer who has never heard your case to write a court judgment: they'll guess based on experience, but the real facts must come from you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's a simple reality check for anyone adopting AI into their engineering teams:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expectations vs Reality&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs write perfect business-specific code&lt;/strong&gt;&lt;br&gt;
➡️ LLMs predict likely code patterns based on general training data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs understand my business domain&lt;/strong&gt;&lt;br&gt;
➡️ LLMs do not know your private business context unless you explain it explicitly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs validate outputs against business rules&lt;/strong&gt;&lt;br&gt;
➡️ LLMs only predict outputs; you must validate and test separately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs reduce the need for strong specs&lt;/strong&gt;&lt;br&gt;
➡️ LLMs need more precise and stricter specifications to perform well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs save time without supervision&lt;/strong&gt;&lt;br&gt;
➡️ LLMs amplify productivity but require human review to ensure correctness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs innovate or invent new ideas&lt;/strong&gt;&lt;br&gt;
➡️ LLMs recombine existing knowledge; they don't invent beyond what they have seen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs replace developers or product engineers&lt;/strong&gt;&lt;br&gt;
➡️ LLMs are assistants, not replacements; judgment and domain expertise stay human-driven.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bigger models are always better&lt;/strong&gt;&lt;br&gt;
➡️ Smaller fine-tuned models often perform better for specific business use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Mindset Shift for Product Engineers&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Think of LLMs as prediction engines, not knowledge engines.&lt;/li&gt;
&lt;li&gt;Use them for draft generation, automation, and exploration, but own the final quality yourself.&lt;/li&gt;
&lt;li&gt;Training data ≠ your company’s context.

&lt;ul&gt;
&lt;li&gt;If it’s not in the prompt, it’s not in the model’s head.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Good prompts = good outputs.

&lt;ul&gt;
&lt;li&gt;Better input leads to much better results.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tasks where LLMs are very effective:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Drafting code templates&lt;/strong&gt;&lt;br&gt;
➡️ Fast at generating boilerplate, CRUD operations, and API wrappers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating documentation&lt;/strong&gt;&lt;br&gt;
➡️ Summarises, formats, and structures text very quickly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generating test cases&lt;/strong&gt;&lt;br&gt;
➡️ Can suggest unit/integration tests if logic is clear.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exploring alternatives&lt;/strong&gt;&lt;br&gt;
➡️ Provides multiple ways to solve a coding or design problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speeding up research&lt;/strong&gt;&lt;br&gt;
➡️ Quickly summarises concepts, tools, libraries, and frameworks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Idea expansion&lt;/strong&gt;&lt;br&gt;
➡️ Good at suggesting more use cases, edge cases, or features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Writing first drafts of emails, specs, and user stories&lt;/strong&gt;&lt;br&gt;
➡️ Useful for early rough drafts to save time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Basic data transformation scripts&lt;/strong&gt;&lt;br&gt;
➡️ Good at SQL queries, simple ETL scripts, and data formatting.&lt;/p&gt;
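&lt;p&gt;For instance, a typical transformation an LLM drafts well is flattening nested records into a report shape. The field names below are invented for illustration, not from any real schema:&lt;/p&gt;

```javascript
// Flatten nested order records into a report-friendly shape.
// Field names are illustrative, not from any real schema.
const orders = [
  { id: 1, customer: "Acme", items: [{ price: 20 }, { price: 5 }] },
  { id: 2, customer: "Globex", items: [{ price: 12 }] },
];

const report = orders.map((o) => ({
  id: o.id,
  customer: o.customer,
  // Sum line-item prices per order
  total: o.items.reduce((sum, i) => sum + i.price, 0),
}));

console.log(JSON.stringify(report));
// [{"id":1,"customer":"Acme","total":25},{"id":2,"customer":"Globex","total":12}]
```

&lt;p&gt;Even here, a human still has to confirm that "total" matches the business definition (taxes, refunds, currency).&lt;/p&gt;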

&lt;p&gt;&lt;strong&gt;Tasks that require human ownership:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding deep business context&lt;/strong&gt;&lt;br&gt;
➡️ LLMs can't "know" your company’s strategy, policies, or customer expectations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Validating correctness&lt;/strong&gt;&lt;br&gt;
➡️ AI-generated code, tests, or documents still need human review and adjustment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architectural decisions&lt;/strong&gt;&lt;br&gt;
➡️ LLMs can suggest, but real-world trade-offs must be handled by experienced engineers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security and compliance&lt;/strong&gt;&lt;br&gt;
➡️ LLMs may miss critical risks unless you guide them specifically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creative product thinking&lt;/strong&gt;&lt;br&gt;
➡️ True innovation — new product ideas and differentiation — still requires human creativity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prioritisation and trade-offs&lt;/strong&gt;&lt;br&gt;
➡️ AI doesn't "feel" urgency, politics, or customer pain points like humans do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cultural and communication nuances&lt;/strong&gt;&lt;br&gt;
➡️ Writing for internal stakeholders, clients, or executives needs human judgment on tone and sensitivity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One-line Summary&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;LLMs are power tools, not decision-makers.&lt;br&gt;
Use them to amplify your thinking, not replace it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;💬 I'd love to hear how you're blending LLMs into your engineering workflow! What challenges have you faced, and what wins have you seen?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>product</category>
      <category>developertools</category>
      <category>llm</category>
    </item>
    <item>
      <title>Solving 502 Errors in Microservices with Node.js and AWS ALB</title>
      <dc:creator>gauravsoni</dc:creator>
      <pubDate>Sat, 27 Apr 2024 11:18:07 +0000</pubDate>
      <link>https://forem.com/gauravsoni1992/solving-502-errors-in-microservices-using-nodejs-and-aws-elb-1mka</link>
      <guid>https://forem.com/gauravsoni1992/solving-502-errors-in-microservices-using-nodejs-and-aws-elb-1mka</guid>
      <description>&lt;p&gt;&lt;strong&gt;Understanding 502 Errors&lt;/strong&gt;&lt;br&gt;
A 502 Bad Gateway error occurs when a server acting as a gateway or proxy (like AWS Application Load Balancer or ALB) receives an invalid response from the upstream server (e.g., your Node.js application server). In microservices architectures, where multiple servers communicate through a load balancer, this error is common when server-to-server communication is interrupted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem: Timeout Mismatch&lt;/strong&gt;&lt;br&gt;
In my setup, AWS ALB was configured with a 30-second timeout, and the Node.js servers were handling requests using their default timeout settings. However, these default server timeout settings were not aligned with the ALB configuration, leading to unexpected connection terminations and resulting in 502 errors.&lt;/p&gt;

&lt;p&gt;To resolve this, the timeout settings on the Node.js servers must be configured to complement ALB’s timeout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Timeout Settings in Node.js&lt;/strong&gt;&lt;br&gt;
There are two critical timeout settings in Node.js that affect its interaction with AWS ALB:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;KeepAliveTimeout&lt;br&gt;
This setting determines how long the server keeps a connection open to allow additional requests from the same client. By default, Node.js sets this to 5 seconds, which is too short compared to ALB’s 30 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HeadersTimeout&lt;br&gt;
This setting specifies the maximum time the server waits for complete request headers before closing the connection. By default it is 60 seconds, and it should always remain longer than KeepAliveTimeout so that header parsing on a kept-alive connection is not cut off.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To prevent 502 errors, the KeepAliveTimeout should be set slightly longer than the ALB timeout, and the HeadersTimeout should exceed the KeepAliveTimeout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adjust Node.js Server Timeout Settings&lt;/strong&gt;&lt;br&gt;
Configure the timeout settings in your Node.js application server. Here’s an example of how to do this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const http = require("http");
const app = require("./app"); // Your Express or custom app setup

const server = http.createServer(app);

// Configure timeouts
server.keepAliveTimeout = 35000; // 35 seconds
server.headersTimeout = 40000;  // 40 seconds

server.listen(3000, () =&amp;gt; {
  console.log("Server running on port 3000");
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Apply these timeout settings consistently across all Node.js servers in the microservices architecture to prevent unexpected connection closures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why These Settings Work&lt;/strong&gt;&lt;br&gt;
The KeepAliveTimeout ensures the Node.js server keeps the connection alive long enough to match or slightly exceed ALB’s idle timeout.&lt;br&gt;
The HeadersTimeout ensures the server has ample time to process and respond to headers, preventing premature connection termination.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0wwoxwbf503fq8ictu8.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0wwoxwbf503fq8ictu8.webp" alt="Image description" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configuring the correct timeout settings in Node.js servers when used with AWS ALB is crucial in mitigating 502 errors.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>502</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
