<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Voskan Voskanyan</title>
    <description>The latest articles on Forem by Voskan Voskanyan (@voskan89).</description>
    <link>https://forem.com/voskan89</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3408927%2F643cafb1-cfac-4c45-8c16-c09ecf96990b.jpeg</url>
      <title>Forem: Voskan Voskanyan</title>
      <link>https://forem.com/voskan89</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/voskan89"/>
    <language>en</language>
    <item>
      <title>The 15-Minute Goroutine Leak Triage: Two Dumps, One Diff, Zero Guessing</title>
      <dc:creator>Voskan Voskanyan</dc:creator>
      <pubDate>Tue, 06 Jan 2026 18:16:02 +0000</pubDate>
      <link>https://forem.com/voskan89/the-15-minute-goroutine-leak-triage-two-dumps-one-diff-zero-guessing-1oi8</link>
      <guid>https://forem.com/voskan89/the-15-minute-goroutine-leak-triage-two-dumps-one-diff-zero-guessing-1oi8</guid>
      <description>&lt;p&gt;Goroutine leaks rarely announce themselves with a dramatic outage. They show up as "slowly getting worse":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;p95/p99 creeps up over an hour or two&lt;/li&gt;
&lt;li&gt;memory trends upward even though traffic is flat&lt;/li&gt;
&lt;li&gt;goroutine count keeps climbing and &lt;strong&gt;doesn’t return to baseline&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’ve been on-call long enough, you’ve seen the trap: people debate &lt;em&gt;why&lt;/em&gt; it’s happening before they’ve proven &lt;em&gt;what&lt;/em&gt; is accumulating.&lt;/p&gt;

&lt;p&gt;This post is a compact, production-first triage that I use to confirm a goroutine leak fast, identify the dominant stuck pattern, and ship a fix that holds.&lt;/p&gt;

&lt;p&gt;If you want the full long-form runbook with a root-cause catalog, hardening defaults, and a production checklist, I published it here:&lt;br&gt;&lt;br&gt;
&lt;a href="https://compile.guru/goroutine-leaks-production-pprof-tracing/" rel="noopener noreferrer"&gt;https://compile.guru/goroutine-leaks-production-pprof-tracing/&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  What a goroutine leak is (the only definition that matters in production)
&lt;/h2&gt;

&lt;p&gt;In production I don’t define a leak as "goroutines are high."&lt;/p&gt;

&lt;p&gt;A goroutine is &lt;em&gt;leaked&lt;/em&gt; when it outlives the request/job that created it and it has no bounded lifetime (no reachable exit path tied to cancellation, timeout budget, or shutdown).&lt;/p&gt;

&lt;p&gt;That framing matters because it turns debugging into lifecycle accounting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What started this goroutine?&lt;/li&gt;
&lt;li&gt;What is its exit condition?&lt;/li&gt;
&lt;li&gt;Why is the exit unreachable?&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Minute 0–3: confirm the signature (don’t skip this)
&lt;/h2&gt;

&lt;p&gt;Before you touch profiling, answer one question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is the system accumulating concurrency footprint without a matching increase in work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What I look at together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;QPS / job intake (flat or stable-ish)&lt;/li&gt;
&lt;li&gt;goroutines (upward slope)&lt;/li&gt;
&lt;li&gt;inuse heap / RSS (upward slope)&lt;/li&gt;
&lt;li&gt;tail latency (upward slope)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If goroutines spike during a burst and then gradually return: that’s not a leak, that’s load.&lt;/p&gt;

&lt;p&gt;If goroutines rise &lt;strong&gt;linearly&lt;/strong&gt; (or step-up repeatedly) while work is stable: treat it as a leak until proven otherwise.&lt;/p&gt;
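
&lt;p&gt;If you don’t already graph the goroutine count, a minimal sketch (wire it into whatever metrics pipeline you have) is to sample &lt;code&gt;runtime.NumGoroutine&lt;/code&gt; periodically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sample the goroutine count every 30s so the slope shows up next to QPS.
go func() {
    t := time.NewTicker(30 * time.Second)
    defer t.Stop()
    for range t.C {
        log.Printf("goroutines=%d", runtime.NumGoroutine())
    }
}()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;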


&lt;h2&gt;
  
  
  Minute 3–10: capture two goroutine profiles and diff them
&lt;/h2&gt;

&lt;p&gt;The key move is comparison. A single goroutine dump is noisy. Two dumps tell you what’s &lt;em&gt;growing&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Option A (best for diffing): capture the pprof profile format and use &lt;code&gt;go tool pprof&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Capture twice, separated by 10–15 minutes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sS "http://$HOST/debug/pprof/goroutine" &amp;gt; goroutine.1.pb.gz

sleep 900

curl -sS "http://$HOST/debug/pprof/goroutine" &amp;gt; goroutine.2.pb.gz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now diff them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go tool pprof -top -diff_base=goroutine.1.pb.gz ./service-binary goroutine.2.pb.gz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What you want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one (or a few) stacks that &lt;em&gt;grow a lot&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;a clear wait reason: channel send/recv, network poll, lock wait, select, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Option B (fastest human scan): &lt;code&gt;debug=2&lt;/code&gt; text dumps
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sS "http://$HOST/debug/pprof/goroutine?debug=2" &amp;gt; goroutines.1.txt

sleep 900

curl -sS "http://$HOST/debug/pprof/goroutine?debug=2" &amp;gt; goroutines.2.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then do a "poor man’s diff":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;search for repeated top frames&lt;/li&gt;
&lt;li&gt;count occurrences (even roughly)&lt;/li&gt;
&lt;li&gt;focus on the stacks with the biggest growth&lt;/li&gt;
&lt;/ul&gt;
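
&lt;p&gt;The counting can be scripted. A quick sketch with standard awk/sort, assuming the usual &lt;code&gt;debug=2&lt;/code&gt; header shape (&lt;code&gt;goroutine N [wait reason, duration]:&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Count goroutines per wait reason, biggest bucket first.
awk -F'[][]' '/^goroutine /{ split($2, a, ","); n[a[1]]++ } END { for (r in n) print n[r], r }' goroutines.2.txt | sort -rn | head
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run it on both dumps and compare counts; the bucket that grows between them is your lead.&lt;/p&gt;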




&lt;h2&gt;
  
  
  Minute 10–15: map the dominant stack to the first fix you should try
&lt;/h2&gt;

&lt;p&gt;Once you have "the stack that grows," the fix is usually not mysterious. Here’s the mapping I use to choose the first patch.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Many goroutines blocked on &lt;code&gt;chan send&lt;/code&gt; / &lt;code&gt;chan receive&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Interpretation:&lt;/strong&gt; backpressure/coordination bug. Producers outpace consumers, or receivers are missing, or close ownership is unclear.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First fix:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add a cancellation edge to send/receive paths (&lt;code&gt;select { case &amp;lt;-ctx.Done(): ... }&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;bound the queue/channel (and decide policy: block with timeout vs reject)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example helper:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func sendWithContext[T any](ctx context.Context, ch chan&amp;lt;- T, v T) error {
    select {
    case ch &amp;lt;- v:
        return nil
    case &amp;lt;-ctx.Done():
        return ctx.Err()
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2) Many goroutines stuck in &lt;code&gt;net/http.(*Transport).RoundTrip&lt;/code&gt; / netpoll waits
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Interpretation:&lt;/strong&gt; outbound I/O without a real deadline or missing request context wiring. Slow downstream causes your service to "hold on" to goroutines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First fix:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;enforce timeouts at the client level (transport + overall cap)&lt;/li&gt;
&lt;li&gt;always use &lt;code&gt;http.NewRequestWithContext&lt;/code&gt; (or &lt;code&gt;req = req.WithContext(ctx)&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;always close bodies and bound reads&lt;/li&gt;
&lt;/ul&gt;
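
&lt;p&gt;A hedged sketch of those defaults together (the numbers are placeholders for your own budgets; &lt;code&gt;ctx&lt;/code&gt; and &lt;code&gt;url&lt;/code&gt; come from your handler):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client := &amp;amp;http.Client{
    Timeout: 2 * time.Second, // overall cap: connect + TLS + headers + body
}

req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
    return err
}
resp, err := client.Do(req)
if err != nil {
    return err
}
defer resp.Body.Close() // unclosed bodies pin connections and goroutines

// Bound the read so a misbehaving downstream can't stream forever.
body, err := io.ReadAll(io.LimitReader(resp.Body, 1&amp;lt;&amp;lt;20))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;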

&lt;h3&gt;
  
  
  3) Many goroutines waiting on &lt;code&gt;WaitGroup.Wait&lt;/code&gt;, semaphores, or &lt;code&gt;errgroup&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Interpretation:&lt;/strong&gt; join/cancellation bug or unbounded fan-out. Work starts faster than it completes; cancellation doesn’t propagate; someone forgot to wait.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First fix:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ensure there is exactly one "owner" that always calls &lt;code&gt;Wait()&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;use &lt;code&gt;errgroup.WithContext&lt;/code&gt; so cancellation is wired&lt;/li&gt;
&lt;li&gt;bound concurrency explicitly (e.g., &lt;code&gt;SetLimit&lt;/code&gt;)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;g, ctx := errgroup.WithContext(ctx)
g.SetLimit(16)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
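
&lt;p&gt;Expanded into the owner pattern (&lt;code&gt;jobs&lt;/code&gt; and &lt;code&gt;process&lt;/code&gt; are placeholders): one place starts the work, exactly one place joins it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;g, ctx := errgroup.WithContext(ctx)
g.SetLimit(16) // bound fan-out explicitly

for _, job := range jobs {
    job := job // per-iteration copy (needed before Go 1.22 loop semantics)
    g.Go(func() error {
        return process(ctx, job) // must honor ctx, or cancellation never lands
    })
}

// The single owner always joins; the first error cancels ctx for the rest.
if err := g.Wait(); err != nil {
    return err
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;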



&lt;h3&gt;
  
  
  4) Many goroutines in timers/tickers / periodic loops
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Interpretation:&lt;/strong&gt; time-based resources not stopped, or loops not tied to cancellation/shutdown.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First fix:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stop tickers&lt;/li&gt;
&lt;li&gt;stop + drain timers when appropriate&lt;/li&gt;
&lt;li&gt;ensure the loop has a &lt;code&gt;ctx.Done()&lt;/code&gt; exit&lt;/li&gt;
&lt;/ul&gt;
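
&lt;p&gt;All three fixes in one loop shape (&lt;code&gt;interval&lt;/code&gt; and &lt;code&gt;doWork&lt;/code&gt; are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func periodic(ctx context.Context, interval time.Duration, doWork func()) {
    t := time.NewTicker(interval)
    defer t.Stop() // release the ticker's resources on exit
    for {
        select {
        case &amp;lt;-ctx.Done():
            return // cancellation/shutdown exit
        case &amp;lt;-t.C:
            doWork()
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;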




&lt;h2&gt;
  
  
  Where tracing fits (and why it’s worth it even if pprof "already shows the stack")
&lt;/h2&gt;

&lt;p&gt;pprof tells you &lt;em&gt;what&lt;/em&gt; is stuck. Tracing tells you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;which&lt;/em&gt; request/job spawned it&lt;/li&gt;
&lt;li&gt;what deadline/budget it had&lt;/li&gt;
&lt;li&gt;which downstream call/queue wait never returned&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you already have OpenTelemetry (or any tracing), the fastest win is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;put spans around anything that can block: outbound HTTP/gRPC, DB calls, queue publish/consume, semaphore acquire, worker enqueue&lt;/li&gt;
&lt;li&gt;tag spans with route/operation, downstream name, and timeout budget&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That way, when profiling says "these goroutines are stuck in RoundTrip," tracing tells you "95% of them are from &lt;code&gt;/enrich&lt;/code&gt;, tenant X, calling &lt;code&gt;payments-api&lt;/code&gt;, timing out at 800ms."&lt;/p&gt;




&lt;h2&gt;
  
  
  The patch that actually holds: ship "hardening defaults," not one-off fixes
&lt;/h2&gt;

&lt;p&gt;If you only patch the one stack you saw today, the next incident will be a different stack.&lt;/p&gt;

&lt;p&gt;The fixes that keep paying dividends are defaults:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;timeout budgets at boundaries&lt;/li&gt;
&lt;li&gt;bounded concurrency for any fan-out&lt;/li&gt;
&lt;li&gt;bounded queues + explicit backpressure policy&lt;/li&gt;
&lt;li&gt;explicit channel ownership rules&lt;/li&gt;
&lt;li&gt;structured shutdown (stop admission → cancel context → wait with a shutdown budget)&lt;/li&gt;
&lt;/ul&gt;
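
&lt;p&gt;The shutdown sequence in the last bullet, sketched for an HTTP service (&lt;code&gt;srv&lt;/code&gt;, &lt;code&gt;workCancel&lt;/code&gt;, and &lt;code&gt;wg&lt;/code&gt; are assumed wiring):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// 1) stop admission  2) cancel background work  3) wait with a budget
shutdownCtx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()

_ = srv.Shutdown(shutdownCtx) // stop accepting; drain in-flight requests

workCancel() // cancel the context all background goroutines were started under

done := make(chan struct{})
go func() { wg.Wait(); close(done) }()
select {
case &amp;lt;-done: // workers exited cleanly
case &amp;lt;-shutdownCtx.Done(): // budget expired; exit anyway and let the dump show who was stuck
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;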

&lt;p&gt;I keep the complete hardening patterns + production checklist in the full post:&lt;br&gt;&lt;br&gt;
&lt;a href="https://compile.guru/goroutine-leaks-production-pprof-tracing/" rel="noopener noreferrer"&gt;https://compile.guru/goroutine-leaks-production-pprof-tracing/&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Prove it’s fixed (don’t accept vibes)
&lt;/h2&gt;

&lt;p&gt;A real fix has artifacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;goroutine slope stabilizes under the same traffic/load pattern&lt;/li&gt;
&lt;li&gt;the dominant growing stack is gone (or bounded) in comparable snapshots&lt;/li&gt;
&lt;li&gt;tail latency and timeout rate improve (or at least stop worsening)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also watch out for "false confidence":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;restarts and autoscaling can hide leaks without removing the bug&lt;/li&gt;
&lt;li&gt;short tests miss slow leaks (especially timer/ticker issues)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;The fastest way to win against goroutine leaks is to stop guessing:&lt;/p&gt;

&lt;p&gt;1) confirm the signature (slope + correlation)&lt;br&gt;&lt;br&gt;
2) take two goroutine captures and diff them&lt;br&gt;&lt;br&gt;
3) fix the dominant stack with lifecycle bounds (timeout/cancel/join/backpressure)&lt;br&gt;&lt;br&gt;
4) prove the fix with before/after slope and comparable snapshots  &lt;/p&gt;

&lt;p&gt;If you want the deeper catalog of leak patterns and the production checklist I use in reviews and incident response, here’s the complete runbook:&lt;br&gt;&lt;br&gt;
&lt;a href="https://compile.guru/goroutine-leaks-production-pprof-tracing/" rel="noopener noreferrer"&gt;https://compile.guru/goroutine-leaks-production-pprof-tracing/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>backend</category>
      <category>microservices</category>
      <category>programming</category>
    </item>
    <item>
      <title>10 Advanced Prompting Techniques for Getting Better Results from ChatGPT</title>
      <dc:creator>Voskan Voskanyan</dc:creator>
      <pubDate>Wed, 19 Nov 2025 18:27:37 +0000</pubDate>
      <link>https://forem.com/voskan89/10-advanced-prompting-techniques-for-getting-better-results-from-chatgpt-4lha</link>
      <guid>https://forem.com/voskan89/10-advanced-prompting-techniques-for-getting-better-results-from-chatgpt-4lha</guid>
      <description>&lt;p&gt;Most people think ChatGPT gives random or shallow answers, but in reality, &lt;strong&gt;the quality of its output depends entirely on how you structure the input&lt;/strong&gt;. Modern LLMs respond best when you treat them like highly capable assistants: give them a role, context, constraints, and a clear outcome.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk through &lt;strong&gt;10 advanced prompting techniques&lt;/strong&gt; that consistently produce expert-level answers, cleaner code, deeper analysis, and more useful documents. These techniques are based on 2026-era model behavior and are designed for developers, founders, analysts, and anyone who uses AI for daily work.&lt;/p&gt;

&lt;p&gt;For a deeper dive and more examples, check out my extended guide:&lt;br&gt;
&lt;a href="https://topicforge.site/10-powerful-prompts-for-chatgpt-2026-edition/" rel="noopener noreferrer"&gt;10 powerful prompts for ChatGPT&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Advanced Prompting Matters
&lt;/h2&gt;

&lt;p&gt;Large language models don’t "guess" what you want—they follow instructions.&lt;br&gt;
The more structure you provide, the more predictable the result.&lt;/p&gt;

&lt;p&gt;Here are four fundamentals behind every strong prompt:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Assign a Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Starting with "Act as a..." frames the model's tone, depth, and domain knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Provide Clear Context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Give it the background, goals, constraints, examples, and assumptions upfront.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Encourage Reasoning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use "think step-by-step" or "show your reasoning" for complex tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Define the Output Format&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tell it exactly how to present the result: JSON, table, bullets, sections, etc.&lt;/p&gt;

&lt;p&gt;Master these four elements, and the rest becomes easy.&lt;/p&gt;
&lt;h2&gt;
  
  
  Advanced Prompts for 2026
&lt;/h2&gt;

&lt;p&gt;Below are 10 fully ready prompts you can paste directly into ChatGPT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Guided Learning Prompt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A powerful way to actually learn a complex subject, not skim it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Act as a Socratic tutor for the topic of [your topic]. 
Start by asking me a foundational question to assess my knowledge. 
Do not explain anything directly—only guide me through questions. 
Wait for my response after each question.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. The Code Refinement Expert&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Great for improving messy or legacy code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Act as a senior developer specializing in [language]. 
I will paste code. 
Refactor it for readability, performance, and best practices. 
Return the improved code first, then summarize all changes in bullets.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. The On-Demand Data Analyst&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Act as a data analyst. 
I will paste a CSV dataset. 
Your tasks:
1. Identify the top 3 insights.
2. Summarize the findings.
3. Present key metrics in a Markdown table.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. The Multi-Format Content Transformer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Useful for content creators and marketers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Take the core ideas from this article (text or link). 
Turn them into a 5-part Twitter/X thread. 
Each tweet must be under 280 characters and include a relevant hashtag.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. The Devil’s Advocate Prompt&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;My argument is: [your argument]. 
Act as a devil’s advocate and generate 3 strong counterpoints. 
Identify the primary logical framework behind each one.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. The Persona-Based Copywriter&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Act as a world-class copywriter for a luxury automotive brand. 
Write a 150-word product description for an all-electric SUV. 
Focus on silent power, eco materials, minimalism. 
Audience: tech-savvy professionals 35-50.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;7. The Meeting -&amp;gt; JSON Converter&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Summarize the following meeting transcript. 
Output must be a JSON object with:
- key_decisions: string[]
- action_items: { owner: string, task: string }[]
- open_questions: string[]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;8. The API Documentation Generator&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Act as a technical writer. 
Generate OpenAPI 3.0 documentation for this Flask route. 
Include descriptions, parameters, and example responses for 200 and 404.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;9. The Analogy Explainer&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Explain [complex topic] using an analogy based on [simple topic]. 
Make the analogy detailed enough to teach a beginner.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;10. The Strategic SWOT Architect&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Act as a strategist. 
Create a SWOT analysis for a [business type] operating in a [market condition].
Organize each section into 3–5 concise points.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Troubleshooting: When Your Prompts Still Fail
&lt;/h2&gt;

&lt;p&gt;If the results still feel off, try these quick fixes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start a new chat.&lt;/strong&gt; Clears leftover context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Break large tasks into smaller ones.&lt;/strong&gt; LLMs handle sequential workflows better.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Give examples.&lt;/strong&gt; Show the format you want.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rephrase.&lt;/strong&gt; Even small changes fix misinterpretations.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Good prompts aren’t "magic"; they’re structured instructions.&lt;br&gt;
Once you learn how to communicate with AI in a role-context-reasoning-format style, you unlock capabilities most users never reach.&lt;/p&gt;

&lt;p&gt;For expanded explanations, deeper examples, and fully customizable templates, take a look at my full guide:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://topicforge.site/10-powerful-prompts-for-chatgpt-2026-edition/" rel="noopener noreferrer"&gt;Core Principles of Effective Prompting&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>Breaking the Monolith: How We Split a Node.js Backend into Go Microservices on AWS ECS-Without Stopping the World</title>
      <dc:creator>Voskan Voskanyan</dc:creator>
      <pubDate>Sat, 25 Oct 2025 14:36:48 +0000</pubDate>
      <link>https://forem.com/voskan89/breaking-the-monolith-how-we-split-a-nodejs-backend-into-go-microservices-on-aws-ecs-without-1okg</link>
      <guid>https://forem.com/voskan89/breaking-the-monolith-how-we-split-a-nodejs-backend-into-go-microservices-on-aws-ecs-without-1okg</guid>
      <description>&lt;p&gt;We didn’t "rewrite everything from scratch". We couldn’t. We have a product to ship and customers to support. What we did at &lt;a href="https://solargenix.ai/" rel="noopener noreferrer"&gt;SolarGenix.ai&lt;/a&gt; was more boring and more durable: we peeled a Node.js (TypeScript) monolith apart one seam at a time, stood up small Go services on AWS ECS, and changed the way services talk to each other, moving from synchronous REST fan-out to events on EventBridge.&lt;/p&gt;

&lt;p&gt;This is the playbook we actually used. It’s not a template for everyone, but if you're staring at a similar migration, it should save you a few traps.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why we changed
&lt;/h2&gt;

&lt;p&gt;Our monolith wasn’t "bad". It was successful enough to accumulate real traffic and real teams. But success came with some sharp edges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tight coupling inside request/response.&lt;/strong&gt; A single HTTP request often fanned out to three or more internal calls. When one downstream was slow, p95 went up for the entire user flow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cascading latency.&lt;/strong&gt; Spiky load in one area caused queueing in unrelated areas because everything was tied to the same hot path.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy risk.&lt;/strong&gt; Shipping a "small" change could nudge a shared code path and cause unwanted side-effects. Rollbacks existed in theory; in practice, the blast radius was too big.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our goals were mundane: &lt;strong&gt;safer deploys, clearer ownership, and predictable delivery&lt;/strong&gt;. We wanted a system where one domain could evolve without touching six others, and where we could choose consistency vs. latency explicitly instead of inheriting it accidentally.&lt;/p&gt;




&lt;h2&gt;
  
  
  The plan (gradual, not big-bang)
&lt;/h2&gt;

&lt;p&gt;We kept users on REST at the edge. Internals moved from sync calls to events.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;REST at the boundary&lt;/strong&gt;. Edge handlers stayed in the monolith for a while because the API surface was stable and familiar to clients.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Turn side-effects into events&lt;/strong&gt;. Instead of calling N services synchronously, the handler published an event (&lt;code&gt;proposal.published&lt;/code&gt; is the canonical example) and returned fast.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EventBridge for bus + routing&lt;/strong&gt;. We picked Amazon EventBridge as the managed bus and routing layer. It let us route with simple patterns and avoid building our own "bus that only we understand."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Go for small services&lt;/strong&gt;. New domain services were written in Go. Teams liked the simplicity, the small memory footprint, and the standard library. No framework rabbit hole.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mirror prod names in staging&lt;/strong&gt;. Staging mirrored production naming (with a &lt;code&gt;-stg&lt;/code&gt; suffix), so cutovers were predictable. Versioned detail-type events (&lt;code&gt;proposal.published:v2&lt;/code&gt;) gave us room to evolve.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We ran this migration as a rolling set of small moves, not a calendar-driven "big switch."&lt;/p&gt;




&lt;h2&gt;
  
  
  Contracts and compatibility
&lt;/h2&gt;

&lt;h2&gt;
  
  
  OpenAPI at the edge
&lt;/h2&gt;

&lt;p&gt;Clients didn’t need to care that our internals changed. We published OpenAPI contracts for the edge endpoints and kept them stable. Where we knew we would break something, we versioned the endpoint explicitly or added a feature flag to control new behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event envelope
&lt;/h2&gt;

&lt;p&gt;Every event followed the same envelope. It sounds nitpicky; it saved us a lot of confusion.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "id": "01J9Q9W0R3W3S3DAXYVZ8R3M7V",
  "source": "app.solargenix",
  "type": "proposal.published",
  "version": "v2",
  "occurredAt": "2025-09-21T14:33:12Z",
  "data": {
    "proposalId": "pr_0f3b1e",
    "accountId": "acc_7a21",
    "publishedBy": "u_193",
    "currency": "USD"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;code&gt;id&lt;/code&gt; is a stable ULID (more readable than a UUID, and roughly time-sortable).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;type&lt;/code&gt; is a domain noun: proposal.published, account.updated, etc.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;version&lt;/code&gt; is a &lt;strong&gt;major&lt;/strong&gt; version only. Minor changes must be additive.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;occurredAt&lt;/code&gt; is UTC. The producer sets it once. Consumers don’t "fix" it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Dual-publish for majors
&lt;/h2&gt;

&lt;p&gt;Major changes published both v1 and v2 for a sprint. Consumers opted into v2 when ready. We measured v1 usage and removed it when zero. No hidden toggles, no guessing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Idempotency, retries, DLQ
&lt;/h2&gt;

&lt;p&gt;You don’t get exactly-once delivery. Assume duplicates; you’ll sleep better.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Idempotency table in DynamoDB&lt;/strong&gt;. Each consumer writes a row keyed by eventId with a TTL at least equal to the retry window (we use 24h). If the key exists, the side-effect already happened; drop it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-target DLQs and explicit retry policies&lt;/strong&gt;. Every rule target has its own DLQ and RetryPolicy. When something breaks, it breaks locally. We can triage by DLQ name and owner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;At-least-once thinking&lt;/strong&gt;. We stopped treating duplicates as "bugs" and started treating lack of idempotency as the bug.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Minimal consumer shape (SQS buffer -&amp;gt; Lambda):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Skeleton; real code has tracing, structured logs, and metrics.
func Handle(ctx context.Context, e events.SQSEvent) error {
    for _, r := range e.Records {
        var ev Event
        if err := json.Unmarshal([]byte(r.Body), &amp;amp;ev); err != nil {
            return err // retried by SQS/Lambda, ends up in DLQ if persistent
        }

        // TTL ≥ retry window; we used 24h.
        seen, err := idem.Seen(ctx, ev.ID, 24*time.Hour)
        if err != nil { return err }
        if seen { continue }

        if err := apply(ev); err != nil {
            return err // retry; isolated to this target
        }
    }
    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
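
&lt;p&gt;The &lt;code&gt;idem.Seen&lt;/code&gt; call above can be a single conditional write. A sketch with the AWS SDK for Go v2 (table and attribute names are assumptions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One conditional PutItem per event: it succeeds only if the key is new.
func (s *Store) Seen(ctx context.Context, eventID string, ttl time.Duration) (bool, error) {
    expires := strconv.FormatInt(time.Now().Add(ttl).Unix(), 10)
    _, err := s.db.PutItem(ctx, &amp;amp;dynamodb.PutItemInput{
        TableName: aws.String("idempotency"),
        Item: map[string]types.AttributeValue{
            "pk":  &amp;amp;types.AttributeValueMemberS{Value: eventID},
            "ttl": &amp;amp;types.AttributeValueMemberN{Value: expires},
        },
        ConditionExpression: aws.String("attribute_not_exists(pk)"),
    })
    var ccf *types.ConditionalCheckFailedException
    if errors.As(err, &amp;amp;ccf) {
        return true, nil // key already present: duplicate delivery, skip the side-effect
    }
    return false, err
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;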






&lt;h2&gt;
  
  
  Data projections &amp;amp; reads
&lt;/h2&gt;

&lt;p&gt;We kept records of truth where they fit best and projected what the UI needed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB Streams for projections&lt;/strong&gt;. Where a table is authoritative (e.g., counters, idempotency, some hot entities), Streams emit change events that update search indexes, analytics, or a read-optimized view.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hot read path&lt;/strong&gt;. UI reads follow: cache -&amp;gt; key lookup (DynamoDB) -&amp;gt; fallback to source of truth (often Aurora or the monolith during transition). Cache TTLs match how stale each endpoint is allowed to be.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache busting&lt;/strong&gt;. Any state transition that changes what the UI renders triggers cache invalidation. We use small, explicit helpers, with no "magic auto-busting."&lt;/li&gt;
&lt;/ul&gt;
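
&lt;p&gt;An "explicit helper" here is deliberately unexciting. A sketch with go-redis (client and key layout are assumptions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Call from every state transition that changes what the UI renders.
func bustProposal(ctx context.Context, rdb *redis.Client, proposalID string) error {
    return rdb.Del(ctx, "proposal:"+proposalID, "proposal:list:"+proposalID).Err()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;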

&lt;p&gt;The result: &lt;strong&gt;p95 fast-path reads dropped from 48 ms to 11 ms&lt;/strong&gt;. Most pages now hit a projection or cache instead of stitching five joins under pressure.&lt;/p&gt;




&lt;h2&gt;
  
  
  Release strategy on ECS
&lt;/h2&gt;

&lt;p&gt;We didn’t do a forklift deploy. We carved the system along its boundaries and rolled them out one at a time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service-by-service rollout&lt;/strong&gt;. Each new Go service ran in ECS with its own task definition, autoscaling policy, and health checks. We didn’t cram everything into one cluster service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shadow consumers&lt;/strong&gt;. Early consumers ran in shadow for a sprint. They processed the real event stream and wrote results next to the old path. We diffed until we trusted them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature flags&lt;/strong&gt;. When a consumer replaced an old synchronous call, we shipped the flag first, validated in staging, then flipped for a subset of tenants in production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback plan&lt;/strong&gt;. We kept the synchronous path alive for a while. If the consumer misbehaved, we flipped the flag off and investigated with a hot DLQ replay.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blast-radius limits&lt;/strong&gt;. We routed by detail-type, source, and sometimes accountId or region to control who got the new behavior.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A sketch of the EventBridge rule we used repeatedly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Name": "proposal-published-v2",
  "EventPattern": {
    "source": ["app.solargenix"],
    "detail-type": ["proposal.published:v2"]
  },
  "Targets": [{
    "Arn": "arn:aws:lambda:...:function:proposal-emailer",
    "RetryPolicy": { "MaximumRetryAttempts": 185, "MaximumEventAgeInSeconds": 86400 },
    "DeadLetterConfig": { "Arn": "arn:aws:sqs:...:dlq-proposal-emailer" }
  }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each target had its own DLQ (e.g., dlq-proposal-emailer). Ownership was never ambiguous.&lt;/p&gt;




&lt;h2&gt;
  
  
  The small Go publisher
&lt;/h2&gt;

&lt;p&gt;Producers do one thing: put a clean event on the bus. No tight loops. No cleverness.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Event struct {
    ID         string      `json:"id"`
    Source     string      `json:"source"`
    Type       string      `json:"type"`    // e.g., "proposal.published"
    Version    string      `json:"version"` // e.g., "v2"
    OccurredAt time.Time   `json:"occurredAt"`
    Data       interface{} `json:"data"`
}

func Publish(ctx context.Context, bus, region string, ev Event) error {
    // Production note: load the AWS config and build the client once at startup;
    // doing it per publish adds latency and credential lookups.
    cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion(region))
    if err != nil { return err }
    cli := eventbridge.NewFromConfig(cfg)

    detailType := fmt.Sprintf("%s:%s", ev.Type, ev.Version)
    payload, err := json.Marshal(ev)
    if err != nil { return err }

    _, err = cli.PutEvents(ctx, &amp;amp;eventbridge.PutEventsInput{
        Entries: []types.PutEventsRequestEntry{{
            EventBusName: &amp;amp;bus,
            Source:       aws.String(ev.Source),
            DetailType:   aws.String(detailType),
            Time:         aws.Time(ev.OccurredAt),
            Detail:       aws.String(string(payload)),
        }},
    })
    return err
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it. The repository or handler constructs the event with the right version and calls Publish.&lt;/p&gt;




&lt;h2&gt;
  
  
  Observability that catches real problems
&lt;/h2&gt;

&lt;p&gt;We avoided dashboards that look great in demos and tell you nothing on-call. The few signals that consistently matter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DLQ depth and age per target&lt;/strong&gt;. Alerts when DLQ depth ≥ 5 for 3 minutes, and when max age &amp;gt; 10 minutes. That split catches both bursts and stuck poison messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EventBridge target failure rate&lt;/strong&gt;. Metric math on Invocations vs. FailedInvocations per rule target. We page on a non-zero failure rate sustained for 5 minutes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read p95 for hot endpoints&lt;/strong&gt;. Because that’s what users feel. We annotate deploys so we can correlate regressions with changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Projection lag&lt;/strong&gt;. If the read model is stale beyond our acceptable TTL, we want to know before users do.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also added one "boring but lifesaving" alarm on &lt;strong&gt;idempotency table write failures&lt;/strong&gt;. If the table throttles or permissions drift, duplicates slip through.&lt;/p&gt;




&lt;h2&gt;
  
  
  Results and one scar-tissue story
&lt;/h2&gt;

&lt;p&gt;Numbers first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;p95 fast-path reads:&lt;/strong&gt; 48 ms -&amp;gt; 11 ms after shifting hot reads to a key lookup or cache.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DLQ rate:&lt;/strong&gt; &amp;lt; 0.1% over the last 30 days. Replays are scripted and dull (which is the point).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now the scar tissue.&lt;/p&gt;

&lt;p&gt;During one cutover we assumed &lt;strong&gt;global ordering&lt;/strong&gt; for a certain flow. EventBridge doesn’t guarantee it, and delivery is at-least-once. A burst produced duplicates in a consumer that sent emails, and some customers received two messages. The fix was straightforward once we accepted the delivery model: we keyed idempotency by eventId + recipient and set a 24h TTL. The consumer became "boring." On-call became boring with it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What we’d do again
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clean envelopes&lt;/strong&gt;. That little JSON contract carried more weight than any library choice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;type:version in detail-type&lt;/strong&gt;. Routing by major version without parsing payloads is a gift to operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-target DLQs&lt;/strong&gt;. Ownership and blame become obvious. It shortens incidents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A few boring alarms&lt;/strong&gt;. DLQ depth/age, failure rate per target, p95 reads, projection lag. No noise.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What we’d change next time
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Earlier "projection first" thinking&lt;/strong&gt;. We could have moved hot reads to projections sooner and earned the p95 win earlier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stricter contracts for optional fields&lt;/strong&gt;. We allowed too many "maybe present" fields that crept into business logic. I’d lock those down sooner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost/ops trade-off, acknowledged&lt;/strong&gt;. Managed EventBridge beats self-hosted when you’re small or moving fast. At higher scale, some teams roll their own bus for cost and control. For us, the ops time saved was worth the bill. Your curve may differ.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Notes on Clean Architecture (brief, practical)
&lt;/h2&gt;

&lt;p&gt;We didn’t treat Clean Architecture as a religion. We kept it to two rules:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Domain code doesn’t depend on the transport&lt;/strong&gt;. Handlers call a use-case; use-cases call repositories; repositories hide storage and messaging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No cross-domain imports&lt;/strong&gt;. If two domains need to coordinate, they publish/consume events. They don’t import each other’s packages and reach in.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It kept the Go services small and readable, and it made moving logic between processes almost trivial.&lt;/p&gt;
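&lt;p&gt;A minimal sketch of rule 1, with hypothetical names: the use-case depends only on a repository interface, so swapping HTTP for an event consumer (or a fake in tests) touches nothing in the domain:&lt;/p&gt;

```go
package main

import "fmt"

// Proposal and ProposalRepo are illustrative domain types. The domain layer
// defines the interface it needs and knows nothing about HTTP, EventBridge,
// or storage.
type Proposal struct{ ID, State string }

type ProposalRepo interface {
	Get(id string) (*Proposal, error)
	Save(p *Proposal) error
}

// PublishProposal is the use-case: pure domain logic against the interface.
// An HTTP handler or an SQS consumer is just a thin adapter that calls it.
func PublishProposal(repo ProposalRepo, id string) error {
	p, err := repo.Get(id)
	if err != nil {
		return err
	}
	p.State = "published"
	return repo.Save(p)
}

// memRepo is a transport- and storage-free fake, which is exactly what makes
// the use-case trivial to test and to move between processes.
type memRepo struct{ byID map[string]*Proposal }

func (m *memRepo) Get(id string) (*Proposal, error) {
	p, ok := m.byID[id]
	if !ok {
		return nil, fmt.Errorf("not found: %s", id)
	}
	return p, nil
}

func (m *memRepo) Save(p *Proposal) error { m.byID[p.ID] = p; return nil }
```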




&lt;p&gt;We didn’t stop the world; we changed how it moved. The monolith is smaller now, the teams are less entangled, and the hot paths are faster. The nice part is that none of this requires heroics-just consistent patterns and the discipline to apply them.&lt;/p&gt;

&lt;p&gt;If there’s interest, I’ll follow up with the exact fan-out patterns we use and how we handle backfill/replay safely. If this was useful, follow &lt;a href="https://www.linkedin.com/in/vvoskanyan/" rel="noopener noreferrer"&gt;my profile&lt;/a&gt; and the &lt;a href="https://www.linkedin.com/company/solargenix-ai/" rel="noopener noreferrer"&gt;SolarGenix.ai page&lt;/a&gt; to catch the next deep dives and benchmarks as they land.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>go</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Event-Driven Architecture in Production: Designing with AWS EventBridge and Go</title>
      <dc:creator>Voskan Voskanyan</dc:creator>
      <pubDate>Thu, 23 Oct 2025 06:32:42 +0000</pubDate>
      <link>https://forem.com/voskan89/event-driven-architecture-in-production-designing-with-aws-eventbridge-and-go-306b</link>
      <guid>https://forem.com/voskan89/event-driven-architecture-in-production-designing-with-aws-eventbridge-and-go-306b</guid>
      <description>&lt;p&gt;We recently finished moving a set of synchronous REST flows at &lt;a href="https://solargenix.ai/" rel="noopener noreferrer"&gt;SolarGenix.ai&lt;/a&gt; to an event-driven core. The goal wasn’t to chase buzzwords; we wanted fewer cross-service dependencies, clearer ownership, and predictable delivery under load. Below is what we actually shipped: &lt;strong&gt;EventBridge&lt;/strong&gt; as the bus, &lt;strong&gt;Go&lt;/strong&gt; for publishers/consumers, &lt;strong&gt;DynamoDB Streams&lt;/strong&gt; for projections, and a small set of rules that kept us out of trouble.&lt;/p&gt;

&lt;p&gt;Two quick numbers after cutover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fast-path reads p95: 48 ms -&amp;gt; 11 ms&lt;/strong&gt; (UI paths now hit a keyed projection or cache).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DLQ rate: &amp;lt;0.1%&lt;/strong&gt; over the last 30 days (and most of those were known poison messages we fixed and replayed).&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Why we moved off "sync everything"
&lt;/h3&gt;

&lt;p&gt;We had a handful of handlers that fanned out to multiple services. When one downstream slowed down, the whole user flow did too. We kept REST at the edge (users), but replaced side-effects with events:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Before:&lt;/strong&gt; handler -&amp;gt; service A -&amp;gt; service B -&amp;gt; service C&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;After:&lt;/strong&gt; handler (200 OK) -&amp;gt; &lt;strong&gt;publish&lt;/strong&gt; proposal.published -&amp;gt; N consumers do their work independently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The day we shipped the first flow, on-call went noticeably quieter.&lt;/p&gt;




&lt;h3&gt;
  
  
  Bus and naming (obfuscated but real)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bus:&lt;/strong&gt; app-core-prod (staging mirrors prod names with -stg suffix)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detail-type:&lt;/strong&gt; proposal.published:v2 (type:major) so we can route by version without parsing JSON&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rule example:&lt;/strong&gt; proposal-published-v2 -&amp;gt; targets emailer, projector, analytics indexer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DLQ:&lt;/strong&gt; one per target, e.g. dlq-proposal-emailer (ownership is obvious)&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Event envelope and versioning
&lt;/h3&gt;

&lt;p&gt;We keep a small, stable envelope and version the &lt;strong&gt;detail-type:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "id": "01J9Q9W0R3W3S3DAXYVZ8R3M7V",
  "source": "app.solargenix",
  "type": "proposal.published",
  "version": "v2",
  "occurredAt": "2025-10-10T14:33:12Z",
  "data": {
    "proposalId": "pr_0f3b1e",
    "accountId": "acc_7a21",
    "publishedBy": "u_193",
    "currency": "USD"
  }
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Minor fields&lt;/strong&gt; -&amp;gt; additive only.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Major changes&lt;/strong&gt; -&amp;gt; dual-publish (v1 + v2) for a sprint; consumers opt in when ready.&lt;/li&gt;
&lt;li&gt;Schemas live in the &lt;strong&gt;EventBridge Schema Registry&lt;/strong&gt; and we generate Go bindings.&lt;/li&gt;
&lt;/ul&gt;
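&lt;p&gt;The dual-publish step can be sketched like this (the &lt;code&gt;Entry&lt;/code&gt; struct is a stand-in for the SDK’s &lt;code&gt;PutEventsRequestEntry&lt;/code&gt;; only the detail-type construction is the point):&lt;/p&gt;

```go
package main

import "fmt"

// Entry stands in for the EventBridge PutEventsRequestEntry fields we care
// about here; the real publisher would map these onto the SDK type.
type Entry struct {
	DetailType string
	Detail     string
}

// DualPublishEntries builds one entry per major version during a migration
// window, so v1 consumers keep working while v2 consumers opt in.
func DualPublishEntries(eventType, detail string, versions ...string) []Entry {
	entries := make([]Entry, 0, len(versions))
	for _, v := range versions {
		entries = append(entries, Entry{
			DetailType: fmt.Sprintf("%s:%s", eventType, v), // e.g. "proposal.published:v2"
			Detail:     detail,
		})
	}
	return entries
}
```

&lt;p&gt;Because rules match on detail-type, the v1 and v2 rules route independently; dropping v1 later is just deleting a rule, not a code change in consumers.&lt;/p&gt;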




&lt;h3&gt;
  
  
  Idempotency: assume duplicates
&lt;/h3&gt;

&lt;p&gt;EventBridge and Streams are &lt;strong&gt;at-least-once&lt;/strong&gt;. We treat duplicates as normal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every event has a &lt;strong&gt;stable&lt;/strong&gt; id (we use ULID).&lt;/li&gt;
&lt;li&gt;Consumers write eventId into a small &lt;strong&gt;idempotency table&lt;/strong&gt; (DynamoDB) with a TTL &amp;gt;= retry window.&lt;/li&gt;
&lt;li&gt;If the key exists, we skip the side effect and move on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Minimal consumer shape (SQS buffer -&amp;gt; Lambda):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func Handle(ctx context.Context, e events.SQSEvent) error {
    for _, r := range e.Records {
        var ev Event
        // A body that never unmarshals is poison; let SQS redrive it to the
        // DLQ via maxReceiveCount instead of retrying the whole batch forever.
        if err := json.Unmarshal([]byte(r.Body), &amp;amp;ev); err != nil { return err }

        seen, err := idem.Seen(ctx, ev.ID, 24*time.Hour)
        if err != nil { return err }
        if seen { continue }

        if err := apply(ev); err != nil { return err } // retried by SQS/Lambda
    }
    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Delivery guarantees, retries, and DLQs
&lt;/h3&gt;

&lt;p&gt;Each &lt;strong&gt;rule target&lt;/strong&gt; sets an explicit RetryPolicy and a &lt;strong&gt;DLQ&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Name": "proposal-published-v2",
  "EventPattern": { "source": ["app.solargenix"], "detail-type": ["proposal.published:v2"] },
  "Targets": [{
    "Arn": "arn:aws:lambda:...:function:proposal-emailer",
    "RetryPolicy": { "MaximumRetryAttempts": 185, "MaximumEventAgeInSeconds": 86400 },
    "DeadLetterConfig": { "Arn": "arn:aws:sqs:...:dlq-proposal-emailer" }
  }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is boring on purpose. When something breaks, it breaks &lt;strong&gt;locally&lt;/strong&gt; (one rule/target), not across a chain.&lt;/p&gt;




&lt;h3&gt;
  
  
  DynamoDB Streams for projections
&lt;/h3&gt;

&lt;p&gt;Where the &lt;strong&gt;record of truth&lt;/strong&gt; lives in DynamoDB (e.g., counters, hot entities), we project changes out with Streams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streams retention is &lt;strong&gt;24h&lt;/strong&gt;; consumers must catch up.&lt;/li&gt;
&lt;li&gt;Ordering is &lt;strong&gt;per item&lt;/strong&gt;, not global. If we care about causal order, we put a version and timestamps in the payload and validate before applying.&lt;/li&gt;
&lt;/ul&gt;
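&lt;p&gt;The "validate before applying" guard can be sketched like this (in-memory stand-in; in DynamoDB the same guard is a conditional write on the stored version):&lt;/p&gt;

```go
package main

// row is a projection record; Version comes from the event payload.
type row struct {
	Version int64
	Payload string
}

// applyIfNewer enforces per-item causal order: the projection is only
// overwritten when the incoming record carries a strictly higher version.
// Stale or duplicate stream records are dropped rather than applied.
func applyIfNewer(store map[string]row, key string, incoming row) bool {
	cur, ok := store[key]
	if ok && incoming.Version <= cur.Version {
		return false // out-of-order or duplicate: drop it
	}
	store[key] = incoming
	return true
}
```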




&lt;h3&gt;
  
  
  Observability: alarms we actually watch
&lt;/h3&gt;

&lt;p&gt;We wired a few CloudWatch alarms that correlate with user pain. Two that paid off immediately:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DLQ depth / age&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "AlarmName": "dlq-proposal-emailer-depth",
  "MetricName": "ApproximateNumberOfMessagesVisible",
  "Namespace": "AWS/SQS",
  "Dimensions": [{"Name":"QueueName","Value":"dlq-proposal-emailer"}],
  "Statistic": "Sum",
  "Period": 60,
  "EvaluationPeriods": 3,
  "Threshold": 5,
  "ComparisonOperator": "GreaterThanOrEqualToThreshold"
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;EventBridge rule failure rate (per target)&lt;/strong&gt; Metric math on Invocations vs FailedInvocations for proposal-published-v2 -&amp;gt; proposal-emailer. We page when failures &amp;gt; 0 for 5 minutes. That caught a bad template deploy before users did.&lt;/p&gt;
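&lt;p&gt;A sketch of that metric-math alarm (rule and queue names are examples from this post; the &lt;code&gt;IF&lt;/code&gt; expression guards the division in periods with zero invocations):&lt;/p&gt;

```json
{
  "AlarmName": "proposal-published-v2-emailer-failures",
  "EvaluationPeriods": 5,
  "DatapointsToAlarm": 5,
  "Threshold": 0,
  "ComparisonOperator": "GreaterThanThreshold",
  "Metrics": [
    {
      "Id": "failed",
      "MetricStat": {
        "Metric": {
          "Namespace": "AWS/Events",
          "MetricName": "FailedInvocations",
          "Dimensions": [{ "Name": "RuleName", "Value": "proposal-published-v2" }]
        },
        "Period": 60,
        "Stat": "Sum"
      },
      "ReturnData": false
    },
    {
      "Id": "total",
      "MetricStat": {
        "Metric": {
          "Namespace": "AWS/Events",
          "MetricName": "Invocations",
          "Dimensions": [{ "Name": "RuleName", "Value": "proposal-published-v2" }]
        },
        "Period": 60,
        "Stat": "Sum"
      },
      "ReturnData": false
    },
    {
      "Id": "rate",
      "Expression": "IF(total > 0, 100 * failed / total, 0)",
      "Label": "FailureRatePct",
      "ReturnData": true
    }
  ]
}
```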




&lt;h3&gt;
  
  
  Migration notes (what actually happened)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;We wrapped the old REST handler so it &lt;strong&gt;returns 200 quickly&lt;/strong&gt; and publishes an event.&lt;/li&gt;
&lt;li&gt;New consumers ran &lt;strong&gt;in shadow&lt;/strong&gt; for a sprint and wrote their results next to the old path so we could diff.&lt;/li&gt;
&lt;li&gt;We hit one snag: an emailer Lambda assumed &lt;strong&gt;global ordering&lt;/strong&gt;; a burst of duplicate events produced repeat sends. The fix was trivial once we added idempotency keyed by eventId + recipient with a 24h TTL.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Results after cutover
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fast-path p95 reads: 48 ms -&amp;gt; 11 ms&lt;/strong&gt; (most UI paths now hit a projection or cache).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DLQ rate:&lt;/strong&gt; &amp;lt;0.1% in 30 days; replays are scripted and boring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploys:&lt;/strong&gt; producers and consumers ship independently; on-call load is down because failures isolate to one rule/target.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Minimal Go publisher
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func Publish(ctx context.Context, bus, region string, ev Event) error {
    // Production note: load the AWS config and build the client once at startup;
    // doing it per publish adds latency and credential lookups.
    cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion(region))
    if err != nil { return err }
    cli := eventbridge.NewFromConfig(cfg)

    detailType := fmt.Sprintf("%s:%s", ev.Type, ev.Version)
    payload, err := json.Marshal(ev)
    if err != nil { return err }

    _, err = cli.PutEvents(ctx, &amp;amp;eventbridge.PutEventsInput{
        Entries: []types.PutEventsRequestEntry{{
            EventBusName: &amp;amp;bus,
            Source:       aws.String(ev.Source),
            DetailType:   aws.String(detailType),
            Time:         aws.Time(ev.OccurredAt),
            Detail:       aws.String(string(payload)),
        }},
    })
    return err
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;EventBridge isn’t magic, but it let us decouple without inventing infrastructure. The patterns above-clean envelopes, type:version, idempotency, per-target DLQs, and a few boring alarms-were enough to make the system predictable.&lt;/p&gt;

&lt;p&gt;If there’s interest, I’ll follow up with the exact fan-out rules we use and how we handle backfill/replay safely. If this was useful, follow my profile and the &lt;a href="https://www.linkedin.com/company/solargenix-ai/" rel="noopener noreferrer"&gt;SolarGenix.ai page&lt;/a&gt; to catch the next deep dives and benchmarks as they land.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>go</category>
      <category>eventdriven</category>
      <category>programming</category>
    </item>
    <item>
      <title>Hybrid Cloud Stack: Balancing Aurora PostgreSQL and DynamoDB for Optimal Performance</title>
      <dc:creator>Voskan Voskanyan</dc:creator>
      <pubDate>Wed, 22 Oct 2025 03:45:37 +0000</pubDate>
      <link>https://forem.com/voskan89/hybrid-cloud-stack-balancing-aurora-postgresql-and-dynamodb-for-optimal-performance-1ddn</link>
      <guid>https://forem.com/voskan89/hybrid-cloud-stack-balancing-aurora-postgresql-and-dynamodb-for-optimal-performance-1ddn</guid>
      <description>&lt;p&gt;At &lt;a href="https://solargenix.ai/" rel="noopener noreferrer"&gt;SolarGenix.ai&lt;/a&gt;, we are building an AI-driven platform that turns the slow, manual parts of solar proposals into a fast, reliable, and automated flow, from roof detection and shading analysis to financial modeling and polished customer-ready PDFs. We are a startup, and development is moving fast.&lt;/p&gt;

&lt;p&gt;This article walks through how we split workloads between &lt;strong&gt;Amazon Aurora PostgreSQL&lt;/strong&gt; and &lt;strong&gt;Amazon DynamoDB&lt;/strong&gt;, what consistency/latency trade-offs we accept, and how a &lt;strong&gt;unified data-access layer in Go&lt;/strong&gt; plus &lt;strong&gt;caching&lt;/strong&gt; lets us keep developer ergonomics high without sacrificing performance or reliability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why we run both Aurora PostgreSQL and DynamoDB
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Aurora PostgreSQL&lt;/strong&gt; gives us strong consistency, relational integrity, and powerful SQL for reporting/joins-ideal for workflows that must be correct first and fast second (e.g., billing artifacts, subscription state, proposal audit trails, RBAC metadata). &lt;strong&gt;DynamoDB&lt;/strong&gt; gives us predictable low-latency, elastic throughput, and effortless horizontal scale-ideal for high-velocity, key-based access with simple access patterns (e.g., proposal snapshots, step autosaves, layout/planning intermediates, idempotency records).&lt;/p&gt;

&lt;p&gt;On the product side, this split reflects how our engine works: the AI pipeline produces intermediate states and final artifacts that are naturally document-like and accessed by key, while financials, user/org relationships, and compliance data benefit from transactions, joins, and SQL ergonomics. The outcome is a platform that feels instant without compromising accuracy in the places where accuracy is non-negotiable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Decision guide - which workloads go where?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Aurora PostgreSQL (strongly consistent, relational, transactional)&lt;/li&gt;
&lt;li&gt;DynamoDB (low latency, high throughput, partition-friendly)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simple rule we use internally: If the access pattern is "get by key, occasionally update, serve fast," it probably belongs in DynamoDB. If we need cross-entity constraints, transactions, or ad-hoc queries, it’s an Aurora problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consistency, latency, and the "fast path"
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;User actions&lt;/strong&gt; write to Aurora in a transactional way when correctness matters (e.g., plan upgrades), but we project a &lt;strong&gt;derived, read-optimized view&lt;/strong&gt; into DynamoDB (or cache) for the UI fast path.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background processors&lt;/strong&gt; hydrate DynamoDB with the most-needed fields for the next interaction, turning expensive joins into a single-digit-millisecond key lookup.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;rare, cross-cutting queries&lt;/strong&gt;, we go straight to Aurora and treat the added latency as acceptable (and cache results aggressively).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives us &lt;strong&gt;strongly consistent writes where we need them&lt;/strong&gt; and &lt;strong&gt;very low-latency reads where we want them&lt;/strong&gt;, without confusing app developers: they talk to a single repository interface, and the implementation decides when/how to use each store.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unified data-access layer in Go
&lt;/h3&gt;

&lt;p&gt;We present storage behind a clean interface and keep &lt;strong&gt;policy (when to read/write which store)&lt;/strong&gt; inside the repository, not the handlers. That means product teams ship features without learning two databases’ footguns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interface:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// pkg/proposals/repo.go
package proposals

import "context"

type ID string

type Proposal struct {
    ID          ID
    AccountID   string
    Version     int64
    State       string            // "draft", "ready" ...
    Snapshots   map[string]string // lightweight links to artifacts
    UpdatedAt   int64
}

type Repository interface {
    Get(ctx context.Context, id ID) (*Proposal, error)
    Save(ctx context.Context, p *Proposal) error
    Snapshot(ctx context.Context, id ID, label string, ref string) error
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Implementation sketch (Aurora + DynamoDB + Cache)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// pkg/proposals/repo_hybrid.go
type hybridRepo struct {
    aurora  AuroraStore   // wraps pgx with ctx-aware tracing/retries
    ddb     DynamoStore   // wraps DynamoDB SDK with marshaling helpers
    cache   Cache         // Redis or in-memory with TTL + stampede control
    clock   Clock
    metrics Metrics
}

func (r *hybridRepo) Get(ctx context.Context, id ID) (*Proposal, error) {
    // Try cache
    if p, ok := r.cache.Get(string(id)); ok {
        return p, nil
    }

    // Hot path: DynamoDB by key
    if p, err := r.ddb.GetByID(ctx, string(id)); err == nil &amp;amp;&amp;amp; p != nil {
        r.cache.Set(string(id), p, ttlFast())
        return p, nil
    }

    // Fallback: Aurora (authoritative), then project to DDB
    p, err := r.aurora.GetProposal(ctx, string(id))
    if err != nil {
        return nil, err
    }
    _ = r.ddb.Put(ctx, p) // best-effort re-projection; the periodic consistency check heals misses
    r.cache.Set(string(id), p, ttlSlow())
    return p, nil
}

func (r *hybridRepo) Save(ctx context.Context, p *Proposal) error {
    // Authoritative write to Aurora (transactional)
    if err := r.aurora.UpsertProposal(ctx, p); err != nil {
        return err
    }
    // Async or inline projection to DynamoDB for the fast path
    _ = r.ddb.Put(ctx, p)
    r.cache.Delete(string(p.ID))
    return nil
}

func (r *hybridRepo) Snapshot(ctx context.Context, id ID, label, ref string) error {
    // Snapshots are key-addressable, perfect for DynamoDB
    if err := r.ddb.AddSnapshot(ctx, string(id), label, ref, r.clock.Now()); err != nil {
        return err
    }
    r.cache.Delete(string(id))
    return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Aurora writes are the source of truth for mutable, relational entities.&lt;/li&gt;
&lt;li&gt;DynamoDB holds &lt;strong&gt;read-optimized projections&lt;/strong&gt; and &lt;strong&gt;append-only events&lt;/strong&gt; (snapshots, idempotency, counters).&lt;/li&gt;
&lt;li&gt;Cache shields both and absorbs spikes; cache TTLs reflect staleness tolerance per endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Read/write patterns that actually matter in production
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Autosave drafts:&lt;/strong&gt; write to DynamoDB (cheap, frequent); periodically consolidate into Aurora.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Publishing a proposal:&lt;/strong&gt; transactional write in Aurora; project final state to DynamoDB; bust cache.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fetching the latest proposal for UI:&lt;/strong&gt; cache → DynamoDB by key → fallback to Aurora (and re-project).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit/exports:&lt;/strong&gt; run directly on Aurora with SQL; results cached by hash of the query params.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Idempotent APIs:&lt;/strong&gt; store request hashes in DynamoDB with short TTL; reject duplicates fast.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate limiting and quotas:&lt;/strong&gt; DynamoDB counters (or Redis) with atomic increments and per-key TTL.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Caching strategy (simple rules)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Per-entity caches&lt;/strong&gt; keyed by ID with short TTLs (seconds to a minute).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-query caches&lt;/strong&gt; keyed by normalized params with longer TTLs only for read-only analytics views.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stampede protection&lt;/strong&gt; (singleflight) around cold misses; &lt;strong&gt;negative caching&lt;/strong&gt; for known-absent keys.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explicit cache&lt;/strong&gt; busting on any state transition that affects the fast path.&lt;/li&gt;
&lt;/ul&gt;
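&lt;p&gt;Stampede protection in a few lines. This is the same idea as &lt;code&gt;golang.org/x/sync/singleflight&lt;/code&gt;, reduced to a sketch so the mechanics are visible:&lt;/p&gt;

```go
package main

import "sync"

// group collapses concurrent loads of the same key into a single call, the
// core idea behind golang.org/x/sync/singleflight.
type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	wg  sync.WaitGroup
	val string
	err error
}

func newGroup() *group { return &group{calls: make(map[string]*call)} }

// Do runs fn at most once per key at a time; callers that arrive while a load
// is in flight wait for it and share its result instead of hitting the store.
func (g *group) Do(key string, fn func() (string, error)) (string, error) {
	g.mu.Lock()
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // join the in-flight load
		return c.val, c.err
	}
	c := new(call)
	c.wg.Add(1)
	g.calls[key] = c
	g.mu.Unlock()

	c.val, c.err = fn() // only one caller executes the load
	c.wg.Done()

	g.mu.Lock()
	delete(g.calls, key) // completed results are not cached here; that's the cache's job
	g.mu.Unlock()
	return c.val, c.err
}
```

&lt;p&gt;On a cold miss, the first caller loads from Aurora while everyone else waits on the same result, so a popular key produces one database hit instead of hundreds.&lt;/p&gt;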

&lt;h3&gt;
  
  
  Observability and operations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Emit storage labels on every call: store=aurora|ddb|cache, op=get|put|tx, plus latencies and error classes.&lt;/li&gt;
&lt;li&gt;Keep &lt;strong&gt;service-level SLOs&lt;/strong&gt;: p95 read latency, error rates, projection lag.&lt;/li&gt;
&lt;li&gt;Run regular &lt;strong&gt;consistency checks&lt;/strong&gt; that diff a sample of Aurora rows against their DynamoDB projections; alert on shape drift.&lt;/li&gt;
&lt;li&gt;Backfills and schema evolution run behind feature flags; repositories expose &lt;strong&gt;read-only mode&lt;/strong&gt; if we need to pause writes during critical migrations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using Aurora PostgreSQL and DynamoDB together isn’t about hedging bets-it’s about putting each workload where it performs best, then hiding that complexity behind a clean Go API and a disciplined caching layer. That’s how we keep the product feeling instantaneous while preserving the correctness guarantees we need for money, compliance, and trust.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
      <category>database</category>
      <category>performance</category>
    </item>
    <item>
      <title>5 Security Patterns Go Developers Should Avoid</title>
      <dc:creator>Voskan Voskanyan</dc:creator>
      <pubDate>Sun, 03 Aug 2025 07:29:24 +0000</pubDate>
      <link>https://forem.com/voskan89/5-security-patterns-go-developers-should-avoid-6lb</link>
      <guid>https://forem.com/voskan89/5-security-patterns-go-developers-should-avoid-6lb</guid>
      <description>&lt;p&gt;I’ve been writing and reviewing Go code for a while now - from building backend services to contributing to open-source projects. One thing I’ve learned the hard way is that insecure code doesn’t always look "wrong" - sometimes it looks completely ordinary.&lt;/p&gt;

&lt;p&gt;In this post, I want to share &lt;strong&gt;five common patterns I keep seeing in real-world Go codebases&lt;/strong&gt; that can easily turn into serious security issues. These aren’t exotic bugs — they’re patterns that show up quietly, subtly, and often go unnoticed in PRs, even by experienced teams.&lt;/p&gt;




&lt;h3&gt;
  
  
  1. Passing Raw Input Directly into Queries
&lt;/h3&gt;

&lt;p&gt;Let’s start with the classic:&lt;br&gt;
&lt;code&gt;query := fmt.Sprintf("SELECT * FROM users WHERE email = '%s'", email)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Even if &lt;code&gt;email&lt;/code&gt; is sanitized upstream, or “should never come from user input”, you’ve just opened the door for SQL injection. I’ve seen this exact line in production code. It’s easy to write, easy to miss, and dangerous.&lt;/p&gt;

&lt;p&gt;What to do instead:&lt;br&gt;
Use parameterized queries. Always.&lt;br&gt;
With &lt;code&gt;database/sql&lt;/code&gt;, it’s:&lt;br&gt;
&lt;code&gt;query := "SELECT * FROM users WHERE email = ?"&lt;br&gt;
row := db.QueryRowContext(ctx, query, email)&lt;/code&gt;&lt;br&gt;
(Placeholder syntax depends on the driver: &lt;code&gt;?&lt;/code&gt; for MySQL, &lt;code&gt;$1&lt;/code&gt; for PostgreSQL.)&lt;/p&gt;

&lt;p&gt;Also: be careful even with ORM “raw” queries — a lot of people assume &lt;code&gt;gorm.Raw()&lt;/code&gt; escapes whatever you interpolate into the string. It doesn’t; it’s only safe when you pass values as placeholder arguments.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Constructing Shell Commands with Input Strings
&lt;/h3&gt;

&lt;p&gt;Another one I keep seeing is dynamic command execution:&lt;br&gt;
&lt;code&gt;cmd := exec.Command("sh", "-c", "run-task "+userInput)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here’s the problem: if &lt;code&gt;userInput&lt;/code&gt; contains a semicolon, pipe, or any shell metacharacters — you’ve got command injection.&lt;/p&gt;

&lt;p&gt;Better:&lt;br&gt;
Avoid &lt;code&gt;sh -c&lt;/code&gt; unless you really need it. Use &lt;code&gt;exec.Command&lt;/code&gt; with explicit args:&lt;br&gt;
&lt;code&gt;cmd := exec.Command("run-task", userInput)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And validate or sanitize the input before it gets near the shell.&lt;/p&gt;
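&lt;p&gt;"Validate" here means an allowlist, not a denylist. A small sketch (the task-name rule is hypothetical; adapt the pattern to your actual identifiers):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"regexp"
)

// taskNameRe is a hypothetical allowlist: task identifiers may only contain
// letters, digits, dashes, and underscores, so shell metacharacters like ';'
// or '|' can never reach a command line.
var taskNameRe = regexp.MustCompile(`^[a-zA-Z0-9_-]{1,64}$`)

// ValidateTaskName rejects anything outside the allowlist before it is used
// as an argument to exec.Command.
func ValidateTaskName(s string) error {
	if !taskNameRe.MatchString(s) {
		return fmt.Errorf("invalid task name %q", s)
	}
	return nil
}
```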




&lt;h3&gt;
  
  
  3. Overexposing &lt;code&gt;http.Request&lt;/code&gt; Data to Internal Layers
&lt;/h3&gt;

&lt;p&gt;This one’s sneakier.&lt;/p&gt;

&lt;p&gt;I’ve seen APIs where &lt;code&gt;r.URL.Query()&lt;/code&gt; or &lt;code&gt;r.FormValue()&lt;/code&gt; are passed directly to service layers, logging systems, or even database access. The assumption is that somewhere down the line, someone will validate it. But that rarely happens.&lt;/p&gt;

&lt;p&gt;The result?&lt;br&gt;
Untrusted input ends up buried inside your logic, touching sensitive code.&lt;/p&gt;

&lt;p&gt;Fix:&lt;br&gt;
Validate early. Don’t pass raw &lt;code&gt;http.Request&lt;/code&gt; data into your internal layers. Parse and validate in the handler/controller layer — not deeper.&lt;/p&gt;




&lt;h3&gt;
  
  
  4. Leaky Global State and Insecure Defaults
&lt;/h3&gt;

&lt;p&gt;This one is more architectural, but still causes security problems.&lt;/p&gt;

&lt;p&gt;Global configuration, shared variables, or &lt;code&gt;init()&lt;/code&gt; functions that silently override behavior often lead to unexpected exposure. I've seen cases where a global &lt;code&gt;debug = true&lt;/code&gt; flag was left on in production — leaking sensitive logs.&lt;/p&gt;

&lt;p&gt;Tip:&lt;br&gt;
Avoid global mutable state unless you really need it. Use dependency injection, context, and controlled configuration structs.&lt;/p&gt;




&lt;h3&gt;
  
  
  5. Ignoring Error Values — Especially from Critical Calls
&lt;/h3&gt;

&lt;p&gt;We all know Go makes you write &lt;code&gt;if err != nil&lt;/code&gt;. And yet, I’ve seen plenty of cases where it's quietly ignored:&lt;br&gt;
&lt;code&gt;token, _ := jwt.Parse(...)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That underscore hides a lot. If parsing fails and you continue anyway, that’s an auth bypass waiting to happen.&lt;/p&gt;

&lt;p&gt;Always check your errors, especially in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auth-related code&lt;/li&gt;
&lt;li&gt;Encryption/decryption&lt;/li&gt;
&lt;li&gt;File system and network operations&lt;/li&gt;
&lt;li&gt;Anything parsing untrusted data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s better to fail early than to keep going with a broken assumption.&lt;/p&gt;




&lt;p&gt;None of these patterns are exotic. That’s what makes them dangerous — they blend in. They’re easy to write, easy to review past, and they compile just fine.&lt;/p&gt;

&lt;p&gt;Over time, I realized I was repeating the same feedback in code reviews. That led me to build &lt;a href="https://github.com/Voskan/codexsentinel" rel="noopener noreferrer"&gt;CodexSentinel&lt;/a&gt;, a static analyzer for Go that focuses on real security patterns like these — not just lint rules or formatting.&lt;/p&gt;

&lt;p&gt;If you’re working in Go and care about security, give it a try or drop me a message.&lt;br&gt;
And if you’ve seen other common anti-patterns — I’d love to hear them.&lt;/p&gt;

&lt;p&gt;Thanks for reading. Stay safe.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/Voskan/codexsentinel" rel="noopener noreferrer"&gt;https://github.com/Voskan/codexsentinel&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
