<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Naveen Chandra Adhikari</title>
    <description>The latest articles on Forem by Naveen Chandra Adhikari (@naveenc83002940).</description>
    <link>https://forem.com/naveenc83002940</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F303326%2F0a55f251-cec6-4ce8-bd10-b4105e9a9ab8.png</url>
      <title>Forem: Naveen Chandra Adhikari</title>
      <link>https://forem.com/naveenc83002940</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/naveenc83002940"/>
    <language>en</language>
    <item>
      <title>How I Cut API Latency by 95% by Fixing One Hidden Loop ❤️‍🔥</title>
      <dc:creator>Naveen Chandra Adhikari</dc:creator>
      <pubDate>Sat, 31 Jan 2026 04:28:30 +0000</pubDate>
      <link>https://forem.com/naveenc83002940/how-i-cut-api-latency-by-95-by-fixing-one-hidden-loop-4i7j</link>
      <guid>https://forem.com/naveenc83002940/how-i-cut-api-latency-by-95-by-fixing-one-hidden-loop-4i7j</guid>
      <description>&lt;p&gt;It started with a Slack message from our support team.&lt;/p&gt;

&lt;p&gt;“Hey, the organization dashboard feels really slow for some customers. Like… 3–4 seconds slow.”&lt;/p&gt;

&lt;p&gt;I sighed🥴. Another performance ticket.&lt;/p&gt;

&lt;p&gt;I hit the endpoint locally:&lt;br&gt;
280ms. Fine.&lt;/p&gt;

&lt;p&gt;Checked staging:&lt;br&gt;
~320ms. Also fine.&lt;/p&gt;

&lt;p&gt;“Probably their network,” I thought, closing the tab — a classic developer reflex.&lt;/p&gt;

&lt;p&gt;But the messages didn’t stop.&lt;/p&gt;

&lt;p&gt;Every complaint had the same pattern:&lt;br&gt;
“It only happens for larger organizations.”&lt;br&gt;
The ones with lots of workspaces. Lots of files. Real data.&lt;/p&gt;

&lt;p&gt;By the end of the week, I couldn’t brush it off anymore.&lt;/p&gt;

&lt;p&gt;There were no errors. No crashes. No alarming CPU spikes. The service was technically healthy. And yet, production latency had quietly crept from a respectable 200ms to an uncomfortable 3–4 seconds during peak hours.&lt;/p&gt;

&lt;p&gt;The code responsible for this endpoint had been touched recently. It was clean. Idiomatic Go. The kind of code you skim during a review and immediately trust. The previous developer was solid — no obvious mistakes, no red flags.&lt;/p&gt;

&lt;p&gt;Still, something felt off.&lt;/p&gt;

&lt;p&gt;So I opened SigNoz, filtered by that endpoint, and clicked into a trace from one of the slowest requests.&lt;/p&gt;

&lt;p&gt;What I found wasn’t a bug.&lt;br&gt;
It wasn’t bad infrastructure.&lt;br&gt;
It wasn’t a missing index.&lt;/p&gt;

&lt;p&gt;It was a pattern.&lt;/p&gt;

&lt;p&gt;And it was silently strangling our database.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Silent Killer
&lt;/h2&gt;

&lt;p&gt;The problem I was looking at wasn’t a bug. There was no stack trace. No failing test. No error log to chase.&lt;/p&gt;

&lt;p&gt;It was a pattern — one of the most dangerous ones in backend systems.&lt;/p&gt;

&lt;p&gt;The N+1 query problem.😫&lt;/p&gt;

&lt;p&gt;If you’ve never heard of it, here’s the short version: Your code runs one query to get a list of items, and then loops through that list to run N additional queries for each item.&lt;/p&gt;

&lt;p&gt;In Go, the code that causes this looks absolutely correct.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Fetch all workspaces for an organization
workspaces, err := repo.GetWorkspaces(orgID)
if err != nil {
    return err
}
// For each workspace, fetch its storage stats
for _, ws := range workspaces {
    storage, err := repo.GetWorkspaceStorage(ws.ID) // 🔴 This is the problem
    if err != nil {
        return err
    }
    // ... do something with storage
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It reads beautifully.&lt;br&gt;
It’s clean.&lt;br&gt;
It’s idiomatic Go.&lt;br&gt;
And it’s absolutely destroying your database.&lt;/p&gt;

&lt;p&gt;Here’s what’s actually happening under the hood:&lt;br&gt;
Let’s say an organization has 100 workspaces.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Query #1: SELECT FROM workspaces WHERE org_id = 123 (gets all 100 workspaces)*

Query #2: SELECT used_bytes FROM workspace_storage_view WHERE workspace_id = 1

Query #3: SELECT used_bytes FROM workspace_storage_view WHERE workspace_id = 2

Query #4: SELECT used_bytes FROM workspace_storage_view WHERE workspace_id = 3

……..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Query #101: SELECT used_bytes FROM workspace_storage_view WHERE workspace_id = 100&lt;br&gt;
That’s the N+1 problem 👀&lt;br&gt;
One query to get the list, plus N queries to get the related data for each item.&lt;/p&gt;

&lt;p&gt;Why does this feel so innocent?&lt;br&gt;
Because it works perfectly fine in development.&lt;/p&gt;

&lt;p&gt;With 3 workspaces, you’d never notice.&lt;br&gt;
With 200 workspaces, you’ve got a 2–3 second problem.&lt;/p&gt;

&lt;p&gt;Each individual query is fast — around 8–10ms.&lt;/p&gt;

&lt;p&gt;But you’re paying that cost 200 times, plus network round trips, plus connection pool contention, plus database CPU overhead.&lt;br&gt;
The math is brutal:&lt;/p&gt;

&lt;p&gt;200 queries × 10ms = 2,000ms of pure database time🥳&lt;br&gt;
Add network latency and runtime overhead,&lt;br&gt;
and suddenly you’re staring at 3+ seconds for an endpoint that should be sub-100ms.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Investigation
&lt;/h2&gt;

&lt;p&gt;I’ll admit something uncomfortable:&lt;/p&gt;

&lt;p&gt;I didn’t find this problem in a code review.&lt;br&gt;
I didn’t catch it in testing.&lt;br&gt;
I didn’t even suspect it at first.&lt;/p&gt;

&lt;p&gt;I found it because I finally stopped guessing, and followed a single request all the way through.&lt;/p&gt;

&lt;p&gt;Dead End #1: Logs&lt;br&gt;
My first instinct was the obvious one: logs.😼&lt;/p&gt;

&lt;p&gt;Maybe there was a slow query.&lt;br&gt;
Maybe a timeout.&lt;br&gt;
Maybe a retry loop silently hammering the database.&lt;/p&gt;

&lt;p&gt;I filtered logs for the slow endpoint and scrolled through hours of requests.&lt;/p&gt;

&lt;p&gt;Nothing. Every query completed in 8–12ms.&lt;br&gt;
No timeouts.&lt;br&gt;
No retries.&lt;br&gt;
No errors.&lt;/p&gt;

&lt;p&gt;The logs weren’t lying — they were just telling me the wrong story.&lt;/p&gt;

&lt;p&gt;Each query was fast in isolation.&lt;br&gt;
The problem was the volume.&lt;/p&gt;

&lt;p&gt;Dead End #2: Basic Metrics&lt;br&gt;
Next, I checked our standard metrics dashboard.&lt;/p&gt;

&lt;p&gt;CPU: normal&lt;br&gt;
Memory: normal&lt;br&gt;
Error rate: 0%&lt;br&gt;
Request rate: steady&lt;br&gt;
Database connections: within limits&lt;br&gt;
Everything looked… healthy.&lt;/p&gt;

&lt;p&gt;The only anomaly was API latency.&lt;br&gt;
Not spiking.&lt;br&gt;
Not flapping.&lt;br&gt;
Just consistently bad — like a fever that wouldn’t break.&lt;/p&gt;

&lt;p&gt;At this point, I was stuck 😔&lt;/p&gt;

&lt;p&gt;The Breakthrough: Tracing the Request&lt;br&gt;
Frustrated, I opened our APM and did the thing I should have done first.&lt;/p&gt;

&lt;p&gt;I filtered by the problematic endpoint.&lt;br&gt;
Sorted by slowest requests.&lt;br&gt;
Clicked into a single trace.&lt;/p&gt;

&lt;p&gt;The moment the waterfall view loaded, the illusion of a “healthy” service vanished.&lt;/p&gt;

&lt;p&gt;Normally, a good trace looks boring:&lt;/p&gt;

&lt;p&gt;a short HTTP handler&lt;br&gt;
one or two database calls&lt;br&gt;
done&lt;/p&gt;

&lt;p&gt;This trace looked like a skyscraper collapsing.&lt;/p&gt;

&lt;p&gt;The Moment It Clicked&lt;br&gt;
At the very top, I saw the total request time: ~3.2 seconds.&lt;/p&gt;

&lt;p&gt;Below it:&lt;/p&gt;

&lt;p&gt;a tiny sliver of Go application logic&lt;br&gt;
and then… a massive wall of database calls&lt;/p&gt;

&lt;p&gt;I expanded the database section.&lt;/p&gt;

&lt;p&gt;The same query shape repeated again and again.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[DB] SELECT * FROM workspaces WHERE org_id = ?          (15ms)
[DB] SELECT used_bytes FROM workspace_storage_view...   (8ms)
[DB] SELECT used_bytes FROM workspace_storage_view...   (7ms)
[DB] SELECT used_bytes FROM workspace_storage_view...   (9ms)
[DB] SELECT used_bytes FROM workspace_storage_view...   (8ms)
...
(repeats 200+ times)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I started counting.&lt;/p&gt;

&lt;p&gt;By query #50, I stopped.&lt;/p&gt;

&lt;p&gt;I scrolled to the bottom.&lt;/p&gt;

&lt;p&gt;217 database queries.&lt;br&gt;
For one API request.&lt;/p&gt;

&lt;p&gt;That was my smoking gun.🚨&lt;/p&gt;

&lt;p&gt;What the Trace Made Impossible to Ignore&lt;br&gt;
The trace revealed three things that logs and metrics never could.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Query Count Explosion&lt;/strong&gt;&lt;br&gt;
This request executed 217 queries.&lt;br&gt;
Most of our other endpoints averaged 3–5.&lt;/p&gt;

&lt;p&gt;A 40× outlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Latency Waterfall&lt;/strong&gt;&lt;br&gt;
The breakdown was brutal:&lt;/p&gt;

&lt;p&gt;Application code: ~45ms&lt;br&gt;
Database queries: ~2,800ms&lt;br&gt;
Network overhead: ~200ms&lt;/p&gt;

&lt;p&gt;Over 90% of the request time was spent just talking to the database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. The Pattern&lt;/strong&gt;&lt;br&gt;
SigNoz groups similar queries together.&lt;br&gt;
It showed me:&lt;/p&gt;

&lt;p&gt;SELECT … FROM workspaces — 1 execution&lt;br&gt;
SELECT … FROM workspace_storage_view — 216 executions&lt;/p&gt;

&lt;p&gt;One query to get the list.&lt;br&gt;
216 queries to get the details.&lt;/p&gt;

&lt;p&gt;Classic N+1.&lt;/p&gt;

&lt;p&gt;Why This Was Invisible Before&lt;br&gt;
Without tracing, I would have never found this.&lt;/p&gt;

&lt;p&gt;The N+1 problem doesn’t show up in slow query logs because no single query is slow.&lt;br&gt;
It doesn’t show up in error logs because nothing is failing.&lt;br&gt;
It doesn’t show up in CPU metrics because the database can handle it (until it can’t).&lt;/p&gt;

&lt;p&gt;You need traces.&lt;br&gt;
You need to see the full request lifecycle.&lt;br&gt;
You need to count the queries.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Realization😶‍🌫️
&lt;/h2&gt;

&lt;p&gt;I sat there for a full minute, just staring at the trace.&lt;/p&gt;

&lt;p&gt;217 queries.&lt;br&gt;
One endpoint.&lt;br&gt;
Zero errors.&lt;/p&gt;

&lt;p&gt;I thought about the developer who wrote this code.&lt;/p&gt;

&lt;p&gt;They weren’t careless.&lt;br&gt;
They weren’t inexperienced.&lt;br&gt;
They just never saw what I was seeing.&lt;/p&gt;

&lt;p&gt;They tested with 5 workspaces.&lt;br&gt;
I was staring at an organization with 216.&lt;/p&gt;

&lt;p&gt;That’s the brutal truth about N+1 problems:&lt;/p&gt;

&lt;p&gt;They’re invisible… until they’re not. And tracing was the flashlight that finally illuminated the dark corner where this one had been hiding.&lt;/p&gt;

&lt;p&gt;The Real Mole&lt;br&gt;
Once I saw the trace, I didn’t need to hunt for long.&lt;/p&gt;

&lt;p&gt;I searched for the endpoint handler, followed it into the service layer, and landed in the repository code.&lt;/p&gt;

&lt;p&gt;There it was.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workspaces, _ := repo.GetWorkspacesByOrg(ctx, orgID)
for _, ws := range workspaces {
    used, _ := repo.GetWorkspaceStorageUsed(ctx, ws.ID)
    total += used
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That loop was the entire problem.&lt;/p&gt;

&lt;p&gt;One query to fetch the list.&lt;br&gt;
One query per workspace to fetch storage.&lt;br&gt;
Repeated hundreds of times.&lt;/p&gt;

&lt;p&gt;The code wasn’t wrong.&lt;br&gt;
It wasn’t sloppy.&lt;br&gt;
It wasn’t even “bad.”&lt;/p&gt;

&lt;p&gt;It just assumed the number of workspaces would stay small.&lt;/p&gt;

&lt;p&gt;In production, that assumption quietly collapsed.&lt;/p&gt;

&lt;p&gt;Once I saw this loop with the trace numbers in mind, everything clicked. There was no mystery anymore, just a very expensive pattern hiding behind clean code.&lt;/p&gt;

&lt;p&gt;The fix wasn’t complicated.&lt;/p&gt;

&lt;p&gt;It was just time to stop asking the database the same question over and over again.&lt;/p&gt;

&lt;p&gt;The Fix&lt;br&gt;
Once I saw the trace, I knew exactly what needed to change.&lt;/p&gt;

&lt;p&gt;The solution wasn’t complicated.&lt;br&gt;
It just required a different way of thinking.&lt;/p&gt;

&lt;p&gt;Instead of asking the database the same question 216 times, I needed to ask it once.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Strategy🤗
&lt;/h2&gt;

&lt;p&gt;The fix came down to two deliberate changes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop querying inside a loop.&lt;/strong&gt;&lt;br&gt;
Fetch all workspace storage data in a single database call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Actually use the database view we already had.&lt;/strong&gt;&lt;br&gt;
We already had workspace_storage_view — a pre-aggregated snapshot of storage usage. We just weren’t using it efficiently.&lt;/p&gt;

&lt;p&gt;That was it.&lt;/p&gt;

&lt;p&gt;The code change took about 20 minutes.&lt;/p&gt;

&lt;p&gt;The impact was immediate.&lt;/p&gt;

&lt;p&gt;The After Code (Conceptually)&lt;br&gt;
Instead of this:&lt;/p&gt;

&lt;p&gt;Fetch workspaces&lt;br&gt;
Loop&lt;br&gt;
Query storage per workspace (N times)&lt;/p&gt;

&lt;p&gt;The flow became:&lt;/p&gt;

&lt;p&gt;Fetch all workspace storage usage in one query&lt;br&gt;
Aggregate in memory&lt;/p&gt;

&lt;p&gt;The loop didn’t disappear — but now it was looping over in-memory data, not triggering database calls.&lt;/p&gt;

&lt;p&gt;That distinction matters more than it sounds.&lt;/p&gt;
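&lt;p&gt;For readers who want more than the conceptual flow, here’s a minimal, self-contained sketch of the batched shape. The names (StorageRow, fetchStorageForWorkspaces) are illustrative, not our production API, and the database call is stubbed with in-memory data; the real version would wrap a single IN-clause (or = ANY) query against workspace_storage_view:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import "fmt"

// StorageRow mirrors one row of workspace_storage_view.
type StorageRow struct {
    WorkspaceID int64
    UsedBytes   int64
}

// fetchStorageForWorkspaces stands in for the single batched query:
//   SELECT workspace_id, used_bytes
//   FROM workspace_storage_view
//   WHERE workspace_id = ANY($1)
// Stubbed with in-memory data here so the sketch runs anywhere.
func fetchStorageForWorkspaces(ids []int64) []StorageRow {
    rows := make([]StorageRow, 0, len(ids))
    for _, id := range ids {
        rows = append(rows, StorageRow{WorkspaceID: id, UsedBytes: id * 100})
    }
    return rows
}

func main() {
    // One round trip instead of N: the loop below touches memory, not the network.
    rows := fetchStorageForWorkspaces([]int64{1, 2, 3})
    var total int64
    for _, r := range rows {
        total += r.UsedBytes
    }
    fmt.Println("total used bytes:", total)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The loop is still there; it just sums structs instead of issuing queries.&lt;/p&gt;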

&lt;p&gt;What Changed&lt;br&gt;
Same logic.&lt;br&gt;
Same output.&lt;br&gt;
Completely different performance profile.&lt;/p&gt;

&lt;p&gt;Why the Database View Mattered🥳&lt;br&gt;
The real win wasn’t in the Go code.&lt;/p&gt;

&lt;p&gt;It was in the database.&lt;/p&gt;

&lt;p&gt;The workspace_storage_view already did the expensive work:&lt;/p&gt;

&lt;p&gt;Scanning millions of rows in the files table&lt;br&gt;
Calculating SUM(size) per workspace&lt;br&gt;
Storing the result in a pre-aggregated form&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE OR REPLACE VIEW workspace_storage_view AS
SELECT
    workspace_id,
    COALESCE(SUM(size), 0) AS used_bytes
FROM files
WHERE type != 'folder'
GROUP BY workspace_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without this view, the “fix” would have just moved the problem, forcing the database to recompute heavy aggregations on every request.&lt;/p&gt;

&lt;p&gt;With the view:&lt;/p&gt;

&lt;p&gt;Aggregation happens once&lt;br&gt;
Reads are fast&lt;br&gt;
Logic is consistent across services&lt;/p&gt;

&lt;p&gt;It became our single source of truth for storage usage — shared between Go services and legacy PHP code. No duplication. No discrepancies.&lt;/p&gt;

&lt;p&gt;The Moment I Deployed It&lt;br&gt;
I pushed the fix at 3:47 PM on a Thursday☕️&lt;/p&gt;

&lt;p&gt;Then I opened the monitoring dashboard and watched.&lt;/p&gt;

&lt;p&gt;The latency line for the endpoint&lt;br&gt;
didn’t slowly improve.&lt;/p&gt;

&lt;p&gt;It fell off a cliff.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Metric&lt;/th&gt;&lt;th&gt;Before&lt;/th&gt;&lt;th&gt;After&lt;/th&gt;&lt;th&gt;Improvement&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Queries per request&lt;/td&gt;&lt;td&gt;217&lt;/td&gt;&lt;td&gt;1&lt;/td&gt;&lt;td&gt;217× fewer&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Avg response time&lt;/td&gt;&lt;td&gt;~2.8s&lt;/td&gt;&lt;td&gt;~80ms&lt;/td&gt;&lt;td&gt;~35× faster&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;P95 latency&lt;/td&gt;&lt;td&gt;~4.2s&lt;/td&gt;&lt;td&gt;~120ms&lt;/td&gt;&lt;td&gt;~35× better&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;DB CPU usage&lt;/td&gt;&lt;td&gt;~65%&lt;/td&gt;&lt;td&gt;~12%&lt;/td&gt;&lt;td&gt;82% reduction&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;I refreshed the endpoint for our largest customer. 78ms.&lt;/p&gt;

&lt;p&gt;Refreshed again. 82ms.&lt;/p&gt;

&lt;p&gt;I sat there for five minutes just watching the graphs.&lt;/p&gt;

&lt;p&gt;No spikes.&lt;br&gt;
No regression.&lt;br&gt;
Just… fast.&lt;/p&gt;

&lt;p&gt;The Best Metric of All&lt;br&gt;
The next morning, our support lead messaged me:&lt;/p&gt;

&lt;p&gt;“Hey, whatever you did yesterday, the dashboard complaints stopped completely. Users are happy again.”&lt;/p&gt;

&lt;p&gt;I didn’t tell him it was one structural query change.&lt;/p&gt;

&lt;p&gt;I just said:&lt;/p&gt;

&lt;p&gt;“Fixed a thing.”😁 (I told him the full story afterward, by the way.)&lt;/p&gt;
&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;I’ve fixed performance bugs before.&lt;br&gt;
But this one stuck with me.&lt;/p&gt;

&lt;p&gt;Not because it was complex (it wasn’t),&lt;br&gt;
but because it exposed a blind spot I didn’t know I had.&lt;/p&gt;

&lt;p&gt;Fixing the N+1 problem felt great. Watching latency drop from 3 seconds to ~80ms was an instant dopamine hit.&lt;/p&gt;

&lt;p&gt;Here’s what I learned:&lt;/p&gt;

&lt;p&gt;I used to think clean code was enough. Readable loops, proper error handling, separation of concerns — that was the goal.&lt;br&gt;
This bug taught me otherwise.&lt;/p&gt;

&lt;p&gt;The real lesson wasn’t about JOINs or database views.&lt;br&gt;
It was about mindset.&lt;/p&gt;

&lt;p&gt;Databases don’t think in loops. They think in sets.&lt;br&gt;
Every time I write a for loop that hits the database, I’m forcing it to work against its nature — and paying for it in latency.&lt;/p&gt;

&lt;p&gt;I also learned that silent problems stay silent until you give yourself the tools to hear them.&lt;br&gt;
Logs and metrics told me something was off.&lt;br&gt;
SigNoz traces told me exactly what — 217 queries where one would do.&lt;/p&gt;

&lt;p&gt;Now I test with real volumes.&lt;br&gt;
I watch query counts like a hawk.&lt;br&gt;
And I never, ever query inside a loop without asking:&lt;/p&gt;

&lt;p&gt;“Can I do this once?”&lt;/p&gt;

&lt;p&gt;Because the fix was one query change.&lt;br&gt;
But the lesson? That’s permanent.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;One more thing I want to add:&lt;/em&gt;&lt;br&gt;
I’ve seen teams blame ORMs for N+1 problems. But the tool isn’t the culprit — it’s abstraction without understanding.&lt;/p&gt;

&lt;p&gt;ORMs are convenience wrappers, not magic. They eliminate boilerplate, but you still need to know:&lt;/p&gt;

&lt;p&gt;What SQL your code actually generates&lt;br&gt;
How the database executes it&lt;br&gt;
When lazy loading silently triggers extra queries&lt;/p&gt;

&lt;p&gt;The previous developer didn't misuse GORM. They just hadn't tested at production scale. That gap — in testing, monitoring, team awareness — is on us, not the tool.&lt;/p&gt;

&lt;p&gt;Great abstractions make the right thing easy and the wrong thing hard. But no tool replaces understanding what happens under the hood.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Habits to Avoid N+1&lt;/strong&gt;&lt;br&gt;
Here’s what I do now and what I wish I’d done before:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Don’t query the database inside a loop. Ever.&lt;/strong&gt;&lt;br&gt;
This is the cardinal rule.&lt;br&gt;
If you see a for loop with a database call inside, stop. Refactor.&lt;/p&gt;

&lt;p&gt;There are very few legitimate exceptions.&lt;br&gt;
If you think you have one, measure it first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Think in sets, not rows.&lt;/strong&gt;&lt;br&gt;
Before writing a query, ask:&lt;br&gt;
“What’s the smallest number of queries that can get me everything I need?”&lt;/p&gt;

&lt;p&gt;Databases excel at set operations — JOINs, IN clauses, bulk reads.&lt;br&gt;
They're terrible at being asked the same question repeatedly.&lt;/p&gt;

&lt;p&gt;Your job is to translate “I need data for each of these items” into “I need all this data at once.”&lt;/p&gt;
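&lt;p&gt;As a concrete example of that translation, here’s how “storage for each of these workspaces” becomes one set query. buildBatchQuery is a hypothetical helper for illustration, not a real library function; it only builds the SQL string and argument list:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "strings"
)

// buildBatchQuery turns "data for each of these items" into one set-based
// query. The table and column names follow the article's schema; the helper
// itself is an illustrative sketch, not production code.
func buildBatchQuery(ids []int64) (string, []interface{}) {
    placeholders := make([]string, len(ids))
    args := make([]interface{}, len(ids))
    for i, id := range ids {
        placeholders[i] = fmt.Sprintf("$%d", i+1)
        args[i] = id
    }
    query := "SELECT workspace_id, used_bytes FROM workspace_storage_view" +
        " WHERE workspace_id IN (" + strings.Join(placeholders, ", ") + ")"
    return query, args
}

func main() {
    query, args := buildBatchQuery([]int64{7, 8, 9})
    fmt.Println(query)
    fmt.Println("args:", args)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;One string-building step, one round trip, no loop of queries.&lt;/p&gt;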

&lt;p&gt;&lt;strong&gt;3. Enable query logging in development.&lt;/strong&gt;&lt;br&gt;
Yes, the logs are noisy.&lt;br&gt;
Yes, it’s annoying to scroll past hundreds of queries.&lt;/p&gt;

&lt;p&gt;But it’s the only way to catch N+1 patterns early.&lt;/p&gt;

&lt;p&gt;In Go with GORM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db, err := gorm.Open(postgres.Open(dsn), &amp;amp;gorm.Config{
    Logger: logger.Default.LogMode(logger.Info), // Shows all queries
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With sqlx or database/sql, wrap your queries with logging middleware.&lt;/p&gt;

&lt;p&gt;Make it part of your local dev setup.&lt;br&gt;
Turn it off when you need to. But keep it on by default.&lt;/p&gt;
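&lt;p&gt;With database/sql, that middleware can be as small as a wrapper that counts and logs each query. This is a sketch under stated assumptions: Querier, loggingDB, and fakeDB are all made-up names, and the fake stands in for a real *sql.DB so the example runs without a database:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "context"
    "database/sql"
    "log"
)

// Querier is the narrow slice of *sql.DB that application code needs.
type Querier interface {
    QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)
}

// loggingDB wraps any Querier, logging and counting every query.
// A spike in the count during local testing is an early N+1 signal.
type loggingDB struct {
    inner Querier
    count int
}

func (l *loggingDB) QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error) {
    l.count++
    log.Printf("query #%d: %s", l.count, query)
    return l.inner.QueryContext(ctx, query, args...)
}

// fakeDB stands in for a real *sql.DB so the sketch runs without a database.
type fakeDB struct{}

func (fakeDB) QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error) {
    return nil, nil
}

func main() {
    db := &amp;amp;loggingDB{inner: fakeDB{}}
    ctx := context.Background()
    for i := 0; i &amp;lt; 3; i++ {
        db.QueryContext(ctx, "SELECT used_bytes FROM workspace_storage_view WHERE workspace_id = $1", i)
    }
    log.Printf("total queries: %d", db.count)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Point your repositories at the wrapper in dev, and an N+1 shows up as an obvious run of near-identical log lines.&lt;/p&gt;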

&lt;p&gt;&lt;strong&gt;4. Set query count budgets for endpoints.&lt;/strong&gt;&lt;br&gt;
Not all endpoints are created equal:&lt;/p&gt;

&lt;p&gt;Detail page (single resource): ≤ 3 queries&lt;br&gt;
List page (collection): ≤ 5–7 queries&lt;br&gt;
Dashboard (aggregated data): ≤ 10 queries&lt;/p&gt;

&lt;p&gt;If you exceed these, ask why.&lt;br&gt;
Is it N+1? Over-fetching? Missing JOIN?&lt;/p&gt;

&lt;p&gt;Track these budgets in code comments or your team’s style guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Write tests that assert query counts.&lt;/strong&gt;&lt;br&gt;
This is the game-changer.&lt;/p&gt;

&lt;p&gt;Use a query counter or mock your database layer to verify you’re not making too many calls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func TestGetOrgStorageUsage_QueryCount(t *testing.T) {
    // Setup test with 200 workspaces
queryLog := &amp;amp;QueryLogger{}
    service := NewWorkspaceService(queryLog)
    service.GetOrgStorageUsage(ctx, orgID)
    if queryLog.Count() &amp;gt; 2 {
        t.Errorf("Expected ≤ 2 queries, got %d", queryLog.Count())
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the test fails, you’ve caught an N+1 before it reaches production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Profile with production-scale data.&lt;/strong&gt;&lt;br&gt;
Testing with 5 records is like testing a car in your driveway.&lt;br&gt;
It might work. You won’t know how it handles the highway.&lt;/p&gt;

&lt;p&gt;Before deploying:&lt;/p&gt;

&lt;p&gt;Seed your staging database with realistic volumes&lt;br&gt;
Run the endpoint with your largest customer’s data profile&lt;br&gt;
Measure latency, query count, and database load&lt;/p&gt;

&lt;p&gt;If you can’t test at scale, at least add a warning comment:&lt;/p&gt;

&lt;p&gt;// ⚠️ N+1 risk: loops over workspaces&lt;br&gt;
// Test with 200+ workspaces before deploying&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Use APM tools to catch what you miss.&lt;/strong&gt;&lt;br&gt;
SigNoz, DataDog, New Relic — whatever you use, configure it to alert on:&lt;/p&gt;

&lt;p&gt;Queries per request &amp;gt; threshold&lt;br&gt;
Database time &amp;gt; 50% of total request time&lt;br&gt;
Linear correlation between data size and latency&lt;/p&gt;

&lt;p&gt;Set up a dashboard that shows query counts by endpoint.&lt;br&gt;
Review it weekly. Look for outliers.&lt;/p&gt;

&lt;p&gt;Silent killers stay silent until you give yourself the tools to hear them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Learn your ORM’s data fetching patterns.&lt;/strong&gt;&lt;br&gt;
If you use GORM:&lt;/p&gt;

&lt;p&gt;Preload() = separate query for related data (can cause N+1 if nested)&lt;br&gt;
Joins() = single query with JOINs (usually better for lists)&lt;br&gt;
Select() = fetch only the columns you need&lt;/p&gt;

&lt;p&gt;If you use sqlx or raw queries:&lt;/p&gt;

&lt;p&gt;Prefer IN clauses over loops: SELECT * FROM table WHERE id IN (?, ?, ?)&lt;br&gt;
Use JOINs when you need related data from multiple tables&lt;br&gt;
Consider database views for pre-aggregated data (like we did with workspace_storage_view)&lt;/p&gt;

&lt;p&gt;Don’t treat your database layer as a black box.&lt;br&gt;
Understand what SQL your code generates.&lt;br&gt;
Read the query plans. Use EXPLAIN.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Make “query count” part of code reviews.&lt;/strong&gt;&lt;br&gt;
Add it to your checklist:&lt;/p&gt;

&lt;p&gt;[ ] Does this loop make database calls?&lt;br&gt;
[ ] Could this be a single query with a JOIN or IN clause?&lt;br&gt;
[ ] What happens when this list grows to 200+ items?&lt;br&gt;
[ ] Are we fetching more data than we need?&lt;/p&gt;

&lt;p&gt;A second pair of eyes catches what you miss.&lt;br&gt;
A team habit prevents what individuals forget.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. When in doubt, measure.&lt;/strong&gt;&lt;br&gt;
Don’t guess. Don’t assume. Don’t trust “it worked in dev.”&lt;/p&gt;

&lt;p&gt;Measure:&lt;/p&gt;

&lt;p&gt;Query count per request&lt;br&gt;
Total database time&lt;br&gt;
Latency at different data volumes&lt;/p&gt;

&lt;p&gt;If the numbers look wrong, the code is wrong — even if it “works.”&lt;/p&gt;

&lt;p&gt;These habits won’t eliminate every performance problem.&lt;br&gt;
But they’ll catch 90% of N+1 issues before they reach production.&lt;/p&gt;

&lt;p&gt;And the 10% that slip through?&lt;br&gt;
That’s what SigNoz is for.&lt;/p&gt;

&lt;p&gt;If you’ve handled N+1 differently, or think there’s a better approach, I’d love to hear it.&lt;/p&gt;

&lt;p&gt;Happy reading. Go check your query logs — I’ll wait.❤️&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>go</category>
      <category>database</category>
      <category>postgres</category>
    </item>
    <item>
      <title>How to auto-scale a navbar in Bootstrap 4</title>
      <dc:creator>Naveen Chandra Adhikari</dc:creator>
      <pubDate>Wed, 01 Jan 2020 07:44:10 +0000</pubDate>
      <link>https://forem.com/naveenc83002940/how-to-auto-scaling-of-navbar-in-bootstrap4-6af</link>
      <guid>https://forem.com/naveenc83002940/how-to-auto-scaling-of-navbar-in-bootstrap4-6af</guid>
      <description>

</description>
      <category>help</category>
    </item>
    <item>
      <title>Can anyone tell me how to measure time in Python?</title>
      <dc:creator>Naveen Chandra Adhikari</dc:creator>
      <pubDate>Mon, 30 Dec 2019 07:39:31 +0000</pubDate>
      <link>https://forem.com/naveenc83002940/can-anyone-tell-me-how-to-measure-time-in-python-pkc</link>
      <guid>https://forem.com/naveenc83002940/can-anyone-tell-me-how-to-measure-time-in-python-pkc</guid>
      <description>

</description>
      <category>help</category>
    </item>
    <item>
      <title>Python: analyzing phishing emails</title>
      <dc:creator>Naveen Chandra Adhikari</dc:creator>
      <pubDate>Sun, 29 Dec 2019 18:02:29 +0000</pubDate>
      <link>https://forem.com/naveenc83002940/python-analyzing-phissing-emails-21de</link>
      <guid>https://forem.com/naveenc83002940/python-analyzing-phissing-emails-21de</guid>
      <description>

</description>
      <category>help</category>
    </item>
  </channel>
</rss>
