DEV Community

Priya

Optimizing Query Performance in Django with PostgreSQL: A Case Study

Recently, I started working on a feature that required me to optimize the database queries fired from my code. This optimization is critical for ensuring the home page loads quickly, providing a seamless user experience without delays.

Let me give a short intro about the application I’m working with. It’s a Django app with a PostgreSQL database. The feature I’m working on involves logic that needs data from multiple models, as the application is well modularized.

More modularized? A problem!

We all tend to follow certain coding principles we’ve learned, one of which is writing modularized, reusable pieces of code. This approach is great for maintainability and scalability, but when it comes to database queries, it can sometimes lead to inefficiency.


Checking Query Execution Time in Django

First, I checked the performance of the queries being fired from my code. I grabbed the raw SQL, executed it in pgAdmin, and analyzed the results using EXPLAIN ANALYZE. To analyze a query, simply prefix it with EXPLAIN ANALYZE.
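As a quick sketch of that workflow: in Django, the raw SQL for a queryset is available via `str(queryset.query)`, and that string is what you'd paste into pgAdmin behind an EXPLAIN ANALYZE prefix. Since PostgreSQL isn't available in a self-contained snippet, the stdlib sqlite3 module stands in below, using its analogous EXPLAIN QUERY PLAN prefix (the table name here is made up for illustration):

```python
import sqlite3

# SQLite stands in for PostgreSQL here so the sketch runs on its own.
# In PostgreSQL/pgAdmin you would prefix the query with EXPLAIN ANALYZE instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('a'), ('b')")

# Prefix the query to get the plan instead of the rows.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE id = 1"
).fetchall()
for row in plan:
    print(row[-1])  # the human-readable plan step, e.g. a SEARCH or SCAN line
```

The shape of the output differs between databases, but the habit is the same: read the plan, not just the results.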

Here’s a snippet of how it looked when I used the IN clause:

Unique  (cost=8.19..8.20 rows=1 width=16) (actual time=0.045..0.047 rows=1 loops=1)
  ->  Sort  (cost=8.19..8.20 rows=1 width=16) (actual time=0.045..0.045 rows=1 loops=1)
Planning Time: 0.229 ms
Execution Time: 0.076 ms

Next, I switched to using JOIN, and here's the result:

HashAggregate  (cost=30.07..32.69 rows=262 width=16) (actual time=0.148..0.150 rows=1 loops=1)
  Batches: 1  Memory Usage: 37kB
  ->  Nested Loop  (cost=4.21..28.76 rows=262 width=16) (actual time=0.119..0.140 rows=1 loops=1)
Planning Time: 0.455 ms
Execution Time: 0.222 ms

Notice anything?

Yes, the "Execution Time"!

It shows that using IN is faster than using JOIN in this specific case. However, there's a key question to consider: will this scale for larger datasets? What happens if the list inside the IN clause grows significantly? Will it still perform well? The answer is a big no.


Performance Comparison

To better understand the trade-offs between IN and JOIN, let’s analyze how the database handles these two approaches, especially as the dataset grows.

Using IN:

When using IN, the database is essentially checking if each row in the main table exists in a list of values. While this can be fast for smaller datasets, as the list in the IN clause grows, PostgreSQL has to scan a larger set of values. This leads to increased execution time and potentially a full table scan, especially if the subquery or list of values is large.

Using JOIN:

On the other hand, JOIN creates a relationship between tables based on matching values in the columns, allowing the database to optimize the process using indexes. Although the execution time may seem worse for small datasets (as we saw in our initial test), JOIN performs much better when dealing with larger datasets.
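To make the two shapes concrete, here's a minimal runnable sketch (sqlite3 standing in for PostgreSQL, with made-up `customers`/`orders` tables) showing that the IN and JOIN forms are logically equivalent, even though the planner handles them differently. In the Django ORM, assuming hypothetical `Customer` and `Order` models, `Order.objects.filter(customer__active=True)` would compile to the JOIN form, while `Order.objects.filter(customer_id__in=[...])` compiles to the IN form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, active INTEGER);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 1), (2, 0), (3, 1);
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 3);
""")

# Shape 1: IN with a subquery -- each order's customer_id is checked
# against the list of active customer ids.
in_rows = conn.execute("""
    SELECT id FROM orders
    WHERE customer_id IN (SELECT id FROM customers WHERE active = 1)
    ORDER BY id
""").fetchall()

# Shape 2: JOIN -- the same result expressed as a join on the key columns,
# which gives the planner a chance to use an index on orders.customer_id.
join_rows = conn.execute("""
    SELECT o.id FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.active = 1
    ORDER BY o.id
""").fetchall()

print(in_rows)   # [(10,), (12,)]
print(join_rows) # [(10,), (12,)] -- identical rows, different plan
```

Same answer either way; the difference only shows up in the plan and in how each shape scales with the data.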


Which should you use?

It depends! Yes, it depends on your use case and the problem you're trying to solve. It's not always the case that you should use JOIN: for smaller sets of data, IN might be more efficient.

So, build a good understanding of the use case, and get insights from people with more experience in it than you. If you expect the data to grow large enough to affect performance, you can definitely go for JOIN. And yes, don't forget to create indexes on your database to make your JOINs perform better.


Know Indexing better

Indexes are critical when working with joins on large datasets. They allow PostgreSQL to quickly locate rows based on indexed columns, significantly speeding up query performance. To optimize your JOIN queries, ensure that the columns used in the ON clause are indexed, as well as other frequently queried columns such as those in WHERE and ORDER BY clauses.

Want to learn more about indexing strategies for PostgreSQL? Check out this resource on choosing table indexes.


My Decision

After evaluating both the IN and JOIN approaches, I decided to use JOIN in cases where the data is expected to grow, as it provides better performance with larger datasets. However, for situations where I know the data will remain small, I opted for IN as it offers faster query execution for smaller datasets.

Additionally, I've implemented indexing on key columns (like IDs) to ensure that JOIN queries continue to perform well as the application scales.


In conclusion, always evaluate your use case, choose the most appropriate method based on data size, and monitor your queries over time to ensure continued performance optimization as your data scales.

Happy coding! 💻
