Python is widely celebrated for its simplicity and readability, making it a favorite among beginners and seasoned developers alike. However, it's also known that Python may not always be the fastest language when it comes to raw execution speed, especially in compute-intensive scenarios.
Fortunately, with a thoughtful approach and the application of smart performance optimization techniques, you can significantly boost the speed and efficiency of your Python code without compromising its clean and elegant syntax.
From leveraging built-in functions and data structures to adopting libraries like NumPy, using just-in-time compilers like Numba, or optimizing loops and memory usage, there are many powerful strategies at your disposal. These performance enhancements allow you to write Python code that not only remains highly readable and maintainable but also delivers near-native speed in the right contexts.
1. Use List Comprehensions Instead of Loops
An explicit loop that appends to a list is generally slower than the equivalent list comprehension, which is optimized at the interpreter level.
❌ Slow:
squared = []
for i in range(10):
    squared.append(i ** 2)
✅ Faster:
squared = [i ** 2 for i in range(10)]
2. Use join() Instead of String Concatenation
String concatenation in a loop is inefficient since strings are immutable.
❌ Slow:
result = ""
for word in ["Hello", "World"]:
    result += word + " "
✅ Faster:
result = " ".join(["Hello", "World"])
3. Use map() Instead of Loops for Transformations
Built-in functions like map() can be faster than explicit loops, particularly when you pass them an existing callable rather than a lambda.
❌ Slow:
numbers = [1, 2, 3, 4]
squared = []
for num in numbers:
    squared.append(num ** 2)
✅ Faster:
squared = list(map(lambda x: x ** 2, numbers))
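One caveat worth benchmarking for your own workload: map() tends to win only when you hand it an existing callable directly; once a lambda is involved, a list comprehension is usually at least as fast and reads better. A minimal sketch:

```python
numbers = [1, 2, 3, 4]

# map() shines when the transformation is an existing callable --
# no lambda needed, just pass the built-in directly:
as_strings = list(map(str, numbers))

# With a lambda in the picture, a comprehension is usually the
# clearer (and often faster) choice:
squared = [n ** 2 for n in numbers]
```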
4. Use set for Faster Membership Checks
Checking whether a value is in a list is O(n); the same check against a set is O(1) on average.
❌ Slow:
items = ["apple", "banana", "cherry"]
if "banana" in items:
    print("Found!")
✅ Faster:
items = {"apple", "banana", "cherry"} # Using a set
if "banana" in items:
    print("Found!")
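A sketch of when the conversion pays off, assuming repeated lookups: building the set is a one-time O(n) cost, so it is worth it when you test membership many times afterwards (the data below is illustrative):

```python
# A list with many duplicates; membership tests on it are O(n) each.
items_list = ["apple", "banana", "cherry"] * 1000

# One-time O(n) conversion; duplicates collapse away.
items_set = set(items_list)

# Every subsequent lookup is O(1) on average.
found = "banana" in items_set
```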
5. Use functools.lru_cache for Memoization
If a function performs expensive calculations, cache results to speed up repeated calls.
from functools import lru_cache

@lru_cache(maxsize=1000)
def expensive_function(n):
    print("Computing...")
    return n * n
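A quick sketch of the cache in action; the call_count counter is added here purely for demonstration and is not part of the original example:

```python
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=1000)
def expensive_function(n):
    global call_count
    call_count += 1  # counts how often the body actually runs
    return n * n

expensive_function(4)
expensive_function(4)           # served from the cache; body skipped
result = expensive_function(4)  # still cached
```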
6. Use multiprocessing for Parallel Execution
Python's multiprocessing module lets CPU-bound work run on multiple cores, sidestepping the GIL.
from multiprocessing import Pool

def square(n):
    return n ** 2

if __name__ == "__main__":  # required on platforms that spawn workers
    with Pool(4) as p:
        results = p.map(square, range(10))
7. Avoid Global Variables
Looking up a global name is slower than a local lookup, so hot loops should work on local variables instead.
❌ Slow:
x = 10

def compute():
    global x
    for _ in range(1000000):
        x += 1
✅ Faster:
def compute():
    x = 10  # Local scope is faster
    for _ in range(1000000):
        x += 1
8. Use cython or numba for Heavy Computation
If your Python code involves a lot of number crunching, cython can compile it ahead of time, and numba can compile it on the fly with just-in-time (JIT) compilation.
from numba import jit

@jit(nopython=True)
def fast_function(n):
    return n * n
9. Use itertools for Memory-Efficient Iteration
Instead of materializing large lists in memory, use iterators from itertools.
from itertools import islice

with open("large_file.txt") as f:
    first_10_lines = list(islice(f, 10))
10. Prefer f-strings Over format()
Python's f-strings are faster than .format() for string formatting.
❌ Slow:
name = "John"
greeting = "Hello, {}".format(name)
✅ Faster:
name = "John"
greeting = f"Hello, {name}"
11. Use enumerate() Instead of range(len(...))
Avoid manually managing an index when iterating through lists.
❌ Slow Approach:
items = ["apple", "banana", "cherry"]
for i in range(len(items)):
    print(i, items[i])
✅ Faster Approach:
items = ["apple", "banana", "cherry"]
for i, item in enumerate(items):
    print(i, item)
Why? enumerate() is optimized and makes the code more readable.
12. Use zip() for Parallel Iteration
Instead of iterating over multiple lists by index, use zip().
❌ Slow Approach:
names = ["Alice", "Bob", "Charlie"]
ages = [25, 30, 35]
for i in range(len(names)):
    print(names[i], ages[i])
✅ Faster Approach:
for name, age in zip(names, ages):
    print(name, age)
Why? zip() is faster and avoids manual indexing.
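One caveat worth knowing: zip() silently stops at the shortest input. A small sketch using itertools.zip_longest for the cases where that matters (the sample lists are illustrative):

```python
from itertools import zip_longest

names = ["Alice", "Bob", "Charlie"]
ages = [25, 30]  # one entry shorter than names

# zip() stops at the shortest iterable -- Charlie is dropped:
pairs = list(zip(names, ages))

# zip_longest() keeps going, padding the gap with a fillvalue:
padded = list(zip_longest(names, ages, fillvalue=None))
```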
13. Use itertools for Memory Efficiency
For large datasets, itertools can help avoid unnecessary memory usage.
✅ Example: Using islice() to Limit Iteration
from itertools import islice
big_range = range(10**6)  # range() is lazy; nothing is materialized yet
first_ten = list(islice(big_range, 10))
print(first_ten)
Why? Only the first ten values are ever materialized; a full million-element list is never built.
14. Use defaultdict for Cleaner Dictionary Operations
Avoid explicit key-existence checks by using defaultdict.
❌ Slow Approach:
counts = {}
words = ["apple", "banana", "apple"]
for word in words:
    if word in counts:
        counts[word] += 1
    else:
        counts[word] = 1
✅ Faster Approach:
from collections import defaultdict

counts = defaultdict(int)
for word in words:
    counts[word] += 1
Why? defaultdict initializes missing values automatically, removing the per-key checks.
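The same idea extends beyond counting: defaultdict(list) starts each missing key at an empty list, which makes grouping a one-liner per element (the word list below is illustrative):

```python
from collections import defaultdict

words = ["apple", "avocado", "banana", "blueberry", "cherry"]

# Group words by their first letter; missing keys start as [].
by_letter = defaultdict(list)
for word in words:
    by_letter[word[0]].append(word)
```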
15. Use Counter for Fast Frequency Counts
Counting elements in a list is easier with collections.Counter.
❌ Slow Approach:
words = ["apple", "banana", "apple"]
word_counts = {}
for word in words:
    if word in word_counts:
        word_counts[word] += 1
    else:
        word_counts[word] = 1
✅ Faster Approach:
from collections import Counter
words = ["apple", "banana", "apple"]
word_counts = Counter(words)
Why? Counter is optimized for frequency counting.
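A small follow-up sketch: Counter also hands you the most frequent elements directly via most_common(), with no extra sorting code (the sample words are illustrative):

```python
from collections import Counter

words = ["apple", "banana", "apple", "cherry", "apple", "banana"]
word_counts = Counter(words)

# most_common(n) returns (element, count) pairs, most frequent first:
top_two = word_counts.most_common(2)
```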
16. Use deque Instead of Lists for Fast Front Insertions
Lists are slow when inserting at the beginning. Use collections.deque instead.
❌ Slow Approach:
items = []
items.insert(0, "new")
✅ Faster Approach:
from collections import deque
items = deque()
items.appendleft("new")
Why? deque has O(1) insertions at either end, while inserting at the front of a list is O(n).
17. Use any() and all() Instead of Loops
Instead of manually checking conditions, use any() or all().
❌ Slow Approach:
values = [0, 0, 1, 0]
found = False
for v in values:
    if v == 1:
        found = True
        break
✅ Faster Approach:
values = [0, 0, 1, 0]
found = any(v == 1 for v in values)
Why? any() short-circuits as soon as it finds a truthy value.
18. Use sorted() with key for Custom Sorting
Instead of writing comparison logic by hand, pass a key function.
Both list.sort() and sorted() accept key; sort() works in place, while sorted() returns a new list:
people = [("Alice", 25), ("Bob", 30), ("Charlie", 20)]
people.sort(key=lambda x: x[1])  # in place, no copy
sorted_people = sorted(people, key=lambda x: x[1])  # new list, original untouched
Why? The key function is called once per element, which is far cheaper than hand-rolled comparison logic, and both variants use the same highly optimized sort under the hood.
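As a further micro-optimization worth benchmarking rather than assuming, operator.itemgetter from the standard library can replace the lambda and avoids a Python-level lambda call per element:

```python
from operator import itemgetter

people = [("Alice", 25), ("Bob", 30), ("Charlie", 20)]

# itemgetter(1) extracts index 1 of each tuple, like lambda x: x[1]:
by_age = sorted(people, key=itemgetter(1))
```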
19. Use @staticmethod and @classmethod When No Instance State Is Needed
If a method doesn't use instance state, mark it @staticmethod (or @classmethod if it needs the class) so callers don't have to create throwaway objects. Only apply these decorators where they genuinely fit the method's role.
❌ Slow Approach:
class Math:
    def square(self, x):
        return x * x

m = Math()
print(m.square(5))
✅ Faster Approach:
class Math:
    @staticmethod
    def square(x):
        return x * x

print(Math.square(5))
Why? No need to instantiate the class to use the method.
20. Use dataclass Instead of Boilerplate Classes
Python's dataclass reduces boilerplate. It's mainly a readability and maintainability win rather than a speed boost.
❌ Verbose:
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age
✅ Cleaner:
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int
Why? dataclass automatically generates __init__, __repr__, and __eq__.
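A small sketch showing those generated methods in action (the Person fields mirror the example above; the sample values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int

a = Person("John", 30)
b = Person("John", 30)

# The generated __eq__ compares field values, not identities:
same = (a == b)

# The generated __repr__ is readable out of the box:
text = repr(a)
```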
Final Thoughts
By combining these 20 Python tricks, you can significantly improve performance, reduce memory usage, and write cleaner code.
Did I miss any of your favorite Python optimizations? Let me know in the comments!
📢 Let’s Connect!
💼 LinkedIn | 📂 GitHub | ✍️ Dev.to | 🌐 Hashnode
💡 Join the Conversation:
- Found this useful? Like 👍, comment 💬.
- Share 🔄 to help others on their journey!
- Have ideas? Share them below!
- Bookmark 📌 this content for easy access later.
Let’s collaborate and create something amazing! 🚀
Top comments (18)
Write on medium too!
Thanks Anmol Baranwal!
I’ve been considering it. I would love to start writing on Medium too soon. Appreciate the encouragement
Regards,
Ram
Instead of a list of options, I would have preferred to see more explanation of why to use one or the other, and also more context.
A list comprehension is faster than the map function when not using a lambda. So if you want speed, the map example should avoid the lambda.
For 4 and 16, the fix is: use the right data structure.
Avoiding global variables is not only a speed fix; it makes the code more maintainable.
9 and 13 are the same thing, so there are 19 different things.
Between defaultdict and Counter, the latter will be faster, as the first is more generic.
Only use @staticmethod and @classmethod when it is appropriate. Don't use them just to avoid class instantiation.
Hi david duymelinck,
Thanks so much for the thoughtful feedback, You're absolutely right. I appreciate you pointing out where more explanation would add value. Also great catch on the overlap between points 9 and 13. l will consolidate that. Feedback like yours really helps improve the quality.
Thanks & Regards,
Ram
Thanks for this article! 🔥
Hi davinceleecode
Thanks for reading. Glad you liked it!
Regards,
Ram
Great article. I would have appreciated a little more explanation (such as the data structures and algorithms behind certain python methods) on why you use a specific syntax rather than another.
Hi Teddy Assih,
Thank you for the feedback. I will update the post with explanations.
Regards,
Ram
been using python for a bit and some of these little changes really do help, keeps me going back to refactor old code all the time
Thank you for the acknowledgment Nevo David 😊👍
Thank you so much for this! ✨
Hi Anita Olsen,
You’re very welcome! I’m glad you found it helpful. 😊👍
Regards,
Ram
pretty cool rundown tbh, i always forget about stuff like lru_cache till i see it used - you ever feel like most speed problems only show up when something actually breaks?
Hi Nathan Tarbert,
Thanks for the comment. I’ve never really faced or noticed that myself. Most of the time, things just run fine unless I’m specifically benchmarking or digging into optimizations.
Regards,
Ram
absolutely love little speed tricks like these honestly makes coding way less annoying over time
Hi Nathan Tarbert
Good to hear that. Refactoring old code with small optimizations can really make a big difference over time. I'm glad the post sparked that motivation!
Thanks & Regards,
Ram
Good!
Thank you RankmyAl