Python's context managers and decorators transform how we handle resources and modify behavior. They encapsulate setup and teardown logic while keeping code clean. I've found these patterns invaluable in production systems, especially when dealing with transactions, errors, and resource management. Let me share eight advanced techniques I regularly use.
Stateful context managers maintain internal state across operations. Consider a database transaction system tracking multiple operations before commit. Here's a practical implementation:
```python
class StatefulTransaction:
    def __init__(self):
        self.operations = []
        self.active = False

    def __enter__(self):
        self.active = True
        return self

    def add_operation(self, sql):
        if not self.active:
            raise RuntimeError("Transaction not active")
        self.operations.append(sql)

    def __exit__(self, exc_type, _, __):
        if not exc_type:
            print(f"Committing: {self.operations}")
            # Actual commit logic here
        else:
            print(f"Rolling back: {exc_type.__name__}")
        self.active = False

# Real-world usage
with StatefulTransaction() as tx:
    tx.add_operation("UPDATE accounts SET balance = balance - 100")
    tx.add_operation("UPDATE accounts SET balance = balance + 100")
    # Transaction auto-commits if no exceptions
```
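The rollback path matters just as much as the happy path. A minimal sketch, reusing the StatefulTransaction class above with a simulated failure, shows how an exception inside the block triggers it:

```python
# Simulated failure: the exception reaches __exit__, which rolls back,
# then propagates to the caller because __exit__ returns a falsy value.
try:
    with StatefulTransaction() as tx:
        tx.add_operation("UPDATE accounts SET balance = balance - 100")
        raise ValueError("Insufficient funds")
except ValueError:
    pass  # __exit__ has already printed "Rolling back: ValueError"
```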
Retry decorators handle transient failures gracefully. This pattern saved me during API integrations where network glitches were common. Here's an enhanced version with exponential backoff:
```python
import time
import random
from functools import wraps

def retry(max_attempts=4, initial_delay=0.1, backoff=2):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            delay = initial_delay
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    jitter = random.uniform(0.5, 1.5)
                    sleep_time = delay * jitter
                    print(f"Attempt {attempt} failed. Retrying in {sleep_time:.2f}s")
                    time.sleep(sleep_time)
                    delay *= backoff
        return wrapper
    return decorator

@retry(max_attempts=3, initial_delay=0.5)
def call_flaky_service(endpoint):
    # Simulate 30% failure rate
    if random.random() < 0.3:
        raise ConnectionError("Service timeout")
    return "Response data"
```
Timing context managers help identify bottlenecks. I use this daily when optimizing data pipelines:
```python
from time import perf_counter
from contextlib import contextmanager
import numpy as np  # imported up front so the import cost isn't timed

@contextmanager
def timer(description):
    start = perf_counter()
    try:
        yield
    finally:
        elapsed = perf_counter() - start
        print(f"{description}: {elapsed:.4f} seconds")

# Practical application
with timer("Matrix multiplication"):
    a = np.random.rand(1000, 1000)
    b = np.random.rand(1000, 1000)
    np.dot(a, b)
```
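Because @contextmanager builds on ContextDecorator, the same timer can also be applied as a decorator when you want to time every call to a function. A small sketch, assuming the timer defined above (the function name is illustrative):

```python
@timer("ETL step")  # prints the elapsed time on every call
def run_etl_step(rows):
    return [r * 2 for r in rows]

run_etl_step(range(1_000_000))
```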
Combined decorator-context managers simplify resource handling. This factory pattern creates reusable resource managers:
```python
from contextlib import contextmanager

def resource_manager(resource_type):
    @contextmanager
    def manager(*args, **kwargs):
        resource = resource_type(*args, **kwargs)
        try:
            print(f"Initialized {resource}")
            yield resource
        finally:
            resource.cleanup()
            print(f"Released {resource}")
    return manager

@resource_manager
class DatabaseConnection:
    def __init__(self, connection_string):
        self.conn_string = connection_string
        # Actual connection logic

    def cleanup(self):
        # Close connections
        print(f"Closing {self.conn_string}")

# Usage: DatabaseConnection now returns a context manager that yields the resource
with DatabaseConnection("postgres://user:pass@localhost") as db:
    print(f"Querying with {db.conn_string}")
```
Memoization with expiration balances performance and freshness. I implemented this for financial data feeds:
```python
from functools import wraps
import time

def expiring_cache(ttl=60):
    def decorator(func):
        cache = {}
        @wraps(func)
        def wrapper(*args):
            key = args
            if key in cache:
                result, timestamp = cache[key]
                if time.time() - timestamp < ttl:
                    return result
            result = func(*args)
            cache[key] = (result, time.time())
            return result
        return wrapper
    return decorator

@expiring_cache(ttl=10)
def get_live_stock_price(symbol):
    print(f"Fetching live price for {symbol}")
    # Simulate API call
    return f"{symbol}: {100 + time.time() % 10:.2f}"
```
Temporary environment context managers isolate configuration changes. This prevents "env var pollution" in tests:
```python
import os
from contextlib import contextmanager

@contextmanager
def scoped_env(**overrides):
    original = {}
    for key in overrides:
        original[key] = os.environ.get(key)
        os.environ[key] = str(overrides[key])
    try:
        yield
    finally:
        for key, value in original.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value

# Test scenario
print("Before:", os.getenv("DEBUG"))
with scoped_env(DEBUG="1", MAX_WORKERS="4"):
    print("Inside:", os.getenv("DEBUG"))
print("After:", os.getenv("DEBUG"))
```
Runtime type enforcement decorators catch errors early. I enforce contracts in data processing pipelines:
```python
from functools import wraps

def validate_types(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        annotations = func.__annotations__
        for name, value in zip(func.__code__.co_varnames, args):
            if name in annotations and not isinstance(value, annotations[name]):
                raise TypeError(f"{name} requires {annotations[name]}, got {type(value)}")
        return func(*args, **kwargs)
    return wrapper

@validate_types
def transform_data(inputs: list, coefficient: float) -> dict:
    return {k: v * coefficient for k, v in enumerate(inputs)}

# Valid call
transform_data([1, 2, 3], 1.5)
# Invalid call
transform_data("text", 0.5)  # Raises TypeError
```
Async context managers manage resources in concurrent code. This pattern is crucial for modern web applications:
```python
import aiohttp
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def managed_session(timeout=10):
    session = aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=timeout))
    try:
        yield session
    finally:
        await session.close()

async def fetch_multiple(urls):
    async with managed_session() as session:
        tasks = [session.get(url) for url in urls]
        responses = await asyncio.gather(*tasks)
        return [await r.text() for r in responses]

# Practical usage
urls = ["https://httpbin.org/get"] * 3
results = asyncio.run(fetch_multiple(urls))
print(f"Fetched {len(results)} responses")
```
These patterns demonstrate Python's flexibility in resource management and behavior modification. By encapsulating complex logic, they reduce boilerplate and prevent common errors. In my experience, mastering these techniques leads to more robust and maintainable systems. Each serves distinct purposes but shares a common goal: making complex operations simple and safe.