<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Brent Ochieng</title>
    <description>The latest articles on Forem by Brent Ochieng (@brentwash35).</description>
    <link>https://forem.com/brentwash35</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3050963%2F94a22650-af67-4754-abe2-72623d6f36d5.jpg</url>
      <title>Forem: Brent Ochieng</title>
      <link>https://forem.com/brentwash35</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/brentwash35"/>
    <language>en</language>
    <item>
      <title>Guys, what do you think about the AI bubble in 2026? Is there reason to be optimistic as a dev, data scientist, etc.? Honestly, people are just in denial about AI capabilities. I still hold onto the thought that the future is gloomy. What do you think?</title>
      <dc:creator>Brent Ochieng</dc:creator>
      <pubDate>Tue, 30 Dec 2025 16:16:31 +0000</pubDate>
      <link>https://forem.com/brentwash35/guys-what-do-you-think-about-the-ai-bubble-in-2026-is-there-reason-to-be-optimistic-as-a-dev-4bjj</link>
      <guid>https://forem.com/brentwash35/guys-what-do-you-think-about-the-ai-bubble-in-2026-is-there-reason-to-be-optimistic-as-a-dev-4bjj</guid>
      <description></description>
      <category>ai</category>
      <category>career</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Understanding Big O Notation in Python: A Practical Guide</title>
      <dc:creator>Brent Ochieng</dc:creator>
      <pubDate>Tue, 30 Dec 2025 15:29:18 +0000</pubDate>
      <link>https://forem.com/brentwash35/understanding-big-o-notation-in-python-a-practical-guide-28g1</link>
      <guid>https://forem.com/brentwash35/understanding-big-o-notation-in-python-a-practical-guide-28g1</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the world of software development, writing code that works is only half the battle. Writing code that works efficiently is what separates good developers from great ones. This is where Big O notation becomes invaluable. Whether you're building a startup's MVP, optimizing a data pipeline, or preparing for technical interviews, understanding algorithmic complexity is a fundamental skill that will serve you throughout your career.&lt;/p&gt;

&lt;p&gt;Big O notation provides a mathematical framework for analyzing how algorithms scale. It answers a critical question: "What happens to my code's performance when my data grows from 100 records to 100 million?" In this comprehensive guide, we'll explore Big O notation through the lens of Python, examining real-world scenarios where this knowledge makes the difference between an application that scales and one that collapses under load.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Big O Notation?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fgk8pjacnu51wgutvgd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fgk8pjacnu51wgutvgd.png" alt=" " width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends toward infinity. In computer science, we use it to classify algorithms according to how their run time or space requirements grow as the input size grows.&lt;br&gt;
The "O" stands for "order of" and represents the worst-case scenario of an algorithm's performance. When we say an algorithm is O(n), we mean that in the worst case, the time it takes grows linearly with the input size.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Key Characteristics of Big O&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Focus on Growth Rate, Not Exact Time&lt;/strong&gt;&lt;br&gt;
Big O doesn't tell you that an algorithm takes exactly 5 seconds or 10 milliseconds. Instead, it describes the shape of the growth curve. An O(n²) algorithm might be faster than an O(n) algorithm for small inputs, but as the input grows, the quadratic algorithm will eventually become slower.&lt;br&gt;
&lt;strong&gt;2. Worst-Case Analysis&lt;/strong&gt;&lt;br&gt;
Big O typically represents the worst-case scenario. This conservative approach ensures we're prepared for the most challenging conditions our code might face.&lt;br&gt;
&lt;strong&gt;3. Ignores Constants and Lower-Order Terms&lt;/strong&gt;&lt;br&gt;
An algorithm that takes 3n² + 5n + 100 operations is classified as O(n²) because, as n grows large, the n² term dominates. The constants (3, 5, 100) and the lower-order term (5n) become negligible.&lt;/p&gt;
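The dominance of the leading term can be checked numerically. A tiny sketch (the function name `ops` is purely illustrative of the 3n² + 5n + 100 count above):

```python
# Operation count from the example above: 3n^2 + 5n + 100
def ops(n):
    return 3 * n**2 + 5 * n + 100

# For an O(n^2) function, doubling n should roughly quadruple the count.
small_ratio = ops(20) / ops(10)          # skewed by the +5n and +100 terms
large_ratio = ops(200_000) / ops(100_000)  # very close to 4: n^2 dominates
```

At small n the constants still matter, but as n grows the ratio converges to 4, which is exactly what "O(n²)" promises.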
&lt;h2&gt;
  
  
  &lt;strong&gt;The Big O Hierarchy&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Understanding the hierarchy of Big O complexities helps you make informed decisions about algorithm selection:&lt;br&gt;
&lt;strong&gt;O(1) &amp;lt; O(log n) &amp;lt; O(n) &amp;lt; O(n log n) &amp;lt; O(n²) &amp;lt; O(2ⁿ) &amp;lt; O(n!)&lt;/strong&gt;&lt;br&gt;
From best (most efficient) to worst (least efficient), here's what each means:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;O(1)&lt;/strong&gt; - Constant: Executes in the same time regardless of input size&lt;br&gt;
&lt;strong&gt;O(log n)&lt;/strong&gt; - Logarithmic: Time grows logarithmically with input&lt;br&gt;
&lt;strong&gt;O(n)&lt;/strong&gt; - Linear: Time grows proportionally with input&lt;br&gt;
&lt;strong&gt;O(n log n)&lt;/strong&gt; - Linearithmic: Common in efficient sorting algorithms&lt;br&gt;
&lt;strong&gt;O(n²)&lt;/strong&gt; - Quadratic: Time grows quadratically (nested operations)&lt;br&gt;
&lt;strong&gt;O(2ⁿ)&lt;/strong&gt; - Exponential: Time doubles with each addition to input&lt;br&gt;
&lt;strong&gt;O(n!)&lt;/strong&gt; - Factorial: Extremely slow, only practical for tiny inputs&lt;/p&gt;
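To make the hierarchy concrete, here is a small sketch comparing approximate operation counts at n = 1,000 (the dictionary is just an illustration):

```python
import math

# Approximate work done by each complexity class at n = 1,000.
n = 1000
counts = {
    "O(1)": 1,
    "O(log n)": math.log2(n),        # about 10
    "O(n)": n,                       # 1,000
    "O(n log n)": n * math.log2(n),  # about 10,000
    "O(n^2)": n ** 2,                # 1,000,000
}
# O(2^n) and O(n!) are omitted: at n = 1,000 they exceed any practical budget.
```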
&lt;h3&gt;
  
  
  &lt;strong&gt;Big O in Python: Common Operations&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Python's built-in data structures have specific time complexities. Understanding these is crucial for writing efficient code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lists&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# O(1) - Constant Time Operations
my_list = [1, 2, 3, 4, 5]
my_list.append(6)           # Adding to end
element = my_list[2]        # Index access
my_list[-1] = 10           # Index assignment

# O(n) - Linear Time Operations
my_list.insert(0, 0)        # Insert at beginning (shifts all elements)
my_list.remove(3)           # Remove by value (searches then shifts)
5 in my_list                # Membership check
my_list.pop(0)              # Pop from beginning

# O(k) where k is the slice size
subset = my_list[1:4]       # Slicing creates new list

# O(n) for iteration
for item in my_list:
    print(item)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Dictionaries&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# O(1) Average Case - Hash Table Operations
my_dict = {'name': 'Alice', 'age': 30}
my_dict['email'] = 'alice@example.com'  # Assignment
age = my_dict['age']                      # Lookup
del my_dict['age']                        # Deletion
'name' in my_dict                         # Membership check

# O(n) - Must iterate all items
for key, value in my_dict.items():
    print(key, value)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sets&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# O(1) Average Case
my_set = {1, 2, 3, 4, 5}
my_set.add(6)               # Add element
my_set.remove(3)            # Remove element
5 in my_set                 # Membership check

# O(min(len(set1), len(set2))) - proportional to the smaller set
set1 = {1, 2, 3}
set2 = {3, 4, 5}
intersection = set1 &amp;amp; set2  # Intersection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Real-World Examples&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let's explore scenarios where understanding Big O notation makes a tangible difference in application performance.&lt;br&gt;
&lt;strong&gt;Example 1: Social Media Feed - Finding Duplicates&lt;/strong&gt;&lt;br&gt;
Imagine you're building a social media application where users can post updates. You need to detect duplicate posts to prevent spam.&lt;br&gt;
&lt;strong&gt;Naive Approach - O(n²):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def find_duplicates_naive(posts):
    """
    Check every post against every other post.
    Time Complexity: O(n²)
    Space Complexity: O(1)
    """
    duplicates = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if posts[i] == posts[j]:
                duplicates.append(posts[i])
    return duplicates

# With 1,000 posts: ~500,000 comparisons
# With 10,000 posts: ~50,000,000 comparisons
# With 100,000 posts: ~5,000,000,000 comparisons
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Optimized Approach - O(n):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def find_duplicates_optimized(posts):
    """
    Use a set to track seen posts.
    Time Complexity: O(n)
    Space Complexity: O(n)
    """
    seen = set()
    duplicates = []
    for post in posts:
        if post in seen:
            duplicates.append(post)
        else:
            seen.add(post)
    return duplicates

# With 1,000 posts: 1,000 operations
# With 10,000 posts: 10,000 operations
# With 100,000 posts: 100,000 operations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Real-World Impact:&lt;/strong&gt; For 100,000 posts, the naive approach performs 5 billion comparisons, while the optimized approach performs just 100,000 operations—a 50,000x improvement! This could be the difference between a response time of milliseconds versus several minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 2: E-Commerce Search - Product Lookup&lt;/strong&gt;&lt;br&gt;
You're building an e-commerce platform with millions of products. Users need to find products quickly.&lt;br&gt;
&lt;strong&gt;Linear Search - O(n):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def find_product_linear(products, target_id):
    """
    Search through products sequentially.
    Time Complexity: O(n)
    """
    for product in products:
        if product['id'] == target_id:
            return product
    return None

# Average case: check half the products
# With 1 million products: ~500,000 checks on average
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Binary Search - O(log n):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def find_product_binary(products, target_id):
    """
    Binary search on sorted product list.
    Time Complexity: O(log n)
    Prerequisite: products must be sorted by id
    """
    left, right = 0, len(products) - 1

    while left &amp;lt;= right:
        mid = (left + right) // 2
        if products[mid]['id'] == target_id:
            return products[mid]
        elif products[mid]['id'] &amp;lt; target_id:
            left = mid + 1
        else:
            right = mid - 1

    return None

# With 1 million products: ~20 checks maximum
# With 1 billion products: ~30 checks maximum
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Hash Table Lookup - O(1):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_product_index(products):
    """
    Create a hash table for instant lookups.
    Time Complexity: O(n) to build, O(1) to lookup
    Space Complexity: O(n)
    """
    return {product['id']: product for product in products}

def find_product_hash(product_index, target_id):
    """
    Look up product in hash table.
    Time Complexity: O(1)
    """
    return product_index.get(target_id)

# With any number of products: 1 check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Real-World Impact: For a database of 1 million products:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Linear search: ~500,000 operations on average&lt;/li&gt;
&lt;li&gt;Binary search: ~20 operations maximum&lt;/li&gt;
&lt;li&gt;Hash table: 1 operation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This explains why databases use indexes (essentially hash tables and B-trees) rather than scanning entire tables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 3: Netflix Recommendations - Comparing User Preferences&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You're building a recommendation system that needs to compare a user's watch history with millions of other users to find similar tastes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nested Loop Approach - O(n × m²):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def find_similar_users_naive(target_user, all_users):
    """
    Compare target user with every other user's entire history.
    Time Complexity: O(n × m²) where n is number of users,
                     m is average watch history length
    """
    similarities = []
    target_history = set(target_user['watched'])

    for user in all_users:
        if user['id'] == target_user['id']:
            continue

        user_history = user['watched']
        common_shows = 0

        # O(m²) comparison for each pair
        for show1 in target_history:
            for show2 in user_history:
                if show1 == show2:
                    common_shows += 1

        similarities.append({
            'user_id': user['id'],
            'common_shows': common_shows
        })

    return sorted(similarities, key=lambda x: x['common_shows'], reverse=True)

# With 10,000 users and 100 shows each: ~100 million comparisons
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Set Intersection Approach - O(n × m):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def find_similar_users_optimized(target_user, all_users):
    """
    Use set intersection for efficient comparison.
    Time Complexity: O(n × m) where n is number of users,
                     m is average watch history length
    """
    similarities = []
    target_history = set(target_user['watched'])

    for user in all_users:
        if user['id'] == target_user['id']:
            continue

        user_history = set(user['watched'])
        # O(m) set intersection
        common_shows = len(target_history &amp;amp; user_history)

        similarities.append({
            'user_id': user['id'],
            'common_shows': common_shows
        })

    return sorted(similarities, key=lambda x: x['common_shows'], reverse=True)

# With 10,000 users and 100 shows each: ~1 million operations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Real-World Impact:&lt;/strong&gt; The optimized version is roughly 100x faster here: the O(m²) pairwise comparison for each user collapses into an O(m) set intersection. This is why recommendation systems use sophisticated algorithms and data structures—the naive approach becomes computationally infeasible at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example 4: Data Processing Pipeline - Log Analysis&lt;/strong&gt;&lt;br&gt;
You're analyzing server logs to identify suspicious activity patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reading Entire File - O(n):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def analyze_logs_in_memory(log_file):
    """
    Load entire log file into memory.
    Time Complexity: O(n)
    Space Complexity: O(n) - stores entire file in memory
    """
    with open(log_file, 'r') as f:
        logs = f.readlines()  # Loads everything into memory

    suspicious_ips = set()
    ip_counts = {}

    for log in logs:
        ip = extract_ip(log)
        ip_counts[ip] = ip_counts.get(ip, 0) + 1

        if ip_counts[ip] &amp;gt; 1000:  # Suspicious threshold
            suspicious_ips.add(ip)

    return suspicious_ips

# Problem: A 10GB log file requires 10GB of RAM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Streaming Approach - O(n) with O(k) Space:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def analyze_logs_streaming(log_file):
    """
    Process logs line by line without loading everything.
    Time Complexity: O(n)
    Space Complexity: O(k) where k is number of unique IPs
    """
    suspicious_ips = set()
    ip_counts = {}

    with open(log_file, 'r') as f:
        for line in f:  # Generator - one line at a time
            ip = extract_ip(line)
            ip_counts[ip] = ip_counts.get(ip, 0) + 1

            if ip_counts[ip] &amp;gt; 1000:
                suspicious_ips.add(ip)

    return suspicious_ips

# A 10GB log file needs only enough RAM for unique IP tracking
# Typically a few MB instead of 10GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Real-World Impact:&lt;/strong&gt; The streaming approach allows you to process logs that are larger than your available RAM, making it possible to analyze terabytes of data on modest hardware.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;When Big O Doesn't Tell the Whole Story&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;While Big O notation is incredibly useful, it has limitations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Constants Matter for Small Inputs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Algorithm A: O(n) with constant factor of 1000
def algorithm_a(data):
    result = 0
    for _ in range(1000):
        for item in data:
            result += item
    return result

# Algorithm B: O(n²) with constant factor of 1
def algorithm_b(data):
    result = 0
    for i in data:
        for j in data:
            result += i * j
    return result

# For small n (here, below roughly 1000), algorithm_b is actually faster!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Average Case vs. Worst Case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python's list.sort() uses Timsort, which is O(n log n) in the worst case but often performs closer to O(n) on partially sorted data—a common real-world scenario.&lt;/p&gt;
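A quick way to observe Timsort's adaptivity on partially sorted data (the list size and timing scaffolding are purely illustrative):

```python
import random
import time

# Timsort (used by sorted() and list.sort()) exploits existing runs of order,
# so a nearly-sorted list typically sorts much faster than a shuffled one.
n = 200_000
nearly_sorted = list(range(n))
nearly_sorted[0], nearly_sorted[-1] = nearly_sorted[-1], nearly_sorted[0]  # only two elements out of place
shuffled = list(range(n))
random.shuffle(shuffled)

start = time.perf_counter()
result_a = sorted(nearly_sorted)
nearly_time = time.perf_counter() - start

start = time.perf_counter()
result_b = sorted(shuffled)
shuffled_time = time.perf_counter() - start
# nearly_time is typically a small fraction of shuffled_time
```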

&lt;p&gt;&lt;strong&gt;3. Space-Time Tradeoffs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes you can trade memory for speed or vice versa:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Time-optimized: O(1) lookup, O(n) space
def fibonacci_memoized(n, memo={}):
    if n in memo:
        return memo[n]
    if n &amp;lt;= 1:
        return n
    memo[n] = fibonacci_memoized(n-1, memo) + fibonacci_memoized(n-2, memo)
    return memo[n]

# Space-optimized: O(n) time, O(1) space
def fibonacci_iterative(n):
    if n &amp;lt;= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Practical Guidelines for Python Developers&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Choose the Right Data Structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Need fast lookups? Use dict or set (O(1) average) instead of list (O(n))&lt;br&gt;
Need ordered data with fast insertions at both ends? Use collections.deque instead of list&lt;br&gt;
Need to maintain sorted data? Use bisect module for O(log n) insertions&lt;/p&gt;
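A minimal sketch of the last two suggestions, using the standard-library collections.deque and bisect modules:

```python
from collections import deque
import bisect

# O(1) appends and pops at BOTH ends with deque (list.pop(0) is O(n)):
queue = deque([1, 2, 3])
queue.appendleft(0)  # O(1); list.insert(0, ...) would shift every element
queue.pop()          # O(1) removal from the right end

# Maintain sorted order with an O(log n) binary search for the position
# (note: insort's insertion itself still shifts elements, so it is O(n) overall):
scores = [10, 30, 50]
bisect.insort(scores, 40)             # scores becomes [10, 30, 40, 50]
idx = bisect.bisect_left(scores, 40)  # O(log n) search, idx is 2
```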

&lt;p&gt;&lt;strong&gt;2. Avoid Nested Loops When Possible&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Bad: O(n²)
for item in list1:
    if item in list2:  # O(n) lookup
        print(item)

# Good: O(n)
set2 = set(list2)  # O(n) to create set
for item in list1:  # O(n) iteration
    if item in set2:  # O(1) lookup
        print(item)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Use Built-in Functions and Libraries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python's built-in functions are implemented in C and highly optimized:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Slower
max_value = float('-inf')
for value in data:
    if value &amp;gt; max_value:
        max_value = value

# Faster
max_value = max(data)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Profile Before Optimizing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use Python's profiling tools to identify actual bottlenecks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import cProfile
import pstats

def my_function():
    # Your code here
    pass

profiler = cProfile.Profile()
profiler.enable()
my_function()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats('cumulative')
stats.print_stats(10)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion: The Relevance of Big O in Modern Development&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Understanding Big O notation is not about memorizing complexities—it's about developing an intuition for how code scales. In today's world where applications routinely handle millions of users and petabytes of data, this intuition is invaluable.&lt;/p&gt;

&lt;p&gt;Consider these real-world scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Startups: Your MVP works with 100 users. Will it survive 10,000? Understanding Big O helps you avoid architectural decisions that require complete rewrites.&lt;/li&gt;
&lt;li&gt;Data Science: Processing a sample dataset takes 10 seconds. Your production dataset is 1000x larger. Big O tells you if you need 10,000 seconds (2.7 hours) or 10,000,000 seconds (115 days).&lt;/li&gt;
&lt;li&gt;API Development: Your endpoint responds in 50ms. What happens when traffic doubles? Triples? Understanding algorithmic complexity helps you predict and plan.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>python</category>
      <category>performance</category>
    </item>
    <item>
      <title>Understanding *args and **kwargs in Python, Explained in Plain English</title>
      <dc:creator>Brent Ochieng</dc:creator>
      <pubDate>Fri, 05 Dec 2025 08:35:38 +0000</pubDate>
      <link>https://forem.com/brentwash35/understanding-args-and-kwargs-in-python-explained-in-plain-english-1cel</link>
      <guid>https://forem.com/brentwash35/understanding-args-and-kwargs-in-python-explained-in-plain-english-1cel</guid>
      <description>&lt;p&gt;If you’ve ever written a Python function and wondered how to make it accept any number of inputs, you’re not alone. This is where two strangely named but incredibly powerful tools come in:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;*args&lt;br&gt;
**kwargs&lt;/strong&gt;&lt;br&gt;
Think of them as flexible containers that let your functions handle unpredictable or unlimited inputs — without breaking.&lt;/p&gt;

&lt;p&gt;In this article, we’ll break them down using real-life examples, simple explanations, and copy-paste code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. What Is *args?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;*args lets your function accept any number of positional arguments, even zero.&lt;/p&gt;

&lt;p&gt;Think of it like this:&lt;/p&gt;

&lt;p&gt;Imagine you’re at a supermarket checkout.&lt;br&gt;
Some customers bring 2 items, some bring 20.&lt;br&gt;
Instead of creating a different till for each customer, the cashier accepts any number of items.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;*args&lt;/strong&gt; does exactly that for functions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Adding Any Number of Numbers&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def add_numbers(*args):
    return sum(args)

print(add_numbers(2, 3))           # 5
print(add_numbers(10, 20, 30))     # 60
print(add_numbers())               # 0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whatever you pass into the function is packed into a &lt;strong&gt;tuple&lt;/strong&gt; called &lt;strong&gt;args&lt;/strong&gt;.&lt;/p&gt;
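You can verify the packing yourself; a tiny sketch (the function name show_args is made up for illustration):

```python
# *args collects all positional arguments into a single tuple.
def show_args(*args):
    return type(args).__name__, args

kind, values = show_args(1, 'two', 3.0)
# kind is 'tuple', values is (1, 'two', 3.0)
```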

&lt;p&gt;&lt;strong&gt;2. What Is **kwargs?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;**kwargs&lt;/strong&gt; lets your function accept any number of named arguments — things passed as key=value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-life comparison:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Think of filling out a hotel registration form.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every guest gives different details:&lt;/p&gt;

&lt;p&gt;Name only&lt;br&gt;
Name + email&lt;br&gt;
Name + phone + nationality&lt;br&gt;
The hotel needs one form flexible enough to accept anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;**kwargs&lt;/strong&gt; works exactly like that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Guest Registration&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def register_guest(**kwargs):
    return kwargs

print(register_guest(name="Brent", room=305))
# {'name': 'Brent', 'room': 305}

print(register_guest(name="Alice", email="alice@email.com", nights=3))
# {'name': 'Alice', 'email': 'alice@email.com', 'nights': 3}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything goes into a dictionary called &lt;strong&gt;kwargs.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Using Both Together&lt;/strong&gt;&lt;br&gt;
You can combine them in one function, for instance when taking a pizza order:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def order_pizza(*toppings, **details):
    print("Toppings:", toppings)
    print("Details:", details)

order_pizza('cheese', 'beef', size='large', crust='thin')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OUTPUT:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Toppings: ('cheese', 'beef')
Details: {'size': 'large', 'crust': 'thin'}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Why *args and **kwargs Matter in Real Projects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s why developers (and data analysts) rely on them:&lt;/p&gt;

&lt;p&gt;✔ They make functions flexible&lt;br&gt;
✔ They help you build reusable code&lt;br&gt;
✔ They simplify APIs and utilities&lt;br&gt;
✔ They clean up long function signatures&lt;br&gt;
✔ They allow you to pass through unknown parameters&lt;/p&gt;

&lt;p&gt;In short:&lt;br&gt;
They help your code adapt instead of break.&lt;/p&gt;

&lt;p&gt;One-sentence summary:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifjph1wc93afl7bjzevl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifjph1wc93afl7bjzevl.png" alt=" " width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;*args&lt;/strong&gt; is a bag of unnamed values, and &lt;strong&gt;**kwargs&lt;/strong&gt; is a dictionary of named values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;*args&lt;/strong&gt; and &lt;strong&gt;**kwargs&lt;/strong&gt; are not advanced concepts but rather practical tools that make your functions smarter and more flexible.&lt;br&gt;
Mastering them early will save you hours of debugging in the future.&lt;/p&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>django</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Understanding Common Table Expressions (CTEs) in SQL — And Why They Matter in Data Analysis</title>
      <dc:creator>Brent Ochieng</dc:creator>
      <pubDate>Fri, 21 Nov 2025 09:15:43 +0000</pubDate>
      <link>https://forem.com/brentwash35/understanding-common-table-expressions-ctes-in-sql-and-why-they-matter-in-data-analysis-2j2i</link>
      <guid>https://forem.com/brentwash35/understanding-common-table-expressions-ctes-in-sql-and-why-they-matter-in-data-analysis-2j2i</guid>
      <description>&lt;p&gt;If you’ve spent enough time writing SQL queries, you’ve probably found yourself staring at a long, repetitive, hard-to-read block of code. Maybe you copied the same subquery three times. Maybe you nested queries inside queries until your brain begged for mercy.&lt;/p&gt;

&lt;p&gt;This is where Common Table Expressions, better known as CTEs, come in. They offer a simple way to write cleaner, more readable, and more maintainable SQL. And for data analysts, learning CTEs is one of those skills that instantly upgrades the quality of your work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13cfp1r776cwl6shpiab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13cfp1r776cwl6shpiab.png" alt=" " width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Exactly Is a CTE?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Common Table Expression (CTE)&lt;/strong&gt; is a temporary, named result set that exists only for the duration of a single SQL statement.&lt;/p&gt;

&lt;p&gt;Think of it like creating a mini-table, just for a moment, to help you break down a query into logical steps.&lt;/p&gt;

&lt;p&gt;It usually starts with the keyword:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WITH

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WITH top_customers AS (
    SELECT customer_id, SUM(amount) AS total_spent
    FROM payments
    GROUP BY customer_id
)
SELECT *
FROM top_customers
WHERE total_spent &amp;gt; 1000;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, top_customers is not a permanent table. It’s just a temporary result that makes the final query easier to manage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Use a CTE?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;CTEs shine in three major areas:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Readability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A long query becomes easier to understand when broken down into smaller steps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Reusability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can reference the CTE multiple times within the same query without repeating code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Maintainability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you need to modify something, you change it once inside the CTE instead of updating multiple subqueries.&lt;/p&gt;

&lt;p&gt;For analysts who read and maintain SQL regularly, these benefits are huge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How CTEs Are Used&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can use a CTE for almost anything, but here are the most common use cases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Simplifying Complex Queries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes, your logic requires several transformations. With CTEs, you can approach SQL like a workflow: Step 1, Step 2, Step 3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WITH cleaned_data AS (
    SELECT 
        order_id,
        customer_id,
        amount,
        DATE(order_date) AS order_date
    FROM orders
),
aggregated AS (
    SELECT 
        customer_id, 
        SUM(amount) AS total_spent
    FROM cleaned_data
    GROUP BY customer_id
)
SELECT *
FROM aggregated
ORDER BY total_spent DESC;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each section becomes easier to explain and debug.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Avoiding Repeated Subqueries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without a CTE, you might write the same subquery multiple times.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WITH avg_sales AS (
    SELECT AVG(amount) AS avg_amount
    FROM sales
)
SELECT *
FROM sales
WHERE amount &amp;gt; (SELECT avg_amount FROM avg_sales);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of repeating the same calculation in several places, you define it once in the CTE and reference it by name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Creating Recursive Queries (Hierarchies)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is one of the most powerful SQL features: recursive CTEs.&lt;/p&gt;

&lt;p&gt;They're used for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Employee-management hierarchies&lt;/li&gt;
&lt;li&gt;Folder/subfolder structures&lt;/li&gt;
&lt;li&gt;Generating sequences or running totals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WITH RECURSIVE hierarchy AS (
    SELECT 
        employee_id,
        manager_id,
        1 AS level
    FROM employees
    WHERE manager_id IS NULL

    UNION ALL

    SELECT 
        e.employee_id,
        e.manager_id,
        h.level + 1
    FROM employees e
    JOIN hierarchy h ON e.manager_id = h.employee_id
)
SELECT *
FROM hierarchy
ORDER BY level;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This walks through an entire organizational structure using recursion.&lt;/p&gt;
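&lt;p&gt;Recursive CTEs need no source table at all. As a quick sketch, here is sequence generation run through Python’s built-in sqlite3 module, which supports WITH RECURSIVE:&lt;/p&gt;

```python
import sqlite3

con = sqlite3.connect(":memory:")
seq = con.execute("""
WITH RECURSIVE numbers(n) AS (
    SELECT 1          -- anchor member: the first row
    UNION ALL
    SELECT n + 1      -- recursive member: builds on the previous row
    FROM numbers
    WHERE n < 5       -- termination condition
)
SELECT n FROM numbers;
""").fetchall()
print([n for (n,) in seq])  # the sequence 1 through 5
```

&lt;p&gt;The same anchor/recursive-member shape drives the hierarchy query above; only the tables change.&lt;/p&gt;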

&lt;p&gt;&lt;strong&gt;4. Preparing Data for Visualization or Reporting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most dashboards and reports need data that has been:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cleaned&lt;/li&gt;
&lt;li&gt;Aggregated&lt;/li&gt;
&lt;li&gt;Filtered&lt;/li&gt;
&lt;li&gt;Joined&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CTEs help create step-by-step logic, which makes transformations clear and transparent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When Should You Use a CTE?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;CTEs are ideal when:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your SQL query is becoming long or unreadable&lt;/li&gt;
&lt;li&gt;You need to reuse the same result in multiple places&lt;/li&gt;
&lt;li&gt;You want to logically break the query into stages&lt;/li&gt;
&lt;li&gt;You’re calculating something step-by-step&lt;/li&gt;
&lt;li&gt;You need recursion&lt;/li&gt;
&lt;li&gt;You want to make your SQL explainable to teammates, managers, or future you&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They may not be the best choice when performance is critical: some databases materialize CTEs while others inline them, and that difference can matter on large queries. For most day-to-day analytical work, though, CTEs perform perfectly well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why CTEs Are Important in Data Analysis&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In data analysis, clarity is everything. It’s not enough to get the right answer—your SQL needs to be readable, reproducible, and trustworthy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CTEs help analysts:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✔ Build cleaner data pipelines&lt;br&gt;
✔ Avoid errors from duplicated logic&lt;br&gt;
✔ Explain logic clearly to stakeholders&lt;br&gt;
✔ Work iteratively and transform data step by step&lt;br&gt;
✔ Write analysis queries that can easily be revisited later&lt;/p&gt;

&lt;p&gt;SQL can be intimidating when queries become too complex. CTEs allow you to think in layers, similar to dataframes in Python or steps in Power BI.&lt;/p&gt;

&lt;p&gt;This makes your analysis process more structured and your results more reliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Learning CTEs is one of the fastest ways to become a better SQL writer. They make your queries more organized, more readable, and far easier to debug.&lt;/p&gt;

&lt;p&gt;Whether you’re analyzing sales performance, building dashboards, or preparing datasets for machine learning, CTEs give you the clarity and structure you need to work efficiently.&lt;/p&gt;

&lt;p&gt;If you’re moving into professional data work, mastering CTEs isn’t optional—it’s essential.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>database</category>
      <category>datascience</category>
      <category>sqlserver</category>
    </item>
    <item>
      <title>10 Reasons Why Python is Still the Best Programming Language in 2025 (and Beyond)</title>
      <dc:creator>Brent Ochieng</dc:creator>
      <pubDate>Tue, 30 Sep 2025 06:59:53 +0000</pubDate>
      <link>https://forem.com/brentwash35/10-reasons-why-python-is-still-the-best-programming-language-in-2025-and-beyond-442d</link>
      <guid>https://forem.com/brentwash35/10-reasons-why-python-is-still-the-best-programming-language-in-2025-and-beyond-442d</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoa8ys8puk1w1mrfrutt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoa8ys8puk1w1mrfrutt.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Python has been around for decades, yet in 2025 it still wears the crown as the most beloved programming language. Whether you’re a beginner in Nairobi learning to code for the first time or a senior engineer working on advanced AI models in Tokyo, Python continues to be the language everyone talks about. Let’s dive into ten clear reasons why Python remains the GOAT programming language today and why it’s set to dominate the future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Python powers Artificial Intelligence and Machine Learning&lt;/strong&gt;&lt;br&gt;
AI is everywhere right now, and Python is the backbone. From training deep learning models to building chatbots and recommendation systems, Python frameworks like TensorFlow, PyTorch, and Scikit-learn are leading the charge. In Africa, where AI is being applied to solve problems in agriculture, fintech, and healthcare, Python is the language behind these innovations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz8egfa8mg801sma886w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhz8egfa8mg801sma886w.png" alt=" " width="474" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Python is the language of Data Science&lt;/strong&gt;&lt;br&gt;
Data is the new oil, and Python is the refinery. With tools like Pandas, NumPy, and Matplotlib, data analysts can clean, process, and visualize datasets effortlessly. For startups in Africa that rely on insights to make smarter decisions, Python provides the foundation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Python is beginner-friendly&lt;/strong&gt;&lt;br&gt;
Unlike many programming languages that feel intimidating, Python reads almost like English. Its simplicity means a university student in Kenya can pick it up fast, yet still use it to build powerful systems. That gentle learning curve is one of the reasons why it continues to attract new developers every year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Python has a massive global community&lt;/strong&gt;&lt;br&gt;
When you get stuck, chances are someone else has already solved your problem. Python has one of the largest developer communities in the world, with countless tutorials, forums, and conferences like PyCon. This community support ensures that the language stays alive and keeps improving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Python works across multiple industries&lt;/strong&gt;&lt;br&gt;
Python is not locked to one domain. It’s in finance, education, gaming, manufacturing, medicine, and agriculture. In Kenya, fintech companies are using it to build mobile banking apps, while agritech startups use it to analyze farm data. This cross-industry adoption makes Python extremely versatile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Python still matters in Web Development&lt;/strong&gt;&lt;br&gt;
Even though JavaScript dominates front-end development, Python quietly powers the back end. Frameworks like Django and Flask make it easy to build scalable and secure applications. Many African startups use Django to launch their first products because it allows them to move quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5fcv6vbwo4ae3yo8ffr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5fcv6vbwo4ae3yo8ffr.png" alt=" " width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Python integrates with almost everything&lt;/strong&gt;&lt;br&gt;
Python is often called a “glue language” because it works well with other technologies. You can combine it with C, C++, Java, SQL, and even IoT hardware. For developers working on complex systems that require multiple technologies, Python is the bridge that brings everything together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Python makes automation effortless&lt;/strong&gt;&lt;br&gt;
If you’ve ever wanted to save time by automating boring tasks, Python is your best friend. From web scraping and report generation to managing servers and DevOps pipelines, Python scripts cut down hours of manual work into minutes. That’s productivity at its best.&lt;/p&gt;
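&lt;p&gt;As one small, hypothetical taste of that, a few lines of standard-library Python can turn a raw CSV export into a per-region summary, a task that would otherwise mean manual spreadsheet work (the file contents here are made up):&lt;/p&gt;

```python
import csv
import io
from collections import defaultdict

# io.StringIO stands in for a real exported file such as sales.csv.
raw = io.StringIO("region,amount\nNairobi,120\nMombasa,80\nNairobi,60\n")

# Sum the amount column per region.
totals = defaultdict(float)
for row in csv.DictReader(raw):
    totals[row["region"]] += float(row["amount"])

for region, total in sorted(totals.items()):
    print(f"{region}: {total:.0f}")
```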

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68jub0mzt186pe569rpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68jub0mzt186pe569rpx.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;br&gt;
source: &lt;a href="https://www.bacancytechnology.com/blog/python-for-automation" rel="noopener noreferrer"&gt;Bacancy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Python dominates in education and research&lt;/strong&gt;&lt;br&gt;
Most universities and coding bootcamps around the world, including here in Africa, are teaching Python as the first programming language. It’s simple enough for beginners, yet powerful enough for advanced research in AI, blockchain, and quantum computing. This ensures that the next generation of developers already speaks Python fluently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Python’s ecosystem is future-proof&lt;/strong&gt;&lt;br&gt;
The ecosystem keeps growing every year. Whatever problem you’re trying to solve, such as cloud computing, blockchain, natural language processing, or even game development, there’s probably a Python library for it. With big tech companies like Google, Microsoft, and Meta backing it, Python is here to stay.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkkl90rhb3toblqq438n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkkl90rhb3toblqq438n.png" alt=" " width="800" height="599"&gt;&lt;/a&gt;&lt;br&gt;
source: &lt;a href="https://tristansalles.github.io/EnviReef/3-scientific/pythonstack.html" rel="noopener noreferrer"&gt;Tristan Salles&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Python in 2025 is more than just a programming language. It’s a tool for solving real-world problems, a gateway for beginners to enter tech, and a reliable partner for professionals working on cutting-edge projects. For developers across Africa and beyond, Python represents opportunity, simplicity, and limitless potential. That’s why it’s still the GOAT and likely will be for years to come.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Will Humans Ever Live to 100? A Data-Driven Journey Through Life Expectancy</title>
      <dc:creator>Brent Ochieng</dc:creator>
      <pubDate>Tue, 16 Sep 2025 09:48:23 +0000</pubDate>
      <link>https://forem.com/brentwash35/will-humans-ever-live-to-100-a-data-driven-journey-through-life-expectancy-3ip</link>
      <guid>https://forem.com/brentwash35/will-humans-ever-live-to-100-a-data-driven-journey-through-life-expectancy-3ip</guid>
      <description>&lt;p&gt;When I began exploring global life expectancy data from 1960 to 2023, I wasn’t just curious about numbers; I wanted to answer a deeper question:&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Are we moving toward a future where living to 100 becomes the norm?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As a Kenyan data analyst, I’ve seen firsthand how conversations about health often focus on the here and now: outbreaks, access to hospitals, and the affordability of treatment. But when I pulled 63 years of data into a Jupyter Notebook and began visualizing the trends, something fascinating emerged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What the Data Reveals&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmhpimy7ax059r45sjbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmhpimy7ax059r45sjbg.png" alt=" " width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsr6j9yh8pkcqbl5mn9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsr6j9yh8pkcqbl5mn9g.png" alt=" " width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Globally:&lt;/strong&gt; Life expectancy has risen by more than 15 years since 1960. That’s a whole extra childhood added to the average life.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Europe:&lt;/strong&gt; Life expectancy jumped from 68 years (1960) to over 80 today — and Prophet forecasts suggest 82–84 years by 2049.&lt;/p&gt;
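&lt;p&gt;The forecasts in this project come from Prophet, but the underlying idea can be sketched in a few lines: fit a trend to the historical series and extend it forward. The numbers below are synthetic and purely illustrative, not the World Bank data used in the analysis.&lt;/p&gt;

```python
# Synthetic, perfectly linear series standing in for a life-expectancy
# trend; the real analysis fits Prophet to World Bank data instead.
years = list(range(1960, 2024))
life_exp = [68.0 + 0.19 * (y - 1960) for y in years]

# Ordinary least-squares fit of a straight line, by hand.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(life_exp) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(years, life_exp))
    / sum((x - mean_x) ** 2 for x in years)
)
intercept = mean_y - slope * mean_x

# Extend the fitted line to 2049.
forecast_2049 = slope * 2049 + intercept
print(round(forecast_2049, 2))
```

&lt;p&gt;Prophet adds changepoints and uncertainty intervals on top of this basic extrapolation, which is why its projections are more trustworthy than a bare straight line.&lt;/p&gt;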

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvoeuk62nr9kaw23oen0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvoeuk62nr9kaw23oen0a.png" alt=" " width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy2m9cvcbqq1igrstg3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy2m9cvcbqq1igrstg3a.png" alt=" " width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;America:&lt;/strong&gt; The steady climb was suddenly interrupted in 2020–21, a powerful reminder of how global crises can undo years of progress almost overnight.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvw4tq1kqb5mxfczgi9s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvw4tq1kqb5mxfczgi9s.png" alt=" " width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6x8449c0vv41ekw00ye7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6x8449c0vv41ekw00ye7.png" alt=" " width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;East Africa:&lt;/strong&gt; My favorite part of this project is seeing that despite decades of challenges, our region is catching up, with steady gains year after year.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fel2u1kj5ecpghej1egqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fel2u1kj5ecpghej1egqc.png" alt=" " width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5mcegorygif555y7xor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5mcegorygif555y7xor.png" alt=" " width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I plotted East Africa’s numbers, it felt personal. It told a story of resilience — of how healthcare campaigns, better nutrition, and expanded access to medicine have changed what it means to grow old here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modern Medicine’s Silent Revolution&lt;/strong&gt;&lt;br&gt;
Modern medicine has completely redefined what it means to age. Vaccinations, antibiotics, improved maternal care, and chronic disease management have turned once-fatal illnesses into treatable conditions.&lt;/p&gt;

&lt;p&gt;When we talk about life expectancy increasing, we’re really talking about millions of lives saved. For instance, children living past their fifth birthday, mothers surviving childbirth, and elders living long enough to meet their great-grandchildren.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;But the data also reminds us:&lt;/em&gt;&lt;/strong&gt; progress is not guaranteed. The pandemic dip in 2020 is a warning that gains can be fragile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Will We Get to 100?&lt;/strong&gt;&lt;br&gt;
The projections show we are inching closer, but slowly. If Europe reaches 84 years by 2049, that’s still 16 years away from 100. To cross that line, we will need more than just medical innovation. We’ll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preventive healthcare that keeps people healthier for longer&lt;/li&gt;
&lt;li&gt;Equitable access to treatments across countries and income levels&lt;/li&gt;
&lt;li&gt;Social systems that support ageing populations without leaving anyone behind&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What This Project Taught Me&lt;/strong&gt;&lt;br&gt;
As analysts, we often focus on the code, the models, the plots. But data like this reminds us that behind every line on a chart is a story about humanity.&lt;/p&gt;

&lt;p&gt;For me, this was more than an analysis. It was a chance to reflect on how far we’ve come, and what it will take to ensure that a child born in Nairobi today might one day celebrate their 100th birthday not as an exception, but as part of the new normal.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;You can explore the full analysis, code, and forecasts here:&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
&lt;a href="https://github.com/BrentOchieng/Global-Life-Expectancy-Trend-1960-2049" rel="noopener noreferrer"&gt;https://github.com/BrentOchieng/Global-Life-Expectancy-Trend-1960-2049&lt;/a&gt;&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>datascience</category>
      <category>python</category>
    </item>
  </channel>
</rss>
