<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Maciej Łopalewski</title>
    <description>The latest articles on Forem by Maciej Łopalewski (@u11d-maciej-lopalew).</description>
    <link>https://forem.com/u11d-maciej-lopalew</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3291065%2Fb3fdcdd9-97e0-47ff-9856-5781e89d6aa9.png</url>
      <title>Forem: Maciej Łopalewski</title>
      <link>https://forem.com/u11d-maciej-lopalew</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/u11d-maciej-lopalew"/>
    <language>en</language>
    <item>
      <title>AWS CloudFront Explained: How Cache, Origin, and Response Policies Supercharge Your CDN</title>
      <dc:creator>Maciej Łopalewski</dc:creator>
      <pubDate>Wed, 21 Jan 2026 09:00:00 +0000</pubDate>
      <link>https://forem.com/u11d/aws-cloudfront-explained-how-cache-origin-and-response-policies-supercharge-your-cdn-3l11</link>
      <guid>https://forem.com/u11d/aws-cloudfront-explained-how-cache-origin-and-response-policies-supercharge-your-cdn-3l11</guid>
      <description>&lt;p&gt;If you have configured Amazon CloudFront in the past, you might remember wrestling with "Cache Behaviors" - a monolithic setting where caching logic, origin forwarding, and header manipulation were all jumbled together.&lt;/p&gt;

&lt;p&gt;Those days are over.&lt;/p&gt;

&lt;p&gt;Modern CloudFront architecture uses a modular &lt;strong&gt;Policy System&lt;/strong&gt;. This approach decouples &lt;strong&gt;caching&lt;/strong&gt; (what is stored) from &lt;strong&gt;origin requests&lt;/strong&gt; (what is sent to the backend) and &lt;strong&gt;response headers&lt;/strong&gt; (security/CORS).&lt;/p&gt;

&lt;p&gt;For DevOps engineers and cloud architects, understanding these three policy types is the key to building performant, secure, and scalable content delivery networks. This guide breaks down the ecosystem of CloudFront Managed Policies and helps you choose the right tools for the job.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is CloudFront?
&lt;/h2&gt;

&lt;p&gt;Before diving into policies, let’s ground ourselves in the basics. &lt;strong&gt;Amazon CloudFront&lt;/strong&gt; is a global Content Delivery Network (CDN). Its primary job is to sit between your users and your infrastructure (the "Origin" - like an S3 bucket or an EC2 load balancer).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Latency:&lt;/strong&gt; It reduces latency by serving content from "Edge Locations" physically closer to the user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security:&lt;/strong&gt; It terminates TLS connections at the edge and helps absorb DDoS attacks (via AWS Shield Standard).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale:&lt;/strong&gt; It absorbs traffic spikes so your backend doesn't crash.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Policy Trio: How They Work
&lt;/h2&gt;

&lt;p&gt;In the modern CloudFront request flow, three distinct policies interact to process a user's request. Understanding the distinction between them is critical for avoiding common pitfalls like "cache misses" or CORS errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Cache Policy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Where it sits:&lt;/strong&gt; At the very front of the flow.&lt;br&gt;
&lt;strong&gt;What it does:&lt;/strong&gt; It determines the &lt;strong&gt;Cache Key&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When a user requests content, CloudFront uses this policy to decide if it already has a copy. It defines which headers, cookies, or query strings make a request "unique."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;A lean cache key (few components) = higher cache hit ratio.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;A loose cache key (unnecessary headers, cookies, or query strings) = lower cache hit ratio and more load on the origin.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
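&lt;p&gt;The relationship between cache key contents and hit ratio can be sketched in plain Python (a conceptual model for illustration, not CloudFront's actual implementation):&lt;/p&gt;

```python
# Conceptual model of a cache key: only whitelisted request attributes
# participate, so two requests with the same key are "the same object".
def cache_key(request, include_headers=(), include_query=False):
    key = [request["path"]]
    if include_query:
        key.append(tuple(sorted(request["query"].items())))
    for name in include_headers:
        key.append((name, request["headers"].get(name)))
    return tuple(key)

req_a = {"path": "/logo.png", "query": {"session": "abc"},
         "headers": {"User-Agent": "Firefox"}}
req_b = {"path": "/logo.png", "query": {"session": "xyz"},
         "headers": {"User-Agent": "Chrome"}}

# A lean key treats both requests as identical: cache hit.
assert cache_key(req_a) == cache_key(req_b)

# Including the query string makes every session "unique": cache miss.
assert cache_key(req_a, include_query=True) != cache_key(req_b, include_query=True)
```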

&lt;h3&gt;
  
  
  2. Origin Request Policy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Where it sits:&lt;/strong&gt; Between CloudFront and your Backend (Origin).&lt;br&gt;
&lt;strong&gt;What it does:&lt;/strong&gt; It determines what data is forwarded to the backend &lt;strong&gt;during a cache miss&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is the most misunderstood policy. It allows you to send data (like user-specific cookies) to your backend &lt;em&gt;without&lt;/em&gt; including that data in the Cache Key. This keeps your cache efficiency high while still giving your application the data it needs to process logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Response Headers Policy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Where it sits:&lt;/strong&gt; On the way back to the user.&lt;br&gt;
&lt;strong&gt;What it does:&lt;/strong&gt; It injects specific HTTP headers into the response.&lt;/p&gt;

&lt;p&gt;Regardless of what your backend sends, this policy ensures the browser receives the correct Security (HSTS, XSS protection) and CORS headers.&lt;/p&gt;
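&lt;p&gt;Conceptually, the policy is a header merge applied on the way back to the viewer (a sketch, not CloudFront internals; the header values here are illustrative):&lt;/p&gt;

```python
# Conceptual sketch: edge-injected headers are layered on top of
# whatever the origin sent, so the browser always receives them.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
}

def apply_response_headers_policy(origin_response, policy_headers):
    headers = dict(origin_response["headers"])
    headers.update(policy_headers)  # the policy wins on conflicts
    return {**origin_response, "headers": headers}

resp = {"status": 200, "headers": {"Content-Type": "text/html"}}
final = apply_response_headers_policy(resp, SECURITY_HEADERS)
assert final["headers"]["X-Content-Type-Options"] == "nosniff"
assert final["headers"]["Content-Type"] == "text/html"  # origin headers survive
```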




&lt;h2&gt;
  
  
  Top Managed Policies: A Cheat Sheet
&lt;/h2&gt;

&lt;p&gt;AWS maintains a library of "Managed Policies" that cover about 90% of use cases. Using these is a best practice - they are rigorously tested, updated by AWS, and require zero maintenance.&lt;/p&gt;

&lt;p&gt;Here are the most essential managed policies for each category.&lt;/p&gt;

&lt;h3&gt;
  
  
  A. Managed Cache Policies
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Control the Cache Key and TTL (Time To Live).&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;CachingOptimized&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Best For: S3 Buckets, Static Websites, Images/Assets.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How it works:&lt;/strong&gt; It ignores query strings, cookies, and nearly all headers (a normalized &lt;code&gt;Accept-Encoding&lt;/code&gt; is kept so Gzip/Brotli compression works), aggressively caching content based on the URL path.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why choose it:&lt;/strong&gt; This provides the highest possible cache hit ratio. If your content doesn't change based on who is viewing it, use this.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;CachingDisabled&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Best For: Dynamic APIs, WebSockets, Real-time data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How it works:&lt;/strong&gt; It sets the Time-To-Live (TTL) to 0. Every request bypasses the cache and goes straight to the origin.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why choose it:&lt;/strong&gt; Essential for endpoints where data changes every second, or for write operations (POST/PUT) where caching would break functionality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;UseOriginCacheControlHeaders&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Best For: CMS (WordPress/Drupal), Hybrid Apps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How it works:&lt;/strong&gt; It defers the decision to your server. It looks for &lt;code&gt;Cache-Control&lt;/code&gt; headers sent by your backend to decide how long to store the file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why choose it:&lt;/strong&gt; Perfect if you have a mix of static and dynamic content and want your application code, rather than CloudFront configuration, to control cache duration.&lt;/li&gt;
&lt;/ul&gt;
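&lt;p&gt;Conceptually, the TTL is derived from the header rather than from CloudFront configuration; a toy parser illustrates the idea (a sketch, not CloudFront's actual parsing logic):&lt;/p&gt;

```python
# Conceptual sketch: with origin-controlled caching, the TTL comes from
# the Cache-Control header the backend sends, not from CloudFront config.
def ttl_from_cache_control(header_value, default_ttl=0):
    """Extract max-age seconds from a Cache-Control header, if present."""
    for directive in header_value.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return int(directive.split("=", 1)[1])
    return default_ttl

assert ttl_from_cache_control("public, max-age=3600") == 3600  # cache 1 hour
assert ttl_from_cache_control("no-store") == 0                 # do not cache
```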




&lt;h3&gt;
  
  
  B. Managed Origin Request Policies
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Control what the backend sees (without breaking the cache).&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;AllViewer&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Best For: Legacy Applications, Complex Dynamic Apps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How it works:&lt;/strong&gt; Forwards &lt;strong&gt;everything&lt;/strong&gt; - every header, every cookie, every query string - to the origin.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why choose it:&lt;/strong&gt; If your application relies on specific, obscure headers or client-side cookies to function, this ensures nothing is stripped out. &lt;em&gt;Warning: This may expose internal origin details.&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;CORS-S3Origin&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Best For: S3 Buckets serving assets to other domains.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How it works:&lt;/strong&gt; Specifically whitelists the headers S3 requires to process CORS checks (&lt;code&gt;Origin&lt;/code&gt;, &lt;code&gt;Access-Control-Request-Method&lt;/code&gt;, etc.).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why choose it:&lt;/strong&gt; S3 handles CORS differently than a standard web server. Standard forwarding often fails with S3; this policy fixes those specific issues instantly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;UserAgentRefererHeaders&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Best For: Analytics, Hotlink Protection.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How it works:&lt;/strong&gt; It specifically forwards the &lt;code&gt;User-Agent&lt;/code&gt; and &lt;code&gt;Referer&lt;/code&gt; headers while stripping others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why choose it:&lt;/strong&gt; If your backend needs to block requests from specific sites (hotlinking) or serve different content to mobile vs. desktop devices, but doesn't need full cookie visibility.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  C. Managed Response Headers Policies
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Control browser security and access.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;SecurityHeadersPolicy&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Best For: Everything. (Seriously, use this everywhere).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How it works:&lt;/strong&gt; Automatically injects industry-standard security headers like:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Strict-Transport-Security&lt;/code&gt; (HSTS)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;X-Frame-Options: SAMEORIGIN&lt;/code&gt; (prevents clickjacking)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;X-Content-Type-Options: nosniff&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Why choose it:&lt;/strong&gt; It instantly hardens your application against common web attacks without requiring code changes on your server.&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;CORS-with-preflight-and-SecurityHeadersPolicy&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Best For: Single Page Apps (React, Vue, Angular) calling APIs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How it works:&lt;/strong&gt; Combines the security headers above with a permissive CORS configuration. It handles the &lt;code&gt;OPTIONS&lt;/code&gt; pre-flight requests that modern browsers send before making API calls.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why choose it:&lt;/strong&gt; It solves the dreaded "CORS Error" in browser consoles for modern web applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;SimpleCORS&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Best For: Public, read-only data feeds.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;How it works:&lt;/strong&gt; Adds &lt;code&gt;Access-Control-Allow-Origin: *&lt;/code&gt; to the response.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why choose it:&lt;/strong&gt; If you are hosting public data (like a weather feed or public JSON file) that you want &lt;em&gt;anyone&lt;/em&gt; to be able to use on their website, this is the quickest way to enable it.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Common CloudFront Misconfigurations and How Managed Policies Fix Them
&lt;/h2&gt;

&lt;p&gt;Even experienced DevOps teams run into the same CloudFront issues over and over. Almost all of them trace back to legacy cache behaviors or overly customized settings. Here’s how CloudFront Managed Policies solve the most common problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. “My cache hit ratio is terrible.”
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cause:&lt;/strong&gt; Your Cache Key is too loose - it includes unnecessary headers, cookies, or query strings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Symptom:&lt;/strong&gt; Every request is seen as "unique," forcing a constant stream of cache misses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; Use the &lt;strong&gt;CachingOptimized&lt;/strong&gt; managed policy. It strips almost everything from the Cache Key, restoring high hit ratios - perfect for static assets, SPAs, and S3 origins.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. “CloudFront keeps forwarding too many headers to my origin.”
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cause:&lt;/strong&gt; Legacy behaviors often forward all headers by default.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact:&lt;/strong&gt; Increased origin load, slower responses, and potential "Request Header Too Large" errors on the backend.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; Switch to an Origin Request Policy like &lt;strong&gt;UserAgentRefererHeaders&lt;/strong&gt; or &lt;strong&gt;CORS-S3Origin&lt;/strong&gt;. This ensures you forward &lt;em&gt;only&lt;/em&gt; what your backend actually needs to function.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. “I’m still getting CORS errors in the browser.”
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cause:&lt;/strong&gt; Missing or inconsistent &lt;code&gt;Access-Control-*&lt;/code&gt; headers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; Apply the &lt;strong&gt;CORS-with-preflight-and-SecurityHeadersPolicy&lt;/strong&gt; response policy. It handles &lt;code&gt;OPTIONS&lt;/code&gt; preflight requests and injects all required CORS headers at the edge - even if your backend forgets them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. “S3 CORS works on localhost, but not in CloudFront.”
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cause:&lt;/strong&gt; S3 requires specific headers to process CORS. If CloudFront strips them, S3 treats the request as standard and omits the CORS response headers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; Use the &lt;strong&gt;CORS-S3Origin&lt;/strong&gt; Origin Request Policy. This explicitly forwards &lt;code&gt;Origin&lt;/code&gt;, &lt;code&gt;Access-Control-Request-Method&lt;/code&gt;, and &lt;code&gt;Access-Control-Request-Headers&lt;/code&gt; so S3 knows to respond correctly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. “My dynamic API is being cached when it shouldn’t be.”
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cause:&lt;/strong&gt; Your API path (&lt;code&gt;/api/*&lt;/code&gt;) is falling through to a default behavior that has caching enabled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Fix:&lt;/strong&gt; Create a specific behavior for your API path and attach &lt;strong&gt;CachingDisabled&lt;/strong&gt;. This guarantees every request bypasses the edge and reaches your application.&lt;/li&gt;
&lt;/ul&gt;
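&lt;p&gt;As a sketch, such a behavior could be declared in CloudFormation like this (the policy IDs shown are the published IDs of the managed &lt;code&gt;CachingDisabled&lt;/code&gt; and &lt;code&gt;AllViewer&lt;/code&gt; policies; verify them against the AWS documentation, and adapt the origin ID to your template):&lt;/p&gt;

```yaml
# Sketch of an AWS::CloudFront::Distribution cache behavior for /api/*.
CacheBehaviors:
  - PathPattern: "/api/*"
    TargetOriginId: api-origin          # assumed origin name
    ViewerProtocolPolicy: https-only
    AllowedMethods: [GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE]
    # Managed CachingDisabled policy (ID published in the AWS docs).
    CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad
    # Managed AllViewer origin request policy.
    OriginRequestPolicyId: 216adef6-5c7f-47e4-b989-5492eafa07d3
```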




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Moving to Managed Policies allows you to operate with "Intent-Based Configuration." Instead of tweaking individual settings, you select a policy that matches your architectural intent.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Intent&lt;/th&gt;
&lt;th&gt;Recommended Policy Combo&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Static Website&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cache: &lt;code&gt;CachingOptimized&lt;/code&gt; Origin Request: &lt;code&gt;None&lt;/code&gt; (or &lt;code&gt;CORS-S3Origin&lt;/code&gt;) Response: &lt;code&gt;SecurityHeadersPolicy&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Dynamic API&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cache: &lt;code&gt;CachingDisabled&lt;/code&gt; Origin Request: &lt;code&gt;AllViewer&lt;/code&gt; Response: &lt;code&gt;SimpleCORS&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Modern Web App (SPA)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cache: &lt;code&gt;CachingOptimized&lt;/code&gt; (for assets) Origin Request: &lt;code&gt;None&lt;/code&gt; Response: &lt;code&gt;CORS-with-preflight&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>aws</category>
      <category>cdn</category>
      <category>webdev</category>
      <category>cloudfront</category>
    </item>
    <item>
      <title>Architecting Dagster at Scale: Navigating the Challenges of 50+ Code Locations on Kubernetes</title>
      <dc:creator>Maciej Łopalewski</dc:creator>
      <pubDate>Mon, 20 Oct 2025 07:58:08 +0000</pubDate>
      <link>https://forem.com/u11d/architecting-dagster-at-scale-navigating-the-challenges-of-50-code-locations-on-kubernetes-4ba7</link>
      <guid>https://forem.com/u11d/architecting-dagster-at-scale-navigating-the-challenges-of-50-code-locations-on-kubernetes-4ba7</guid>
      <description>&lt;p&gt;As your organization embraces Dagster for data orchestration, your projects will inevitably grow. What starts with a single, manageable code location can quickly expand into dozens, each owned by a different team or serving a distinct data domain. While this modularity is one of Dagster’s strengths, it introduces significant architectural challenges, especially when deploying on Kubernetes.&lt;/p&gt;

&lt;p&gt;Managing 50, 100, or even more code locations is not just a matter of adding more entries to a YAML file. It has profound implications for resource consumption, deployment speed, and overall maintainability.&lt;/p&gt;

&lt;p&gt;This article dives deep into the trade-offs of managing Dagster at scale on Kubernetes. We’ll explore different deployment models, discuss performance and observability, and share real-world patterns (and anti-patterns) to help you design a data platform that is both powerful and efficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  First, What Are Dagster Code Locations?
&lt;/h3&gt;

&lt;p&gt;In Dagster, a &lt;strong&gt;code location&lt;/strong&gt; is a collection of Dagster definitions (like assets, jobs, schedules, and sensors) that are loaded in a single Python environment. Think of it as a self-contained package of data pipelines. By isolating code into distinct locations, you achieve several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fault Tolerance:&lt;/strong&gt; An error in one code location (e.g., a missing Python dependency) won’t prevent other code locations from loading.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Independent Deployments:&lt;/strong&gt; Team A can update their pipelines without forcing Team B to redeploy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency Management:&lt;/strong&gt; Each code location can have its own &lt;code&gt;requirements.txt&lt;/code&gt; or &lt;code&gt;pyproject.toml&lt;/code&gt;, avoiding conflicts between teams that need different library versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Dagster’s central components, like the webserver and the daemon, communicate with these code locations via a gRPC API to fetch definitions and launch runs. This separation is key to its scalability.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Hidden Costs of Managing 50+ Code Locations
&lt;/h3&gt;

&lt;p&gt;When you deploy Dagster on Kubernetes using the official Helm chart, the standard approach is to use &lt;strong&gt;user code deployments&lt;/strong&gt;. This feature creates a dedicated Kubernetes &lt;code&gt;Deployment&lt;/code&gt; and &lt;code&gt;Service&lt;/code&gt; for each code location you define.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A typical Dagster architecture on Kubernetes, where each code location runs in its own pod.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This model works perfectly for a handful of locations. But as you scale past 50, you start to feel the pain points:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Resource Overhead:&lt;/strong&gt; Each code location pod consumes resources just by running. A baseline Python process, the gRPC server, and health checks require a certain amount of CPU and memory. While a single pod might only need 100MB of RAM, 50 of them instantly consume 5GB - and that’s before they even load your code. This idle resource consumption can become a significant cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment Bottlenecks:&lt;/strong&gt; If you need to update a shared library or a base Docker image used by all code locations, you trigger a massive rollout. Kubernetes must terminate 50+ old pods and schedule 50+ new ones. In a resource-constrained cluster, this can lead to long deployment times, "Pod unschedulable" events, and service degradation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Launchpad" Problem:&lt;/strong&gt; It's crucial to remember that these code location pods &lt;strong&gt;do not run your data pipelines&lt;/strong&gt;. Their primary role is to serve metadata to the webserver and provide the necessary code to the Dagster daemon, which then launches &lt;em&gt;another&lt;/em&gt; pod (the "run pod" or "job pod") to actually execute the pipeline. This means your infrastructure must support both the standing army of code location pods &lt;em&gt;and&lt;/em&gt; the transient pods for active runs, further compounding resource pressure.&lt;/li&gt;
&lt;/ol&gt;
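&lt;p&gt;The idle overhead scales linearly with location count; a quick back-of-the-envelope helper (the per-pod number is illustrative, as in the text above):&lt;/p&gt;

```python
# Back-of-the-envelope idle cost of standing code-location pods.
def idle_memory_gb(num_locations, mb_per_pod=100):
    """Memory consumed by code-location pods before any run starts."""
    return num_locations * mb_per_pod / 1024

# 50 locations at ~100 MB each sit just under 5 GB while doing nothing.
assert round(idle_memory_gb(50), 2) == 4.88
```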

&lt;h3&gt;
  
  
  Kubernetes Deployment Models: Trade-Offs and Strategies
&lt;/h3&gt;

&lt;p&gt;Given the challenges, let's analyze the two primary architectural models for deploying Dagster code locations on Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Model 1: The Standard "Pod per Code Location"
&lt;/h3&gt;

&lt;p&gt;This is the default and recommended approach using &lt;code&gt;user-code-deployments&lt;/code&gt; in the Dagster Helm chart.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt; You define each code location in your &lt;code&gt;values.yaml&lt;/code&gt; file, and Helm creates a separate Kubernetes Deployment for each.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# values.yaml&lt;/span&gt;
&lt;span class="na"&gt;userCodeDeployments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;deployments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sales-analytics"&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-registry/sales-analytics"&lt;/span&gt;
        &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.2.1"&lt;/span&gt;
      &lt;span class="c1"&gt;# ... resources, env vars, etc.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;marketing-etl"&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-registry/marketing-etl"&lt;/span&gt;
        &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.5.0"&lt;/span&gt;
      &lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="c1"&gt;# ... 50 more entries&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full Isolation:&lt;/strong&gt; The best model for fault tolerance and dependency management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear Ownership:&lt;/strong&gt; Easy to map a code location pod to a specific team or project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Granular Updates:&lt;/strong&gt; An update to the &lt;code&gt;sales-analytics&lt;/code&gt; image only triggers a rollout for that single deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High Resource Overhead:&lt;/strong&gt; The primary driver of idle resource consumption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slow Global Deployments:&lt;/strong&gt; Updating all locations at once is slow and resource-intensive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Limits:&lt;/strong&gt; Can strain clusters that have a low limit on the total number of pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Model 2: The Monolithic "Single Pod" Approach (A Workaround)
&lt;/h3&gt;

&lt;p&gt;For teams struggling with the overhead of the standard model, an alternative is to consolidate all code locations into a single process. This is &lt;strong&gt;not officially recommended&lt;/strong&gt; as it moves away from Dagster's core isolation principles, but it can be a pragmatic solution in specific, resource-constrained scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt; You can "hack" the official Helm chart to run all your code locations within the main Dagster webserver and daemon pods. This involves building a single, monolithic Docker image containing the code for all pipelines and providing a &lt;code&gt;workspace.yaml&lt;/code&gt; that loads them from the local filesystem.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# In your monolithic Dockerfile&lt;/span&gt;
&lt;span class="s"&gt;COPY ./pipelines/sales_analytics /opt/dagster/app/sales_analytics&lt;/span&gt;
&lt;span class="s"&gt;COPY ./pipelines/marketing_etl /opt/dagster/app/marketing_etl&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;

&lt;span class="c1"&gt;# workspace.yaml loaded into the webserver/daemon&lt;/span&gt;
&lt;span class="na"&gt;load_from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;python_module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;module_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sales_analytics.definitions&lt;/span&gt;
      &lt;span class="na"&gt;working_directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/dagster/app/sales_analytics&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;python_module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;module_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;marketing_etl.definitions&lt;/span&gt;
      &lt;span class="na"&gt;working_directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/dagster/app/marketing_etl&lt;/span&gt;
  &lt;span class="c1"&gt;# ... all other locations&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You would disable &lt;code&gt;userCodeDeployments&lt;/code&gt; and ensure this workspace file is used by the main Dagster pods.&lt;/p&gt;
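&lt;p&gt;For example (a sketch; the key name follows the example above, and newer chart versions name this section differently, so check your chart's values schema):&lt;/p&gt;

```yaml
# Sketch: turn off per-location deployments so only the webserver and
# daemon pods run, each loading the monolithic workspace.yaml.
userCodeDeployments:
  enabled: false
```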

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Minimal Resource Footprint:&lt;/strong&gt; Dramatically reduces the number of standing pods, saving significant idle resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast Deployments:&lt;/strong&gt; An update involves rolling out just a few pods (webserver, daemon), which is much faster than 50+.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No Fault Tolerance:&lt;/strong&gt; A single broken dependency or syntax error in one code location can bring down the entire system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency Hell:&lt;/strong&gt; All teams must agree on a single, shared set of Python dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Massive Pods:&lt;/strong&gt; The webserver and daemon pods become huge, potentially requiring very large and expensive Kubernetes nodes to run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coupled Deployments:&lt;/strong&gt; Any change requires rebuilding and redeploying the entire monolithic image.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Strategies for Maintainability and Scaling
&lt;/h3&gt;

&lt;p&gt;Instead of choosing one extreme, the best strategy often lies in intelligent application of the standard model.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Use a Separate Image Per Code Location:&lt;/strong&gt; Avoid using a single base image for all your code locations. While it seems efficient, it creates tight coupling. Instead, build and version a Docker image for each code location independently. This ensures that only the code locations that have actually changed will be redeployed during an update.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggressively Monitor Resources:&lt;/strong&gt; Use tools like Prometheus and Grafana to monitor the CPU and memory usage of your code location pods. Are they constantly sitting at 5% of their requested resources? You are likely overprovisioning. Adjust their &lt;code&gt;resources.requests&lt;/code&gt; in your Helm chart to free up capacity for run pods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimize Deployment Times:&lt;/strong&gt; Keep your Docker images lean. A smaller image pulls faster, leading to quicker pod startup times. Use multi-stage builds and avoid including unnecessary build-time dependencies in your final image.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Real-World Patterns and Anti-Patterns
&lt;/h3&gt;

&lt;p&gt;Theory is one thing, but production issues are the best teacher. Here are some patterns to emulate and anti-patterns to avoid.&lt;/p&gt;

&lt;h3&gt;
  
  
  Anti-Pattern: The Heavyweight Code Location
&lt;/h3&gt;

&lt;p&gt;A common mistake is to load large models or initialize expensive clients at the module level of your Dagster code. Remember: everything you import and define globally in your code location gets loaded into memory the moment the pod starts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-world example:&lt;/strong&gt;&lt;br&gt;
A team was using the &lt;code&gt;libpostal&lt;/code&gt; library for address parsing. Simply adding &lt;code&gt;import postal&lt;/code&gt; to their asset definitions caused the memory footprint of their code location pod to jump by &lt;strong&gt;2GB&lt;/strong&gt;. When several other teams copied this pattern, the cluster's memory usage skyrocketed, causing widespread performance issues.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# assets/address_parsing.py
&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;postal.parser&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;parse_address&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;-- This import loads a large model into memory!
&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dagster&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asset&lt;/span&gt;

&lt;span class="nd"&gt;@asset&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;parsed_addresses&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;raw_addresses&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# This asset's code location pod now holds a 2GB model in memory,
&lt;/span&gt;    &lt;span class="c1"&gt;# even when the asset is not running.
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;parse_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;raw_addresses&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The Fix:&lt;/strong&gt; There are two great ways to solve this problem.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Lazy Loading:&lt;/strong&gt;  The simplest fix is to lazily import or load expensive resources &lt;em&gt;inside&lt;/em&gt; your asset or op functions. This ensures the resource is only loaded into memory in the short-lived run pod, not the long-running code location pod.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# A better approach
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dagster&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asset&lt;/span&gt;

&lt;span class="nd"&gt;@asset&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;parsed_addresses&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;raw_addresses&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;postal.parser&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;parse_address&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;-- Import inside the function
&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;parse_address&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;addr&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;raw_addresses&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Externalize as a Microservice:&lt;/strong&gt; For an even more robust and scalable solution, you can externalize the heavy dependency entirely. You can deploy &lt;code&gt;libpostal&lt;/code&gt; as a microservice (e.g., using a wrapper like &lt;code&gt;libpostal-rest&lt;/code&gt;) to have more control over its resources. This centralizes the resource-intensive component into a single, dedicated instance that you can manage and scale independently, serving all your Dagster pipelines via a simple network call.&lt;/li&gt;
&lt;/ol&gt;
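&lt;p&gt;With the microservice route, the asset makes a plain HTTP call instead of importing the model. Here is a minimal sketch, assuming a &lt;code&gt;libpostal-rest&lt;/code&gt;-style service reachable at an in-cluster URL that returns a list of label/value pairs; both the URL and the response shape are assumptions to verify against your deployment:&lt;/p&gt;

```python
# Sketch: delegate address parsing to a libpostal microservice so the
# code location pod never loads the 2GB model. The service URL and the
# response format below are assumptions, not guarantees of any wrapper.
import json
from urllib import request

LIBPOSTAL_URL = "http://libpostal.internal:8080/parser"  # hypothetical in-cluster address


def components_to_dict(components: list) -> dict:
    """Flatten the service's [{"label": ..., "value": ...}, ...] reply into a dict."""
    return {c["label"]: c["value"] for c in components}


def parse_address_remote(address: str, url: str = LIBPOSTAL_URL) -> dict:
    """Send one address to the parsing service and return its components."""
    payload = json.dumps({"query": address}).encode("utf-8")
    req = request.Request(url, data=payload, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        components = json.load(resp)
    return components_to_dict(components)
```

&lt;p&gt;The code location pod now carries only a few kilobytes of client code, while the heavy model lives once, in the dedicated service.&lt;/p&gt;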

&lt;h3&gt;
  
  
  Pattern: Domain-Driven Consolidation
&lt;/h3&gt;

&lt;p&gt;If you have many small, related code locations owned by the same team, consider consolidating them. Instead of having &lt;code&gt;sales-team-daily&lt;/code&gt;, &lt;code&gt;sales-team-weekly&lt;/code&gt;, and &lt;code&gt;sales-team-hourly&lt;/code&gt;, merge them into a single &lt;code&gt;sales-team&lt;/code&gt; code location. This reduces pod sprawl without creating a true monolith.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion: When to Split and When to Consolidate
&lt;/h3&gt;

&lt;p&gt;Choosing the right architecture is a balancing act. Here's a simple heuristic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stick with the "Pod per Code Location" model as your default.&lt;/strong&gt; The isolation and maintainability benefits are immense and align with Dagster's core design. Use the strategies outlined above to mitigate the resource overhead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consolidate code locations&lt;/strong&gt; that are owned by the same team, share the same dependencies, and are deployed together. This is a pragmatic way to reduce pod count.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Only consider the "Monolithic" model as a last resort.&lt;/strong&gt; If you are in a highly resource-constrained environment and suffering from cripplingly slow rollouts due to pod churn, it can be a temporary lifeline. But be fully aware of the trade-offs in stability and dependency management you are making.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>dagster</category>
      <category>datascience</category>
      <category>python</category>
    </item>
    <item>
      <title>Boost Your Medusa E-Commerce Development: Streamlined Local Setup Guide</title>
      <dc:creator>Maciej Łopalewski</dc:creator>
      <pubDate>Fri, 27 Jun 2025 05:27:38 +0000</pubDate>
      <link>https://forem.com/u11d/boost-your-medusa-e-commerce-development-streamlined-local-setup-guide-4f0b</link>
      <guid>https://forem.com/u11d/boost-your-medusa-e-commerce-development-streamlined-local-setup-guide-4f0b</guid>
      <description>&lt;p&gt;&lt;a href="https://medusajs.com/" rel="noopener noreferrer"&gt;Medusa&lt;/a&gt; is a powerful open-source headless commerce engine that provides a flexible and robust foundation for building e-commerce applications. Getting started with a new framework often involves setting up various prerequisites and understanding configuration nuances. While the official documentation is excellent, having a pre-configured starting point can significantly accelerate the local development process.&lt;/p&gt;

&lt;p&gt;This article will guide you through setting up Medusa locally, focusing on common requirements and helpful configurations. We'll also introduce a specific resource, the &lt;a href="https://github.com/u11d-com/medusa-starter/tree/v2" rel="noopener noreferrer"&gt;medusa-starter repository&lt;/a&gt;, designed to simplify these initial steps and provide valuable examples for future deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Standard Medusa Installation Path
&lt;/h2&gt;

&lt;p&gt;The official Medusa documentation offers comprehensive guides for getting started. You can find the detailed installation instructions covering prerequisites like Node.js, PostgreSQL, and Redis on the &lt;a href="https://docs.medusajs.com/learn/installation" rel="noopener noreferrer"&gt;Medusa Installation guide&lt;/a&gt;. It's highly recommended to familiarize yourself with these official steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simplifying Prerequisites with &lt;code&gt;medusa-starter&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;One of the initial hurdles in setting up a local development environment for Medusa is provisioning the necessary database (PostgreSQL) and caching layer (Redis). The &lt;a href="https://github.com/u11d-com/medusa-starter/tree/v2" rel="noopener noreferrer"&gt;medusa-starter repository&lt;/a&gt; addresses this by providing a simple Docker Compose file specifically for these services.&lt;/p&gt;

&lt;p&gt;Within the repository, you'll find a &lt;code&gt;compose.db.yaml&lt;/code&gt; file (&lt;a href="https://github.com/u11d-com/medusa-starter/blob/v2/compose.db.yaml" rel="noopener noreferrer"&gt;link to compose.db.yaml&lt;/a&gt;). This file allows you to spin up ready-to-use PostgreSQL and Redis instances with a single command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose -f compose.db.yaml up -d&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command brings up the necessary services in detached mode (&lt;code&gt;-d&lt;/code&gt;), allowing you to quickly get your database and cache dependencies running without manual installation and configuration. &lt;strong&gt;Based on the configuration in &lt;code&gt;compose.db.yaml&lt;/code&gt;, this will set up:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL:&lt;/strong&gt; Available on &lt;code&gt;localhost:5432&lt;/code&gt;. It will be configured with the user, password, and database name specified within the compose file or corresponding environment variables, ready for your Medusa instance to connect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis:&lt;/strong&gt; Available on &lt;code&gt;localhost:6379&lt;/code&gt;. For this local development setup, authentication is typically not required, allowing for straightforward connection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;medusa-starter&lt;/code&gt; repository includes environment variables tailored to connect to these services, making integration straightforward once they are running.&lt;/p&gt;
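&lt;p&gt;For reference, the resulting connection strings in your &lt;code&gt;.env&lt;/code&gt; look roughly like this; the user, password, and database name below are placeholders, so use the values actually defined in &lt;code&gt;compose.db.yaml&lt;/code&gt;:&lt;/p&gt;

```shell
# Hypothetical .env entries matching the Docker Compose services above.
# Credentials are placeholders -- read the real ones from compose.db.yaml.
DATABASE_URL=postgres://user:password@localhost:5432/medusa
REDIS_URL=redis://localhost:6379
```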

&lt;h2&gt;
  
  
  Understanding &lt;code&gt;NODE_ENV&lt;/code&gt; and Its Implications
&lt;/h2&gt;

&lt;p&gt;A critical aspect of configuring your Medusa application, both locally and in production, is the &lt;code&gt;NODE_ENV&lt;/code&gt; environment variable. This variable significantly influences Medusa's behavior.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;NODE_ENV=development&lt;/code&gt;:&lt;/strong&gt; This is the standard setting for local development. In this mode, Medusa often provides more detailed logging and error messages, and certain security constraints are relaxed to facilitate rapid iteration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;NODE_ENV=production&lt;/code&gt;:&lt;/strong&gt; This setting is intended for production deployments. Medusa enables stricter security measures and optimizes for performance. A key behavior change in &lt;code&gt;production&lt;/code&gt; mode is the requirement for a secure connection (HTTPS/TLS) and a custom domain to access the Medusa Admin panel. This is because Medusa uses secure cookies for authentication, which browsers will only send over a secure connection to a specific domain, not &lt;code&gt;localhost&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  A &lt;code&gt;NODE_ENV&lt;/code&gt; Workaround for Local Docker (Without TLS)
&lt;/h3&gt;

&lt;p&gt;If you are trying to run your Medusa application locally within a Docker container &lt;em&gt;without&lt;/em&gt; setting up TLS and a custom domain, using &lt;code&gt;NODE_ENV=production&lt;/code&gt; will prevent you from logging into the admin panel.&lt;/p&gt;

&lt;p&gt;As a workaround for this specific scenario (local Docker testing without TLS), you can set &lt;code&gt;NODE_ENV&lt;/code&gt; to a different value, such as &lt;code&gt;CI&lt;/code&gt;. While this allows you to bypass the secure cookie requirement locally, &lt;strong&gt;it is important to understand that this is a workaround and not a recommended practice for actual production deployments.&lt;/strong&gt; For production, always use &lt;code&gt;NODE_ENV=production&lt;/code&gt; and ensure proper TLS setup.&lt;/p&gt;
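&lt;p&gt;In practice the workaround is a one-line change wherever you set the container's environment variables. A sketch, with the image name and port as placeholders:&lt;/p&gt;

```shell
# Run a locally built Medusa image over plain HTTP on localhost.
# NODE_ENV=CI sidesteps the secure-cookie requirement of "production";
# never ship this value to a real deployment.
docker run --rm -p 9000:9000 \
  -e NODE_ENV=CI \
  -e DATABASE_URL="$DATABASE_URL" \
  my-medusa-image:latest
```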

&lt;h2&gt;
  
  
  Beyond Local Development: Production Examples in &lt;code&gt;medusa-starter&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/u11d-com/medusa-starter/tree/v2" rel="noopener noreferrer"&gt;medusa-starter repository&lt;/a&gt; isn't just for getting started locally. It also provides valuable examples to help you transition towards production deployments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;medusa-config.ts&lt;/code&gt; Example:&lt;/strong&gt; The repository includes an example &lt;code&gt;medusa-config.ts&lt;/code&gt; file (&lt;a href="https://github.com/u11d-com/medusa-starter/blob/v2/medusa-config.ts" rel="noopener noreferrer"&gt;link to medusa-config.ts&lt;/a&gt;) that demonstrates how to configure modules and settings suitable for a production environment, often integrating with the Docker Compose database setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example Dockerfiles:&lt;/strong&gt; You'll find example &lt;code&gt;Dockerfiles&lt;/code&gt; (&lt;a href="https://github.com/u11d-com/medusa-starter/tree/v2/docker" rel="noopener noreferrer"&gt;link to Dockerfiles&lt;/a&gt;) that show you how to package your Medusa application into a Docker image, a common step for cloud deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example GitHub Actions Workflow:&lt;/strong&gt; The repository includes a basic GitHub Actions workflow example (&lt;a href="https://github.com/u11d-com/medusa-starter/tree/v2/.github/workflows" rel="noopener noreferrer"&gt;link to GitHub Actions&lt;/a&gt;) that you can adapt for your own projects. This workflow demonstrates a basic Continuous Integration (CI) pipeline, which is crucial for automating testing and building your application.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Setting up a development environment can sometimes feel like the first significant hurdle. The &lt;a href="https://github.com/u11d-com/medusa-starter/tree/v2" rel="noopener noreferrer"&gt;medusa-starter repository (v2)&lt;/a&gt; aims to lower that barrier by providing pre-configured examples for essential services via Docker Compose. By understanding the role of &lt;code&gt;NODE_ENV&lt;/code&gt; and leveraging the provided configuration and Docker examples, you can not only streamline your local development workflow but also gain a head start on preparing your Medusa application for production deployment. Explore the repository, adapt the examples to your needs, and happy coding!&lt;/p&gt;

</description>
      <category>medusa</category>
      <category>docker</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
