<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tomasz Fidecki</title>
    <description>The latest articles on Forem by Tomasz Fidecki (@tomasz_fidecki_u11d).</description>
    <link>https://forem.com/tomasz_fidecki_u11d</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3296103%2F88231f33-6346-42cd-9742-f16e4ca71e24.png</url>
      <title>Forem: Tomasz Fidecki</title>
      <link>https://forem.com/tomasz_fidecki_u11d</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tomasz_fidecki_u11d"/>
    <language>en</language>
    <item>
      <title>The cost of serverless application development on AWS: A Collector's Platform case study</title>
      <dc:creator>Tomasz Fidecki</dc:creator>
      <pubDate>Wed, 27 Aug 2025 07:00:00 +0000</pubDate>
      <link>https://forem.com/u11d/the-cost-of-serverless-application-development-on-aws-a-collectors-platform-case-study-27m9</link>
      <guid>https://forem.com/u11d/the-cost-of-serverless-application-development-on-aws-a-collectors-platform-case-study-27m9</guid>
      <description>&lt;p&gt;Serverless architecture represents a paradigm shift in deploying and managing applications. This approach not only alters the way companies design solutions but also significantly redefines the cost structure in cloud services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Defining serverless computing
&lt;/h2&gt;

&lt;p&gt;Serverless computing, contrary to what the name suggests, does not eliminate servers but abstracts their management from the users. It is a cloud-computing execution model where the cloud provider dynamically manages the allocation and provisioning of infrastructure and accompanying resources. Essentially, developers can build and run applications and services without the overhead of managing the infrastructure typically associated with computing. This distribution of responsibilities can influence and potentially reduce software development time.&lt;br&gt;
In a classic architecture, servers run continuously so that clients can issue a request at any time. In contrast, serverless solutions generate no additional costs (apart from data storage fees) during periods of no user activity. This can cut server operating costs several-fold, down to zero when the application is not in use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Characteristics
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Event-driven and Scalable - Serverless architectures are inherently event-driven. They are designed to respond to a variety of events, such as HTTP requests, database changes, or specific actions triggered by the user. This setup ensures automatic scaling, allowing the application to handle varying loads without manual intervention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Micro-billing and Cost-effectiveness - One of the most appealing aspects of serverless computing is the 'pay-as-you-go' pricing model. Costs are based on the actual amount of resources consumed by an application, down to the function execution level, rather than on pre-purchased capacity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Faster deployment and time to market - Serverless computing enables developers to focus solely on writing code and deploying functionalities, as the cloud provider manages the underlying infrastructure, scaling, and maintenance. This results in faster development cycles and a quicker time to market for new feature development.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
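&lt;p&gt;To make the event-driven model concrete, here is a minimal sketch of a Lambda-style handler in Python. The event shape follows API Gateway's HTTP API payload; the route and responses are purely illustrative assumptions, not part of the platform's actual code.&lt;/p&gt;

```python
import json

def handler(event, context):
    """A minimal, illustrative Lambda-style handler for an HTTP event.

    The "rawPath" field follows API Gateway's HTTP API payload shape;
    the /health route and response bodies are hypothetical examples.
    """
    route = event.get("rawPath", "/")
    if route == "/health":
        body = {"status": "ok"}
    else:
        body = {"message": f"no handler for {route}"}
    # The platform only pays while this function runs; when no events
    # arrive, no compute cost is incurred.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```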

&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;p&gt;Serverless computing offers numerous benefits, including reduced operational costs, simplified scalability, and a focus on innovation rather than infrastructure management. It is particularly advantageous for applications with variable or unpredictable traffic, as it ensures efficient resource utilization. The adoption of serverless computing is growing rapidly across industries, driven by its ability to enable more agile and cost-effective cloud solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges
&lt;/h3&gt;

&lt;p&gt;Despite its advantages, serverless computing also comes with challenges. These include concerns around vendor lock-in, limitations in runtime environments, and complexities in monitoring and debugging. Understanding these challenges is crucial for decision makers and architects considering adopting a serverless architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In the following sections, we will delve into the specifics of serverless computing and explore how this model impacts the cost and efficiency of cloud-based applications, with a particular focus on our case study: a platform designed for collectors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Collectors' Platform with serverless application model
&lt;/h2&gt;

&lt;p&gt;The Collectors' Platform, an example application, is thoughtfully designed for hobbyists and enthusiasts. This section provides an overview of the platform's purpose, its key features, and the technical requirements needed to bring such a concept to life with a serverless paradigm.&lt;/p&gt;

&lt;h3&gt;
  
  
  Purpose of the platform
&lt;/h3&gt;

&lt;p&gt;The Collectors' Platform is a virtual space where collectors from various domains can connect, showcase, and manage their collections. It serves as a hub for like-minded individuals to share their passions, exchange insights, and even discover rare items. From vintage stamps to contemporary art, this platform caters to a wide range of collecting interests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key features and technical requirements
&lt;/h3&gt;

&lt;p&gt;We assume that the only actor in the designed system is the user, who should be able to perform the set of actions listed below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Registration: allows users to create accounts using their Facebook, Google accounts, or email and password.
Technical requirements: integration with OAuth for social media logins, secure database for storing user credentials, and encryption for data protection.&lt;/li&gt;
&lt;li&gt;Login: provides users access to their accounts.
Technical requirements: secure authentication process, session management, and possibly multi-factor authentication for enhanced security.

&lt;ul&gt;
&lt;li&gt;Password reset: assists users in recovering their accounts.
Technical requirements: email integration for password reset links, secure token generation, validation process and user interface to enable workflow.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Managing a network of friends: enables users to connect with other collectors.
Technical requirements: friend request system, database to track connections, and real-time updates of friend lists.&lt;/li&gt;

&lt;li&gt;Profile editing: allows users to personalize their profiles with personal information.
Technical requirements: user interface for profile editing, database updates, and data validation.&lt;/li&gt;

&lt;li&gt;Avatar photo: lets users upload and update their profile picture.
Technical requirements: image upload capability, server-side processing for image resizing and storage.&lt;/li&gt;

&lt;li&gt;Managing collections: users can add, edit, and delete their collections.
Technical requirements: interface for managing collections, database support for storing collection details, image hosting for collection items.&lt;/li&gt;

&lt;li&gt;Managing items in a collection: facilitates the addition, editing, and deletion of items within a collection.
Technical requirements: detailed item description fields, image upload and management, and categorization features.&lt;/li&gt;

&lt;li&gt;Viewing collections: allows users to view their own and others' collections.
Technical requirements: gallery view, search and filter functions, and user access control for private collections.&lt;/li&gt;

&lt;li&gt;Reacting to collections and items: enables users to interact with collections and items through likes, comments, or shares.
Technical requirements: interactive elements for user engagement. For the sake of simplicity real-time communication is not used.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The Collectors' Platform, with its array of features, is designed to be a comprehensive solution for collectors to celebrate their passions. As we proceed, we will explore how serverless computing may support these features, particularly in terms of scalability, performance, and cost-effectiveness.&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless services used in the Collectors' Platform
&lt;/h3&gt;

&lt;p&gt;In adopting a serverless architecture for the Collectors' Platform, various cloud services are employed to manage different aspects of the application. Each service plays a major role in ensuring the platform is scalable, efficient and user-friendly. Here, we focus on the key serverless services utilized in the platform, including AWS Lambda, Lambda@Edge, Amazon S3, Amazon API Gateway, Amazon DynamoDB, AWS Cognito and Amazon CloudFront.&lt;/p&gt;

&lt;h4&gt;
  
  
  Database
&lt;/h4&gt;

&lt;p&gt;The core component of most systems is the database, which stores and handles user input. The designed platform will use Amazon DynamoDB, a fully managed cloud database. DynamoDB offers a NoSQL database service that handles the platform's data, including user profiles, collection details, and interaction records. It provides fast and predictable performance and seamless scalability, reducing the overhead of manual database administration.&lt;/p&gt;
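&lt;p&gt;As an illustration, user profiles and collection details could share a single DynamoDB table. The key names and entity layout below are assumptions for the sake of the example, not a prescribed schema:&lt;/p&gt;

```python
# Illustrative single-table layout: pk/sk key names and the entity
# prefixes (USER#, COLLECTION#) are assumptions, not the platform's
# actual schema.

def user_profile_item(user_id, display_name):
    """Item holding a user's profile record."""
    return {"pk": f"USER#{user_id}", "sk": "PROFILE",
            "display_name": display_name}

def collection_item(user_id, collection_id, title):
    """Item holding one of the user's collections."""
    return {"pk": f"USER#{user_id}", "sk": f"COLLECTION#{collection_id}",
            "title": title}

# Both entity types share one table, so a single Query on
# pk = "USER#42" returns the profile and all of that user's collections.
```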

&lt;h4&gt;
  
  
  Identity and access management
&lt;/h4&gt;

&lt;p&gt;Another key element of today's systems is access management, specifically authentication and authorization. The first verifies that users are who they claim to be; the latter checks whether a user has permission to perform the desired action. It might seem that delegating identity management to another provider is less secure than implementing and maintaining security systems yourself. In practice, the opposite is usually true: building an independent, mature, and secure system on one's own would require an enormous amount of time and labor, and consequently money. Leading companies invest billions of dollars a year in cybersecurity. It is therefore usually both more secure and cheaper to use an off-the-shelf solution. In our design we will use AWS Cognito to simplify user sign-up, sign-in, and access control for the platform. It enhances security, scales to millions of users, and integrates easily with other AWS services, providing a comprehensive identity and access management solution.&lt;/p&gt;

&lt;h4&gt;
  
  
  File storage
&lt;/h4&gt;

&lt;p&gt;Several functional requirements of the designed platform relate to binary files, especially images. To meet this requirement, one could use Amazon Simple Storage Service, known as Amazon S3, a highly scalable and secure file store capable of storing and processing petabytes of data. In this context Amazon S3 is utilized for storing user-uploaded content, including avatar photos and images of collection items. S3 offers high durability, availability, and scalability. It is an ideal solution for large-scale storage needs, ensuring data is safe and easily accessible.&lt;/p&gt;

&lt;h4&gt;
  
  
  Execution of business logic
&lt;/h4&gt;

&lt;p&gt;To deliver the platform's functionality, business logic must be implemented to handle user requests properly. For that code to execute, however, some computing resource must run it - this is where AWS Lambda comes in, letting the cloud customer consume compute capacity only while the code runs. Functions that can be invoked at any time are the essence of the serverless approach. In terms of scalability, AWS Lambda provides as many computing resources as the application requires. In this scenario AWS Lambda handles the platform's backend processes, such as user authentication, database operations, and dynamic content generation. &lt;br&gt;
In most cases, cloud providers let developers run code written in a variety of technology stacks, e.g. Node.js, Python, Go, or Java.&lt;br&gt;
With AWS Lambda, the platform benefits from automatic scaling and pays only for the compute time consumed, contributing to significant cost savings and efficiency.&lt;/p&gt;

&lt;h4&gt;
  
  
  Authorization@Edge using cookies
&lt;/h4&gt;

&lt;p&gt;When architecting the Collectors’ Platform we applied common access patterns and security best practices. To protect Amazon CloudFront content from being downloaded by unauthenticated users, we will use a solution that combines Lambda@Edge and Cognito with HTTP cookies. Cookies provide transparent authentication for web apps and allow secure downloads of the platform content: Lambda@Edge sets the cookies after sign-in, and browsers automatically send them on subsequent requests. Lambda@Edge extends the capabilities of AWS Lambda, allowing function execution closer to the user's location by leveraging the AWS global network. This is particularly useful for customizing content delivered through Amazon CloudFront. It improves application performance by reducing latency, enhances user experience through personalized content delivery, and reduces the load on origin servers.&lt;/p&gt;
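&lt;p&gt;The cookie check described above can be sketched as a CloudFront viewer-request handler. The cookie name and the redirect target below are assumptions; a production implementation would also validate the Cognito-issued token itself rather than merely checking for its presence.&lt;/p&gt;

```python
# Sketch of a Lambda@Edge viewer-request handler. The event shape is
# CloudFront's; the cookie name "CognitoIdToken" and the "/sign-in"
# redirect target are illustrative assumptions.

def viewer_request_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    cookies = request.get("headers", {}).get("cookie", [])
    has_token = any("CognitoIdToken=" in c.get("value", "") for c in cookies)

    if has_token:
        # Authenticated: pass the request through so CloudFront can
        # serve the protected content.
        return request

    # Unauthenticated: redirect the browser to the sign-in page.
    return {
        "status": "302",
        "statusDescription": "Found",
        "headers": {"location": [{"key": "Location", "value": "/sign-in"}]},
    }
```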

&lt;h4&gt;
  
  
  API Gateway for RESTful APIs
&lt;/h4&gt;

&lt;p&gt;Amazon API Gateway acts as the front door for requests to the platform's backend services hosted on AWS Lambda and elsewhere. It provides efficient management of RESTful APIs, offers scalability, ensures security through various mechanisms, and handles traffic management, authorization, and access control.&lt;br&gt;
Having implemented all of the services described above, it is now possible to handle incoming requests from users. For example, to add an item to a user's collection, the following workflow applies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user request is authorized by CloudFront and Cognito.&lt;/li&gt;
&lt;li&gt;API Gateway invokes the addItem() function.&lt;/li&gt;
&lt;li&gt;Lambda executes the addItem() function code.&lt;/li&gt;
&lt;li&gt;DynamoDB stores the new item in the database.&lt;/li&gt;
&lt;li&gt;S3 saves the item image in the bucket.&lt;/li&gt;
&lt;/ul&gt;
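&lt;p&gt;The workflow above can be sketched as a single Python handler. The table and bucket objects are injected boto3-style resources, and the payload fields and key layout are hypothetical; injecting the clients also keeps the function easy to unit test:&lt;/p&gt;

```python
import json
import uuid

def add_item(event, table, bucket):
    """Illustrative addItem() handler: persists item metadata to DynamoDB
    and the item image to S3. `table` and `bucket` are injected
    boto3-style resources (hypothetical names), so the handler can be
    exercised with stubs. In a real deployment the image would arrive
    base64-encoded through API Gateway."""
    payload = json.loads(event["body"])
    item_id = str(uuid.uuid4())
    image_key = f"items/{item_id}.png"

    # Step: "S3 saves the item image in the bucket."
    bucket.put_object(Key=image_key, Body=payload["image_bytes"].encode())

    # Step: "DynamoDB stores the new item in the database."
    table.put_item(Item={
        "pk": f"COLLECTION#{payload['collection_id']}",
        "sk": f"ITEM#{item_id}",
        "name": payload["name"],
        "image_key": image_key,
    })
    return {"statusCode": 201, "body": json.dumps({"item_id": item_id})}
```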

&lt;h4&gt;
  
  
  Optimal cloud-based content delivery
&lt;/h4&gt;

&lt;p&gt;Amazon CloudFront is a content delivery network (CDN) service that securely delivers data, images, applications, and APIs to users globally with low latency and high transfer speeds. By caching content at edge locations closest to users, it accelerates content delivery, reduces server load, and improves overall user experience. The Collectors’ Platform architecture uses CloudFront both for data delivery and for serving the web application that forms the end-user-facing frontend. In addition, the service provides cybersecurity capabilities, protecting against network- and application-layer attacks and terminating SSL traffic.&lt;/p&gt;

&lt;h4&gt;
  
  
  Summary of services used
&lt;/h4&gt;

&lt;p&gt;In summary, the Collectors' Platform leverages a combination of serverless services to create a robust, scalable, and cost-effective architecture. AWS Lambda and Lambda@Edge offer powerful compute capabilities, Amazon S3 and DynamoDB handle storage and database needs, AWS Cognito secures user authentication, while Amazon API Gateway and CloudFront ensure secure, efficient, and fast content delivery. Together, these services form a solid and reliable foundation for the platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf63vp9zg8e9xuhojsf6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxf63vp9zg8e9xuhojsf6.png" alt="collectors-platform-aws-serverless-architecture.png" width="684" height="458"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1: Overview of Collectors’ Platform serverless architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost analysis framework
&lt;/h3&gt;

&lt;p&gt;Understanding the cost implications of serverless architecture is crucial for any platform, including applications like the Collectors' Platform. This section outlines a framework for analyzing and estimating the costs involved in a serverless environment, focusing on the unique aspects of serverless billing and cost optimization strategies.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cost factors in serverless computing
&lt;/h4&gt;

&lt;p&gt;Serverless computing introduces a different cost structure compared to traditional cloud services. The key factors affecting costs in a serverless environment include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compute time: costs are incurred based on the amount of compute time consumed by functions (e.g. AWS Lambda). This includes the execution time and the resources allocated to the function.&lt;/li&gt;
&lt;li&gt;Number of requests: each request to serverless functions (e.g., Lambda invocations) is billed. High-frequency interactions can impact costs significantly.&lt;/li&gt;
&lt;li&gt;Data transfer and storage: costs are associated with the amount of data stored (e.g., in Amazon S3) and transferred, especially when data moves across different regions or out of the AWS ecosystem.&lt;/li&gt;
&lt;li&gt;Additional services: utilizing other AWS services, such as DynamoDB or API Gateway, adds to the cost based on their specific pricing models.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Methodology for cost analysis
&lt;/h4&gt;

&lt;p&gt;To accurately estimate and analyze costs, the following approach is usually applied:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify and categorize usage: break down the platform's operations into distinct categories (e.g., user authentication, data storage, API calls, storage) to understand where costs are incurred.&lt;/li&gt;
&lt;li&gt;Quantify resource utilization: assume or measure the usage of each service in terms of compute time, number of requests, and data storage/transfers.&lt;/li&gt;
&lt;li&gt;Apply pricing models: using the specific pricing models of each AWS service, calculate the costs based on the assumed usage. Tools like &lt;a href="https://calculator.aws" rel="noopener noreferrer"&gt;AWS Pricing Calculator&lt;/a&gt; can support this analysis.&lt;/li&gt;
&lt;li&gt;Include scaling considerations: factor in the scalability of serverless services, estimating costs for both average and peak loads.&lt;/li&gt;
&lt;/ul&gt;
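&lt;p&gt;A minimal sketch of the quantify-price-scale steps above, assuming illustrative unit prices; real figures come from each service's pricing page or the AWS Pricing Calculator:&lt;/p&gt;

```python
# Illustrative cost model for the methodology above. All unit prices
# here are assumptions for the sake of the example.
ASSUMED_PRICES = {
    "lambda_gb_second": 0.0000166667,    # USD per GB-s of compute (assumed)
    "lambda_request": 0.20 / 1_000_000,  # USD per invocation (assumed)
}

def monthly_lambda_cost(invocations, avg_duration_s, memory_gb,
                        prices=ASSUMED_PRICES):
    """Quantify resource utilization, then apply the pricing model."""
    compute = invocations * avg_duration_s * memory_gb * prices["lambda_gb_second"]
    requests = invocations * prices["lambda_request"]
    return compute + requests

# Scaling consideration: estimate both average and peak load.
average = monthly_lambda_cost(invocations=300_000, avg_duration_s=0.2, memory_gb=0.128)
peak = monthly_lambda_cost(invocations=3_000_000, avg_duration_s=0.2, memory_gb=0.128)
```

Because the model is linear in invocations, a tenfold traffic spike produces a tenfold bill; serverless scales cost down as readily as it scales capacity up.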

&lt;h4&gt;
  
  
  Cost optimization strategies
&lt;/h4&gt;

&lt;p&gt;Implementing cost optimization strategies can significantly reduce expenditures in a serverless environment. Key strategies to follow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Efficient coding: optimize serverless function code to run faster and consume fewer resources. Reducing execution time directly lowers costs.&lt;/li&gt;
&lt;li&gt;Resource allocation tuning: adjust the allocated resources for serverless functions (e.g., memory size for Lambda functions) to match the actual need, avoiding over-provisioning.&lt;/li&gt;
&lt;li&gt;Caching strategy and mechanisms: implement caching (e.g., using Amazon CloudFront or DynamoDB Accelerator) to reduce the number of function invocations and database reads/writes.&lt;/li&gt;
&lt;li&gt;Monitoring and alerts: Regularly monitor usage and set up alerts for unexpected spikes in usage or costs, enabling quick response to potential issues.&lt;/li&gt;
&lt;/ul&gt;
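&lt;p&gt;Resource allocation tuning is worth a quick illustration: Lambda cost scales with duration times memory, so a larger allocation can actually be cheaper per invocation if it shortens execution enough. The durations, memory sizes, and unit price below are assumptions:&lt;/p&gt;

```python
# Lambda compute cost = duration × memory × unit price. The price and
# the duration/memory pairs below are illustrative assumptions.
PRICE_PER_GB_S = 0.0000166667  # assumed USD per GB-s

def invocation_cost(duration_s, memory_gb):
    return duration_s * memory_gb * PRICE_PER_GB_S

slow = invocation_cost(duration_s=1.0, memory_gb=0.128)   # 128 MB, 1.0 s
fast = invocation_cost(duration_s=0.4, memory_gb=0.256)   # 256 MB, 0.4 s

# Doubling the memory more than halved the (assumed) duration, so the
# larger allocation is cheaper per invocation despite the higher rate.
assert fast < slow
```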

&lt;p&gt;A thorough understanding of the cost factors, coupled with a strategic approach to analyzing and optimizing these costs, is essential for efficiently managing serverless expenses. By applying this framework, the Collectors' Platform can not only enjoy the benefits of serverless architecture but also maintain cost-effective operations. The next sections will explore a detailed cost analysis of the platform's specific features and offer insights into potential cost savings.&lt;br&gt;
In the upcoming section, 'Detailed Cost Analysis,' we will delve deeper, applying this framework to present a granular breakdown of the Collectors' Platform's costs, offering a clear view of the financial landscape as we scale our serverless solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Detailed cost analysis
&lt;/h3&gt;

&lt;p&gt;Building on our established framework, this section goes through a detailed cost analysis of the Collectors' Platform. We will examine the costs associated with each major feature of the platform, utilizing serverless computing, and highlight areas where costs can vary significantly.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;User authentication (AWS Cognito, Lambda@Edge, CloudFront): Registration, login, and password reset features predominantly utilize AWS Cognito and Lambda functions. Cost factors arise from the number of user authentications, Lambda invocations for custom authentication flows, and data storage for user credentials. Considering the platform’s user base and frequency of authentication actions, we can estimate a monthly cost based on AWS Cognito's pricing for active users and Lambda’s per-invocation pricing model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Social features (API Gateway, Lambda, DynamoDB): Features like friend network management, profile editing, and avatar management primarily use API Gateway, Lambda, and DynamoDB services. Costs are mainly driven by API calls, the execution time of Lambda functions for processing requests, and the storage and retrieval of data in DynamoDB. The cost is calculated based on the API Gateway request pricing, Lambda execution time and memory allocation, and DynamoDB’s read/write capacity units.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collection management (Lambda, S3, DynamoDB): The management of collections, which includes adding, editing, and deleting collection information and images, primarily utilizes AWS Lambda for backend processing, Amazon S3 for image storage, and DynamoDB for data storage. The costs for this feature are based on the compute time used by Lambda functions, the storage space occupied in S3, and the read/write capacity and storage used in DynamoDB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Item management within collections (Lambda, S3, DynamoDB): Managing individual items within collections involves similar services as collection management. Costs in this area are incurred due to Lambda executions for processing item-related actions, S3 for storing item images, and DynamoDB for maintaining item data. The estimated costs include charges for Lambda compute time, S3 storage, and DynamoDB’s usage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Viewing collections (CloudFront, Lambda@Edge): Viewing collections and items efficiently uses Amazon CloudFront and Lambda@Edge. The costs for this feature arise from CloudFront data transfer and request handling, and Lambda@Edge for any edge computing needs. The cost estimation is based on the data transfer out rates of CloudFront and the number of Lambda@Edge requests, which helps in delivering content faster and more efficiently to users globally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Interacting with collections and items (API Gateway, Lambda, DynamoDB): Interactions such as likes, comments and shares on collections and items are handled through API Gateway for request management, Lambda functions for executing the necessary backend processes, and DynamoDB for recording these interactions. The costs here are calculated based on the frequency and volume of user interactions, leading to charges for API Gateway requests, Lambda function executions, and DynamoDB read/write operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Variable vs fixed costs in serverless environments
&lt;/h4&gt;

&lt;p&gt;In serverless environments, most costs are variable, scaling with the usage of the application. This is evident in Lambda, where costs are incurred per execution; in API Gateway calls; and in DynamoDB operations, which scale with read/write requests. However, some costs are relatively fixed, such as storage costs in S3 and a minimal baseline throughput in DynamoDB, providing a predictable element in the overall cost structure.&lt;/p&gt;

&lt;p&gt;This detailed cost breakdown illustrates the various factors contributing to the operating expenses of the Collectors' Platform in a serverless architecture. It underscores the need for ongoing cost management and optimization strategies to maintain financial efficiency. The next section covers the details necessary to estimate the costs that make up a platform, providing a practical approach to financial planning in serverless architectures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Estimated costs
&lt;/h3&gt;

&lt;p&gt;Our journey through serverless architecture has highlighted the flexibility and efficiency it offers. However, it's crucial to translate these benefits into tangible cost implications to truly understand the economic impact. This chapter aims to demystify the cost structure of the Collectors' Platform by providing a granular view of the expenses based on assumed usage data.&lt;/p&gt;

&lt;p&gt;We will examine each key component of the platform - ranging from user authentication to various data interactions - and present an itemized list of costs. This cost analysis will not only aid in comprehensively understanding the financial dynamics of serverless computing but also serve as a practical guide for budgeting and financial planning for similar projects. This will empower decision-makers, developers, and financial planners with the knowledge to make informed choices when selecting technology solutions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Assumptions for cost estimation
&lt;/h4&gt;

&lt;p&gt;To accurately estimate the costs associated with running the Collectors' Platform on a serverless architecture, it's essential to base calculations on realistic user activity assumptions. This section outlines the specific usage assumptions for a single user on a monthly basis. These assumptions will provide the foundation for a detailed cost breakdown.&lt;/p&gt;

&lt;p&gt;User activity assumptions (per month):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requests made: each user is assumed to make 300 requests to the platform. This includes actions like logging in, browsing collections, and interacting with items.&lt;/li&gt;
&lt;li&gt;Data transfer:

&lt;ul&gt;
&lt;li&gt;downloads: users are expected to download approximately 15MB of data. This might involve viewing high-resolution images of collection items or downloading collection details.&lt;/li&gt;
&lt;li&gt;uploads: on average, a user will upload about 1MB of data. This includes actions like updating profile information or adding new items to a collection.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Function invocations: users will trigger backend functions (e.g., AWS Lambda) leading to a total of 300 seconds of execution time and a total memory allocation of 30MB. This equates to 9 Gigabyte-seconds (GB-s) of compute time.&lt;/li&gt;

&lt;li&gt;Database storage: each user is assumed to store data in a database with a total size of 1MB. This includes user profiles, collection details, and interaction records.&lt;/li&gt;

&lt;li&gt;Database operations:

&lt;ul&gt;
&lt;li&gt;Write capacity units (WCUs): each user is estimated to consume 30 write capacity units per month, reflecting the frequency of adding or updating data in the database.&lt;/li&gt;
&lt;li&gt;Read capacity units (RCUs): a consumption of 150 read capacity units is assumed for each user per month, indicating the frequency of data retrieval operations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Image storage: users are expected to store images, such as avatars or collection items, with a total size of 10MB.&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Activity&lt;/th&gt;
&lt;th&gt;Assumption&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Requests Made&lt;/td&gt;
&lt;td&gt;300 requests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Downloads&lt;/td&gt;
&lt;td&gt;15MB of data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Uploads&lt;/td&gt;
&lt;td&gt;1MB of data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Function Invocations&lt;/td&gt;
&lt;td&gt;Total duration of 300 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Total memory of 30MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Total of 9 Gigabyte-seconds (GB-s)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Database Storage&lt;/td&gt;
&lt;td&gt;1MB of data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Database Operations&lt;/td&gt;
&lt;td&gt;30 Write Capacity Units (WCUs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;150 Read Capacity Units (RCUs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Storage&lt;/td&gt;
&lt;td&gt;10MB of data&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These assumptions are crucial for a few key reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Realism - they reflect typical user behavior and interactions within the platform, providing a realistic basis for cost estimation.&lt;/li&gt;
&lt;li&gt;Scalability - understanding individual user costs helps in scaling these estimates to larger user bases.&lt;/li&gt;
&lt;li&gt;Cost optimization - identifying the key areas of resource usage allows for targeted cost optimization strategies.&lt;/li&gt;
&lt;/ul&gt;
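&lt;p&gt;Translating the per-user assumptions above into an illustrative Lambda cost shows how small the variable compute charge is per user. The unit prices are assumptions; always verify against current AWS pricing:&lt;/p&gt;

```python
# Per-user monthly Lambda cost from the assumptions above. The unit
# prices are illustrative assumptions, not current quoted rates.
GB_SECONDS = 9   # 300 s of execution at ~30 MB, as assumed above
REQUESTS = 300   # requests per user per month, as assumed above

PRICE_PER_GB_S = 0.0000166667         # assumed USD per GB-s
PRICE_PER_REQUEST = 0.20 / 1_000_000  # assumed USD per invocation

per_user_lambda = GB_SECONDS * PRICE_PER_GB_S + REQUESTS * PRICE_PER_REQUEST

# Scaling the estimate: even for 10,000 active users, the Lambda
# compute bill stays in the low single dollars per month.
fleet_cost = 10_000 * per_user_lambda
```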

&lt;h3&gt;
  
  
  Calculations: understanding the cost components
&lt;/h3&gt;

&lt;p&gt;In serverless architecture, especially in a diverse platform like the Collectors' Platform, the final bill is composed of various services and their respective pricing factors. Each service contributes to the overall cost based on its specific usage and pricing model. This section explores the variety of services utilized and the parameters that form the final cost calculation.&lt;/p&gt;

&lt;h4&gt;
  
  
  Breakdown of services and pricing factors
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;CloudFront

&lt;ul&gt;
&lt;li&gt;Data transfer out to the Internet: the cost depends on the amount of data transferred out to the internet. CloudFront offers the first 1,024 GB/month for free, which is beneficial for platforms with moderate data transfer needs.&lt;/li&gt;
&lt;li&gt;Data transfer out to origin: charges are incurred for data transferred back to the origin server from CloudFront.&lt;/li&gt;
&lt;li&gt;Number of requests (HTTPS): CloudFront also bills based on the number of HTTPS requests, with the first 10 million requests per month being free.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;AWS Lambda

&lt;ul&gt;
&lt;li&gt;Number of requests: Lambda charges per request, making high-frequency actions a significant cost factor.&lt;/li&gt;
&lt;li&gt;Duration of each request: costs are calculated based on the execution duration of each function, measured in milliseconds.&lt;/li&gt;
&lt;li&gt;Amount of memory allocated: the allocated memory size for each function affects the cost, with higher memory allocations leading to higher charges.&lt;/li&gt;
&lt;li&gt;Amount of ephemeral storage allocated: the temporary storage used by a Lambda function also contributes to the cost.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Lambda@Edge

&lt;ul&gt;
&lt;li&gt;Similar to AWS Lambda, Lambda@Edge charges are based on the number of requests, duration of each request, and the amount of memory allocated. The key difference is that Lambda@Edge runs closer to the user, reducing latency.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;API Gateway

&lt;ul&gt;
&lt;li&gt;HTTP APIs: charges for API Gateway are based on the number of HTTP API calls made.&lt;/li&gt;
&lt;li&gt;Average size of each request: the size of each API request can influence the cost, especially for data-rich applications.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Amazon DynamoDB

&lt;ul&gt;
&lt;li&gt;Data storage size: DynamoDB charges for the amount of data stored, measured in GB.&lt;/li&gt;
&lt;li&gt;Average item size: the size of each item stored (including all attributes) affects the cost, especially for write and read operations.&lt;/li&gt;
&lt;li&gt;Number of writes and reads: DynamoDB's pricing model includes charges for read and write operations, which can be significant for platforms with heavy database interaction.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Amazon S3

&lt;ul&gt;
&lt;li&gt;Standard storage: S3 charges for the amount of data stored in the standard storage class.&lt;/li&gt;
&lt;li&gt;Requests: costs are incurred for PUT, COPY, POST, LIST, GET, SELECT, and other requests.&lt;/li&gt;
&lt;li&gt;Data returned and scanned by S3 Select: using S3 Select for querying stored data incurs additional costs based on the amount of data returned and scanned.&lt;/li&gt;
&lt;li&gt;Data transfer: inbound data transfer to S3 is free, while outbound data transfer to Amazon CloudFront is also free, reducing costs for content delivery.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Amazon Cognito

&lt;ul&gt;
&lt;li&gt;Number of Monthly Active Users (MAU): Cognito charges based on the number of MAUs.&lt;/li&gt;
&lt;li&gt;Advanced security features: utilizing advanced security features in Cognito incurs additional costs.&lt;/li&gt;
&lt;li&gt;SAML or OIDC Federation: additional charges apply for users who sign in through SAML or OIDC federation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The cost calculation for the Collectors' Platform in a serverless environment is a complex process involving multiple services and varied pricing factors. Understanding these components is crucial for accurate budgeting and cost optimization. By analyzing each service's usage against its pricing model, we can derive a detailed and accurate estimate of the platform's operational costs.&lt;/p&gt;
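&lt;p&gt;To make these factors concrete, the Lambda portion of the bill can be sketched in a few lines. The rates and always-free allowances below are illustrative assumptions (roughly the published x86 us-east-1 prices at the time of writing); always check the current AWS pricing pages.&lt;/p&gt;

```python
# Illustrative sketch of the AWS Lambda monthly cost calculation.
# Rates and free-tier allowances are assumptions; verify against current AWS pricing.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per request
PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second of execution
FREE_REQUESTS = 1_000_000              # always-free requests per month
FREE_GB_SECONDS = 400_000              # always-free GB-seconds per month

def lambda_monthly_cost(requests: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate the monthly bill for a single Lambda function."""
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_gb_seconds = max(0.0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * PRICE_PER_REQUEST
            + billable_gb_seconds * PRICE_PER_GB_SECOND)

# 20M requests/month averaging 200 ms at 512 MB of memory
print(round(lambda_monthly_cost(20_000_000, 200, 512), 2))  # → 30.47
```

&lt;p&gt;The same shape of calculation (usage minus the free allowance, multiplied by the unit price) applies to the other services listed above.&lt;/p&gt;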

&lt;h4&gt;
  
  
  The numbers: cost estimates for different user tiers
&lt;/h4&gt;

&lt;p&gt;To provide a comprehensive understanding of the operational costs for the Collectors' Platform under the Serverless Application Model (SAM), we have calculated detailed estimates for five user tiers: SAM-1k-U, SAM-10k-U, SAM-100k-U, SAM-1M-U, and SAM-10M-U. These tiers represent levels of user engagement ranging from 1,000 to 10,000,000 users, offering insight into how costs scale with the platform's growth. All calculations take the always-free tier limits of each service into account.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;User Tier&lt;/th&gt;
&lt;th&gt;CloudFront (USD)&lt;/th&gt;
&lt;th&gt;Lambda (USD)&lt;/th&gt;
&lt;th&gt;API Gateway (USD)&lt;/th&gt;
&lt;th&gt;DynamoDB (USD)&lt;/th&gt;
&lt;th&gt;S3 (USD)&lt;/th&gt;
&lt;th&gt;Cognito (USD)&lt;/th&gt;
&lt;th&gt;Monthly Total (USD)&lt;/th&gt;
&lt;th&gt;Annual Cost (USD)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SAM-1k-U&lt;/td&gt;
&lt;td&gt;0.01&lt;/td&gt;
&lt;td&gt;0.74&lt;/td&gt;
&lt;td&gt;0.33&lt;/td&gt;
&lt;td&gt;0.34&lt;/td&gt;
&lt;td&gt;0.77&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;td&gt;2.19&lt;/td&gt;
&lt;td&gt;26.28&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAM-10k-U&lt;/td&gt;
&lt;td&gt;0.18&lt;/td&gt;
&lt;td&gt;7.83&lt;/td&gt;
&lt;td&gt;3.33&lt;/td&gt;
&lt;td&gt;3.46&lt;/td&gt;
&lt;td&gt;7.70&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;td&gt;22.50&lt;/td&gt;
&lt;td&gt;270.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAM-100k-U&lt;/td&gt;
&lt;td&gt;66.30&lt;/td&gt;
&lt;td&gt;135.89&lt;/td&gt;
&lt;td&gt;33.30&lt;/td&gt;
&lt;td&gt;34.66&lt;/td&gt;
&lt;td&gt;77.00&lt;/td&gt;
&lt;td&gt;275.00&lt;/td&gt;
&lt;td&gt;622.15&lt;/td&gt;
&lt;td&gt;7,465.80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAM-1M-U&lt;/td&gt;
&lt;td&gt;1,535.71&lt;/td&gt;
&lt;td&gt;1,420.74&lt;/td&gt;
&lt;td&gt;333.00&lt;/td&gt;
&lt;td&gt;346.64&lt;/td&gt;
&lt;td&gt;775.52&lt;/td&gt;
&lt;td&gt;4,415.00&lt;/td&gt;
&lt;td&gt;8,826.61&lt;/td&gt;
&lt;td&gt;105,919.32&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAM-10M-U&lt;/td&gt;
&lt;td&gt;13,786.08&lt;/td&gt;
&lt;td&gt;14,269.28&lt;/td&gt;
&lt;td&gt;3,033.00&lt;/td&gt;
&lt;td&gt;3,466.30&lt;/td&gt;
&lt;td&gt;7,704.00&lt;/td&gt;
&lt;td&gt;33,665.00&lt;/td&gt;
&lt;td&gt;75,923.66&lt;/td&gt;
&lt;td&gt;911,083.92&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Table representing the estimated costs for the Collectors' Platform across different user tiers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Observations and findings
&lt;/h4&gt;

&lt;p&gt;The cost estimates for the Collectors' Platform across different user tiers reveal insightful patterns and implications about the scalability and economic aspects of serverless architecture. Here are the key observations and findings:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scalability costs: as the user base grows from 1,000 to 10,000,000 users, there is a noticeable increase in costs across all services. This underscores a fundamental characteristic of serverless computing: while it offers scalability and flexibility, the costs associated with these benefits rise in tandem with increased usage.&lt;/li&gt;
&lt;li&gt;Service-specific cost dynamics:

&lt;ul&gt;
&lt;li&gt;CloudFront: the cost increases significantly at higher user tiers, reflecting the increased demand for content delivery and data transfer as the number of users grows.&lt;/li&gt;
&lt;li&gt;Lambda: the cost increment is steady and substantial, indicative of the growing computational needs associated with a larger user base.&lt;/li&gt;
&lt;li&gt;API Gateway: the cost growth here is notable, particularly at the highest tiers, highlighting the increased API interaction with a larger number of users.&lt;/li&gt;
&lt;li&gt;DynamoDB: similar to Lambda, the increase in costs is steady, reflecting the escalated database interaction and storage requirements.&lt;/li&gt;
&lt;li&gt;S3: the cost follows a rising trend, correlating with the increased data storage needs for user content.&lt;/li&gt;
&lt;li&gt;Cognito: initially, there is no cost, but as the platform scales up to 100,000 users and beyond, the cost becomes significant, emphasizing the importance of user authentication and security at scale.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Sharp growth in costs at higher tiers: while the costs for the lower user tiers (1k and 10k users) are relatively modest, a steep, non-linear jump appears as the platform scales to 100k users and beyond, largely because the always-free tier allowances are exhausted. The jump is particularly evident in services like CloudFront and Cognito, reflecting the increased complexity and resource demand of a larger user base.&lt;/li&gt;
&lt;li&gt;Economies of scale: despite the overall increase in costs, the per-user cost may decrease or the value derived per user may increase as the platform scales. This economy of scale is a crucial factor for platforms anticipating rapid user growth.&lt;/li&gt;
&lt;li&gt;Importance of cost optimization: the analysis highlights the need for strategic cost optimization, especially for higher user tiers. Efficient use of resources, caching strategies, and optimizing serverless function executions can significantly control costs.&lt;/li&gt;
&lt;li&gt;Budgeting and financial planning: the estimates provide a foundation for financial planning and budgeting. Companies can use this data to forecast technology-related expenses, plan resource allocation, and strategize for funding as the user base grows.&lt;/li&gt;
&lt;/ol&gt;
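&lt;p&gt;The per-user economics behind the observations above can be checked directly from the cost table:&lt;/p&gt;

```python
# Monthly totals (USD) taken from the cost table above
monthly_totals = {
    1_000: 2.19,
    10_000: 22.50,
    100_000: 622.15,
    1_000_000: 8_826.61,
    10_000_000: 75_923.66,
}

# Cost per user per month at each tier
per_user = {users: round(total / users, 5) for users, total in monthly_totals.items()}
for users, cost in per_user.items():
    print(f"{users:>10,} users: {cost:.5f} USD/user/month")
```

&lt;p&gt;The per-user cost rises between the 10k and 1M tiers as the always-free allowances are exhausted, then dips slightly at the 10M tier, which is where economies of scale begin to show.&lt;/p&gt;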

&lt;h3&gt;
  
  
  Deciphering the technology costs
&lt;/h3&gt;

&lt;p&gt;The detailed cost analysis of the Collectors' Platform sheds light on the economic realities of serverless computing, particularly as it scales across various user tiers. One of the most striking aspects of deploying new technologies like serverless architectures is the often unpredictable nature of associated costs. While serverless computing offers remarkable scalability and operational flexibility, this analysis reveals that with these benefits come significant, and sometimes unforeseen, financial implications.&lt;/p&gt;

&lt;p&gt;Key findings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Costs rise steeply as the platform scales to the higher user tiers (100k to 10M users). This underscores the need for careful financial planning and resource management.&lt;/li&gt;
&lt;li&gt;Services like CloudFront and Cognito, which initially seem cost-effective at lower scales, can lead to substantial expenses as the platform grows.&lt;/li&gt;
&lt;li&gt;The analysis underscores the need for strategic cost optimization, especially at larger scale. Effective resource utilization, caching strategies, and optimized serverless function executions are vital to controlling costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The uncertainty in technology costs, particularly in a field as dynamic as serverless computing, poses both challenges and opportunities. On one hand, companies must navigate these uncertainties, balancing technological advancement with financial viability. On the other hand, this unpredictability necessitates innovation in cost management and optimization strategies.&lt;/p&gt;

&lt;p&gt;For businesses and developers venturing into serverless applications, this analysis serves as a reminder of the importance of continuous monitoring and adaptation. As the user base expands and application demands evolve, strategies for managing operational costs must adapt and evolve as well. This proactive approach is key to harnessing the full potential of serverless computing while maintaining economic sustainability and efficiency.&lt;/p&gt;

&lt;p&gt;In conclusion, while the costs associated with serverless technologies can be unpredictable, thorough analysis and strategic planning can provide a roadmap for navigating these uncertainties. By understanding and preparing for the potential financial impacts of scaling, organizations can make informed decisions that balance innovation with economic wisdom.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Selecting the right technology partner is a key decision that can significantly influence the success and efficiency of any serverless project. A knowledgeable and experienced partner brings to the table not just technical expertise, but also the foresight to make smart design decisions that are crucial for long-term sustainability. They can identify potential areas for optimization early in the development process, ensuring that the architecture is both cost-effective and scalable.&lt;/p&gt;

&lt;p&gt;Good design decisions made in collaboration with a capable technology partner can lead to a more robust and adaptable serverless application. This approach minimizes the risk of encountering unforeseen costs and performance bottlenecks as the application scales. In essence, the right technology partner doesn't just aid in building a solution, they help in architecting a future-proof, scalable, and economically viable digital ecosystem.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Infrastructure pipelines: the core of Continuous Integration</title>
      <dc:creator>Tomasz Fidecki</dc:creator>
      <pubDate>Tue, 26 Aug 2025 08:00:00 +0000</pubDate>
      <link>https://forem.com/u11d/infrastructure-pipelines-the-core-of-continuous-integration-2emb</link>
      <guid>https://forem.com/u11d/infrastructure-pipelines-the-core-of-continuous-integration-2emb</guid>
<description>&lt;p&gt;Modern software development, focused on building components faster and in a predictable manner, requires seamless collaboration between development and operations teams. When properly designed and executed, it leads to efficient, reliable software that is less exposed to vulnerabilities, and opens the door to continuous deployments into various environments. DevOps, as a combination of engineering and best practices, builds competitive advantage and business value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keep things in tip-top shape
&lt;/h2&gt;

&lt;p&gt;Nowadays, with great community support, open source software brings high dynamics to the software development life cycle (SDLC). The code base evolves quickly, on a daily basis, which carries the burden of keeping everything constantly tied up. Hence, automating the separate steps of the software development process grows in importance. These steps, often referred to as jobs, form a logical sequence of events leading to a final result. This sequence is called a pipeline, and it serves as the transport medium for all the jobs executed along the way. The jobs, their results, and the pipeline acting as an umbrella define the Continuous Integration (CI) practice. The integration of code changes, along with validation and testing, may lead to deployment into a selected environment. This practice is called Continuous Deployment (CD) and can be directly interconnected with Continuous Integration, forming the CI/CD practice: it starts with the tiniest change in the code base and ends with tangible changes in the observable environment.&lt;/p&gt;

&lt;p&gt;Pipelines may serve different purposes and vary in complexity. Organized into stages containing jobs, pipelines are a powerful concept for automating processes and deployments, even without human intervention. Organizations benefit from well-designed automation in many ways, starting with accelerated development, better code quality, stability, and predictability, and ending with satisfied customers receiving high-quality products.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as Code
&lt;/h2&gt;

&lt;p&gt;With the rise of cloud computing, infrastructure became a first-class concern, and control over it became as important as over any other piece of the software development process. It turned out that infrastructure can be treated as code, with the same concepts applied. Once the management and provisioning of infrastructure is performed through code, manual handling is minimized or completely eliminated. At the same time, versioning and documentation come for free, and traceability becomes easier. Infrastructure can be subjected to the same rules that apply to application source code: it can be statically analyzed, validated, and tested before being merged into the version control system. Concepts familiar from programming, such as templating and modularization, apply here as well, and ultimately enable automation. The latter can drastically improve development efficiency by eliminating the need to manually provision and manage the loose components that make up the whole infrastructure, such as machine types, operating systems, attached storage, networking, or even functions as a service. Defining infrastructure declaratively (describing the intended goal, not the separate steps), e.g. using Terraform, also provides a means to control the economic side of the venture: with every iteration of infrastructure modification, the costs can be estimated and compared with the previous state for a clear understanding of the upcoming changes.&lt;/p&gt;
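&lt;p&gt;As a tiny, hypothetical illustration of the declarative style, a Terraform resource describes the intended goal rather than the steps to reach it (the bucket name and tags are assumptions):&lt;/p&gt;

```hcl
# Declare the desired state; Terraform computes the steps to reach it
resource "aws_s3_bucket" "assets" {
  bucket = "example-platform-assets"

  tags = {
    environment = "staging"
    managed_by  = "terraform"
  }
}
```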

&lt;h2&gt;
  
  
  Infrastructure pipelines
&lt;/h2&gt;

&lt;p&gt;The pipeline, as the top-level component of the Continuous Integration concept, can be successfully utilized when infrastructure is being developed. As with regular application code, pipelines can be granular: some build awareness whenever changes are about to be merged into a branch (these are often referred to as merge request pipelines), while others apply changes to the different deployment environments in a fully or semi-automated way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure merge request pipeline
&lt;/h3&gt;

&lt;p&gt;This type of pipeline is triggered whenever engineers plan to commit changes to any or selected branches. In the case of infrastructural changes, it is good practice to implement stages that validate, format, and preview the changes about to be introduced.&lt;br&gt;
When Terraform is used to create and manage infrastructure as code, the above-mentioned pipeline stages use its command line interface and built-in commands. The &lt;code&gt;validate&lt;/code&gt; command validates all of the configuration files in a given directory. It does not access any remote services but checks the completeness, consistency, and overall correctness of the configuration. After validating, a recommended practice is to automatically rewrite the configuration files to a canonical form and style. This is easily achievable with the &lt;code&gt;fmt&lt;/code&gt; command, which transforms file content to conform to the Terraform language style conventions. This way, the writing style stays consistent and provides no basis for discussion during the code review phase. The last stage of the pipeline previews the changes planned for the infrastructure: the &lt;code&gt;plan&lt;/code&gt; command reads the current state, compares it with the previous configuration, and plans the changes needed to achieve the desired state. It is worth mentioning that in GitLab, the information contained in a merge request is presented in a convenient UI, with a brief summary of the planned infrastructure changes and a link to the full execution plan.&lt;br&gt;
A merge request pipeline for infrastructure may be extended to the various deployment environments, such as development, staging, and production, to stay consistent while still introducing environment-specific changes.&lt;/p&gt;
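&lt;p&gt;A minimal sketch of such a merge request pipeline in GitLab CI might look as follows; the image tag, job names, and rules are illustrative assumptions:&lt;/p&gt;

```yaml
# Hypothetical GitLab CI merge request pipeline for Terraform
stages: [validate, plan]

image:
  name: hashicorp/terraform:1.9
  entrypoint: [""]   # override the image entrypoint so GitLab can run scripts

validate:
  stage: validate
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - terraform init -backend=false
    - terraform validate
    - terraform fmt -check -recursive   # fail if files are not canonically formatted

plan:
  stage: plan
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - terraform init
    - terraform plan
```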

&lt;h3&gt;
  
  
  Infrastructure post merge request pipeline
&lt;/h3&gt;

&lt;p&gt;This pipeline consists of several further stages that apply the intended changes to the deployment environments. Once the fundamental changes have been successfully checked in the merge request pipeline, the infrastructure may be updated in a fully or semi-automated manner. Full automation is recommended when introducing changes to non-production environments, because the risk of exposing unwanted changes to customers is usually low or non-existent. Applying infrastructural changes to the production environment, on the other hand, often requires full awareness, attention, and controlled release management. Thus, such pipelines are created with the possibility to manually release changes to the production environment.&lt;/p&gt;

&lt;p&gt;Continuing with Terraform, an exemplary pipeline may be composed of the following stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initialization: initialize the working directory containing the Terraform configuration files.&lt;/li&gt;
&lt;li&gt;Plan: create an execution plan with the changes to be introduced to the infrastructure. It is beneficial to pass the plan file as an artifact to the next stage (Apply); this avoids potential inconsistency between subsequent invocations of the same command, as well as the need to run it again.&lt;/li&gt;
&lt;li&gt;Apply: the actual introduction of changes to the infrastructure. It is good practice to invoke the Apply stage manually to avoid introducing unwanted changes to the production environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A common practice is to store the state remotely and lock it to prevent concurrent executions against the same state. Terraform supports this natively through a suitable backend definition. Remote state storage enables seamless collaboration among team members; when using the cloud computing model and one of the well-known service providers, the state is typically kept in object storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2s5qy1irrkhfc9xfiza6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2s5qy1irrkhfc9xfiza6.png" alt="merge-request-pipeline-example.png" width="800" height="143"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;An exemplary pipeline automating two environments, staging and production. The execution environment is a GitLab Runner that controls and processes the CI/CD jobs and sends back the results.&lt;/em&gt;&lt;/p&gt;
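&lt;p&gt;A hypothetical GitLab CI configuration implementing these post-merge stages (job names, image tag, and rules are illustrative; the remote backend is assumed to be configured separately, and each environment would normally use its own state):&lt;/p&gt;

```yaml
# Hypothetical post-merge pipeline: the saved plan is handed to apply as an artifact;
# the production apply is gated behind a manual action.
stages: [plan, apply]

image:
  name: hashicorp/terraform:1.9
  entrypoint: [""]

plan:
  stage: plan
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - terraform init
    - terraform plan -out=plan.cache
  artifacts:
    paths: [plan.cache]

apply_staging:
  stage: apply
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  environment: staging
  script:
    - terraform init
    - terraform apply plan.cache   # applying a saved plan runs without a prompt

apply_production:
  stage: apply
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: manual                 # released only on explicit human action
  environment: production
  script:
    - terraform init
    - terraform apply plan.cache
```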

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;Proper design and organization of the software development process around the Continuous Integration concept, supported by automation, brings many benefits: not only increased efficiency and accelerated delivery, but also improved collaboration. Overall, it leads to shorter software development cycles while maintaining quality and security.&lt;/p&gt;

&lt;p&gt;Need support in automation? Let us do it in a DevOps way.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Maximizing Efficiency with Dev Containers: A Developer's Guide</title>
      <dc:creator>Tomasz Fidecki</dc:creator>
      <pubDate>Mon, 21 Jul 2025 07:00:00 +0000</pubDate>
      <link>https://forem.com/u11d/maximizing-efficiency-with-dev-containers-a-developers-guide-17a5</link>
      <guid>https://forem.com/u11d/maximizing-efficiency-with-dev-containers-a-developers-guide-17a5</guid>
      <description>&lt;h2&gt;
  
  
  Part I: The Role of Dev Containers in Modern Development
&lt;/h2&gt;

&lt;p&gt;In the software development landscape, we often face the need not just to think innovatively but to change the way we work and to build efficient development environments. The rise of containers changed the deployment blueprint, bringing a lightweight, scalable nature that is ideal for Kubernetes or cloud services.&lt;br&gt;
At the same time, the development workflow adopted the concept of containerization, with all its benefits, to create isolated, predictable, and transferable development environments. In a finite number of steps, one can create an image definition with all the required dependencies, build the image, and finally spin up the container.&lt;br&gt;
Visual Studio Code, a code editor with extensive plugin support, offers the &lt;a href="https://code.visualstudio.com/docs/devcontainers/containers" rel="noopener noreferrer"&gt;Dev Containers&lt;/a&gt; extension, enabling developers to use containers as development environments. This way, developers can make use of the full feature set of VS Code while gaining a seamless, consistent, and reproducible development experience across any platform. Moreover, as projects grow in complexity, especially in areas like AI/ML, embedded systems, and web development, the need for a consistent, reproducible, and scalable environment becomes even more critical.&lt;br&gt;
In this guide, we will discuss how Dev Containers can transform your development workflow, ensuring consistency, efficiency, and scalability from start to finish.&lt;/p&gt;
&lt;h2&gt;
  
  
  Reshaping Development: The container approach
&lt;/h2&gt;

&lt;p&gt;Developing inside a container transforms the traditional approach to software development by leveraging the power of Visual Studio Code's Dev Containers extension. This innovative method allows developers to utilize containers not just as a deployment mechanism but as dynamic, fully-featured development environments. By encapsulating the development environment within a container, it abstracts away the underlying operating system and hardware, providing a consistent, isolated, and reproducible workspace.&lt;/p&gt;

&lt;p&gt;The core of this approach lies in the &lt;code&gt;devcontainer.json&lt;/code&gt; file, a project-level configuration that instructs Visual Studio Code on how to access or construct the development container. This file specifies the container's tool and runtime stack, ensuring that every developer working on the project has an identical setup.&lt;/p&gt;

&lt;p&gt;Project files can be seamlessly integrated into the container environment, either by mounting from the local file system or by copying or cloning directly into the container. This integration extends to Visual Studio Code extensions, which are installed and executed within the container, granting them full access to the container's tools, platforms, and filesystems. Consequently, developers can enjoy a rich development experience with features like IntelliSense, code navigation, and debugging, regardless of the location of the tools or code.&lt;/p&gt;

&lt;p&gt;The Dev Containers extension offers two primary use cases for containers in development:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As a primary development environment: this model allows developers to use a container as their main workspace, ensuring that all development activities, from coding to debugging, are done within a consistent and containerized environment.&lt;/li&gt;
&lt;li&gt;For inspection and interaction with running containers: developers can attach to and interact with containers that are already running, which is particularly useful for debugging, inspecting state, or testing changes in a live environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that it is also possible to attach to a container in a Kubernetes cluster with the additional &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; extension.&lt;/p&gt;

&lt;p&gt;Supporting the open &lt;a href="https://containers.dev/" rel="noopener noreferrer"&gt;Dev Containers Specification&lt;/a&gt;, the extension encourages a standardized approach to configuring development environments across different tools and platforms. This specification aims to foster consistency and portability in development setups, making it easier for teams to collaborate and for individuals to switch projects without the overhead of reconfiguring their development environment.&lt;/p&gt;
&lt;h2&gt;
  
  
  Enhancing the Concept of Developing Inside a Container with Configuration Files
&lt;/h2&gt;

&lt;p&gt;After introducing the transformative approach of developing inside a container, it's essential to address how this environment can be precisely defined and configured. The heart of configuring Dev Containers in Visual Studio Code lies in the &lt;code&gt;devcontainer.json&lt;/code&gt; file, accompanied by a Dockerfile and Docker Compose files for a comprehensive environment setup. This trio of configuration files forms the backbone of a Dev Container, ensuring that the development environment is not only consistent but also customizable to project-specific requirements.&lt;/p&gt;
&lt;h3&gt;
  
  
  Utilizing devcontainer.json for Project-Level Configuration
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;devcontainer.json&lt;/code&gt; file acts as a project-level guide for Visual Studio Code, detailing how to access or construct the development container. It specifies the container's tool and runtime stack, aligning every developer with an identical setup. This configuration eliminates the common dilemma of "it works on my machine" by standardizing the development environment across the team. Here, you can define settings such as the container image to use, extensions to install within the container, and port forwarding rules for accessing web applications running inside the container.&lt;/p&gt;
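&lt;p&gt;As a minimal sketch (the image name, extension ID, and port are illustrative assumptions), a &lt;code&gt;devcontainer.json&lt;/code&gt; might look like this:&lt;/p&gt;

```json
{
  "name": "example-project",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "forwardPorts": [8000]
}
```

&lt;p&gt;Here the container image, the extensions installed inside the container, and the forwarded port are all pinned at the project level, so every developer gets the same setup.&lt;/p&gt;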
&lt;h3&gt;
  
  
  Leveraging Dockerfile for Custom Environment Setup
&lt;/h3&gt;

&lt;p&gt;While devcontainer.json specifies the environment's configuration, the Dockerfile goes a step further by allowing developers to define a custom image that includes all the necessary tools, libraries, and other dependencies. This file is crucial for projects with specific requirements not covered by existing container images. By customizing the Dockerfile, teams can create a tailored development environment that perfectly fits their project's needs, ensuring that all dependencies are pre-installed and configured upon container initialization.&lt;/p&gt;
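&lt;p&gt;For instance, a short, hypothetical Dockerfile can extend a base dev container image with project-specific tooling:&lt;/p&gt;

```dockerfile
# Hypothetical example: extend a base image with extra tools the project needs
FROM mcr.microsoft.com/devcontainers/python:3.12

# Pre-install dependencies so the container is ready to use on first start
RUN pip install --no-cache-dir "poetry==1.8.*"
```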
&lt;h3&gt;
  
  
  Orchestrating with Docker Compose for Complex Environments
&lt;/h3&gt;

&lt;p&gt;For more complex setups that involve multiple containers (e.g., a web application that requires a database and a Redis cache), Docker Compose files come into play. These files allow for the definition of multi-container Docker applications, where each service can be configured with its own image, environment variables, volumes, and network settings. Incorporating Docker Compose into the Dev Container setup enables teams to mirror their production environment closely, facilitating a smoother transition from development to deployment.&lt;/p&gt;
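&lt;p&gt;A hypothetical Docker Compose file for such a setup might look like this (service names, images, and credentials are illustrative):&lt;/p&gt;

```yaml
# Hypothetical docker-compose.yml: web app plus database and cache
services:
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
      - cache
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use secrets in real setups
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7
volumes:
  db-data:
```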
&lt;h3&gt;
  
  
  Bringing It All Together
&lt;/h3&gt;

&lt;p&gt;By understanding and utilizing the &lt;code&gt;devcontainer.json&lt;/code&gt;, Dockerfile, and Docker Compose files in tandem, developers gain unparalleled control over their development environments. This level of customization ensures that no matter the project's complexity or specific requirements, the development workflow remains streamlined, consistent, and efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dev Containers and the Power of modern IDE
&lt;/h2&gt;

&lt;p&gt;Dev Containers leverage the power of containerization, specifically Docker, to give developers a consistent, portable development environment, directly addressing the inconsistencies and inefficiencies of ad-hoc setups.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Consistency and portability: with Dev Containers, your development environment is defined by a single devcontainer.json file in your repository. This means that every developer working on the project will have the exact same setup. Whether you're working on an AI/ML project in Python without the need for virtual environments, or diving into embedded systems, Dev Containers ensure that everyone is on the same page.&lt;/li&gt;
&lt;li&gt;Flexibility: Dev Containers are incredibly versatile. You can use pre-built container images, modify existing ones, or even build your environment from scratch. This flexibility ensures that your environment is tailored to your project's specific needs.&lt;/li&gt;
&lt;li&gt;Integration with host machine: one of the standout features of Dev Containers is their ability to integrate seamlessly with the host machine. For instance, in embedded development, while the build process takes place within the container, the resulting files or artifacts are readily available on the host machine, thanks to mounted volumes. This ensures a smooth transition between development and deployment.&lt;/li&gt;
&lt;li&gt;Quick onboarding and environment replication: setting up a new development environment can be a time-consuming task, often fraught with installation errors and configuration hiccups. Dev Containers streamline this process, enabling new team members to get up and running with a fully configured development environment in minutes. This ease of replication also extends to deploying environments across different machines, ensuring that every developer works within the same setup, dramatically reducing the time spent on troubleshooting environment-specific issues.&lt;/li&gt;
&lt;li&gt;Enhanced productivity with pre-configured workspaces: Dev Containers come with the ability to pre-configure workspaces with the necessary tools, extensions, and settings for your project. This out-of-the-box setup saves developers from the hassle of manually configuring their development environment, allowing them to focus on what they do best: coding. Moreover, these containers can be customized to include additional software or packages specific to your project's needs, further enhancing productivity.&lt;/li&gt;
&lt;li&gt;Isolation from the host system: by leveraging Docker containerization, Dev Containers keep your project and its dependencies isolated from your host system. This isolation not only ensures that your project's environment remains clean and uncluttered but also prevents potential conflicts between different projects' dependencies. Moreover, since everything runs within a container, you can experiment with new tools or packages without the risk of affecting your host system's setup.&lt;/li&gt;
&lt;li&gt;A catalyst for collaboration: the reproducibility and portability of Dev Containers not only streamline the development process but also enhance team collaboration. With every member of the team working within an identical environment, sharing work, and collaborating on code becomes more straightforward. This uniformity helps in minimizing compatibility issues, making it easier to review and merge code changes.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Limitations of Dev Containers
&lt;/h2&gt;

&lt;p&gt;While Dev Containers offer numerous advantages, it's essential to be aware of their limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker dependency: to leverage Dev Containers, Docker must be installed on the machine. This adds an additional layer of setup for developers unfamiliar with Docker.&lt;/li&gt;
&lt;li&gt;Device limitations on Windows and macOS: due to the way Docker operates on Windows and macOS, there are challenges in passing through and using devices, such as USB ports or graphics accelerators.&lt;/li&gt;
&lt;li&gt;Potential performance overheads: running within a container might introduce some performance overheads compared to native development, especially when dealing with resource-intensive tasks.&lt;/li&gt;
&lt;li&gt;Cost implications: while Docker offers free licensing for personal use, businesses and larger teams might need to consider the pricing options provided by Docker. Depending on the scale and requirements of the project, this could introduce additional costs that need to be factored into the development budget.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Overcoming Dev Container Challenges: Performance and Device Compatibility
&lt;/h2&gt;

&lt;p&gt;While Dev Containers offer a transformative approach to development, certain challenges such as performance overhead and device limitations on Windows and macOS can affect their efficiency. In this section, we delve into strategies and best practices to mitigate these issues, ensuring a smooth development experience across all platforms.&lt;/p&gt;
&lt;h3&gt;
  
  
  Mitigating Performance Overhead
&lt;/h3&gt;

&lt;p&gt;Performance overhead, particularly in resource-intensive applications, can be a concern when using containers. However, several strategies can help minimize this impact:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; Resource allocation: Docker allows for the specification of CPU and memory limits for containers. Adjusting these settings can ensure that your containerized environment has sufficient resources without overburdening your system.&lt;/li&gt;
&lt;li&gt; Volume optimization: for applications that require extensive read/write operations, consider using Docker volumes. Volumes are managed by Docker and can offer better performance compared to bind mounts, especially on Windows and macOS.&lt;/li&gt;
&lt;li&gt; Docker Desktop settings: on Windows and macOS, Docker Desktop's settings can be tweaked for improved performance. For example, increasing the allocated memory and CPUs in Docker Desktop can significantly enhance the speed of your containers.&lt;/li&gt;
&lt;li&gt; Use &lt;code&gt;.dockerignore&lt;/code&gt; files: similar to &lt;code&gt;.gitignore&lt;/code&gt;, a &lt;code&gt;.dockerignore&lt;/code&gt; file can prevent unnecessary files from being built into your Docker context, reducing build time and minimizing potential performance issues.&lt;/li&gt;
&lt;/ul&gt;
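&lt;p&gt;As an illustration of the resource-allocation point, Docker's CPU and memory flags can be passed to a Dev Container through the &lt;code&gt;runArgs&lt;/code&gt; property of &lt;code&gt;devcontainer.json&lt;/code&gt;. A minimal sketch, with illustrative limit values:&lt;/p&gt;

```json
{
    "name": "resource-limited-example",
    "image": "mcr.microsoft.com/devcontainers/python:3-bullseye",
    // Illustrative limits: cap the container at 4 CPUs and 8 GB of memory
    "runArgs": ["--cpus=4", "--memory=8g"]
}
```

&lt;p&gt;Tuning these values per project keeps a heavyweight containerized build from starving the host, while still guaranteeing the environment the resources it needs.&lt;/p&gt;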
&lt;h3&gt;
  
  
  Navigating Device Limitations on Windows and macOS
&lt;/h3&gt;

&lt;p&gt;Device limitations, such as accessing USB devices or specific hardware from within a container, can pose challenges, particularly on Windows and macOS. Here are strategies to work around these limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;USB passthrough: while direct USB passthrough might be challenging, solutions like virtualizing the USB device or using network-based USB sharing software can help bridge the gap, allowing containers to interact with USB devices indirectly.&lt;/li&gt;
&lt;li&gt;Using Docker Toolbox: for specific use cases on older versions of Windows, the legacy Docker Toolbox (since deprecated in favor of Docker Desktop) can sometimes offer better hardware interfacing capabilities.&lt;/li&gt;
&lt;li&gt;Leveraging network protocols: for devices that can be accessed over the network (e.g., network-attached storage or certain IoT devices), configuring your container to communicate over the network can circumvent direct device access limitations.&lt;/li&gt;
&lt;li&gt;Hybrid development environments: for development scenarios that heavily rely on specific hardware, consider a hybrid approach. Use containers for the majority of development tasks but switch to native development environments for tasks requiring direct hardware access.&lt;/li&gt;
&lt;/ul&gt;
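&lt;p&gt;On Linux hosts, where Docker runs natively, direct device access is much simpler: a device node can be passed straight through to the Dev Container via &lt;code&gt;runArgs&lt;/code&gt;. A minimal sketch, assuming a USB serial adapter at the illustrative path &lt;code&gt;/dev/ttyUSB0&lt;/code&gt;:&lt;/p&gt;

```json
{
    "name": "embedded-example",
    "image": "mcr.microsoft.com/devcontainers/python:3-bullseye",
    // Illustrative: expose a USB serial adapter inside the container (Linux hosts only)
    "runArgs": ["--device=/dev/ttyUSB0"]
}
```

&lt;p&gt;Because the &lt;code&gt;--device&lt;/code&gt; flag relies on the host kernel exposing the device directly, this approach works on Linux but not through the virtual machines Docker uses on Windows and macOS, which is why the workarounds above are needed there.&lt;/p&gt;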
&lt;h2&gt;
  
  
  Real-world Applications of Dev Containers
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AI/ML development: with the rise of machine learning, setting up environments with the right libraries and dependencies can be a challenge. Dev Containers simplify this process. For instance, a Python-based machine learning project can leverage a Dev Container with pre-installed libraries like TensorFlow or PyTorch, ensuring that all developers have the same setup without the hassle of virtual environments.&lt;/li&gt;
&lt;li&gt;Embedded systems: embedded development often requires specific toolchains and configurations. With Dev Containers, these setups can be encapsulated within a container, ensuring consistency. Moreover, as mentioned earlier, the build process can occur within the container, with the resulting files available on the host machine, streamlining the development-to-deployment pipeline.&lt;/li&gt;
&lt;li&gt;Web development: whether you're working with Node.js, Django, or any other framework, Dev Containers provide a consistent environment for all developers. This ensures that the application behaves consistently across all stages of development.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The Value Proposition for Businesses and Project Leads
&lt;/h2&gt;

&lt;p&gt;For businesses and project managers, the value of Dev Containers is clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Efficiency: streamlined setups reduce onboarding time for new developers and eliminate environment-related bugs.&lt;/li&gt;
&lt;li&gt;Consistency: ensuring that all developers work in the same environment reduces discrepancies and ensures that the application behaves as expected across all stages.&lt;/li&gt;
&lt;li&gt;Scalability: as projects grow, Dev Containers make it easy to update the development environment without affecting individual developers.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Part II: A Comprehensive Developer's Guide to Utilizing Dev Containers
&lt;/h2&gt;

&lt;p&gt;The examples provided below demonstrate the use of Dev Containers across various scenarios, showcasing how this approach can significantly enhance development efficiency. In the following sections, we will outline the workflow for creating readily available Dev Containers, modifying images using a Dockerfile, and exploring a more advanced scenario: attaching Visual Studio Code (VSCode) to a container already running within a Kubernetes cluster.&lt;/p&gt;
&lt;h3&gt;
  
  
  The programming challenge
&lt;/h3&gt;

&lt;p&gt;Imagine we're tasked with developing a Python application. For this example, we'll use a script inspired by an official &lt;a href="https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-tensors" rel="noopener noreferrer"&gt;PyTorch tutorial&lt;/a&gt;. This tutorial addresses the challenge of approximating the function y=sin(x) using a third-order polynomial. The model, equipped with four parameters, employs gradient descent to optimize its fit to randomly generated data by minimizing the Euclidean distance between its output and the actual values.&lt;/p&gt;

&lt;p&gt;In our adaptation, we'll increase the training iterations to enhance learning accuracy without falling into the trap of overfitting. Given that our container lacks GPU support, computations will be performed solely on the CPU.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# -*- coding: utf-8 -*-
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;math&lt;/span&gt;


&lt;span class="n"&gt;dtype&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;float&lt;/span&gt;
&lt;span class="n"&gt;device&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;device&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cpu&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create random input and output data
&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;linspace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Randomly initialize weights
&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randn&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randn&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randn&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randn&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1e-6&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Forward pass: compute predicted y
&lt;/span&gt;    &lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;

    &lt;span class="c1"&gt;# Compute and print loss
&lt;/span&gt;    &lt;span class="n"&gt;loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;pow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;99&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Backprop to compute gradients of a, b, c, d with respect to loss
&lt;/span&gt;    &lt;span class="n"&gt;grad_y_pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;2.0&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;grad_a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;grad_y_pred&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;grad_b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grad_y_pred&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;grad_c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grad_y_pred&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;grad_d&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grad_y_pred&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Update weights using gradient descent
&lt;/span&gt;    &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;grad_a&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;grad_b&lt;/span&gt;
    &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;grad_c&lt;/span&gt;
    &lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;grad_d&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Result: y = &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; + &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; x + &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; x^2 + &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; x^3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script is a practical implementation of polynomial regression using PyTorch tensors and gradient descent. While the objective is to closely fit a cubic polynomial to the sine wave, the essence of gradient descent remains the same: iteratively refining model parameters to reduce the discrepancy between predicted outcomes and actual data. As the code executes, it prints the loss every 100 iterations, concluding with the optimized coefficients.&lt;/p&gt;
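&lt;p&gt;Stripped of tensors, the core loop amounts to a few lines of plain Python. A minimal sketch fitting a single-parameter model y = w·x shows the same iterative refinement; the data, starting weight, and learning rate here are chosen purely for illustration:&lt;/p&gt;

```python
# Minimal gradient-descent sketch: fit y = w * x to data generated with w_true = 2.0.
# Plain Python, no PyTorch; the update rule mirrors the script above.

xs = [0.5, 1.0, 1.5, 2.0, 2.5]
ys = [2.0 * x for x in xs]  # ground truth: w_true = 2.0

w = 0.0              # starting guess
learning_rate = 1e-2

for _ in range(2000):
    # Gradient of the squared-error loss sum((w*x - y)^2) with respect to w
    grad_w = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys))
    # Step against the gradient, exactly as the PyTorch script updates a, b, c, d
    w -= learning_rate * grad_w

print(f"Learned w = {w:.4f}")
```

&lt;p&gt;After a couple of thousand iterations the learned weight converges to the true value of 2.0; the PyTorch version does the same thing, just with four coefficients and tensor-valued data.&lt;/p&gt;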

&lt;h3&gt;
  
  
  Setting Up and Connecting to the Container
&lt;/h3&gt;

&lt;p&gt;Dev Containers support a versatile and expansive technology stack, making them ideal for development. For our purposes, we'll opt for a pre-configured Python image maintained by Microsoft, accessible in their &lt;a href="https://github.com/devcontainers/images" rel="noopener noreferrer"&gt;repository&lt;/a&gt;. Specifically, the Python development containers we're interested in are found under &lt;code&gt;src/python&lt;/code&gt;, with a comprehensive list of available images published at &lt;a href="https://mcr.microsoft.com/v2/devcontainers/python/tags/list" rel="noopener noreferrer"&gt;this location&lt;/a&gt;. We'll select the &lt;code&gt;mcr.microsoft.com/devcontainers/python&lt;/code&gt; image tagged &lt;code&gt;3-bullseye&lt;/code&gt;. This container comes equipped with Git, various Python tools, zsh with the Oh My Zsh framework, a non-root &lt;code&gt;vscode&lt;/code&gt; user with sudo privileges, and a suite of common development dependencies. For those curious about how the Docker image is constructed, further details can be found &lt;a href="https://github.com/devcontainers/images/tree/main/src/python/.devcontainer" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Moving forward with the setup, we'll first create a &lt;code&gt;.devcontainer&lt;/code&gt; folder within the project repository. Within this folder, a &lt;code&gt;devcontainer.json&lt;/code&gt; file is created to specify container configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"u11d-devcontainers-example"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcr.microsoft.com/devcontainers/python:3-bullseye"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"workspaceMount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"source=${localWorkspaceFolder},target=/development,type=bind,consistency=cached"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"workspaceFolder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/development"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"postCreateCommand"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"pip install torch==2.2.0 numpy==1.26.4"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Configure&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;tool-specific&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;properties.&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"customizations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Configure&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;properties&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;specific&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;VS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Code.&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"vscode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;*default*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;container&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;specific&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;settings.json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;values&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;container&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;create.&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"settings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"python.formatting.provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"black"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"editor.formatOnSave"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"python.languageServer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Pylance"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"python.analysis.typeCheckingMode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"basic"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Add&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;IDs&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;extensions&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;want&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;installed&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;when&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;container&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;created.&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"extensions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ms-python.python"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ms-python.vscode-pylance"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ms-python.black-formatter"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Container&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;VS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;use&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;when&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;connecting&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"remoteUser"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"vscode"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the folder and file are in place, Visual Studio Code (VSCode) will detect them and prompt you to build and open the folder within a container. Alternatively, you can initiate the container by selecting the appropriate action from the command palette (F1) or by opening a remote window through the icon at the bottom left corner of the IDE.&lt;/p&gt;

&lt;p&gt;Opening the folder in a container allows for debugging or executing Python code directly within VSCode or via the terminal, offering a streamlined development workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyemo20psl6n7clef7m0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyemo20psl6n7clef7m0.png" alt="screenshot-1-maximizng-efficiency-with-dev-containers.png" width="800" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Exploring the devcontainer.json configuration
&lt;/h4&gt;

&lt;p&gt;This section delves into the &lt;code&gt;devcontainer.json&lt;/code&gt; file, the key component in defining the configuration of a development container tailored to Python projects, here based on the &lt;code&gt;mcr.microsoft.com/devcontainers/python:3-bullseye&lt;/code&gt; image. Let's dissect the critical elements of this configuration:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;name: Optionally names the development container for easy identification.&lt;/li&gt;
&lt;li&gt;image: Determines the base Docker image for the container.&lt;/li&gt;
&lt;li&gt;workspaceMount: Specifies how the local project folder (${localWorkspaceFolder}) is mounted inside the container (/development).&lt;/li&gt;
&lt;li&gt;workspaceFolder: Sets the working directory inside the container.&lt;/li&gt;
&lt;li&gt;postCreateCommand: Executes a command to install specific Python libraries (e.g., pip install torch==2.2.0 numpy==1.26.4) after container initialization.&lt;/li&gt;
&lt;/ul&gt;
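&lt;p&gt;Putting just these core keys together, a minimal image-based &lt;code&gt;devcontainer.json&lt;/code&gt; could look like the sketch below. The values mirror those discussed in this article (the image, mount, and pinned libraries); adapt them to your own project:&lt;/p&gt;

```json
{
    "name": "u11d-devcontainers-example",
    "image": "mcr.microsoft.com/devcontainers/python:3-bullseye",
    "workspaceMount": "source=${localWorkspaceFolder},target=/development,type=bind,consistency=cached",
    "workspaceFolder": "/development",
    "postCreateCommand": "pip install torch==2.2.0 numpy==1.26.4"
}
```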

&lt;p&gt;&lt;strong&gt;VS Code Customizations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Settings: Adjusts default settings for VS Code within the container, such as:

&lt;ul&gt;
&lt;li&gt;python.formatting.provider for selecting the Python code formatter.&lt;/li&gt;
&lt;li&gt;editor.formatOnSave to enable automatic code formatting upon saving.&lt;/li&gt;
&lt;li&gt;python.languageServer and python.analysis.typeCheckingMode for enhanced Python language support.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Extensions: Lists VS Code extensions to be auto-installed in the container, including the official Python extension, Pylance language server, and Black formatter.&lt;/li&gt;

&lt;li&gt;Additional Configuration:

&lt;ul&gt;
&lt;li&gt;remoteUser: Identifies the user account that VS Code should utilize when connecting to the container (typically &lt;code&gt;vscode&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
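&lt;p&gt;The VS Code-specific pieces described above live under a &lt;code&gt;customizations.vscode&lt;/code&gt; key in &lt;code&gt;devcontainer.json&lt;/code&gt;. A hedged sketch of that fragment follows; the extension IDs are the official Microsoft Python, Pylance, and Black extensions, and the setting values shown are common choices rather than the article's exact file:&lt;/p&gt;

```json
{
    "customizations": {
        "vscode": {
            "settings": {
                "python.formatting.provider": "black",
                "editor.formatOnSave": true,
                "python.languageServer": "Pylance",
                "python.analysis.typeCheckingMode": "basic"
            },
            "extensions": [
                "ms-python.python",
                "ms-python.vscode-pylance",
                "ms-python.black-formatter"
            ]
        }
    }
}
```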

&lt;p&gt;This setup, which doesn't require profound Docker knowledge, demonstrates fundamental Docker practices: using a pre-made Docker image for a specific development environment (Python 3), seamlessly mounting the local project directory into the container, and executing commands to install additional software within it. The &lt;code&gt;devcontainer.json&lt;/code&gt; file thus offers a streamlined approach to creating a consistent and reproducible Python development environment leveraging Docker and VS Code.&lt;/p&gt;

&lt;p&gt;For scenarios requiring more complex configurations, the &lt;code&gt;devcontainer.json&lt;/code&gt; file also supports referencing a Dockerfile or Docker Compose files. By adding a &lt;code&gt;"build"&lt;/code&gt; section, you can point to a specific Dockerfile and define the build context (relative to the &lt;code&gt;devcontainer.json&lt;/code&gt; file), ensuring that all necessary instructions and files are in place for constructing the Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"u11d-devcontainers-example"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"build"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"dockerfile"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Dockerfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"context"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".."&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"workspaceMount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"source=${localWorkspaceFolder},target=/development,type=bind,consistency=cached"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"workspaceFolder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/development"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"postCreateCommand"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"pip install torch==2.2.0 numpy==1.26.4"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;…&lt;/span&gt;&lt;span class="w"&gt;    
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Container&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;VS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;use&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;when&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;connecting&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"remoteUser"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"vscode"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;build&lt;/code&gt; section specifies the instructions and files needed to build the Docker image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using a Dockerfile: the &lt;code&gt;"dockerfile": "Dockerfile"&lt;/code&gt; part indicates that the build process will use the file named "Dockerfile" located in the same directory as the configuration file. This file contains the commands and instructions to create the image layers.&lt;/li&gt;
&lt;li&gt;Build context: the &lt;code&gt;"context": ".."&lt;/code&gt; part defines the location of the files and folders that will be available to the build process. In this case, it specifies the parent directory ("..") of the configuration file, meaning all files and folders in that directory (except those ignored by a .dockerignore file, if present) will be accessible during the build.&lt;/li&gt;
&lt;/ul&gt;
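&lt;p&gt;For illustration, a minimal Dockerfile that would slot into this &lt;code&gt;build&lt;/code&gt; section might look like the sketch below. The base image matches the one used earlier in the article; the extra apt package is a placeholder assumption showing where project-specific OS-level tooling would go:&lt;/p&gt;

```dockerfile
# Same Python dev-container base image as in the image-based configuration
FROM mcr.microsoft.com/devcontainers/python:3-bullseye

# Example of OS-level tooling layered on top of the base image
# (graphviz is a placeholder; substitute whatever your project needs)
RUN apt-get update \
    && apt-get install -y --no-install-recommends graphviz \
    && rm -rf /var/lib/apt/lists/*
```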

&lt;p&gt;Advantages of Utilizing a Custom Dockerfile:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customized environment: tailor the environment with only the necessary tools and libraries, optimizing image size and efficiency.&lt;/li&gt;
&lt;li&gt;Version control: maintain the Dockerfile alongside project code, guaranteeing uniformity and reproducibility.&lt;/li&gt;
&lt;li&gt;Enhanced security: gain greater oversight over package sources and the security framework of your development environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For further insights on configuring and employing development environments via Dev Containers, consult the comprehensive &lt;a href="https://code.visualstudio.com/docs/devcontainers/create-dev-container" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting to a Running Container in Kubernetes
&lt;/h3&gt;

&lt;p&gt;In this section, we'll explore the scenario of connecting to a container that's running within a Kubernetes cluster pod. For demonstration purposes, we're using the &lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;Google Kubernetes Engine (GKE) service&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Assume the cluster is operational, and our objective is to deploy a new container equipped with all the dependencies needed to run our Python code. This time we raise the complexity by using PyTorch tensors, which can leverage a GPU for accelerated computation, so we'll opt for a Google-recommended image that ships with the necessary tech stack.&lt;/p&gt;

&lt;p&gt;To adapt our code for GPU acceleration, we'll switch the computation device from CPU to CUDA.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# -*- coding: utf-8 -*-
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;math&lt;/span&gt;


&lt;span class="n"&gt;dtype&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;float&lt;/span&gt;
&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;device&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cuda:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;

&lt;span class="c1"&gt;# Create random input and output data
&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;linspace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Randomly initialize weights
&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randn&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randn&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randn&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randn&lt;/span&gt;&lt;span class="p"&gt;((),&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;device&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1e-6&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Forward pass: compute predicted y
&lt;/span&gt;    &lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;

    &lt;span class="c1"&gt;# Compute and print loss
&lt;/span&gt;    &lt;span class="n"&gt;loss&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;pow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;99&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Backprop to compute gradients of a, b, c, d with respect to loss
&lt;/span&gt;    &lt;span class="n"&gt;grad_y_pred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;2.0&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_pred&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;grad_a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;grad_y_pred&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;grad_b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grad_y_pred&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;grad_c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grad_y_pred&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;grad_d&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;grad_y_pred&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Update weights using gradient descent
&lt;/span&gt;    &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;grad_a&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;grad_b&lt;/span&gt;
    &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;grad_c&lt;/span&gt;
    &lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="n"&gt;learning_rate&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;grad_d&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Result: y = &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; + &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; x + &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; x^2 + &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;item&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; x^3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
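&lt;p&gt;As a quick sanity check of the math before spending GPU time, the same manual gradient-descent loop can be run in plain Python, with no PyTorch at all. This is a scaled-down sketch rather than the article's script; the point and iteration counts here are arbitrary illustrative values:&lt;/p&gt;

```python
import math

# Plain-Python sketch of the same manual gradient-descent fit (no PyTorch,
# no GPU) -- handy for sanity-checking the gradient formulas locally before
# running the CUDA version in the cluster pod.
n = 200
xs = [-math.pi + 2 * math.pi * i / (n - 1) for i in range(n)]
ys = [math.sin(x) for x in xs]

a = b = c = d = 0.0
learning_rate = 1e-6
initial_loss = sum(y * y for y in ys)  # loss with all-zero weights

for t in range(5000):
    # Forward pass: y_pred = a + b x + c x^2 + d x^3
    preds = [a + b * x + c * x ** 2 + d * x ** 3 for x in xs]
    # Gradient of the squared-error loss with respect to each prediction
    grads = [2.0 * (p - y) for p, y in zip(preds, ys)]
    # Backprop to the four coefficients and update them
    a -= learning_rate * sum(grads)
    b -= learning_rate * sum(g * x for g, x in zip(grads, xs))
    c -= learning_rate * sum(g * x ** 2 for g, x in zip(grads, xs))
    d -= learning_rate * sum(g * x ** 3 for g, x in zip(grads, xs))

loss = sum((a + b * x + c * x ** 2 + d * x ** 3 - y) ** 2
           for x, y in zip(xs, ys))
print(f"loss: {initial_loss:.2f} -> {loss:.2f}")
```

&lt;p&gt;The loss should drop well below its all-zero-weights starting value, confirming the gradient formulas before moving the workload onto the GPU.&lt;/p&gt;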



&lt;p&gt;Because our container will be hosted in a deployed Kubernetes cluster, the &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools" rel="noopener noreferrer"&gt;VSC Kubernetes extension&lt;/a&gt; and the &lt;code&gt;kubectl&lt;/code&gt; command-line tool are needed.&lt;/p&gt;

&lt;p&gt;Initial Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure connectivity to the cluster and correct namespace usage with commands like &lt;code&gt;kubectl cluster-info&lt;/code&gt; and &lt;code&gt;kubectl get nodes&lt;/code&gt;, verifying the cluster's accessibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Set context and namespace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before deploying resources or executing commands with &lt;code&gt;kubectl&lt;/code&gt;, it's a best practice to set both the context and namespace explicitly. The context determines which cluster you're interacting with, while the namespace scopes your operations to a specific area within that cluster. This preparatory step ensures that your commands are executed against the correct cluster and within the intended namespace, reducing the risk of unintended actions. Notably, objects created without a specified namespace are placed in the Kubernetes "default" namespace by default. Relying excessively on the "default" namespace can complicate object segregation and management, as it becomes a catch-all space for various unrelated resources. Properly setting your context and namespace helps maintain a clean, organized cluster environment, facilitating easier resource tracking and management.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# permanently save the namespace for all subsequent kubectl commands in that context&lt;/span&gt;
kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; ml-experiments

&lt;span class="c"&gt;# display list of contexts&lt;/span&gt;
kubectl config get-contexts

&lt;span class="c"&gt;# display the current-context&lt;/span&gt;
kubectl config current-context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating a GPU-Enabled Pod:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We aim to create a pod hosting a container on a GPU-enabled node (specifically, an NVIDIA L4 instance). This involves applying a Kubernetes manifest detailing our pod configuration, named &lt;code&gt;ml-runner-gpu&lt;/code&gt; within the &lt;code&gt;ml-experiments&lt;/code&gt; namespace.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ml-experiments&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ml-runner-gpu&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ml-runner-gpu&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/deeplearning-platform-release/pytorch-gpu.1-13.py310&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ml-runner&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-ec"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tail&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-f&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/dev/null"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
          &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;12Gi"&lt;/span&gt;
          &lt;span class="na"&gt;nvidia.com/gpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nvidia.com/gpu&lt;/span&gt;
      &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Equal&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
  &lt;span class="na"&gt;dnsPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterFirst&lt;/span&gt;
  &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pod manifest breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kind: identifies the resource type as Pod.&lt;/li&gt;
&lt;li&gt;Metadata: specifies the Pod's namespace (&lt;code&gt;ml-experiments&lt;/code&gt;), name (&lt;code&gt;ml-runner-gpu&lt;/code&gt;), and labels for organization.&lt;/li&gt;
&lt;li&gt;Containers: outlines the container setup, including the image (&lt;code&gt;gcr.io/deeplearning-platform-release/pytorch-gpu.1-13.py310&lt;/code&gt;) equipped with PyTorch and GPU support, and a command to keep the container running.&lt;/li&gt;
&lt;li&gt;Resources: defines resource limits, including CPU, memory, and GPU usage.&lt;/li&gt;
&lt;li&gt;Tolerations: allows the Pod to be scheduled onto GPU nodes carrying the &lt;code&gt;nvidia.com/gpu&lt;/code&gt; taint, which would otherwise repel Pods that don't explicitly tolerate it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This Pod configuration is tailored for deep learning tasks with PyTorch, emphasizing GPU utilization. Despite the indefinite running command, the primary goal is to ensure the container's readiness for development tasks.&lt;/p&gt;

&lt;p&gt;Connecting via Visual Studio Code (VSC):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apply the manifest: &lt;code&gt;kubectl apply -f ml-runner-gpu.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Confirm the Pod's active state: &lt;code&gt;kubectl get pods&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use VSC to attach to the pod directly: navigate to Kubernetes in VSC, right-click the pod, and select "Attach Visual Studio Code".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Working with source code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To work with the Python script inside the container, transfer it via &lt;code&gt;kubectl cp&lt;/code&gt;. This places the file in the container's &lt;code&gt;/home&lt;/code&gt; directory, ready for execution or modification.
&lt;code&gt;kubectl cp pytorch-example.py ml-runner-gpu:/home/&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Retrieving modified code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Post-modification, the script can be copied back to the local machine using a similar &lt;code&gt;kubectl cp&lt;/code&gt; command, facilitating easy iteration on the code.
&lt;code&gt;kubectl cp ml-runner-gpu:/home/pytorch-example.py pytorch-example.py&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cleanup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Conclude experiments by deleting the Pod, freeing up its resources.
&lt;code&gt;kubectl delete pod ml-runner-gpu&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach showcases the capability to leverage remote resources, like GPU acceleration, not readily available locally, enhancing the development and testing of compute-intensive applications.&lt;/p&gt;

&lt;p&gt;This container setup not only facilitates the execution of Python scripts but also supports running Jupyter notebooks, thanks to a Jupyter server installed within the container. This addition enhances the container's versatility, allowing for an interactive development environment that's ideal for data analysis, visualization, and testing complex algorithms directly in the IDE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dhm3qdqer6cy8c7726j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dhm3qdqer6cy8c7726j.png" alt="screenshot-2-maximizng-efficiency-with-dev-containers.png" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  From traditional setups to Dev Containers
&lt;/h3&gt;

&lt;p&gt;Traditionally, setting up a development environment locally involves a series of time-consuming steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;installing the correct versions of languages, libraries, and tools;&lt;/li&gt;
&lt;li&gt;configuring these components to work together; and&lt;/li&gt;
&lt;li&gt;ensuring compatibility across team members' machines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This process not only demands a substantial initial investment of time but also ongoing maintenance to keep the environment updated and in sync with project requirements. The complexity escalates with the project's growth, as more dependencies and configurations are required, increasing the potential for discrepancies among team members' environments.&lt;/p&gt;

&lt;p&gt;Dev Containers streamline this process by encapsulating the development environment within a container. This approach eliminates the need to manually set up and maintain individual development environments on each developer's machine. Instead, developers can instantly spin up pre-configured containers that mirror the project's exact requirements, ensuring consistency across all team members' environments. This not only accelerates the initial setup process but also significantly reduces the effort involved in onboarding new team members and transitioning between projects. Moreover, Dev Containers abstract away the underlying OS differences, providing additional value by ensuring that the development environment is truly cross-platform and reproducible, regardless of whether the developer is working on Windows, macOS, or Linux.&lt;/p&gt;

&lt;p&gt;In essence, the concept of Dev Containers shifts the focus from managing development environments to actual development work, offering a more efficient, consistent, and scalable solution to the challenges of modern software development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The introduction of Visual Studio Code's Dev Containers marks a pivotal shift in the software development paradigm, simplifying the creation and maintenance of consistent development environments across diverse platforms. By encapsulating development tools and configurations within containers, Dev Containers not only facilitate a seamless transition to cloud-based workflows but also empower developers to focus more on innovation and less on configuration. This breakthrough addresses the unique needs of both solo developers and teams, enhancing productivity, fostering collaboration, and ensuring a stable development experience.&lt;/p&gt;

&lt;p&gt;Dev Containers do come with certain limitations, but the advantages they offer significantly outweigh these concerns, positioning Dev Containers as an essential asset in a developer's toolkit.&lt;/p&gt;

&lt;p&gt;Embracing Dev Containers goes beyond a simple enhancement; it signifies a fundamental change that lays the groundwork for continuous innovation and progress. Why wait? Explore the transformative potential of Dev Containers and witness firsthand the impact they can have on your projects.&lt;/p&gt;

&lt;p&gt;For those seeking to bolster their Docker expertise, our Docker series offers a wealth of knowledge, beginning with strategies to accelerate Docker image builds through efficient cache management. Start enhancing your Docker skills today: &lt;a href="https://u11d.com/blog/speed-up-docker-image-builds-with-cache-management/" rel="noopener noreferrer"&gt;Speed Up Docker Image Builds With Cache Management&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For insights into the latest &lt;a href="https://www.designrush.com/agency/web-development-companies/trends/website-optimization" rel="noopener noreferrer"&gt;website optimization&lt;/a&gt; trends, explore these recommendations.&lt;/p&gt;

</description>
      <category>vscode</category>
      <category>docker</category>
      <category>developer</category>
      <category>dev</category>
    </item>
    <item>
      <title>Templating Values in Kustomize: Unlocking the Potential of Dynamic Naming for Kubernetes Resources</title>
      <dc:creator>Tomasz Fidecki</dc:creator>
      <pubDate>Mon, 07 Jul 2025 08:48:18 +0000</pubDate>
      <link>https://forem.com/u11d/templating-values-in-kustomize-unlocking-the-potential-of-dynamic-naming-for-kubernetes-resources-2433</link>
      <guid>https://forem.com/u11d/templating-values-in-kustomize-unlocking-the-potential-of-dynamic-naming-for-kubernetes-resources-2433</guid>
      <description>&lt;p&gt;In the world of Kubernetes, managing and customizing configurations across multiple environments or instances can be both crucial and complex. Enter &lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/" rel="noopener noreferrer"&gt;Kustomize&lt;/a&gt; – a tool that enhances Kubernetes' native configuration management capabilities. Among its many features, one stands out for its potential to significantly streamline and dynamize configuration: the ability to template values using the replacements feature. Though not widely explored, this feature can be a very handy in DevOps life, particularly when it comes to dynamically building resource names and other values within Kubernetes manifests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding replacements in Kustomize
&lt;/h2&gt;

&lt;p&gt;Before diving into examples, let's understand what replacements in Kustomize are. As detailed in &lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/" rel="noopener noreferrer"&gt;the official documentation&lt;/a&gt;, replacements allow you to specify fields from one resource that should be used to replace fields in another. This can include anything from simple value substitutions to more complex scenarios like dynamically building names based on other resource attributes.&lt;/p&gt;

&lt;p&gt;This feature opens up a variety of possibilities for making your configurations more flexible and adaptable to different environments, deployment scenarios, or naming conventions.&lt;/p&gt;
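&lt;p&gt;As a rough sketch of the shape (field names as in the official documentation; the resource names here are hypothetical), a single replacement pairs one &lt;code&gt;source&lt;/code&gt; field with one or more &lt;code&gt;targets&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;replacements:
  - source:                 # where to read the value from
      kind: ConfigMap
      name: some-config     # hypothetical resource name
      fieldPath: data.region
    targets:                # where to write the value to
      - select:
          kind: Deployment
        fieldPaths:
          - spec.template.metadata.labels.region
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;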

&lt;h2&gt;
  
  
  Dynamic naming with replacements
&lt;/h2&gt;

&lt;p&gt;One of the most compelling applications of the replacements feature lies in constructing dynamic names for resources such as PersistentVolumeClaims (PVCs), ConfigMaps, Deployments, and more. This capability unlocks the potential to not only copy data from a single source manifest into multiple specified targets but also to refine selection with precision. Developers can specify targets using the &lt;code&gt;select&lt;/code&gt; field and exclude specific matches by utilizing the &lt;code&gt;reject&lt;/code&gt; field. This granularity introduces the flexibility needed to dynamically create manifests.&lt;/p&gt;
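&lt;p&gt;The examples below rely on &lt;code&gt;select&lt;/code&gt; alone; for completeness, a target that matches every PersistentVolumeClaim except one (the name below is hypothetical) might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;targets:
  - select:
      kind: PersistentVolumeClaim   # match all PVCs...
    reject:
      - name: legacy-pvc            # ...except this one (hypothetical name)
    fieldPaths:
      - metadata.namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;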

&lt;p&gt;Let's explore some practical examples.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1: Dynamically named PVCs
&lt;/h3&gt;

&lt;p&gt;Consider a scenario where you have a PVC with a base name and want to append a dynamic segment to it based on the deployment name. For simplicity, let's say our base PVC name is &lt;code&gt;fast-pvc-&lt;/code&gt;, and we want to create a naming convention like &lt;code&gt;fast-pvc-{deployment-name}&lt;/code&gt;. In addition, we want to copy the &lt;code&gt;namespace&lt;/code&gt; from the Deployment into the PVC.&lt;/p&gt;

&lt;p&gt;Here's how you could achieve this using Kustomize replacements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define the base PersistentVolumeClaim in the &lt;code&gt;pvc.yaml&lt;/code&gt; file.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fast-pvc-&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Gi&lt;/span&gt;
  &lt;span class="na"&gt;volumeMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Filesystem&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;persistentVolumeReclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Recycle&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fast&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Next, create the &lt;code&gt;deployment.yaml&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ml-experiments&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;server&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.25.4&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;512Mi&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
            &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;64Mi&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;250m"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Lastly, create a &lt;code&gt;kustomization.yaml&lt;/code&gt; file with an additional section that specifies the replacement rules:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kustomize.config.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Kustomization&lt;/span&gt;

&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pvc.yaml&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deployment.yaml&lt;/span&gt;

&lt;span class="na"&gt;replacements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-nginx-deployment&lt;/span&gt;
      &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.namespace&lt;/span&gt;
    &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;select&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fast-pvc-&lt;/span&gt;
          &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
        &lt;span class="na"&gt;fieldPaths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;metadata.namespace&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-nginx-deployment&lt;/span&gt;
      &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.name&lt;/span&gt;
    &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;select&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fast-pvc-&lt;/span&gt;
          &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
        &lt;span class="na"&gt;fieldPaths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;metadata.name&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;delimiter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-"&lt;/span&gt;
          &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration instructs Kustomize to perform two actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extract the &lt;code&gt;metadata.namespace&lt;/code&gt; field from the Deployment manifest and replicate it within the PersistentVolumeClaim under the same &lt;code&gt;metadata.namespace&lt;/code&gt; field. Notably, the &lt;code&gt;create&lt;/code&gt; option is set to &lt;code&gt;true&lt;/code&gt;, enabling the addition of the field in the target if it is absent. Essentially, the value from the Deployment will either overwrite the existing one or be newly created in the PVC.&lt;/li&gt;
&lt;li&gt;Retrieve the &lt;code&gt;metadata.name&lt;/code&gt; from the Deployment resource and concatenate it with the &lt;code&gt;metadata.name&lt;/code&gt; of the PVC, leading to a dynamically generated PVC name: &lt;code&gt;fast-pvc-my-nginx-deployment&lt;/code&gt;. This manipulation leverages the optional &lt;code&gt;delimiter&lt;/code&gt; and &lt;code&gt;index&lt;/code&gt; parameters for partial string replacement: splitting &lt;code&gt;fast-pvc-&lt;/code&gt; on &lt;code&gt;-&lt;/code&gt; yields the segments &lt;code&gt;fast&lt;/code&gt;, &lt;code&gt;pvc&lt;/code&gt;, and an empty third segment, and index &lt;code&gt;2&lt;/code&gt; replaces that empty segment with the Deployment name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To generate the Kustomize output, use the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl kustomize example-pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note that in this example, the relevant files are located within the &lt;code&gt;example-pvc&lt;/code&gt; directory, and the &lt;code&gt;kubectl&lt;/code&gt; command is executed from the parent directory.&lt;/p&gt;
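&lt;p&gt;For clarity, the assumed directory layout looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;.
└── example-pvc
    ├── deployment.yaml
    ├── kustomization.yaml
    └── pvc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;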

&lt;p&gt;The output of &lt;code&gt;kustomize build&lt;/code&gt; will be as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fast-pvc-my-nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ml-experiments&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Gi&lt;/span&gt;
  &lt;span class="na"&gt;persistentVolumeReclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Recycle&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8Gi&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fast&lt;/span&gt;
  &lt;span class="na"&gt;volumeMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Filesystem&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;server&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ml-experiments&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.25.4&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;512Mi&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;250m&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;64Mi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the context of this example, dynamic naming and namespace adjustments were illustrated using the replacements feature. It's important to highlight that the specific action of namespace replacement serves merely as an example to showcase the versatility of Kustomize. This particular task of adjusting the namespace can also be directly achieved using a tool specifically designed for this purpose within Kustomize. The &lt;code&gt;namespace&lt;/code&gt; transformer offers a straightforward method to set or change the namespace for all resources in a kustomization at once, simplifying the process for common namespace adjustments. For more details on this transformer and its usage, please refer to the official Kustomize documentation: &lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/namespace/" rel="noopener noreferrer"&gt;Setting the Namespace with Kustomize&lt;/a&gt;. As always, developers should choose the most effective tool for their specific configuration needs.&lt;/p&gt;
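&lt;p&gt;For comparison, a minimal sketch of that approach is a single &lt;code&gt;namespace&lt;/code&gt; field in &lt;code&gt;kustomization.yaml&lt;/code&gt;, which sets the namespace on every resource listed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: ml-experiments   # applied to all resources below

resources:
  - pvc.yaml
  - deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;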

&lt;h3&gt;
  
  
  Example 2: ConfigMap values based on deployment environment
&lt;/h3&gt;

&lt;p&gt;Suppose you aim to adjust ConfigMap values based on the deployment's environment (e.g., development, staging, production) and the parameter count of the machine learning model being deployed. You can dynamically template these values using replacements.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define your ConfigMap in the &lt;code&gt;configMap.yaml&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ai-app-config&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
  &lt;span class="na"&gt;modelVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;llm--v.05&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Create a deployment in the &lt;code&gt;deployment.yaml&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-ai-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ml-experiments&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;
    &lt;span class="na"&gt;modelParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;7B&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Define replacements in a separate file, contrasting with the inline method used previously. Create a file named &lt;code&gt;model-replacement.yaml&lt;/code&gt; with the following content:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-ai-app&lt;/span&gt;
  &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.labels.modelParameters&lt;/span&gt;
&lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;select&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ai-app-config&lt;/span&gt;
  &lt;span class="na"&gt;fieldPaths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;data.modelVersion&lt;/span&gt;
  &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;delimiter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-"&lt;/span&gt;
    &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Create the &lt;code&gt;kustomization.yaml&lt;/code&gt; file as the final step:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kustomize.config.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Kustomization&lt;/span&gt;

&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;configMap.yaml&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deployment.yaml&lt;/span&gt;

&lt;span class="na"&gt;replacements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-ai-app&lt;/span&gt;
      &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.labels.environment&lt;/span&gt;
    &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;select&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ai-app-config&lt;/span&gt;
        &lt;span class="na"&gt;fieldPaths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;data.environment&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
      &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.namespace&lt;/span&gt;
    &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;select&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-config&lt;/span&gt;
        &lt;span class="na"&gt;fieldPaths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;metadata.namespace&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;model-replacement.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration dynamically updates the ConfigMap's &lt;code&gt;data.environment&lt;/code&gt; with the environment label from the Deployment, facilitating environment-specific settings. It demonstrates the use of inline replacements (directly in the &lt;code&gt;kustomization.yaml&lt;/code&gt; file) and the capability to define replacements in an external file (&lt;code&gt;model-replacement.yaml&lt;/code&gt;), specified by the &lt;code&gt;path&lt;/code&gt; field. By leveraging the optional &lt;code&gt;delimiter&lt;/code&gt; and &lt;code&gt;index&lt;/code&gt; fields, it also incorporates the machine learning model's parameter count into the &lt;code&gt;modelVersion&lt;/code&gt; value: &lt;code&gt;llm--v.05&lt;/code&gt; splits on &lt;code&gt;-&lt;/code&gt; into &lt;code&gt;llm&lt;/code&gt;, an empty segment, and &lt;code&gt;v.05&lt;/code&gt;, and index &lt;code&gt;1&lt;/code&gt; replaces the empty segment with &lt;code&gt;7B&lt;/code&gt;, yielding &lt;code&gt;llm-7B-v.05&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Again, to generate the Kustomize output, use the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl kustomize example-configMap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note that in this example, the relevant files are located within the &lt;code&gt;example-configMap&lt;/code&gt; directory, and the &lt;code&gt;kubectl&lt;/code&gt; command is executed from the parent directory.&lt;/p&gt;

&lt;p&gt;This time, the Kustomize build output is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;
  &lt;span class="na"&gt;modelVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;llm-7B-v.05&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ai-app-config&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;
    &lt;span class="na"&gt;modelParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;7B&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-ai-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ml-experiments&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
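
&lt;p&gt;To make the &lt;code&gt;delimiter&lt;/code&gt; and &lt;code&gt;index&lt;/code&gt; mechanics explicit: Kustomize splits the target value on the delimiter and replaces only the segment at the given index with the source value. The minimal sketch below shows how the &lt;code&gt;modelVersion&lt;/code&gt; above could be produced, assuming the base ConfigMap starts with a placeholder value of &lt;code&gt;llm-base-v.05&lt;/code&gt; (an assumed value, chosen here for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;replacements:
  - source:
      kind: Deployment
      name: my-ai-app
      fieldPath: metadata.labels.modelParameters  # resolves to "7B"
    targets:
      - select:
          kind: ConfigMap
          name: ai-app-config
        fieldPaths:
          - data.modelVersion
        options:
          delimiter: "-"  # "llm-base-v.05" splits into ["llm", "base", "v.05"]
          index: 1        # the segment at index 1 becomes "7B", giving "llm-7B-v.05"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;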



&lt;h2&gt;
  
  
  Advantages of Dynamic Naming
&lt;/h2&gt;

&lt;p&gt;Implementing dynamic naming and value templating with Kustomize offers numerous advantages for Kubernetes configuration management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistency: Achieves naming consistency across Kubernetes resources, significantly reducing the risk of manual errors and inconsistencies in your deployments.&lt;/li&gt;
&lt;li&gt;Automation: Facilitates the automation of deployment processes across varied environments, ensuring that customizations are applied consistently and efficiently.&lt;/li&gt;
&lt;li&gt;Flexibility: Enables the modification of configurations dynamically without the need to directly alter the base manifests. This approach enhances the maintainability and scalability of your Kubernetes setup, allowing for easier adaptations as requirements evolve.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;The replacements feature in Kustomize represents an exceptionally versatile yet underexploited capability within the Kubernetes ecosystem. As demonstrated through the examples of dynamically named PersistentVolumeClaims, ConfigMaps adjusted based on deployment environments, and the incorporation of deployment-specific parameters into ConfigMaps, this feature substantially augments the flexibility and dynamism of Kubernetes configurations.&lt;/p&gt;

&lt;p&gt;By applying dynamic naming and value templating, you can more effectively manage and streamline complex Kubernetes environments. This encompasses everything from resource naming and configuration value adjustments to tailoring settings for specific deployment contexts. The ability to precisely control these aspects through Kustomize not only simplifies administrative tasks but also empowers developers and operators with a more resilient and adaptable infrastructure.&lt;/p&gt;

&lt;p&gt;As Kubernetes continues to evolve, the integration of advanced features like replacements in Kustomize becomes crucial, especially for staying at the forefront of container orchestration's rapid development. This is particularly true in fields like machine learning, where deploying and verifying models demand unparalleled efficiency and adaptability. By weaving these practices into your Kubernetes strategy, you're not just enhancing current deployment processes; you're also laying the groundwork for future innovations. This approach ensures that your infrastructure is primed for the dynamic needs of machine learning model deployment and verification, unlocking new potentials for automation and scalability.&lt;/p&gt;

&lt;p&gt;We encourage you to experiment with these techniques, explore the capabilities of Kustomize further, and consider how dynamic naming and templating can enhance your Kubernetes projects. The journey towards more agile and responsive infrastructure management starts here.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
