<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Navin Prasad </title>
    <description>The latest articles on Forem by Navin Prasad  (@navinprasadk).</description>
    <link>https://forem.com/navinprasadk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F309678%2Fc9d7c1d3-284c-4638-9e45-3c019e0d9303.jpeg</url>
      <title>Forem: Navin Prasad </title>
      <link>https://forem.com/navinprasadk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/navinprasadk"/>
    <language>en</language>
    <item>
      <title>Building Multi-Platform Container Images with Docker Buildx</title>
      <dc:creator>Navin Prasad </dc:creator>
      <pubDate>Mon, 22 May 2023 13:39:36 +0000</pubDate>
      <link>https://forem.com/kcdchennai/building-multi-platform-container-images-with-docker-buildx-44bl</link>
      <guid>https://forem.com/kcdchennai/building-multi-platform-container-images-with-docker-buildx-44bl</guid>
      <description>&lt;p&gt;&lt;em&gt;Disclaimer: This article was originally published on &lt;a href="https://navinprasadk.medium.com/building-multi-platform-container-images-with-docker-buildx-c2d051d5ca2c"&gt;Medium.com&lt;/a&gt; and has been reposted here on Dev.to&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If your containers are intended to run on multiple platforms, building multi-platform container images can be very useful. A multi-platform build produces a single image reference that can run on different CPU architectures, such as x86-64 (Intel and AMD) and ARM. This saves time and effort because you don’t need to build and track separate container images for each platform, and it also makes your images easier to distribute.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Docker Buildx&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/docker/buildx"&gt;Docker Buildx&lt;/a&gt; is a Docker CLI plugin that extends the docker build command with the ability to build container images for multiple platforms and architectures at the same time. It is an experimental feature of Docker version 18.09. Using Docker Buildx, you can build and push container images to multiple platforms with a single command, allowing you to create cross-platform container images that can be used on different types of systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Docker Buildx Works&lt;/strong&gt;&lt;br&gt;
Docker Buildx works by taking a single build context (the Dockerfile plus the source files it references) and building container images for all the specified platforms at the same time, typically using QEMU emulation for architectures other than the build host's.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps to Build Multi-Platform Docker Images and Push them to Docker Hub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Docker Buildx&lt;/strong&gt;&lt;br&gt;
Docker Buildx ships as a CLI plugin with recent versions of Docker. To make Buildx the default backend for the &lt;code&gt;docker build&lt;/code&gt; command, run the following command in the terminal: &lt;br&gt;
&lt;code&gt;docker buildx install&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Step 2: Enable Experimental Features&lt;/strong&gt;&lt;br&gt;
To enable experimental features in Docker, add the line &lt;code&gt;"experimental": "enabled"&lt;/code&gt; to your Docker CLI configuration file (usually located at ~/.docker/config.json). Restart Docker to apply the change.&lt;/p&gt;
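On recent Docker releases Buildx is available without this flag; on older releases, the relevant fragment of ~/.docker/config.json looks like the following (merge it with any keys the file already contains):

```json
{
  "experimental": "enabled"
}
```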

&lt;p&gt;&lt;strong&gt;Step 3: Create a New Docker Buildx Builder&lt;/strong&gt;&lt;br&gt;
To create a new Docker Buildx builder, run the following command: &lt;code&gt;docker buildx create --name mybuilder&lt;/code&gt;&lt;br&gt;
This will create a new builder with the default settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Switch to the New Builder&lt;/strong&gt;&lt;br&gt;
To switch to the new builder, run the following command: &lt;code&gt;docker buildx use mybuilder&lt;/code&gt;&lt;br&gt;
This will activate the new builder for all subsequent Docker commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Check Your Builder's Capabilities&lt;/strong&gt;&lt;br&gt;
Run the following command to verify that your builder can build multi-platform container images: &lt;code&gt;docker buildx inspect --bootstrap&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker buildx inspect --bootstrap&lt;br&gt;
Name:   mybuilder&lt;br&gt;
Driver: docker-container&lt;br&gt;
Nodes:&lt;br&gt;
Name:      mybuilder0&lt;br&gt;
Endpoint:  unix:///var/run/docker.sock&lt;br&gt;
Status:    running&lt;br&gt;
Buildkit:  v0.11.3&lt;br&gt;
Platforms: linux/arm64, linux/amd64, linux/amd64/v2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note that the image you are pulling from must also support the architectures you plan to target. This can be checked using: &lt;code&gt;docker buildx imagetools inspect alpine:3.16&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker buildx imagetools inspect alpine:3.16&lt;br&gt;
Name:      docker.io/library/alpine:3.16&lt;br&gt;
MediaType: application/vnd.docker.distribution.manifest.list.v2+json&lt;br&gt;
Digest:    sha256:1bd67c81e4ad4b8f4a5c1c914d7985336f130e5cefb3e323654fd09d6bcdbbe2&lt;br&gt;
Manifests:&lt;br&gt;
  Name:      docker.io/library/alpine:3.16@sha256:0b29a7f4d42d6b5d6433ea91322903900e81b95d47d97d909a6e388e840f4f4a&lt;br&gt;
  MediaType: application/vnd.docker.distribution.manifest.v2+json&lt;br&gt;
  Platform:  linux/amd64             &lt;br&gt;
  Name:      docker.io/library/alpine:3.16@sha256:ed4d840b601f052e9d1c3bb843ad11e904d0265936c90c18e8e7bf6dc2f80d41&lt;br&gt;
  MediaType: application/vnd.docker.distribution.manifest.v2+json&lt;br&gt;
  Platform:  linux/arm/v6        &lt;br&gt;
  Name:      docker.io/library/alpine:3.16@sha256:32c3aacc36c2ceb104f43efbd7cf1c0732cf798e088b784b7c02cee27498b0a8&lt;br&gt;
  MediaType: application/vnd.docker.distribution.manifest.v2+json&lt;br&gt;
  Platform:  linux/arm/v7            &lt;br&gt;
  Name:      docker.io/library/alpine:3.16@sha256:3b37be168bb81cf274793cebd596b4a023bdbca6f3b5bc3dfe8974e569b74feb&lt;br&gt;
  MediaType: application/vnd.docker.distribution.manifest.v2+json&lt;br&gt;
  Platform:  linux/arm64/v8         &lt;br&gt;
  Name:      docker.io/library/alpine:3.16@sha256:abc178f562a2827f068bd0e60f48d998179dcd7130cc5036a287e1716536c306&lt;br&gt;
  MediaType: application/vnd.docker.distribution.manifest.v2+json&lt;br&gt;
  Platform:  linux/386     &lt;br&gt;
  Name:      docker.io/library/alpine:3.16@sha256:e6a8ba98c4be90c07b31be6e3315beeaa1fd5269ac215e581babef640a34ed0b&lt;br&gt;
  MediaType: application/vnd.docker.distribution.manifest.v2+json&lt;br&gt;
  Platform:  linux/ppc64le&lt;br&gt;
  Name:      docker.io/library/alpine:3.16@sha256:1394c85a9984cb03d369381672a1767378b49d8701adfc6337258756052b2eeb&lt;br&gt;
  MediaType: application/vnd.docker.distribution.manifest.v2+json&lt;br&gt;
  Platform:  linux/s390x&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Create a Dockerfile&lt;/strong&gt;&lt;br&gt;
Create a Dockerfile with the following content:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;# syntax=docker/dockerfile:1&lt;br&gt;
FROM alpine:3.16 &lt;br&gt;
RUN apk add curl&lt;/code&gt;&lt;/p&gt;
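Buildx also sets platform build arguments (such as TARGETPLATFORM and TARGETARCH) automatically for each platform being built, which is handy when a step differs per architecture. A minimal sketch, with an illustrative echo step added to the Dockerfile above:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.16
# TARGETPLATFORM and TARGETARCH are provided automatically by BuildKit
ARG TARGETPLATFORM
ARG TARGETARCH
RUN echo "building for $TARGETPLATFORM ($TARGETARCH)"
RUN apk add curl
```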

&lt;p&gt;&lt;strong&gt;Step 7: Log in to &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
Log in to Docker Hub using the following command: &lt;code&gt;docker login&lt;/code&gt;&lt;br&gt;
Enter your Docker Hub username and password when prompted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Build, Tag and Push Your Docker Image&lt;/strong&gt;&lt;br&gt;
Build and tag your Docker image using the newly created builder and specify the desired platforms. For example, to build and tag an image for the x86-64 (amd64), ARM64, and 32-bit x86 architectures, you can use the following command (note the trailing &lt;code&gt;.&lt;/code&gt;, which is the build context): &lt;br&gt;
&lt;code&gt;docker buildx build --platform linux/amd64,linux/arm64,linux/386 -t your-dockerhub-username/your-image-name:tag .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker buildx build --platform linux/amd64,linux/arm64 -t navinprasadk/apkaddcurl:v2 . --push&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can specify additional platforms for your image by adding them to the &lt;code&gt;--platform&lt;/code&gt; option.&lt;br&gt;
Add the &lt;code&gt;--push&lt;/code&gt; flag to the build command to instruct the Docker CLI to push the image to Docker Hub.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker buildx build --platform linux/amd64,linux/arm64 -t navinprasadk/apkaddcurl:v2 . --push&lt;br&gt;
[+] Building 22.5s (12/12) FINISHED                                                                                                                                                                          &lt;br&gt;
 =&amp;gt; [internal] load build definition from Dockerfile                                                                                                                                                    0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; transferring dockerfile: 99B                                                                                                                                                                     0.0s&lt;br&gt;
 =&amp;gt; [internal] load .dockerignore                                                                                                                                                                       0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; transferring context: 2B                                                                                                                                                                         0.0s&lt;br&gt;
 =&amp;gt; resolve image config for docker.io/docker/dockerfile:1                                                                                                                                              0.8s&lt;br&gt;
 =&amp;gt; CACHED docker-image://docker.io/docker/dockerfile:1@sha256:39b85bbfa7536a5feceb7372a0817649ecb2724562a38360f4d6a7782a409b14                                                                         0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; resolve docker.io/docker/dockerfile:1@sha256:39b85bbfa7536a5feceb7372a0817649ecb2724562a38360f4d6a7782a409b14                                                                                    0.0s&lt;br&gt;
 =&amp;gt; [linux/amd64 internal] load metadata for docker.io/library/alpine:3.16                                                                                                                              0.3s&lt;br&gt;
 =&amp;gt; [linux/arm64 internal] load metadata for docker.io/library/alpine:3.16                                                                                                                              0.6s&lt;br&gt;
 =&amp;gt; [linux/arm64 1/2] FROM docker.io/library/alpine:3.16@sha256:1bd67c81e4ad4b8f4a5c1c914d7985336f130e5cefb3e323654fd09d6bcdbbe2                                                                        0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; resolve docker.io/library/alpine:3.16@sha256:1bd67c81e4ad4b8f4a5c1c914d7985336f130e5cefb3e323654fd09d6bcdbbe2                                                                                    0.0s&lt;br&gt;
 =&amp;gt; [linux/amd64 1/2] FROM docker.io/library/alpine:3.16@sha256:1bd67c81e4ad4b8f4a5c1c914d7985336f130e5cefb3e323654fd09d6bcdbbe2                                                                        0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; resolve docker.io/library/alpine:3.16@sha256:1bd67c81e4ad4b8f4a5c1c914d7985336f130e5cefb3e323654fd09d6bcdbbe2                                                                                    0.0s&lt;br&gt;
 =&amp;gt; CACHED [linux/amd64 2/2] RUN apk add curl                                                                                                                                                           0.0s&lt;br&gt;
 =&amp;gt; CACHED [linux/arm64 2/2] RUN apk add curl                                                                                                                                                           0.0s&lt;br&gt;
 =&amp;gt; exporting to image                                                                                                                                                                                 20.9s&lt;br&gt;
 =&amp;gt; =&amp;gt; exporting layers                                                                                                                                                                                 0.1s&lt;br&gt;
 =&amp;gt; =&amp;gt; exporting manifest sha256:403248829517c743afe783d3931166e800d7d716f7d0d210a9381ca1c45318df                                                                                                       0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; exporting config sha256:cc19d353d0f23f8a8f92e2a32cc7d0f5aee98d2352b5fdeb34c06b17600b5c69                                                                                                         0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; exporting manifest sha256:595145e572854ce6f2107e012c69be2eb4e148d7759a7535f9325d8b49ed1300                                                                                                       0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; exporting config sha256:af6a49b540831ee553433675b3d38c100991995c3d82b5074a5a2d6d931208a9                                                                                                         0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; exporting manifest list sha256:55b1de08ad830205a77330b52e3d32740839cc4a6a88e033bdcd954192e749b2                                                                                                  0.0s&lt;br&gt;
 =&amp;gt; =&amp;gt; pushing layers                                                                                                                                                                                  18.8s&lt;br&gt;
 =&amp;gt; =&amp;gt; pushing manifest for docker.io/navinprasadk/apkaddcurl:v2@sha256:55b1de08ad830205a77330b52e3d32740839cc4a6a88e033bdcd954192e749b2                                                                1.9s&lt;br&gt;
 =&amp;gt; [auth] navinprasadk/apkaddcurl:pull,push token for registry-1.docker.io&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Overall, Docker Buildx is a powerful tool for building multi-platform container images. It can be handy if you need your container images to run on multiple platforms or if you want to take advantage of cloud platforms that support running Docker containers on a variety of platforms. By following the steps in this tutorial, you can easily create cross-platform container images that can be used on different types of systems.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>container</category>
      <category>buildx</category>
      <category>kcd</category>
    </item>
    <item>
      <title>Ultimate Guide to Cloud Cost Optimisation for VM Services</title>
      <dc:creator>Navin Prasad </dc:creator>
      <pubDate>Fri, 19 May 2023 11:18:47 +0000</pubDate>
      <link>https://forem.com/kcdchennai/ultimate-guide-to-cloud-cost-optimisation-for-vm-services-5foi</link>
      <guid>https://forem.com/kcdchennai/ultimate-guide-to-cloud-cost-optimisation-for-vm-services-5foi</guid>
      <description>&lt;p&gt;In today’s cloud-driven landscape, businesses across various industries use virtual machine (VM) services from major cloud providers like &lt;strong&gt;AWS EC2, Azure VM, and GCP Compute Engine&lt;/strong&gt; to leverage their power and flexibility. These services provide the foundational infrastructure for running diverse workloads and applications in the cloud. However, to get the most out of these services, organisations must prioritise cost optimisation.&lt;/p&gt;

&lt;p&gt;This blog post will provide you with a comprehensive checklist to ensure maximum cost savings on VM services. This guide will equip you with the essential knowledge to optimise your VM costs effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Identifying Underutilised VMs&lt;/strong&gt;&lt;br&gt;
Utilise AWS CloudWatch, Azure Monitor and Google Cloud Monitoring to analyse performance metrics and usage patterns, enabling cost optimisation through downsizing or terminating underutilised VMs.&lt;/p&gt;
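As a rough sketch of the kind of filtering these monitoring tools support, the snippet below flags VMs whose average CPU utilisation falls below a cutoff. The fleet data and the 10% threshold are hypothetical; real samples would come from the providers' monitoring APIs:

```python
# Flag VMs whose average CPU utilisation is below a cutoff.
# The metric data here is made up; in practice it would be fetched
# from CloudWatch, Azure Monitor, or Google Cloud Monitoring.

def underutilised(vms, cutoff_pct=10.0):
    """Return names of VMs averaging below cutoff_pct CPU."""
    flagged = []
    for vm in vms:
        samples = vm["cpu_samples"]
        avg = sum(samples) / len(samples)
        if cutoff_pct > avg:  # average sits below the cutoff
            flagged.append(vm["name"])
    return flagged

fleet = [
    {"name": "web-1", "cpu_samples": [55.0, 60.0, 48.0]},
    {"name": "batch-2", "cpu_samples": [2.0, 3.5, 1.0]},
    {"name": "db-1", "cpu_samples": [30.0, 25.0, 40.0]},
]
print(underutilised(fleet))  # batch-2 averages roughly 2% CPU
```

A report like this is only a starting point: confirm with memory, disk, and network metrics before downsizing or terminating anything.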

&lt;p&gt;&lt;strong&gt;2. Deleting Unused VMs - Eliminating Wasteful Expenses&lt;/strong&gt;&lt;br&gt;
Regularly assess and decommission unnecessary VMs to minimise wasteful expenses and optimise cost efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Leveraging Reservations - Cost Savings for Long-Term Workloads&lt;/strong&gt;&lt;br&gt;
Maximise cost savings by identifying suitable workloads and implementing effective reservation strategies for long-term usage.&lt;/p&gt;
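As an illustration of the reservation trade-off, the sketch below estimates the break-even point between on-demand and reserved pricing. The prices are made up, so substitute your provider's actual rates:

```python
# Estimate when a reserved instance pays off versus on-demand.
# All prices below are illustrative, not real provider rates.

def breakeven_hours(on_demand_hourly, upfront, reserved_hourly):
    """Hours of use after which the reservation becomes cheaper."""
    savings_per_hour = on_demand_hourly - reserved_hourly
    if savings_per_hour > 0:
        return upfront / savings_per_hour
    return float("inf")  # reservation never pays off

# e.g. $0.10/h on-demand vs $300 upfront plus $0.04/h reserved
hours = breakeven_hours(0.10, 300.0, 0.04)
print(round(hours))  # 5000 hours, roughly 7 months of 24/7 use
```

If the workload will run continuously for longer than the break-even period, the reservation wins; otherwise on-demand or spot capacity is likely cheaper.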

&lt;p&gt;&lt;strong&gt;4. Spot Instances&lt;/strong&gt;&lt;br&gt;
Explore Spot Instances (AWS), Spot VMs (Azure), and Preemptible VMs (GCP) to identify non-critical workloads and strategically leverage them for optimised cost savings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Optimising Storage Costs - Tackling Unattached Volumes&lt;/strong&gt;&lt;br&gt;
Unattached volumes can significantly impact costs. Efficiently review and manage storage volumes, removing or detaching unattached ones to reduce expenses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Minimising Idle Costs - Maximising IP Address Efficiency&lt;/strong&gt;&lt;br&gt;
Understand the cost implications of unattached IP addresses and actively manage their allocation to optimise costs. Regularly reviewing and releasing unattached IPs ensures efficient utilisation of resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Opting for Burstable VMs - Efficiently Handling Workload Spikes&lt;/strong&gt;&lt;br&gt;
Introducing burstable instances like AWS T3 and Azure B-series for optimised performance and cost balance during workload fluctuations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Scheduling Dev/Non-Prod Instances - Optimising Usage and Costs&lt;/strong&gt;&lt;br&gt;
Maximise efficiency in dev/non-production environments by scheduling instances during active periods and reducing costs by pausing or shutting down instances during idle periods.&lt;/p&gt;
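A scheduler ultimately reduces to a predicate like the one below, deciding whether an instance should be up at a given moment. The 08:00-20:00 weekday policy is an assumption for illustration:

```python
# Decide whether a dev/non-prod instance should be running right now.
# The weekday 08:00-20:00 window is an assumed policy, not a default.
from datetime import datetime

def should_run(now, start_hour=8, stop_hour=20):
    """True during weekday working hours, False otherwise."""
    if now.weekday() >= 5:  # 5 and 6 are Saturday and Sunday
        return False
    # running window: start_hour inclusive, stop_hour exclusive
    return now.hour >= start_hour and stop_hour > now.hour

print(should_run(datetime(2023, 5, 17, 10)))  # Wednesday 10:00 -> True
print(should_run(datetime(2023, 5, 20, 10)))  # Saturday -> False
```

A cron job or a managed scheduler can evaluate a rule like this and call the provider's stop/start API accordingly.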

&lt;p&gt;&lt;strong&gt;9. Rightsizing Your VMs - Matching Workloads, Saving Costs&lt;/strong&gt;&lt;br&gt;
Evaluate resource requirements, identify over-provisioned instances, and optimise VM sizes to align with workload demands, resulting in cost savings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Auto Scaling&lt;/strong&gt;&lt;br&gt;
Maximise resource utilisation and cost efficiency by leveraging autoscaling features such as AWS Auto Scaling groups, Azure Virtual Machine Scale Sets, and GCP managed instance groups for automated, demand-based scaling.&lt;/p&gt;
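Target-tracking autoscaling can be thought of as the proportional rule sketched below: capacity is adjusted so the per-instance metric moves toward a target. Real autoscalers add cooldowns, bounds, and smoothing, so treat this as a simplified model:

```python
# Simplified target-tracking scaling: adjust capacity in proportion
# to how far the observed per-instance metric is from the target.
import math

def desired_capacity(current, metric, target, cap_min=1, cap_max=10):
    """Scale capacity so the per-instance metric approaches target."""
    desired = math.ceil(current * metric / target)
    return max(cap_min, min(cap_max, desired))  # clamp to bounds

# 4 instances at 80% CPU with a 50% target -> scale out
print(desired_capacity(4, 80.0, 50.0))  # 7
```

When the metric falls below the target, the same rule scales capacity back in, which is where the idle-cost savings come from.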

&lt;p&gt;By implementing these strategies, organisations can achieve substantial cost savings and optimise their cloud investments. Start implementing these best practices today and take control of your VM costs for a more efficient and cost-effective cloud environment.&lt;/p&gt;

</description>
      <category>virtualmachine</category>
      <category>cloudcost</category>
      <category>cloudoptimisation</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Achieving 90% Cost Reduction and Scaling Amazon Prime Video's Audio/Video Monitoring with Monolithic Architecture</title>
      <dc:creator>Navin Prasad </dc:creator>
      <pubDate>Fri, 05 May 2023 11:58:01 +0000</pubDate>
      <link>https://forem.com/navinprasadk/achieving-90-cost-reduction-and-scaling-amazon-prime-videos-audiovideo-monitoring-with-monolithic-architecture-2ecj</link>
      <guid>https://forem.com/navinprasadk/achieving-90-cost-reduction-and-scaling-amazon-prime-videos-audiovideo-monitoring-with-monolithic-architecture-2ecj</guid>
      <description>&lt;p&gt;In recent years, microservices have gained popularity as an architectural pattern that allows organizations to build and deploy applications faster and more efficiently. However, as Amazon discovered, microservices aren't always the best choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon's Prime Video Quality Monitoring
&lt;/h2&gt;

&lt;p&gt;In a recent post, the team that works on &lt;a href="http://primevideotech.com/video-streaming/scaling-up-the-prime-video-audio-video-monitoring-service-and-reducing-costs-by-90"&gt;Amazon's Prime Video explained their approach&lt;/a&gt; to ensuring customers receive high-quality content. They use a tool to monitor every stream viewed by customers and identify quality issues.&lt;/p&gt;

&lt;p&gt;Initially, the tool was intended to run on a small scale, but the team noticed that monitoring more streams caused the service to become less responsive. Therefore, they decided to revise the architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Initial Architecture
&lt;/h2&gt;

&lt;p&gt;The initial architecture consisted of several serverless components orchestrated by AWS Step Functions. The components were responsible for performing various tasks such as processing video, analyzing logs, and sending notifications.&lt;/p&gt;

&lt;p&gt;The team chose to build the initial solution with serverless components because it enabled them to develop and scale each component quickly. However, they soon discovered that the service hit hard scaling limits at only around 5% of the expected load, and the orchestration and data transfer between components drove up costs.&lt;/p&gt;

&lt;p&gt;The following diagram shows the serverless architecture of the service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f0_6WmFs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u44j8y7uwfd8xzfdmo5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f0_6WmFs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u44j8y7uwfd8xzfdmo5z.png" alt="Image credit: Prime Video | Tech" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Revised Architecture
&lt;/h2&gt;

&lt;p&gt;After analyzing the performance issues, the team concluded that the distributed approach didn't bring many benefits. As a result, they packed all the components into a single process, which eliminated the expensive data transfer between components by keeping the data within process memory.&lt;/p&gt;

&lt;p&gt;By moving to a monolithic architecture, the team reduced the infrastructure cost by over 90%. Moreover, the service scaled better and had improved reliability.&lt;/p&gt;

&lt;p&gt;The following diagram shows the architecture of the system after migrating to the monolith.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3yo_1ABF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mvrudaexxns7ehap6pou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3yo_1ABF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mvrudaexxns7ehap6pou.png" alt="Image credit: Prime Video | Tech" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While microservices can offer several advantages, they're not always the right choice. Sometimes, monolithic architecture can be a better fit, depending on the project's requirements. Amazon's Prime Video quality monitoring service is an excellent example of how moving from microservices to a monolith improved performance, scalability, and reliability.&lt;/p&gt;

&lt;p&gt;In summary, organizations should evaluate their project requirements and choose an architecture that best suits their needs, rather than following industry trends blindly.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>monolithic</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
