<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: CAST AI</title>
    <description>The latest articles on Forem by CAST AI (@castai).</description>
    <link>https://forem.com/castai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F549522%2F29b95861-08f2-495e-998f-204e4fdefd19.png</url>
      <title>Forem: CAST AI</title>
      <link>https://forem.com/castai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/castai"/>
    <language>en</language>
    <item>
      <title>Free Webinar: The DevOps Guide to Job Defensibility</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Thu, 08 May 2025 11:28:08 +0000</pubDate>
      <link>https://forem.com/castai/free-webinar-the-devops-guide-to-job-defensibility-3kce</link>
      <guid>https://forem.com/castai/free-webinar-the-devops-guide-to-job-defensibility-3kce</guid>
      <description>&lt;p&gt;AI and automation are transforming cloud operations - but where does that leave DevOps and FinOps teams?&lt;/p&gt;

&lt;p&gt;As Kubernetes, cloud storage, and cloud automation evolve, teams face new challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What gets automated, and what still requires DevOps expertise?&lt;/li&gt;
&lt;li&gt;How do you ensure cost efficiency for your core cloud services, storage and compute?&lt;/li&gt;
&lt;li&gt;How can IT teams move 'Beyond Migration' to true cloud modernisation and optimisation? &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Join the Cloud Bridge, CAST AI, and Datafy teams, along with Bret Fisher, for a live panel discussion on &lt;strong&gt;Wednesday, 14th May | 4.00 PM BST (UK) | 11.00 AM EDT (Eastern Time)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Register here: &lt;a href="https://www.cloud-bridge.co.uk/live-webinar-the-devops-guide-to-job-defensibility" rel="noopener noreferrer"&gt;https://www.cloud-bridge.co.uk/live-webinar-the-devops-guide-to-job-defensibility&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>ai</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Solving the Reserved Instance Resale Ban With K8s Automation</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Mon, 22 Jan 2024 16:58:22 +0000</pubDate>
      <link>https://forem.com/cast_ai/solving-the-reserved-instance-resale-ban-with-k8s-automation-agi</link>
      <guid>https://forem.com/cast_ai/solving-the-reserved-instance-resale-ban-with-k8s-automation-agi</guid>
      <description>&lt;p&gt;AWS Reserved Instance (RI) Resale has been banned. How do you keep maximum savings while staying flexible and risk-free?&lt;/p&gt;

&lt;h2&gt;Quick summary of AWS’s ban on RI resale&lt;/h2&gt;

&lt;p&gt;As of January 15, 2024, AWS prohibits the marketplace resale of Reserved Instances (RIs) acquired at a discount. This means that the “flexible RI reseller market” has effectively lost the ability to provide flexibility and risk-free RI coverage.&lt;/p&gt;

&lt;p&gt;According to estimates, the RI “flexibility/trading” market results in annual savings of over $1 billion in cloud costs. These savings are no longer available. Any cloud user that fails to actively address this issue runs a significant risk of overcommitment, waste, and vendor lock-in.&lt;/p&gt;

&lt;p&gt;The CAST AI platform offers a solution that allows for higher savings with full flexibility, zero risk, and zero vendor lock-in.&lt;/p&gt;

&lt;h2&gt;CAST AI offers a solution: automation&lt;/h2&gt;

&lt;p&gt;CAST AI achieves significant savings without lock-in by applying AI to continuously ensure efficiency at all levels of the Kubernetes cloud environment. &lt;/p&gt;

&lt;p&gt;It’s a continuous process that involves bin packing pods into nodes, selecting the most cost-efficient instances, and scaling them up and down in line with actual application demand. CAST AI runs these processes automatically, 24/7.&lt;/p&gt;

&lt;p&gt;Instead of covering a non-optimized environment with groups of non-flexible and typically wasteful 1- or 3-year commitments, CAST AI makes sure every K8s environment is as efficient and flexible as possible, automatically and with practically zero set-up time.&lt;/p&gt;

&lt;p&gt;In the case of existing commitments, CAST AI will help customers make the most of their existing RIs by prioritizing the usage of reserved capacity.&lt;/p&gt;

&lt;p&gt;With CAST AI, you can help your customers drive more savings, maintain resource type flexibility, and avoid expensive long-term commitments. I’ll soon explain how.&lt;/p&gt;

&lt;h2&gt;CAST AI features&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxh0ar9x0eczegl6mehu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxh0ar9x0eczegl6mehu.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A well-optimized cloud environment keeps the following principles balanced:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Availability&lt;/strong&gt; and compliance – the cornerstones of every environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; – being able to grow and meet the workload’s needs, but also to scale down when not needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost efficiency&lt;/strong&gt; – instance types, amount of infra provisioned, and the leveraging of different discounting mechanisms offered by the cloud providers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous compliance&lt;/strong&gt; with all of the above in an always changing environment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With CAST AI, your customers get all of the above for their K8s environments – and the best part: it’s fully automated.&lt;/p&gt;

&lt;h3&gt;Autoscaling&lt;/h3&gt;

&lt;p&gt;CAST AI automatically scales up and down (thousands of nodes in minutes if needed) while continuously making sure to provision only the resources needed. &lt;/p&gt;

&lt;h3&gt;Bin packing&lt;/h3&gt;

&lt;p&gt;The platform bin packs pods into nodes to help your customers achieve optimal resource utilization. &lt;/p&gt;
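&lt;p&gt;As an illustration of the idea (this is a simplified sketch, not CAST AI’s actual scheduling logic), a first-fit-decreasing heuristic shows how bin packing cuts node count:&lt;/p&gt;

```python
# Illustrative first-fit-decreasing bin packing: place pod CPU requests
# onto as few nodes as possible. Capacities and requests are made up.

def bin_pack(pod_requests, node_capacity):
    """Return a list of nodes, each tracking free capacity and placed pods."""
    nodes = []
    for request in sorted(pod_requests, reverse=True):
        for node in nodes:
            if node["free"] >= request:
                node["free"] -= request
                node["pods"].append(request)
                break
        else:
            # No existing node fits the pod: provision a new one.
            nodes.append({"free": node_capacity - request, "pods": [request]})
    return nodes

# Packing six pods onto 4-core nodes needs only two nodes, not one per pod.
nodes = bin_pack([2.0, 1.5, 1.0, 1.0, 0.5, 0.5], node_capacity=4.0)
print(len(nodes))  # 2
```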

&lt;h3&gt;Automated instance selection&lt;/h3&gt;

&lt;p&gt;CAST AI selects the most cost-efficient and performance-optimized instance types that stay compliant and are always available.&lt;/p&gt;
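&lt;p&gt;To illustrate the principle only, here is a hypothetical sketch of cost-aware instance selection – the instance names and prices below are made up, not a real cloud catalog:&lt;/p&gt;

```python
# Hypothetical sketch: choose the cheapest instance type that satisfies
# a workload's CPU and memory requirements. Catalog data is illustrative.

def cheapest_fit(catalog, cpu, memory_gib):
    """Return the lowest-priced instance meeting both resource needs, or None."""
    candidates = [
        i for i in catalog if i["cpu"] >= cpu and i["memory"] >= memory_gib
    ]
    return min(candidates, key=lambda i: i["hourly_price"]) if candidates else None

catalog = [
    {"name": "gp.large",  "cpu": 2, "memory": 8,  "hourly_price": 0.096},
    {"name": "gp.xlarge", "cpu": 4, "memory": 16, "hourly_price": 0.192},
    {"name": "co.xlarge", "cpu": 4, "memory": 8,  "hourly_price": 0.170},
]
# A 3-CPU / 8-GiB workload lands on the compute-optimized type, not the
# pricier general-purpose xlarge.
print(cheapest_fit(catalog, cpu=3, memory_gib=8)["name"])  # co.xlarge
```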

&lt;h3&gt;Workload rightsizing&lt;/h3&gt;

&lt;p&gt;On top of that, CAST AI will identify pods that request more resources than the workload actually needs and, if allowed, will automatically manage a feedback cycle that will continuously rightsize the pods themselves.&lt;/p&gt;
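&lt;p&gt;As a rough sketch of the idea (CAST AI’s actual feedback cycle is more sophisticated than this), a percentile-plus-headroom recommendation might look like the following – the percentile, headroom, and usage samples are illustrative assumptions:&lt;/p&gt;

```python
# Sketch of percentile-based rightsizing: recommend a CPU request from
# observed usage samples plus a safety headroom. Numbers are made up.

def recommend_request(usage_samples, percentile=0.95, headroom=1.10):
    """Pick the given usage percentile and add headroom on top."""
    ordered = sorted(usage_samples)
    index = min(int(len(ordered) * percentile), len(ordered) - 1)
    return round(ordered[index] * headroom, 3)

# A pod requesting 2.0 CPUs but mostly using ~0.4 gets a far smaller request.
samples = [0.35, 0.40, 0.38, 0.42, 0.55, 0.41, 0.39, 0.60, 0.37, 0.44]
print(recommend_request(samples))  # 0.66
```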

&lt;h3&gt;Spot instance automation&lt;/h3&gt;

&lt;p&gt;Lastly, if applicable for a specific workload, CAST AI can run selected workloads on &lt;a href="https://cast.ai/blog/how-to-reduce-cloud-costs-by-90-spot-instances-and-how-to-use-them/"&gt;spot instances&lt;/a&gt; with an automated fallback to on-demand.&lt;/p&gt;

&lt;h2&gt;How to achieve optimal resource utilization, including RIs&lt;/h2&gt;

&lt;p&gt;To make the most of RIs, only commit to what you’re guaranteed to use. &lt;/p&gt;

&lt;p&gt;CAST AI will continuously optimize your customers’ environments and keep them available, all the while improving performance and reducing costs dramatically. All of this is fully automated, without the need to commit to the cloud provider for long-term consumption.&lt;/p&gt;

&lt;p&gt;It takes about 5 minutes to install the CAST AI agent and get an analysis of your potential savings. Your customers also get access to &lt;a href="https://cast.ai/cloud-cost-monitoring/"&gt;K8s cost monitoring&lt;/a&gt; with dashboards to report on costs by cluster, workload, namespace, node, and more.&lt;/p&gt;

&lt;p&gt;Once your customers are ready to enable automation, CAST AI offers a swift and free POC that will generate actual savings and showcase the automation and performance of the platform.&lt;/p&gt;

&lt;h2&gt;Partner with the #1 Kubernetes automation platform&lt;/h2&gt;

&lt;p&gt;CAST AI automates Kubernetes cost, performance, and security management in one platform, achieving over 60% cost savings for its users. &lt;a href="https://cast.ai/partner-program/"&gt;Become a partner!&lt;/a&gt; &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>reservedinstances</category>
      <category>ris</category>
    </item>
    <item>
      <title>Grafana Kubernetes Dashboard: How To Use It For Finops</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Thu, 14 Dec 2023 15:41:59 +0000</pubDate>
      <link>https://forem.com/castai/grafana-kubernetes-dashboard-how-to-use-it-for-finops-4301</link>
      <guid>https://forem.com/castai/grafana-kubernetes-dashboard-how-to-use-it-for-finops-4301</guid>
      <description>&lt;p&gt;Before the rise of cloud computing, engineers could only fantasize about provisioning infrastructure without consulting anyone from the business side. Today, they can set up a virtual instance in a matter of minutes without talking to anyone. That’s why building engineer awareness around cloud costs is so important – and a well-configured Grafana dashboard can help you do that.&lt;/p&gt;

&lt;p&gt;Engineers aren’t used to factoring in expenses when they set up their cloud infrastructure, especially when they have so many other competing tasks to take care of. Counting on an in-depth cost-efficiency analysis just isn’t going to work.&lt;/p&gt;

&lt;p&gt;Building a strong FinOps culture and motivating all cloud users to engage in cloud cost management is a critical step in cloud cost control.&lt;/p&gt;

&lt;p&gt;Grafana, a monitoring and observability tool, can help you deal with it if you can find a method to include cost data in your Grafana dashboard. &lt;/p&gt;

&lt;p&gt;Keep reading to find out why adding cost insights to Grafana is worthwhile and how to do it.&lt;/p&gt;

&lt;h2&gt;What are Grafana dashboards?&lt;/h2&gt;

&lt;p&gt;In Grafana, a dashboard helps you monitor different parts of your Kubernetes cluster, such as cluster CPU, pod CPU, memory, I/O, RX/TX, memory requests, limits, utilization, etc.&lt;/p&gt;

&lt;p&gt;Creating modern Grafana dashboards for Kubernetes is easy. If you use Helm, it’s a matter of a single &lt;code&gt;helm install&lt;/code&gt; command. There are several prebuilt Grafana templates for Kubernetes you can use, like prebuilt dashboards for ingress controllers, volumes, API servers, Prometheus analytics, and more.&lt;/p&gt;

&lt;p&gt;Grafana dashboards are consolidated locations for monitoring real-time information. They are essential for Kubernetes monitoring – both for your apps and infrastructure. Kubernetes metrics in Grafana give you full visibility into the condition of your Kubernetes cluster and help ensure that your services are performing as planned.&lt;/p&gt;

&lt;p&gt;Grafana dashboards are used to monitor the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes cluster resource consumption (cluster, node, pod, and container CPU/memory use)&lt;/li&gt;
&lt;li&gt;Kubernetes cluster nodes’ actual CPU/memory consumption&lt;/li&gt;
&lt;li&gt;Individual Kubernetes node health status&lt;/li&gt;
&lt;li&gt;Available resources for individual Kubernetes nodes&lt;/li&gt;
&lt;li&gt;Requested versus real resource usage&lt;/li&gt;
&lt;li&gt;Health and availability of pods&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;3 ways your Grafana dashboard can help control cloud costs&lt;/h2&gt;

&lt;h3&gt;1. Check which workloads are costing you far too much money&lt;/h3&gt;

&lt;p&gt;One of the key challenges teams confront today is dealing with shared costs. This is particularly true in cloud-native settings such as Kubernetes.&lt;/p&gt;

&lt;p&gt;However, a strong cost monitoring solution combined with Grafana provides a great source of insight that you can use for allocating costs to teams, departments, or projects.&lt;/p&gt;

&lt;p&gt;CAST AI allows you to distribute cloud expenses at the namespace, label, and workload levels. You can simply combine Kubernetes cluster data with Grafana to create a useful dashboard that allows you to quickly examine the cost per job and attach it to a specific team.&lt;/p&gt;

&lt;p&gt;This will come in handy when your workload costs start to spin out of control and you need to address the problem as soon as possible.&lt;/p&gt;

&lt;p&gt;Here’s an example dashboard that includes CAST AI metrics:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwek3l6vuqt5vcczhukz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwek3l6vuqt5vcczhukz.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;2. Receive real-time alerts about cost increases&lt;/h3&gt;

&lt;p&gt;Nobody has time to regularly monitor the infrastructure.&lt;/p&gt;

&lt;p&gt;But what if a team member leaves a workload running for much too long? You may get an unexpected cloud charge of more than $500k like Adobe did. Or generate a massive cloud bill in a matter of hours, like the Silicon Valley startup Milkie Way did when they spent $72k testing Firebase and Cloud Run. &lt;/p&gt;

&lt;p&gt;You can easily avoid this by adding a simple alert based on real-time utilization and cost data. Alert configuration seems like a simple step, but these examples show how impactful it can be.&lt;/p&gt;

&lt;p&gt;If you find a way to include real-time cost data in your Grafana dashboard, you can specify usage/cost thresholds and receive alerts whenever the cost of your Kubernetes cluster exceeds them.&lt;/p&gt;
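&lt;p&gt;The kind of spike detection such an alert rule could encode can be sketched in a few lines – the window size, threshold factor, and cost figures below are purely illustrative:&lt;/p&gt;

```python
# Minimal sketch of alert logic a Grafana rule could evaluate on top of
# real-time cost metrics: flag any sample that jumps well above the
# trailing average. All numbers here are made up for illustration.

def cost_spike(costs, window=3, factor=1.5):
    """Return (index, cost, baseline) for samples exceeding factor x baseline."""
    alerts = []
    for i in range(window, len(costs)):
        baseline = sum(costs[i - window:i]) / window
        if costs[i] > factor * baseline:
            alerts.append((i, costs[i], round(baseline, 2)))
    return alerts

# Hourly cluster costs in dollars: the jump to $28 triggers one alert.
print(cost_spike([10.0, 11.0, 10.0, 28.0, 11.0]))  # [(3, 28.0, 10.33)]
```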

&lt;p&gt;You’ll never be surprised when your cloud bill arrives at the end of the month, and your CFO will sleep better at night.&lt;/p&gt;

&lt;h3&gt;3. Raise cost awareness among your teams&lt;/h3&gt;

&lt;p&gt;According to the State of FinOps survey, getting engineers to act on cost optimization advice is a top FinOps-related challenge for over 40% of respondents, regardless of their maturity level.&lt;/p&gt;

&lt;p&gt;By introducing cost indicators to Grafana, an industry-standard technology that most teams now use, you make them more accessible. &lt;/p&gt;

&lt;p&gt;Engineers are already using Grafana for Kubernetes monitoring and observability. What’s another dashboard that shows them real-time Kubernetes cluster cost and utilization data?&lt;/p&gt;

&lt;p&gt;You’re not asking people to switch context and work with another tool on top of the hundreds they’re currently using simply to figure out how much their Kubernetes cluster costs.&lt;/p&gt;

&lt;p&gt;If you’re beginning to establish a FinOps culture at your organization, I shared some tips on how to convince your engineers that cloud cost control is crucial.&lt;/p&gt;

&lt;p&gt;The best thing here is that bringing cost data to Grafana works like a shortcut.&lt;/p&gt;

&lt;h2&gt;Monitor your Kubernetes cluster costs with Grafana&lt;/h2&gt;

&lt;p&gt;At CAST AI, we developed the ability to move expense metrics from the CAST AI platform to Grafana via Prometheus.&lt;/p&gt;

&lt;p&gt;Get started with a free cost monitoring report that has been fine-tuned to match the needs of Kubernetes teams – and then move all the data you need to Grafana. The Kubernetes monitoring module includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Breakdown of costs per Kubernetes cluster, workload, label, namespace, allocation group, and more.&lt;/li&gt;
&lt;li&gt;Insights into network costs and network usage, including workload-to-workload communication.&lt;/li&gt;
&lt;li&gt;Workload efficiency metrics, with CPU and memory hours wasted per workload.&lt;/li&gt;
&lt;li&gt;Available savings report that shows how much you stand to save if you move your workloads to more cost-optimized nodes.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>grafana</category>
      <category>finops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>CAST AI demo video</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Fri, 01 Dec 2023 15:45:59 +0000</pubDate>
      <link>https://forem.com/castai/cast-ai-demo-video-1inc</link>
      <guid>https://forem.com/castai/cast-ai-demo-video-1inc</guid>
      <description>&lt;p&gt;In this video, you will witness CAST AI in action and discover how easily you could achieve savings of over 60% on your Kubernetes cluster.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>CAST AI - Kubernetes cost optimization</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Fri, 01 Dec 2023 15:39:19 +0000</pubDate>
      <link>https://forem.com/castai/cast-ai-kubernetes-cost-optimization-48ad</link>
      <guid>https://forem.com/castai/cast-ai-kubernetes-cost-optimization-48ad</guid>
      <description>&lt;p&gt;Short video about CAST AI - Kubernetes cost optimization platform for all of your cloyd&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes DaemonSet: Practical Guide to Monitoring in Kubernetes</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Fri, 01 Dec 2023 11:38:51 +0000</pubDate>
      <link>https://forem.com/cast_ai/kubernetes-daemonset-practical-guide-to-monitoring-in-kubernetes-46n6</link>
      <guid>https://forem.com/cast_ai/kubernetes-daemonset-practical-guide-to-monitoring-in-kubernetes-46n6</guid>
      <description>&lt;p&gt;As teams moved their deployment infrastructure to containers, monitoring and logging methods changed a lot. Storing logs in containers or VMs just doesn’t make sense – they’re both way too ephemeral for that. This is where solutions like Kubernetes DaemonSet come in.&lt;/p&gt;

&lt;p&gt;Since pods are ephemeral as well, managing Kubernetes logs is challenging. That’s why it makes sense to collect logs from every node and send them to some sort of central location outside the Kubernetes cluster for persistence and later analysis.&lt;/p&gt;

&lt;p&gt;A DaemonSet pattern lets you implement node-level monitoring agents in Kubernetes easily. This approach doesn’t force you to apply any changes to your application and uses little resources.&lt;/p&gt;

&lt;p&gt;Dive into the world of DaemonSets to see how they work on a practical example of network traffic monitoring.&lt;/p&gt;

&lt;h2&gt;What is a Kubernetes DaemonSet? Intro to node-level monitoring in Kubernetes&lt;/h2&gt;

&lt;p&gt;A DaemonSet in Kubernetes is a specific kind of workload controller that ensures a copy of a pod runs on either all or some specified nodes within the cluster. It automatically adds pods to new nodes and removes pods from removed nodes. &lt;/p&gt;

&lt;p&gt;This makes DaemonSet ideal for tasks like monitoring, logging, or running a network proxy on every node. &lt;/p&gt;

&lt;h3&gt;DaemonSet vs. Deployment&lt;/h3&gt;

&lt;p&gt;While a Deployment ensures that a specified number of pod replicas run and are available across the nodes, a DaemonSet makes sure that a copy of a pod runs on all (or some) nodes in the cluster. It’s a more targeted approach that guarantees that specific services run everywhere they’re needed.&lt;/p&gt;

&lt;p&gt;DaemonSets provide a unique advantage in scenarios where consistent functionality across every node is crucial. This is particularly important for node-level monitoring within Kubernetes. &lt;/p&gt;

&lt;p&gt;By deploying a monitoring agent via DaemonSet, you can guarantee that every node in your cluster is equipped with the tools necessary for monitoring its performance and health. This level of monitoring is vital for early detection of issues, load balancing, and maintaining overall cluster efficiency.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An alternative approach – which involves manually deploying these agents or using other types of workload controllers like Deployments – could lead to inconsistencies and gaps in monitoring. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For example, without a DaemonSet, a newly added node might remain unmonitored until it’s manually configured. This gap could pose a risk to both the performance and security of the entire cluster. &lt;/p&gt;

&lt;h3&gt;The benefits of DaemonSets&lt;/h3&gt;

&lt;p&gt;DaemonSets automate this process, ensuring that each node is brought under the monitoring umbrella without any manual intervention as soon as it joins the cluster.&lt;/p&gt;

&lt;p&gt;Furthermore, DaemonSets aren’t just about deploying the monitoring tools. They also manage the lifecycle of these tools on each node. When a node is removed from the cluster, the DaemonSet ensures that the associated monitoring tools are also cleanly removed, keeping your cluster neat and efficient.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In essence, Kubernetes DaemonSets simplify the process of maintaining a high level of operational awareness across all nodes. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They provide a hands-off, automated solution that ensures no node goes unmonitored, enhancing the reliability and performance of Kubernetes clusters. This makes DaemonSets an indispensable tool in the arsenal of Kubernetes cluster administrators, particularly for tasks like node-level monitoring that require uniform deployment across all nodes.&lt;/p&gt;

&lt;p&gt;Head over to K8s docs for details about the Kubernetes DaemonSet feature.&lt;/p&gt;

&lt;h2&gt;How do DaemonSets work?&lt;/h2&gt;

&lt;p&gt;A DaemonSet is a Kubernetes object that is actively controlled by a controller. You can define whatever state you wish for it – for example, declare that a specific pod should be present on all nodes.&lt;/p&gt;

&lt;p&gt;The reconciliation control loop compares the intended state with what is currently running. If a matching pod doesn’t exist on a monitored node, the DaemonSet controller creates one. This automated approach applies to both existing and newly created nodes.&lt;/p&gt;
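&lt;p&gt;The control loop described above can be sketched as a toy reconciliation function – the node names and labels here are made up for illustration, and the real controller of course creates actual pods rather than returning lists:&lt;/p&gt;

```python
# Toy reconciliation in the spirit of the DaemonSet controller: compare
# desired state (one pod per matching node) with actual state and report
# which pods to create or delete. Purely illustrative.

def reconcile(nodes, pods_by_node, node_selector=None):
    """Return (nodes needing a pod created, nodes whose pod must be removed)."""
    desired = {
        name for name, labels in nodes.items()
        if node_selector is None
        or all(labels.get(k) == v for k, v in node_selector.items())
    }
    actual = set(pods_by_node)
    return sorted(desired - actual), sorted(actual - desired)

nodes = {
    "node-a": {"disktype": "ssd"},
    "node-b": {"disktype": "hdd"},
    "node-c": {"disktype": "ssd"},
}
# With a disktype=ssd selector, the pod on node-b is out of place, and
# node-a and node-c still need one each.
create, delete = reconcile(nodes, {"node-b": "pod-1"}, {"disktype": "ssd"})
print(create, delete)  # ['node-a', 'node-c'] ['node-b']
```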

&lt;p&gt;By default, a DaemonSet creates pods on all nodes. You can use a node selector to limit the set of nodes it targets: the DaemonSet controller will only create pods on nodes that match the &lt;code&gt;nodeSelector&lt;/code&gt; field set in the YAML file.&lt;/p&gt;

&lt;p&gt;Here’s a DaemonSet example for creating nginx pods only on nodes that have the &lt;code&gt;disktype=ssd&lt;/code&gt; label:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DaemonSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-daemonset&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-pod&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-pod&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-container&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;disktype&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ssd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you add a new node to the cluster, that pod is also added to the new node. When a node is removed (or the cluster shrinks), Kubernetes automatically garbage-collects that pod.&lt;/p&gt;

&lt;h2&gt;Network traffic monitoring with DaemonSets&lt;/h2&gt;

&lt;p&gt;In the ever-evolving landscape of network management, understanding and overseeing network traffic is pivotal. &lt;/p&gt;

&lt;p&gt;Network traffic essentially refers to the amount and type of data moving across your network – this could be anything from user requests to data transfers. It’s the lifeblood of any digital environment, influencing the performance, security, and overall health of your network.&lt;/p&gt;

&lt;h3&gt;The role of DaemonSets in traffic monitoring&lt;/h3&gt;

&lt;p&gt;How do you keep an eye on this in a Kubernetes environment? This is where DaemonSets come into play.&lt;/p&gt;

&lt;p&gt;As you already know, DaemonSets are a Kubernetes feature that allows you to deploy a pod on every node in your cluster.&lt;/p&gt;

&lt;p&gt;Why is that important for network traffic monitoring? &lt;/p&gt;

&lt;p&gt;Well, each node in your Kubernetes cluster can be involved in different kinds of network activities. By deploying a monitoring agent on every node, you get a comprehensive view of what’s happening across your entire cluster.&lt;/p&gt;

&lt;p&gt;You might be wondering now:&lt;/p&gt;

&lt;p&gt;Why not just use a Deployment and adjust the number of replicas to run on one or maybe two nodes to monitor the traffic of all nodes? &lt;/p&gt;

&lt;p&gt;It sounds simpler, but here’s the catch:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security and isolation:&lt;/strong&gt; In Kubernetes, each node operates in its own isolated environment. This means that a pod on one node can’t directly monitor or access the network traffic of another node due to the security policies and Linux namespaces. These security measures are crucial for maintaining the integrity of your cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Accurate and localized data&lt;/strong&gt;: By having a monitoring agent on each node, you get precise, localized data about the traffic. This level of granularity is essential for effective monitoring, as it helps in identifying specific issues and bottlenecks that might occur on individual nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability and reliability&lt;/strong&gt;: Using DaemonSets ensures that your monitoring setup scales with your cluster. As nodes are added or removed, the DaemonSet automatically adjusts, deploying or removing pods as needed. This dynamic scalability is a core requirement for maintaining a robust monitoring system in a growing or changing environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As you can see, using DaemonSets for network traffic monitoring in a Kubernetes cluster isn’t just a matter of convenience; it’s a necessity for accurate, secure, and scalable network analysis. &lt;/p&gt;

&lt;p&gt;Each node has its own unique traffic patterns and potential issues, and DaemonSets ensure you don’t miss out on these critical insights. They empower you to maintain a high-performing and secure Kubernetes environment by providing a bird’s-eye view of your network traffic, node by node.&lt;/p&gt;

&lt;h2&gt;Simplifying network traffic monitoring in Kubernetes&lt;/h2&gt;

&lt;p&gt;When it comes to keeping tabs on network traffic in your Kubernetes cluster, the road can be complex and challenging. &lt;/p&gt;

&lt;p&gt;Those keen on DIY approaches might consider building a custom solution. This could involve leveraging tools like &lt;code&gt;conntrack&lt;/code&gt; to monitor each pod’s traffic, crafting intricate logic to process and store data, and continuously tackling a variety of potential issues that might arise along the way. &lt;/p&gt;

&lt;p&gt;While this approach offers flexibility, it’s often resource-intensive and riddled with complexities.&lt;/p&gt;

&lt;h3&gt;A streamlined alternative to network monitoring&lt;/h3&gt;

&lt;p&gt;Alternatively, what if you could bypass these hurdles and jump straight to an efficient, ready-to-use solution? &lt;/p&gt;

&lt;p&gt;That’s exactly what our open-source egressd tool offers. It’s designed to simplify network traffic monitoring in Kubernetes, providing a comprehensive and hassle-free approach.&lt;/p&gt;

&lt;p&gt;egressd consists of two main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collector – a DaemonSet pod responsible for monitoring network traffic on nodes.&lt;/li&gt;
&lt;li&gt;Exporter – a Deployment pod that fetches traffic data from each collector and exports logs to HTTP or Prometheus.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s what our solution brings to the table:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5uzold4wupxp49ercyca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5uzold4wupxp49ercyca.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;1. Efficient conntrack monitoring&lt;/h4&gt;

&lt;p&gt;egressd retrieves conntrack entries for pods on each node at a configured interval, defaulting to every 5 seconds.&lt;/p&gt;

&lt;p&gt;If you’re using Cilium, it fetches conntrack records directly from the eBPF maps Cilium creates in the host’s &lt;code&gt;/sys/fs/bpf&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;For setups using the Linux Netfilter Conntrack module, it leverages Netlink to obtain these records.&lt;/p&gt;

&lt;h4&gt;2. Intelligent data reduction&lt;/h4&gt;

&lt;p&gt;The records are then streamlined, focusing on key parameters like source IP, destination IP, and protocol to provide a clear picture of network interactions.&lt;/p&gt;
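&lt;p&gt;A minimal sketch of this kind of reduction – the record field names below are illustrative, not egressd’s actual schema:&lt;/p&gt;

```python
# Sketch of collapsing raw connection records into per-flow totals keyed
# by (source IP, destination IP, protocol). Field names are made up.
from collections import defaultdict

def reduce_flows(records):
    """Aggregate transferred bytes by (src, dst, proto)."""
    totals = defaultdict(int)
    for r in records:
        totals[(r["src"], r["dst"], r["proto"])] += r["bytes"]
    return dict(totals)

records = [
    {"src": "10.0.0.1", "dst": "10.0.1.2", "proto": "tcp", "bytes": 1200},
    {"src": "10.0.0.1", "dst": "10.0.1.2", "proto": "tcp", "bytes": 800},
    {"src": "10.0.0.3", "dst": "10.0.1.2", "proto": "udp", "bytes": 300},
]
# Three raw records collapse into two flows.
print(reduce_flows(records))
```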

&lt;h4&gt;3. Enhanced with Kubernetes context&lt;/h4&gt;

&lt;p&gt;We enrich the data by adding Kubernetes-specific context. This includes information about source and destination pods, nodes, node zones, and IP addresses, giving you a comprehensive view of your cluster’s network traffic.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Flexible export options
&lt;/h4&gt;

&lt;p&gt;The exporter in our solution is designed to be versatile, offering the capability to send logs either to an HTTP endpoint or to Prometheus for detailed analysis and alerting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sidestep the complexity of building and maintaining a custom solution with egressd
&lt;/h2&gt;

&lt;p&gt;You get a solid, ready-to-deploy system that seamlessly integrates into your Kubernetes environment, providing detailed, real-time insights into your network traffic. This means you can focus more on strategic tasks and less on the intricacies of monitoring infrastructure.&lt;/p&gt;

&lt;p&gt;Additionally, egressd provides you with two options: &lt;/p&gt;

&lt;p&gt;egressd can be installed as a standalone tool that will track your network traffic movements within the cluster, which you can then visualize in Grafana to get a better picture of your network:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywnomik4h1z2qnay0cjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywnomik4h1z2qnay0cjk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, if you’re a CAST AI user, you can connect egressd to your dashboard to get all the benefits of our fancy cost reports. &lt;/p&gt;

&lt;p&gt;This way, you can see not only the amount of traffic within the cluster but also get more insight into workload-to-workload communication – and how much you pay for that traffic, since rates differ across providers, regions, and zones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwbr9xtwk71ipv293h1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwbr9xtwk71ipv293h1b.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0wcssbjll6ju9wxy8rr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0wcssbjll6ju9wxy8rr.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cast.ai/blog/how-we-reduced-egress-cost-by-70-using-cast-ai/" rel="noopener noreferrer"&gt;Check out how we used the network cost report to reduce egress costs by 70%.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap up
&lt;/h2&gt;

&lt;p&gt;Kubernetes DaemonSets come in handy for logging and monitoring purposes, but this is just the tip of the iceberg. You can also use them to tighten your security and achieve compliance by running CIS Benchmarks on each node and deploying security agents – such as intrusion detection systems or vulnerability scanners – on nodes that handle data subject to PCI or PII compliance requirements.&lt;/p&gt;

&lt;p&gt;And if you’re looking for more cost optimization opportunities, get started with a free cost monitoring report that has been fine-tuned to match the needs of Kubernetes teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Breakdown of costs per cluster, workload, label, namespace, allocation group, and more.&lt;/li&gt;
&lt;li&gt;Workload efficiency metrics, with CPU and memory hours wasted per workload.&lt;/li&gt;
&lt;li&gt;Available savings report that shows how much you stand to save if you move your workloads to more cost-optimized nodes.&lt;/li&gt;
&lt;/ul&gt;
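&lt;p&gt;The efficiency metric above is simple to reason about: wasted hours are the gap between requested and actually used resources, multiplied by the time the workload ran. A minimal sketch:&lt;/p&gt;

```python
def wasted_hours(requested, used, hours):
    """Resource hours wasted by a workload over a period: capacity that
    was provisioned but never used, times the hours it ran."""
    return max(requested - used, 0.0) * hours

# A workload requesting 2 CPUs but averaging 0.5 CPUs over a 24-hour day
# wastes 36 CPU-hours:
print(wasted_hours(requested=2.0, used=0.5, hours=24))  # 36.0
```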

</description>
      <category>kubernetes</category>
      <category>daemonset</category>
      <category>monitoring</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Deal With AWS’s Ban On Reserved Instances Resale</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Thu, 09 Nov 2023 16:40:30 +0000</pubDate>
      <link>https://forem.com/castai/how-to-deal-with-awss-ban-on-reserved-instances-resale-41b4</link>
      <guid>https://forem.com/castai/how-to-deal-with-awss-ban-on-reserved-instances-resale-41b4</guid>
      <description>&lt;p&gt;You’ve probably heard that AWS is no longer allowing its customers to resell Reserved Instances starting January 15, 2024. If you’ve been reselling unused RI capacity directly on the Marketplace or via a third-party provider, this is no longer an option. Keep reading to learn more about the ban and find a way out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2laacixx2gzv936n1jwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2laacixx2gzv936n1jwv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick summary of AWS’s ban on RI resale
&lt;/h2&gt;

&lt;p&gt;AWS will prohibit the resale of Reserved Instances (RIs) acquired at a discount on the Amazon EC2 Reserved Instance Marketplace as of January 15, 2024. This is due to Section 5.5 of the AWS service agreements, which prohibits the sale of discounted RIs. &lt;/p&gt;

&lt;p&gt;However, as a courtesy, customers can still list discounted RIs for sale on the Marketplace until January 15, 2024 – but only if they were purchased before October 1, 2023.&lt;/p&gt;

&lt;h2&gt;
  
  
  Full letter from AWS about the RI resale ban
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS does not permit the resale of RIs obtained through a discount program (per AWS Service Terms 5.5).&lt;/p&gt;

&lt;p&gt;You are receiving this message because it has come to our attention that your customer may have been listing or selling discounted Amazon EC2 Reserved Instances (RIs) (RIs purchased through a discount program like RI Volume Discount or Private pricing) on the Amazon EC2 Reserved Instance Marketplace. &lt;/p&gt;

&lt;p&gt;AWS does not permit the resale of RIs obtained through a discount program (per AWS Service Terms 5.5). &lt;/p&gt;

&lt;p&gt;We are extending a compliance period to give customers time to move their RI’s to come into compliance with AWS Service Terms. &lt;/p&gt;

&lt;p&gt;During this time, your customer may list any RIs (even if the RIs received a discount) purchased before 1-Oct-2023, on the Amazon EC2 Reserved Instance Marketplace for sale through 15-Jan-2024. &lt;/p&gt;

&lt;p&gt;However, the compliance window will close, and after 15-Jan-2024, customers may no longer have any listings and/or sales of RIs purchased via a discount program on Amazon EC2 Reserved Instance Marketplace.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How to overcome the RI resale ban: Alternatives to Reserved Instances
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Solution 1: AWS Savings Plans
&lt;/h3&gt;

&lt;p&gt;One way to deal with the ban when planning your future purchases is to go for AWS Savings Plans instead. &lt;/p&gt;

&lt;p&gt;A Savings Plan is a pricing scheme that offers a discount on On-Demand instances in exchange for committing to a consistent amount of compute usage, measured in dollars per hour, for one or three years. Usage up to that commitment is billed at the reduced rate; anything beyond it is charged at the normal On-Demand price.&lt;/p&gt;
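&lt;p&gt;To see how the commitment works in practice, here’s a rough cost model in Python – the rates and discount are made-up numbers for illustration, not AWS pricing:&lt;/p&gt;

```python
def hourly_cost(commitment, discount, on_demand_usage):
    """Hourly bill under a Savings Plan-style commitment: you always pay
    the committed amount, which covers on-demand usage worth
    commitment / (1 - discount); anything beyond that is billed at the
    normal on-demand rate. All figures are illustrative."""
    covered = commitment / (1.0 - discount)
    overage = max(on_demand_usage - covered, 0.0)
    return commitment + overage

# A $10/hour commitment at a 30% discount covers about $14.29/hour of
# on-demand usage, so $20/hour of usage costs roughly $15.71:
print(round(hourly_cost(10.0, 0.30, 20.0), 2))  # 15.71
```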

&lt;h4&gt;
  
  
  EC2 Savings Plan vs. Compute Savings Plan
&lt;/h4&gt;

&lt;p&gt;The EC2 Instance Savings Plan offers up to 72% price reductions on EC2 instances, and you can choose the plan’s size, OS, and tenancy. AWS also provides Compute Savings Plans, which offer a slightly lower maximum discount (66% versus 72%) but apply more flexibly across instance family, region, operating system, tenancy, and even individual compute services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution 2: Look beyond commitment for cost savings
&lt;/h3&gt;

&lt;p&gt;Savings Plans can help you save money on AWS, but you’re still in charge of infrastructure optimization.&lt;/p&gt;

&lt;p&gt;This is why picking the right size and type of compute instances is such an important task. If you manage a large cloud environment, you’ll need a system that automates cost optimization activities like rightsizing, autoscaling, instance type selection, and others.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It takes time to figure out which resources are running, which families control them, and whose teams own them. Trying to make sense of all 500+ EC2 instances offered by AWS is no walk in the park. It can take you many days or weeks to assess your inventory and use it to determine which instances to keep and which ones to get rid of.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What you need is a unified platform that combines all the cost optimization tactics, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workload and node rightsizing to achieve optimal setup even without RIs,&lt;/li&gt;
&lt;li&gt;Spot instance automation for up to 90% of cost savings,&lt;/li&gt;
&lt;li&gt;Automated rebalancing to quickly achieve an optimized state,&lt;/li&gt;
&lt;li&gt;Automated bin packing for optimal resource utilization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CAST AI is a fully automated cloud cost optimization platform that generates cost savings of 60% and more on average, without any sort of vendor lock-in.&lt;/p&gt;

&lt;p&gt;Book a call with one of our solution engineers to find out what alternative cost-cutting measures you can take to score cost savings without making any commitments to AWS.&lt;/p&gt;

&lt;p&gt;In case you’re not sure what we’re talking about, here’s a primer on AWS Reserved Instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recap: What are Reserved Instances?
&lt;/h2&gt;

&lt;p&gt;Companies pick Reserved Instances because they offer significant savings over pay-as-you-go On-Demand pricing – up to 72%. &lt;/p&gt;

&lt;p&gt;All you need is to make a commitment to a specific cloud capacity for a set length of time. AWS gives you two options: a one-year or three-year commitment. &lt;/p&gt;

&lt;p&gt;In certain situations, you will also be guaranteed that specific resources will be available to you at a particular hosting location. &lt;/p&gt;

&lt;p&gt;Choose an instance type, size, platform, and region, and you’re set. It’s like receiving a coupon that you can redeem for a discount at any moment throughout your selected reservation period – and one that can be shared across teams.&lt;/p&gt;

&lt;p&gt;And the greater your initial payment, the greater the savings.&lt;/p&gt;

&lt;p&gt;However, there is a catch. A Reserved Instance has a “use it or lose it” policy. Every hour your instance is idle is an hour lost (along with any financial rewards you could get). To make the most of your Reserved Instance, you must anticipate exactly what your team will require. &lt;/p&gt;
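&lt;p&gt;The “use it or lose it” effect is easy to quantify: since you pay for every hour of the term, the effective rate per used hour climbs as utilization drops. A quick sketch with illustrative numbers:&lt;/p&gt;

```python
def effective_hourly_rate(ri_hourly_rate, hours_in_term, hours_used):
    """You pay for every hour of the RI term, so the effective rate per
    used hour rises as utilization drops. Numbers are illustrative."""
    total_cost = ri_hourly_rate * hours_in_term
    return total_cost / hours_used if hours_used else float("inf")

# A $0.05/hour RI over a 730-hour month, used for only 365 hours,
# effectively costs $0.10 per used hour:
print(effective_hourly_rate(0.05, 730, 365))  # 0.1
```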

&lt;h3&gt;
  
  
  Types of Reserved Instances
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Standard Reserved Instances
&lt;/h4&gt;

&lt;p&gt;A Standard Reserved Instance offers greater savings than a Convertible Reserved Instance, but it cannot be exchanged. You can, however, sell it via the Reserved Instance Marketplace (with certain limitations on discounted resources).&lt;/p&gt;

&lt;p&gt;This is the type of Reserved Instance the ban addresses.&lt;/p&gt;

&lt;h4&gt;
  
  
  Convertible Reserved Instances
&lt;/h4&gt;

&lt;p&gt;Convertible Reserved Instances, on the other hand, can be exchanged during the term for a new Convertible Reserved Instance with additional properties such as instance family, instance type, platform, scope, or tenancy. You can’t resell it on the Reserved Instance Marketplace.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scheduled Reserved Instances
&lt;/h4&gt;

&lt;p&gt;Purchasing Reserved Instances on a recurring schedule lets you pay for compute power by the hour and reserve capacity ahead of time for only the periods when you’ll need it.&lt;/p&gt;

&lt;p&gt;Amazon EC2 sets the pricing, and it may fluctuate depending on supply and demand for Scheduled Reserved Instance capacity, as well as the time characteristics of your schedule. However, once you buy a Scheduled Reserved Instance, the price you were quoted is the one you’ll pay.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key factors influencing Reserved Instance pricing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Commitment period (1 year vs. 3 years),&lt;/li&gt;
&lt;li&gt;Payment option (Full up-front, Partial up-front, No up-front),&lt;/li&gt;
&lt;li&gt;Region and availability zones,&lt;/li&gt;
&lt;li&gt;Instance type and family you choose.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reserved Instance Optimization
&lt;/h2&gt;

&lt;p&gt;Reserved Instance Optimization is the process of consistently increasing the value you get from using Reserved Instances. The idea is to maximize RI utilization – and the value of the associated charges – as your AWS setup and computing demands vary over the course of your commitment.&lt;/p&gt;

&lt;p&gt;Here are a few best practices for RI optimization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Continuously monitor infrastructure usage&lt;/strong&gt; – utilization is a key metric for those looking to make the most of reserved capacity, so make sure that you have a viable way to measure it (this includes real-time monitoring).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make sure that instances you launch match your discount&lt;/strong&gt; – AWS will try to match your deployed instances to your current RI discounts, but teams may believe they’re launching instances that fulfill all of the criteria when in reality they’re not. Without verifying this, you risk leaving your contracts underutilized.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adjust RI purchases based on workload changes&lt;/strong&gt; – knowing what lies ahead is hard in the cloud world, but it’s still worth it to forecast your predicted usage to have a rough idea how your workload changes may affect your RI buying plan. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use tools and platforms for optimization&lt;/strong&gt; – tools like the AWS Trusted Advisor come in handy for managing RIs. AWS Trusted Advisor examines your EC2 consumption history and generates an ideal number of Partial Upfront Reserved Instances to help you maximize RI utilization.&lt;/li&gt;
&lt;/ul&gt;
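&lt;p&gt;The utilization metric from the first best practice – along with its companion, coverage – can be derived directly from billing data. A minimal sketch with made-up numbers:&lt;/p&gt;

```python
def ri_metrics(reserved_hours, matched_hours, total_instance_hours):
    """Two key RI health metrics: utilization (share of purchased
    reservation hours that matched running instances) and coverage
    (share of total instance hours the reservations covered)."""
    utilization = matched_hours / reserved_hours
    coverage = matched_hours / total_instance_hours
    return utilization, coverage

# 1,000 reserved hours, of which 800 matched running instances,
# out of 1,600 total instance hours in the period:
utilization, coverage = ri_metrics(1000, 800, 1600)
print(utilization, coverage)  # 0.8 0.5
```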

&lt;h2&gt;
  
  
  The AWS Reserved Instance Marketplace
&lt;/h2&gt;

&lt;p&gt;The AWS Reserved Instance Marketplace is a virtual marketplace where AWS users can sell or buy Reserved Instances from AWS or other third parties. The idea behind it was to provide teams with greater flexibility and savings because estimating workload demands in advance is so hard.&lt;/p&gt;

&lt;p&gt;To buy RIs on the Marketplace, you can use the EC2 interface and click the “Purchase Reserved Instances” button on the Reserved Instances screen. From here, you can select the OS, instance type, tenancy, RI duration, and payment method. This is the same interface customers use to purchase standard or convertible RIs, but it also lets them select any remaining term from one month to 36 months.&lt;/p&gt;

&lt;p&gt;To sell your RIs as a third party on the Marketplace, you must first register as a seller. The root user of your AWS account needs to sign up.&lt;/p&gt;

&lt;p&gt;And let’s not forget about the limitations on reselling RIs. As of January 15, 2024, AWS will restrict the resale of Reserved Instances (RIs) purchased at a discount on the Amazon EC2 Reserved Instance Marketplace. &lt;/p&gt;

&lt;h3&gt;
  
  
  Got an AWS Savings Plan instead? Be sure to read this: &lt;a href="https://cast.ai/blog/running-on-aws-savings-plans-you-can-still-reduce-your-cloud-bill/" rel="noopener noreferrer"&gt;AWS Savings Plan – Can You Reduce Your Cloud Costs Further?&lt;/a&gt;
&lt;/h3&gt;

</description>
      <category>reservedinstances</category>
      <category>kubernetes</category>
      <category>aws</category>
    </item>
    <item>
      <title>Kubernetes Lens: How To Enhance Your Kubernetes Cluster</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Wed, 19 Jul 2023 06:42:16 +0000</pubDate>
      <link>https://forem.com/cast_ai/kubernetes-lens-how-to-enhance-your-kubernetes-cluster-194l</link>
      <guid>https://forem.com/cast_ai/kubernetes-lens-how-to-enhance-your-kubernetes-cluster-194l</guid>
      <description>&lt;p&gt;You probably know this feeling really well: one day you’re managing clusters like a pro, and another day you face a tornado of errors and bugs attacking you everywhere. We all love Kubernetes, but saying that it makes things a bit complicated would be an understatement. There’s a reason why solutions that make DevOps lives easier – with &lt;strong&gt;Kubernetes Lens&lt;/strong&gt; among them – are popping up all over the place.&lt;/p&gt;

&lt;p&gt;Kubernetes gives us a lot of good stuff – portability, extensibility, openness to automation, and an easier time managing containerized applications. But it has &lt;strong&gt;a lot of moving parts and tricky areas around scaling clusters, orchestrating storage, &lt;a href="https://cast.ai/blog/batch-processing-4-tactics-to-make-it-cost-efficient-and-reliable/"&gt;batch processing&lt;/a&gt;&lt;/strong&gt;, and many others.&lt;/p&gt;

&lt;p&gt;Another problem with Kubernetes is the use of &lt;strong&gt;command-line interfaces&lt;/strong&gt; (CLIs), which may overwhelm anyone used to the clarity of modern GUIs.&lt;/p&gt;

&lt;p&gt;No wonder the market is full of tools that solve such Kubernetes-specific issues. One of them is Kubernetes Lens, a solution that has gained a lot of traction recently. What problems does it help DevOps solve, and how do you make it work? Keep reading and learn more about Kubernetes Lens.&lt;/p&gt;

&lt;h2 id="h-what-is-kubernetes-lens"&gt;What is Kubernetes Lens?&lt;/h2&gt;

&lt;p&gt;Kubernetes Lens is an open-source integrated development environment (IDE) for Kubernetes. It simplifies K8s management by letting cloud-native developers manage and monitor clusters in real time. &lt;/p&gt;

&lt;p&gt;In 2020, &lt;strong&gt;Mirantis&lt;/strong&gt; purchased Lens from &lt;strong&gt;Kontena&lt;/strong&gt; and later open-sourced it, making it freely available (&lt;a href="https://github.com/lensapp/lens" rel="noreferrer noopener"&gt;here’s the repo&lt;/a&gt;). A number of Kubernetes and cloud-native ecosystem pioneers, including Apple, Rakuten, Zendesk, Adobe, and Google, support it.&lt;/p&gt;

&lt;p&gt;A stand-alone tool that works on macOS, Windows, and various Linux distributions, Lens lets developers connect to and manage multiple Kubernetes clusters. &lt;/p&gt;

&lt;p&gt;It comes with a clear, user-friendly &lt;strong&gt;graphical interface&lt;/strong&gt; that helps you deploy and manage clusters directly from the console. Lens also provides &lt;strong&gt;dashboards&lt;/strong&gt; that deliver &lt;strong&gt;helpful metrics&lt;/strong&gt; and insights into everything that happens on a cluster, such as installations, configurations, networking, storage, and access control.&lt;/p&gt;

&lt;p&gt;The latest and most significant release of Lens is Lens 6. It has the following new features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A local Minikube development environment you can launch directly, complete with a single-node Kubernetes cluster running on a local virtual machine (VM). &lt;/li&gt;



&lt;li&gt;Container Security, which provides security reports on Common Vulnerabilities and Exposures (CVEs) right from the Lens desktop.&lt;/li&gt;



&lt;li&gt;A built-in technical help chat (available only with a premium membership).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The new version introduces a different subscription model with the following tiers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lens Personal subscription &lt;/strong&gt;– this package is meant for personal use, education, and organizations with annual revenue or funding of less than $10 million.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Lens Pro subscription &lt;/strong&gt;– for professional usage in large companies, $19.90 per month or $199 per year per user.&lt;/li&gt;



&lt;li&gt;The licensing for the community version, &lt;strong&gt;OpenLens&lt;/strong&gt;, remains unchanged, as do the upstream projects utilized by Lens Desktop.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="h-what-problems-does-kubernetes-lens-solve"&gt;What problems does Kubernetes Lens solve?&lt;/h2&gt;

&lt;p&gt;Cloud-native DevOps teams develop applications by iterating locally and using version control and CI/CD to push code to a sequence of Kubernetes clusters for testing, staging, and production. Clusters are built and scaled using the same tools by operators. The coordination of cluster management often becomes an issue. &lt;/p&gt;

&lt;p&gt;Small, task-specific clusters are &lt;strong&gt;prioritized above big clusters&lt;/strong&gt; in many companies. As a result, teams end up managing a large number of clusters. The issue here is that the CLIs that interface with clusters rely on many kubeconfig files and contexts, making it difficult to handle the complex and diverse set of access methods and settings. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When scaling up apps and clusters, managing infrastructure via the command line is sluggish and error-prone. Configurations may also differ, making tracking more challenging. An IDE combines the tools and information required to deal with various settings and jobs. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Kubernetes Lens helps in dealing with these issues by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reducing the complexity of setting cluster access and enabling you to automatically add clusters. &lt;/li&gt;



&lt;li&gt;Discovering local kubeconfig files automatically and allowing you to manage clusters across practically any infrastructure. &lt;/li&gt;



&lt;li&gt;Organizing large numbers of Kubernetes clusters to cope with cluster sprawl.&lt;/li&gt;



&lt;li&gt;Managing various kubectl versions – Lens installs the version required by each cluster. &lt;/li&gt;



&lt;li&gt;Restricting interactions according to RBAC requirements so that users only access resources they’re authorized to use. &lt;/li&gt;



&lt;li&gt;Installing Prometheus instances automatically in any namespace to deliver metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="h-key-features-of-kubernetes-lens"&gt;Key features of Kubernetes Lens&lt;/h2&gt;

&lt;h3 id="h-cluster-management"&gt;Cluster management&lt;/h3&gt;

&lt;p&gt;Adding a Kubernetes cluster to Lens is really simple: direct the local/online kubeconfig file to Lens, and it will detect and connect to it. Lens allows you to view all of the resources operating within your cluster, from simple pods and deployments to bespoke types provided by your apps.&lt;/p&gt;

&lt;p&gt;With Kubernetes Lens, you can work on many clusters while preserving context with each of them. &lt;strong&gt;It organizes and reveals the complete working system in the cluster while delivering analytics&lt;/strong&gt;, allowing you to set up, change, and reroute clusters with a single click. With this knowledge, you can make changes fast and confidently.&lt;/p&gt;

&lt;p&gt;Lens Connector is one cool feature of Lens. It’s a built-in terminal that employs a kubectl version that is API-compatible with your cluster. The terminal identifies your cluster version automatically and then assigns or downloads the appropriate kubectl version in the background. As you transition from one cluster to another, you keep the right kubectl version and context.&lt;/p&gt;

&lt;h3 id="h-user-friendly-gui"&gt;User-friendly GUI &lt;/h3&gt;

&lt;p&gt;Since managing many clusters across diverse platforms requires interpreting multiple access contexts, modes, and techniques for structuring the infrastructure, Lens provides a solution to administer Kubernetes via GUI. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solving all of these via the command line would otherwise be complex, time-consuming, and prone to error.&lt;/strong&gt; This is due, in part, to the ever-increasing number of clusters and applications, as well as their configurations and requirements.&lt;/p&gt;

&lt;p&gt;Using Kubernetes Lens GUI, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;manually add clusters by browsing through their kubeconfigs,&lt;/li&gt;



&lt;li&gt;quickly find kubeconfig files on your own system,&lt;/li&gt;



&lt;li&gt;organize clusters into workgroups based on how you interact with them,&lt;/li&gt;



&lt;li&gt;visualize the state of objects in your cluster, such as pods, deployments, namespaces, network, storage, and even custom resources – making it simple to detect and debug any cluster issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And if you still enjoy using the CLI, you can use Lens’s built-in terminal to run your preferred kubectl command line.&lt;/p&gt;

&lt;h3 id="h-metrics-and-visualization"&gt;Metrics and visualization &lt;/h3&gt;

&lt;p&gt;Kubernetes Lens has a Prometheus configuration with multi-user functionality that provides role-based access control (RBAC) for each user. This implies that users may only view visualizations for which they have authorization.&lt;/p&gt;

&lt;p&gt;When you configure a Prometheus instance in Lens, it offers &lt;strong&gt;cluster metrics and visualizations&lt;/strong&gt;. Lens autodetects Prometheus for that cluster after installation and starts providing cluster metrics and visualizations. You can also use Lens to preview Kubernetes manifests before applying them.&lt;/p&gt;

&lt;p&gt;Real-time graphs, resource utilization charts, and usage data such as CPU, RAM, network, and requests are available with Prometheus and become part of the Lens dashboard. You’ll see these metrics displayed in the context of the specific cluster in real time.&lt;/p&gt;

&lt;h3 id="h-handy-integrations"&gt;Handy integrations &lt;/h3&gt;

&lt;p&gt;Lens smoothly integrates with a wide range of Kubernetes tools. One good example is Helm, which helps you make &lt;a href="https://cast.ai/blog/cluster-autoscaler-helm-chart-how-to-improve-your-eks-cluster/"&gt;Helm charts&lt;/a&gt; and releases simple to deploy and manage in Kubernetes.&lt;/p&gt;

&lt;p&gt;You can access available Helm repositories from the Artifact Hub, which, by default, adds a Bitnami repository if no other repositories are specified. Other repositories can be added manually via the command line if necessary. &lt;/p&gt;

&lt;h3 id="h-lens-extensions"&gt;Lens Extensions&lt;/h3&gt;

&lt;p&gt;Kubernetes Lens Extensions enable you to add new and custom features and visualizations to expedite the development processes for all &lt;strong&gt;Kubernetes-integrated technologies and services&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Kubernetes Lens also allows you to use the Lens APIs to script your own extensions. They let you add additional object information, create custom pages, add status bar items, and make other UI changes. Extensions published to npm as tarballs can be installed by pasting the tarball link into the Kubernetes Lens install screen.&lt;/p&gt;

&lt;h3 id="h-collaboration-and-teamwork-via-kubernetes-lens-spaces"&gt;Collaboration and teamwork via Kubernetes Lens Spaces&lt;/h3&gt;

&lt;p&gt;Kubernetes Lens encourages teamwork and collaboration with its Spaces feature. It's basically a location for cloud-native development teams and projects to collaborate. &lt;/p&gt;

&lt;p&gt;You can easily organize and access your team clusters from anywhere with a Lens space: EKS, GKE, AKS, on-premises, or a local dev cluster. Users will be able to quickly access and securely share all clusters in one space.&lt;/p&gt;

&lt;h2 id="h-kubernetes-lens-alternatives-to-make-your-k8s-ride-even-smoother"&gt;Kubernetes Lens alternatives to make your K8s ride even smoother&lt;/h2&gt;

&lt;p&gt;Kubernetes Lens is a powerful tool for everyone looking to get things done quickly and move on to more impactful activities. You can find even more solutions in the K8s ecosystem that streamline your work even more. &lt;/p&gt;

&lt;p&gt;CAST AI is an autonomous Kubernetes management platform that automates a lot of the heavy lifting around cloud infra management to make engineers more efficient. Once you onboard CAST AI, you’ll end up using Lens even less – and put &lt;a href="https://cast.ai/blog/cloud-automation-the-new-normal-in-the-tech-industry/"&gt;cloud automation&lt;/a&gt; in place to do a lot of things for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out the &lt;a href="https://docs.cast.ai/docs/getting-started"&gt;docs&lt;/a&gt; to learn more about CAST AI’s capabilities around autoscaling, instance rightsizing, automated instance provisioning and decommissioning, and more.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Azure Cost Management Guide: 3 Steps To Reducing Your Cloud Bill</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Wed, 14 Jun 2023 10:45:26 +0000</pubDate>
      <link>https://forem.com/castai/azure-cost-management-guide-3-steps-to-reducing-your-cloud-bill-50og</link>
      <guid>https://forem.com/castai/azure-cost-management-guide-3-steps-to-reducing-your-cloud-bill-50og</guid>
      <description>&lt;p&gt;A top choice among teams looking for enterprise-grade cloud services, Azure comes with just as many cost-related complexities as the other major cloud providers. How do you control and reduce your cloud costs? Here are three Azure cost management best practices and tools to help you get started.&lt;/p&gt;

&lt;h2 id="h-overview-of-azure-cost-management-tools"&gt;Overview of Azure cost management tools &lt;/h2&gt;

&lt;p&gt;Similar to the other major cloud service providers (AWS and Google Cloud Platform), Azure comes with multiple tools to manage cloud costs. These tools include budget setting, cost analysis, and a nifty pricing calculator, to name a few.&lt;/p&gt;

&lt;p&gt;Use them to stay within your budget and gain better control of your finances. Better yet, you may uncover a few unnecessary expenses or other areas in which you can save on your cloud costs.&lt;/p&gt;

&lt;p&gt;Azure offers multiple cost management tools natively to users to help them manage cloud spending better. These tools include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Azure pricing calculator&lt;/li&gt;



&lt;li&gt;Cost analysis tool&lt;/li&gt;



&lt;li&gt;Budgets&lt;/li&gt;



&lt;li&gt;Azure cost management alerts&lt;/li&gt;



&lt;li&gt;Azure Advisor &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, these tools might not be enough in the case of a larger cloud footprint – or the need for real-time cost insights!&lt;/p&gt;

&lt;p&gt;Keep reading to learn how each tool works and see how they provide a more granular view – allowing you to dissect your spending down to the last dollar.&lt;/p&gt;

&lt;h3 id="h-azure-pricing-calculator"&gt;Azure pricing calculator&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://azure.microsoft.com/en-gb/pricing/calculator/"&gt;Azure pricing calculator&lt;/a&gt; allows you to estimate the cost of Azure services and resources such as compute, storage, databases, and managed services like Azure Kubernetes Service (AKS).&lt;/p&gt;

&lt;p&gt;Simply select the products you’re interested in and fill out the necessary factors for each service/resource (estimated instances, region, number of virtual machines, etc.). You’ll find a quote below, including both upfront and monthly Azure costs.&lt;/p&gt;

&lt;p&gt;For organizations that aren’t using Azure yet, the pricing calculator provides a clear cost estimate for your chosen configuration before starting a project.&lt;/p&gt;

&lt;p&gt;However, if you’re already using Azure services and resources, you may want to try a few other tools to manage costs better – for example, the cost analysis tool.&lt;/p&gt;

&lt;h3 id="h-cost-analysis-tool"&gt;Cost analysis tool&lt;/h3&gt;

&lt;p&gt;The cost analysis tool provides you with a detailed breakdown of your spending on the Azure cloud platform. It allows you to see where your money is going – grouped by resources, tags, and other features. This helps to gain a better understanding of Azure costs – and you may even identify a few anomalies in the process. &lt;/p&gt;

&lt;p&gt;If your cloud bill is higher than expected, the Azure cost analysis tool should be the first place to look. &lt;/p&gt;

&lt;h3 id="h-budgets"&gt;Budgets&lt;/h3&gt;

&lt;p&gt;Managing cloud costs can at times feel like walking on hot pebbles blindfolded. But setting a budget lets you align Azure spending with your requirements, ensuring you don’t spend a penny more than you intended.&lt;/p&gt;

&lt;p&gt;To set a budget, select a scope in Cost Management and open the Budgets menu. You can then name your budget and set its amount and time period – monthly, quarterly, or annually. Choose a descriptive name – the last thing you want is to muddy the waters, not knowing which budget was meant for which project.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A cool feature of Azure budgets is notifications. You can set up notifications when your spending reaches a given percentage of the total budget or when your resources exceed the budget (either entirely or at a given time of the month based on a percentage).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With a budget and alerts set up, you can be confident that you won’t exceed your spending limits. It’ll also help you keep track of your Azure costs by showing how much money you’re actually spending on given resources or other Azure services.&lt;/p&gt;

&lt;h3 id="h-azure-cost-management-alerts"&gt;Azure cost management alerts&lt;/h3&gt;

&lt;p&gt;While there are budget alerts, Azure also includes other cost management alerts that help you keep track of your spending.&lt;/p&gt;

&lt;p&gt;For example, you can set alerts for credit and department spending quotas. Credit alerts automatically notify you when you reach 90% and then 100% of your credit balance, ensuring you don’t capsize your budget. &lt;/p&gt;

&lt;p&gt;On the other hand, department spending quotas can be set at a fixed threshold to alert the appropriate department heads (or other employees) when a set percentage of the threshold is met, allowing you to better optimize Azure costs and spending.&lt;/p&gt;

&lt;h3 id="h-azure-advisor"&gt;Azure Advisor &lt;/h3&gt;

&lt;p&gt;Finally, we have &lt;a href="https://docs.microsoft.com/en-us/azure/advisor/advisor-overview"&gt;Azure Advisor&lt;/a&gt; – a handy tool that analyzes resource use. It also suggests alternative solutions to help improve cloud performance, security, and cost-effectiveness. &lt;/p&gt;

&lt;p&gt;Its custom recommendations give you further insight into where your money is going. Having that clarity about your resource expenditure – and knowing where you can save – is the best path to optimizing Azure costs going forward.&lt;/p&gt;

&lt;h2 id="h-3-azure-cost-management-strategies-for-a-lower-cloud-bill"&gt;3 Azure cost management strategies for a lower cloud bill&lt;/h2&gt;

&lt;h2 id="h-1-start-with-cost-visibility"&gt;1. Start with cost visibility&lt;/h2&gt;

&lt;p&gt;Having a clear picture of your current cloud expenses is a key first step to understanding your spending patterns and the real utilization of the cloud resources your team provisions.&lt;/p&gt;

&lt;h3 id="h-make-sure-to-track-these-three-metrics"&gt;Make sure to track these three metrics&lt;/h3&gt;

&lt;p&gt;Before you spend money on a cost monitoring solution, make sure it tracks the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time costs&lt;/strong&gt; – cloud providers usually serve cost data with a delay, and Azure is no exception. But if your teams use dynamic cloud-native approaches like Kubernetes, you need access to cost data in real time.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Daily cloud spend&lt;/strong&gt; – this metric is essential for quickly checking your budget burn rate to understand if you’re going to meet your estimation or go beyond the budget you’ve set for the month. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example: Imagine that you have a $2,000 monthly budget. If your average daily spend is closer to $90 than to $66.60 (30 days x $66.60 = $1,998), your cloud bill is bound to be higher than you planned.&lt;/p&gt;
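&lt;p&gt;The arithmetic above can be sketched as a quick burn-rate check (a minimal sketch in Python; the $2,000 budget and $90 daily spend are the example’s illustrative figures, not real data):&lt;/p&gt;

```python
def daily_budget_target(monthly_budget, days_in_month=30):
    """The average daily spend that keeps you exactly on budget."""
    return monthly_budget / days_in_month

def projected_monthly_spend(avg_daily_spend, days_in_month=30):
    """Project the month-end bill from the average daily spend so far."""
    return avg_daily_spend * days_in_month

budget = 2000.0
target = daily_budget_target(budget)        # about $66.67 per day
projection = projected_monthly_spend(90.0)  # $2,700 at $90 per day
over_budget = projection > budget           # on track to overspend
```

&lt;p&gt;Running a check like this against your daily cost export makes the overshoot visible weeks before the invoice arrives.&lt;/p&gt;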

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ritz-e5_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/TePZNXfONSpQhVPdz89ad07kXE5yUcAS0_Eb3Bm43ernNrVyx-JjibdN8Uvx2-Ym1ebJYxqDkAzoQ5XEDDh-UyKbpibqH9sza3fwtSYNmBKs47yZOP8v6yjEbXeRX7K-fUoNvZruHvaEVLwIVdMLjWU" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ritz-e5_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh5.googleusercontent.com/TePZNXfONSpQhVPdz89ad07kXE5yUcAS0_Eb3Bm43ernNrVyx-JjibdN8Uvx2-Ym1ebJYxqDkAzoQ5XEDDh-UyKbpibqH9sza3fwtSYNmBKs47yZOP8v6yjEbXeRX7K-fUoNvZruHvaEVLwIVdMLjWU" alt="" width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost per provisioned vs. requested CPU&lt;/strong&gt; – by analyzing the difference between these two numbers, you’ll be able to calculate how much you're actually paying per requested CPU to improve the accuracy of your cost reporting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example: Your per-provisioned CPU cost is $2. But your cloud application isn’t optimized, and, as a result, your per-requested CPU fee is $10. This suggests you're running your cluster at a 5x higher cost than planned.&lt;/p&gt;
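&lt;p&gt;The same ratio can be written down directly (a sketch using the article’s illustrative $2-per-CPU figure; the 10-provisioned / 2-requested split is an assumed example):&lt;/p&gt;

```python
def effective_cpu_cost(price_per_provisioned_cpu, provisioned_cpus, requested_cpus):
    """Effective price per requested CPU: you pay for every provisioned CPU,
    but only the requested ones do useful work."""
    assert requested_cpus > 0, "requested_cpus must be positive"
    return price_per_provisioned_cpu * provisioned_cpus / requested_cpus

# 10 CPUs provisioned at $2 each, but workloads request only 2 of them:
cost = effective_cpu_cost(2.0, 10, 2)  # effectively $10 per requested CPU
overspend_factor = cost / 2.0          # running at 5x the planned cost
```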



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Historical cost allocation&lt;/strong&gt; – if you go over your cloud budget, you need to know why. This report helps to do that and see where the extra costs come from. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="h-set-up-budget-alerts-and-notifications"&gt;Set up budget alerts and notifications&lt;/h3&gt;

&lt;p&gt;Another important point to keep in mind is that cloud costs can quickly get out of control. Setting up alerts and notifications when certain areas of your cloud application reach or surpass set thresholds gives you an opportunity to act immediately.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A team at Adobe once racked up a &lt;a href="https://www.teampay.co/insights/manage-cloud-costs/"&gt;cloud bill of over $500k&lt;/a&gt; because of a workload left running unchecked. One alert could have prevented this. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="h-implement-tagging-and-resource-organization"&gt;Implement tagging and resource organization&lt;/h3&gt;

&lt;p&gt;Tags are the primary mechanism for attributing costs across your cloud environment. Cloud tagging is also very important for governance and security. &lt;/p&gt;

&lt;p&gt;So, it pays to build a cloud tagging strategy that describes the rules and processes teams must follow and implement. Make sure that your strategy explains how to use tags effectively (including proper formatting), who should create them, and how tagging decisions will be made.&lt;/p&gt;
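&lt;p&gt;A tagging strategy is easier to enforce once it’s encoded as a check. Here is a minimal sketch – the required keys and the kebab-case naming rule are illustrative assumptions, not Azure requirements:&lt;/p&gt;

```python
import re

# Illustrative policy: every resource must carry these tags,
# with lowercase kebab-case values.
REQUIRED_TAGS = {"owner", "environment", "cost-center"}
VALUE_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def tag_violations(tags):
    """Return human-readable violations for a resource's tag set."""
    problems = []
    for key in REQUIRED_TAGS - set(tags):
        problems.append(f"missing required tag: {key}")
    for key, value in tags.items():
        if not VALUE_PATTERN.match(value):
            problems.append(f"badly formatted value for {key!r}: {value!r}")
    return problems

ok = tag_violations({"owner": "platform-team", "environment": "prod", "cost-center": "cc-1234"})
bad = tag_violations({"owner": "Platform Team"})  # wrong format, two tags missing
```

&lt;p&gt;A check like this can run in CI or in a policy engine, so untagged resources never reach production unnoticed.&lt;/p&gt;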

&lt;p&gt;Check out this guide to learn more about tagging: &lt;a href="https://cast.ai/blog/build-a-cloud-tagging-strategy-in-5-steps/"&gt;Build A Cloud Tagging Strategy In 5 Steps&lt;/a&gt;&lt;/p&gt;

&lt;h2 id="h-2-rightsize-cloud-resources"&gt;2. Rightsize cloud resources&lt;/h2&gt;

&lt;h3 id="h-start-by-defining-your-application-s-requirements"&gt;Start by defining your application’s requirements&lt;/h3&gt;

&lt;p&gt;Identify your application's minimal requirements and ensure that the instance type you choose can fulfill them across all dimensions such as CPU count (or GPU), memory, SSD storage, and network architecture.&lt;/p&gt;

&lt;p&gt;A low-cost instance may look appealing, but it might soon run into performance issues when executing CPU-intensive applications. &lt;/p&gt;

&lt;p&gt;Azure offers a wide range of options for virtual machines optimized for different workloads, such as compute-optimized, memory-optimized, or accelerated computing for machine learning applications. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Once you determine the type that works best for your application, you need to choose the right size of the machine. &lt;strong&gt;Imagine doing all of this manually for the 650 different VMs Azure offers!&lt;/strong&gt; Hint: You can use an automated engine to do this job for you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3 id="h-examine-your-storage-performance-constraints"&gt;Examine your storage performance constraints&lt;/h3&gt;

&lt;p&gt;Another item to think about while optimizing your costs is data storage. &lt;/p&gt;

&lt;p&gt;Every application has different storage requirements. When selecting a virtual machine, ensure that it has the storage throughput and IOPS that your application requires. &lt;/p&gt;

&lt;p&gt;Also, don't go for expensive disk choices like premium SSDs unless you intend to use them extensively.&lt;/p&gt;

&lt;h3 id="h-think-about-network-bandwidth"&gt;Think about network bandwidth&lt;/h3&gt;

&lt;p&gt;Pay attention to the size of the network connection between your instance and the consumers allocated to it if you're dealing with a large data migration or a high amount of traffic. &lt;/p&gt;

&lt;p&gt;There are some cases when you can boost transfer speeds to 10 or 20 Gbps. The catch is that only those instances will be able to sustain this amount of network traffic.&lt;/p&gt;

&lt;h2 id="h-3-take-advantage-of-spot-virtual-machines"&gt;3. Take advantage of spot virtual machines&lt;/h2&gt;

&lt;p&gt;Buying idle capacity from cloud providers is a smart decision because it may save you up to 90% compared to on-demand resources. However, Azure can reclaim &lt;a href="https://cast.ai/blog/spotvms-automation-for-ai-product-development/"&gt;spot VMs&lt;/a&gt; at any time, giving you just a short window to find another place for your application to run. In Azure’s case, that window is just 30 seconds.&lt;/p&gt;

&lt;p&gt;Teams often use the following sequence when employing spot instances: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Check whether the workload is spot-ready&lt;/strong&gt; – Can you put up with interruptions? How long will it take to finish the job? Is it time-sensitive? &lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Pick the spot instance&lt;/strong&gt; – Going through the available spot instances, look for less popular ones that are less likely to be interrupted and can run for longer periods of time (check the interruption frequency rate).&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Bid on a spot instance&lt;/strong&gt; – Set the maximum sum you are willing to pay for your preferred spot instance. The rule of thumb here is to set this at the level of on-demand pricing.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Manage spot instances in groups&lt;/strong&gt; – This enables you to request many instance types at the same time, enhancing your chances of securing a spot instance.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Prepare for disruptions&lt;/strong&gt; – Create a backup plan for your application in case your spot instances are reclaimed.&lt;/li&gt;
&lt;/ul&gt;
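&lt;p&gt;The sequence above can be sketched as a simple selection routine – the instance names, prices, and interruption rates below are placeholders you’d replace with data from Azure’s pricing and eviction-rate pages:&lt;/p&gt;

```python
# Placeholder catalog; in practice you'd pull spot prices and
# eviction rates from Azure's published pricing data.
catalog = [
    {"name": "D4s_v3", "spot_price": 0.05, "on_demand_price": 0.19, "interrupt_rate": 0.03},
    {"name": "F8s_v2", "spot_price": 0.04, "on_demand_price": 0.34, "interrupt_rate": 0.20},
]

def acceptable(instance, max_interrupt_rate=0.05):
    """Step 2: prefer less popular instances with a low interruption rate."""
    return not instance["interrupt_rate"] > max_interrupt_rate

def pick_spot_candidate(instances):
    """Pick the cheapest spot instance with an acceptable interruption rate."""
    eligible = [i for i in instances if acceptable(i)]
    return min(eligible, key=lambda i: i["spot_price"]) if eligible else None

def max_bid(instance):
    """Step 3: the rule of thumb is to bid up to the on-demand price."""
    return instance["on_demand_price"]

choice = pick_spot_candidate(catalog)  # F8s_v2 is cheaper, but too interruption-prone
```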

&lt;p&gt;As you can see, running spot instances requires a lot of energy and time invested in configuration, setup, and maintenance tasks. Good news: you can automate this. The mobile marketing company Branch.io saved &lt;a href="https://cast.ai/case-study/branch/"&gt;several million dollars per year&lt;/a&gt; by leveraging spot instance automation. &lt;/p&gt;

&lt;p&gt;You can see a practical example of what spot instance automation for Kubernetes looks like &lt;a href="https://cast.ai/blog/how-to-reduce-your-amazon-eks-costs-by-half-in-15-minutes/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y6RZNaqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/WVMLQrk0f-_PM1v3oAa3twb_mhs5cIVvKFWDG7x_6yFGMCY5C-RbQFsFnRVSHc1VyGAD848q-okbSxfSDHqwrxnNPYlS6I3OFHBc0LJVvqrBnwdrsdqpVMISy0rGh5G0i8yj8a_n_eK4lK_v-o9Koeo" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y6RZNaqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://lh3.googleusercontent.com/WVMLQrk0f-_PM1v3oAa3twb_mhs5cIVvKFWDG7x_6yFGMCY5C-RbQFsFnRVSHc1VyGAD848q-okbSxfSDHqwrxnNPYlS6I3OFHBc0LJVvqrBnwdrsdqpVMISy0rGh5G0i8yj8a_n_eK4lK_v-o9Koeo" alt="" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id="h-ace-azure-cost-management-with-the-right-tooling"&gt;Ace Azure cost management with the right tooling&lt;/h2&gt;

&lt;p&gt;Azure cost management and optimization are by no means a one-time job. These are both processes you need to carry out regularly.&lt;/p&gt;

&lt;p&gt;To slash their workload, teams are turning to automation solutions to handle tasks such as resource selection and spot instance automation. &lt;/p&gt;

&lt;p&gt;This form of optimization requires no more effort from engineers and yields round-the-clock savings, even for teams that are already doing an excellent job manually optimizing their Azure setups. &lt;/p&gt;

&lt;p&gt;If you use Kubernetes and would like to see how automation could help you solve Azure cost management challenges, book a demo with one of our engineers to get a walkthrough of the platform we built specifically for the Azure Kubernetes Service.&lt;/p&gt;

&lt;h2 id="faq"&gt;FAQ&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is Azure Cost Management, and why is it important?&lt;/strong&gt; &lt;/p&gt;
&lt;p&gt;Azure Cost Management may refer to one of the native cost management tools Azure offers as part of its cloud services. It can also be interpreted as a more general term denoting the best practices and tooling for reducing the costs of running applications in Azure’s cloud. &lt;br&gt;&lt;/p&gt;  &lt;strong&gt;&lt;strong&gt;How can I effectively monitor and analyze my Azure spending?&lt;/strong&gt;&lt;/strong&gt; &lt;p&gt;Analyzing a cloud bill is a difficult task if you lack proper cost visibility, especially at the granularity levels that matter most to you. Using Azure's native cost management tools to analyze and monitor costs may be a good solution on a small scale. However, if your company’s cloud footprint grows and you start using cloud-native tools like Kubernetes, you’re going to need a more robust set of cost analysis and monitoring features. &lt;br&gt;&lt;br&gt;Real-time cost monitoring is a particularly important gap that you’ll need to fill, as Azure cost management tooling doesn’t provide you with it. That’s why so many teams are now turning to third-party cost management solutions that provide the right level of cost visibility, allow real-time cost monitoring, and automate cost optimization to achieve the highest possible savings.&lt;br&gt;&lt;/p&gt;  &lt;strong&gt;&lt;strong&gt;What tools and features are available within Azure Cost Management to help optimize costs?&lt;/strong&gt;&lt;/strong&gt; &lt;p&gt;Azure Cost Management includes a rich array of tools for monitoring and analyzing cloud costs. 
Here are a few examples of tools you’ll find as part of the suite.&lt;br&gt;&lt;br&gt;- &lt;strong&gt;Azure pricing calculator&lt;/strong&gt; – a handy calculator that helps in forecasting and planning cloud deployments from the cost perspective.&lt;br&gt;- &lt;strong&gt;Cost analysis tool&lt;/strong&gt; – this tool provides you with a detailed breakdown of your spending on the Azure cloud platform.&lt;br&gt;- &lt;strong&gt;Budgets&lt;/strong&gt; – you can use this tool to set a budget to align Azure spending with your requirements and limits.&lt;br&gt;- &lt;strong&gt;Azure cost management alerts &lt;/strong&gt;– you can set budget alerts, but also specific alerts for credit and department spending quotas.&lt;br&gt;- &lt;strong&gt;Azure Advisor &lt;/strong&gt;– a handy tool that analyzes resource use, and suggests alternative solutions to help improve cloud performance, security, and cost-effectiveness.&lt;/p&gt;  &lt;strong&gt;What are some best practices for implementing Azure cost optimization strategies?&lt;/strong&gt; &lt;p&gt;- &lt;strong&gt;Design with cost in mind &lt;/strong&gt;– take steps like selecting resources, setting budgets and limitations, dynamically allocating and deallocating resources, optimizing workloads, and monitoring and managing expenses. &lt;br&gt;- &lt;strong&gt;Rightsize your virtual machines&lt;/strong&gt; – establishing baseline needs and choosing the proper instance type and size might be difficult, but there are solutions that automate this process and select the best VMs for your application in line with its changing demands. &lt;br&gt;- &lt;strong&gt;Scale resources up and down according to application load &lt;/strong&gt;– make sure to remove virtual machines that don’t have any jobs assigned to them to avoid cloud waste.&lt;br&gt;- &lt;strong&gt;Use and automate spot VMs&lt;/strong&gt; - spot VMs let you use Azure's idle capacity for far less than on-demand pricing. 
Spot VMs are cheaper, but you need automation to use them properly. A solution like CAST AI will discover spot-friendly workloads, choose the relevant VMs, bid the price, and automatically switch your workload to on-demand instances in case of disruptions.&lt;/p&gt;  &lt;strong&gt;&lt;strong&gt;How can I take advantage of Azure reserved instances, spot instances, and Hybrid Benefit to save on costs?&lt;/strong&gt;&lt;/strong&gt; &lt;p&gt;AKS offers a number of alternatives to the on-demand pricing plan:&lt;br&gt;&lt;br&gt;&lt;strong&gt;Reserved instances&lt;br&gt;&lt;/strong&gt;&lt;br&gt;This option is a good pick for steady-state use. A year-long or three-year-long commitment provides price consistency, monthly payment alternatives, and priority computing capacity. Azure claims that 1-year reserved Virtual Machines Linux DSv2 saves 48% over pay-as-you-go.&lt;br&gt;&lt;br&gt;However, one or three years of commitment in the cloud is a long time. Your business or technology needs might change, and you’ll end up with massively underutilized capacity if you pay upfront. Also, forecasting your demand over the full term is difficult. &lt;br&gt;&lt;br&gt;&lt;strong&gt;Spot VMs&lt;br&gt;&lt;/strong&gt;&lt;br&gt;Spot Virtual Machine instances offer huge savings, up to 90% less than on-demand charges. Batch processing and machine learning training are excellent use cases for them. &lt;br&gt;&lt;br&gt;While spot VMs can save money quickly, not all workloads are suitable since Azure may reclaim the cloud resources at any moment, with only a 30-second warning. To make the most of spot VMs, you need to have a strategy for interruptions in place – and, ideally, a way of automating them &lt;a href="https://cast.ai/blog/how-to-reduce-cloud-costs-by-90-spot-instances-and-how-to-use-them/"&gt;just like CAST AI does&lt;/a&gt;. &lt;br&gt;&lt;br&gt;&lt;strong&gt;Hybrid Benefit&lt;br&gt;&lt;/strong&gt;&lt;br&gt;This is a licensing deal that enables you to migrate to Azure and save money. 
To be eligible for this benefit, you need to be paying for either Windows Server or SQL Server with Software Assurance, or operating instances of Red Hat Enterprise Linux or SUSE Linux Enterprise Server on a current Linux subscription.&lt;br&gt;&lt;br&gt;Azure Hybrid Benefit helps optimize business applications while achieving cost savings, modernizing and maintaining a flexible hybrid infrastructure.&lt;br&gt;&lt;/p&gt;  &lt;strong&gt;&lt;strong&gt;How can I set up budgets, alerts, and notifications to proactively manage my Azure spending?&lt;/strong&gt;&lt;/strong&gt; &lt;p&gt;You can find an alert function in Azure Cost Management. When usage surpasses the set threshold, you’ll get an alert and can quickly react to the issue before it snowballs into a massive cost problem.&lt;br&gt;&lt;br&gt;Take into account the workload's resource metrics and create alerts based on baseline thresholds for each metric. When the workload is using the services to capacity, you’ll get informed and can then adjust the resources to target SKUs.&lt;br&gt;&lt;br&gt;Additionally, you may configure alerts for permitted budgets at the management group or resource group scopes. You can balance the performance and spending needs of cloud services by establishing alerts on metrics and budgets.&lt;br&gt;&lt;/p&gt;  

</description>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>azure</category>
      <category>devops</category>
    </item>
    <item>
      <title>Azure PAM: How To Manage Access With Azure Bastion And Azure PIM  </title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Wed, 07 Jun 2023 10:01:15 +0000</pubDate>
      <link>https://forem.com/castai/azure-pam-how-to-manage-access-with-azure-bastion-and-azure-pim-1e8h</link>
      <guid>https://forem.com/castai/azure-pam-how-to-manage-access-with-azure-bastion-and-azure-pim-1e8h</guid>
      <description>&lt;p&gt;Privileged access management (PAM) is an identity security system that assists organizations in protecting themselves against cyber risks by monitoring, detecting, and preventing unwanted privileged access to important resources. Every cloud provider offers solutions for this, and Azure is no exception. But how do you make Azure PAM work for a cloud application?&lt;/p&gt;

&lt;h2 id="h-what-is-azure-privileged-access-management-pam-all-about"&gt;What is Azure privileged access management (PAM) all about?&lt;/h2&gt;

&lt;p&gt;Privileged access = access with increased administrative permissions. For example, using the SSH or RDP protocol to virtual machines running an application is considered “privileged,” especially if you get root or “administrator” access.&lt;/p&gt;

&lt;p&gt;Another area of privileged access centers around the creation, deletion, and updating of cloud resources in Azure. These types of actions require elevated permissions for Azure users specifically.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Azure provides diverse tooling to identify an acceptable level of security controls consistent with the current and future Identity and Access Management policies of your company.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In what follows, I focus on two specific Azure privileged access management solutions: Bastions and PIM.&lt;/p&gt;

&lt;h2 id="h-azure-bastion-for-host-access"&gt;Azure Bastion for host access&lt;/h2&gt;

&lt;p&gt;The Azure Bastion PaaS service comes in handy for configuring Azure VM host access, which is key in building Azure PAM. It allows you to connect to a VM using a browser and the Azure portal, or using the native SSH or RDP client already installed on a local computer. VMs don’t require public IPs, and no special agents are needed either.&lt;/p&gt;

&lt;p&gt;The following diagram depicts the network topology required for Bastion access:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2ojveGhl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cast.ai/wp-content/uploads/2023/05/image-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2ojveGhl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cast.ai/wp-content/uploads/2023/05/image-1.png" alt="Azure PAM" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://learn.microsoft.com/en-us/azure/bastion/bastion-overview"&gt;Azure&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Since VMs aren’t accessible over the internet, they’re not susceptible to port scanning and potential zero-day attacks against internet-exposed ports and protocols. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Azure Bastion is a hardened “jump box,” and Microsoft is responsible for patching, zero-day vulnerabilities, and network attacks.&lt;/p&gt;

&lt;h3 id="h-types-of-azure-bastion"&gt;Types of Azure Bastion&lt;/h3&gt;

&lt;p&gt;Azure Bastion comes in two SKUs: Basic and Standard. The differences between these offerings are as follows:&lt;/p&gt;

&lt;h4 id="h-session-management"&gt;Session Management&lt;/h4&gt;

&lt;p&gt;Azure Bastion can monitor distant sessions and perform swift management actions. Session monitoring allows you to see which users are connected to which virtual machines. It displays the IP address from which the user connected, how long they were connected, and when they connected. &lt;/p&gt;

&lt;p&gt;The session management experience lets you select an ongoing session and force-disconnect or delete a session to disconnect the user from the ongoing session.&lt;/p&gt;

&lt;h4 id="h-pricing"&gt;Pricing&lt;/h4&gt;

&lt;p&gt;How much does Azure Bastion cost? Here’s a pricing breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Bastion Basic – $0.19 per hour, $138.70 per month&lt;/li&gt;

&lt;li&gt;Azure Bastion Standard – $0.29 per hour, $211.70 per month&lt;/li&gt;

&lt;li&gt;Additional Standard Instance – $0.14 per hour, $102.20 per month&lt;/li&gt;
&lt;/ul&gt;
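&lt;p&gt;The monthly figures above are simply the hourly rates priced over a 730-hour month, which you can verify quickly:&lt;/p&gt;

```python
HOURS_PER_MONTH = 730  # the monthly-hours convention the listed prices imply

def monthly_cost(hourly_rate):
    """Convert an hourly rate to its monthly equivalent."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

monthly_cost(0.19)  # Basic: 138.70
monthly_cost(0.29)  # Standard: 211.70
monthly_cost(0.14)  # extra Standard instance: 102.20
```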

&lt;p&gt;Note that you now need only one Bastion service for all peered VNets. Previously, Bastion required an instance per VNet, which made the service cost-prohibitive when covering many VNets.&lt;/p&gt;

&lt;h4 id="h-opening-management-ports-just-in-time"&gt;Opening management ports – just in time&lt;/h4&gt;

&lt;p&gt;Closely related to privileged access, you can reduce the administrative attack surface by enabling VM management port access just in time, through an access request workflow. &lt;/p&gt;

&lt;p&gt;Microsoft Defender for Cloud provides this capability through the “secure management port” control feature.&lt;/p&gt;

&lt;p&gt;You can time-bind access to management ports and revoke it after a specified TTL. Furthermore, you can enforce a policy that only Azure Bastion hosts have access to management ports (as specified by security groups).&lt;/p&gt;

&lt;h2 id="h-azure-active-directory-and-privileged-identity-management-pim"&gt;Azure Active Directory and Privileged Identity Management (PIM)&lt;/h2&gt;

&lt;p&gt;Privileged Identity Management (PIM) is a service in Azure Active Directory (Azure AD) that allows you to manage, control, and monitor access to critical organizational resources. This includes Azure AD, Azure, and other Microsoft Online Services like Microsoft 365. &lt;/p&gt;

&lt;p&gt;PIM can help you achieve the following policy-driven objectives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow only-when-needed privileged access to Azure AD and Azure resources.&lt;/li&gt;



&lt;li&gt;Use start and end dates to assign time-bound access to resources.&lt;/li&gt;



&lt;li&gt;Require approval to activate privileged roles.&lt;/li&gt;



&lt;li&gt;Require multi-factor authentication to activate any role.&lt;/li&gt;



&lt;li&gt;Use justification to understand why users activate their roles.&lt;/li&gt;



&lt;li&gt;Receive alerts when privileged roles are activated.&lt;/li&gt;



&lt;li&gt;Conduct access audits to ensure that users still require roles.&lt;/li&gt;



&lt;li&gt;Save audit history for internal or external auditing purposes.&lt;/li&gt;



&lt;li&gt;Prevent the last active Global Administrator and Privileged Role Administrator role assignments from being removed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;PIM helps teams reach the goal of removing all console access from administrative users in their landing zone. They can then activate specific roles and permissions through the PIM-provided approval workflow. Access will be time bound and auditable.&lt;/p&gt;

&lt;h3 id="h-pricing-and-licensing-requirements-for-pim"&gt;Pricing and licensing requirements for PIM&lt;/h3&gt;

&lt;p&gt;Azure PIM comes at an additional cost. Using this feature requires Azure Active Directory Premium P2 licenses. P2 costs $9 per user per month – as compared with P1 which costs $6 per user per month. Note that you only need to license Azure users in this context. &lt;/p&gt;

&lt;h3 id="h-azure-devops-and-pim"&gt;Azure DevOps and PIM&lt;/h3&gt;

&lt;p&gt;Azure DevOps has been integrated with PIM since 2019. Azure AD has an Azure DevOps administrator role that you can use in conjunction with PIM to elevate permissions. &lt;/p&gt;

&lt;p&gt;Azure DevOps is a separate product, so there is a small caveat: users must log off and log back in to activate elevated privileges. At least one user has shared their experience combining AD Groups with PIM, and this setup seems to work well.&lt;/p&gt;

&lt;h2 id="h-there-s-more-to-discover-about-azure-pam"&gt;There’s more to discover about Azure PAM &lt;/h2&gt;

&lt;p&gt;In this article, I just scratched the surface of all the available Azure services for building privileged access management capabilities into a cloud application running in Azure. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you’re looking for more Azure security insights, check out &lt;a href="https://cast.ai/blog/identity-access-management-cloud-migration/"&gt;this article on identity access management (IAM)&lt;/a&gt; and a more &lt;a href="https://cast.ai/blog/6-key-elements-for-a-secure-cloud-migration/"&gt;high-level overview of security for cloud migration and beyond&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>azure</category>
      <category>devops</category>
    </item>
    <item>
      <title>Node Affinity, Node Selector, and Other Ways to Better Control Kubernetes Scheduling</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Fri, 26 May 2023 08:14:37 +0000</pubDate>
      <link>https://forem.com/cast_ai/node-affinity-node-selector-and-other-ways-to-better-control-kubernetes-scheduling-3no</link>
      <guid>https://forem.com/cast_ai/node-affinity-node-selector-and-other-ways-to-better-control-kubernetes-scheduling-3no</guid>
      <description>&lt;p&gt;Assigning pods to nodes is one of the most critical tasks of Kubernetes cluster management. While the default process can prove too generic, you can adjust it with advanced features like &lt;strong&gt;node affinity&lt;/strong&gt;.   &lt;/p&gt;

&lt;p&gt;The way the Kubernetes scheduler distributes pods across worker nodes impacts performance and resources and, therefore, your costs. It's then essential to understand how the process works and how to keep it in check. &lt;/p&gt;

&lt;p&gt;This article outlines basic Kubernetes scheduling concepts, including node selector, node affinity and anti-affinity, and pod affinity and anti-affinity. It also includes an example of how combining node affinity and automation can improve your workload's availability and fault tolerance. &lt;/p&gt;

&lt;h2 id="h-how-kubernetes-scheduling-works"&gt;How Kubernetes scheduling works&lt;/h2&gt;

&lt;p&gt;Kubernetes scheduling is about selecting a suitable node to run pods. &lt;em&gt;Kube-scheduler&lt;/em&gt; is part of the control plane and it selects nodes for new or not yet scheduled pods, by default trying to spread them evenly. &lt;/p&gt;

&lt;p&gt;Containers in pods can have different requirements, so the Kubernetes scheduler filters out any nodes that don't match the pod's specific needs. &lt;/p&gt;

&lt;p&gt;The Kubernetes scheduler identifies and scores all feasible nodes for your pod. It then picks the one with the highest score and notifies the API server about this decision. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0P5yHrEf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cast.ai/wp-content/uploads/2023/04/des-397-new-stack-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0P5yHrEf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cast.ai/wp-content/uploads/2023/04/des-397-new-stack-1.png" alt="Node affinity, node selector and other ways to better control Kubernetes scheduling" width="800" height="805"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Several factors impact the scheduler's decisions, such as resource requirements, hardware and software constraints, etc. &lt;/p&gt;

&lt;p&gt;The Kubernetes scheduler is fast, thanks to automation. However, it can also be expensive, as the default placement may leave you paying for resources that are a poor fit for your different environments. &lt;/p&gt;

&lt;p&gt;And as there's no easy way to track your costs in Kubernetes, teams must find other ways to keep their expenses in check.   &lt;/p&gt;

&lt;h2 id="how-to-control-the-scheduler-s-choices"&gt;How to control the scheduler’s choices &lt;/h2&gt;

&lt;p&gt;In a nutshell, you can control where your pods go with &lt;a href="https://cast.ai/blog/kubernetes-labels-expert-guide-with-10-best-practices/"&gt;Kubernetes labels&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Labels are key/value pairs you can manually attach to objects like pods and nodes. By using them, you can specify identifying attributes, organize, or select subsets of objects. &lt;/p&gt;

&lt;p&gt;The simplest way to constrain the Kubernetes scheduler is to use a node selector. &lt;/p&gt;

&lt;h3 id="how-does-a-node-selector-work"&gt;How does a node selector work? &lt;/h3&gt;

&lt;p&gt;Adding the node selector field to your pod specification with a key-value pair lets you indicate the labels you wish the target node to have. &lt;/p&gt;

&lt;p&gt;Kubernetes will only schedule pods onto the nodes matching the labels you specify. &lt;/p&gt;
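&lt;p&gt;As a minimal sketch, assuming a node carries a hypothetical &lt;em&gt;disktype: ssd&lt;/em&gt; label, a pod spec constrained to such nodes could look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  # Only nodes labeled disktype=ssd are considered feasible
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx:1.24.0&lt;/code&gt;&lt;/pre&gt;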

&lt;p&gt;&lt;strong&gt;The node selector is sufficient in small clusters but is usually unsuitable for complex cases. &lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For example, you may have an app that needs to run in separate availability zones. Or you may want to keep the API and database separate, e.g., when you don’t have many replicas.  &lt;/p&gt;

&lt;p&gt;That’s where the concept of affinity comes in handy.  &lt;/p&gt;

&lt;h3 id="moving-beyond-the-node-selector-with-affinity"&gt;Moving beyond the node selector with affinity&lt;/h3&gt;

&lt;p&gt;Affinity and anti-affinity expand the types of constraints you can add and give you more control over the selection logic. &lt;/p&gt;

&lt;p&gt;Using them, you can create "hard" (required) and "soft" (preferred) rules for different conditions, so that Kubernetes can schedule the pod even if there are no perfectly matching nodes. They also let you match the labels of pods running on the same nodes and specify the location of new pods more precisely. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s essential to keep in mind that there are two types of affinity:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node affinity&lt;/strong&gt; refers to impacting how pods get matched to nodes.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Pod affinity&lt;/strong&gt; specifies how pods can be scheduled based on the labels of pods already running on that node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s now discuss both of them to highlight the difference. &lt;/p&gt;

&lt;h2 id="node-affinity-what-is-it-and-how-does-it-work"&gt;Node affinity: what is it, and how does it work? &lt;/h2&gt;

&lt;p&gt;Similar to node selector, node affinity also lets you use labels to specify to which nodes Kube-scheduler should schedule your pods. &lt;/p&gt;

&lt;p&gt;You can specify it by adding the &lt;em&gt;.spec.affinity.nodeAffinity&lt;/em&gt; field in your pod. &lt;/p&gt;

&lt;p&gt;Remember that if you specify &lt;em&gt;nodeSelector&lt;/em&gt; and &lt;em&gt;nodeAffinity&lt;/em&gt;, both must be met for the pod to be scheduled. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There are two types of node affinity:   &lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/strong&gt; – when using this one, the scheduler will only schedule the pod if the node meets the rule.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/strong&gt; – in this scenario, the scheduler will try to find a node matching the rule, but it will still schedule the pod even if it doesn't find anything suitable. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The latter lets you assign each rule a weight between 1 and 100. &lt;/p&gt;

&lt;p&gt;When the scheduler finds nodes meeting all of your unscheduled pod’s requirements, Kube-scheduler iterates through every preferred rule that the node matches and adds the value of the weight to a sum. &lt;/p&gt;

&lt;p&gt;The Kubernetes scheduler then adds this sum to the final score, impacting your pod's final node decision. &lt;/p&gt;
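&lt;p&gt;To illustrate the weighting, here is a sketch of a pod spec that prefers one zone over another (the zone values are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: nginx-preferred-zone
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      # Nodes in eu-central-1a add 80 to the score, eu-central-1b adds 20;
      # if neither matches, the pod is still scheduled somewhere.
      - weight: 80
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - eu-central-1a
      - weight: 20
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - eu-central-1b
  containers:
  - name: nginx
    image: nginx:1.24.0&lt;/code&gt;&lt;/pre&gt;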

&lt;h2 id="what-is-pod-affinity"&gt;What is pod affinity?&lt;/h2&gt;

&lt;p&gt;Working along similar lines, this concept focuses on impacting the Kubernetes scheduler based on the labels on the pods already running on a given node. &lt;/p&gt;

&lt;p&gt;You can also specify it within the affinity section, using the &lt;em&gt;podAffinity&lt;/em&gt; and &lt;em&gt;podAntiAffinity&lt;/em&gt; fields in the pod spec.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pod affinity&lt;/strong&gt; assumes that a given pod can run in a specific location if there is already a pod meeting particular conditions. &lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Pod anti-affinity &lt;/strong&gt;offers the opposite functionality, preventing pods from running on the same node as pods matching particular criteria. &lt;/li&gt;
&lt;/ul&gt;
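&lt;p&gt;As a brief sketch, pod anti-affinity can keep replicas of the same app off a shared node (the &lt;em&gt;app: nginx&lt;/em&gt; label is illustrative; this is a fragment of a pod spec):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # Refuse nodes that already run a pod labeled app=nginx
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname&lt;/code&gt;&lt;/pre&gt;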

&lt;p&gt;There is a separate post diving into &lt;a href="https://cast.ai/blog/kubernetes-scheduler-how-to-make-it-work-with-inter-pod-affinity-and-anti-affinity/"&gt;inter-pod affinity and anti-affinity.&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;For now, let’s focus instead on one practical application of node affinity.&lt;/p&gt;

&lt;h2 id="node-affinity-in-action-high-availability-and-fault-tolerance"&gt;Node affinity in action: high availability and fault tolerance&lt;/h2&gt;

&lt;p&gt;Availability is the holy grail of migrating to the cloud, and you can also boost it with node affinity. &lt;/p&gt;

&lt;p&gt;By spreading pods across several different nodes, you can ensure that your application remains available even if one or more of those nodes fail. &lt;/p&gt;

&lt;p&gt;With node affinity, you can instruct the Kubernetes scheduler to choose nodes in different availability zones, data centers, or regions. By doing so, your app can continue running even if your AZ or data center experiences an outage.&lt;/p&gt;

&lt;p&gt;If you then add &lt;a href="https://cast.ai/"&gt;Kubernetes automation&lt;/a&gt;, you can ensure that pods get scheduled in the preferred zones even if they’re not present in your cluster. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here is an example deployment of on-demand instances on AWS with a node selector pinning pods to a single zone:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-cross-single-az
  labels:
    app: nginx-cross-single-az
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-single-az
  template:
    metadata:
      labels:
        app: nginx-single-az
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: "eu-central-1a"
      containers:
      - name: nginx
        image: nginx:1.24.0
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 2&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this case, the node selector will pick nodes with the label "&lt;strong&gt;topology.kubernetes.io/zone&lt;/strong&gt;" set to "&lt;strong&gt;eu-central-1a&lt;/strong&gt;".  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For comparison, here’s an example of node affinity set for multi-zone pod scheduling:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-cross-az
  labels:
    app: nginx-cross-az
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-cross-az
  template:
    metadata:
      labels:
        app: nginx-cross-az
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: "topology.kubernetes.io/zone"
                operator: In
                values:
                - eu-central-1a
                - eu-central-1b
                - eu-central-1c
      containers:
      - name: nginx
        image: nginx:1.24.0
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 2&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this scenario, CAST AI will create nodes across multiple AWS zones that match your requirements. It will use on-demand instances, and all provisioning will happen automatically.  &lt;/p&gt;

&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;

&lt;p&gt;Kubernetes affinity is an important feature that gives you better control over pod scheduling. &lt;/p&gt;

&lt;p&gt;Node and pod affinity and anti-affinity give you more say in where your pods get scheduled, unlocking more scheduling configurations. &lt;/p&gt;

&lt;p&gt;Add automation to ensure that your pods get distributed across the most suitable nodes at all times and to easily keep tabs on all related costs. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>6 Key Elements For A Secure Cloud Migration</title>
      <dc:creator>CAST AI</dc:creator>
      <pubDate>Fri, 31 Mar 2023 07:46:33 +0000</pubDate>
      <link>https://forem.com/castai/6-key-elements-for-a-secure-cloud-migration-4ocj</link>
      <guid>https://forem.com/castai/6-key-elements-for-a-secure-cloud-migration-4ocj</guid>
      <description>&lt;p&gt;Secure cloud migration is possible if you dedicate resources to it from the start. A security-first approach in cloud migration assumes that everything you migrate, refactor, and re-architect produces a highly secure environment, minimizing the risks of a breach, loss of operations, and data leaks. &lt;/p&gt;

&lt;p&gt;In the public cloud, security and compliance are shared responsibilities. That’s why it’s essential to identify the elements you need to take care of to ensure proper protection of your business data in the cloud. &lt;/p&gt;

&lt;p&gt;Read on to learn more about building a security-first cloud strategy and the six elements that should be its cornerstones. &lt;/p&gt;

&lt;p&gt;This is the second article in our series about cloud migration - &lt;a href="https://cast.ai/blog/4-cloud-networking-tips-for-a-smooth-cloud-migration-strategy/"&gt;check out the first one to learn more about networking&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="h-what-is-a-secure-cloud-migration"&gt;What is a secure cloud migration?&lt;/h2&gt;

&lt;p&gt;Cloud environments can change rapidly, moving in and out of compliance dynamically. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A security-first approach to cloud computing focuses on ongoing monitoring and management of threats to ensure the organization stays on top of all potential risks. In addition, it involves understanding and acting on these dangers through automated policies, processes, and controls.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Achieving the state of continuous security-first compliance calls for a combination of modern tools, techniques, and processes. Here are the six key elements your cloud strategy should include to promote and improve your security posture. &lt;/p&gt;

&lt;h2 id="h-6-components-of-a-secure-cloud-migration"&gt;6 components of a secure cloud migration&lt;/h2&gt;

&lt;h3 id="1-cloud-security-posture-management-cspm"&gt;1. Cloud Security Posture Management (CSPM)&lt;/h3&gt;

&lt;p&gt;All cloud service providers (CSPs) share a minimum set of best practices required for the security and compliance of resources stored in the cloud. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud Security Posture Management &lt;/strong&gt;(CSPM) solutions help to compare cloud resource configurations against such best practices. In addition, they can spot any configuration drifts due to ad-hoc changes or malicious intent. &lt;/p&gt;

&lt;p&gt;Microsoft Azure comes with a CSPM solution for its cloud resources, and AWS and GCP also offer parts of the related functionality. If you prefer to remain cloud-neutral, you can choose a third-party CSPM tool, such as Palo Alto, ZScaler, Orca Security, Wiz Security, or Secberus.&lt;/p&gt;

&lt;p&gt;CSPM solutions can read and check your cloud configurations, notify you of the issues in need of remediation, and provide detailed findings and recommendations. As a result, they help to ensure a high-level security score across all components.&lt;/p&gt;

&lt;p&gt;So ideally, your cloud environments should include a CSPM tool and strive for a minimum prescribed security and compliance score. As a rule of thumb, maturity level 3 (“Defined”) or higher against the NIST Cybersecurity Framework is a good target. &lt;/p&gt;

&lt;h3 id="endpoint-detection-and-response"&gt;Endpoint Detection and Response &lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Endpoint Detection and Response &lt;/strong&gt;(EDR) is a cybersecurity technology that continuously monitors end-user devices to detect and respond to threats like ransomware and malware.&lt;/p&gt;

&lt;p&gt;Each Virtual Machine in your cloud deployment should have a properly installed and configured Endpoint Security solution. &lt;/p&gt;

&lt;p&gt;Microsoft Azure provides Defender for Windows, but there is also a version for Linux. If you choose a vendor-agnostic approach, you still get a wide range of EDR products such as CrowdStrike, SentinelOne, Trend Micro, Broadcom, and many others.&lt;/p&gt;

&lt;p&gt;Having an EDR tool installed for all deployed VMs should be a part of the strategy enforced through both CSPM tools and Policy as Code. &lt;/p&gt;

&lt;h3 id="key-management-and-secrets-management"&gt;Key management and secrets management&lt;/h3&gt;

&lt;p&gt;Your cloud migration plan should include securing sensitive or secret configurations. &lt;/p&gt;

&lt;p&gt;Ideally, you should store such data in a central location using a cloud-provided secret management service. Such tools allow companies to securely store, transmit, and manage data like passwords, encryption keys, SSH keys, API keys, database credentials, tokens, and certificates. &lt;/p&gt;

&lt;p&gt;Key management and an appropriate FIPS 140-2 Hardware Security Module should support this setup. &lt;/p&gt;

&lt;p&gt;All cloud providers have functionally equivalent services for secrets and key management; you can also choose from numerous third-party tools.&lt;/p&gt;

&lt;h3 id="policy-as-code"&gt;Policy-as-code &lt;/h3&gt;

&lt;p&gt;The idea of policy-as-code involves writing code in a high-level language to manage and automate policies. This form makes it easier to use best practices for software development, like version control and automated testing and deployment.&lt;/p&gt;

&lt;p&gt;Examples of the rules your policy-as-code could include are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid using public IPs on Virtual Machines. &lt;/li&gt;



&lt;li&gt;All storage must use provided encryption keys stored in your key management solution (KMS).&lt;/li&gt;



&lt;li&gt;Public storage objects are not allowed.&lt;/li&gt;



&lt;li&gt;Web application firewall must be used in front of all API and web applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When collating your list of best practices, you can draw on industry standards such as CIS Benchmarks or CISA/NSA Kubernetes Hardening Guide. Then, you can deploy such policies through IaC and test them for efficacy. &lt;/p&gt;
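&lt;p&gt;To make the idea concrete, here is a sketch of one hardening rule expressed as code, assuming Kyverno as the Kubernetes policy engine (the policy and rule names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-network
spec:
  validationFailureAction: Enforce
  rules:
  - name: block-host-network
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "hostNetwork is not allowed."
      pattern:
        spec:
          # The =() anchor: if hostNetwork is set, it must be false
          =(hostNetwork): "false"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because such a policy lives in version control like any other manifest, it can be reviewed, tested, and rolled out through the same pipeline as the rest of your infrastructure. &lt;/p&gt;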

&lt;h3 id="logging-and-security-information-and-event-management-siem"&gt;Logging and Security Information and Event Management (SIEM)&lt;/h3&gt;

&lt;p&gt;All cloud resources generate logging data. Security information and event management (SIEM) technology collects event log data from multiple sources, helping you identify unusual activity and take appropriate action quickly. &lt;/p&gt;

&lt;p&gt;SIEM systems have different functions, but in general, they reduce cyber risks by keeping track of how users act, limiting access attempts, and making compliance reports.&lt;/p&gt;

&lt;p&gt;Your cloud deployment should integrate an end-to-end SIEM solution and use it to analyze log data for all components. Alerted by notifications, you’ll be able to discover suspicious activities and address them on the spot. &lt;/p&gt;

&lt;p&gt;Some of the recommended solutions in this category include IBM Security QRadar, Datadog Security Monitoring, AT&amp;amp;T Cybersecurity, FortiSIEM, and more. &lt;/p&gt;

&lt;h3 id="security-assurance-and-compliance"&gt;Security assurance and compliance&lt;/h3&gt;

&lt;p&gt;All key cloud service providers can ensure compliance with popular standards such as SOC 2 Type II, ISO 27001, PCI DSS (Payment Card Industry Data Security Standard), and many others. This certification can be a baseline for your Security Assurance and Compliance requirements.&lt;/p&gt;

&lt;p&gt;When structuring your work in this area, focus on the two key deliverables:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Ensure you get a CIS Benchmark score for all infrastructure elements&lt;/strong&gt;, with no best practice violations above “medium”. Aim for a minimum of 90% implemented checks with a passing score as measured by your selected CSPM solution. &lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Check your environment against the NIST Cybersecurity Framework 1.1&lt;/strong&gt;, again aiming for a minimum maturity level of 3 (“Defined”), ideally expanding to Level 4 (“Managed”) or 5 (“Optimized”). &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you use Kubernetes, a useful tool here could be &lt;a href="https://cast.ai/cloud-security/"&gt;CAST AI’s container security report&lt;/a&gt;. By automatically scanning your cluster configurations, it checks your compliance with essential industry standards and best practices. &lt;/p&gt;

&lt;p&gt;Once you check your compliance, you can establish a timeline for &lt;strong&gt;penetration testing&lt;/strong&gt;. This cybersecurity exercise helps you find any weak spots in your system's defenses that attackers could exploit. Penetration testing should cover both internal and external attack surfaces, and its prioritized findings should ideally guide your next steps. &lt;/p&gt;

&lt;h2 id="h-launch-a-secure-cloud-migration"&gt;Launch a secure cloud migration&lt;/h2&gt;

&lt;p&gt;Creating a security-first cloud strategy minimizes the odds of cyber attacks, protects your valuable assets, and improves your overall business agility. &lt;/p&gt;

&lt;p&gt;You can build a solid security structure for your cloud deployment by combining the outlined types of solutions and techniques – the investment will certainly pay off. &lt;/p&gt;

</description>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
