<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Learn2Skills</title>
    <description>The latest articles on Forem by Learn2Skills (@learnskills).</description>
    <link>https://forem.com/learnskills</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F750584%2F1d553fe6-dab3-4a02-9fdb-299a4a1bd0fe.jpg</url>
      <title>Forem: Learn2Skills</title>
      <link>https://forem.com/learnskills</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/learnskills"/>
    <language>en</language>
    <item>
      <title>Microsoft Ignite 2025 Release Updates</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Sat, 03 Jan 2026 05:54:36 +0000</pubDate>
      <link>https://forem.com/learnskills/microsoft-ignite-2025-release-updates-14ea</link>
      <guid>https://forem.com/learnskills/microsoft-ignite-2025-release-updates-14ea</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/KJf2VEkReCQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>azure</category>
      <category>container</category>
    </item>
    <item>
      <title>AWS re:Invent 2025 Announcements</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Thu, 25 Dec 2025 17:20:39 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-reinvent-2025-announcements-39pk</link>
      <guid>https://forem.com/aws-builders/aws-reinvent-2025-announcements-39pk</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;u&gt;AWS re:Invent 2025 Announcements &lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Artificial Intelligence
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-2-sonic-next-generation-speech-to-speech-model-for-conversational-ai" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Nova 2 Sonic: Advanced Speech-to-Speech Model for Conversational AI&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
Amazon launches Nova 2 Sonic, a next-generation speech-to-speech AI model designed to enhance natural voice interactions. The model supports multilingual conversations, dynamic speech control, and cross-modal inputs (integrating speech, text, images, and more), along with improved telephony integration. It maintains conversational context across multiple tasks, enabling more fluid, human-like dialogue in applications such as virtual assistants, customer support, and real-time translation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-2-lite-a-fast-cost-effective-reasoning-model" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Nova 2 Lite: Fast, Cost-Effective Reasoning AI Model&lt;/strong&gt;  &lt;/a&gt;&lt;br&gt;
Nova 2 Lite is introduced as a streamlined AI model optimized for everyday applications requiring quick, efficient reasoning. It offers an extended context window of up to one million tokens and comes equipped with built-in tools to facilitate complex thought processes, making it suitable for cost-sensitive deployments without sacrificing performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-forge-build-your-own-frontier-models-using-nova" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Nova Forge: Custom Frontier Model Development Program&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
Nova Forge empowers organizations to create bespoke frontier AI models by providing access to Nova’s training infrastructure. This program removes traditional barriers like high costs, extensive compute requirements, and lengthy development cycles, enabling companies to infuse domain-specific expertise into foundational models tailored to their unique needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-nova-2-omni-preview/" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Nova 2 Omni (Preview): Multimodal Reasoning and Image Generation&lt;/strong&gt;  &lt;/a&gt;&lt;br&gt;
Nova 2 Omni is an all-in-one AI model preview supporting multiple input types—text, images, videos, and speech—and capable of generating both text and image outputs. This multimodal architecture facilitates sophisticated reasoning across diverse data forms, opening new possibilities for integrated AI applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/build-reliable-ai-agents-for-ui-workflow-automation-with-amazon-nova-act-now-generally-available" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Nova Act: Reliable AI Agents for UI Workflow Automation&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Now generally available, Amazon Nova Act enables developers to build AI agents that automate complex browser-based tasks such as form filling, searching and extracting information, shopping and booking, and quality assurance testing. These agents achieve over 90% reliability, making them viable for enterprise-grade automation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon-bedrock-agentcore-adds-quality-evaluations-and-policy-controls-for-deploying-trusted-ai-agents" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Bedrock AgentCore: Enhanced AI Agent Deployment Controls&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
AgentCore enhances AI agent deployment with advanced policy controls, quality evaluations, improved memory management, and natural conversational capabilities. This facilitates scalable and trustworthy AI agent implementations across organizations, ensuring compliance and operational integrity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon-s3-vectors-now-generally-available-with-increased-scale-and-performance" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon S3 Vectors: Scalable, High-Performance Vector Storage&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
Amazon S3 Vectors reaches general availability, scaling vector storage and querying to unprecedented levels—up to 2 billion vectors per index with query latencies around 100 milliseconds. It supports expanded regional availability and reduces costs by up to 90% compared to specialized vector databases, making large-scale AI workloads more accessible and economical.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon-bedrock-adds-fully-managed-open-weight-models" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Bedrock: Expanded Foundation Model Access&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
Amazon Bedrock now offers 18 fully managed open-weight foundation models from industry leaders including Google, NVIDIA, OpenAI, Mistral AI, Kimi AI, MiniMax AI, and Qwen. The lineup includes the latest Mistral Large 3 and Ministral 3 models in various sizes (3B, 8B, 14B parameters), providing developers with a rich selection of pre-trained models optimized for diverse AI tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/accelerate-ai-development-using-amazon-sagemaker-ai-with-serverless-mlflow" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon SageMaker AI with Serverless MLflow: Simplified AI Experimentation&lt;/strong&gt;  &lt;/a&gt;&lt;br&gt;
SageMaker AI integrates serverless MLflow to streamline AI experimentation. This zero-infrastructure service deploys within minutes, auto-scales based on demand, and integrates seamlessly with SageMaker’s model customization and pipeline tools, accelerating development cycles and reducing operational overhead.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/improve-model-accuracy-with-reinforcement-fine-tuning-in-amazon-bedrock/" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Bedrock Reinforcement Fine-Tuning: Smarter AI Models with Less Effort&lt;/strong&gt;  &lt;/a&gt;&lt;br&gt;
Bedrock introduces reinforcement fine-tuning capabilities that improve model accuracy by 66% over base models using feedback-driven training. This approach eliminates the need for large labeled datasets or deep ML expertise, democratizing advanced model customization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Checkpointless and Elastic Training on Amazon SageMaker HyperPod&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
SageMaker HyperPod enhances AI training with checkpointless recovery, allowing instant continuation after failures, and elastic scaling that adjusts resources dynamically. These improvements accelerate model development by reducing downtime and optimizing compute utilization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Serverless Customization in Amazon SageMaker AI&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Further expanding on SageMaker’s capabilities, serverless customization enables rapid fine-tuning with automatic failure recovery and resource scaling, boosting productivity and simplifying AI model refinement.&lt;/p&gt;


&lt;h3&gt;
  
  
  Compute
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.aboutamazon.com/news/aws/aws-graviton-5-cpu-amazon-ec2" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS Graviton5: Most Powerful and Efficient CPU Yet&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
AWS introduces Graviton5, its fifth-generation CPU chip delivering superior price-performance across a broad spectrum of workloads on Amazon EC2. The chip combines efficiency with high computational power, catering to diverse applications from web hosting to complex data processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.aboutamazon.com/news/aws/trainium-3-ultraserver-faster-ai-training-lower-cost" rel="noopener noreferrer"&gt;&lt;strong&gt;Trainium3 UltraServers: Advanced AI Training and Deployment&lt;/strong&gt;  &lt;/a&gt;&lt;br&gt;
Trainium3 UltraServers, powered by AWS’s first 3nm AI chip, provide enhanced speed and cost-efficiency for AI training and inference. These servers enable organizations to tackle ambitious AI workloads more effectively, supporting growth in AI adoption across industries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-ec2-x8aedz-instances-powered-by-5th-gen-amd-epyc-processors-for-memory-intensive-workloads" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon EC2 X8aedz Instances: High-Performance Memory-Optimized Compute&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
The new EC2 X8aedz instances feature 5th Gen AMD EPYC processors with up to 5 GHz speeds and 3 TiB of memory. Designed for memory-intensive tasks like electronic design automation and large databases, these instances deliver exceptional single-threaded performance and scalability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/introducing-aws-lambda-managed-instances-serverless-simplicity-with-ec2-flexibility" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS Lambda Managed Instances: Serverless Benefits with EC2 Flexibility&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Lambda Managed Instances allow running Lambda functions on EC2 infrastructure, combining serverless simplicity with the flexibility to use specialized hardware and cost-optimized EC2 pricing. AWS manages the underlying infrastructure, simplifying deployment of workloads requiring unique compute resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Lambda Durable Functions: Multi-Step AI Workflows&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Durable Functions extend Lambda’s capabilities by enabling orchestration of multi-step applications that can run reliably over long periods (up to one year). This feature eliminates the need to pay for idle compute time during waits for external events or human input, optimizing cost and resource use.&lt;/p&gt;


&lt;h3&gt;
  
  
  Containers
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/announcing-amazon-eks-capabilities-for-workload-orchestration-and-cloud-resource-management" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon EKS Enhancements: Workload Orchestration and Cloud Resource Management&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
Amazon EKS introduces new fully managed features that streamline Kubernetes workload orchestration and cloud resource management. These enhancements reduce infrastructure maintenance burdens while offering enterprise-level reliability, security, and operational efficiency.&lt;/p&gt;


&lt;h3&gt;
  
  
  Database
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/introducing-database-savings-plans-for-aws-databases/" rel="noopener noreferrer"&gt;&lt;strong&gt;Database Savings Plans for AWS Databases&lt;/strong&gt;  &lt;/a&gt;&lt;br&gt;
A new pricing model, Database Savings Plans, helps organizations optimize costs while maintaining flexibility across database services and deployment options, encouraging more cost-effective database management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon-rds-for-oracle-and-rds-for-sql-server-add-new-capabilities-to-enhance-performance-and-optimize-costs" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon RDS for SQL Server and Oracle: Cost and Scalability Improvements&lt;/strong&gt;  &lt;/a&gt;&lt;br&gt;
Amazon RDS introduces new capabilities including SQL Server Developer Edition support, optimized CPU performance with M7i/R7i instances, and expanded storage options up to 256 TiB. These features enhance cost efficiency and scalability for development, testing, and production environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon OpenSearch Service: GPU-Accelerated Vector Database Performance&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
OpenSearch Service now supports GPU acceleration and auto-optimization for vector databases, enabling workloads to run up to 10 times faster at 25% of previous costs. This advancement balances search quality, speed, and resource usage for large-scale AI search applications.&lt;/p&gt;


&lt;h3&gt;
  
  
  Global Infrastructure
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS AI Factories: On-Premises AI Infrastructure Deployment&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
AWS AI Factories provide fully managed AI infrastructure that can be deployed within enterprise and government data centers. This solution integrates foundation models, specialized hardware, and AWS services, accelerating AI initiatives while ensuring data residency and compliance requirements are met.&lt;/p&gt;


&lt;h3&gt;
  
  
  Management &amp;amp; Governance
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS DevOps Agent (Preview): Autonomous Incident Response&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The DevOps Agent acts like an autonomous on-call engineer, analyzing data from CloudWatch, GitHub, ServiceNow, and more to identify root causes and coordinate incident response. This tool accelerates issue resolution and improves system reliability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced AWS Support Plans: AI-Powered Expert Guidance&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
New AWS Support plans combine AI-driven insights with expert human guidance to proactively monitor and prevent cloud infrastructure issues. These plans offer faster response times and comprehensive coverage across performance, security, and cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon CloudWatch: Unified Data Management and Analytics&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
CloudWatch introduces automatic normalization of data from multiple sources, native analytics integration, and support for standards like OCSF and Apache Iceberg. These capabilities reduce complexity, lower costs, and improve operational, security, and compliance analytics.&lt;/p&gt;


&lt;h3&gt;
  
  
  Migration &amp;amp; Modernization
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Transform Custom: AI-Powered Code Modernization&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
AWS Transform Custom leverages AI to automate code modernization at scale, learning organizational patterns to transform repositories and reduce execution time by up to 80%. This accelerates tech debt reduction and application modernization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Transform for Windows: Full-Stack Modernization&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This service modernizes Windows applications up to five times faster by coordinating AI-powered transformations across code, UI frameworks, databases, and deployment configurations, enabling comprehensive modernization efforts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Transform for Mainframe: Reimagine and Automated Testing&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
New capabilities support mainframe modernization by transforming legacy applications into cloud-native architectures while automating complex testing. This reduces modernization timelines from years to months through intelligent analysis and automated test generation.&lt;/p&gt;


&lt;h3&gt;
  
  
  Networking &amp;amp; Content Delivery
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-route-53-global-resolver-for-secure-anycast-dns-resolution-preview/" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Route 53 Global Resolver (Preview): Secure Anycast DNS Resolution&lt;/strong&gt;  &lt;/a&gt;&lt;br&gt;
Global Resolver simplifies hybrid DNS management by resolving both public and private domains globally via secure anycast-based DNS. This unified service reduces operational complexity and maintains consistent security controls across hybrid environments.&lt;/p&gt;


&lt;h3&gt;
  
  
  Partner Network
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Partner Central: Console Integration&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Partner Central is now accessible directly within the AWS Management Console, streamlining the partner journey from customer onboarding to managing solutions, opportunities, and marketplace listings with enterprise-grade security in a unified interface.&lt;/p&gt;


&lt;h3&gt;
  
  
  Security, Identity, &amp;amp; Compliance
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/new-aws-security-agent-secures-applications-proactively-from-design-to-deployment-preview" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS Security Agent (Preview): Proactive Application Security&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
The Security Agent scales AppSec expertise through AI-powered design reviews, code analysis, and contextual penetration testing tailored to unique application architectures, enhancing security from design through deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon-guardduty-adds-extended-threat-detection-for-amazon-ec2-and-amazon-ecs" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon GuardDuty: Extended Threat Detection&lt;/strong&gt;  &lt;/a&gt;&lt;br&gt;
GuardDuty now offers extended threat detection across Amazon EC2 and ECS, providing unified visibility into virtual machines and containers. This helps identify complex multi-stage attacks affecting interconnected AWS workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Security Hub: Near Real-Time Analytics and Risk Prioritization&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Security Hub is generally available with capabilities to correlate security signals in near real-time across AWS environments, enabling faster risk response and improved security posture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM Policy Autopilot: Open Source MCP Server for Policy Generation&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
IAM Policy Autopilot accelerates policy creation by analyzing application code to generate valid IAM policies. It provides AI coding assistants with current AWS service knowledge and permission recommendations, simplifying secure development.&lt;/p&gt;


&lt;h3&gt;
  
  
  Storage
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Amazon FSx for NetApp ONTAP Integration with Amazon S3&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
FSx for NetApp ONTAP now integrates seamlessly with Amazon S3, enabling direct file system data access via S3. This facilitates unified workflows with AWS analytics, ML, and generative AI services without moving or duplicating data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/announcing-replication-support-and-intelligent-tiering-for-amazon-s3-tables" rel="noopener noreferrer"&gt;&lt;strong&gt;Replication Support and Intelligent-Tiering for Amazon S3 Tables&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
New features introduce automated cost optimization through intelligent-tiered storage and simplified replication of S3 Tables across regions and accounts, enhancing data availability and cost efficiency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/amazon-s3-storage-lens-adds-performance-metrics-support-for-billions-of-prefixes-and-export-to-s3-tables" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon S3 Storage Lens: Enhanced Performance Metrics and Scalability&lt;/strong&gt; &lt;/a&gt; &lt;br&gt;
Storage Lens adds advanced performance metrics, supports analysis of billions of prefixes, and enables metric exports to S3 Tables. These enhancements help optimize application performance and simplify large-scale data analytics.&lt;/p&gt;



&lt;p&gt;This summary covers the latest AWS launches and updates across analytics, AI, compute, containers, databases, infrastructure, management, migration, networking, partner programs, security, and storage. Together, these innovations are designed to accelerate cloud adoption, optimize cost and performance, and enhance security and manageability.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/3IebUgYrgpg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;




</description>
      <category>aws</category>
      <category>containers</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Monitor Amazon ECS containers with ECS Exec</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Thu, 25 Dec 2025 14:09:45 +0000</pubDate>
      <link>https://forem.com/aws-builders/monitor-amazon-ecs-containers-with-ecs-exec-4lgb</link>
      <guid>https://forem.com/aws-builders/monitor-amazon-ecs-containers-with-ecs-exec-4lgb</guid>
      <description>&lt;p&gt;Amazon ECS Exec enables direct interaction with running containers for troubleshooting and monitoring without needing SSH access or host-level intervention. This feature simplifies diagnostics by allowing commands or shells inside containers on EC2, Fargate, or ECS Anywhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;br&gt;
ECS Exec supports Linux and Windows containers, logs executed commands to Amazon CloudWatch Logs or Amazon S3, records API activity in AWS CloudTrail for auditing, and uses AWS KMS to encrypt session data. Enable it at the cluster level with &lt;code&gt;executeCommandConfiguration&lt;/code&gt; and per service or task via &lt;code&gt;enableExecuteCommand&lt;/code&gt;.&lt;/p&gt;
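&lt;p&gt;The cluster-level settings can be applied with the &lt;code&gt;update-cluster&lt;/code&gt; command. The following is an illustrative sketch, assuming an existing cluster; the cluster name, KMS key ID, and log group name are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder names: replace cluster-name, my-kms-key-id, and my-log-group
aws ecs update-cluster \
    --cluster cluster-name \
    --configuration 'executeCommandConfiguration={kmsKeyId=my-kms-key-id,logging=OVERRIDE,logConfiguration={cloudWatchLogGroupName=my-log-group,cloudWatchEncryptionEnabled=true}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;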

&lt;p&gt;The Amazon ECS console now supports ECS Exec, enabling you to open secure, interactive shell access directly from the AWS Management Console to any running container.&lt;/p&gt;

&lt;p&gt;ECS customers often need to access running containers to debug applications and examine running processes. ECS Exec provides easy and secure access to running containers without requiring inbound ports or SSH key management.&lt;/p&gt;

&lt;p&gt;To get started, you can turn on ECS Exec directly in the console when creating or updating services and standalone tasks. Additional settings like encryption and logging can also be configured at the cluster level through the console. Once enabled, simply navigate to a task details page, select a container, and click "Connect" to open an interactive session through CloudShell. The console also displays the underlying AWS CLI command, which you can customize or copy to use in your local terminal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring ECS Exec&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To use ECS Exec, you must first turn on the feature for your tasks and services, and then you can run commands in your containers.&lt;/p&gt;

&lt;p&gt;Turning on ECS Exec for your tasks and services&lt;br&gt;
You can turn on the ECS Exec feature for your services and standalone tasks by specifying the --enable-execute-command flag when using one of the following AWS CLI commands: &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html" rel="noopener noreferrer"&gt;create-service&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html" rel="noopener noreferrer"&gt;update-service&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ecs/start-task.html" rel="noopener noreferrer"&gt;start-task&lt;/a&gt;, or &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ecs/run-task.html" rel="noopener noreferrer"&gt;run-task&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For example, if you run the following command, the ECS Exec feature is turned on for a newly created service that runs on Fargate. For more information about creating services, see &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html" rel="noopener noreferrer"&gt;create-service&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs create-service \
    --cluster cluster-name \
    --task-definition task-definition-name \
    --enable-execute-command \
    --service-name service-name \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-12344321],securityGroups=[sg-12344321],assignPublicIp=ENABLED}" \
    --desired-count 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you turn on ECS Exec for a task, you can run the following command to confirm the task is ready to be used. If the &lt;code&gt;lastStatus&lt;/code&gt; property of the &lt;code&gt;ExecuteCommandAgent&lt;/code&gt; is listed as &lt;code&gt;RUNNING&lt;/code&gt; and the &lt;code&gt;enableExecuteCommand&lt;/code&gt; property is set to &lt;code&gt;true&lt;/code&gt;, then your task is ready.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs describe-tasks \
    --cluster cluster-name \
    --tasks task-id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
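&lt;p&gt;Once the task reports ready, you can open an interactive shell in a running container with &lt;code&gt;execute-command&lt;/code&gt;. A minimal sketch: the cluster, task, and container names are placeholders, and the Session Manager plugin for the AWS CLI must be installed locally:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder names: replace cluster-name, task-id, and container-name
aws ecs execute-command \
    --cluster cluster-name \
    --task task-id \
    --container container-name \
    --interactive \
    --command "/bin/sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;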



&lt;p&gt;IAM permissions required for Amazon CloudWatch Logs or Amazon S3 logging&lt;br&gt;
To enable logging, the Amazon ECS task role that's referenced in your task definition needs additional permissions. These permissions can be attached as a policy to the task role, and they differ depending on whether you direct your logs to Amazon CloudWatch Logs or Amazon S3.&lt;/p&gt;
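&lt;p&gt;As an illustrative sketch, a task role policy for directing session logs to CloudWatch Logs might look like the following; the Region, account ID, and log group name are placeholders, so check the ECS Exec documentation for the exact permissions your configuration requires:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:DescribeLogGroups",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/ecs/cluster-name:*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;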

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html" rel="noopener noreferrer"&gt;Amazon ECS Exec&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec-run.html" rel="noopener noreferrer"&gt;Running commands using ECS Exec&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




</description>
      <category>cloud</category>
      <category>ecs</category>
      <category>containers</category>
      <category>aws</category>
    </item>
    <item>
      <title>Creating an Amazon ECS service that uses Service Discovery</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Mon, 17 Nov 2025 09:56:46 +0000</pubDate>
      <link>https://forem.com/aws-builders/creating-an-amazon-ecs-service-that-uses-service-discovery-2d0n</link>
      <guid>https://forem.com/aws-builders/creating-an-amazon-ecs-service-that-uses-service-discovery-2d0n</guid>
      <description>&lt;p&gt;Amazon ECS services with Service Discovery enable dynamic discovery of containerized services using AWS Cloud Map, allowing tasks to register and find each other by DNS names without hardcoding IPs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Concepts&lt;/strong&gt;&lt;br&gt;
Service Discovery integrates ECS with Cloud Map to automatically register service instances with custom names and health checks. Use it for microservices where tasks need to communicate via service names (e.g., api.default.local). It supports private and public namespaces and automatically deregisters tasks when they stop.&lt;/p&gt;

&lt;p&gt;When an ECS task associated with a service discovery-enabled ECS service starts, Amazon ECS automatically registers the task's IP address and port with AWS Cloud Map. Other services within the same Cloud Map namespace can then resolve the service's name (e.g., myservice.example.com) to discover the IP addresses of the running tasks and establish connections. This eliminates the need for manual IP address management and provides a flexible, dynamic way for services to interact.&lt;/p&gt;
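&lt;p&gt;For example, once tasks are registered, any other container or instance in the same VPC can resolve the service with standard DNS tooling. A quick sketch, assuming a service named myapplication in a private namespace named tutorial (the names used in this walkthrough):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run from inside the VPC, e.g., another container or an EC2 instance
dig +short myapplication.tutorial

# Connect to the service directly by name
curl http://myapplication.tutorial/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;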

&lt;p&gt;&lt;strong&gt;Before you start this tutorial, make sure that the following prerequisites are met:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The latest version of the AWS CLI is installed and configured. For more information, &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;see Installing or updating to the latest version of the AWS CLI.&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The steps described in &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html" rel="noopener noreferrer"&gt;Set up to use Amazon ECS are complete&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your IAM user has the required permissions specified in the &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonECS_FullAccess" rel="noopener noreferrer"&gt;AmazonECS_FullAccess&lt;/a&gt; IAM policy example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have created at least one VPC and one security group. For more information, see &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html#create-a-vpc" rel="noopener noreferrer"&gt;Create a virtual private cloud.&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create the Service Discovery resources in AWS Cloud Map&lt;/strong&gt;&lt;br&gt;
Follow these steps to create your service discovery namespace and service discovery service:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a private Cloud Map service discovery namespace. This example creates a namespace that's called tutorial. Replace vpc-abcd1234 with the ID of one of your existing VPCs.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery create-private-dns-namespace \
      --name tutorial \
      --vpc vpc-abcd1234
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Using the &lt;code&gt;OperationId&lt;/code&gt; from the output of the previous step, verify that the private namespace was created successfully. Make note of the namespace ID because you use it in subsequent commands.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery get-operation \
      --operation-id h2qe3s6dxftvvt7riu6lfy2f6c3jlhf4-je6chs2e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Using the namespace ID from the output of the previous step, create a service discovery service. This example creates a service named myapplication. Make note of the service ID and ARN because you use them in subsequent commands.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery create-service \
      --name myapplication \
      --dns-config "NamespaceId="ns-uejictsjen2i4eeg",DnsRecords=[{Type="A",TTL="300"}]" \
      --health-check-custom-config FailureThreshold=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Step 2: Create the Amazon ECS resources&lt;/strong&gt;&lt;br&gt;
Follow these steps to create your Amazon ECS cluster, task definition, and service:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an Amazon ECS cluster. This example creates a cluster that's named tutorial.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs create-cluster \
      --cluster-name tutorial
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Register a task definition that's compatible with Fargate and uses the awsvpc network mode. Follow these steps:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a. Create a file that's named fargate-task.json with the contents of the following task definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "family": "tutorial-task-def",
        "networkMode": "awsvpc",
        "containerDefinitions": [
            {
                "name": "sample-app",
                "image": "public.ecr.aws/docker/library/httpd:2.4",
                "portMappings": [
                    {
                        "containerPort": 80,
                        "hostPort": 80,
                        "protocol": "tcp"
                    }
                ],
                "essential": true,
                "entryPoint": [
                    "sh",
                    "-c"
                ],
                "command": [
                    "/bin/sh -c \"echo '&amp;lt;html&amp;gt; &amp;lt;head&amp;gt; &amp;lt;title&amp;gt;Amazon ECS Sample App&amp;lt;/title&amp;gt; &amp;lt;style&amp;gt;body {margin-top: 40px; background-color: #333;} &amp;lt;/style&amp;gt; &amp;lt;/head&amp;gt;&amp;lt;body&amp;gt; &amp;lt;div style=color:white;text-align:center&amp;gt; &amp;lt;h1&amp;gt;Amazon ECS Sample App&amp;lt;/h1&amp;gt; &amp;lt;h2&amp;gt;Congratulations!&amp;lt;/h2&amp;gt; &amp;lt;p&amp;gt;Your application is now running on a container in Amazon ECS.&amp;lt;/p&amp;gt; &amp;lt;/div&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;' &amp;gt;  /usr/local/apache2/htdocs/index.html &amp;amp;&amp;amp; httpd-foreground\""
                ]
            }
        ],
        "requiresCompatibilities": [
            "FARGATE"
        ],
        "cpu": "256",
        "memory": "512"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
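&lt;p&gt;The cpu and memory values in a Fargate task definition must be one of the supported pairings. A small Python sketch that checks the tutorial's 256 CPU / 512 MB combination against a partial table of valid sizes (only the smaller sizes are listed here; see the Fargate documentation for the full table):&lt;/p&gt;

```python
# Partial table of valid Fargate task sizes: CPU units -> allowed memory (MiB).
VALID_FARGATE_SIZES = {
    "256": {"512", "1024", "2048"},
    "512": {str(m) for m in range(1024, 4096 + 1, 1024)},
    "1024": {str(m) for m in range(2048, 8192 + 1, 1024)},
}

def is_valid_fargate_size(cpu: str, memory: str) -> bool:
    """Return True if the cpu/memory pair is a supported Fargate combination."""
    return memory in VALID_FARGATE_SIZES.get(cpu, set())

# The tutorial task definition uses cpu=256, memory=512.
print(is_valid_fargate_size("256", "512"))   # True
print(is_valid_fargate_size("256", "4096"))  # False
```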



&lt;p&gt;b. Register the task definition using fargate-task.json.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs register-task-definition \
      --cli-input-json file://fargate-task.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Create an ECS service by following these steps:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a. Create a file that's named ecs-service-discovery.json with the contents of the ECS service that you're creating. This example uses the task definition that was created in the previous step. An awsvpcConfiguration is required because the example task definition uses the awsvpc network mode.&lt;/p&gt;

&lt;p&gt;When you create the ECS service, specify Fargate and the LATEST platform version that supports service discovery. The registryArn is the ARN that was returned when the service discovery service was created in AWS Cloud Map. The securityGroups and subnets must belong to the VPC that's used to create the Cloud Map namespace. You can obtain the security group and subnet IDs from the Amazon VPC Console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "cluster": "tutorial",
    "serviceName": "ecs-service-discovery",
    "taskDefinition": "tutorial-task-def",
    "serviceRegistries": [
       {
          "registryArn": "arn:aws:servicediscovery:region:aws_account_id:service/srv-utcrh6wavdkggqtk"
       }
    ],
    "launchType": "FARGATE",
    "platformVersion": "LATEST",
    "networkConfiguration": {
       "awsvpcConfiguration": {
          "assignPublicIp": "ENABLED",
          "securityGroups": [ "sg-abcd1234" ],
          "subnets": [ "subnet-abcd1234" ]
       }
    },
    "desiredCount": 1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
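&lt;p&gt;The Cloud Map service ID embedded in the registryArn is needed again for the list-instances and deregister-instance calls later. A minimal Python sketch that extracts it, assuming the standard ARN layout:&lt;/p&gt;

```python
def service_id_from_registry_arn(arn: str) -> str:
    """Extract the Cloud Map service ID from a registryArn.

    ARN layout: arn:aws:servicediscovery:region:account:service/srv-xxxx
    """
    resource = arn.split(":")[-1]      # "service/srv-utcrh6wavdkggqtk"
    return resource.split("/", 1)[1]   # "srv-utcrh6wavdkggqtk"

arn = "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-utcrh6wavdkggqtk"
print(service_id_from_registry_arn(arn))  # srv-utcrh6wavdkggqtk
```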



&lt;p&gt;b. Create your ECS service using ecs-service-discovery.json.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs create-service \
      --cli-input-json file://ecs-service-discovery.json 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Verify Service Discovery in AWS Cloud Map&lt;/strong&gt;&lt;br&gt;
You can verify that everything was created properly by querying your service discovery information. After service discovery is configured, you can either use AWS Cloud Map API operations or call dig from an instance within your VPC. Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Using the service discovery service ID, list the service discovery instances. Make note of the instance ID for resource cleanup.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; aws servicediscovery list-instances \
       --service-id srv-utcrh6wavdkggqtk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Use the service discovery namespace, service, and additional parameters such as ECS cluster name to query details about the service discovery instances.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery discover-instances \
      --namespace-name tutorial \
      --service-name myapplication \
      --query-parameters ECS_CLUSTER_NAME=tutorial
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
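&lt;p&gt;To pull the task IP addresses out of the discover-instances response, you can read the AWS_INSTANCE_IPV4 attribute of each instance. A Python sketch against a hypothetical sample response (the instance ID and IP below are illustrative):&lt;/p&gt;

```python
import json

# Hedged sample of a discover-instances response; Cloud Map registers ECS
# task IPs under the AWS_INSTANCE_* attribute names.
sample = json.loads("""
{
  "Instances": [
    {
      "InstanceId": "16becc26-8558-4af1-9fbd-f81be062a266",
      "NamespaceName": "tutorial",
      "ServiceName": "myapplication",
      "Attributes": {"AWS_INSTANCE_IPV4": "10.0.1.23", "AWS_INSTANCE_PORT": "80"}
    }
  ]
}
""")

# Collect one IP per registered task.
ips = [i["Attributes"]["AWS_INSTANCE_IPV4"] for i in sample["Instances"]]
print(ips)
```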


&lt;ol start="3"&gt;
&lt;li&gt;The DNS records that are created in the Route 53 hosted zone for the service discovery service can be queried with the following AWS CLI commands:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a. Using the namespace ID, get information about the namespace, which includes the Route 53 hosted zone ID.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery \
      get-namespace --id ns-uejictsjen2i4eeg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. Using the Route 53 hosted zone ID from the previous step, get the resource record set for the hosted zone.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws route53 list-resource-record-sets \
      --hosted-zone-id Z35JQ4ZFDRYPLV 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;You can also query the DNS from an instance within your VPC using dig.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dig +short myapplication.tutorial
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
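&lt;p&gt;The name that dig resolves is simply the service name joined to the namespace name. A trivial Python sketch of the pattern:&lt;/p&gt;

```python
def discovery_dns_name(service: str, namespace: str) -> str:
    """Build the private DNS name Cloud Map creates: <service>.<namespace>."""
    return f"{service}.{namespace}"

# The tutorial's service and namespace names.
print(discovery_dns_name("myapplication", "tutorial"))  # myapplication.tutorial
```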



&lt;p&gt;&lt;strong&gt;Step 4: Clean up&lt;/strong&gt;&lt;br&gt;
When you're finished with this tutorial, clean up the associated resources to avoid incurring charges for unused resources. Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deregister the service discovery service instance using the service ID and the instance ID that you noted earlier.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery deregister-instance \
      --service-id srv-utcrh6wavdkggqtk \
      --instance-id 16becc26-8558-4af1-9fbd-f81be062a266
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Using the OperationId from the output of the previous step, verify that the service discovery service instances were deregistered successfully.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery get-operation \ 
      --operation-id xhu73bsertlyffhm3faqi7kumsmx274n-jh0zimzv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Delete the service discovery service using the service ID.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery delete-service \ 
      --id srv-utcrh6wavdkggqtk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Delete the service discovery namespace using the namespace ID.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery delete-namespace \ 
      --id ns-uejictsjen2i4eeg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Using the OperationId from the output of the previous step, verify that the service discovery namespace was deleted successfully.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws servicediscovery get-operation \ 
      --operation-id c3ncqglftesw4ibgj5baz6ktaoh6cg4t-jh0ztysj
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="6"&gt;
&lt;li&gt;Update the desired count for the Amazon ECS service to 0. You must do this to delete the service in the next step.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs update-service \
      --cluster tutorial \
      --service ecs-service-discovery \
      --desired-count 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="7"&gt;
&lt;li&gt;Delete the Amazon ECS service.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs delete-service \
      --cluster tutorial \
      --service ecs-service-discovery
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="8"&gt;
&lt;li&gt;Delete the Amazon ECS cluster.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs delete-cluster \
      --cluster tutorial
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
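&lt;p&gt;The cleanup steps above must run in dependency order: deregister instances before deleting the Cloud Map service, delete the service before the namespace, and scale the ECS service to zero before deleting it. A Python sketch that captures the ordering (the IDs are the tutorial placeholders):&lt;/p&gt;

```python
# Cleanup commands in dependency order; running them out of order will fail
# because Cloud Map and ECS refuse to delete resources that are still in use.
cleanup_commands = [
    "aws servicediscovery deregister-instance --service-id srv-utcrh6wavdkggqtk --instance-id 16becc26-8558-4af1-9fbd-f81be062a266",
    "aws servicediscovery delete-service --id srv-utcrh6wavdkggqtk",
    "aws servicediscovery delete-namespace --id ns-uejictsjen2i4eeg",
    "aws ecs update-service --cluster tutorial --service ecs-service-discovery --desired-count 0",
    "aws ecs delete-service --cluster tutorial --service ecs-service-discovery",
    "aws ecs delete-cluster --cluster tutorial",
]
for cmd in cleanup_commands:
    print(cmd)
```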



&lt;p&gt;&lt;strong&gt;Key Benefits vs Alternatives&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwattyxmh38reimr6gsbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwattyxmh38reimr6gsbs.png" alt=" " width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Practice Question: What is required for ECS Service Discovery?&lt;br&gt;
Answer: Cloud Map namespace/service + awsvpc networking; auto-registers task IPs as DNS records.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service discovery pricing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Customers using Amazon ECS service discovery are charged for Route 53 resources and AWS Cloud Map discovery API operations. This includes costs for creating the Route 53 hosted zones and for queries to the service registry. For more information, see &lt;a href="https://docs.aws.amazon.com/cloud-map/latest/dg/cloud-map-pricing.html" rel="noopener noreferrer"&gt;AWS Cloud Map Pricing&lt;/a&gt;.&lt;br&gt;
Amazon ECS performs container-level health checks and exposes them to AWS Cloud Map custom health check API operations. This is currently made available to customers at no extra cost. If you configure additional network health checks for publicly exposed tasks, you're charged for those health checks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flow Diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F868y51l7t0r2ycw854na.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F868y51l7t0r2ycw854na.png" alt=" " width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html" rel="noopener noreferrer"&gt;Use service discovery to connect Amazon ECS services with DNS names&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cloud-map/latest/dg/what-is-cloud-map.html" rel="noopener noreferrer"&gt;What Is AWS Cloud Map?&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




</description>
      <category>aws</category>
      <category>containers</category>
      <category>cloud</category>
      <category>ecs</category>
    </item>
    <item>
      <title>Amazon EMR on EKS now supports Service Quotas</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Mon, 30 Jun 2025 09:55:49 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-emr-on-eks-now-supports-service-quotas-413i</link>
      <guid>https://forem.com/aws-builders/amazon-emr-on-eks-now-supports-service-quotas-413i</guid>
      <description>&lt;p&gt;Amazon EMR on EKS now supports Service Quotas, allowing users to manage and request increases for quotas like StartJobRun API calls directly in the AWS Service Quotas console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits&lt;/strong&gt;&lt;br&gt;
This eliminates the need for support tickets in many cases, enabling automated approvals for eligible requests and faster scaling. Users can also set CloudWatch alarms to monitor usage against quotas, improving operational efficiency.&lt;/p&gt;

&lt;p&gt;Previously, to request an increase for EMR on EKS quotas, such as maximum number of StartJobRun API calls per second, customers had to open a support ticket and wait for the support team to process the increase. Now, customers can view and manage their EMR on EKS quota limits directly in the &lt;a href="https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html" rel="noopener noreferrer"&gt;Service Quotas console&lt;/a&gt;. This enables automated limit increase approvals for eligible requests, improving response times and reducing the number of support tickets. Customers can also set up Amazon CloudWatch alarms to get automatically notified when their usage reaches a certain percentage of a maximum quota.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How It Works&lt;/strong&gt;&lt;br&gt;
Access the Service Quotas console via the AWS Management Console to view EMR on EKS limits by region. Requests for increases are processed centrally, with some approved automatically to reduce wait times.&lt;/p&gt;

&lt;p&gt;With Service Quotas, you can view and manage your quotas for AWS services from a central location. Quotas, also referred to as limits in AWS services, are the maximum values for the resources, actions, and items in your AWS account. Each AWS service defines its quotas and establishes default values for those quotas. If your business needs aren't met by the default limit of service resources or operations that apply to an AWS account, resource, or an AWS Region, you might need to increase your service quota values. Service Quotas enables you to look up your service quotas and to request increases. Support might approve, deny, or partially approve your requests.&lt;/p&gt;

&lt;p&gt;AWS Management Console&lt;br&gt;
The &lt;a href="https://console.aws.amazon.com/servicequotas/home?region=us-east-1#!/dashboard" rel="noopener noreferrer"&gt;Service Quotas console&lt;/a&gt; is a browser-based interface that you can use to view and manage your service quotas. You can perform almost any task that's related to your service quotas by using the console. You can access Service Quotas from any AWS Management Console page by choosing it on the top navigation bar, or by searching for Service Quotas in the AWS Management Console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using the AWS Management Console to request an increase&lt;/strong&gt;&lt;br&gt;
To request a service quota increase, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign in to the AWS Management Console and open the Service Quotas console at &lt;a href="https://console.aws.amazon.com/servicequotas/home" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/servicequotas/home&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the navigation pane, choose AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose an AWS service from the list, or enter the name of the service in the search box.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the quota is adjustable, you can request a quota increase at either the account-level or resource-level based on the value listed in the Adjustability column.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Account-level – Request a quota increase at the account-level for an account-level quota such as Domains per Region for Amazon OpenSearch Service. To do so, select the quota from the list and choose Request increase at account-level.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resource-level – Request a quota increase for a specific resource for a resource-level quota such as Instances per domain for Amazon OpenSearch Service. To do so, choose the quota name to view additional information about the quota. Under the Resource-level quotas section, select the resource for which you want to increase the quota value, and choose the Request increase at resource-level button.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;p&gt;For Increase quota value, enter the new value. The new value must be greater than the current value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To view any pending or recently resolved requests in the console, navigate to the Request history tab from the service's details page, or choose Dashboard from the navigation pane. For pending requests, choose the status of the request to open the request receipt. The initial status of a request is Pending. After the status changes to Quota requested, you'll see the case number with Support. Choose the case number to open the ticket for your request.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Service Flow Diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte8e3k8tymu1jso8nxpc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte8e3k8tymu1jso8nxpc.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram illustrates the streamlined process: workloads interact with quotas checked via the console, with monitoring and requests handled centrally for EMR on EKS resources like API rates and cluster limits.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/YasDt-9H0O4"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/servicequotas/latest/userguide/getting-started.html" rel="noopener noreferrer"&gt;Customizing the Service Quotas dashboard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/servicequotas/latest/userguide/getting-started-auto-mgmt.html" rel="noopener noreferrer"&gt;Getting started with Service Quotas Automatic Management&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




</description>
      <category>aws</category>
      <category>eks</category>
      <category>cloud</category>
      <category>containers</category>
    </item>
    <item>
      <title>Amazon Redshift now supports refresh interval in a zero-ETL integration</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Wed, 20 Nov 2024 11:06:14 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-redshift-now-supports-refresh-interval-in-a-zero-etl-integration-5ao6</link>
      <guid>https://forem.com/aws-builders/amazon-redshift-now-supports-refresh-interval-in-a-zero-etl-integration-5ao6</guid>
      <description>&lt;p&gt;This set of tasks walks you through setting up your first zero-ETL integration. First, you configure your integration source and set it up with the required parameters and permissions. Then, you continue to the rest of the initial setup from the Amazon Redshift console or AWS CLI. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Topics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create and configure a target Amazon Redshift data warehouse&lt;/li&gt;
&lt;li&gt;Turn on case sensitivity for your data warehouse&lt;/li&gt;
&lt;li&gt;Configure authorization for your Amazon Redshift data warehouse&lt;/li&gt;
&lt;li&gt;Create a zero-ETL integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Creating destination databases in Amazon Redshift&lt;/strong&gt;&lt;br&gt;
To replicate data from your source into Amazon Redshift, you must create a database from your integration in Amazon Redshift.&lt;/p&gt;

&lt;p&gt;Connect to your target Redshift Serverless workgroup or provisioned cluster and create a database with a reference to your integration identifier. This identifier is the value returned for integration_id when you query the SVV_INTEGRATION view.&lt;/p&gt;
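&lt;p&gt;As a sketch, the SQL you run against the workgroup or cluster has the following shape; the integration ID below is a placeholder, and you should verify the exact syntax against the zero-ETL documentation for your Redshift version:&lt;/p&gt;

```python
def create_database_sql(db_name: str, integration_id: str) -> str:
    """Compose the CREATE DATABASE ... FROM INTEGRATION statement for the
    integration_id value returned by the SVV_INTEGRATION view."""
    return f"CREATE DATABASE {db_name} FROM INTEGRATION '{integration_id}';"

# Placeholder integration ID for illustration only.
print(create_database_sql("destination_db", "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"))
```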

&lt;p&gt;&lt;strong&gt;Note-&lt;/strong&gt; Before creating a database from your integration, your zero-ETL integration must be created and in the Active state on the Amazon Redshift console.&lt;/p&gt;

&lt;p&gt;Before you can start replicating data from your source into Amazon Redshift, create a database from the integration in Amazon Redshift. You can either create the database using the Amazon Redshift console or the query editor v2.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the left navigation pane, choose Zero-ETL integrations.&lt;/li&gt;
&lt;li&gt;From the integration list, choose an integration.&lt;/li&gt;
&lt;li&gt;If you're using a provisioned cluster, you must first connect to the database. Choose Connect to database. You can connect using a recent connection, or by creating a new connection.&lt;/li&gt;
&lt;li&gt;To create a database from the integration, choose Create database from integration.&lt;/li&gt;
&lt;li&gt;Enter a Destination database name. The Integration ID and Data warehouse name are pre-populated.
For Aurora PostgreSQL sources, enter the Source named database that you specified when creating your zero-ETL integration. You can map a maximum of 100 Aurora PostgreSQL databases to Amazon Redshift databases.&lt;/li&gt;
&lt;li&gt;Choose Create database.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ref: &lt;a href="https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.setting-up.html" rel="noopener noreferrer"&gt;Getting started with zero-ETL integrations&lt;/a&gt;&lt;/p&gt;




</description>
      <category>aws</category>
      <category>database</category>
      <category>cloud</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Amazon DynamoDB adds support for attribute-based access control</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Mon, 30 Sep 2024 06:34:24 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-dynamodb-adds-support-for-attribute-based-access-control-d7o</link>
      <guid>https://forem.com/aws-builders/amazon-dynamodb-adds-support-for-attribute-based-access-control-d7o</guid>
      <description>&lt;p&gt;&lt;strong&gt;Attribute-based access control (ABAC)&lt;/strong&gt; is an authorization technique that allows you to define fine-grained permissions based on user factors like department, job title, and team name. User attributes make permissions more intuitive and simplify the administrative process of managing access. By specifying permissions with attributes, you can reduce the number of separate permissions required to create fine-grained controls in your AWS account.&lt;/p&gt;

&lt;p&gt;Attribute-Based Access Control for Amazon DynamoDB is now available in limited preview in the US East (Ohio), US East (Virginia), and US West (N. California) Regions. To request access to the limited preview, visit the &lt;a href="https://pages.awscloud.com/Attribute-Based-Access-Control-Amazon-DynamoDB.html" rel="noopener noreferrer"&gt;preview page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grant developers and workloads read and write access to only their project resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;Solution:&lt;/code&gt; When you base permissions on user attributes, you can ensure that developers and workloads only have read and write access to resources related to their projects. If the attributes of developers or workloads match those of project resources, they are granted access. Otherwise, they are rejected. For example, you can assign two developers from different teams, Alejandro and Mary, to the same IAM role and then use the team name property to manage access. When Alejandro and Mary check in to AWS, their identity provider (IdP) transmits their team name as an attribute in the AWS session, and they are only permitted access to their team's project resources, as indicated by the tags on those resources.&lt;/p&gt;
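&lt;p&gt;The matching logic described above can be sketched in a few lines of Python; the team tag name and the values are illustrative:&lt;/p&gt;

```python
def abac_allows(principal_tags: dict, resource_tags: dict, key: str = "team") -> bool:
    """Allow access only when the principal's session tag matches the
    resource's tag for the same key (the core ABAC comparison)."""
    return key in principal_tags and principal_tags.get(key) == resource_tags.get(key)

# Alejandro and Mary share an IAM role; their IdP passes team as a session tag.
alejandro = {"team": "payments"}
mary = {"team": "search"}
payments_table_tags = {"team": "payments"}

print(abac_allows(alejandro, payments_table_tags))  # True
print(abac_allows(mary, payments_table_tags))       # False
```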

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdcoujg2ogno2ccs5gx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdcoujg2ogno2ccs5gx1.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you create new resources, applications automatically get access to any new secrets that carry the matching project tag, without any changes to the permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F456w235i6i8kb2ier1s3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F456w235i6i8kb2ier1s3.png" alt="Image description" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tag governance: these tags are used for access control.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9cimq0y1y3g20o2hvf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9cimq0y1y3g20o2hvf7.png" alt="Image description" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy6jdhhl1ac9gkx81k61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyy6jdhhl1ac9gkx81k61.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configure AWS tags and keys&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpjq0w24i4d7j5ityhr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpjq0w24i4d7j5ityhr4.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanl0f1uxuj423wfdsvnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanl0f1uxuj423wfdsvnz.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create secrets that are tagged with the project tag&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivo449qg6438uwodduya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivo449qg6438uwodduya.png" alt="Image description" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl60dnwlgy1gopc190a2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl60dnwlgy1gopc190a2b.png" alt="Image description" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ref: &lt;a href="https://youtu.be/XO4CALyzbVM" rel="noopener noreferrer"&gt;attribute-based access control&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>dynamodb</category>
      <category>database</category>
    </item>
    <item>
      <title>Assign an IAM role to a Kubernetes service account</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Sat, 20 Jul 2024 16:58:08 +0000</pubDate>
      <link>https://forem.com/aws-builders/assign-an-iam-role-to-a-kubernetes-service-account-3nc7</link>
      <guid>https://forem.com/aws-builders/assign-an-iam-role-to-a-kubernetes-service-account-3nc7</guid>
      <description>&lt;p&gt;How to set up a Kubernetes service account to take an AWS Identity and Access Management (IAM) role using EKS Pod Identity. Any Pods that are set up to use the service account can then access any AWS service that the role has permission to access.&lt;/p&gt;

&lt;p&gt;An EKS Pod Identity association can be created in a single step using the AWS Management Console, AWS CLI, AWS SDKs, AWS CloudFormation, and other tools. The association lives entirely in EKS: no Kubernetes objects store data about it, and you do not annotate the service accounts.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An existing cluster. &lt;/li&gt;
&lt;li&gt;The IAM principal that is creating the association must have iam:PassRole.&lt;/li&gt;
&lt;li&gt;The latest version of the AWS CLI installed and configured on your device or AWS CloudShell. You can check your current version with aws --version | cut -d / -f2 | cut -d ' ' -f1. Package managers such as yum, apt-get, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI.&lt;/li&gt;
&lt;li&gt;The kubectl command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.29, you can use kubectl version 1.28, 1.29, or 1.30 with it.&lt;/li&gt;
&lt;li&gt;An existing kubectl config file that contains your cluster configuration. To create a kubectl config file&lt;/li&gt;
&lt;/ol&gt;
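&lt;p&gt;The version checks from the prerequisites can be run up front. The version string below is a sample of what &lt;code&gt;aws --version&lt;/code&gt; prints on one machine; only the parsing pipeline itself is fixed:&lt;/p&gt;

```shell
# `aws --version` prints something like:
#   aws-cli/2.15.30 Python/3.11.8 Linux/6.1 ...
# This pipeline extracts just the CLI version number.
version_line="aws-cli/2.15.30 Python/3.11.8 Linux/6.1"  # sample; on your machine: version_line=$(aws --version)
echo "$version_line" | cut -d / -f2 | cut -d ' ' -f1     # prints 2.15.30

# Then compare the kubectl client version against your cluster version:
# kubectl version --client
```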

&lt;p&gt;&lt;strong&gt;Creating the EKS Pod Identity association&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the Amazon EKS console at &lt;a href="https://console.aws.amazon.com/eks/home#/clusters" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/eks/home#/clusters&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left navigation pane, select Clusters, and then select the name of the cluster that you want to configure the EKS Pod Identity Agent add-on for.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the Access tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Pod Identity associations section, choose Create.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the IAM role, select the IAM role with the permissions that you want the workload to have.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note&lt;br&gt;
The list only contains roles that have the following trust policy, which allows EKS Pod Identity to use them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;sts:AssumeRole&lt;/code&gt;&lt;br&gt;
EKS Pod Identity uses AssumeRole to assume the IAM role before passing the temporary credentials to your pods.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sts:TagSession&lt;/code&gt;&lt;br&gt;
EKS Pod Identity uses TagSession to include session tags in the requests to AWS STS.&lt;/p&gt;

&lt;p&gt;You can use these tags in the condition keys in the trust policy to restrict which service accounts, namespaces, and clusters can use this role.&lt;/p&gt;
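&lt;p&gt;Assuming the trust policy above is saved locally as &lt;code&gt;trust-policy.json&lt;/code&gt;, a role that EKS Pod Identity can use might be created from the AWS CLI as follows; the role name and the attached permissions policy are placeholders:&lt;/p&gt;

```shell
# Create the role with the EKS Pod Identity trust policy (hypothetical role name).
aws iam create-role \
  --role-name my-role \
  --assume-role-policy-document file://trust-policy.json

# Attach a permissions policy for the workload; S3 read-only is just an example.
aws iam attach-role-policy \
  --role-name my-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```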

&lt;p&gt;For a list of Amazon EKS condition keys, see &lt;a href="https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-policy-keys" rel="noopener noreferrer"&gt;Conditions defined by Amazon Elastic Kubernetes Service&lt;/a&gt; in the Service Authorization Reference. To learn which actions and resources you can use a condition key with, see &lt;a href="https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions" rel="noopener noreferrer"&gt;Actions defined by Amazon Elastic Kubernetes Service&lt;/a&gt;.&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;&lt;p&gt;For the &lt;strong&gt;Kubernetes namespace&lt;/strong&gt;, select the Kubernetes namespace that contains the service account and workload. Optionally, you can specify a namespace by name that doesn't exist in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the &lt;strong&gt;Kubernetes service account&lt;/strong&gt;, select the Kubernetes service account to use. The manifest for your Kubernetes workload must specify this service account. Optionally, you can specify a service account by name that doesn't exist in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;(Optional) For the Tags, choose Add tag to add metadata in a key and value pair. These tags are applied to the association and can be used in IAM policies.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can repeat this step to add multiple tags.&lt;/p&gt;

&lt;ol start="9"&gt;
&lt;li&gt;Choose Create.&lt;/li&gt;
&lt;/ol&gt;
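&lt;p&gt;The console steps above map to a single AWS CLI call; all names below are placeholders:&lt;/p&gt;

```shell
# Create the EKS Pod Identity association in one step.
aws eks create-pod-identity-association \
  --cluster-name my-cluster \
  --namespace my-namespace \
  --service-account my-service-account \
  --role-arn arn:aws:iam::111122223333:role/my-role

# List the associations for the cluster to confirm it was created:
aws eks list-pod-identity-associations --cluster-name my-cluster
```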

&lt;p&gt;&lt;strong&gt;Confirm configuration&lt;/strong&gt;&lt;br&gt;
Confirm that the role and service account are configured correctly.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Confirm that the IAM role's trust policy is configured correctly.
&lt;code&gt;aws iam get-role --role-name my-role --query Role.AssumeRolePolicyDocument
&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Confirm that the policy that you attached to your role in a previous step is attached to the role.
&lt;code&gt;aws iam list-attached-role-policies --role-name my-role --query AttachedPolicies[].PolicyArn --output text
&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;View the default version of the policy.
&lt;code&gt;aws iam get-policy --policy-arn $policy_arn
&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  For more details, see &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-id-association.html" rel="noopener noreferrer"&gt;Assign an IAM role to a Kubernetes service account&lt;/a&gt;
&lt;/h2&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Amazon Aurora PostgreSQL now supports RDS Data API</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Sun, 31 Dec 2023 15:21:35 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-aurora-postgresql-now-supports-rds-data-api-51a1</link>
      <guid>https://forem.com/aws-builders/amazon-aurora-postgresql-now-supports-rds-data-api-51a1</guid>
      <description>&lt;p&gt;By using RDS Data API (Data API), you can work with a web-services interface to your Aurora DB cluster. Data API doesn't require a persistent connection to the DB cluster. Instead, it provides a secure HTTP endpoint and integration with AWS SDKs. You can use the endpoint to run SQL statements without managing connections.&lt;/p&gt;

&lt;p&gt;You can enable Data API when you create the Aurora DB cluster. You can also modify the configuration later. For more information, see &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.enabling" rel="noopener noreferrer"&gt;Enabling RDS Data API&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Currently, for Aurora MySQL, Data API and the query editor aren't supported for Aurora Serverless v2 or for provisioned DB clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Region and version availability&lt;/strong&gt;&lt;br&gt;
RDS Data API is available for the following types of Aurora DB clusters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Aurora PostgreSQL using specific PostgreSQL versions. These clusters can use Aurora Serverless v2 instances, provisioned instances, or a combination of both.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Aurora Serverless v1 clusters using either Aurora PostgreSQL or Aurora MySQL.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Limitations with RDS Data API&lt;/strong&gt;&lt;br&gt;
RDS Data API (Data API) has the following limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can only execute Data API queries on writer instances in a DB cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With Aurora global databases, you can enable Data API on both primary and secondary DB clusters. However, until a secondary cluster is promoted to be the primary, it has no writer instance. Thus, Data API queries that you send to the secondary fail. After a promoted secondary has an available writer instance, Data API queries on that DB instance should succeed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Performance Insights doesn't support monitoring database queries that you make using Data API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data API isn't supported on T DB instance classes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Aurora PostgreSQL Serverless v2 and provisioned DB clusters, RDS Data API doesn't support enumerated types.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enabling RDS Data API when you create a database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you create a database that supports RDS Data API (Data API), you can enable this feature. The following procedures describe how to do so with the AWS Management Console, the AWS CLI, or the RDS API.&lt;/p&gt;
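&lt;p&gt;From the AWS CLI, the setting is the &lt;code&gt;--enable-http-endpoint&lt;/code&gt; flag; the cluster identifier and other names below are placeholders:&lt;/p&gt;

```shell
# Enable the Data API while creating an Aurora PostgreSQL cluster.
aws rds create-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --engine aurora-postgresql \
  --master-username postgres \
  --manage-master-user-password \
  --enable-http-endpoint

# Or enable it later on an existing cluster:
aws rds modify-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --enable-http-endpoint
```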

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyye4ga7lrrv05k1wtg2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyye4ga7lrrv05k1wtg2.png" alt="Image description" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhn1noz3srbbc5dk48k9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhn1noz3srbbc5dk48k9u.png" alt="Image description" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enabling or disabling Data API (Aurora PostgreSQL Serverless v2 and provisioned)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use the following procedures to enable or disable Data API on Aurora PostgreSQL Serverless v2 and provisioned databases. To enable or disable Data API on Aurora Serverless v1 databases, use the procedures in &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.enabling.modifying.sv1" rel="noopener noreferrer"&gt;Enabling or disabling Data API (Aurora Serverless v1 only).&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj5n6qowrx7rus5exd44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj5n6qowrx7rus5exd44.png" alt="Image description" width="800" height="633"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more details, see &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html" rel="noopener noreferrer"&gt;RDS Data API for Aurora Serverless&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>api</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Amazon S3 Express One Zone</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Thu, 30 Nov 2023 15:45:17 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-s3-express-one-zone-2h57</link>
      <guid>https://forem.com/aws-builders/amazon-s3-express-one-zone-2h57</guid>
      <description>&lt;p&gt;S3 Express One Zone can improve data access speeds by 10x and reduce request costs by 50% compared to S3 Standard and scales to process millions of requests per minute for your most frequently accessed datasets.&lt;/p&gt;

&lt;p&gt;S3 Express One Zone is ideal for any application where it's important to minimize the latency required to access an object. Examples include human-interactive workflows, like video editing, where creative professionals need responsive access to content from their user interfaces. S3 Express One Zone also benefits analytics and machine learning workloads that have similar responsiveness requirements for their data, especially workloads with lots of smaller accesses or large numbers of random accesses. S3 Express One Zone can be used with other AWS services to support analytics and AI/ML workloads, such as Amazon EMR, Amazon SageMaker, and Amazon Athena.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89zgtnct5cze63evcech.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89zgtnct5cze63evcech.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When using S3 Express One Zone, you can interact with your directory bucket in an AWS virtual private cloud (VPC) by using a gateway VPC endpoint. With a gateway endpoint, you can access S3 Express One Zone directory buckets from your VPC without an internet gateway or NAT device for your VPC and at no additional cost.&lt;/p&gt;

&lt;p&gt;You can use many of the same S3 APIs and features with directory buckets that you use with general purpose buckets and other storage classes. These include Mountpoint for Amazon S3, server-side encryption with Amazon S3 managed keys (SSE-S3), S3 Batch Operations, and S3 Block Public Access. You can access S3 Express One Zone by using the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To optimize performance and reduce latency, S3 Express One Zone introduces the following new concepts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single Availability Zone&lt;/strong&gt;&lt;br&gt;
The Amazon S3 Express One Zone storage class is designed for 99.95% availability within a single Availability Zone and is backed by the &lt;a href="http://aws.amazon.com/s3/sla/" rel="noopener noreferrer"&gt;Amazon S3 Service Level Agreement&lt;/a&gt;. With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. S3 Express One Zone is designed to handle concurrent device failures by quickly detecting and repairing any lost redundancy. If a device fails, S3 Express One Zone automatically shifts requests to other devices within the Availability Zone. This redundancy helps ensure uninterrupted access to your data within an Availability Zone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Directory buckets&lt;/strong&gt;&lt;br&gt;
There are two types of Amazon S3 buckets: S3 general purpose buckets and S3 directory buckets. Directory buckets use only the S3 Express One Zone storage class, which is designed for workloads or performance-critical applications that require consistent single-digit millisecond latency. General purpose buckets are the default Amazon S3 bucket type and are used for the vast majority of S3 use cases. Choose the bucket type that best fits your application and performance requirements.&lt;/p&gt;

&lt;p&gt;Directory buckets organize data hierarchically into directories, as opposed to the flat storage structure of general purpose buckets. There are no prefix limits for directory buckets, and individual directories can scale horizontally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Endpoints and gateway VPC endpoints&lt;/strong&gt;&lt;br&gt;
Bucket-management API operations are available through a Regional endpoint and are referred to as Regional endpoint APIs. Examples of Regional endpoint APIs are CreateBucket and DeleteBucket. After you create a directory bucket, you can use Zonal endpoint APIs to upload and manage the objects in your directory bucket. Zonal endpoint APIs are available through a Zonal endpoint. Examples of Zonal endpoint APIs are PutObject and CopyObject.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session-based authorization&lt;/strong&gt;&lt;br&gt;
With S3 Express One Zone, you authenticate and authorize requests through a new session-based mechanism, which is optimized to provide the lowest latency. You can use CreateSession to request temporary credentials that provide low latency access to your bucket. These temporary credentials are scoped to a specific S3 directory bucket. Session tokens are used only with Zonal (object-level) operations (with the exception of CopyObject) and are optimized to provide the lowest latency. For more information, see &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-create-session.html" rel="noopener noreferrer"&gt;Create session&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features of S3 Express One Zone&lt;/strong&gt;&lt;br&gt;
The following S3 features are available for S3 Express One Zone. For a complete list of supported APIs and unsupported features, see &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-differences.html" rel="noopener noreferrer"&gt;How is S3 Express One Zone different?&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access management and security&lt;/strong&gt;&lt;br&gt;
With directory buckets, you can use the following features to audit and manage access. By default, directory buckets are private and can be accessed only by users who are explicitly granted access. Unlike general purpose buckets, which can set the access control boundary at the bucket, prefix, or object tag level, the access control boundary for directory buckets is set only at the bucket level. For more information, see &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html" rel="noopener noreferrer"&gt;AWS Identity and Access Management (IAM) for S3 Express One Zone.&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html" rel="noopener noreferrer"&gt;S3 Block Public Access&lt;/a&gt; – All S3 Block Public access settings are enabled by default at the bucket level. This default setting can't be modified.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html" rel="noopener noreferrer"&gt;S3 Object Ownership&lt;/a&gt; (Bucket owner enforced by default) – Access control lists (ACLs) are not supported for directory buckets. Directory buckets automatically use the bucket owner enforced setting for S3 Object Ownership, which means that ACLs are disabled and the bucket owner automatically owns and has full control over every object in the bucket. This default setting can’t be modified.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html" rel="noopener noreferrer"&gt;AWS Identity and Access Management (IAM)&lt;/a&gt; – IAM helps you securely control access to your directory buckets. You can use IAM to grant access to bucket management (Regional) actions and object management (Zonal) APIs through the CreateSession action. For more information, see AWS Identity and Access Management (IAM) for S3 Express One Zone. Unlike object-management actions, bucket management actions cannot be cross-account. Only the bucket owner can perform those actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam-example-bucket-policies.html" rel="noopener noreferrer"&gt;Bucket policies&lt;/a&gt; – Use IAM-based policy language to configure resource-based permissions for your directory buckets. You can also use IAM to control access to the CreateSession API which allows you to use the Zonal or object management APIs. You can grant same-account or cross-account access. For more information on S3 Express One Zone permissions and policies, see AWS Identity and Access Management (IAM) for S3 Express One Zone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-analyzer.html" rel="noopener noreferrer"&gt;IAM Access Analyzer for S3&lt;/a&gt; – Evaluate and monitor your access policies, ensuring that the policies provide only the intended access to your S3 resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Logging and monitoring&lt;/strong&gt;&lt;br&gt;
S3 Express One Zone supports the S3 logging and monitoring tools that let you monitor and control how your resources are used.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudwatch-monitoring.html" rel="noopener noreferrer"&gt;Amazon CloudWatch metrics&lt;/a&gt; – Monitor your AWS resources and applications using CloudWatch to collect and track metrics. S3 Express One Zone uses the same CloudWatch namespace as other Amazon S3 storage classes (AWS/S3) and supports daily storage metrics for directory buckets: BucketSizeBytes and NumberOfObjects. For more information, see Monitoring metrics with Amazon CloudWatch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html" rel="noopener noreferrer"&gt;AWS CloudTrail logs&lt;/a&gt; – AWS CloudTrail is an AWS service that helps you enable operational and risk auditing, governance, and compliance of your AWS account by recording actions taken by a user, role, or an AWS service. For S3 Express One Zone, CloudTrail captures regional endpoint APIs (for example, CreateBucket, PutBucketPolicy) as management events. This includes actions taken in the AWS Management Console, AWS CLI, AWS SDKs, and APIs. The eventsource for CloudTrail management events for S3 Express One Zone is s3express.amazonaws.com. For more information, see Amazon S3 CloudTrail events.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Object management&lt;/strong&gt;&lt;br&gt;
After you create a directory bucket, you can manage your object storage using the S3 console, AWS SDKs, and AWS CLI. The following features are available for object management with S3 Express One Zone.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-create-job.html" rel="noopener noreferrer"&gt;S3 Batch Operations&lt;/a&gt; – Use Batch Operations to perform bulk operations on objects in directory buckets, for example, Copy and Invoke AWS Lambda function. For example, you can use Batch Operations to copy objects between directory buckets and general purpose buckets. With Batch Operations, you can manage billions of objects at scale with a single S3 request using the AWS SDKs or AWS CLI or a few clicks in the Amazon S3 console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-import-job.html" rel="noopener noreferrer"&gt;Import&lt;/a&gt; – After you create a directory bucket, you can populate your bucket with objects by using the import feature in the Amazon S3 console. Import is a streamlined method for creating Batch Operations jobs to copy objects from general purpose buckets to directory buckets.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>aws</category>
      <category>s3</category>
      <category>cloud</category>
      <category>networking</category>
    </item>
    <item>
      <title>EKS Pod Identities</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Thu, 30 Nov 2023 15:39:30 +0000</pubDate>
      <link>https://forem.com/aws-builders/eks-pod-identities-4j56</link>
      <guid>https://forem.com/aws-builders/eks-pod-identities-4j56</guid>
      <description>&lt;p&gt;Applications in a Pod's containers can use the AWS SDK or the AWS CLI to make API requests to AWS services using AWS Identity and Access Management (IAM) permissions. Applications must sign their AWS API requests with AWS credentials.&lt;/p&gt;

&lt;p&gt;EKS Pod Identities provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance's role, you associate an IAM role with a Kubernetes service account and configure your Pods to use the service account.&lt;/p&gt;

&lt;p&gt;Each EKS Pod Identity association maps a role to a service account in a namespace in the specified cluster. If you have the same application in multiple clusters, you can make identical associations in each cluster without modifying the trust policy of the role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of EKS Pod Identities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS Pod Identities provide the following benefits:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Least privilege&lt;/strong&gt; – You can scope IAM permissions to a service account, and only Pods that use that service account have access to those permissions. This feature also eliminates the need for third-party solutions such as kiam or kube2iam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Credential isolation&lt;/strong&gt; – A Pod's containers can only retrieve credentials for the IAM role that's associated with the service account that the container uses. A container never has access to credentials that are used by other containers in other Pods. When using Pod Identities, the Pod's containers also have the permissions assigned to the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html" rel="noopener noreferrer"&gt;Amazon EKS node IAM role&lt;/a&gt;, unless you block Pod access to the &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html" rel="noopener noreferrer"&gt;Amazon EC2 Instance Metadata Service (IMDS)&lt;/a&gt;. For more information, see &lt;a href="https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node" rel="noopener noreferrer"&gt;Restrict access to the instance profile assigned to the worker node&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auditability&lt;/strong&gt; – Access and event logging is available through AWS CloudTrail to help facilitate retrospective auditing.&lt;/p&gt;

&lt;p&gt;EKS Pod Identity is a simpler method than &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="noopener noreferrer"&gt;IAM roles for service accounts&lt;/a&gt;, as this method doesn't use OIDC identity providers. EKS Pod Identity has the following enhancements:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Independent operations&lt;/strong&gt; – In many organizations, creating OIDC identity providers is a responsibility of different teams than administering the Kubernetes clusters. EKS Pod Identity has clean separation of duties, where all configuration of EKS Pod Identity associations is done in Amazon EKS and all configuration of the IAM permissions is done in IAM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reusability&lt;/strong&gt; – EKS Pod Identity uses a single IAM principal instead of the separate principals for each cluster that IAM roles for service accounts use. Your IAM administrator adds the following principal to the trust policy of any role to make it usable by EKS Pod Identities.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        "Principal": {
            "Service": "pods.eks.amazonaws.com"
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; – Each set of temporary credentials is assumed by the EKS Auth service in EKS Pod Identity, instead of by each AWS SDK that you run in each pod. The Amazon EKS Pod Identity Agent that runs on each node then issues the credentials to the SDKs. Thus, the load is reduced to once per node and isn't duplicated in each pod. For more details of the process, see How EKS Pod Identity works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview of setting up EKS Pod Identities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Turn on EKS Pod Identities by completing the following procedures:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html" rel="noopener noreferrer"&gt;Setting up the Amazon EKS Pod Identity Agent&lt;/a&gt; – You only complete this procedure once for each cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-id-association.html" rel="noopener noreferrer"&gt;Configuring a Kubernetes service account to assume an IAM role with EKS Pod Identity&lt;/a&gt; – Complete this procedure for each unique set of permissions that you want an application to have.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-id-configure-pods.html" rel="noopener noreferrer"&gt;Configuring Pods to use a Kubernetes service account&lt;/a&gt; – Complete this procedure for each Pod that needs access to AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-id-minimum-sdk.html" rel="noopener noreferrer"&gt;Using a supported AWS SDK&lt;/a&gt; – Confirm that the workload uses an AWS SDK of a supported version and that the workload uses the default credential chain.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
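&lt;p&gt;As a sketch of the first procedure, the Pod Identity Agent can be installed as a managed add-on from the AWS CLI (the cluster name is a placeholder):&lt;/p&gt;

```shell
# Install the EKS Pod Identity Agent add-on on the cluster.
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name eks-pod-identity-agent

# Confirm the agent pods are running on the nodes:
kubectl get pods -n kube-system | grep eks-pod-identity-agent
```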




&lt;p&gt;&lt;strong&gt;EKS Pod Identity cluster versions&lt;/strong&gt;&lt;br&gt;
To use EKS Pod Identities, the cluster must have a platform version that is the same or later than the version listed in the following table, or a Kubernetes version that is later than the versions listed in the table.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Kubernetes version&lt;/th&gt;
&lt;th&gt;Platform version&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;1.28&lt;/td&gt;&lt;td&gt;eks.4&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;1.27&lt;/td&gt;&lt;td&gt;eks.8&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;1.26&lt;/td&gt;&lt;td&gt;eks.9&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;1.25&lt;/td&gt;&lt;td&gt;eks.10&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;1.24&lt;/td&gt;&lt;td&gt;eks.13&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;EKS Pod Identity restrictions&lt;/strong&gt;&lt;br&gt;
EKS Pod Identities are available on the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon EKS cluster versions listed in the previous topic &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html#pod-id-cluster-versions" rel="noopener noreferrer"&gt;EKS Pod Identity cluster versions&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Worker nodes in the cluster that are Linux Amazon EC2 instances.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;EKS Pod Identities aren't available on the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;China Regions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS GovCloud (US).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Outposts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon EKS Anywhere.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes clusters that you create and run on Amazon EC2. The EKS Pod Identity components are only available on Amazon EKS.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can't use EKS Pod Identities with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pods that run anywhere except Linux Amazon EC2 instances. Linux and Windows pods that run on AWS Fargate (Fargate) aren't supported. Pods that run on Windows Amazon EC2 instances aren't supported.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Amazon EKS add-ons that need IAM credentials. These add-ons can only use IAM roles for service accounts instead. The list of EKS add-ons that use IAM credentials includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon VPC CNI plugin for Kubernetes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Load Balancer Controller&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The CSI storage drivers: EBS CSI, EFS CSI, Amazon FSx for Lustre CSI driver, Amazon FSx for NetApp ONTAP CSI driver, Amazon FSx for OpenZFS CSI driver, Amazon File Cache CSI driver&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>aws</category>
      <category>containers</category>
      <category>kubernetes</category>
      <category>cloud</category>
    </item>
    <item>
      <title>EKS Fargate supports additional Ephemeral Storage</title>
      <dc:creator>Learn2Skills</dc:creator>
      <pubDate>Sun, 13 Aug 2023 06:05:45 +0000</pubDate>
      <link>https://forem.com/aws-builders/eks-fargate-supports-additional-ephemeral-storage-6j6</link>
      <guid>https://forem.com/aws-builders/eks-fargate-supports-additional-ephemeral-storage-6j6</guid>
      <description>&lt;p&gt;Customers can now specify the size (in GiB) of ephemeral storage that their workloads require using the storage parameter in their pod spec. 20 GiB of ephemeral storage is included with every EKS Fargate pod. Additional ephemeral storage requested, up to 175GB, is charged in GB increments for the duration that the pod is running. See the Fargate pricing page for more details. Customers with data intensive workloads, like machine learning inference, data processing, or with large container images, can now use AWS Fargate with Amazon EKS to reduce their operational burden, pay only for the resources used by their applications, and get the security benefits of AWS Fargate’s built-in workload isolation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fargate storage&lt;/strong&gt;&lt;br&gt;
A Pod running on Fargate automatically mounts an Amazon EFS file system. You can't use dynamic persistent volume provisioning with Fargate nodes, but you can use static provisioning. For more information, see &lt;a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/README.md" rel="noopener noreferrer"&gt;Amazon EFS CSI Driver&lt;/a&gt; on GitHub.&lt;/p&gt;

&lt;p&gt;When provisioned, each Pod running on Fargate receives a default 20 GiB of ephemeral storage. This type of storage is deleted after a Pod stops. New Pods launched onto Fargate have encryption of the ephemeral storage volume enabled by default. The ephemeral Pod storage is encrypted with an AES-256 encryption algorithm using AWS Fargate managed keys.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The default usable storage for Amazon EKS Pods that run on Fargate is less than 20 GiB, because some space is used by the kubelet and other Kubernetes modules that are loaded inside the Pod.&lt;/p&gt;

&lt;p&gt;You can increase the total amount of ephemeral storage up to a maximum of 175 GiB. To configure the size with Kubernetes, specify an &lt;code&gt;ephemeral-storage&lt;/code&gt; resource request for each container in a Pod. When Kubernetes schedules Pods, it ensures that the sum of the resource requests for each Pod is less than the capacity of the Fargate task. For more information, see &lt;a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noopener noreferrer"&gt;Resource Management for Pods and Containers&lt;/a&gt; in the Kubernetes documentation.&lt;/p&gt;
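
&lt;p&gt;A Pod spec that requests additional ephemeral storage might look like the following. This is a sketch only; the Pod name and container image are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: storage-demo        # placeholder name
spec:
  containers:
  - name: app
    image: public.ecr.aws/docker/library/busybox:latest   # placeholder image
    resources:
      requests:
        ephemeral-storage: "100Gi"   # summed across containers; maximum 175 GiB
      limits:
        ephemeral-storage: "100Gi"
&lt;/code&gt;&lt;/pre&gt;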

&lt;p&gt;Amazon EKS Fargate provisions more ephemeral storage than requested for the purposes of system use. For example, a request of 100 GiB will provision a Fargate task with 115 GiB ephemeral storage.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>containers</category>
      <category>kubernetes</category>
      <category>storage</category>
    </item>
  </channel>
</rss>
