<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Matt Lewis</title>
    <description>The latest articles on Forem by Matt Lewis (@mlewis7127).</description>
    <link>https://forem.com/mlewis7127</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F289832%2F315e716e-c246-4b21-9838-c216c0729c6b.png</url>
      <title>Forem: Matt Lewis</title>
      <link>https://forem.com/mlewis7127</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mlewis7127"/>
    <language>en</language>
    <item>
      <title>Moving from Node Groups to NodePools on Amazon EKS</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Sat, 21 Feb 2026 17:38:16 +0000</pubDate>
      <link>https://forem.com/aws-heroes/moving-from-node-groups-to-nodepools-on-amazon-eks-1kc5</link>
      <guid>https://forem.com/aws-heroes/moving-from-node-groups-to-nodepools-on-amazon-eks-1kc5</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;In November 2019, AWS introduced the concept of Amazon EKS Managed Node Groups. With this, Amazon EKS would provision and manage the underlying EC2 instances as worker nodes, as part of an EC2 Auto Scaling Group. You could create, update or terminate a node with a single operation. When updating or terminating a node, EKS would handle these operations gracefully by automatically draining nodes to ensure applications stayed available. Further enhancements allowed for node configuration and customisation through EC2 Launch Templates and custom AMIs, alongside support for EC2 spot instances.&lt;/p&gt;

&lt;p&gt;However, the modern trend in Kubernetes is moving away from static node groups to dynamic node provisioning with tools like Karpenter for more flexible and cost-effective infrastructure management. With Amazon EKS Auto Mode, the recommendation is no longer to create traditional node groups. Instead, you create a Karpenter NodePool that defines the compute requirements. Amazon EKS Auto Mode provides two built-in node pools - &lt;code&gt;system&lt;/code&gt; and &lt;code&gt;general-purpose&lt;/code&gt; - which you cannot modify, but you can enable or disable. The &lt;code&gt;general-purpose&lt;/code&gt; node pool provides support for launching nodes for general purpose workloads. It supports only &lt;code&gt;amd64&lt;/code&gt; architecture and uses only on-demand EC2 capacity in the &lt;code&gt;C&lt;/code&gt;, &lt;code&gt;M&lt;/code&gt; or &lt;code&gt;R&lt;/code&gt; instance families.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happens if you want to take advantage of spot instances?&lt;/li&gt;
&lt;li&gt;What happens if you want to take advantage of Graviton?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's show how you can create a node pool to do just that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Karpenter NodePool
&lt;/h2&gt;

&lt;p&gt;The complete configuration files for this post can be found in the &lt;code&gt;k8s/node-pool&lt;/code&gt; section of the code repository &lt;a href="https://github.com/mlewis7127/amazon-eks-guide-code/tree/main/k8s/node-pool" rel="noopener noreferrer"&gt;here&lt;/a&gt;. We can create the NodePool using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; arm-nodepool.yaml
nodepool.karpenter.sh/arm-mixed-capacity created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The start of the &lt;code&gt;arm-nodepool.yaml&lt;/code&gt; configuration file is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.sh/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm-mixed-capacity&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells us we are using the NodePool API with Karpenter. This uses the &lt;code&gt;nodepools.karpenter.sh&lt;/code&gt; CRD which is installed by default with Auto Mode. The &lt;code&gt;spec&lt;/code&gt; element provides the contract with Karpenter. It has the following high-level structure, and we will go through each one in order.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;disruption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;   &lt;span class="c1"&gt;# when and how nodes can be replaced&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;     &lt;span class="c1"&gt;# what a node looks like&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;       &lt;span class="c1"&gt;# optional safety rails&lt;/span&gt;
  &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;       &lt;span class="c1"&gt;# optional priority&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Disruption
&lt;/h3&gt;

&lt;p&gt;The disruption section describes the ways in which Karpenter can disrupt and replace nodes. This is used when Karpenter wants to remove empty nodes, replace under-utilised nodes with better fitting ones, or shrink the cluster to save money.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Disruption settings for node lifecycle management&lt;/span&gt;
  &lt;span class="na"&gt;disruption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;consolidationPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WhenEmptyOrUnderutilized&lt;/span&gt;
    &lt;span class="na"&gt;consolidateAfter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10m&lt;/span&gt;  &lt;span class="c1"&gt;# Wait 10 minutes before consolidating&lt;/span&gt;

    &lt;span class="c1"&gt;# Disruption budgets to control how many nodes can be disrupted&lt;/span&gt;
    &lt;span class="na"&gt;budgets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# During business hours: more conservative&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2"&lt;/span&gt;
        &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;9&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;mon-fri"&lt;/span&gt;  &lt;span class="c1"&gt;# 9 AM Mon-Fri&lt;/span&gt;
        &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8h&lt;/span&gt;
      &lt;span class="c1"&gt;# Outside business hours: more aggressive&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10%"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;consolidationPolicy&lt;/code&gt; describes which types of nodes Karpenter should consider for consolidation. There are two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;WhenEmptyOrUnderutilized&lt;/code&gt; - Karpenter will consider all nodes for consolidation and attempt to remove or replace nodes when it discovers that the node is empty or underutilised and could be changed to reduce cost&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WhenEmpty&lt;/code&gt; - Karpenter will only consider nodes for consolidation that contain no workload pods&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;consolidateAfter&lt;/code&gt; field is the amount of time Karpenter should wait to consolidate a node after a pod has been added or removed from the node. We set this to 10 minutes to make sure the behaviour is not too aggressive, and to give the scheduler time to stabilise.&lt;/p&gt;

&lt;p&gt;Disruption budgets are used to control how many nodes can be disrupted. There are two rules defined in this section. The first rule states that between 09:00 and 17:00 on Monday to Friday, Karpenter may disrupt at most 2 nodes at a time. The second rule states that Karpenter may disrupt up to 10% of all nodes at any time. When both rules are active during business hours, Karpenter applies the more restrictive of the two.&lt;/p&gt;

&lt;h3&gt;
  
  
  Template
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;template&lt;/code&gt; section defines the exact shape, rules and constraints of every node that Karpenter is allowed to create as part of this NodePool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Node template specification&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Termination grace period (24 hours)&lt;/span&gt;
      &lt;span class="na"&gt;terminationGracePeriod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;24h&lt;/span&gt;

      &lt;span class="c1"&gt;# Node requirements&lt;/span&gt;
      &lt;span class="na"&gt;requirements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# ARM architecture&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/arch&lt;/span&gt;
          &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
          &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arm64"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

        &lt;span class="c1"&gt;# Support spot and on-demand (prefer spot for cost)&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.sh/capacity-type&lt;/span&gt;
          &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
          &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;spot"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;on-demand"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

        &lt;span class="c1"&gt;# ARM instance types (Graviton) - diverse selection&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node.kubernetes.io/instance-type&lt;/span&gt;
          &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
          &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# General purpose (M7g)&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m7g.medium"&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m7g.large"&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m7g.xlarge"&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;m7g.2xlarge"&lt;/span&gt;
            &lt;span class="c1"&gt;# Burstable (T4g) - cost-effective for variable workloads&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;t4g.medium"&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;t4g.large"&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;t4g.xlarge"&lt;/span&gt;
            &lt;span class="c1"&gt;# Compute optimized (C7g)&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;c7g.large"&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;c7g.xlarge"&lt;/span&gt;
            &lt;span class="c1"&gt;# Memory optimized (R7g)&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;r7g.large"&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;r7g.xlarge"&lt;/span&gt;

      &lt;span class="c1"&gt;# Node class reference (Auto Mode creates this automatically)&lt;/span&gt;
      &lt;span class="na"&gt;nodeClassRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eks.amazonaws.com&lt;/span&gt;
        &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodeClass&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;


      &lt;span class="c1"&gt;# Taints (optional - for dedicated ARM workloads)&lt;/span&gt;
      &lt;span class="na"&gt;taints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arch&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;
          &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;terminationGracePeriod&lt;/code&gt; field defines the amount of time that a node can be draining before Karpenter forcibly cleans it up.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;spec.requirements&lt;/code&gt; section provides more details about the nodes that can be created. Three requirements are specified here as an example.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;kubernetes.io/arch&lt;/code&gt; key sets out the architecture for the node. Karpenter supports &lt;code&gt;amd64&lt;/code&gt; and &lt;code&gt;arm64&lt;/code&gt; nodes. This is how we support Graviton.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;karpenter.sh/capacity-type&lt;/code&gt; key is analogous to EC2 purchase options. The &lt;code&gt;general-purpose&lt;/code&gt; NodePool only supports &lt;code&gt;on-demand&lt;/code&gt; as a value, whereas here we specify both &lt;code&gt;spot&lt;/code&gt; and &lt;code&gt;on-demand&lt;/code&gt;. As multiple capacity types are specified, Karpenter will prioritise &lt;code&gt;spot&lt;/code&gt; where available, but fall back to on-demand.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; AWS automatically applies Amazon EC2 Reserved Instance discounts to matching running on-demand EC2 usage, regardless of how these instances were launched. This means that you will get these discounts for instances launched by Karpenter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are a number of requirement keys available for selecting instance types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;key: node.kubernetes.io/instance-type&lt;/li&gt;
&lt;li&gt;key: karpenter.k8s.aws/instance-family&lt;/li&gt;
&lt;li&gt;key: karpenter.k8s.aws/instance-category&lt;/li&gt;
&lt;li&gt;key: karpenter.k8s.aws/instance-generation&lt;/li&gt;
&lt;li&gt;key: karpenter.k8s.aws/instance-capability-flex&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Generally, instance types should be a list and not a single value. Leaving these requirements undefined is recommended, as it maximizes choices for efficiently placing pods.&lt;/p&gt;
&lt;/blockquote&gt;
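
&lt;p&gt;As an illustrative sketch (not taken from the code repository), the same Graviton selection could be expressed more loosely using the family and generation keys, leaving Karpenter free to pick any instance size within those families:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="c1"&gt;# Hypothetical alternative: constrain by family and generation&lt;/span&gt;
        - key: karpenter.k8s.aws/instance-family
          operator: In
          values: ["m7g", "t4g", "c7g", "r7g"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["4"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;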

&lt;p&gt;Each NodePool must reference a NodeClass. A Node Class defines infrastructure-level settings that apply to groups of nodes in your EKS cluster, including network configuration, storage settings, and resource tagging. When you need to customize how EKS Auto Mode provisions and configures EC2 instances beyond the default settings, creating a Node Class gives you precise control over critical infrastructure parameters. For example, you can specify private subnet placement for enhanced security, configure instance ephemeral storage for performance-sensitive workloads, or apply custom tagging for cost allocation. In this case, we just reference the default Auto Mode NodeClass.&lt;/p&gt;
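
&lt;p&gt;As a hedged sketch (the field names follow the Auto Mode NodeClass schema, but the name and all values here are hypothetical), a custom NodeClass might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: custom-nodeclass
spec:
  # Launch nodes only into subnets carrying this (hypothetical) tag
  subnetSelectorTerms:
    - tags:
        Network: private
  # Larger ephemeral storage for performance-sensitive workloads
  ephemeralStorage:
    size: 160Gi
  # Custom tags for cost allocation
  tags:
    team: platform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;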

&lt;p&gt;There is also an example shown on how to apply a &lt;code&gt;taint&lt;/code&gt; to a NodePool. When a taint is applied to a NodePool, Karpenter will only place pods on its nodes if those pods explicitly tolerate the taint. In the example, Karpenter will only place a workload on the node if that workload explicitly declares a toleration for the &lt;code&gt;arm64&lt;/code&gt; taint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Toleration for the taint (if you added one)&lt;/span&gt;
&lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arch&lt;/span&gt;
    &lt;span class="s"&gt;operator&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Equal&lt;/span&gt;
    &lt;span class="s"&gt;value&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;
    &lt;span class="s"&gt;effect&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Limits
&lt;/h3&gt;

&lt;p&gt;The limits section is used to constrain the total size of the NodePool. Once the limits have been exceeded, Karpenter stops creating new instances, which prevents runaway costs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Limits for this node pool&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1000"&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1000Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Weight
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;weight&lt;/code&gt; field controls prioritisation when Karpenter has multiple NodePools to choose from for scheduling a pod. When multiple NodePools can satisfy the requirements for a pod, Karpenter will give priority to the NodePool with the highest weight. If the &lt;code&gt;weight&lt;/code&gt; attribute is not specified, it will default to 0.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Weight for prioritization (higher = preferred)&lt;/span&gt;
  &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Karpenter will look to choose the cheapest feasible instance. It prefers NodePools where it can pack the pod more efficiently with other pending pods, and minimise wasted CPU / memory on the node.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Based on the way that Karpenter performs pod batching and bin packing, it is not guaranteed that Karpenter will always choose the highest priority NodePool given specific requirements. For example, if a pod can’t be scheduled with the highest priority NodePool, it will force creation of a node using a lower priority NodePool, allowing other pods from that batch to also schedule on that node. This behaviour may also occur if existing capacity is available, as the kube-scheduler will schedule the pods instead of allowing Karpenter to provision a new node.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Targeting the NodePool with a Deployment
&lt;/h2&gt;

&lt;p&gt;In order to test the NodePool and show it working, we created a Deployment that runs a simple nginx container. It can be deployed using the following command from the code repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; arm-deployment.yaml
deployment.apps/arm-app created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We define a Deployment and give it the name of &lt;code&gt;arm-app&lt;/code&gt;, which is also assigned a label of the same name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm-app&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm-app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next part of the manifest file tells Kubernetes to run 3 copies of the application, and to make sure they are labelled as &lt;code&gt;app=arm-app&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm-app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The manifest file then defines a &lt;code&gt;nodeSelector&lt;/code&gt; which is a rule that states that these pods can only run on nodes with an architecture type of &lt;code&gt;arm64&lt;/code&gt;. This matches the architecture of our NodePool. Kubernetes will only schedule the Pod onto nodes that match the labels specified.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Node selector for ARM architecture&lt;/span&gt;
&lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kubernetes.io/arch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next part of the manifest file moves onto &lt;code&gt;affinity&lt;/code&gt;. Node affinity functions like the &lt;code&gt;nodeSelector&lt;/code&gt; field but is more expressive and allows you to specify soft rules. In this case, we use &lt;code&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/code&gt; with a weight of 100 to state that we want the Pod to run on a Spot instance, but if this cannot be scheduled, then it is fine to drop back to on-demand. This means that the Pod will not remain in a pending state if a Spot instance was not available, and so it is considered a soft rule.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Prefer spot instances for cost savings&lt;/span&gt;
&lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;
      &lt;span class="na"&gt;preference&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.sh/capacity-type&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
            &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;spot"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we use the &lt;code&gt;containers&lt;/code&gt; section to say that we want to run a copy of nginx in each Pod, with half a CPU and 512 MB of memory reserved, but this can grow to a whole CPU and 1 GB of memory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;  &lt;span class="c1"&gt;# Multi-arch image supports ARM64&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;500m&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;512Mi&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1000m&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We open up a number of additional terminal windows as we apply the Deployment, to give us more information on what exactly is happening in the background.&lt;/p&gt;

&lt;p&gt;The first command lists all the nodes in the EKS cluster, including columns showing their architecture and capacity type. We can see a node in the Ready state that uses ARM and is a spot instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;-L&lt;/span&gt; kubernetes.io/arch,karpenter.sh/capacity-type
NAME                  STATUS   ROLES    AGE     VERSION               ARCH    CAPACITY-TYPE
i-0d336c28e588123ae   Ready    &amp;lt;none&amp;gt;   2m22s   v1.34.3-eks-3c60543   arm64   spot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second command lists the &lt;code&gt;NodeClaim&lt;/code&gt; resources. A &lt;code&gt;NodeClaim&lt;/code&gt; is a custom resource created by Karpenter. Here we can see the generated &lt;code&gt;NodeClaim&lt;/code&gt; name is taken from the name of the &lt;code&gt;NodePool&lt;/code&gt; with a random suffix. We can also see it is using spot capacity, and a supported instance family type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodeclaims
NAME                       TYPE         CAPACITY   ZONE         NODE                  READY   AGE
arm-mixed-capacity-zw6vh   m7g.xlarge   spot       eu-west-2a   i-0d336c28e588123ae   True    3m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next command describes all pods that have the label &lt;code&gt;app=arm-app&lt;/code&gt;. This is the label that gets applied as part of the deployment. It filters the output to show the pod lifecycle events. Again, we can see from this that the pod is running on an ARM-based Graviton spot instance. The event timeline shows the lifecycle involved here. The pod is bound to a compatible node, it then downloads the latest nginx image from the container registry, the container is then created, and finally started.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe pod &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arm-app | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; 20 Events

Name:             arm-app-6674bd9849-ld6fm
Namespace:        default
Priority:         0
Service Account:  default
Node:             i-0d336c28e588123ae/10.1.3.225
Start Time:       Tue, 27 Jan 2026 11:42:06 +0000
Labels:           &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arm-app
                  pod-template-hash&lt;span class="o"&gt;=&lt;/span&gt;6674bd9849
Annotations:      &amp;lt;none&amp;gt;
Status:           Running
IP:               10.1.3.97
&lt;span class="nt"&gt;--&lt;/span&gt;
Events:
  Type    Reason     Age    From               Message
  &lt;span class="nt"&gt;----&lt;/span&gt;    &lt;span class="nt"&gt;------&lt;/span&gt;     &lt;span class="nt"&gt;----&lt;/span&gt;   &lt;span class="nt"&gt;----&lt;/span&gt;               &lt;span class="nt"&gt;-------&lt;/span&gt;
  Normal  Scheduled  2m14s  default-scheduler  Successfully assigned default/arm-app-6674bd9849-ld6fm to i-0d336c28e588123ae
  Normal  Pulling    2m12s  kubelet            spec.containers&lt;span class="o"&gt;{&lt;/span&gt;nginx&lt;span class="o"&gt;}&lt;/span&gt;: Pulling image &lt;span class="s2"&gt;"nginx:latest"&lt;/span&gt;
  Normal  Pulled     2m9s   kubelet            spec.containers&lt;span class="o"&gt;{&lt;/span&gt;nginx&lt;span class="o"&gt;}&lt;/span&gt;: Successfully pulled image &lt;span class="s2"&gt;"nginx:latest"&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;3.874s &lt;span class="o"&gt;(&lt;/span&gt;3.874s including waiting&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; Image size: 61200811 bytes.
  Normal  Created    2m8s   kubelet            spec.containers&lt;span class="o"&gt;{&lt;/span&gt;nginx&lt;span class="o"&gt;}&lt;/span&gt;: Created container: nginx
  Normal  Started    2m8s   kubelet            spec.containers&lt;span class="o"&gt;{&lt;/span&gt;nginx&lt;span class="o"&gt;}&lt;/span&gt;: Started container nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We ran a similar command to list the running pods and confirm that the 3 replicas specified in the deployment are all running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arm-app &lt;span class="nt"&gt;-w&lt;/span&gt;
NAME                       READY   STATUS             RESTARTS   AGE
arm-app-6674bd9849-ld6fm   0/1     Pending             0          0s
arm-app-6674bd9849-76wzg   0/1     Pending             0          0s
arm-app-6674bd9849-2vnwx   0/1     Pending             0          0s
arm-app-6674bd9849-ld6fm   0/1     ContainerCreating   0          0s
arm-app-6674bd9849-76wzg   0/1     ContainerCreating   0          0s
arm-app-6674bd9849-2vnwx   0/1     ContainerCreating   0          0s
arm-app-6674bd9849-2vnwx   0/1     Running             0          7s
arm-app-6674bd9849-ld6fm   0/1     Running             0          7s
arm-app-6674bd9849-76wzg   0/1     Running             0          7s
arm-app-6674bd9849-ld6fm   1/1     Running             0          13s
arm-app-6674bd9849-76wzg   1/1     Running             0          13s
arm-app-6674bd9849-2vnwx   1/1     Running             0          13s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
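
&lt;p&gt;For completeness, a deployment of roughly this shape would produce the output above. This is a sketch: the label, image and replica count come from the output shown, while the &lt;code&gt;nodeSelector&lt;/code&gt; is an assumption, included to show how pods can be steered onto the ARM-based nodes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Sketch only: the nodeSelector is an assumption for illustration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arm-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: arm-app
  template:
    metadata:
      labels:
        app: arm-app
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
        - name: nginx
          image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;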



</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Get started with the Argo CD EKS Capability</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Fri, 23 Jan 2026 14:49:00 +0000</pubDate>
      <link>https://forem.com/aws-heroes/get-started-with-the-argo-cd-eks-capability-36kd</link>
      <guid>https://forem.com/aws-heroes/get-started-with-the-argo-cd-eks-capability-36kd</guid>
      <description>&lt;h2&gt;
  
  
  Argo CD Overview
&lt;/h2&gt;

&lt;p&gt;EKS Capabilities were announced at re:Invent 2025. These are Kubernetes-native platform features, managed by AWS, that provide higher-level functionality. This post looks at the Argo CD capability.&lt;/p&gt;

&lt;p&gt;Argo CD is a GitOps based continuous deployment tool. Your git repository becomes the source of truth, and Argo CD ensures that your cluster state matches what you have defined in git. AWS have been consistently guiding their customers towards GitOps for a number of years. AWS describe GitOps as being like a reference implementation of best practice with these 4 characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Desired state expressed declaratively&lt;/li&gt;
&lt;li&gt;Desired state is immutable and versioned&lt;/li&gt;
&lt;li&gt;Desired state is automatically applied from source&lt;/li&gt;
&lt;li&gt;Desired state is continuously reconciled&lt;/li&gt;
&lt;/ul&gt;
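
&lt;p&gt;As a concrete illustration of desired state expressed declaratively, an Argo CD &lt;code&gt;Application&lt;/code&gt; resource points at a git repository and a destination cluster, and Argo CD reconciles the cluster against it. This is a minimal sketch; the repository URL, path and application name are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Minimal Argo CD Application (sketch; repoURL, path and name are placeholders)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/example-repo.git
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from git
      selfHeal: true   # continuously reconcile drift back to the git state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;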

&lt;p&gt;Argo CD is used by most AWS customers practicing GitOps, and has emerged as the de facto standard. More than 45% of Kubernetes end-users reported production or planned production use of Argo CD in the 2024 CNCF survey.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Argo CD Capability via Console
&lt;/h2&gt;

&lt;p&gt;The quickest way to get up and running with Argo CD is via the console. In the EKS console there is a capabilities tab that shows which managed capabilities are deployed in the cluster and which are available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo3q86i6hzf5yq0u80y7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo3q86i6hzf5yq0u80y7.png" alt="EKS Capabilities in Console" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, we can click the &lt;code&gt;Create capabilities&lt;/code&gt; button, and tick the checkbox against Argo CD in the Deployment section, before clicking next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fso3pgr5a8nzvmtkcw89t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fso3pgr5a8nzvmtkcw89t.png" alt="Select Argo CD Capability" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This brings up the page where we configure the selected capabilities. The first thing to do is to either select an existing role for the capability role, or select the button to create a new role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblzexkxuld0jouxfn0ii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblzexkxuld0jouxfn0ii.png" alt="Create Argo CD Role" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, the Argo CD managed capability integrates with AWS Identity Centre for authentication, and uses RBAC roles for authorization. You select the existing IAM Identity Centre instance, which should be pre-populated when you click the drop-down. The next step is to assign RBAC roles. This involves specifying a user or group from AWS Identity Centre and assigning them an Argo CD RBAC role of "Admin", "Editor" or "Viewer".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flksxlg5f0lv9wju1u0hr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flksxlg5f0lv9wju1u0hr.png" alt="Argo CD Authentication Access" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, you can review and create the managed capability for Argo CD. To understand this process in more detail, we can step through how to set up the capability using IaC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Argo CD Capability via IaC and CLI
&lt;/h2&gt;

&lt;p&gt;The code samples required for this section are contained in the &lt;code&gt;CloudFormation&lt;/code&gt; section of this &lt;a href="https://github.com/mlewis7127/amazon-eks-guide-code" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create an IAM Capability Role
&lt;/h3&gt;

&lt;p&gt;The first step is to create an IAM Capability Role. EKS Capabilities use this role to act on your behalf when running controllers in EKS. EKS Capabilities introduce a new service principal called &lt;code&gt;capabilities.eks.amazonaws.com&lt;/code&gt;. When you create the capability role, you need to ensure the trust policy trusts this new service principal. An example of the required trust policy is shown below, and is available in a file named "argocd-trust-policy.json":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Version"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2012-10-17"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Statement"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;
        &lt;span class="pi"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Effect"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Allow"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Principal"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Service"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;capabilities.eks.amazonaws.com"&lt;/span&gt;
            &lt;span class="pi"&gt;},&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Action"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sts:AssumeRole"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
                &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sts:TagSession"&lt;/span&gt;
            &lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can create an IAM role with this trust policy using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam create-role &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role-name&lt;/span&gt; ArgoCDCapabilityRole &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--assume-role-policy-document&lt;/span&gt; file://argocd-trust-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we need to attach permissions to this Capability IAM Role based on what the capability needs and which integrations are required. For example, for Argo CD you may need to grant permissions to &lt;code&gt;ecr&lt;/code&gt;, &lt;code&gt;codecommit&lt;/code&gt; or &lt;code&gt;codeconnection&lt;/code&gt;, depending on where your source is coming from. When you choose "Create Argo CD role" through the console, an IAM role is created with the &lt;code&gt;AWSSecretsManagerClientReadOnlyAccess&lt;/code&gt; managed policy pre-selected. This managed policy provides read access to all secrets stored in Secrets Manager in your AWS account and is intended for getting started quickly. You have the flexibility to modify these permissions by unselecting this policy or adding different policies as needed.&lt;/p&gt;
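
&lt;p&gt;As an illustration, if your sources or images live in Amazon ECR, a scoped-down policy like the sketch below could be attached instead of, or alongside, the managed policy. The actions shown are standard ECR read actions; this is an assumption for illustration rather than a policy the capability requires:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchGetImage",
                "ecr:GetDownloadUrlForLayer"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;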

&lt;p&gt;We can attach this managed policy to the created role by using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws iam attach-role-policy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role-name&lt;/span&gt; ArgoCDCapabilityRole &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--policy-arn&lt;/span&gt; arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can achieve the same in CloudFormation within a single template. The required &lt;code&gt;AWS::IAM::Role&lt;/code&gt; configuration is shown below; we will use it as part of the CloudFormation template that creates the EKS Capability, so everything is controlled in a single stack.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# IAM Capability Role for ArgoCD&lt;/span&gt;
  &lt;span class="na"&gt;ArgoCDCapabilityRole&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::IAM::Role&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;RoleName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;RoleName"&lt;/span&gt;
      &lt;span class="na"&gt;AssumeRolePolicyDocument&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;Version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2012-10-17'&lt;/span&gt;
        &lt;span class="na"&gt;Statement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
            &lt;span class="na"&gt;Principal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;Service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;capabilities.eks.amazonaws.com&lt;/span&gt;
            &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sts:AssumeRole&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sts:TagSession&lt;/span&gt;
      &lt;span class="na"&gt;ManagedPolicyArns&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create EKS Capability
&lt;/h3&gt;

&lt;p&gt;Finally, we can create the EKS Capability for Argo CD itself. The Argo CD capability is integrated with AWS Identity Centre (IDC). This ensures that single sign-on is enabled for the fully managed and hosted Argo UI instance and for the Argo CLI. For this, you need to ensure that your Identity Centre configuration is passed into the capability when creating it.&lt;/p&gt;

&lt;p&gt;The Argo CD IDC instance identifies the name of the IAM Identity Center instance that is used by your organization to get permissions for Argo CD to access your EKS cluster. Once the capability has been created, the Argo CD IDC instance cannot be edited.&lt;/p&gt;

&lt;p&gt;We can retrieve the IDC Instance ARN directly from the console, or by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws sso-admin list-instances &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-2 &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Instances[0].InstanceArn'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to identify the users or groups that should be assigned a particular Argo CD role. The users and groups you identify from your IDC instance define who can access the Argo CD capability on your EKS cluster, and with what permissions. In the command below I retrieve the User ID for the user called &lt;code&gt;mattlewis&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws identitystore list-users &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--identity-store-id&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;aws sso-admin list-instances &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-2 &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Instances[0].IdentityStoreId'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Users[?UserName==`mattlewis`].UserId'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If I wanted to assign a group rather than a user, I would instead need to retrieve a Group ID from Identity Centre. The following command retrieves the Group ID for a group called &lt;code&gt;Admin&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws identitystore list-groups &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--identity-store-id&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;aws sso-admin list-instances &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-2 &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Instances[0].IdentityStoreId'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Groups[?DisplayName==`Admin`].GroupId'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a JSON file called &lt;code&gt;aws-identity-centre-configuration.json&lt;/code&gt; which is made available for convenience. The configuration below assigns the User ID to the Argo CD role of &lt;code&gt;ADMIN&lt;/code&gt;, and the Group ID to the Argo CD role of &lt;code&gt;VIEWER&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"argoCd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"awsIdc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"idcInstanceArn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REPLACE_WITH_IDC_INSTANCE_ARN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"idcRegion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REPLACE_WITH_REGION"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"rbacRoleMappings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ADMIN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"identities"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REPLACE_WITH_USER_ID"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SSO_USER"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"VIEWER"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"identities"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REPLACE_WITH_GROUP_ID"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SSO_GROUP"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then create the Argo CD capability using the following AWS CLI command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks create-capability &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--capability-name&lt;/span&gt; argocd-capability &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; eks-test-cluster&lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--type&lt;/span&gt; ARGOCD &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role-arn&lt;/span&gt; arn:aws:iam::&lt;span class="o"&gt;{&lt;/span&gt;ACCOUNT_ID&lt;span class="o"&gt;}&lt;/span&gt;:role/ArgoCDCapabilityRole &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--delete-propagation-policy&lt;/span&gt; RETAIN &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--configuration&lt;/span&gt; file://aws-identity-centre-configuration.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is equivalent to the following that can be used in a CloudFormation template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;ArgoCDCapability&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::EKS::Capability&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;CapabilityName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;CapabilityName&lt;/span&gt;
    &lt;span class="na"&gt;ClusterName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ClusterName&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ARGOCD&lt;/span&gt;
    &lt;span class="na"&gt;RoleArn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;ArgoCDCapabilityRole.Arn&lt;/span&gt;
    &lt;span class="na"&gt;DeletePropagationPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;DeletePropagationPolicy&lt;/span&gt;
    &lt;span class="na"&gt;Configuration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ArgoCd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;AwsIdc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;IdcInstanceArn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;IdentityCenterInstanceArn&lt;/span&gt;
          &lt;span class="na"&gt;IdcRegion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;IdentityCenterRegion&lt;/span&gt;
        &lt;span class="na"&gt;RbacRoleMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Identities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;AdminUserId&lt;/span&gt;
                &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SSO_USER&lt;/span&gt;
            &lt;span class="na"&gt;Role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ADMIN&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Identities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;ViewerGroupId&lt;/span&gt;
                &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SSO_GROUP&lt;/span&gt;
            &lt;span class="na"&gt;Role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VIEWER&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can deploy the full CloudFormation stack using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudformation create-stack &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--stack-name&lt;/span&gt; argocd-capability-stack &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--template-body&lt;/span&gt; file://argocd-capability.yaml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;ParameterKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ClusterName,ParameterValue&lt;span class="o"&gt;=&lt;/span&gt;eks-test-cluster &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;ParameterKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;IdentityCenterInstanceArn,ParameterValue&lt;span class="o"&gt;=&lt;/span&gt;arn:aws:sso:::instance/ssoins-&lt;span class="o"&gt;{&lt;/span&gt;REPLACE&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;ParameterKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;IdentityCenterRegion,ParameterValue&lt;span class="o"&gt;=&lt;/span&gt;eu-west-2 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;ParameterKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;AdminUserId,ParameterValue&lt;span class="o"&gt;={&lt;/span&gt;REPLACE_WITH_USER_ID&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;ParameterKey&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ViewerGroupId,ParameterValue&lt;span class="o"&gt;={&lt;/span&gt;REPLACE_WITH_GROUP_ID&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--capabilities&lt;/span&gt; CAPABILITY_NAMED_IAM &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some of the properties include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt; - This defines the type of EKS Capability to create, with valid values of

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ACK&lt;/code&gt; - Amazon Web Services Controllers for Kubernetes (ACK), which lets you manage resources directly from Kubernetes&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ARGOCD&lt;/code&gt; – Argo CD for GitOps-based continuous delivery&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;KRO&lt;/code&gt; – Kube Resource Orchestrator (KRO) for composing and managing custom Kubernetes resources&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;DeletePropagationPolicy&lt;/strong&gt; - The only supported value is &lt;code&gt;RETAIN&lt;/code&gt;, which keeps all resources managed by the capability when the capability is deleted&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Configuration&lt;/strong&gt; - This property defines the configuration settings, with the structure depending on the capability type&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Role&lt;/strong&gt; - The role under the &lt;code&gt;RbacRoleMappings&lt;/code&gt; property defines the Argo CD role to be assigned. The valid values are:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ADMIN&lt;/code&gt; - Full administrative access to Argo CD&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EDITOR&lt;/code&gt; - Edit access to Argo CD resources&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VIEWER&lt;/code&gt; - Read-only access to Argo CD resources&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Once the capability has been created (which will take a while), the capabilities tab for the cluster in the EKS console will provide the Argo API endpoint and a link to go to the managed hosted Argo UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxy71tw7ia5g6bnka9os.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxy71tw7ia5g6bnka9os.png" alt="Argo API Endpoint" width="800" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can click on the link to open up the managed Argo UI. At this point, we will need to click the button to &lt;code&gt;LOG IN VIA SSO&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb9t6mhj5ykffinvo8fs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb9t6mhj5ykffinvo8fs.png" alt="Argo UI Home Page" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In most cases, this will automatically log you directly into the console. At this point, we can see there are no applications currently available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgraw7oyaeqsue0h8g6t1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgraw7oyaeqsue0h8g6t1.png" alt="Argo UI Login Successful" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also check the Argo CD Role-Based Access Control (RBAC) assignments in the console, and make sure they match what we set up in the earlier JSON file or CloudFormation template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0oduosuffdadg4u6x3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0oduosuffdadg4u6x3p.png" alt="Argo RBAC Assignments" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Argo CD Capability
&lt;/h2&gt;

&lt;p&gt;Now that the Argo CD capability has been created, we can take a quick look at some of the changes that have been made to our cluster.&lt;/p&gt;

&lt;p&gt;Firstly, we run a command to look at the EKS Access Entries for the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks list-access-entries &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; eks-test-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;EKS Access Entries are the recommended way to grant users access to the Kubernetes API. Fundamentally, it associates a set of Kubernetes permissions with an IAM identity such as an IAM Role. Running the command above generated the following output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"accessEntries"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::{ACCOUNT_ID}:role/ArgoCDCapabilityRole"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::{ACCOUNT_ID}:role/aws-reserved/sso.amazonaws.com/eu-west-2/AWSReservedSSO_Developer_b664db2de4791f77"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::{ACCOUNT_ID}:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::{ACCOUNT_ID}:role/eks-test-cluster-eks-auto-20260116165710199100000002"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see that four access entries currently exist in the cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The SSO Developer role entry, which is the role I assumed to create the EKS cluster&lt;/li&gt;
&lt;li&gt;An EKS Auto Mode generated role that is used to enable Auto Mode to make authenticated Kubernetes API calls&lt;/li&gt;
&lt;li&gt;An EKS service-linked role that is used to manage the control plane and AWS-side resources of the cluster&lt;/li&gt;
&lt;li&gt;The ArgoCDCapabilityRole role entry, which was created when the Argo CD EKS capability was enabled and allows the capability to authenticate to the cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can see the permissions that have been automatically granted to each entry in the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c2qhmzmbgrf9382l0xm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c2qhmzmbgrf9382l0xm.png" alt="EKS Access Entries" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By enabling the EKS Argo CD Capability, the following Custom Resource Definitions (CRDs) have been added to the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get crds | &lt;span class="nb"&gt;grep &lt;/span&gt;argo  

applications.argoproj.io                        2026-01-16T17:10:56Z
applicationsets.argoproj.io                     2026-01-16T17:10:56Z
appprojects.argoproj.io                         2026-01-16T17:10:57Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy a sample application
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Register your EKS cluster with Argo CD
&lt;/h3&gt;

&lt;p&gt;In order to deploy a sample application, the first step is to register the EKS cluster where we want to deploy the application. We do this by creating a Kubernetes secret with the label &lt;code&gt;argocd.argoproj.io/secret-type: cluster&lt;/code&gt;. We give the cluster a name, and this is where the mapping between the actual cluster and the ARN happens. With EKS Capabilities you only need to provide the ARN, not the Kubernetes API server URL as with a self-managed instance. In our case, we are registering a local cluster, as it is the same cluster that Argo CD is running against. The following manifest file is found in the code repository under &lt;code&gt;k8s/argocd&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-cluster&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;argocd.argoproj.io/secret-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-cluster&lt;/span&gt;
  &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:eks:eu-west-2:{ACCOUNT_ID}:cluster/eks-test-cluster&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then apply this to the cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; local-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Register an Argo CD Application
&lt;/h3&gt;

&lt;p&gt;Now we can register an Argo CD application. In this case, we will use the guestbook example from the Argo CD project itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;guestbook&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/argoproj/argocd-example-apps&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;guestbook&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:eks:eu-west-2:{ACCOUNT_ID}:cluster/eks-test-cluster&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a Kubernetes manifest file for a custom resource (an Argo CD Application). Our cluster knows how to handle it because the CRD was installed when the capability was deployed. The Application tells Argo CD to deploy the guestbook application from the specified public Git repository into the default namespace of the EKS cluster. We can apply this file using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; guestbook-application.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should get back the information that the application has been created &lt;code&gt;application.argoproj.io/guestbook created&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;At this point we can log in to the Argo UI, and see something similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffog9f78ko4rnimg2nzuh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffog9f78ko4rnimg2nzuh.png" alt="Argo CD Unknown Status" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we go into settings, we can see there is a failed connection status with our cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj80mzjf06si10lmay2cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj80mzjf06si10lmay2cb.png" alt="Argo CD Cluster Settings" width="800" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if we go further and click into the application itself, we see that there are 3 errors in the application conditions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64a5v1no43smq6ax2gs6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64a5v1no43smq6ax2gs6.png" alt="Argo CD Application Conditions Errors" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The issue here is that Argo CD cannot build its cache of what exists in the cluster, because it does not have permission to list the cluster-scoped resource &lt;code&gt;PersistentVolume&lt;/code&gt;. It also cannot list the resource "ingressclassparams" in the API group "eks.amazonaws.com" at the cluster scope.&lt;/p&gt;

&lt;h3&gt;
  
  
  Associate an access policy
&lt;/h3&gt;

&lt;p&gt;This issue arises because the two access policies associated with the Argo CD Capability role (&lt;code&gt;AmazonEKSArgoCDPolicy&lt;/code&gt; and &lt;code&gt;AmazonEKSArgoCDClusterPolicy&lt;/code&gt;) do not grant the permissions required to read or mutate these cluster resources.&lt;/p&gt;
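&lt;p&gt;Before widening any permissions, we can confirm which access policies are currently associated with the capability role, using the same cluster and role names as earlier in this post:&lt;/p&gt;

```shell
aws eks list-associated-access-policies \
  --cluster-name eks-test-cluster \
  --principal-arn arn:aws:iam::{ACCOUNT_ID}:role/ArgoCDCapabilityRole
```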

&lt;p&gt;The best practice in this case is to determine the minimum permissions required, and add these to an access policy. However, the fastest solution is to add the &lt;code&gt;AmazonEKSClusterAdminPolicy&lt;/code&gt; with cluster scope to the access entry, which we can do using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws eks associate-access-policy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster-name&lt;/span&gt; eks-test-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--principal-arn&lt;/span&gt; arn:aws:iam::424727766526:role/ArgoCDCapabilityRole &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--policy-arn&lt;/span&gt; arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--access-scope&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we have run this command, Argo CD will now be able to connect successfully to the cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqg96wb5vwnmjkmg8cq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqg96wb5vwnmjkmg8cq1.png" alt="Argo CD Cluster Successful" width="800" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the application can be synced and is healthy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw9cjoarqpvvb4ke548d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw9cjoarqpvvb4ke548d.png" alt="Argo CD Application Healthy" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>kubernetes</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AWS Certified Machine Learning Engineer Core Concepts</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Sat, 04 Oct 2025 13:32:38 +0000</pubDate>
      <link>https://forem.com/aws-heroes/aws-certified-machine-learning-engineer-core-concepts-5ekg</link>
      <guid>https://forem.com/aws-heroes/aws-certified-machine-learning-engineer-core-concepts-5ekg</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Earlier this year I passed the AWS Machine Learning Engineer - Associate exam. I spent time making sure I understood the core concepts before taking the exam, and made a lot of notes. The intent of this post is to summarise the concepts essential to pass the exam. Based on my experience, knowing these concepts will get you at least halfway there. Layer on knowledge of the AWS AI services, focus on SageMaker and all its capabilities, and the certification will be yours.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Preprocessing&lt;/li&gt;
&lt;li&gt;SageMaker Built-In Algorithms&lt;/li&gt;
&lt;li&gt;Model Development&lt;/li&gt;
&lt;li&gt;Evaluating Model Accuracy&lt;/li&gt;
&lt;li&gt;Improving Model Accuracy&lt;/li&gt;
&lt;li&gt;Additional Topics to Study&lt;/li&gt;
&lt;li&gt;Other Study Guides&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Data Preprocessing
&lt;/h2&gt;

&lt;p&gt;Data preprocessing ensures that data is in the right shape and of the right quality to be used for training.&lt;/p&gt;

&lt;p&gt;Labelling data is important for models to learn effectively, and this is where services such as Mechanical Turk and SageMaker Ground Truth come in. Mechanical Turk is an online marketplace to access an on-demand global workforce. SageMaker Ground Truth provides built-in workflows to automate data labelling, and can use your own workforce, third-party vendors from the AWS marketplace, or Mechanical Turk.&lt;/p&gt;

&lt;p&gt;Cleaning data can include removing outliers and duplicates, replacing inaccurate or irrelevant data, and correcting missing data.&lt;/p&gt;

&lt;p&gt;Approaches for imputing missing data include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mean replacement&lt;/strong&gt; – replacing the missing values with the mean value from the rest of the column. The mean value is the average, which means it can be distorted by outliers. Therefore, the median value (which is the middle value when data is sorted) may be a better choice&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;KNN (K-Nearest Neighbours)&lt;/strong&gt; – find the K nearest (most similar) rows and average their values. This assumes numeric data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
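&lt;p&gt;As a minimal sketch of the difference between mean and median replacement, consider the following plain Python (the &lt;code&gt;impute&lt;/code&gt; helper and the sample values are illustrative, not part of any AWS tooling):&lt;/p&gt;

```python
from statistics import mean, median

def impute(values, strategy="median"):
    """Fill None entries with the mean or median of the observed values."""
    observed = [v for v in values if v is not None]
    fill = mean(observed) if strategy == "mean" else median(observed)
    return [fill if v is None else v for v in values]

incomes = [30_000, 32_000, 31_000, None, 1_000_000]  # one extreme outlier
print(impute(incomes, "mean"))    # the outlier drags the fill value far above typical incomes
print(impute(incomes, "median"))  # the median fill stays close to the typical value
```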

&lt;p&gt;Balancing data (for datasets with underrepresented categories) can be achieved using one of the following methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Random oversampling&lt;/strong&gt; – this method randomly duplicates samples from the minority category. For example, if you were building a fraud detection model and you had 1000 examples of normal transactions and only 50 of fraudulent transactions, you would duplicate the fraudulent transactions until you had an equal proportion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Random Undersampling&lt;/strong&gt; – this method randomly removes samples from the overrepresented category to achieve an equal proportion. This would typically be used when you have a large dataset, or when you want to reduce the size of your dataset to make training the model quicker&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Synthetic Minority Oversampling Technique (SMOTE)&lt;/strong&gt; – this approach generates new synthetic samples of the minority category by interpolating between existing samples using nearest neighbours.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
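&lt;p&gt;The fraud example above can be sketched in a few lines of plain Python (the helper below is illustrative; libraries such as imbalanced-learn provide production implementations, including SMOTE):&lt;/p&gt;

```python
import random

def random_oversample(majority, minority, seed=42):
    """Duplicate minority-class samples at random until both classes are the same size."""
    rng = random.Random(seed)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority, minority + extra

normal = [("normal", i) for i in range(1000)]  # 1000 normal transactions
fraud = [("fraud", i) for i in range(50)]      # 50 fraudulent transactions
normal, fraud = random_oversample(normal, fraud)
print(len(normal), len(fraud))  # both classes now contain 1000 samples
```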

&lt;p&gt;&lt;strong&gt;Encoding&lt;/strong&gt; is the concept around converting data (typically categorical data where the data represents a category or group) into a numerical format that can be well understood by a model. The main types of encoding are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Label Encoding&lt;/strong&gt; – assigns each category a unique number e.g. Red=0, Green=1, Blue=2. There is no order implied by this encoding&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;One-Hot Encoding&lt;/strong&gt; – creates a binary column for each category. If there is a category called colour, an additional column is created for each unique value, such as one for Red, one for Green and one for Blue. Each column is assigned 1 if it applies to the row, else 0.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ordinal Encoding&lt;/strong&gt; – this is similar to label encoding but is used when there is a ranked ordering between values in a category. For a category called ‘size’ you could map Small to 0, Medium to 1 and Large to 2.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
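&lt;p&gt;One-hot encoding is easy to see in code. The sketch below is illustrative plain Python; in practice you would typically use a library function such as pandas &lt;code&gt;get_dummies&lt;/code&gt;:&lt;/p&gt;

```python
def one_hot(values):
    """One-hot encode a list of categorical values into binary columns."""
    categories = sorted(set(values))
    rows = [[1 if v == c else 0 for c in categories] for v in values]
    return rows, categories

rows, cols = one_hot(["Red", "Green", "Blue", "Red"])
print(cols)  # ['Blue', 'Green', 'Red']
print(rows)  # [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```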

&lt;p&gt;Outliers are data points in a data set that deviate significantly from the general pattern. One way of detecting outliers in training data is to measure how many standard deviations a data point is from the mean of the dataset. This is often called a &lt;strong&gt;z-score&lt;/strong&gt; or standard score. Data points that lie more than two or three standard deviations from the mean are typically considered outliers.&lt;/p&gt;
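&lt;p&gt;A z-score calculation can be sketched as follows (illustrative plain Python, flagging points three or more standard deviations from the mean):&lt;/p&gt;

```python
from statistics import mean, pstdev

def z_scores(data):
    """Return how many standard deviations each point lies from the mean."""
    mu, sigma = mean(data), pstdev(data)
    return [(x - mu) / sigma for x in data]

readings = [10, 10, 10, 10, 10, 10, 10, 10, 10, 100]
outliers = [x for x, z in zip(readings, z_scores(readings)) if abs(z) >= 3]
print(outliers)  # the reading of 100 is flagged as an outlier
```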

&lt;p&gt;Outliers can be handled in different ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Delete the record&lt;/strong&gt; - if the outlier is clearly an error and there is enough training data, you can just delete that record.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature scaling or normalisation&lt;/strong&gt; – this transforms the numeric values so that all values are on the same scale, often between 0 and 1. This rescaling makes the values more comparable&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Standardisation&lt;/strong&gt; – is similar to normalisation but instead of scaling values from 0 to 1, it rescales the features to have a mean of 0 and standard deviation of 1&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Binning&lt;/strong&gt; – takes a continuous numerical feature and splits it into a set of intervals or bins. Each value is then assigned to a bin, which can smooth over imprecision or uncertainty e.g. someone aged 110 could end up in a bin which is “70+”.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After data has been cleaned and encoded, you can fine-tune or create new features in your dataset through feature engineering. There are also methods such as &lt;strong&gt;bag-of-words&lt;/strong&gt; and &lt;strong&gt;N-gram&lt;/strong&gt; that can be used to extract key information from text data.&lt;/p&gt;

&lt;p&gt;In machine learning, &lt;strong&gt;dimensionality&lt;/strong&gt; refers to the number of features in your dataset. If you have a dataset with 3 features (age, income and height), it exists in a 3-dimensional space. The &lt;strong&gt;curse of dimensionality&lt;/strong&gt; refers to the problems that arise when you have too many features. When this happens, the data becomes sparse (e.g. spread out too thinly in the feature space), and it is hard to find meaningful patterns.&lt;/p&gt;

&lt;p&gt;There are a number of unsupervised reduction techniques that can help to distil many features into a smaller more manageable number:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Principal component analysis (PCA)&lt;/strong&gt; – this technique retains most of the variation in the original features but reduces the overall number of features. It works by transforming features into a new set of features called principal components&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;K-Means&lt;/strong&gt; – this technique uses a clustering algorithm to group similar data points into K clusters. It does not create new features but assigns a cluster label to each data point.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
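&lt;p&gt;A minimal PCA can be sketched with NumPy's singular value decomposition (illustrative only; SageMaker's built-in PCA algorithm does the equivalent at scale):&lt;/p&gt;

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components (directions of greatest variance)."""
    Xc = X - X.mean(axis=0)                    # centre each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T            # coordinates in the new basis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # 100 samples, 3 features
reduced = pca(X, 2)                            # distil 3 features down to 2
print(reduced.shape)  # (100, 2)
```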

&lt;h2&gt;
  
  
  SageMaker Built-In Algorithms
&lt;/h2&gt;

&lt;p&gt;Before you can train your model, you need to select a machine learning algorithm to use. Amazon SageMaker provides a suite of built-in algorithms, pre-trained models, and pre-built solution templates to help data scientists and machine learning practitioners get started on training and deploying machine learning models quickly. It is worth understanding each of these algorithms at a high-level, understanding which ones are used for supervised versus unsupervised learning, and their main uses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linear Learner&lt;/strong&gt; - A supervised learning algorithm suited for general classification (logistic regression) and regression (linear regression) tasks. It makes predictions based on labelled data. In simpler terms, the model is given examples where each example has some features (like size, price, or age) and an outcome (like a house price or a category). For classification tasks, the model sorts data into categories, such as whether a house is expensive or not. For regression tasks, it predicts a specific value, like the actual price of a house. It assumes a linear relationship between features and the target, and requires missing values to be pre-processed before training.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;K-Means&lt;/strong&gt; - An unsupervised algorithm designed for clustering or grouping data points based on their features (chosen attribute), without needing labelled data&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BlazingText&lt;/strong&gt; - A supervised algorithm used for Natural Language Processing tasks like text classification.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seq2Seq&lt;/strong&gt; - A supervised algorithm specifically designed for sequence-to-sequence tasks, such as predicting the next word in a sequence, making it ideal for tasks like language translation or text generation&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DeepAR&lt;/strong&gt; - A supervised algorithm used to forecast time-series predictions by using recurrent neural networks (RNN)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;XGBoost&lt;/strong&gt; - A supervised learning algorithm used for both classification and regression tasks — especially when you care about speed and performance. It is an optimized, scalable implementation of gradient boosting that builds an ensemble of decision trees in a sequential manner. Often outperforms other models in competitions (e.g., Kaggle)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Random Cut Forest&lt;/strong&gt; - An unsupervised algorithm used to identify abnormal data points within a data set e.g. anomaly detection&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Semantic Segmentation&lt;/strong&gt; - A supervised algorithm that provides pixel-level classification but does not label objects with bounding boxes. It is typically used to classify individual pixels by tagging each pixel with a specified class&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Principal Component Analysis (PCA)&lt;/strong&gt; - An unsupervised algorithm used for dimensionality reduction&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image Classification&lt;/strong&gt; - A supervised algorithm that is used to label entire images, not individual objects. It simply assigns a single label to an entire image, categorising it based on the predominant features. It cannot identify or count multiple objects within a single image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Object Detection&lt;/strong&gt; - A supervised algorithm used to identify and classify multiple objects within an image, assigning bounding boxes and confidence scores. It draws bounding boxes around detected objects and classifies them into different categories, making it very useful for tasks where you need to recognise what is in the image and determine the exact location of each object. This algorithm is well-suited for scenarios that require counting specific items, such as animals, in drone imagery, as it can distinguish between individual objects even in complex scenes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Object2Vec&lt;/strong&gt; - A supervised algorithm primarily used to learn vector embeddings of discrete objects. It's typically used in recommendation systems, document classification or semantic similarity tasks, not computer vision or image processing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IP Insights&lt;/strong&gt; - An unsupervised algorithm used to detect anomalies in IP address usage patterns. It captures associations between these IP addresses and various entities, such as user IDs or account numbers. For instance, you can use it to detect a user attempting to log into a web service from an anomalous IP address. Additionally, it helps identify accounts that create computing resources from unexpected IP addresses. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latent Dirichlet Allocation (LDA)&lt;/strong&gt; - An unsupervised learning technique designed to represent a collection of documents as a combination of various topics. LDA is primarily used to identify a specified number of topics within a set of text documents, and is a powerful tool for text mining and natural language processing. It allows companies to sift through vast amounts of textual data and discern patterns that might otherwise take time to become apparent. Since LDA is an unsupervised method, the topics are not specified up front, and the discovered topics may not necessarily match human categorisations. Instead, LDA learns the topics as a probability distribution over the words in the documents, and each document is characterised as a mixture of these topics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neural Topic Model (NTM)&lt;/strong&gt; - An unsupervised algorithm used for organising documents into topics. It is just like LDA — but it's based on neural networks rather than probabilistic graphical models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Factorization Machines&lt;/strong&gt; - A supervised algorithm designed to handle sparse data, making it ideal for recommendation systems where user-item interactions are often sparse. It is primarily used for recommendation systems and ranking predictions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Text Classification – TensorFlow algorithm&lt;/strong&gt; - A supervised algorithm designed to classify text into predefined categories. &lt;/p&gt;

&lt;h2&gt;
  
  
  Model Development
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Hyperparameters&lt;/strong&gt; are external configuration variables used to control the training of a model and improve its performance and outcome. Hyperparameters are set before training. This can be done manually, although Amazon SageMaker AI offers automatic model tuning (AMT), also known as hyperparameter tuning, which finds the best version of a model by running many training jobs on your dataset. AMT uses intelligent search methods based on Bayesian search theory to find the best model in the shortest time, and also supports Hyperband, a newer search strategy.&lt;/p&gt;

&lt;p&gt;Common hyperparameters include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Epoch&lt;/strong&gt; – the number of times the entire training dataset is shown to the network during training. A smaller epoch value means faster training, but the model might not learn enough patterns and end up underfitting. A larger epoch value gives more opportunity to refine weights and give better convergence, but will take longer to train and may end up memorising training data and so overfitting&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learning rate&lt;/strong&gt; – the rate at which an algorithm updates estimates. Too high a learning rate means you might overshoot the optimal solution. Too small a learning rate will take too long to find the optimal solution&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Batch size&lt;/strong&gt; – the number of training samples used within each batch of each epoch. Large batch sizes are faster per epoch because they make fuller use of the GPU, but risk worse generalisation and can get stuck in a poor solution. Small batch sizes are slower per epoch but can provide better generalisation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
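&lt;p&gt;The interplay between these values can be seen in a toy training loop. The sketch below (all values hypothetical) minimises a simple quadratic with gradient descent, showing how the learning rate determines whether training converges, crawls, or diverges over a fixed number of epochs:&lt;/p&gt;

```python
# Toy example: gradient descent on f(w) = (w - 3)**2, optimum at w = 3.
# The learning rates and epoch count are hypothetical, chosen for illustration.
def train(learning_rate, epochs):
    w = 0.0
    for _ in range(epochs):
        gradient = 2 * (w - 3)       # derivative of (w - 3)**2
        w -= learning_rate * gradient
    return w

print(train(0.1, 50))    # well chosen: converges close to the optimum of 3.0
print(train(0.001, 50))  # too small: barely moved after 50 epochs
print(train(1.1, 50))    # too large: overshoots further each step and diverges
```

&lt;p&gt;With the well-chosen rate the weight settles near the optimum; the tiny rate would need many more epochs to get there, and the oversized rate never converges at all.&lt;/p&gt;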

&lt;p&gt;Note that hyperparameters are not related to &lt;strong&gt;inference parameters&lt;/strong&gt;. Inference parameters are settings you can adjust during inference that influence the response from the model. The most common are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Temperature&lt;/strong&gt;: Temperature is a value between 0 and 1, and it regulates the creativity of the model's responses. Use a lower temperature if you want more deterministic responses, and use a higher temperature if you want creative or different responses for the same prompt&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top K&lt;/strong&gt;: The number of most-likely candidates that the model considers for the next token. Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top P&lt;/strong&gt;: The percentage of most-likely candidates that the model considers for the next token. Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.&lt;/li&gt;
&lt;/ul&gt;
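&lt;p&gt;These three parameters can be illustrated with a toy sampling sketch. The vocabulary and logit scores below are hypothetical, but the mechanics match the descriptions above: temperature rescales the scores before the softmax, while Top K and Top P trim the candidate pool before a token is drawn:&lt;/p&gt;

```python
import math
import random

# Hypothetical next-token scores (logits) for a tiny five-token vocabulary.
logits = {"cat": 2.0, "dog": 1.5, "car": 0.5, "sky": 0.2, "pen": 0.1}

def softmax(scores, temperature):
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more varied responses).
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def top_k(probs, k):
    # Keep only the k most likely candidates, then renormalise.
    best = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in best)
    return {tok: p / total for tok, p in best}

def top_p(probs, p):
    # Keep the smallest set of candidates whose cumulative probability reaches p.
    kept, cumulative = {}, 0.0
    for tok, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {tok: prob / total for tok, prob in kept.items()}

probs = softmax(logits, temperature=0.7)
pool = top_k(probs, k=3)   # only the 3 most likely tokens remain in the pool
choice = random.choices(list(pool), weights=list(pool.values()))[0]
```

&lt;p&gt;Lowering &lt;code&gt;k&lt;/code&gt;, &lt;code&gt;p&lt;/code&gt; or the temperature in this sketch all have the same directional effect: the model's choice concentrates on the most likely tokens.&lt;/p&gt;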

&lt;h2&gt;
  
  
  Evaluating Model Accuracy
&lt;/h2&gt;

&lt;p&gt;Metrics are used to measure the performance and accuracy of a machine learning model. These metrics can typically be broken down into classification metrics and regression metrics.&lt;/p&gt;

&lt;p&gt;With classification, the goal of the model is to predict a label or class (category) for the given input. With binary classification, there are only two possible outputs (positive or negative). This is used to predict whether an image is a dog or not, or whether an email is spam or not. With multi-class classification, there are more than two possible outputs, such as predicting whether an animal is a dog, cat or cow.&lt;/p&gt;

&lt;p&gt;With regression, the goal of the model is to predict a numerical value. This could be predicting a house or stock price, or a person's annual income given certain inputs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Classification Metrics
&lt;/h3&gt;

&lt;p&gt;The confusion matrix is a great way to help understand common classification metrics. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6m0fsc8qbn74xccoiii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6m0fsc8qbn74xccoiii.png" alt="Confusion Matrix" width="678" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recall&lt;/strong&gt; - Recall is the percentage of actual positives correctly predicted. It is focused on how many of the actual positives the model got right. You use it when you prefer to catch as many positives as possible, even if some are incorrect. It is a good metric when false negatives are costly e.g. fraud detection or cancer screening, where it is better to flag more cases (even if some are wrong) than to miss a true positive.&lt;/p&gt;

&lt;p&gt;It is calculated as: &lt;code&gt;Recall = TP / (TP + FN)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Precision&lt;/strong&gt; - Precision is all around correct positives. All positive predictions include both true positives and false positives (those predicted as positive but are actually negative). This means it is a good metric when false positives are costly e.g. spam email when you don't want to mark legitimate emails as spam, or object detection in autonomous vehicles, where a false positive can induce sudden unnecessary braking.&lt;/p&gt;

&lt;p&gt;It is calculated as: &lt;code&gt;Precision = TP / (TP + FP)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;A model with &lt;strong&gt;high precision and low recall&lt;/strong&gt; catches few positives but is rarely wrong.&lt;/p&gt;

&lt;p&gt;A model with &lt;strong&gt;high recall and low precision&lt;/strong&gt; catches most positives but includes many false alarms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;F1 Score&lt;/strong&gt; - The F1 Score is the harmonic mean of precision and recall. It is used when you need a balance between both.&lt;/p&gt;

&lt;p&gt;It is calculated as: &lt;code&gt;F1 Score = 2 x ((Precision x Recall) / (Precision + Recall))&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accuracy&lt;/strong&gt; - Accuracy measures overall correctness — how often the model was right, regardless of class. It considers all predictions (true positives, true negatives, false positives, and false negatives).&lt;/p&gt;

&lt;p&gt;It is calculated as: &lt;code&gt;Accuracy = (TP + TN) / (TP + TN + FP + FN)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AUC and ROC&lt;/strong&gt; - The ROC curve is a graphical plot that helps visualise how well a binary classification model performs across different threshold values. It is a plot of the true positive rate (recall) versus the false positive rate, and helps you see the trade-off between true positives and false positives.&lt;/p&gt;

&lt;p&gt;The Area under the Curve (AUC) is a single scalar value between 0 and 1 that summarises how well the classification model can separate the positive and negative classes. A value of 0.5 means the model performs no better than a random classifier. A value of 1.0 is a perfect classifier.&lt;/p&gt;
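&lt;p&gt;The confusion-matrix metrics above can be computed directly from the four counts. A minimal sketch using hypothetical counts:&lt;/p&gt;

```python
# Hypothetical confusion-matrix counts for a binary classifier.
TP, FP, FN, TN = 80, 10, 20, 90

recall = TP / (TP + FN)                       # share of actual positives caught
precision = TP / (TP + FP)                    # share of positive predictions that were right
f1 = 2 * (precision * recall) / (precision + recall)
accuracy = (TP + TN) / (TP + TN + FP + FN)    # overall correctness across all predictions

print(f"recall={recall:.2f} precision={precision:.2f} f1={f1:.2f} accuracy={accuracy:.2f}")
# recall=0.80 precision=0.89 f1=0.84 accuracy=0.85
```

&lt;p&gt;Note how the same counts give different pictures: this model misses 20% of real positives (recall) even though it is right 89% of the time when it says "positive" (precision).&lt;/p&gt;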

&lt;h3&gt;
  
  
  Regression Metrics
&lt;/h3&gt;

&lt;p&gt;If you are using regression where you are predicting a number and not just a classification, then there are other metrics:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mean Absolute Error (MAE)&lt;/strong&gt; - MAE measures the average absolute difference between the predicted values and the actual values. It tells you, on average, how much your model's predictions are off from the true values. A lower score means a better model. It is simple to understand, and is robust to outliers because it penalises errors linearly rather than quadratically. This makes it a good choice when you don’t want a few bad predictions to dominate the error metric.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mean Squared Error (MSE)&lt;/strong&gt; - MSE averages the squared difference between actual and predicted values. Because it squares the errors, outliers become amplified, making MSE more sensitive to them. You would choose MSE over MAE (Mean Absolute Error) when you want to penalise large errors more heavily and are more concerned with model performance on extreme values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RMSE (Root Mean Square Error)&lt;/strong&gt; - RMSE is a metric used to measure the differences between predicted values and actual values in a regression problem. It calculates the square root of the average squared differences between the predicted and actual values. A lower Root Mean Square Error value indicates better model performance. Since the errors are squared before averaging, larger errors have a bigger impact (this makes RMSE sensitive to outliers).&lt;/p&gt;

&lt;p&gt;You would use RMSE (Root Mean Squared Error) over MSE (Mean Squared Error) when you want the error metric to be in the same units as the target variable, making it more interpretable. For example, if you're predicting house prices in dollars, RMSE is in dollars, while MSE is in squared dollars, which is less intuitive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;R Squared&lt;/strong&gt; - R-Squared is the square of the correlation coefficient between observed and predicted outcomes, and measures how well your regression model explains the variability of the target (dependent) variable. A score of 1 means the model explains all the variance perfectly. A score of 0 means the model explains none of the variance.&lt;/p&gt;
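&lt;p&gt;These regression metrics are straightforward to compute by hand. The sketch below uses hypothetical actual and predicted values; note how the single large error on the last point inflates MSE and RMSE far more than MAE, and that R-Squared is computed here in its common &lt;code&gt;1 - SS_res / SS_tot&lt;/code&gt; form:&lt;/p&gt;

```python
# Hypothetical actual vs predicted values from a regression model.
actual = [10.0, 12.0, 15.0, 20.0]
predicted = [11.0, 11.0, 14.0, 24.0]

n = len(actual)
errors = [a - p for a, p in zip(actual, predicted)]

mae = sum(abs(e) for e in errors) / n          # average absolute error
mse = sum(e ** 2 for e in errors) / n          # squaring amplifies the -4 outlier
rmse = mse ** 0.5                              # back in the units of the target

mean = sum(actual) / n
ss_res = sum(e ** 2 for e in errors)           # unexplained variation
ss_tot = sum((a - mean) ** 2 for a in actual)  # total variation
r_squared = 1 - ss_res / ss_tot

print(f"MAE={mae:.2f} MSE={mse:.2f} RMSE={rmse:.2f} R2={r_squared:.2f}")
# MAE=1.75 MSE=4.75 RMSE=2.18 R2=0.67
```

&lt;p&gt;If the target were house prices in dollars, MAE and RMSE here would both be in dollars, while MSE would be in squared dollars.&lt;/p&gt;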

&lt;h2&gt;
  
  
  Improving Model Accuracy
&lt;/h2&gt;

&lt;p&gt;Understanding model fit is important when diagnosing the root cause of poor model accuracy. &lt;/p&gt;

&lt;p&gt;Two common terms that come up to describe model performance are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Overfitting&lt;/strong&gt; – a model that is overfitting has learned patterns in the training data that don’t generalise out to the real world. This  means that it has high accuracy on the training data set, but lower accuracy on evaluation data sets&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Underfitting&lt;/strong&gt; – a model that is underfitting performs poorly on the training data and in the real world. This is because the model is unable to capture the relationship between the input examples and the target values. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Regularization techniques&lt;/strong&gt; are intended to prevent overfitting. Common techniques include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dropout&lt;/strong&gt; – a technique where random neurons are temporarily dropped out (i.e. ignored) during each training iteration. This means the network can’t rely too heavily on any specific neuron or connection&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Early Stopping&lt;/strong&gt; – this is a technique where you stop training the neural network before it overfits the training data. It works by monitoring validation loss and accuracy and stopping training when the model stops improving.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;L1 and L2 Regularization&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;L1 and L2 Regularization are techniques used to prevent overfitting by penalising large model weights. In a machine learning model, a weight is a numeric parameter that connects an input feature to an output. A large weight value means the model is putting an extremely strong emphasis on that specific feature, making the model very sensitive to small changes in inputs, which can lead to overfitting. These techniques add a penalty term to the loss function to discourage large weights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;L1 Regularization (Lasso)&lt;/strong&gt; – the penalty is the sum of the absolute value of the weights. It shrinks some weights entirely to zero to create sparse models. This is a form of feature selection (removing irrelevant features). You should use this when you suspect only a few features are important. It is computationally inefficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;L2 Regularization (Ridge)&lt;/strong&gt; – the penalty is the sum of the square of the weights. This shrinks the weights but does not make them zero. It helps keep the model simpler and reduces sensitivity. It is computationally efficient.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
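&lt;p&gt;The two penalty terms are simple to express. A minimal sketch with hypothetical weights and regularization strength, showing how each penalty is added to a base loss:&lt;/p&gt;

```python
# Hypothetical model weights and a base loss value (e.g. from MSE).
weights = [0.5, -2.0, 0.0, 3.0]
base_loss = 1.2
lam = 0.01   # regularization strength, itself a hyperparameter

l1_penalty = lam * sum(abs(w) for w in weights)   # Lasso: sum of absolute weights
l2_penalty = lam * sum(w ** 2 for w in weights)   # Ridge: sum of squared weights

loss_l1 = base_loss + l1_penalty   # tends to push small weights to exactly zero
loss_l2 = base_loss + l2_penalty   # shrinks weights but rarely to exactly zero
```

&lt;p&gt;Because L2 squares the weights, the large weight of 3.0 dominates its penalty, whereas L1 penalises all weights in proportion to their size.&lt;/p&gt;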

&lt;h2&gt;
  
  
  Additional Topics to Study
&lt;/h2&gt;

&lt;p&gt;The two other main topic areas you need to understand are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS AI Services&lt;/strong&gt; - these are the managed AWS services that offer a simpler entry point than building your own model. You will need a good understanding of each service and what it is used for, so you can distinguish between Amazon Lex and Amazon Polly, and between Amazon Translate and Amazon Transcribe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon SageMaker&lt;/strong&gt; - Amazon SageMaker is a service that provides a whole host of features and capabilities you need to be aware of. You need to understand which feature you can use to detect bias; which to import, prepare and transform data; which to share curated features, and so on.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Other Study Guides
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Certification Page&lt;/strong&gt; - the &lt;a href="https://aws.amazon.com/certification/certified-machine-learning-engineer-associate/" rel="noopener noreferrer"&gt;AWS certification home page&lt;/a&gt; for this exam includes the study guide and links to additional resources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS SkillBuilder&lt;/strong&gt; - the &lt;a href="https://skillbuilder.aws/learning-plan/A2FGY8CH1P/exam-prep-plan-aws-certified-machine-learning-engineer--associate-mlac01--english/3YFU86SSKN" rel="noopener noreferrer"&gt;AWS official learning plan&lt;/a&gt; which is available for free alongside an official set of practice exam questions. Additional material including longer review sections, extra questions and labs are available with a subscription.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Udemy&lt;/strong&gt; - the &lt;a href="https://www.udemy.com/course/aws-certified-machine-learning-engineer-associate-mla-c01/" rel="noopener noreferrer"&gt;certification course&lt;/a&gt; provided by Stephane Maarek and Frank Kane comes highly recommended&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pluralsight&lt;/strong&gt; - this &lt;a href="https://www.pluralsight.com/paths/aws-certified-machine-learning-engineer-associate-mlac01" rel="noopener noreferrer"&gt;certification course&lt;/a&gt; by Pluralsight also includes labs. Pluralsight offer a 10 day free individual trial and monthly subscriptions which may work for some&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tutorials Dojo&lt;/strong&gt; - this set of &lt;a href="https://portal.tutorialsdojo.com/courses/aws-certified-machine-learning-engineer-associate-practice-exams-mla-c01-2025/" rel="noopener noreferrer"&gt;practice questions&lt;/a&gt; is a great way to get used to the style of exam in various modes&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>certification</category>
      <category>learning</category>
    </item>
    <item>
      <title>Next Gen Developer Experience with Amazon Q Developer</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Thu, 08 May 2025 08:55:15 +0000</pubDate>
      <link>https://forem.com/aws-heroes/next-gen-developer-experience-with-amazon-q-developer-1eja</link>
      <guid>https://forem.com/aws-heroes/next-gen-developer-experience-with-amazon-q-developer-1eja</guid>
      <description>&lt;p&gt;There have been massive advances in the capabilities and features supported by Amazon Q Developer over the last few months. A number of these really stood out for me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;At the beginning of March the new enhanced Amazon Q Developer CLI agent was released with the power of Claude 3.7 Sonnet step-by-step reasoning. This also  gave the agent access to tools such as the AWS CLI. Read more in the &lt;a href="https://aws.amazon.com/blogs/devops/introducing-the-enhanced-command-line-interface-in-amazon-q-developer/" rel="noopener noreferrer"&gt;blog post&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At the end of April the Amazon Q Developer CLI was further enhanced with support for Model Context Protocol (MCP) to provide even more context. Read more in the &lt;a href="https://aws.amazon.com/blogs/devops/extend-the-amazon-q-developer-cli-with-mcp/" rel="noopener noreferrer"&gt;blog post&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At the beginning of May, Amazon Q Developer support was integrated into GitHub in preview. Read more in the &lt;a href="https://aws.amazon.com/blogs/aws/amazon-q-developer-in-github-now-in-preview-with-code-generation-review-and-legacy-transformation-capabilities/" rel="noopener noreferrer"&gt;blog post&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With all of these improvements, I wanted to see if there was a way of bringing them together to meet a coherent use case. This use case was  to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Take a task assigned to me from a Jira Kanban board&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implement the requested functionality&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Push the code up to GitHub as the source code repository&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run a check for security vulnerabilities and code quality issues&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Raise a Pull Request&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Move the task along on the Kanban board&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The goal was to show how these tools can make life easier for a software engineer, and greatly increase their productivity. Let's see how I got on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Set up Jira
&lt;/h2&gt;

&lt;p&gt;I am using a hosted version of Jira Cloud using the free tier provided by Atlassian. The first thing I did was to create a new Jira project that sets up a Kanban board using a software development supporting template. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09ab4hvux2gm4c5johqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09ab4hvux2gm4c5johqa.png" alt="Create Jira Project" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next I created a new Jira task. The task was to "create a classic snake game written in python using pygame", and I assigned it to myself. Although this is a contrived example, you could easily equate this to a new feature on an existing service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94p5455ppmm8b5qoocpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94p5455ppmm8b5qoocpn.png" alt="Create Jira Task" width="794" height="767"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once created, we can see this task in the "To Do" section of the Kanban board.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktm5k27l9nj9q0cioyti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktm5k27l9nj9q0cioyti.png" alt="Jira To Do Kanban" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also created a new GitHub repository which is cloned to my workspace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Configure MCP Servers
&lt;/h2&gt;

&lt;p&gt;The next step was to set up the Amazon Q CLI to access both Atlassian and GitHub. Amazon Q CLI acts as an MCP Client, and it can access MCP Servers that have been configured in a &lt;code&gt;mcp.json&lt;/code&gt; file. This file needs to be located in &lt;code&gt;~/.aws/amazonq&lt;/code&gt;. You can find out more details in this &lt;a href="https://dev.to/aws/configuring-model-context-protocol-mcp-with-amazon-q-cli-e80"&gt;blog post&lt;/a&gt; by Ricardo Sueiras.&lt;/p&gt;

&lt;p&gt;I wanted to run these MCP Servers in a container, and without access to Docker Desktop, I configured them to use Podman. My configuration is shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "mcpServers": {
    "github-mcp-server": {
      "command": "podman",
      "args": [
        "run",
        "--rm",
        "--interactive",
        "--env",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "github_pat_xxx"}
    },
    "mcp-atlassian": {
      "command": "podman",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e", "JIRA_URL",
        "-e", "JIRA_USERNAME",
        "-e", "JIRA_API_TOKEN",
        "ghcr.io/sooperset/mcp-atlassian:latest"
      ],
      "env": {
        "JIRA_URL": "https://xxx.atlassian.net/",
        "JIRA_USERNAME": "xxx@email.com",
        "JIRA_API_TOKEN": "XXXXX"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This required creating a Personal Access Token in GitHub, and an API Token in Atlassian.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Run Amazon Q Developer CLI
&lt;/h2&gt;

&lt;p&gt;After making the changes to the &lt;code&gt;mcp.json&lt;/code&gt; configuration, I launched Amazon Q CLI from the terminal window in my Visual Studio Code IDE. You can see that the two MCP Servers have been loaded and are accessible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xup59yca1z89snvw4q1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xup59yca1z89snvw4q1.png" alt="Launch Amazon Q CLI" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I start by asking the Amazon Q CLI to “get the latest task from Jira that is assigned to me”. The Amazon Q CLI responds asking for more details before it can use the configuration to retrieve information about the task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16lrvkkndqds6ctjdoyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16lrvkkndqds6ctjdoyk.png" alt="Get Latest Jira Task" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I tell the Amazon Q CLI that I am using Jira Cloud and to search across all projects. I am then told that the Amazon Q CLI wants to run a tool provided by the &lt;code&gt;mcp_atlassian&lt;/code&gt; MCP Server. I am prompted to either press &lt;code&gt;t&lt;/code&gt; to always trust the tool for the session, &lt;code&gt;y&lt;/code&gt; to allow the tool to be executed this time without trusting for the session, or &lt;code&gt;n&lt;/code&gt; to not let the tool be executed. I will be answering &lt;code&gt;y&lt;/code&gt; to all of these prompts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkxq54dqvdxgkxkbdomx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkxq54dqvdxgkxkbdomx.png" alt="Jira Search MCP tool" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After running the tool, the Amazon Q CLI has found the task and displays all of the details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8v0vaxwvgtp2j1w81me.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8v0vaxwvgtp2j1w81me.png" alt="Jira Task details returned" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I ask Amazon Q CLI to help me implement the functionality, and it goes away and generates all the code required. At this point, the code is in memory, and I am asked if I want to save the code to a file in my current directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukytd6ws1vatnbo5mxg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukytd6ws1vatnbo5mxg9.png" alt="Q CLI Implement Functionality" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I tell the Amazon Q CLI to create a new branch and add the file to this new branch. After running the relevant git commands to create a new branch, the Amazon Q CLI switches to this branch, and then writes the code to a new file in this branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjgpz6v0xq2xpsok0vd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjgpz6v0xq2xpsok0vd8.png" alt="Q CLI Create New Branch" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After successfully writing the code to a new branch, the Amazon Q CLI commits the changes with a message referencing the specific Jira task number that it still has in its current session context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvavklm8kbk51di5c6uo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvavklm8kbk51di5c6uo.png" alt="Q CLI Commit Changes" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, I could prompt to make more enhancements to the game. The Amazon Q CLI even gives suggestions of areas of the game it could improve. Instead, I just ask it to update the README file with instructions on how to play the game, and then make another commit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzwpo83609tygdl1z0ca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzwpo83609tygdl1z0ca.png" alt="Update README.md" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Amazon Q CLI now asks if I want to make any other changes. I'm happy with those that have been made so far, so I ask it to create a pull request. Notice the first request to create a pull request fails. Amazon Q CLI apologises and tries another approach, this time pushing the branch to GitHub and then creating the pull request, which succeeds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F876qgapywrxwepeeidx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F876qgapywrxwepeeidx0.png" alt="Create Pull Request" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the pull request, the Amazon Q CLI knows from its session context and its reasoning that we should also update the original task in Jira. It interacts with tools in the Atlassian MCP Server to transition the task to the "In Progress" state and add a relevant comment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9ktluhl1ul1ogywhgay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9ktluhl1ul1ogywhgay.png" alt="Update Jira" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Jira Kanban Board
&lt;/h2&gt;

&lt;p&gt;At this point, I go across to the Kanban board in Jira and can see that the task has been transitioned to "In Progress".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7tbe4kkamg1800zqzb0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7tbe4kkamg1800zqzb0.png" alt="Jira Task In Progress" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking into the task, I can see the comment that has been added to the task. This gives details about the functionality implemented to meet the task description, alongside a working link to the open PR in GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnge3ot2sbsqd0ec7hfhl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnge3ot2sbsqd0ec7hfhl.png" alt="Jira Task Comment" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Code Scanning in GitHub
&lt;/h2&gt;

&lt;p&gt;The final task I wanted to carry out was to run a check for security vulnerabilities and code quality issues. I have already configured my GitHub account with the Amazon Q Developer application. This means that as soon as the pull request was raised, the &lt;code&gt;amazon-q-developer&lt;/code&gt; application automatically scanned the changes in the PR. Happily, there were no security or code quality issues found. If there were, the application would have automatically generated code suggestions to fix the findings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc3bkc0caqb9k5kdxrq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc3bkc0caqb9k5kdxrq0.png" alt="GitHub Integration" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I can't remember the last time I tested out new features and services and came away so convinced that we are starting to see a change in how the software industry will engineer applications. The value of software engineering remains, even more so when working on complex problems for which generative AI solutions do not have a corpus of knowledge to be trained on. However, this showed me how these new capabilities can reduce context switching and the need to move between various tools, copying data between them. The next generation of developer experience is well and truly upon us, so I'd urge everyone to try it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Biography
&lt;/h2&gt;

&lt;p&gt;As Chief AWS Architect at IBM in the UK, I am responsible for growing the AWS capability and community within one of the fastest growing AWS consulting partners globally. This often gives me the opportunity to try out the latest features in preview before they go into general availability. You'll often find me blogging about my experience, but please reach out if there are services you'd like to know more about.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>Java code transformation using the Amazon Q Developer GitHub Integration</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Tue, 06 May 2025 10:30:37 +0000</pubDate>
      <link>https://forem.com/aws-heroes/java-code-transformation-using-the-amazon-q-developer-github-integration-6fd</link>
      <guid>https://forem.com/aws-heroes/java-code-transformation-using-the-amazon-q-developer-github-integration-6fd</guid>
      <description>&lt;p&gt;AWS have launched the Amazon Q Developer integration with GitHub. I was keen to try this out, and in this post, I walk through how to get started and use the integration to upgrade a Java project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Setup
&lt;/h2&gt;

&lt;p&gt;The first step is to install the Amazon Q Developer application from the GitHub Marketplace found at this URL - &lt;a href="https://github.com/apps/amazon-q-developer" rel="noopener noreferrer"&gt;https://github.com/apps/amazon-q-developer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1m99jkbo3hmdksnuedj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1m99jkbo3hmdksnuedj.png" alt="Amazon Q Developer application" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click on &lt;code&gt;Install&lt;/code&gt; and select which repositories you want to allow the application to access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgiw22a6fgkrrc2swolp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgiw22a6fgkrrc2swolp.png" alt="Install Q Developer application" width="800" height="1006"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also optionally register the application installation with your AWS account to increase your usage limits. This is a two-step process. A landing page in your AWS console allows you to authorise Amazon Q Developer to access your GitHub account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkkw3vcmjnzvq72t1je5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkkw3vcmjnzvq72t1je5.png" alt="Register App Installation" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This redirects you to GitHub to complete the authorisation process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74np9gaie9lo1646f4wl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74np9gaie9lo1646f4wl.png" alt="Authorise GitHub" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You are then returned to the AWS Console to provide a registration name and complete the registration process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73qw3tt8hyewzg91lx9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73qw3tt8hyewzg91lx9i.png" alt="Complete Registration Process" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup GitHub Actions
&lt;/h2&gt;

&lt;p&gt;Before you can get started on running a transformation request, you need to enable GitHub Actions for the repository and ensure a runner is available online.&lt;/p&gt;

&lt;p&gt;This involves creating a &lt;code&gt;main.yml&lt;/code&gt; file within a &lt;code&gt;.github/workflows/&lt;/code&gt; folder structure. The workflow I used is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Q Code Transformation

on:
  push:
    branches:
      - 'Q-TRANSFORM-issue-*'

env:
   MAVEN_CLI_OPTS: &amp;gt;-
     -Djava.version=${{ contains(github.event.head_commit.message, 'Code transformation completed') &amp;amp;&amp;amp;  '17' || '11' }}

jobs:
  q-code-transformation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          java-version: ${{ contains(github.event.head_commit.message, 'Code transformation completed') &amp;amp;&amp;amp; '17' || '11' }}
          distribution: 'adopt'

      - name: Build and copy dependencies
        run: |
          mvn ${{ env.MAVEN_CLI_OPTS }} clean install -U
          mvn ${{ env.MAVEN_CLI_OPTS }} verify
          mvn ${{ env.MAVEN_CLI_OPTS }} dependency:copy-dependencies -DoutputDirectory=dependencies -Dmdep.useRepositoryLayout=true -Dmdep.copyPom=true -Dmdep.addParentPoms=true

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: q-code-transformation-dependencies
          path: dependencies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a workflow named &lt;code&gt;Q Code Transformation&lt;/code&gt; that is triggered when code is pushed to a branch that matches the pattern &lt;code&gt;Q-TRANSFORM-issue-*&lt;/code&gt;. The job itself runs on Ubuntu, and starts by checking out the code in the repository into the GitHub Action runner.&lt;/p&gt;

&lt;p&gt;It then sets up a Java installation on the runner using the AdoptOpenJDK distribution. The Java version is chosen dynamically: if the commit message contains "Code transformation completed", Java 17 is installed; otherwise, Java 11 is used.&lt;/p&gt;
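
&lt;p&gt;The &lt;code&gt;&amp;amp;&amp;amp; '17' || '11'&lt;/code&gt; pattern in the workflow is how GitHub Actions expressions emulate a ternary. As an illustrative sketch (not part of the workflow itself, and with a made-up commit message), the same selection logic looks like this in shell:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative only: GitHub evaluates the real expression on the runner
commit_msg="Code transformation completed by Amazon Q"
case "$commit_msg" in
  *"Code transformation completed"*) java_version=17 ;;
  *) java_version=11 ;;
esac
echo "Selected Java version: $java_version"   # prints: Selected Java version: 17
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;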

&lt;p&gt;The job then uses &lt;code&gt;maven&lt;/code&gt; commands to clean and rebuild the project, run all tests, and finally copy all project dependencies into a dedicated folder. These dependencies are then uploaded from the runner machine into GitHub as a workflow artifact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Triggering the Code Transformation
&lt;/h2&gt;

&lt;p&gt;Triggering the Java code transformation is as simple as raising a GitHub issue, and applying the Amazon Q transform agent label to the issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvbeeu4akl3huyb4yve3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvbeeu4akl3huyb4yve3.png" alt="Create GitHub Issue" width="800" height="387"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;amazon-q-developer&lt;/code&gt; application then takes over, adding comments to the issue to keep you up to date. It starts by running the GitHub Actions workflow required to transform the code. You can view the progress of the runner in the Actions tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2h6l9zw0l3cvxkjanb2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2h6l9zw0l3cvxkjanb2.png" alt="GitHub Action Runner" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once successful, your code to be transformed is uploaded, and a transformation plan is created. This plan sets out the changes that the agent initially expects to apply.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkxj5ich0gbo2ef5b7uc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkxj5ich0gbo2ef5b7uc.png" alt="Code Transformation Plan" width="774" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The agent then starts upgrading the code to Java 17. Once complete, the agent updates the issue with a comment and opens a pull request.&lt;/p&gt;

&lt;h2&gt;
  
  
  Viewing the transformed code
&lt;/h2&gt;

&lt;p&gt;The pull request opened by the agent starts off with a code transformation summary detailing the changes made during the transformation process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72j85y846iksebh71u20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72j85y846iksebh71u20.png" alt="Code Transformation Summary Changes" width="744" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One area where the transform agent has significantly improved over the past few months is in summarising these changes. Not only are they nicely laid out in a list format, but for each significant change, details are provided on why it was made and the benefits it offers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbbdr2pu07lt08qurmp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbbdr2pu07lt08qurmp4.png" alt="Summary of Changes" width="749" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security vulnerabilities and code quality issues
&lt;/h2&gt;

&lt;p&gt;One benefit that the GitHub integration brings is the scanning of all pull requests for security vulnerabilities and code quality issues. The application raises a comment in the pull request and then highlights any findings it discovers. In my case, the application generated a high severity warning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26el17y9ht54vpjxl9vu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26el17y9ht54vpjxl9vu.png" alt="Security Vulnerabilities" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For each vulnerability found, a detailed description of the required fix is provided, alongside code that can be committed directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjnj7wuk91i15yp526nr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjnj7wuk91i15yp526nr.png" alt="Security Vulnerability Fix" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If multiple issues are discovered, you can go into the &lt;code&gt;Files Changed&lt;/code&gt; tab and batch all of the suggestions into a single commit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Observations and Conclusion
&lt;/h2&gt;

&lt;p&gt;There are a few areas worth drawing out that initially caught me out.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Supported Target Code Versions&lt;br&gt;
The transform agent in the GitHub integration currently supports only Java 17 as a target code version. The transform agent in the IDE has recently enabled support for Java 21 as a target code version as well as Java 17.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Code Review&lt;br&gt;
The code review run on the pull request is only carried out against the changes made in the diff, not against unchanged content in those files or the entire repo. For that, you will need to go back to the IDE and run the &lt;code&gt;/review&lt;/code&gt; agent. It would be great to be able to trigger a full code review of an entire repo through the GitHub integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Code Suggestions for Security Vulnerabilities&lt;br&gt;
Ironically, as the code review is only carried out against new changes, the vulnerability detected by the agent is in code that the agent itself created. I'm not entirely sure how I'm supposed to feel about this. In fairness, it's most likely a reflection of the wider codebase, with the code suggestion following the existing styling. In addition, the code suggestion made to resolve the security vulnerability, which I accepted automatically, failed compilation. This was detected almost straight away, as it triggered another run of the GitHub Action. It turned out simpler to fix the compilation error manually and add it as a commit to the pull request. I'm not sure why this was the case, but I feel confident it will be fixed soon.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For Java transformations, this now provides an alternative approach to using the agent in the IDE. Using the transform agent in the IDE feels more synchronous, as you watch the progress taking place in the Transformation Hub terminal. I really enjoyed the asynchronous nature of the GitHub integration. I can simply create a new issue, use a label to assign it to the transform agent, and then wait for the email notification that it is complete. This frees up time to carry on with other value-add activities.&lt;/p&gt;

&lt;p&gt;When running the transform agent in the IDE, you are also responsible for creating a separate branch, committing the changes to this branch, and raising the PR, all of which is taken care of for you with the GitHub integration. Even better, once merged into the main branch, the full conversation history is still maintained in the closed Pull Request for transparency.&lt;/p&gt;

&lt;p&gt;The project I used can be found here - &lt;a href="https://github.com/mlewis7127/bicycle-licence-GH-integration" rel="noopener noreferrer"&gt;https://github.com/mlewis7127/bicycle-licence-GH-integration&lt;/a&gt;. If you have a requirement to transform old Java code projects, I would definitely recommend checking out this integration, and see how it fits into your developer workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Biography
&lt;/h2&gt;

&lt;p&gt;As Chief AWS Architect at IBM in the UK, I am responsible for growing the AWS capability and community within one of the fastest growing AWS consulting partners globally. This often gives me the opportunity to try out the latest features in preview before they go into general availability. You'll often find me blogging about my experience, but please reach out if there are services you'd like to know more about.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tutorial</category>
      <category>aws</category>
    </item>
    <item>
      <title>Amazon Q Developer transform for .NET</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Mon, 20 Jan 2025 10:19:13 +0000</pubDate>
      <link>https://forem.com/aws-heroes/amazon-q-developer-transform-for-net-5c98</link>
      <guid>https://forem.com/aws-heroes/amazon-q-developer-transform-for-net-5c98</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A fantastic use case for AI Coding Assistants is upgrading applications to modern and supported versions of programming languages, libraries and frameworks. All too often, engineering effort is spent building new features, whilst existing applications are left untouched, until they become unsupported with all kinds of inherent vulnerabilities.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Amazon Q Developer&lt;/code&gt; introduced a code transformation agent for Java when launched in preview back in November 2023. This agent has undergone multiple iterations and become more accurate and powerful since then. On 3rd December 2024 at AWS re:Invent, the public preview of new transformation capabilities for .NET, mainframe, and VMware workloads was announced.&lt;/p&gt;

&lt;p&gt;In this blog post, I take a look at the .NET transformation capability to get a better understanding of how it works. This feature is available in the &lt;code&gt;Visual Studio IDE&lt;/code&gt;. However, &lt;code&gt;Visual Studio for Mac&lt;/code&gt; has been retired, which gave me an opportunity to try out the Amazon Q Developer transformation web experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why port .NET Framework?
&lt;/h2&gt;

&lt;p&gt;.NET Framework is the original implementation of .NET by Microsoft. It supports running websites, services, desktop apps and more, but only on Windows.&lt;/p&gt;

&lt;p&gt;.NET (sometimes called .NET Core) is a more modern, open-source version of .NET that can run on multiple operating systems including Windows, Linux and MacOS. The reasons to continue using .NET Framework are specific and limited according to Microsoft, and relate to use cases where your application is using third-party libraries or NuGet packages or .NET Framework technologies that are not available for .NET.&lt;/p&gt;

&lt;p&gt;Where possible you should look to utilise .NET.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started with web experience
&lt;/h2&gt;

&lt;p&gt;To get started with the web experience, I had to subscribe to &lt;code&gt;Amazon Q Developer&lt;/code&gt; from my management (root) account, as it is not currently possible to use a delegated administrator account. Note that you need to be in the &lt;code&gt;us-east-1&lt;/code&gt; region at this point.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkuthgu1ah2b962wsgnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkuthgu1ah2b962wsgnb.png" alt="Amazon Q Developer Start Page" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This takes you to a &lt;code&gt;Getting started with Amazon Q&lt;/code&gt; page. I was prompted to switch back to my home region (&lt;code&gt;eu-west-2&lt;/code&gt;) which then automatically connected my &lt;code&gt;AWS Organization&lt;/code&gt; instance of &lt;code&gt;IAM Identity Center&lt;/code&gt; to &lt;code&gt;Amazon Q&lt;/code&gt;. At this point, I clicked the button to "subscribe" and added a user from &lt;code&gt;IAM Identity Center&lt;/code&gt;. It is also possible to add a Group instead of an individual user or users.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu34d1wcqn9bdk9rfrpiv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu34d1wcqn9bdk9rfrpiv.png" alt="Connect to IAM Identity Center" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This successfully created an &lt;code&gt;Amazon Q Developer&lt;/code&gt; Pro subscription for my chosen user. At this point, I clicked on the button which took me to the &lt;code&gt;Amazon Q Developer&lt;/code&gt; console to complete the setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1eyese8uxyhn8furwzv3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1eyese8uxyhn8furwzv3.png" alt="Create an Amazon Q Subscription" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Opening up &lt;code&gt;Amazon Q Developer&lt;/code&gt; in the AWS Console gave me the option to click on a settings button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh8ndltjpzpb9hdeu37r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh8ndltjpzpb9hdeu37r.png" alt="Amazon Q Developer Console Settings" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the settings, I enabled the transform setting, which is required to give access to the transformation web experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc33ddi92qhh6z0ft0i9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc33ddi92qhh6z0ft0i9.png" alt="Enable Amazon Q Transform Settings" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, I navigated in a browser window to &lt;a href="https://transform.developer.q.aws.com/" rel="noopener noreferrer"&gt;https://transform.developer.q.aws.com/&lt;/a&gt; and signed in using &lt;code&gt;IAM Identity Center&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyufggmv4gm6tvp5ojqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyufggmv4gm6tvp5ojqc.png" alt="Sign in using IAM Identity Center" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once logged in, I was presented with the option of creating my first transformation job with &lt;code&gt;Amazon Q&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gqyvh8wuvht95jyz21q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gqyvh8wuvht95jyz21q.png" alt="Create first transformation job with Amazon Q" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running transformation for .NET
&lt;/h2&gt;

&lt;p&gt;Once I asked Q to create a transformation job, I was given the choice of the type of transformation to work on. There are three options available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modernize .NET applications to cross-platform .NET&lt;/li&gt;
&lt;li&gt;Migrate VMware applications to Amazon EC2&lt;/li&gt;
&lt;li&gt;Perform mainframe modernization (z/OS to AWS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqrbicb9sh09z0j2fmu9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqrbicb9sh09z0j2fmu9.png" alt="Choose transformation type" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I chose the option to modernise a .NET application. &lt;code&gt;Amazon Q&lt;/code&gt; then populated a number of details about the .NET modernisation project. I could change these details, or in this case, confirm they are correct and let &lt;code&gt;Amazon Q&lt;/code&gt; create the job itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcp7ijz0t9t3fhw8dkfx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcp7ijz0t9t3fhw8dkfx.png" alt="Confirm .NET transformation job" width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point I had to connect &lt;code&gt;Amazon Q&lt;/code&gt; to my source repository for the project I want to transform. I have forked the &lt;a href="https://github.com/leitosama/SharpZeroLogon" rel="noopener noreferrer"&gt;SharpZeroLogon&lt;/a&gt; GitHub repository to my own profile. This is an archived repository that was a rework of the NCC Group's tool specifically for .NET Framework 3.5.&lt;/p&gt;

&lt;p&gt;The connection is made using &lt;code&gt;AWS CodeConnections&lt;/code&gt;. Within an AWS account, you use &lt;code&gt;CodeConnections&lt;/code&gt; to create a connection to a third-party Git-based source provider. Currently, the only supported provider is GitHub. To create a connection, you need to go to &lt;code&gt;AWS CodeArtifact&lt;/code&gt;, click on &lt;code&gt;Settings&lt;/code&gt; and then &lt;code&gt;Connections&lt;/code&gt;. I am using GitHub, which installs the &lt;code&gt;AWS Connector for GitHub&lt;/code&gt; as an application in GitHub. You can configure the connector with access to only specific repositories.&lt;/p&gt;
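
&lt;p&gt;If you prefer to script this step, a connection can also be created with the AWS CLI. This is a sketch (it assumes AWS credentials are configured, and the connection name is illustrative); the connection is created in a &lt;code&gt;PENDING&lt;/code&gt; state and still needs the handshake completing in the console before it becomes &lt;code&gt;AVAILABLE&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: requires AWS credentials; connection name is illustrative
aws codeconnections create-connection \
  --provider-type GitHub \
  --connection-name q-dotnet-transform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;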

&lt;p&gt;To set up the Amazon Q transformation job, you first specify the account number for the AWS account where the connection is configured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg1h1aafsawo6cvc2lm9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg1h1aafsawo6cvc2lm9.png" alt="Select AWS Account ID" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You then specify the &lt;code&gt;AWS CodeConnection&lt;/code&gt; ARN.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqdiroy5nof01avm8jpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqdiroy5nof01avm8jpv.png" alt="AWS CodeConnection ARN" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You then go back into the AWS console to approve this connection request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxc1g3ycjfwkf645auio4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxc1g3ycjfwkf645auio4.png" alt="Approve Connection Request" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the connection request has been approved, you click on &lt;code&gt;Send to Q&lt;/code&gt;, which will allow &lt;code&gt;Amazon Q&lt;/code&gt; to access the repositories in the connected account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk56ptf6nz5646tcrj8z7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk56ptf6nz5646tcrj8z7.png" alt="Create Connector Send to Amazon Q" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Amazon Q&lt;/code&gt; analyses all of the repositories it has access to, in order to discover which ones run a .NET Framework application that is capable of being transformed. The &lt;code&gt;Amazon Q Developer&lt;/code&gt; transformation capabilities for .NET support porting C# code projects of the following types: console application, class library, unit tests, web API, Windows Communication Foundation (WCF) service, and business logic layers of Model View Controller (MVC) and Single Page Application (SPA). Types of jobs that &lt;code&gt;Amazon Q&lt;/code&gt; currently cannot transform include WebUI, SQLServer and ASP.NET.&lt;/p&gt;

&lt;p&gt;In my example, the &lt;code&gt;SharpZeroLogin&lt;/code&gt; repository has been detected as a supported project, and I am given the option to specify a target version (.NET 8.0). I can also specify the name of the new branch that will be created or keep the default.&lt;/p&gt;

&lt;p&gt;Note that the web experience gives you the option of carrying out a .NET transform of multiple repositories. This is something not available within the IDE, which only allows one .NET solution at a time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a1zi97degiu30zwvinc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a1zi97degiu30zwvinc.png" alt="Confirm Repository to Transform" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Amazon Q&lt;/code&gt; now automatically ports the selected .NET application to the target version following a transformation plan it has created. It commits all of the transformed code to a new branch in my GitHub repository, preserving the original source code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lae5ijly53zpkmdt02a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lae5ijly53zpkmdt02a.png" alt="Job Completed Successfully" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can click on the Dashboard tab to monitor the progress. In this case, I am told that the application has been transformed with no issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmts7vdavps0l348pvqx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmts7vdavps0l348pvqx6.png" alt="Job Completed Dashboard" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can now go to my GitHub repository and look at the new branch that has been created. I can also view the diffs to see what changes have been made. In the file below, we can see that the target framework version has been updated from "3.5" to "net8.0".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ftey1kf583vh0a3crfx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ftey1kf583vh0a3crfx.png" alt="View Code Diffs" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The goal of this blog post is to show you how simple it is to get up and running with the new &lt;code&gt;Amazon Q Developer&lt;/code&gt; transformation web experience. If you have existing .NET Framework applications that you want to port to .NET to gain performance improvements and cross-platform support, it is definitely worth giving this feature a go.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>ai</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Amazon GuardDuty Extended Threat Detection</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Mon, 02 Dec 2024 15:19:53 +0000</pubDate>
      <link>https://forem.com/aws-heroes/amazon-guardduty-extended-threat-detection-3l72</link>
      <guid>https://forem.com/aws-heroes/amazon-guardduty-extended-threat-detection-3l72</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I was lucky to get the opportunity to try out the new "Extended Threat Detection" feature for &lt;code&gt;Amazon GuardDuty&lt;/code&gt; whilst it was in beta. With the announcement of this new feature, I wanted to share my experience and the value it brings. Before jumping in, let's start by providing some background to &lt;code&gt;Amazon GuardDuty&lt;/code&gt; and the benefits it provides, for those who may not be familiar with the service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Amazon GuardDuty?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Amazon GuardDuty&lt;/code&gt; is a threat detection service that continuously monitors, analyses and processes AWS logs and other data sources for malicious and abnormal activity. It uses its own internal feeds, alongside other intelligence feeds from CrowdStrike and Proofpoint to detect the latest threats and attack techniques. As someone who has worked for many years in heavily regulated industries processing sensitive data sets in areas of critical national infrastructure, I have been a huge advocate of &lt;code&gt;Amazon GuardDuty&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In modern cloud environments, the quantity of logs and events captured is enormous. When it comes to threat detection, you require real-time and accurate visibility into this data. When your workloads reside on AWS, shipping this data externally to another cloud provider or back on-premises adds significant egress costs and latency. This is why I always look to use GuardDuty, so the data can be analysed at source, and threat detection can be consumed as a managed service.&lt;/p&gt;

&lt;p&gt;GuardDuty uses a baseline of foundational data sources, and processes these logs using independent streams of data so it does not affect existing configurations. These foundational data sources are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CloudTrail&lt;/strong&gt; - showing a history of AWS API calls and management events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VPC Flow Logs&lt;/strong&gt; - showing details of IP traffic going to and from network interfaces attached to your EC2 instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Route 53 Resolver DNS logs&lt;/strong&gt; - showing a history of DNS queries.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On top of this baseline, you have the option to enable protection plans, which are specialised features within GuardDuty that provide enhanced threat detection for specific AWS services, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Protection&lt;/strong&gt; - helps detect risks such as data exfiltration and destruction in your S3 buckets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EKS Protection&lt;/strong&gt; - monitors EKS audit logs to identify potential security issues such as unauthenticated actor attempts to collect secrets or AWS credentials, and suspicious container deployments with images not commonly used in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Runtime Monitoring&lt;/strong&gt; - observes and analyses operating-system level, networking, and file events to help detect potential threats for EC2 instances and container workloads in EKS and ECS including Fargate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Malware Protection for EC2&lt;/strong&gt; - detects the potential presence of malware by scanning the EBS volumes attached to EC2 instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Malware Protection for S3&lt;/strong&gt; - detects the potential presence of malware by scanning newly uploaded objects in selected S3 buckets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;RDS Protection&lt;/strong&gt; - profiles and monitors access activity to Aurora databases in your AWS account without impacting database performance, to detect potential threats such as high severity brute force attacks, suspicious logins, and access by known threat actors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda Protection&lt;/strong&gt; - identifies potential security threats when an AWS Lambda function is invoked in your AWS environment by monitoring Lambda network activity logs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Integration with AWS Services
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Amazon GuardDuty&lt;/code&gt; is tightly integrated with other AWS services to enable fast responses.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Detective
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Amazon Detective&lt;/code&gt; ingests GuardDuty findings and allows you to quickly analyse and investigate these events.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Security Hub
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;AWS Security Hub&lt;/code&gt; is a cloud security posture management (CSPM) service. It collects findings from the security services enabled across your AWS accounts, such as intrusion detection findings from GuardDuty, vulnerability scans from Inspector, and sensitive data identification findings from Macie. It runs continuous and automated account and resource-level configuration checks against the controls in the AWS Foundational Security Best Practices standard and other supported industry best practices and standards such as NIST and PCI DSS. The screenshot below shows GuardDuty findings in Security Hub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd33f44kj8crcprmw1cbv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd33f44kj8crcprmw1cbv.png" alt="Security Hub Findings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon EventBridge
&lt;/h3&gt;

&lt;p&gt;GuardDuty creates an event whenever a new finding occurs. These are routed to the default event bus in &lt;code&gt;Amazon EventBridge&lt;/code&gt;. You can configure an EventBridge rule with a pattern that listens for GuardDuty findings in order to automatically respond to these events.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzucsywb8vdtis10lchc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzucsywb8vdtis10lchc.png" alt="Amazon EventBridge Rule"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Common use cases include sending automatic alerts for high severity findings, or automating remediation (e.g. disabling a compromised access key).&lt;/p&gt;
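&lt;p&gt;As a minimal sketch, an EventBridge event pattern that matches GuardDuty findings with a severity of 7.0 or above might look like this (the threshold is my own choice for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "severity": [{ "numeric": [ "&gt;=", 7 ] }]
  }
}
&lt;/code&gt;&lt;/pre&gt;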

&lt;h2&gt;
  
  
  Extended Threat Detection with Attack Sequences
&lt;/h2&gt;

&lt;p&gt;GuardDuty Extended Threat Detection is a new feature of &lt;code&gt;Amazon GuardDuty&lt;/code&gt; that uniquely identifies attack sequences spanning multiple AWS data sources and resources within a 24-hour time window within an AWS account.&lt;/p&gt;

&lt;p&gt;This addresses the risk that an attack may consist of a number of related suspicious activities over a period of time. Each of these activities may generate its own individual finding, but at a lower severity, acting as a weak signal that is not seen as presenting a real threat. When these weak signals are considered together, and the sequence of activities aligns with more suspicious behaviour, GuardDuty will generate an attack sequence finding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm646c21idsquxy1x47i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm646c21idsquxy1x47i.png" alt="GuardDuty Summary with Attack Sequence"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, we have triggered a finding of type &lt;code&gt;AttackSequence:IAM/CompromisedCredentials&lt;/code&gt;. Looking at the summary of findings, we can see that this has been given a critical severity level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprbeyk1ahsj5ung7agxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprbeyk1ahsj5ung7agxl.png" alt="GuardDuty Findings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking into the finding and selecting "View details" brings up the overview page. This provides a compact view of the attack sequence details, including signals, MITRE tactics, and potentially impacted resources. In the screenshot below, (1) shows the signals, (2) shows the MITRE tactics, and (3) shows the indicators.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk38baspm2oaprjfbonv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk38baspm2oaprjfbonv.png" alt="Attack Sequence Overview Page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Signals section displays a timeline of the events involved in the attack sequence. Each individual signal could be an API activity or a finding that GuardDuty used to detect the attack sequence. Each signal that is a GuardDuty finding has its own severity level and value assigned to it. In the GuardDuty console, you can select each signal to view the associated details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt5h8y7toa2bzqh6tn4h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt5h8y7toa2bzqh6tn4h.png" alt="Signals Timeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of my favourite aspects of the new feature is the mapping of the finding to both MITRE ATT&amp;amp;CK(™️) tactics and techniques; GuardDuty uses the MITRE ATT&amp;amp;CK framework to add context to the entire attack sequence. This "compromised credentials" attack sequence comprised three MITRE ATT&amp;amp;CK tactics, shown below. The colours GuardDuty uses for the threat purposes pursued by the threat actor align with the colours that indicate the critical, high, medium, and low finding severity levels.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F568k1i6tt4xx9689o11m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F568k1i6tt4xx9689o11m.png" alt="MITRE tactics"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The indicators section shows observed data that matches the pattern of a security issue, and is the reason why this collection of signals was identified as an attack sequence. For example, the "High risk API" indicator is flagged because the &lt;code&gt;cloudtrail:DeleteTrail&lt;/code&gt; and &lt;code&gt;iam:CreateUser&lt;/code&gt; API calls were made, which are actions commonly used by threat actors.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19r09drugg5rwf1g3tch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19r09drugg5rwf1g3tch.png" alt="Indicators"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I set up a rule in EventBridge to capture an attack sequence finding. A small subset of the JSON event message is shown below. This message also provides details of the associated signals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b496q230gxxbqktusxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1b496q230gxxbqktusxw.png" alt="Sample GuardDuty Finding"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Overall, this is a fantastic new feature in GuardDuty and I am excited to see more attack sequence detections being added over time.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>security</category>
      <category>aws</category>
    </item>
    <item>
      <title>Accelerating builds with Amazon Q Developer Agent</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Thu, 21 Nov 2024 12:07:45 +0000</pubDate>
      <link>https://forem.com/aws-heroes/accelerating-builds-with-amazon-q-developer-agent-2dd5</link>
      <guid>https://forem.com/aws-heroes/accelerating-builds-with-amazon-q-developer-agent-2dd5</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;I have been using the &lt;code&gt;Amazon Q Developer Agent for software development&lt;/code&gt; for some time, looking at ways to get the agent to consistently generate more accurate output. This has highlighted the importance of the prompt as the primary way of instructing the agent about the type and style of content to be generated. I have found that the more precise the guidance the prompt contains, the better the output. However, crafting these prompts can be time-consuming. This leads to another challenge: ensuring that prompts are used in a consistent fashion across all engineers in a team.&lt;/p&gt;

&lt;p&gt;In this post we look at a simple way of introducing “prompt templates” as a better way of ensuring the accuracy of content generated. The goal is to generate a complete project using a number of AWS services written in Python, with the infrastructure managed and provisioned by Terraform. This is being carried out with limited knowledge of these languages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bootstrapping our project
&lt;/h2&gt;

&lt;p&gt;We start off by bootstrapping the project with just 2 files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;requirements.md&lt;/code&gt; - a markdown file containing our requirements in terms of programming languages and other guidance. This acts as our prompt template&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;architecture.puml&lt;/code&gt; - a Plant UML diagram with the design of the application we want to build&lt;/li&gt;
&lt;/ul&gt;
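&lt;p&gt;As an illustration, a &lt;code&gt;requirements.md&lt;/code&gt; prompt template might capture guidance such as the following (the wording here is my own sketch, not the exact file used in this post):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Project Requirements

- All application code must be written in Python
- All infrastructure must be defined in Terraform following standard best practice
- Every source file must include a header comment describing its purpose
- Use meaningful variable names throughout
- Include unit tests for all functions
&lt;/code&gt;&lt;/pre&gt;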

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh0gteqmhwsrjvd4mz97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh0gteqmhwsrjvd4mz97.png" alt="Initial Project Setup" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, we can now pass in a simple prompt to the &lt;code&gt;Amazon Q Developer Agent&lt;/code&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Create a complete project implementation including source code, unit tests and a requirements.txt file that meets all of the project requirements as set out in the requirements.md file. Use Terraform following standard best practice to define and deploy all services as detailed in the Plant UML diagram format below:”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I have not found an accurate way of getting Amazon Q Developer to recognise the &lt;code&gt;.puml&lt;/code&gt; file, so for the moment, I copy and paste the Plant UML diagram content into the prompt:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppxw8bvet4hdqcy0wjae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppxw8bvet4hdqcy0wjae.png" alt="Project Prompt" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Amazon Q Developer Agent for software development&lt;/code&gt; plans out all the steps it needs to take to meet the request. In the very first step, the agent states:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I need to open requirements.md to review the project requirements. Based on Plant UML diagram, I can see it's an AWS serverless architecture with an API Gateway, Lambda functions and DynamoDB. I'll first create the necessary project structure&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The complete summary of changes is shown in the screenshot below. It is interesting to see that the agent is clever enough to recognise that some of the imports are not working correctly; it continually iterates after each change until it is confident that the requirements have been met.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcusy3957smk9jxom2sgq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcusy3957smk9jxom2sgq.png" alt="Summary of Changes" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I accept the proposed code changes, and now I have a full project structure that has been created for me, with a &lt;code&gt;README.md&lt;/code&gt; file that gives me more details about the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felt98uh5s8ny2e9u41wm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felt98uh5s8ny2e9u41wm.png" alt="Project Files" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the application
&lt;/h2&gt;

&lt;p&gt;Next we want to get the application up and running. I recorded a video to show this working. There were no code changes needed. I just needed to execute a Python script to package up the source code for each Lambda function into a zip file. I also used the inline chat functionality of &lt;code&gt;Amazon Q Developer&lt;/code&gt; to output the API endpoint URL so I could run some cURL commands.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/v9roxvkZQR8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Design versus Build Comparison
&lt;/h2&gt;

&lt;p&gt;The only details of the AWS services that made up the application were those shown in the Plant UML diagram. What I found most impressive was how accurately the software development agent created the infrastructure to match the design.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The design shows 4 individual AWS Lambda functions called CreateItem, ReadItem, UpdateItem and DeleteItem. These are the exact names of the Lambda functions created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The design shows the HTTP verbs (POST, GET, DELETE) and the URL Path Notation. These have been created in API Gateway exactly as they are shown in the diagram.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The design shows a DynamoDB table called Item with a partition key of &lt;code&gt;id&lt;/code&gt;. This has also been created exactly as shown in the diagram.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
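&lt;p&gt;The design described above can be sketched in Plant UML roughly as follows (a simplified reconstruction based on the points above, not the exact &lt;code&gt;architecture.puml&lt;/code&gt; source; the verb used for the update route is my own assumption):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;@startuml
component "API Gateway" as api
component "CreateItem" as create
component "ReadItem" as read
component "UpdateItem" as update
component "DeleteItem" as delete
database "Item\n(partition key: id)" as table

api --&gt; create : POST /items
api --&gt; read : GET /items/{id}
api --&gt; update : POST /items/{id}
api --&gt; delete : DELETE /items/{id}
create --&gt; table
read --&gt; table
update --&gt; table
delete --&gt; table
@enduml
&lt;/code&gt;&lt;/pre&gt;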

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5eoehyp3h4p00hh29ol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5eoehyp3h4p00hh29ol.png" alt="Design vs Build Comparison" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition, we can see that the code generated conforms to the specification set out in the requirements file, being written in Python with headers and comments and meaningful variable names.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhymlwksbbnysas315v28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhymlwksbbnysas315v28.png" alt="Create Item Lambda Function" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;
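&lt;p&gt;As a minimal sketch of what such a function looks like (my own illustrative code, assuming a table named &lt;code&gt;Item&lt;/code&gt;, not the exact code the agent generated), a &lt;code&gt;CreateItem&lt;/code&gt; handler can separate the pure item-building logic from the DynamoDB call:&lt;/p&gt;

```python
import json
import uuid


def build_item(body):
    """Construct the DynamoDB item from the request body, assigning a new id."""
    return {'id': str(uuid.uuid4()), **body}


def lambda_handler(event, context):
    """Hypothetical CreateItem handler: parse the request, store the item, return 201."""
    import boto3  # imported lazily so build_item stays testable without AWS access

    body = json.loads(event.get('body') or '{}')
    item = build_item(body)
    boto3.resource('dynamodb').Table('Item').put_item(Item=item)
    return {'statusCode': 201, 'body': json.dumps(item)}
```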

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I am really excited to see the potential that exists today with the approach taken here. I love seeing how visual design artefacts can be taken and converted directly into the underlying infrastructure on AWS. This helps to provide a seamless transition between design and build. It also solves another challenge of adopting a prompt template that can be used consistently by engineers within a team and across teams in an organisation. This approach allows for the consistent generation of code artefacts that meet organisation guidelines, by ensuring that project repositories are bootstrapped when created with a standard requirements file.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>genai</category>
      <category>tutorial</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Amazon Q Developer Agent for Code Transformation</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Mon, 18 Nov 2024 09:56:30 +0000</pubDate>
      <link>https://forem.com/aws-heroes/amazon-q-developer-agent-for-code-transformation-1bd0</link>
      <guid>https://forem.com/aws-heroes/amazon-q-developer-agent-for-code-transformation-1bd0</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;Here in the UK, the &lt;a href="https://www.ncsc.gov.uk/" rel="noopener noreferrer"&gt;National Cyber Security Centre (NCSC)&lt;/a&gt; is the 'technical authority' for cyber incidents with a view that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;patching remains the single most important thing you can do to secure your technology, and is why applying patches is often described as 'doing the basics'&lt;/strong&gt; &lt;a href="https://www.ncsc.gov.uk/blog-post/the-problems-with-patching" rel="noopener noreferrer"&gt;blog post&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;All too often, all engineering effort is spent building new features, and existing applications are left untouched until they are running out-of-date versions of libraries and frameworks with all kinds of inherent vulnerabilities. This is where the &lt;code&gt;Amazon Q Developer Agent for Code Transformation&lt;/code&gt; comes into its own. In this article, we take the agent for a spin, looking to upgrade a Java 11 application to Java 17.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bicycle Licence Application
&lt;/h2&gt;

&lt;p&gt;In March 2020, I presented an online AWS tech talk demonstrating features of &lt;code&gt;Amazon QLDB&lt;/code&gt; in an application written in Java 11 and Spring Boot. In the past couple of months, I have replaced &lt;code&gt;Amazon QLDB&lt;/code&gt; with &lt;code&gt;Amazon DynamoDB&lt;/code&gt;, and made sure that it is simple to start up and run as a Java 11 application. This is now a great project to test out the code transformation agent. At around 750 lines of code, it is more complex than a basic "Hello World" application, but not so large that it is difficult to understand. You can clone this project yourself &lt;a href="https://github.com/mlewis7127/bicycle-licence-ui-master" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the Java 11 application
&lt;/h2&gt;

&lt;p&gt;All screenshots are taken using &lt;code&gt;IntelliJ IDEA&lt;/code&gt;. The first step is to clone the repository and open it as a new project. Make sure the project structure is set up to use version 11 of the Java SDK:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnqsd7qotpdjjtcdk2et.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnqsd7qotpdjjtcdk2et.png" alt="Project Structure Java 11" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application requires a DynamoDB table with a Global Secondary Index to be set up in the &lt;code&gt;eu-west-1&lt;/code&gt; region, and a CloudFormation template is provided that you can use to set this up. I couldn't remember the exact command to use, but luckily &lt;code&gt;Amazon Q on the command line&lt;/code&gt; came to the rescue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famk70cx5752l58l4rndd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famk70cx5752l58l4rndd.png" alt="Amazon Q Developer on the command line" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;
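&lt;p&gt;As a rough sketch of what such a template looks like (the real template in the repository may differ in resource names, attributes and capacity settings), a DynamoDB table with a Global Secondary Index can be declared like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09'
Resources:
  BicycleLicenceTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
        - AttributeName: email
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      GlobalSecondaryIndexes:
        - IndexName: email-index
          KeySchema:
            - AttributeName: email
              KeyType: HASH
          Projection:
            ProjectionType: ALL
&lt;/code&gt;&lt;/pre&gt;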

&lt;p&gt;Run &lt;code&gt;mvn clean&lt;/code&gt; and then &lt;code&gt;mvn compile&lt;/code&gt;, which will compile all of the code successfully. You can then launch the application using a new Spring Boot configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn628wsmtq8l0vmc4tih1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn628wsmtq8l0vmc4tih1.png" alt="Spring Boot configuration" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a new Bicycle Licence record
&lt;/h2&gt;

&lt;p&gt;Once launched, the application will be running at &lt;code&gt;http://localhost:8080/&lt;/code&gt;. Opening this in a browser window will render the landing page for the application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibjn80h6f7w21k91dywx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibjn80h6f7w21k91dywx.png" alt="Bicycle Licence UI" width="612" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, you can start by creating a new fictitious bicycle licence by specifying a name, telephone number and email address:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxv02f21vvyorkjrl1yj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxv02f21vvyorkjrl1yj.png" alt="Bicycle Licence UI Create" width="612" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once created, you can go into the view/update licence tab and add some points to the licence:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ygj88acpjgtz5b45w7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ygj88acpjgtz5b45w7r.png" alt="Bicycle Licence UI Update" width="612" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, you can click in the history tab and view all the events that have taken place against that record:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvm5unzyio53d5jrk4576.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvm5unzyio53d5jrk4576.png" alt="Bicycle Licence UI History" width="612" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have this up and running as a Java 11 application, we want to migrate it to Java 17.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the code transformation agent
&lt;/h2&gt;

&lt;p&gt;The first step in running a code transformation is to type &lt;code&gt;/transform&lt;/code&gt; into the &lt;code&gt;Amazon Q&lt;/code&gt; chat window. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysplixg0ljfycq42621a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysplixg0ljfycq42621a.png" alt="Transform" width="612" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here, &lt;code&gt;Amazon Q&lt;/code&gt; will analyse the open workspace to identify whether there is a module running Java 8 or Java 11 that can be transformed. It will automatically recognise the &lt;code&gt;bicycle-licence-ui&lt;/code&gt; module and pre-select it for you. We can simply confirm that we want to transform this module to JDK 17.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x9l0x0f7ayhg54eap2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x9l0x0f7ayhg54eap2g.png" alt="Welcome to Code Transformation" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is also an option for the agent to build your module with or without running unit tests. The default is to run tests, and a number of unit tests are included as part of the project to show this feature working.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Amazon Q&lt;/code&gt; first builds your module locally, downloading all dependencies and running the unit tests. Assuming this is successful, &lt;code&gt;Amazon Q&lt;/code&gt; then scans the project files and gets ready to start the job. To do this, the project artifacts are uploaded to a managed secure build environment on AWS.&lt;/p&gt;

&lt;p&gt;Once the files have been uploaded, the transformation job is accepted and ready to start. The application is built again using Java 11, and the code is then analysed to generate a transformation plan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0ign2vkisr3qc7utww0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0ign2vkisr3qc7utww0.png" alt="Code Transformation Plan" width="612" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the plan used by the agent to transform your code. The whole concept of an agent is that it can run autonomously to complete a complex task, taking actions based upon its findings as it progresses, without direct human intervention. The agent can make use of RAG and custom models, all of which are abstracted away from the end user. It does mean that the final code updates may end up slightly different from the initial plan, as the agent continues to re-evaluate its progress after each step.&lt;/p&gt;

&lt;p&gt;The most impressive feature for me is watching the agent apply the updated dependencies and code changes, and then build the module in a Java 17 environment. You can see from the screenshot below that each time updated dependencies were added, the application failed to compile. When this happened, the agent was able to access other underlying models to work out what further changes were required, until eventually the application could be built successfully in Java 17.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5j9xb2l1raew8xhaqrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5j9xb2l1raew8xhaqrd.png" alt="Code Transformation Java 17 Build" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After just over 12 minutes, the transformation job was complete, and it was time to review the code diff and see the proposed changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff20woq5qmxsgap7ugo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff20woq5qmxsgap7ugo1.png" alt="Code Transformation Java 17 Complete" width="612" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking "view diff" opens up a new window highlighting the files that have been modified:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz882egu68ws0fgqgugvy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz882egu68ws0fgqgugvy.png" alt="View Diff" width="540" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can click on each file to see its changes. In the example below, we can see that a new method has been added to the &lt;code&gt;BicycleLicenceDynamoDBRepository.java&lt;/code&gt; class. This is because the Spring Data &lt;code&gt;CrudRepository&lt;/code&gt; interface that this class implements had this new method added to it in the version upgrade.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4qa3xismfuisk87ji39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4qa3xismfuisk87ji39.png" alt="New Interface Method Added" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also view the Code Transformation Summary provided by Amazon Q. This provides details of how many lines of code were analysed, how many files have been changed, how many planned dependencies have been added and so on. It also provides a build log summary. In this case, I can see that all source files were successfully compiled and 6 tests ran without any failures, errors or skips.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeifrxh77hgb2f37jtjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeifrxh77hgb2f37jtjs.png" alt="Code Transformation Summary" width="612" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are now in a position to test the application after it has been transformed to Java 17. After accepting the code changes, we run &lt;code&gt;mvn clean&lt;/code&gt; and &lt;code&gt;mvn compile&lt;/code&gt; and reload all project dependencies. We also need to make sure the project structure, along with the Run Configuration, is set up to use version 17 of the Java SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running the Java 17 application
&lt;/h2&gt;

&lt;p&gt;After making the previous changes, we run the application and can interact with the bicycle licence with no issues. However, one thing we notice is a warning message in the console that the &lt;code&gt;AWS SDK for Java 1.x&lt;/code&gt; has entered maintenance mode and will reach end of support on December 31, 2025.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49smvw8qa1zzvzxc1s7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49smvw8qa1zzvzxc1s7f.png" alt="AWS SDK for Java 1.x" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's a little disappointing that the AWS SDK library was not updated as part of the code transformation, although the changes involved are significant: the AWS SDK for Java 2.x is a major rewrite of the 1.x code base. Out of curiosity, I wanted to see how the software development agent would handle this.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS SDK v1 to v2 upgrade with software development agent
&lt;/h2&gt;

&lt;p&gt;With the AWS SDK for Java v1 already in maintenance mode and not updated by the code transformation agent, I wanted to see how the software development agent would handle the upgrade. I typed &lt;code&gt;/dev&lt;/code&gt; in the chat window, and entered the simple prompt of "Rewrite this application to use the AWS SDK for Java 2.x". The agent analysed the application and then worked through a whole set of changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnmb7jbes7x5vkssercs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnmb7jbes7x5vkssercs.png" alt="Developer Agent Summary of Changes" width="612" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I really liked the fact that the developer agent created a migration plan which it then used iteratively to update all dependencies. This was a markdown file as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4kpte9bbvjkte94w2rf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4kpte9bbvjkte94w2rf.png" alt="Migration Plan" width="590" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although the developer agent came up with a large number of correct changes, there were still some errors left behind. This really helped to highlight the differences between the two agents currently available. The two key points for me are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The software development agent does not allow you to test the generated code before it is accepted. This means that any issues such as compilation errors need to be fixed manually (supported by the chat interface), or passed back to the agent, consuming another code generation from your quota.&lt;/li&gt;
&lt;li&gt;The code transformation agent uploads your artifacts to a secure build environment, and will continually try to build and compile your code, fixing any errors as it goes along, until it is complete.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;Amazon Q Developer Agent for code transformation&lt;/code&gt; is a great example of the value that agents can bring, and why I believe they are the future for coding assistants. It handles the complex task of upgrading an application from an older version to a newer version autonomously, significantly reducing developer effort.&lt;/p&gt;

&lt;p&gt;The two main drawbacks I encountered are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The agent did not look to transform the outdated &lt;code&gt;AWS SDK for Java 1.x&lt;/code&gt; libraries to the latest &lt;code&gt;AWS SDK for Java 2.x&lt;/code&gt; libraries&lt;/li&gt;
&lt;li&gt;The target version is currently Java 17, which is now dated itself. &lt;code&gt;Amazon Corretto 17&lt;/code&gt; was released in September 2021, whilst &lt;code&gt;Amazon Corretto 21&lt;/code&gt; followed in September 2023 and &lt;code&gt;Amazon Corretto 23&lt;/code&gt; in September 2024. Hopefully we will see these more recent versions supported shortly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nevertheless, I really hope you try this agent out for yourself, and I'd love to hear your feedback.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>genai</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Top tips to pass the AWS AI Practitioner Exam</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Mon, 23 Sep 2024 10:27:15 +0000</pubDate>
      <link>https://forem.com/aws-heroes/top-tips-to-pass-the-aws-ai-practitioner-exam-2848</link>
      <guid>https://forem.com/aws-heroes/top-tips-to-pass-the-aws-ai-practitioner-exam-2848</guid>
<description>&lt;p&gt;I sat and passed the AWS Certified AI Practitioner exam last week. It’s currently still in beta, but that comes with the bonus of receiving an Early Adopter badge for anyone who is successful before Feb 15th, 2025.&lt;/p&gt;

&lt;p&gt;It’s classed as a Foundational level exam, which puts it alongside the AWS Cloud Practitioner exam. But don’t let that fool you: a good general knowledge of the AWS cloud helps, but you definitely need to brush up on AI/ML terminology and concepts to be confident of a pass going into the exam.&lt;/p&gt;

&lt;h2&gt;
  
  
  Study Guides
&lt;/h2&gt;

&lt;p&gt;Alongside the &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-ai-practitioner/AWS-Certified-AI-Practitioner_Exam-Guide.pdf" rel="noopener noreferrer"&gt;official exam guide&lt;/a&gt;, I used the following online resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stéphane Maarek's online course on &lt;a href="https://www.udemy.com/course/aws-ai-practitioner-certified/" rel="noopener noreferrer"&gt;Udemy&lt;/a&gt; via a work subscription&lt;/li&gt;
&lt;li&gt;The AWS official exam prep standard course on &lt;a href="https://explore.skillbuilder.aws/learn/course/internal/view/elearning/19554/exam-prep-standard-course-aws-certified-ai-practitioner-aif-c01" rel="noopener noreferrer"&gt;AWS Skill Builder&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;The AWS official Practice Question Sets on &lt;a href="https://explore.skillbuilder.aws/learn/course/19790/play/134393/official-practice-question-set-aws-certified-ai-practitioner-aif-c01-english" rel="noopener noreferrer"&gt;AWS Skill Builder&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are additional resources available on AWS Skill Builder for anyone with a subscription.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top Tips
&lt;/h2&gt;

&lt;p&gt;There is no shortcut to passing the exam other than ensuring you have covered all of the material called out in the exam guide. However, the following are my top five topics to know well, to help maximise your chances.&lt;/p&gt;

&lt;h3&gt;
  
  
  Machine Learning Lifecycle
&lt;/h3&gt;

&lt;p&gt;It is important to understand the various phases of the machine learning lifecycle and the order in which they take place e.g. feature engineering takes place before model training. There are more details provided in the &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/well-architected-machine-learning-lifecycle.html" rel="noopener noreferrer"&gt;Machine Learning Lens&lt;/a&gt; of the Well Architected Framework:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ah009ojzjsfajch46za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ah009ojzjsfajch46za.png" alt="ML Lifecycle Phases"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is also important to understand what AWS services are available to help in each phase such as AWS Glue and Amazon SageMaker Data Wrangler.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Model Selection and Customisation
&lt;/h3&gt;

&lt;p&gt;It is critical to understand the trade-offs in time, effort and complexity when selecting an appropriate model and ensuring it meets your requirements. The following is a high level summary:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwlx6oi5zov70hg2qtc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwlx6oi5zov70hg2qtc3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The simplest option is to use an AI/ML hosted service such as Amazon Comprehend or Amazon Rekognition. You will need to know about all of the AWS hosted AI/ML services and what their capabilities are at a high level e.g. text-to-speech and text translation.&lt;/p&gt;

&lt;p&gt;Following this you can use pre-trained foundation models available in services such as Amazon Bedrock and Amazon SageMaker JumpStart or you can bring a model into Amazon SageMaker.&lt;/p&gt;

&lt;p&gt;The recommended way to first customise a model is using prompt engineering. You need to understand about different types of prompting and the use of prompt templates. Remember that with prompt engineering, there is no change to the underlying model weights.&lt;/p&gt;
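&lt;p&gt;As a concrete illustration, a few-shot prompt template is just parameterised text. The classification task and example pairs below are invented purely for this sketch:&lt;/p&gt;

```python
# Minimal few-shot prompt template sketch. The task and the example
# pairs are invented for illustration; no model weights are changed.
TEMPLATE = """You are an assistant that classifies customer review sentiment.

Review: "I love this bike" -> positive
Review: "The brakes failed again" -> negative

Review: "{review}" ->"""

def build_prompt(review: str) -> str:
    """Fill the template slot with the text to classify."""
    return TEMPLATE.format(review=review)

print(build_prompt("Smooth ride and great value"))
```

&lt;p&gt;The few-shot examples steer the model at inference time only, which is what distinguishes prompt engineering from fine-tuning.&lt;/p&gt;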

&lt;p&gt;Retrieval-Augmented Generation (RAG) is another approach to improve responses, this time by referencing an external knowledge base that is outside of the LLM's training data. There are a variety of options in this space, most notably Amazon Bedrock Knowledge Bases, but it is possible to take advantage of vector databases such as Amazon OpenSearch or the &lt;code&gt;pgvector&lt;/code&gt; extension for PostgreSQL to roll your own RAG solution.&lt;/p&gt;

&lt;p&gt;Next up is fine-tuning, with a couple of different options dependent upon where your model is hosted. Remember that with fine-tuning you are changing the weights of the model. Amazon Bedrock supports fine-tuning and continued pre-training. There are important distinctions between these. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fine-tuning&lt;/strong&gt; relies on you providing your own &lt;strong&gt;labelled&lt;/strong&gt; data set to the model. Be aware that if you only provide instructions for a single task, the model may lose its more general purpose capability and experience catastrophic forgetting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continued pre-training&lt;/strong&gt; uses &lt;strong&gt;unlabelled&lt;/strong&gt; data to expose the model to raw domain text.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After you have created your custom model, you need to purchase provisioned throughput to be able to use it.&lt;/p&gt;

&lt;p&gt;Amazon SageMaker supports both domain-adaptation fine-tuning and instruction-based fine-tuning. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Domain adaptation&lt;/strong&gt; fine-tuning allows you to leverage pre-trained foundation models and adapt them to specific tasks using limited domain-specific data. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instruction-based&lt;/strong&gt; fine-tuning uses labeled examples to improve the performance of a pre-trained foundation model on a specific task. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The choice comes down to whether you want to train a model around domain data or to follow instructions and perform a specific task.&lt;/p&gt;

&lt;p&gt;Finally, the most time-consuming and costly approach is to create your own custom model with Amazon SageMaker.&lt;/p&gt;

&lt;h3&gt;
  
  
  Parameters
&lt;/h3&gt;

&lt;p&gt;It is useful to understand what parameters are available to you. These fall into two distinct categories.&lt;/p&gt;

&lt;p&gt;Hyperparameters are used to control the training process. The most common are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Epoch&lt;/strong&gt;: The number of iterations through the entire training dataset&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Size&lt;/strong&gt;: The number of samples processed before updating model parameters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning Rate&lt;/strong&gt;: The rate at which model parameters are updated after each batch&lt;/li&gt;
&lt;/ul&gt;
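&lt;p&gt;These three hyperparameters are easiest to see in a plain training loop. The following is a toy sketch (ordinary least-squares gradient descent, nothing AWS-specific) just to show where each one acts:&lt;/p&gt;

```python
def train(data, epochs, batch_size, learning_rate):
    """Toy linear fit y = w*x illustrating the three hyperparameters:
    each epoch is one pass over the data, parameters are updated once
    per batch, and the learning rate scales each update."""
    w = 0.0
    for _ in range(epochs):                        # Epoch: full pass over the dataset
        for i in range(0, len(data), batch_size):  # Batch: samples per parameter update
            batch = data[i:i + batch_size]
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= learning_rate * grad              # Learning rate: size of each step
    return w

# Data generated from y = 3x, so the fitted weight should approach 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
print(round(train(data, epochs=50, batch_size=2, learning_rate=0.05), 2))  # converges to 3.0
```

&lt;p&gt;Fewer epochs, larger batches or a smaller learning rate would all slow convergence towards the true weight, which is the trade-off these settings control.&lt;/p&gt;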

&lt;p&gt;Inference parameters are settings you can adjust at inference time to influence the response from the model. The most common are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Temperature&lt;/strong&gt;: Temperature is a value between 0 and 1, and it regulates the creativity of the model's responses. Use a lower temperature if you want more deterministic responses, and use a higher temperature if you want creative or different responses for the same prompt&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top K&lt;/strong&gt;: The number of most-likely candidates that the model considers for the next token. Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top P&lt;/strong&gt;: The percentage of most-likely candidates that the model considers for the next token. Choose a lower value to decrease the size of the pool and limit the options to more likely outputs.&lt;/li&gt;
&lt;/ul&gt;
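&lt;p&gt;A toy sketch makes the temperature and Top K behaviour concrete. This is generic sampling logic over an invented four-token distribution, not any particular model's implementation:&lt;/p&gt;

```python
import math

def softmax_with_temperature(logits, temperature):
    """Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs, k):
    """Keep only the k most likely candidates, then renormalise."""
    cutoff = sorted(probs, reverse=True)[k - 1]
    kept = [p if p >= cutoff else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# Toy logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, 0.1]
cool = softmax_with_temperature(logits, 0.5)   # more deterministic
warm = softmax_with_temperature(logits, 1.0)   # more varied
print(max(cool) > max(warm))  # → True: low temperature concentrates probability
print(top_k(warm, 2))         # only the two most likely tokens remain
```

&lt;p&gt;Top P works the same way as &lt;code&gt;top_k&lt;/code&gt; above, except the pool is cut off once the cumulative probability of the kept candidates reaches the chosen percentage.&lt;/p&gt;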

&lt;h3&gt;
  
  
  Amazon SageMaker Capabilities
&lt;/h3&gt;

&lt;p&gt;Amazon SageMaker is a service that provides a whole host of features and capabilities you need to be aware of. The exam will test your awareness of these, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SageMaker Clarify&lt;/li&gt;
&lt;li&gt;SageMaker JumpStart&lt;/li&gt;
&lt;li&gt;SageMaker Studio&lt;/li&gt;
&lt;li&gt;SageMaker Data Wrangler&lt;/li&gt;
&lt;li&gt;SageMaker Feature Store&lt;/li&gt;
&lt;li&gt;SageMaker Model Cards&lt;/li&gt;
&lt;li&gt;SageMaker Model Dashboard&lt;/li&gt;
&lt;li&gt;SageMaker Model Monitor&lt;/li&gt;
&lt;li&gt;SageMaker Ground Truth&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Metrics
&lt;/h3&gt;

&lt;p&gt;Finally, expect to see a number of questions around model performance metrics, and which one is the most appropriate.&lt;/p&gt;

&lt;p&gt;For classification tasks such as spam detection you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy&lt;/strong&gt;: The ratio of correctly predicted instances to the total instances&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Precision&lt;/strong&gt;: How many of the predicted positive cases are actually positive&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recall&lt;/strong&gt;: How many of the actual positive cases were predicted correctly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;F1 Score&lt;/strong&gt;: The F1 score combines precision and recall into a single metric&lt;/li&gt;
&lt;/ul&gt;
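&lt;p&gt;All four metrics fall out of the confusion-matrix counts. A quick sketch, using made-up spam-filter numbers:&lt;/p&gt;

```python
def classification_metrics(tp, fp, fn, tn):
    """Derive the four headline metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of predicted positives, how many were right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# e.g. a spam filter: 40 spam caught, 10 ham wrongly flagged,
# 20 spam missed, 30 ham correctly passed through
acc, prec, rec, f1 = classification_metrics(tp=40, fp=10, fn=20, tn=30)
print(acc, prec, round(rec, 2), round(f1, 2))
```

&lt;p&gt;Here precision (0.8) is higher than recall (about 0.67): the filter rarely flags ham, but it misses a third of the spam. Optimising for recall would accept more false positives in exchange for fewer misses.&lt;/p&gt;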

&lt;p&gt;You should be aware of the confusion matrix, and when you might want to optimise for recall (life-saving tasks such as cancer diagnosis, where you want to minimise false negatives) versus when you might want to optimise for precision (where you want to minimise false positives, such as in spam email detection).&lt;/p&gt;

&lt;p&gt;Text generation metrics include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ROUGE&lt;/strong&gt; (Recall-Oriented Understudy for Gisting Evaluation): used to evaluate text generation or summarisation tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BLEU&lt;/strong&gt; (Bilingual Evaluation Understudy Score): used for translation tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perplexity&lt;/strong&gt;: measures how well a model can predict a sequence of tokens or words in a given dataset.&lt;/li&gt;
&lt;/ul&gt;
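&lt;p&gt;Of these, perplexity is the easiest to pin down with a formula: it is the exponential of the average negative log-likelihood the model assigns to each token. A quick sketch:&lt;/p&gt;

```python
import math

def perplexity(token_probs):
    """Exponential of the average negative log-likelihood across the
    sequence; lower values mean the model predicts the tokens better."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# A model assigning probability 0.25 to every token is, on average,
# as 'confused' as choosing uniformly among 4 options.
print(round(perplexity([0.25, 0.25, 0.25, 0.25]), 2))  # → 4.0
```

&lt;p&gt;A model that assigns higher probabilities to the observed tokens always scores a lower (better) perplexity.&lt;/p&gt;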

&lt;p&gt;You should also know how to interpret model performance. For example, if your model performs better on training data than on new data it is &lt;strong&gt;overfitting&lt;/strong&gt;, which is exhibiting high variance. If your model does not work well on either training data or new data it is &lt;strong&gt;underfitting&lt;/strong&gt; and exhibiting high bias. If there are disparities in the performance of your model across different groups, then it is showing &lt;strong&gt;bias&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Taking the Exam
&lt;/h2&gt;

&lt;p&gt;Congratulations if you have booked your exam. Having taken a number of AWS exams over the years, here are my top tips for the exam itself:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Don't panic. It's likely that one or two questions will come up that confuse you. Simply flag them for review and move on.&lt;/li&gt;
&lt;li&gt;Read the question carefully and pick out the key information it is asking for.&lt;/li&gt;
&lt;li&gt;If you don't know the answer for a question, you can often eliminate a couple of the possible options, which will help narrow it down.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>certification</category>
      <category>learning</category>
    </item>
    <item>
      <title>Code Transformation with Amazon Q</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Tue, 09 Jul 2024 09:24:42 +0000</pubDate>
      <link>https://forem.com/aws-heroes/code-transformation-with-amazon-q-40df</link>
      <guid>https://forem.com/aws-heroes/code-transformation-with-amazon-q-40df</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;An exciting feature of &lt;code&gt;Amazon Q&lt;/code&gt; is the concept of agents that autonomously perform a complex, multistep task from a single prompt. One of these agents is the "Developer Agent for Code Transformation" which automates the process of upgrading and transforming Java applications from &lt;code&gt;Java 8&lt;/code&gt; or &lt;code&gt;Java 11&lt;/code&gt; to &lt;code&gt;Java 17&lt;/code&gt;, with more language support on the way.&lt;/p&gt;

&lt;p&gt;I have previously demonstrated this capability using a simple &lt;code&gt;Java 8&lt;/code&gt; example. However, when I stumbled upon an old &lt;code&gt;Java 11&lt;/code&gt; Spring Boot application with thousands of lines of code, the build failed to compile on &lt;code&gt;Java 17&lt;/code&gt; with various upgrade issues, and it meant a time-consuming process to manually step through all of the problems.&lt;/p&gt;

&lt;p&gt;Now, a few months on, I wanted to see whether any step-change improvements had been made to the agent, so I dug out the old codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the Java 11 Bicycle Licence Application
&lt;/h2&gt;

&lt;p&gt;In March 2020, I presented at an online AWS Tech Talk on new features of &lt;code&gt;Amazon QLDB&lt;/code&gt;. To bring this to life, we built a quick demo using Java 11 and Spring Boot. You can find the code repository on GitHub &lt;a href="https://github.com/mlewis7127/bicycle-licence-ui-master" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The repository does require a QLDB ledger and table to be set up in the &lt;code&gt;eu-west-2&lt;/code&gt; region, with more details provided in the associated &lt;code&gt;README.md&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;First create a new QLDB ledger in the &lt;code&gt;eu-west-2&lt;/code&gt; region:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpir04argrt50t302w3sz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpir04argrt50t302w3sz.jpg" alt="QLDB Create Ledger"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And once created, open up the &lt;code&gt;PartiQL editor&lt;/code&gt; and run a command to create a new table:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7nrcj5dsh9jm0uhu8sh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7nrcj5dsh9jm0uhu8sh.jpg" alt="QLDB Create Table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we create a new directory by cloning the repository using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git clone https://github.com/mlewis7127/bicycle-licence-ui-master.git


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We will use JetBrains IntelliJ IDEA for this transformation. Open a new project and select the folder created in the previous step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5s79zimkrgknye6y707r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5s79zimkrgknye6y707r.jpg" alt="Open Project in Intellij"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If a pop-up appears, enable annotation processing for Lombok. Make sure to set the project to use version 11 of the &lt;code&gt;Java SDK&lt;/code&gt;. I am using &lt;code&gt;Amazon Corretto&lt;/code&gt; for my distribution of the Open Java Development Kit (OpenJDK).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdbshg9uwiba9sck11rd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdbshg9uwiba9sck11rd.jpg" alt="Set up Java 11 SDK"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is set for the project by selecting &lt;code&gt;File --&amp;gt; Project Structure&lt;/code&gt; and choosing &lt;code&gt;Project&lt;/code&gt; under &lt;code&gt;Project Settings&lt;/code&gt;. Clicking on the SDK dropdown box allows you to select &lt;code&gt;Download SDK&lt;/code&gt;, where you can specify the version and vendor of the JDK to download.&lt;/p&gt;

&lt;p&gt;From here, run a Maven &lt;code&gt;clean&lt;/code&gt; and then a Maven &lt;code&gt;compile&lt;/code&gt;. In the screenshot below I am doing this from the Maven plugin, but you can also run it from the command line. This will compile all of the code successfully, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F398yphdyl23ycr6v1cza.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F398yphdyl23ycr6v1cza.jpg" alt="Maven Compile Success"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now launch the application by running the &lt;code&gt;BicycleLicenceApplication&lt;/code&gt; class in a new configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuc8tzcxsfxy3mbvjraj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuc8tzcxsfxy3mbvjraj.jpg" alt="Application Run Configuration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will start the application, which can be accessed in a browser window on &lt;code&gt;http://localhost:8080/&lt;/code&gt;. I have put together a short video below showing the application running as a &lt;code&gt;Java 11&lt;/code&gt; application.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/LbmzamV9CZA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Amazon Q to automatically upgrade to Java 17
&lt;/h2&gt;

&lt;p&gt;Now we have the application successfully running using &lt;code&gt;Java 11&lt;/code&gt;, we want to upgrade to &lt;code&gt;Java 17&lt;/code&gt;. We tell &lt;code&gt;Amazon Q&lt;/code&gt; that we want to use the Code Transformation agent by opening a chat window and typing &lt;code&gt;/transform&lt;/code&gt; and selecting the agent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8r5xgdfzrt05fin89229.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8r5xgdfzrt05fin89229.jpg" alt="Transform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This launches the &lt;code&gt;Developer Agent for Code Transformation&lt;/code&gt;. The &lt;code&gt;bicycle-licence-ui&lt;/code&gt; module is automatically selected, and we press confirm to let the agent know we want to upgrade to &lt;code&gt;Java 17&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0hkr99x4z53biu86dsd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0hkr99x4z53biu86dsd.jpg" alt="Code Transformation Agent"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we select transform, the agent takes over. It starts by building the Java module locally, then uploads the project artefacts it has just built. Once these files have been uploaded, the code transformation job is accepted, and the agent builds the code in a secure build environment. Once built, &lt;code&gt;Amazon Q&lt;/code&gt; analyses the code in order to generate a transformation plan.&lt;/p&gt;

&lt;p&gt;Once created, you can see a summary of the transformation plan. In this case, we have an application with almost 2500 lines of code, in which we need to replace 2 dependencies, with changes made to 5 files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7lh0uxk5gdpcw2qqfm6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7lh0uxk5gdpcw2qqfm6.jpg" alt="Code Transformation Plan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A summary is also provided of the planned transformation changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wflrgxon5grz46hh59m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wflrgxon5grz46hh59m.jpg" alt="Planned Transformation Changes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The generated transformation plan follows a three-step process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update JDK version, dependencies and related code&lt;/li&gt;
&lt;li&gt;Upgrade deprecated code&lt;/li&gt;
&lt;li&gt;Finalise code changes and generate transformation summary&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is where the improvements in code transformation really shone. A few months ago, when the build of the application on &lt;code&gt;Java 17&lt;/code&gt; failed, the automated process ended abruptly. Now, the agent takes the compilation errors and makes changes to fix them before attempting to build the code again. You can see below that it took several attempts, making changes each time, before the code built successfully on &lt;code&gt;Java 17&lt;/code&gt;. The key point is that this was all automated, with no manual input required.&lt;/p&gt;
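
&lt;p&gt;The retry behaviour can be sketched as a simple loop. This is an illustrative sketch of the approach only, not the agent's actual implementation:&lt;/p&gt;

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only: the agent repeatedly compiles, repairs the
// reported errors, and retries until the build succeeds or it gives up.
public class BuildFixLoop {

    // Returns the number of fix attempts needed before the build succeeds,
    // or maxAttempts if the errors could not all be resolved in time.
    static int attemptsNeeded(Deque<String> outstandingErrors, int maxAttempts) {
        int attempt = 0;
        while (!outstandingErrors.isEmpty() && attempt < maxAttempts) {
            attempt++;
            // In the real agent this step rewrites source files; here we
            // simply treat one class of error as fixed per attempt.
            outstandingErrors.poll();
        }
        return attempt;
    }

    public static void main(String[] args) {
        Deque<String> errors = new ArrayDeque<>();
        errors.add("package javax.servlet does not exist");
        errors.add("deprecated API usage");
        errors.add("incompatible dependency version");
        System.out.println("Build succeeded after "
                + attemptsNeeded(errors, 10) + " fix attempts");
    }
}
```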

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzvw3m5u63dq2u4q3cjt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzvw3m5u63dq2u4q3cjt.jpg" alt="Step 1 Compilation Errors"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After just 16 minutes, the code transformation was complete and had succeeded.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0ynl5h1yxqycrtjpv37.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0ynl5h1yxqycrtjpv37.jpg" alt="Code Transformation Summary"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Amazon Q&lt;/code&gt; lets us know about the planned dependency that it updated, with the other identified dependency having been removed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fin3juu0yeum1g04x95d2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fin3juu0yeum1g04x95d2.jpg" alt="Planned Dependencies Replaced"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It also lists a number of additional dependencies that were updated during the upgrade.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxsmwl4xychclap81bk9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxsmwl4xychclap81bk9.jpg" alt="Additional Dependencies Updated"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A pop-up box allows you to see which files have been changed, and you can select each file individually and run a side-by-side comparison to evaluate the changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7aas6jchnu8d1pn2i0r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7aas6jchnu8d1pn2i0r.jpg" alt="Apply Patch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As an example, I can see that all references to &lt;code&gt;javax.servlet.http.HttpServletRequest&lt;/code&gt; have been replaced by &lt;code&gt;jakarta.servlet.http.HttpServletRequest&lt;/code&gt;, as &lt;code&gt;Spring Boot 3.0&lt;/code&gt; has migrated from &lt;code&gt;Java EE&lt;/code&gt; to &lt;code&gt;Jakarta EE&lt;/code&gt; APIs for all dependencies. The agent also implemented a new interface introduced in the latest Spring Framework version that had not previously existed.&lt;/p&gt;
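
&lt;p&gt;The nature of this change is mechanical - a package rename across every affected import. A minimal sketch of the rewrite, illustrative only since the agent operates on the full source tree:&lt;/p&gt;

```java
import java.util.List;

// Hypothetical sketch of the mechanical import rewrite applied when moving
// from Spring Boot 2.x (Java EE namespaces) to Spring Boot 3.x (Jakarta EE).
public class JakartaImportRewriter {

    // Rewrites javax.* style imports to their jakarta equivalents.
    static String rewrite(String importLine) {
        return importLine
                .replace("javax.servlet", "jakarta.servlet")
                .replace("javax.persistence", "jakarta.persistence")
                .replace("javax.validation", "jakarta.validation");
    }

    public static void main(String[] args) {
        List<String> imports = List.of(
                "import javax.servlet.http.HttpServletRequest;",
                "import javax.validation.Valid;");
        imports.stream().map(JakartaImportRewriter::rewrite)
                .forEach(System.out::println);
    }
}
```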

&lt;p&gt;After this, we accept the automated updates and run a Maven &lt;code&gt;clean&lt;/code&gt; and a Maven &lt;code&gt;compile&lt;/code&gt;, making sure to click the button in the top left of the screenshot below to reload the latest versions of the dependencies:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxa6zf4mjffobg79dew8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxa6zf4mjffobg79dew8.jpg" alt="Reload Maven Project"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can launch and test the application, which is now running on &lt;code&gt;Java 17&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/eKIGflrCQ1s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Observations
&lt;/h2&gt;

&lt;p&gt;There has been a &lt;strong&gt;massive improvement&lt;/strong&gt; in the capability of the “Developer Agent for Code Transformation” over the past few months. If you had tried and discounted the agent, you should definitely give it another go. As the underlying models improve, I expect further step-change improvements to arrive on similarly short timescales.&lt;/p&gt;

&lt;p&gt;Two areas to call out for me are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unit Testing&lt;/strong&gt;&lt;br&gt;
In the video above, the call to verify the digest failed with a &lt;code&gt;java.lang.NoSuchFieldError&lt;/code&gt; in the logs. This is an example where despite having no compilation errors, we can still experience runtime errors. I have updated the GitHub repository with some unit tests as an example, which demonstrates how Amazon Q executes these tests as part of the code transformation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qkkbc9syf39mawmd0ko.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qkkbc9syf39mawmd0ko.jpg" alt="Amazon Q Unit Tests"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We were able to fix the error by correcting a version mismatch between AWS SDK modules. This kind of issue may not be caught by unit tests unless they interact directly with AWS endpoints, which is something being worked on. Nevertheless, ensuring there are sufficient unit tests covering the core functionality will help reduce cases of code compiling correctly but failing at runtime.&lt;/p&gt;
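
&lt;p&gt;As a sketch of the kind of unit test that helps here, the following exercises digest logic at runtime using only the JDK's &lt;code&gt;MessageDigest&lt;/code&gt;. It is illustrative only - the real application verifies digests against QLDB:&lt;/p&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch: exercising the digest code path at runtime is what
// surfaces linkage problems such as NoSuchFieldError, which the compiler
// cannot catch. The real application verifies digests against QLDB.
public class DigestCheck {

    // Computes a SHA-256 digest of the input string.
    static byte[] sha256(String input) {
        try {
            return MessageDigest.getInstance("SHA-256")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }

    public static void main(String[] args) {
        byte[] digest = sha256("licence-0001"); // hypothetical input value
        // SHA-256 always produces 32 bytes; reaching this line at all proves
        // the dependency chain resolves correctly at runtime.
        System.out.println("digest length = " + digest.length);
    }
}
```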

&lt;p&gt;&lt;strong&gt;AWS SDK Upgrades&lt;/strong&gt;&lt;br&gt;
Although a number of libraries and frameworks were upgraded, such as Spring and Log4j, the AWS SDK itself remained on version 1.x. A big reason for this is that AWS SDK for Java 2.x is a major rewrite, and upgrading to it typically requires custom development work.&lt;/p&gt;

&lt;p&gt;Watch the video below to see the agent in action and the steps it takes as described throughout this post.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/e7Mvek3wP38"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>genai</category>
      <category>java</category>
    </item>
    <item>
      <title>Using GenAI to improve developer experience on AWS</title>
      <dc:creator>Matt Lewis</dc:creator>
      <pubDate>Fri, 23 Feb 2024 11:37:14 +0000</pubDate>
      <link>https://forem.com/aws-heroes/using-genai-to-improve-developer-experience-on-aws-5bpk</link>
      <guid>https://forem.com/aws-heroes/using-genai-to-improve-developer-experience-on-aws-5bpk</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;Today, many developers find themselves working on multiple projects at the same time, often dealing with unfamiliar codebases and programming languages, and being pulled in many directions. This is backed up by the 2023 "State of Engineering Management Report" by Jellyfish which found that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;60% of teams reported being short of engineering resources needed to accomplish their established goals&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is an area where generative AI has the promise to add real value. The latest Gartner research backs this up, with the planning assumption that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;75% of enterprise software engineers will use AI coding assistants, up from less than 10% in early 2023&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With this in mind, enterprises need to prepare now for the inevitable wide-scale adoption of these tools, and start taking advantage of the benefits they can offer. In this post, we look at some of the core features of the GenAI services available today on AWS, which help to provide the next-generation developer experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;p&gt;There are many capabilities available today, which has resulted in a lengthy blog post. These capabilities have been grouped into categories, and structured as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Responsible AI&lt;/li&gt;
&lt;li&gt;
Code Creation

&lt;ul&gt;
&lt;li&gt;Code Completion&lt;/li&gt;
&lt;li&gt;Code Generation&lt;/li&gt;
&lt;li&gt;Customizations&lt;/li&gt;
&lt;li&gt;Infrastructure as Code Support&lt;/li&gt;
&lt;li&gt;SQL Support&lt;/li&gt;
&lt;li&gt;CodeWhisperer on the Command Line&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Application Understanding

&lt;ul&gt;
&lt;li&gt;Explaining Code&lt;/li&gt;
&lt;li&gt;Application Visualisation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Application Modernisation

&lt;ul&gt;
&lt;li&gt;Chat with Amazon Q&lt;/li&gt;
&lt;li&gt;Code Transformation&lt;/li&gt;
&lt;li&gt;Code Optimisation&lt;/li&gt;
&lt;li&gt;Code Translation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Feature Development&lt;/li&gt;
&lt;li&gt;Code Vulnerabilities&lt;/li&gt;
&lt;li&gt;Debugging and Troubleshooting&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Responsible AI &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Although not directly part of developer experience, the first area to address is Responsible AI. This is because the rise of generative AI has led to concerns around the ethics and legality of the content generated. The risk is heightened by the continuing legal actions taking place, including the ongoing class action lawsuit against GitHub, OpenAI and Microsoft claiming violations against open-source licensing and copyright law.&lt;/p&gt;

&lt;p&gt;The professional tier of Amazon CodeWhisperer is covered by the "Indemnified Generative AI Services" from AWS. This means - quoting from the &lt;a href="https://aws.amazon.com/service-terms/"&gt;AWS Service Terms&lt;/a&gt; - that&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;AWS will defend you and your employees, officers, and directors against any third-party claim alleging that the Generative AI Output generated by an Indemnified Generative AI Service infringes or misappropriates that third party’s intellectual property rights, and will pay the amount of any adverse final judgment or settlement&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Amazon CodeWhisperer provides a reference tracker that displays the licensing information for a code recommendation. This allows a developer to understand what source code attribution they need to make, and whether they should accept the recommendation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qm45lty73j9ifvo8t7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qm45lty73j9ifvo8t7g.png" alt="Open Source Reference Tracker" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, to ensure that you are indemnified, you need to remain opted in (the default) to include suggestions with code references at the &lt;code&gt;AWS Organization&lt;/code&gt; level within the CodeWhisperer service console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk29ukags3fp0ucunarj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk29ukags3fp0ucunarj.png" alt="CodeWhisperer Advanced Settings" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS also addresses the toxicity and fairness of the generated code by evaluating it in real time, and filtering out any recommendations that include toxic phrases or that indicate bias.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Creation &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;One of the primary goals of AI Coding Assistants is to increase the productivity of developers in creating code. In this section, we break down this capability into different categories, separating out programming languages, from Infrastructure-as-Code tools, SQL and shell script commands in the console.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Completion &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;With code completion, CodeWhisperer makes suggestions inline as code is written in the IDE. This has been around for some time, and is known by the term &lt;code&gt;IntelliSense&lt;/code&gt; in Visual Studio Code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsrebu5kqzcfaxgmng5n.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsrebu5kqzcfaxgmng5n.gif" alt="Code Completion" width="1280" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The challenge with code completion is that developers initiate the process by writing code and they are driving the implementation detail.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Generation &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;With code generation, a developer writes a comment in natural language giving specific and concise requirements. This information, alongside the surrounding code including other open files in the editor, acts as the input context. CodeWhisperer returns a suggestion based on this context.&lt;/p&gt;

&lt;p&gt;Amazon CodeWhisperer is trained on billions of lines of Amazon internal and open source code. This gives CodeWhisperer an advantage when it comes to making suggestions for using AWS native services. In the example below, CodeWhisperer understands from the input context that we want to create a handler for an AWS Lambda function, and suggests a correct signature and function implementation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fir1ts5sw5uhrlneui1hy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fir1ts5sw5uhrlneui1hy.gif" alt="CodeWhisperer Lambda Function" width="756" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are techniques to help you generate the best recommendation, and you can find out more details in this blog post on &lt;a href="https://aws.amazon.com/blogs/devops/best-practices-for-prompt-engineering-with-amazon-codewhisperer/"&gt;Best practices for prompt engineering with Amazon CodeWhisperer&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Customizations  &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The source code that Amazon CodeWhisperer is trained on is great for most scenarios, but does not help when an organization has its own internal set of libraries, best practices and coding standards that must be followed. This is where the customization capability comes in. With this capability, you create a connection to your code repositories (either third-party hosted or via an S3 bucket), and then train a customization from this codebase. This capability is in preview and available only in the Professional tier.&lt;/p&gt;

&lt;p&gt;When you create a customization and assign it to a user, they are then able to select that customization in the editor and this will then be used to generate code suggestions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxkpmrk0d74qt7wfyeno.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxkpmrk0d74qt7wfyeno.png" alt="CodeWhisperer Customization" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CodeWhisperer Customizations were designed from the ground up with security in mind. This &lt;a href="https://aws.amazon.com/blogs/devops/generative-ai-meets-aws-security/"&gt;blog post&lt;/a&gt; gives more detail in this space.&lt;/p&gt;
&lt;h3&gt;
  
  
  Infrastructure as Code Support &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;CodeWhisperer support for creating code extends beyond just programming languages and into Infrastructure as Code (IaC) tools such as CloudFormation, AWS CDK and Terraform. The screenshot below shows the specification of an RDS instance created as a resource in CloudFormation from a natural language prompt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqczvl7axfxntcn8c1ucq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqczvl7axfxntcn8c1ucq.png" alt="CloudFormation Support" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The example below uses a simple prompt to configure Terraform Cloud, and then create an EC2 instance using the AMI for Amazon Linux 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw4wehtszo3pxr1cg107.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvw4wehtszo3pxr1cg107.gif" alt="Terraform Support" width="1280" height="720"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  SQL Support &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;CodeWhisperer also supports the creation of SQL as a standard language for database creation and manipulation. This covers Data Definition Language commands (such as creating tables and views) as well as Data Manipulation Language commands (from simple inserts through to complex queries with joins across tables).&lt;/p&gt;
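
&lt;p&gt;For instance, prompts along these lines generate both flavours of statement. The output below is illustrative only - actual suggestions depend on the surrounding context:&lt;/p&gt;

```sql
-- DDL: from a prompt such as "create a table to store customer orders"
CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT,
    ordered_on  DATE
);

-- DML: from a prompt such as "find orders placed this year"
SELECT order_id, customer_id
FROM orders
WHERE ordered_on >= '2024-01-01';
```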

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd05g9l3a2iy3bvca3w7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd05g9l3a2iy3bvca3w7z.png" alt="CodeWhisperer Generating SQL" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Generative AI can also help the developer by converting natural language to SQL across many AWS services. In the screenshot below, we are using Generative AI to generate queries from natural language against Amazon Redshift. These queries can be added directly to a notebook and then executed, all within the Redshift Query Editor V2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fki3ub8wsdogv8b8duohm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fki3ub8wsdogv8b8duohm.png" alt="Redshift Query Editor v2" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It doesn't stop there - you can also use natural language to query &lt;a href="https://aws.amazon.com/blogs/aws/use-natural-language-to-query-amazon-cloudwatch-logs-and-metrics-preview/"&gt;Amazon CloudWatch Log Groups and Metrics&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/blogs/mt/simplify-query-authoring-in-aws-config-advanced-queries-with-natural-language-query-generation/"&gt;AWS Config Advanced Queries&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  CodeWhisperer on the Command Line &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;CodeWhisperer is also available on the command line. This allows you to write a natural language instruction which is converted to an executable shell snippet. It supports hundreds of popular CLIs. This reduces the overhead on the developer of having to remember these commands, or the context switching of navigating away from the IDE to look them up. You can also execute the commands directly, as we see below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez1vt23kk66m9e6700d4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez1vt23kk66m9e6700d4.gif" alt="CodeWhisperer Command Line" width="756" height="490"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  Application Understanding
&lt;/h2&gt;

&lt;p&gt;One of the most valuable use cases for GenAI is helping to understand existing applications. A common problem is supporting an application with limited, if any, documentation, written in an unfamiliar programming language or style, and with no comments in the code.&lt;/p&gt;
&lt;h3&gt;
  Explaining Code
&lt;/h3&gt;

&lt;p&gt;Working in combination with CodeWhisperer in your IDE, you can send whole code sections to Amazon Q and ask for an explanation of what the selected code does. To show how this works, we open up the &lt;code&gt;file.rs&lt;/code&gt; file cloned from this &lt;a href="https://github.com/rust-lang/docs.rs/blob/master/src/db/file.rs"&gt;GitHub repository&lt;/a&gt;. This is part of an open source project to host documentation of crates for the Rust Programming Language, which is a language we are not familiar with.&lt;/p&gt;

&lt;p&gt;We select a code block from the file, right-click, and then send to Amazon Q to explain:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpye388hvcz9esvskr03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpye388hvcz9esvskr03.png" alt="Amazon Q Explain" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Q provides a detailed breakdown of the function that has been written in Rust, and the key activities it carries out. What is really useful in this case is that Amazon Q suggests follow-up questions to help you get an even better understanding of the code. This allows you to chat with and ask questions about the code segment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1ap1riu4ff4n3up5uzf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1ap1riu4ff4n3up5uzf.png" alt="Amazon Q Explain Output" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  Application Visualisation
&lt;/h3&gt;

&lt;p&gt;A new feature that is incredibly useful is the ability to visualise how an application is composed using &lt;code&gt;Application Composer&lt;/code&gt; directly within the IDE. At the end of last year, Application Composer announced support for all 1,000+ resources supported by CloudFormation.&lt;/p&gt;

&lt;p&gt;With the introduction of the &lt;code&gt;AWS CloudFormation IaC Generator&lt;/code&gt; (for more information see &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/generate-IaC.html"&gt;Infrastructure as Code Generator&lt;/a&gt;), this now works for any application running in AWS, even if it was not originally deployed through CloudFormation, as the generator allows you to create a CloudFormation template from existing resources. Within VSCode, you can select the template, right-click, and select "Open with Application Composer".&lt;/p&gt;

&lt;p&gt;The screenshot below uses the CloudFormation template from an AWS samples application in GitHub which can be found &lt;a href="https://github.com/aws-samples/generative-ai-amazon-bedrock-langchain-agent-example"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpqh7imo1cugw2dzdroi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpqh7imo1cugw2dzdroi.png" alt="Application Composer" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  Application Modernisation
&lt;/h2&gt;

&lt;p&gt;A massive problem for organisations is the amount of technical debt growing in legacy applications. The challenge here is how to modernise these applications. The starting point is to make sure you understand the application in question using the approaches above. In addition, a number of other capabilities are available.&lt;/p&gt;
&lt;h3&gt;
  Chat with Amazon Q
&lt;/h3&gt;

&lt;p&gt;Amazon Q is available in both the console and the IDE to answer questions about AWS. This allows you to ask questions about AWS services, limits and best practices, as well as about software development in general.&lt;/p&gt;

&lt;p&gt;The generated content contains links to the source articles, which allow you to do more in-depth reading to validate the response. Again, when writing code in the editor, this allows the developer to remain in the IDE to ask these questions, reducing distraction and context switching.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqtxyy5gdxul1dirnm0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqtxyy5gdxul1dirnm0e.png" alt="Amazon Q Architecture" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  Code Transformation
&lt;/h3&gt;

&lt;p&gt;Code Transformation is a formal feature of Amazon Q. It is currently available in preview with support to carry out complete application upgrades from Java 8 or Java 11 to Java 17. Coming soon is support for .NET Framework to cross-platform .NET upgrades, to migrate applications from Windows to Linux faster. The video below shows the steps involved in automatically updating a Java application.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/q8B3bridbpU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The internal Amazon results are impressive, with 1000 production applications upgraded from Java 8 to Java 17 in just two days, with an average of 10 minutes to upgrade each application.&lt;/p&gt;

&lt;h3&gt;
  Code Optimisation
&lt;/h3&gt;

&lt;p&gt;Code optimisation is a concept supported by Amazon Q through its built-in prompts. In the example below, we have inefficient code with two loops that could be combined into a single loop. Amazon Q correctly detects this and makes suggestions to optimise the code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugacfeuw854sed2ftrgh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugacfeuw854sed2ftrgh.png" alt="Amazon Q Code Optimisation" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;
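&lt;p&gt;The code in the screenshot is not reproduced here, but as a hypothetical sketch (the function names and data are illustrative, not taken from the screenshot), the kind of refactoring involved looks like this:&lt;/p&gt;

```python
# Inefficient version: two separate passes over the same list.
def stats_two_passes(values):
    total = sum(v for v in values)        # first loop
    squares = sum(v * v for v in values)  # second loop
    return total, squares

# The style of optimisation suggested: a single pass that
# accumulates both results at once.
def stats_one_pass(values):
    total = 0
    squares = 0
    for v in values:
        total += v
        squares += v * v
    return total, squares
```

&lt;p&gt;Both functions return the same results; the single-pass version simply iterates the list once instead of twice.&lt;/p&gt;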

&lt;h3&gt;
  Code Translation
&lt;/h3&gt;

&lt;p&gt;Code translation is another concept supported by Amazon Q through prompts. As always, the accuracy and quality of the code generation depends on the size and quality of the training data. In this context, there is more support for languages such as Java, Python and JavaScript than for C++ or Scala. In the screenshot below, we have taken an AWS Lambda function written in JavaScript and asked Amazon Q to translate it to Python.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fik4ldmpiba0osles9o53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fik4ldmpiba0osles9o53.png" alt="Code Translation Python" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;
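&lt;p&gt;The function in the screenshot is not reproduced here, but as a hypothetical illustration, the kind of Python handler such a translation might produce from a simple JavaScript handler (one that reads a value from the event and returns a JSON response) looks like this:&lt;/p&gt;

```python
import json

# Hypothetical translated handler: the JavaScript original would have
# used exports.handler and JSON.stringify for the response body.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```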

&lt;h2&gt;
  Feature Development
&lt;/h2&gt;

&lt;p&gt;Feature Development is a formal feature of Amazon Q. You explain the feature you want to develop, and then allow Amazon Q to create everything from the implementation plan to the suggested code. For this example, we create an application using an AWS SAM quick start template for a serverless API. We then ask Q to create a new API for us.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzso8biye5pypvya5bws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzso8biye5pypvya5bws.png" alt="Amazon Q Dev Feature" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Q uses the context of the current project to generate a detailed implementation plan as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdp0h1onb6zwignboide.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdp0h1onb6zwignboide.png" alt="Amazon Q Implementation Plan" width="800" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you select 'Write Code', Amazon Q generates the code suggestions, following the coding style already established in the current project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuplrlejdv7w1oyl9sbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuplrlejdv7w1oyl9sbf.png" alt="Amazon Q Dev Feature Write Code" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This results in the proposed code suggestions: you can click on each file to view the differences, and finally choose whether or not to accept the changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72w7abo444zqwmsvlw7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72w7abo444zqwmsvlw7c.png" alt="Amazon Q Code Suggestions" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This promises to be a hugely powerful feature. In the screenshots above, we have shown how Amazon Q has taken a natural language instruction and created everything from the infrastructure as code to the function implementation and unit tests.&lt;/p&gt;

&lt;h2&gt;
  Code Vulnerabilities
&lt;/h2&gt;

&lt;p&gt;It is critical to prevent vulnerabilities from being present in your application, and the earlier these are detected and resolved in the development life cycle, the better. CodeWhisperer can detect security policy violations and vulnerabilities in code using static application security testing (SAST), secrets detection, and Infrastructure as Code (IaC) scanning.&lt;/p&gt;

&lt;p&gt;Within CodeWhisperer, you can choose to run a security scan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0oqaftxfpgynf1dtkjsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0oqaftxfpgynf1dtkjsy.png" alt="CodeWhisperer Security Scan" width="432" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This performs the security scan on the currently active file in the IDE editor, and its dependent files from the project.&lt;/p&gt;

&lt;p&gt;Security scans in CodeWhisperer identify security vulnerabilities and suggest how to improve your code. In some cases, CodeWhisperer provides code you can use to address those vulnerabilities. The security scan is powered by detectors from the &lt;a href="https://docs.aws.amazon.com/codeguru/detector-library/"&gt;Amazon CodeGuru Detector Library&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the screenshot below, we use sample code that contains a vulnerability to Cross Site Request Forgery (CSRF), and this is picked up by the scan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz41678pdca4ath2gkwd0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz41678pdca4ath2gkwd0.png" alt="Security Issue Detected" width="744" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having detected an issue, we select the function with the vulnerability and send it to Amazon Q to fix. Amazon Q generates code that we can copy or insert directly into the editor, as well as providing details about the issue and its resolution, and suggesting follow-up questions if we want to learn even more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqdi46itmq8zr7dsqpio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqdi46itmq8zr7dsqpio.png" alt="Amazon Q Fix" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;
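&lt;p&gt;The exact fix depends on the framework in use, but a minimal sketch of the general CSRF mitigation, a per-session synchronizer token validated in constant time (the function names here are illustrative, not Amazon Q's actual output), looks like this:&lt;/p&gt;

```python
import hmac
import secrets

# Issue a random synchronizer token and store it in the user's session;
# the token is embedded in forms and must accompany state-changing requests.
def issue_csrf_token(session):
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

# Validate the submitted token against the session copy.
# hmac.compare_digest performs a constant-time comparison,
# avoiding timing side channels.
def is_valid_csrf_token(session, submitted_token):
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted_token)
```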

&lt;h2&gt;
  Debugging and Troubleshooting
&lt;/h2&gt;

&lt;p&gt;Moving outside of the IDE and into the AWS console, AWS now provides generative AI capabilities that can be used to help debug problems.&lt;/p&gt;

&lt;p&gt;In the example below, we have tested an AWS Lambda function that is failing. This reveals a button we can select to get Amazon Q to help us with troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb66vmhxt2h5nbzb3ftff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb66vmhxt2h5nbzb3ftff.png" alt="Troubleshoot with Amazon Q" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Q not only provides a summary of its initial analysis of the problem, but can also be used to provide the steps required to resolve the issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1xl35emru2cab7r6epi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1xl35emru2cab7r6epi.png" alt="Amazon Q Help me Resolve" width="659" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Outside of the editor, Amazon Q can also help to troubleshoot network-related issues by working with &lt;a href="https://docs.aws.amazon.com/vpc/latest/reachability/what-is-reachability-analyzer.html"&gt;Amazon VPC Reachability Analyzer&lt;/a&gt;. This allows you to ask questions in natural language as explained in this post &lt;a href="https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-amazon-q-support-for-network-troubleshooting/"&gt;introducing Amazon Q support for network troubleshooting&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Content generated by generative AI is non-deterministic: the exact code suggested may differ slightly when the same prompts are run again. There are occasions when the suggested code may not compile, or may contain an invalid configuration. This means a level of developer expertise is still required to drive the tooling and understand the content generated. There is also training required to understand how best to write comments to get the best suggestions, in effect a variation on prompt engineering. However, without question, the capabilities available today will increase your productivity when developing on AWS.&lt;/p&gt;

&lt;p&gt;For edge cases, it is worth mentioning the other alternatives available. CodeWhisperer abstracts away all of the complexity of dealing with LLMs. Amazon Bedrock gives you API access to supported models such as Claude and Llama 2. There are also open source code LLMs like StarCoder that you can bring into Amazon SageMaker. This gives you more control over the dataset or instructions you use to fine-tune a base model, but brings with it higher cost and complexity.&lt;/p&gt;

&lt;p&gt;Hopefully this post has given you a taster of many of the capabilities now available that form the next generation developer experience on AWS, and will encourage you to try it out for yourself.&lt;/p&gt;

</description>
      <category>genai</category>
      <category>tutorial</category>
      <category>aws</category>
      <category>developer</category>
    </item>
  </channel>
</rss>
