<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shubham Murti</title>
    <description>The latest articles on Forem by Shubham Murti (@shubham_murti).</description>
    <link>https://forem.com/shubham_murti</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1779381%2F940a6c14-92ae-4715-a3ac-1333c739f606.jpg</url>
      <title>Forem: Shubham Murti</title>
      <link>https://forem.com/shubham_murti</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shubham_murti"/>
    <language>en</language>
    <item>
      <title>Build an automated video monitoring system with AWS IoT and AI/ML : AWS Project</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Sat, 16 Nov 2024 11:25:28 +0000</pubDate>
      <link>https://forem.com/shubham_murti/build-an-automated-video-monitoring-system-with-aws-iot-and-aiml-aws-project-2ofl</link>
      <guid>https://forem.com/shubham_murti/build-an-automated-video-monitoring-system-with-aws-iot-and-aiml-aws-project-2ofl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon Web Services (AWS) offers a robust platform for developing cloud-based solutions, designed to cater to the growing demand for scalable, secure, and cost-effective technology. With AWS, professionals can build everything from simple applications to complex enterprise systems, leveraging a suite of globally recognized services such as EC2, S3, Lambda, and DynamoDB.&lt;/p&gt;

&lt;p&gt;This blog introduces practical scenarios where AWS services can be utilized to solve real-world problems, empowering developers and cloud enthusiasts to sharpen their skills. The focus is on applying concepts effectively to deploy efficient, scalable, and reliable cloud solutions. Whether you’re a beginner exploring cloud computing or an experienced professional advancing your expertise, AWS’s comprehensive ecosystem serves as a foundation for innovation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Account&lt;/strong&gt;&lt;br&gt;
Ensure you have an active AWS account to access and deploy resources. Proper billing permissions are essential for provisioning services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud Computing Basics&lt;/strong&gt;&lt;br&gt;
Familiarity with cloud concepts, including virtualization, networking, and basic AWS services like EC2 and S3, is helpful for a smoother experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Programming Fundamentals&lt;/strong&gt;&lt;br&gt;
Knowledge of programming concepts and languages (e.g., Python, JavaScript) is beneficial for working with AWS services and automation tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Technical Setup&lt;/strong&gt;&lt;br&gt;
Have a stable internet connection and tools like a modern browser, AWS CLI, and a text editor or IDE installed on your system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Awareness&lt;/strong&gt;&lt;br&gt;
Understand AWS pricing and usage limits. Utilize the Free Tier where possible to minimize costs while experimenting with services.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EC2&lt;/strong&gt;&lt;br&gt;
Scalable virtual servers for hosting applications and workloads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;&lt;br&gt;
Serverless compute service for running code without managing servers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;&lt;br&gt;
Secure and scalable storage for objects like files and backups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon RDS&lt;/strong&gt;&lt;br&gt;
Managed relational databases for engines like MySQL and PostgreSQL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt;&lt;br&gt;
Fully managed NoSQL database for high-speed and scalable applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon VPC&lt;/strong&gt;&lt;br&gt;
Isolated cloud network for secure deployment of AWS resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS IAM&lt;/strong&gt;&lt;br&gt;
Access management to control permissions for users and resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS KMS&lt;/strong&gt;&lt;br&gt;
Encryption key management for secure data storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon CloudWatch&lt;/strong&gt;&lt;br&gt;
Monitoring and observability service for AWS resources and applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CloudFormation&lt;/strong&gt;&lt;br&gt;
Infrastructure automation using code templates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CLI&lt;/strong&gt;&lt;br&gt;
Command-line tool to manage AWS services programmatically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Route 53&lt;/strong&gt;&lt;br&gt;
Domain Name System (DNS) service for routing traffic to applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;The architecture diagram below illustrates a typical AWS-based web application setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeihio00z2v1g3rb6ge0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeihio00z2v1g3rb6ge0.png" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  WebRTC Introduction
&lt;/h2&gt;

&lt;p&gt;WebRTC is an open technology specification for enabling real-time communication (RTC) across browsers and mobile applications via simple APIs. It leverages peering techniques for real-time data exchange between connected peers and provides low media streaming latency required for human-to-human interaction. WebRTC specification includes a set of IETF protocols including Interactive Connectivity Establishment (ICE RFC5245), Traversal Using Relay around NAT (TURN RFC5766), and Session Traversal Utilities for NAT (STUN RFC5389) for establishing peer-to-peer connectivity, in addition to protocol specifications for real-time media and data streaming.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/introduction/webrtc#webrtc-connection-flow" rel="noopener noreferrer"&gt;WebRTC Connection Flow&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The following diagram illustrates the connection flow as it occurs using Kinesis Video Streams for WebRTC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwwoo76hu5lca11k3aa3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwwoo76hu5lca11k3aa3.gif" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kinesis Video Streams with WebRTC Components
&lt;/h2&gt;

&lt;p&gt;Kinesis Video Streams provides a standards compliant WebRTC implementation as a fully-managed capability. You can use this capability to securely live stream media or perform two-way audio or video interaction between any camera IoT device and WebRTC compliant mobile or web players. As a fully-managed capability, you do not have to build, operate, or scale any WebRTC related cloud infrastructure such as signaling or media relay servers to securely stream media across applications and devices.&lt;/p&gt;

&lt;p&gt;Kinesis Video Streams provides managed end-points for WebRTC signaling that allows applications to securely connect with each other for peer-to-peer live media streaming. Next, it includes managed end-points for TURN that enables media relay via the cloud when applications cannot stream peer-to-peer media. It also includes managed end-points for STUN that enables applications to discover their public IP address when they are located behind a NAT or a firewall. Additionally, it provides easy to use SDKs to enable camera IoT devices with WebRTC capabilities. Finally, it provides client SDKs for Android, iOS, and for Web applications to integrate Kinesis Video Streams WebRTC signaling, TURN, and STUN capabilities with any WebRTC compliant mobile or web player.&lt;/p&gt;
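&lt;p&gt;As a rough illustration of the managed signaling endpoints described above, the following boto3 sketch shows how an application might discover them for a channel; the channel ARN, region, and role used here are placeholders for this example, not values from the workshop.&lt;/p&gt;

```python
# Minimal sketch (boto3): discover the managed WebRTC signaling endpoints
# for a Kinesis Video Streams channel. The channel ARN is a placeholder.

def build_endpoint_request(channel_arn, role="MASTER"):
    """Build GetSignalingChannelEndpoint parameters for a given role."""
    return {
        "ChannelARN": channel_arn,
        "SingleMasterChannelEndpointConfiguration": {
            "Protocols": ["WSS", "HTTPS"],
            "Role": role,  # MASTER for the device side, VIEWER for players
        },
    }

def get_signaling_endpoints(channel_arn, region="us-west-2"):
    import boto3  # imported here so the pure helper above needs no AWS deps
    kvs = boto3.client("kinesisvideo", region_name=region)
    resp = kvs.get_signaling_channel_endpoint(**build_endpoint_request(channel_arn))
    # Map protocol to endpoint, e.g. {"WSS": "wss://...", "HTTPS": "https://..."}
    return {e["Protocol"]: e["ResourceEndpoint"] for e in resp["ResourceEndpointList"]}
```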

&lt;h2&gt;
  
  
  Amazon Rekognition — ML Video Analysis
&lt;/h2&gt;

&lt;p&gt;Amazon Rekognition makes it easy to add image and video analysis to your applications. You just have to provide an image or video to the Amazon Rekognition API, and the service can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Identify labels (objects, concepts, people, scenes, and activities) and text&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Detect inappropriate content&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide highly accurate facial analysis, face comparison, and face search capabilities&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/introduction/rekognition#working-with-stored-video-analysis" rel="noopener noreferrer"&gt;Working with stored video analysis&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon Rekognition Video is an API that you can use to analyze videos. With Amazon Rekognition Video, you can detect labels, faces, people, and more in videos that are stored in an Amazon Simple Storage Service (Amazon S3) bucket. Previously, scanning videos for objects or people would have taken many hours of error-prone viewing by a human being. Amazon Rekognition Video automates the detection of items and when they occur throughout a video.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/introduction/rekognition#amazon-rekognition-video-api-overview" rel="noopener noreferrer"&gt;Amazon Rekognition Video API overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon Rekognition Video processes a video that’s stored in an Amazon S3 bucket. The design pattern is an asynchronous set of operations. You start video analysis by calling a Start operation such as &lt;a href="https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartLabelDetection.html" rel="noopener noreferrer"&gt;StartLabelDetection &lt;/a&gt;. The completion status of the request is published to an Amazon Simple Notification Service (Amazon SNS) topic. To get the completion status from the Amazon SNS topic, you can use an Amazon Simple Queue Service (Amazon SQS) queue or an AWS Lambda function. After you have the completion status, you call a Get operation, such as &lt;a href="https://docs.aws.amazon.com/rekognition/latest/APIReference/API_GetLabelDetection.html" rel="noopener noreferrer"&gt;GetLabelDetection &lt;/a&gt;, to get the results of the request.&lt;/p&gt;
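&lt;p&gt;The Start/Get pattern above can be sketched in boto3 as follows; the bucket, key, topic, and role values are placeholders, and a real application would wait for the SNS completion message (via SQS or Lambda) rather than calling the Get operation immediately.&lt;/p&gt;

```python
# Sketch of the asynchronous Rekognition Video pattern: StartLabelDetection,
# completion status via SNS, then GetLabelDetection. All ARNs are placeholders.

def build_start_request(bucket, key, sns_topic_arn, role_arn):
    """Parameters for StartLabelDetection: the video in S3 plus the SNS
    notification channel that receives the completion status."""
    return {
        "Video": {"S3Object": {"Bucket": bucket, "Name": key}},
        "NotificationChannel": {"SNSTopicArn": sns_topic_arn, "RoleArn": role_arn},
        "MinConfidence": 70,
    }

def detect_labels(bucket, key, sns_topic_arn, role_arn, region="us-west-2"):
    import boto3  # imported here so the pure helper above needs no AWS deps
    rek = boto3.client("rekognition", region_name=region)
    job = rek.start_label_detection(**build_start_request(bucket, key, sns_topic_arn, role_arn))
    # In production, wait for the SNS completion message before calling Get.
    return rek.get_label_detection(JobId=job["JobId"], SortBy="TIMESTAMP")
```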

&lt;h2&gt;
  
  
  Amazon OpenSearch Service
&lt;/h2&gt;

&lt;p&gt;Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch. Amazon OpenSearch Service offers the latest versions of OpenSearch, supports 19 versions of Elasticsearch (1.5 to 7.10), and provides visualization capabilities powered by OpenSearch Dashboards and Kibana (1.5 to 7.10).&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step Implementation
&lt;/h2&gt;

&lt;h2&gt;
  
  
  1. Getting Started
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/getting-started#let's-first-review-the-resources-that-have-already-been-created-for-you-as-part-of-this-workshop:" rel="noopener noreferrer"&gt;Let’s first review the resources that have already been created for you as part of this workshop:&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;a href="https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/" rel="noopener noreferrer"&gt;AWS CloudFormation console &lt;/a&gt;. (you need to be in the &lt;strong&gt;US West (Oregon)&lt;/strong&gt; region to be able to view the resources)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the stack with name &lt;strong&gt;iot304-base-stack&lt;/strong&gt; and go to the &lt;strong&gt;Outputs&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij0xj4gw70rw94mxydkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij0xj4gw70rw94mxydkh.png" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The Amazon OpenSearch Service Domain is pre-created in this workshop. This will be used to store the labels related to entities detected in your streaming videos.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make a note of the Key (&lt;em&gt;OpenSearchURL&lt;/em&gt;, &lt;em&gt;OpenSearchARN&lt;/em&gt;) and Value (&lt;em&gt;search-kvs-workshop-domain-...&lt;/em&gt;, &lt;em&gt;arn...&lt;/em&gt;). You'll use them in later parts of the workshop.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also explore the &lt;strong&gt;Resources&lt;/strong&gt; tab. In addition to the OpenSearch Domain, there is an &lt;a href="https://aws.amazon.com/pm/cloud9/" rel="noopener noreferrer"&gt;AWS Cloud9 &lt;/a&gt;instance also created which you’ll use to host the Video Analytics application (front-end) in later half of this workshop.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flq9ijrbi1ahw8q9bbbts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flq9ijrbi1ahw8q9bbbts.png" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Create Kinesis Video Stream Resources
&lt;/h2&gt;

&lt;p&gt;As described in the ‘&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/introduction/kvs/" rel="noopener noreferrer"&gt;Kinesis Video Streams with WebRTC Components&lt;/a&gt;’ section, a signaling channel is required to establish a peer-to-peer connection when you want to enable live streaming of video, and a video stream is a KVS resource that lets you transport live video data, store it, and make the data available for consumption both in real time and on a batch or ad hoc basis. In this section, you will create a signaling channel and a video stream in KVS, and initiate streaming from your laptop’s camera through the Google Chrome browser. You will first create a peer-to-peer stream to see near real-time video, and then you’ll ingest video into a KVS stream and test the media playback via the KVS Video Stream console.&lt;/p&gt;

&lt;p&gt;Please make sure you are in the &lt;strong&gt;US West (Oregon)&lt;/strong&gt; region for all steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/kvs#creating-a-signaling-channel:" rel="noopener noreferrer"&gt;Creating a signaling channel:&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the AWS Management Console, open the &lt;a href="https://us-west-2.console.aws.amazon.com/kinesisvideo/home?region=us-west-2#/" rel="noopener noreferrer"&gt;Kinesis Video Streams console&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left navigation, click &lt;strong&gt;Signaling channels&lt;/strong&gt;. And then click &lt;strong&gt;Create signaling channel&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybmo76oh5ze3rc40a94g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybmo76oh5ze3rc40a94g.png" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On the &lt;strong&gt;Create a new signaling channel&lt;/strong&gt; page, type the name for the signaling channel. For this workshop you can use StreamChannel. Leave the default &lt;strong&gt;Time-to-live (Ttl)&lt;/strong&gt; value as 60 seconds.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc4ns1ervsv48qiho9ei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpc4ns1ervsv48qiho9ei.png" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Create signaling channel&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the signaling channel is created, review the details on the channel’s details page. Make note of the &lt;strong&gt;Signaling channel ARN&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
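&lt;p&gt;The console steps above can also be performed with the CreateSignalingChannel API; this boto3 sketch mirrors the same channel name and 60-second message TTL.&lt;/p&gt;

```python
# Sketch: create the workshop's signaling channel via the API instead of
# the console, with the same name and TTL as the steps above.

def build_channel_request(name="StreamChannel", ttl_seconds=60):
    """CreateSignalingChannel parameters matching the console steps."""
    return {
        "ChannelName": name,
        "ChannelType": "SINGLE_MASTER",
        "SingleMasterConfiguration": {"MessageTtlSeconds": ttl_seconds},
    }

def create_channel(region="us-west-2"):
    import boto3  # imported here so the pure helper above needs no AWS deps
    kvs = boto3.client("kinesisvideo", region_name=region)
    # Returns the Signaling channel ARN to note down, as in step 5 above.
    return kvs.create_signaling_channel(**build_channel_request())["ChannelARN"]
```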

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/kvs#creating-a-video-stream:" rel="noopener noreferrer"&gt;Creating a video stream:&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the left navigation pane, select &lt;strong&gt;Video streams&lt;/strong&gt;. Click &lt;strong&gt;Create video stream&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd32s3vbz8v4arxunvo2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd32s3vbz8v4arxunvo2a.png" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On the &lt;strong&gt;Create a new video stream&lt;/strong&gt; page, type a name for this stream. For this workshop you can use WebRTCStream. Use the default configuration for other parameters.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6iqg7r1jcu0hv8egfe2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6iqg7r1jcu0hv8egfe2.png" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Create video stream&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the stream is created, review the details on the &lt;strong&gt;Video Streams&lt;/strong&gt; page. Make note of the &lt;strong&gt;Video stream ARN&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
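&lt;p&gt;Equivalently, the stream can be created via the CreateStream API; this boto3 sketch uses the same name, while the 24-hour retention shown is an assumption for illustration rather than the console's exact default configuration.&lt;/p&gt;

```python
# Sketch: create the workshop's video stream via the API. The 24-hour
# retention value is an assumption, not necessarily the console default.

def build_stream_request(name="WebRTCStream", retention_hours=24):
    """CreateStream parameters for the workshop's stream."""
    return {"StreamName": name, "DataRetentionInHours": retention_hours}

def create_stream(region="us-west-2"):
    import boto3  # imported here so the pure helper above needs no AWS deps
    kvs = boto3.client("kinesisvideo", region_name=region)
    # Returns the Video stream ARN to note down, as in step 4 above.
    return kvs.create_stream(**build_stream_request())["StreamARN"]
```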

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/kvs#stream-video-peer-to-peer-in-near-real-time:" rel="noopener noreferrer"&gt;Stream video peer-to-peer in near real-time:&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://awslabs.github.io/amazon-kinesis-video-streams-webrtc-sdk-js/examples/index.html" rel="noopener noreferrer"&gt;Kinesis Video Streams WebRTC Test Page &lt;/a&gt;. You’ll use this page to initiate the video stream from your system. This page has been created for testing purposes using the &lt;a href="https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-js" rel="noopener noreferrer"&gt;Amazon Kinesis Video Streams WebRTC SDK for JavaScript &lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91526x78b8dqjvrpb49f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91526x78b8dqjvrpb49f.png" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Set the &lt;strong&gt;Region&lt;/strong&gt;: us-west-2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the &lt;strong&gt;Access Key ID&lt;/strong&gt;, &lt;strong&gt;Secret Access Key&lt;/strong&gt; and &lt;strong&gt;Session Token&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If you are &lt;strong&gt;undertaking this outside of an AWS event and using your own AWS account&lt;/strong&gt;, then please retrieve your credentials as you normally would, and paste them here.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you are &lt;strong&gt;participating in an AWS event and using an AWS-provided account&lt;/strong&gt;, please refer to Step 7 of this &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/accessing-workshop-studio/" rel="noopener noreferrer"&gt;section&lt;/a&gt; and the Get AWS CLI credentials option. This is located on the bottom-left corner of the Workshop Studio Event Dashboard. Copy and paste the values within the double quotes (as shown for &lt;strong&gt;AWS_ACCESS_KEY_ID&lt;/strong&gt; below).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8h3p0j6iszbi26e9hh0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8h3p0j6iszbi26e9hh0.png" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Please note:&lt;/strong&gt; While &lt;strong&gt;Session Token&lt;/strong&gt; is marked as an ‘Optional’ field in the KVS WebRTC Test Page, you must provide it if you’re participating in an AWS event with an AWS-provided account.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Set the &lt;strong&gt;Channel Name&lt;/strong&gt;: StreamChannel (or the name you chose when creating the signaling channel).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify that the &lt;strong&gt;Send Video&lt;/strong&gt; and &lt;strong&gt;Send Audio&lt;/strong&gt; options are enabled under &lt;strong&gt;Tracks&lt;/strong&gt;. If they are not checked already, &lt;strong&gt;please make sure you enable both.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrlt4mww6jjo0lzauniq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrlt4mww6jjo0lzauniq.png" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;strong&gt;Start Master&lt;/strong&gt; button located right above the &lt;strong&gt;Logs&lt;/strong&gt; section. It may ask you to &lt;strong&gt;Allow Access&lt;/strong&gt; for your browser to access your system’s camera and microphone. Please select &lt;strong&gt;Allow&lt;/strong&gt; to proceed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuds68urcate9erb9y7s6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuds68urcate9erb9y7s6.png" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/kvs#view-the-real-time-peer-to-peer-stream:" rel="noopener noreferrer"&gt;Vew the real-time peer-to-peer stream:&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the &lt;a href="https://us-west-2.console.aws.amazon.com/kinesisvideo/home?region=us-west-2#/" rel="noopener noreferrer"&gt;Kinesis Video Streams console &lt;/a&gt;, on the left navigation pane, click &lt;strong&gt;Signaling channels&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the signaling channel you created earlier (StreamChannel if you gave the same name as suggested in steps above).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expand the &lt;strong&gt;Media playback viewer&lt;/strong&gt; option and press the play button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fix26wxbh0f3u49o3g2w2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fix26wxbh0f3u49o3g2w2.png" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The peer-to-peer video stream should appear within a few seconds. Put this viewer and the KVS WebRTC Test Page side-by-side and confirm that the latency is minimal. Congratulations, you have a real-time peer-to-peer stream!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Return to the KVS WebRTC Test Page and click on Stop Master.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/kvs#ingest-video-into-the-cloud:" rel="noopener noreferrer"&gt;Ingest video into the cloud:&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Now you’ll reconfigure the KVS WebRTC Test Page to use the WebRTC ingest feature to store video in a stream, instead of having a peer-to-peer stream.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Expand &lt;strong&gt;‘WebRTC Ingestion and Storage’&lt;/strong&gt; and provide WebRTCStream as the Stream name (or as described when creating the video stream).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Update Media Storage Configuration&lt;/strong&gt;. &lt;em&gt;(You can verify that the configuration was updated successfully by scrolling down to the Logs section and looking for the success message shown in the screenshot below.)&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfp9yl1pja17abviv9fm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfp9yl1pja17abviv9fm.png" width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll back to the &lt;strong&gt;‘WebRTC Ingestion and Storage’&lt;/strong&gt; section and enable the &lt;strong&gt;Ingestion and storage peer joins automatically&lt;/strong&gt; option by ticking the checkbox.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93e6azfew3oeaz6x01td.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93e6azfew3oeaz6x01td.png" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;strong&gt;Start Master&lt;/strong&gt; button located right above the &lt;strong&gt;Logs&lt;/strong&gt; section.&lt;/li&gt;
&lt;/ol&gt;
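&lt;p&gt;Updating the media storage configuration, as the test page does, maps to the UpdateMediaStorageConfiguration API; in this boto3 sketch the channel and stream ARNs are the ones you noted earlier, passed in as function parameters.&lt;/p&gt;

```python
# Sketch: link the signaling channel to the video stream so ingested media
# is stored, mirroring 'Update Media Storage Configuration' on the test page.

def build_storage_config(stream_arn, enabled=True):
    """MediaStorageConfiguration payload linking the channel to the stream."""
    return {
        "StreamARN": stream_arn,
        "Status": "ENABLED" if enabled else "DISABLED",
    }

def enable_ingestion(channel_arn, stream_arn, region="us-west-2"):
    import boto3  # imported here so the pure helper above needs no AWS deps
    kvs = boto3.client("kinesisvideo", region_name=region)
    kvs.update_media_storage_configuration(
        ChannelARN=channel_arn,
        MediaStorageConfiguration=build_storage_config(stream_arn),
    )
```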

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/kvs#view-the-video-stored-in-the-stream:" rel="noopener noreferrer"&gt;View the video stored in the stream:&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the &lt;a href="https://us-west-2.console.aws.amazon.com/kinesisvideo/home?region=us-west-2#/" rel="noopener noreferrer"&gt;Kinesis Video Streams console &lt;/a&gt;, on the left navigation pane, click &lt;strong&gt;Video streams&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the video stream created before (WebRTCStream if you gave the same name as suggested in steps above).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expand the &lt;strong&gt;Media playback&lt;/strong&gt; option.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbk9cbvu2qxue0qqebk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbk9cbvu2qxue0qqebk2.png" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Wait a few seconds and you should now be able to see the video stream coming in from your laptop’s camera. &lt;em&gt;(Ignore the Download SDK pop-up; if it persists, just refresh the KVS Video Streams console page once.)&lt;/em&gt; Note that since this video is playing back from storage, it has considerable latency. Congratulations, you have successfully played back video that you ingested and stored in a stream!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Return to the KVS WebRTC Test Page and click on Stop Master.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
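&lt;p&gt;Playback from storage can also be scripted: a common approach is to request an HLS session URL from the archived-media API, as in this boto3 sketch (the stream name WebRTCStream is assumed, matching the steps above).&lt;/p&gt;

```python
# Sketch: fetch an HLS playback URL for the stored stream. The archived-media
# client must target the stream-specific data endpoint returned by KVS.

def hls_request(stream_name="WebRTCStream", mode="LIVE"):
    """GetHLSStreamingSessionURL parameters; LIVE plays the in-flight stream,
    ON_DEMAND replays a stored time range."""
    return {"StreamName": stream_name, "PlaybackMode": mode}

def get_playback_url(stream_name="WebRTCStream", region="us-west-2"):
    import boto3  # imported here so the pure helper above needs no AWS deps
    kvs = boto3.client("kinesisvideo", region_name=region)
    endpoint = kvs.get_data_endpoint(
        StreamName=stream_name, APIName="GET_HLS_STREAMING_SESSION_URL"
    )["DataEndpoint"]
    media = boto3.client(
        "kinesis-video-archived-media", region_name=region, endpoint_url=endpoint
    )
    return media.get_hls_streaming_session_url(**hls_request(stream_name))["HLSStreamingSessionURL"]
```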

&lt;h2&gt;
  
  
  4. Provision supporting resources — Lambda, API Gateway &amp;amp; Step Functions
&lt;/h2&gt;

&lt;p&gt;You will now create the following resources, which will be used later in the workshop. A CloudFormation template, provided in the next step, creates all of the resources listed below:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resource identifier in CloudFormation&lt;/th&gt;
&lt;th&gt;Usage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;kvslambda&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Lambda function that reads the KVS video stream, cuts it into smaller chunks, and stores them in S3. It also calls Rekognition’s StartLabelDetection API to start asynchronous label detection on the stored video.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;LambdaRole&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;IAM role that enables AWS Lambda to work with Rekognition, Kinesis Video Streams, the OpenSearch cluster, SNS, and S3.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;MyCloudFrontDistribution&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;CloudFront distribution that serves the media in the front-end application.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;rekognitionlambda&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Lambda function triggered by SNS; it fetches the detected labels, their confidence scores, and timestamps from Rekognition and stores them in Amazon OpenSearch Service.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;RekognitionRole&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;IAM role that enables Amazon Rekognition to conduct stored-video analysis.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;S3BucketVideo&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Amazon S3 bucket for storing video snippets. The incoming video streams from KVS are cut into smaller segments and stored in this bucket.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;searchApiGateway&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;REST API in API Gateway that backs the search feature of the front-end application.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;searchlambda&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Lambda function invoked when an end user searches for an entity in the front-end application. The search API triggers this function, which in turn retrieves the entities from OpenSearch.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;SnsTopic&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;SNS topic that publishes a notification once Rekognition completes label detection for a video clip.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;KVSStateMachine&lt;/strong&gt;&lt;/td&gt;&lt;td&gt;Step Function that orchestrates the whole workflow.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
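&lt;p&gt;To make the kvslambda-to-Rekognition hand-off concrete: once a clip lands in S3, starting asynchronous label detection is a single API call. A hedged sketch in Python (the helper itself is ours for illustration; the request fields follow Rekognition’s public StartLabelDetection API):&lt;/p&gt;

```python
def start_label_detection_params(bucket, key, sns_topic_arn, role_arn,
                                 min_confidence=50.0):
    """Build kwargs for rekognition.start_label_detection(): analyze the
    clip stored at s3://bucket/key and publish completion to the SNS
    topic using the given role (the workshop's RekognitionRole)."""
    return {
        "Video": {"S3Object": {"Bucket": bucket, "Name": key}},
        "NotificationChannel": {
            "SNSTopicArn": sns_topic_arn,
            "RoleArn": role_arn,
        },
        "MinConfidence": min_confidence,
    }

# Usage (placeholder bucket name and ARNs; requires AWS credentials):
#   import boto3
#   rekognition = boto3.client("rekognition")
#   job = rekognition.start_label_detection(**start_label_detection_params(
#       "my-video-bucket", "clips/WebRTCStream/000001.mp4",
#       "arn:aws:sns:us-west-2:111122223333:SnsTopic",
#       "arn:aws:iam::111122223333:role/RekognitionRole"))
```

&lt;p&gt;The NotificationChannel is what ties SnsTopic and rekognitionlambda into the flow: Rekognition publishes the job completion there, and SNS triggers the downstream Lambda.&lt;/p&gt;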

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/create-cfn#4.1-create-cloudformation-stack" rel="noopener noreferrer"&gt;4.1 Create CloudFormation Stack&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Follow the steps below, taking care to pass the parameter values exactly as described:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the following link to launch the CloudFormation stack: &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/create/review?stackName=iot304-lambda-stack&amp;amp;templateURL=https://ws-assets-prod-iad-r-pdx-f3b3f9f1a7d6a3d0.s3.us-west-2.amazonaws.com/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/iot304-lambda-stack.yaml" rel="noopener noreferrer"&gt;Launch CloudFormation stack in us-west-2 (Oregon)&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under the &lt;strong&gt;Parameters&lt;/strong&gt; section of the &lt;strong&gt;Specify stack details&lt;/strong&gt; page, provide the following values:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;KVSStreamName&lt;/strong&gt;: WebRTCStream. (This is the name of the KVS Video Stream created in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/kvs/" rel="noopener noreferrer"&gt;Section 3&lt;/a&gt; Step 9.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;OpenSearchARN&lt;/strong&gt;: This is the ARN for the Amazon OpenSearch Service domain. Paste the value noted in Step 4 of the ‘&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/getting-started/" rel="noopener noreferrer"&gt;Getting Started&lt;/a&gt;’ section. For quick access, you can also navigate to the &lt;strong&gt;Outputs&lt;/strong&gt; section of &lt;strong&gt;iot304-base-stack&lt;/strong&gt; &lt;a href="https://us-west-2.console.aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;here&lt;/a&gt; and get the value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;OpenSearchApiUrl&lt;/strong&gt;: This is the domain endpoint for the Amazon OpenSearch Service domain. Paste the value noted in Step 4 of the ‘&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/getting-started/" rel="noopener noreferrer"&gt;Getting Started&lt;/a&gt;’ section. For quick access, you can also navigate to the &lt;strong&gt;Outputs&lt;/strong&gt; section of &lt;strong&gt;iot304-base-stack&lt;/strong&gt; &lt;a href="https://us-west-2.console.aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;here&lt;/a&gt; and get the value.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
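&lt;p&gt;If you prefer scripting over the console, the same three values can be passed to CloudFormation’s CreateStack call. A small sketch (the helper is illustrative; the parameter keys match the stack form above):&lt;/p&gt;

```python
def cfn_parameters(stream_name, opensearch_arn, opensearch_api_url):
    """Build the Parameters list for cloudformation.create_stack(),
    using the same three keys the console form asks for."""
    values = {
        "KVSStreamName": stream_name,
        "OpenSearchARN": opensearch_arn,
        "OpenSearchApiUrl": opensearch_api_url,
    }
    return [{"ParameterKey": k, "ParameterValue": v}
            for k, v in values.items()]

# Usage (requires AWS credentials and the template URL from the launch link):
#   import boto3
#   cfn = boto3.client("cloudformation", region_name="us-west-2")
#   cfn.create_stack(
#       StackName="iot304-lambda-stack",
#       TemplateURL="...",  # the iot304-lambda-stack.yaml URL above
#       Parameters=cfn_parameters("WebRTCStream", opensearch_arn, opensearch_url),
#       Capabilities=["CAPABILITY_NAMED_IAM"])
```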

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvx9bujctzleh7ify6x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvx9bujctzleh7ify6x7.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt; till you get to the &lt;strong&gt;Review&lt;/strong&gt; page (Step 4 in console). Scroll down till the end and check the box for &lt;strong&gt;I acknowledge that AWS CloudFormation might create IAM resources with custom names&lt;/strong&gt; and click &lt;strong&gt;Submit&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscr4kxcs1ls881rvdequ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscr4kxcs1ls881rvdequ.png" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Wait for the stack creation to reach &lt;strong&gt;CREATE_COMPLETE&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once stack is provisioned, make a note of the &lt;strong&gt;Amazon API Gateway URL&lt;/strong&gt;, &lt;strong&gt;AWS Lambda Role ARN&lt;/strong&gt;, &lt;strong&gt;Amazon CloudFront Distribution&lt;/strong&gt; and the &lt;strong&gt;AWS Step Function ARN&lt;/strong&gt; from the Outputs section. You'll use these in the upcoming sections.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56rzoe9ssut2ampgt8hh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56rzoe9ssut2ampgt8hh.png" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/create-cfn#4.2-mapping-the-iam-role-for-lambda-with-amazon-opensearch-service-domain" rel="noopener noreferrer"&gt;4.2 Mapping the IAM role for Lambda with Amazon OpenSearch Service domain&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;For the &lt;strong&gt;searchlambda&lt;/strong&gt; function to search and analyze data stored in Amazon OpenSearch Service, you’ll need to map the Lambda IAM role &lt;em&gt;(LambdaRole)&lt;/em&gt; created in the section above to the OpenSearch domain.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;a href="https://us-west-2.console.aws.amazon.com/aos/home?region=us-west-2#opensearch/domains" rel="noopener noreferrer"&gt;Amazon OpenSearch Service Domain console &lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the Domain Name kvs-workshop-domain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search for the OpenSearch Dashboards URL on the top-right corner and click on it. This will open a new tab for the OpenSearch Dashboards.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kd3rm8wofqn51ibitks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kd3rm8wofqn51ibitks.png" width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You’ll be prompted to enter the login credentials here. Copy and paste the following and click &lt;strong&gt;Log in&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Username: admin&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Password: Amazon90!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Keep the tenant selection as &lt;strong&gt;Global&lt;/strong&gt; and &lt;strong&gt;Confirm&lt;/strong&gt; (if you get a pop-up asking you to add data, select &lt;strong&gt;Explore on my own&lt;/strong&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ai952yyvwdfyr56vezt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ai952yyvwdfyr56vezt.png" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the hamburger menu option on the top-left side of the screen and select &lt;strong&gt;Security&lt;/strong&gt; under &lt;strong&gt;Management&lt;/strong&gt; section in the left navigation pane.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0teki25f29t3ul36ppoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0teki25f29t3ul36ppoc.png" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on &lt;strong&gt;Roles&lt;/strong&gt; and copy-paste all_access in the Search option and hit &lt;strong&gt;return&lt;/strong&gt;/&lt;strong&gt;enter&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0e1njqggx72hl25fkgr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0e1njqggx72hl25fkgr.png" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;strong&gt;all_access&lt;/strong&gt; hyperlink and switch to the &lt;strong&gt;Mapped Users&lt;/strong&gt; tab on top half of the screen. Click on &lt;strong&gt;Manage Mapping&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6s5nmmavcc07l3bc2j7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6s5nmmavcc07l3bc2j7.png" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scroll down to the &lt;strong&gt;Backend roles&lt;/strong&gt; section.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujszgsfholzh9ov3pdmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujszgsfholzh9ov3pdmv.png" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Paste the IAM Role ARN for &lt;strong&gt;LambdaRole&lt;/strong&gt; that you copied earlier in Step 6 &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/create-cfn#4.1-create-cloudformation-stack" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Alternatively, go back to the &lt;a href="https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/" rel="noopener noreferrer"&gt;CloudFormation Console&lt;/a&gt;, select &lt;strong&gt;iot304-lambda-stack&lt;/strong&gt; created in the previous section, navigate to the &lt;strong&gt;Outputs&lt;/strong&gt; tab, and copy the value corresponding to the LambdaRole key.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9ivnemvnzx7msvaq8xe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9ivnemvnzx7msvaq8xe.png" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go back to the &lt;strong&gt;OpenSearch Dashboards&lt;/strong&gt; console and paste this value under the &lt;strong&gt;Backend roles&lt;/strong&gt; section.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzxmyuyhgrq159bgt4jw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzxmyuyhgrq159bgt4jw.png" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Map&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the role has been added successfully, you will see an entry named &lt;strong&gt;Backend role&lt;/strong&gt; with &lt;strong&gt;LambdaRole’s ARN&lt;/strong&gt; mapped to it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdchwlq2om0bkghmbcmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdchwlq2om0bkghmbcmn.png" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;
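&lt;p&gt;With &lt;strong&gt;LambdaRole&lt;/strong&gt; mapped, &lt;strong&gt;rekognitionlambda&lt;/strong&gt; can index into the domain. As a rough sketch of that step (the field names below are assumptions for illustration, not the workshop’s actual schema), here is how a Rekognition GetLabelDetection response could be flattened into OpenSearch documents:&lt;/p&gt;

```python
def labels_to_documents(response, clip_key):
    """Flatten a Rekognition GetLabelDetection response into one
    document per detected label occurrence. Field names ("label",
    "confidence", ...) are assumed, not the workshop's real schema."""
    docs = []
    for item in response.get("Labels", []):
        docs.append({
            "label": item["Label"]["Name"],
            "confidence": item["Label"]["Confidence"],
            "timestamp_ms": item["Timestamp"],  # offset into the clip
            "clip": clip_key,                   # S3 key served via CloudFront
        })
    return docs
```

&lt;p&gt;Each document keeps the timestamp Rekognition reports, which is what lets the front-end jump to the moment a label appears in the stored clip.&lt;/p&gt;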

&lt;h2&gt;
  
  
  5. Set up the front-end search application
&lt;/h2&gt;

&lt;p&gt;You’ll now host the Node.js based ‘&lt;strong&gt;Video Analytics App&lt;/strong&gt;’ (front-end application) on an AWS Cloud9 IDE instance. AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser.&lt;/p&gt;

&lt;p&gt;This instance was pre-created for you as part of the iot304-base-stack as mentioned in Step 5 of &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/getting-started/" rel="noopener noreferrer"&gt;section 2&lt;/a&gt;. This application would be your interface to search for entities in video streams and verify the occurrences, timestamps and clips based on labels detected. Artifacts for this app already exist &lt;a href="https://github.com/aws-samples/aws-workshop-for-real-time-video-analysis" rel="noopener noreferrer"&gt;here &lt;/a&gt;. You just need to clone this to get started.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://us-west-2.console.aws.amazon.com/cloud9control/home?region=us-west-2#/" rel="noopener noreferrer"&gt;AWS Cloud9 Console&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Open&lt;/strong&gt; under &lt;strong&gt;Cloud9 IDE&lt;/strong&gt; for the Environment named kvs-workshop-environment&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrpf8lw9u3chq6ij09oz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrpf8lw9u3chq6ij09oz.png" width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You should see the Welcome page load up. The terminal is at the bottom of the screen, the directory tree is on the left, and the &lt;strong&gt;+&lt;/strong&gt; sign on top gives you options to open new files, terminals, configurations, etc.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frij0nviskjsyipxk3gtp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frij0nviskjsyipxk3gtp.png" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Head to the terminal and clone the Git repository by copying and pasting the following into the Cloud9 terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/aws-samples/aws-workshop-for-real-time-video-analysis
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4e54qhm0ec5z1myg92rr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4e54qhm0ec5z1myg92rr.png" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the cloned directory &lt;strong&gt;aws-workshop-for-real-time-video-analysis&lt;/strong&gt; in your Cloud9 IDE and expand it. Navigate to &lt;strong&gt;resources&lt;/strong&gt; &amp;gt; &lt;strong&gt;code&lt;/strong&gt; &amp;gt; &lt;strong&gt;frontend&lt;/strong&gt; &amp;gt; &lt;strong&gt;src&lt;/strong&gt; &amp;gt; config.js file. Double click on that to open it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xegjuuzd6ota1ph729w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xegjuuzd6ota1ph729w.png" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Take the values of the Amazon CloudFront Distribution URL and Amazon API Gateway URL copied earlier in Step 6 &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/create-cfn#4.1-create-cloudformation-stack" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Alternatively, you can get them from the &lt;strong&gt;Outputs&lt;/strong&gt; section of the &lt;strong&gt;iot304-lambda-stack&lt;/strong&gt; in &lt;a href="https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/" rel="noopener noreferrer"&gt;CloudFormation&lt;/a&gt;; these are the values corresponding to the &lt;strong&gt;MyCloudFrontDistribution&lt;/strong&gt; and &lt;strong&gt;APIGWURL&lt;/strong&gt; keys (refer to the first screenshot below). Paste the values into the CLOUDFRONT_URL and API_GW_URL parameters in &lt;strong&gt;config.js&lt;/strong&gt; (refer to the second screenshot below) and save the file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauxrkd2rpghdkb9yb3tn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauxrkd2rpghdkb9yb3tn.png" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35lp3leuoqcckho3v537.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35lp3leuoqcckho3v537.png" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;In the terminal, cd into the frontend folder within the aws-workshop-for-real-time-video-analysis directory by running the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd aws-workshop-for-real-time-video-analysis/resources/code/frontend/
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From this directory, run the following command in the terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1zhusrr8avwbhapicaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1zhusrr8avwbhapicaa.png" width="800" height="56"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Now start the application by running the following command in the terminal once the previous step is completed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run start
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After a few seconds the compilation will complete. Then click the &lt;strong&gt;Preview&lt;/strong&gt; option at the top.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiv2ga5zkbz8sccqj4ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiv2ga5zkbz8sccqj4ao.png" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on &lt;strong&gt;Preview Running Application&lt;/strong&gt;. This would open a small window on the bottom-right part of the screen. Click on &lt;strong&gt;Pop Out Into New Window&lt;/strong&gt; option.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03956tm812ewxhab0i34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03956tm812ewxhab0i34.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;This would open the application in a new tab in your browser.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft791hbjjkm4dpo4aghau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft791hbjjkm4dpo4aghau.png" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Tying it all together
&lt;/h2&gt;

&lt;p&gt;Refer to the solution components listed below to verify everything you have created so far:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwc7dk2obw4z5vtowwbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwc7dk2obw4z5vtowwbl.png" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Your laptop camera and its video feed over WebRTC are set up via the &lt;a href="https://awslabs.github.io/amazon-kinesis-video-streams-webrtc-sdk-js/examples/index.html" rel="noopener noreferrer"&gt;Kinesis Video Streams WebRTC Test Page&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kinesis Video Stream successfully streams video via webRTC (as confirmed in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/kvs/" rel="noopener noreferrer"&gt;section 3&lt;/a&gt; step 16).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kvslambda and the S3 bucket to store videos are created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SNS topic and rekognitionlambda are created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;OpenSearch Domain is created and required IAM role is attached.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The front-end search app, CloudFront distribution to serve media, searchApiGateway and searchlambda are created and setup.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now only two steps remain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Restart the Master Feed (as done earlier while testing in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/kvs/" rel="noopener noreferrer"&gt;section 3&lt;/a&gt; step 12): open the &lt;strong&gt;Kinesis Video Streams WebRTC Test Page&lt;/strong&gt;, confirm the values selected in the earlier steps, and click the Start Master button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trigger the Step Function, created in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/create-cfn#4.1-create-cloudformation-stack" rel="noopener noreferrer"&gt;section 4.1&lt;/a&gt;, to orchestrate the KVS-Lambda-Rekognition flow. This is the workflow that connects the 2nd and 3rd boxes in the diagram above. To do this, go to the &lt;strong&gt;Cloud9 environment&lt;/strong&gt; again and open a new terminal by clicking on the &lt;strong&gt;+&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7yigprwrysz8asopx0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7yigprwrysz8asopx0u.png" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now update the &lt;em&gt;&amp;lt;STATE_MACHINE_ARN&amp;gt;&lt;/em&gt; field in the command below with the value of the &lt;strong&gt;Step Function ARN&lt;/strong&gt; obtained from &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/create-cfn#4.1-create-cloudformation-stack" rel="noopener noreferrer"&gt;section 4.1&lt;/a&gt; &lt;strong&gt;Step 6&lt;/strong&gt;. Alternatively, navigate to the &lt;a href="https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/" rel="noopener noreferrer"&gt;CloudFormation Console&lt;/a&gt;, select &lt;strong&gt;iot304-lambda-stack&lt;/strong&gt; created in the previous section, open the &lt;strong&gt;Outputs&lt;/strong&gt; tab, and copy the value corresponding to &lt;strong&gt;StepFunctionARN&lt;/strong&gt; (refer to the image below the code snippet).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws stepfunctions start-execution --state-machine-arn &amp;lt;STATE_MACHINE_ARN&amp;gt; --input "{\"doContinue\" : true}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxfy3mwpuo54tffxwlfe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxfy3mwpuo54tffxwlfe.png" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And &lt;strong&gt;run&lt;/strong&gt; the command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jve1w4rmm8ei0akm4fq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jve1w4rmm8ei0akm4fq.png" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will now initiate the whole workflow.&lt;/p&gt;
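&lt;p&gt;The CLI call above can equally be made from code. The sketch below builds the same request for boto3’s Step Functions client; the helper name is ours for illustration, and note that the execution input must be a JSON string, which is why the CLI command escapes the quotes:&lt;/p&gt;

```python
import json

def start_execution_args(state_machine_arn, do_continue=True):
    """Build kwargs for stepfunctions.start_execution(), mirroring the
    CLI command above: the input is a JSON-encoded string."""
    return {
        "stateMachineArn": state_machine_arn,
        "input": json.dumps({"doContinue": do_continue}),
    }

# Usage (requires AWS credentials; substitute your Step Function ARN):
#   import boto3
#   sfn = boto3.client("stepfunctions", region_name="us-west-2")
#   sfn.start_execution(**start_execution_args(state_machine_arn))
```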

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/tying-together#verify-the-orchestration" rel="noopener noreferrer"&gt;Verify the orchestration&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To verify if the execution has started, you can go to the &lt;a href="https://us-west-2.console.aws.amazon.com/states/home?region=us-west-2#/statemachines" rel="noopener noreferrer"&gt;Step Functions Console &lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflv0pnyucigksq95y6ft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflv0pnyucigksq95y6ft.png" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;KVSStateMachine-&lt;/strong&gt; and scroll down to the &lt;strong&gt;Executions&lt;/strong&gt;. Select the execution that would be in the &lt;strong&gt;Running&lt;/strong&gt; Status and scroll to the &lt;strong&gt;Events&lt;/strong&gt; section at the bottom.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rkmool5k0janaz0hbkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rkmool5k0janaz0hbkj.png" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;TaskSucceeded&lt;/strong&gt; event indicates that the invocations have started. After a few seconds, you can go back to the Video Analytics front-end application and begin searching for entities.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Search Away!
&lt;/h2&gt;

&lt;p&gt;Now that you have all the components created and configured, you can take this setup for a spin.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/summary#head-back-to-the-front-end-application" rel="noopener noreferrer"&gt;Head back to the front-end application&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Refer to step 12 of &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/0c7e49a5-b5c4-4f6d-b26b-9b2d8b5cbd2f/en-US/frontend/" rel="noopener noreferrer"&gt;section 5&lt;/a&gt;, where you launched a preview of the application in your browser. Head back to that preview.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts9iejh078blwtb7img3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts9iejh078blwtb7img3.png" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you will see the following options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Search Labels&lt;/strong&gt;: This is where you type the entities to search for within your video streams. The application fetches the values and video frames based on the labels detected from your video stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start and End Date&lt;/strong&gt;: You can select the time period within which you’d like to search for the detected entities.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try searching for something that could have appeared in the video stream. If it was detected, you’ll get an option in the form of a drop-down menu that you can click on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19dlwzgqurj8bl3klnb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19dlwzgqurj8bl3klnb5.png" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You should see the results in a similar format:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyqyn1r7qkvhgbqlks3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyqyn1r7qkvhgbqlks3n.png" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can search for other entities similarly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;With this we have concluded the workshop!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In real-life scenarios, the WebRTC stream that we simulated through our browser and our system’s camera and microphone could be replaced with any other source that supports WebRTC streaming, such as mobile devices, a Raspberry Pi with a USB camera, or even CCTV or IP cameras that support WebRTC streaming.&lt;/p&gt;

&lt;p&gt;You can also experiment with adding more functionalities such as setting alerts to highlight specific tagged objects once they are detected by Amazon Rekognition Video.&lt;/p&gt;
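&lt;p&gt;As a sketch of that alerting idea (an extension I'm suggesting, not part of the workshop): a small function can scan labels shaped like Amazon Rekognition's &lt;code&gt;DetectLabels&lt;/code&gt; output and publish a message for anything on a watch list. The &lt;code&gt;publish&lt;/code&gt; callable and the 90% confidence threshold are assumptions; in practice &lt;code&gt;publish&lt;/code&gt; could wrap an SNS topic publish.&lt;/p&gt;

```python
def alert_on_labels(detected_labels, watch_list, publish):
    """Publish an alert for every detected label on the watch list.

    detected_labels: dicts shaped like Rekognition's DetectLabels output,
    e.g. {"Name": "Person", "Confidence": 98.7}.
    publish: callable taking a message string (e.g. an SNS topic publish).
    Returns the label names that triggered an alert.
    """
    triggered = []
    for label in detected_labels:
        if label["Name"] in watch_list and label["Confidence"] >= 90.0:
            publish(f"Alert: {label['Name']} detected "
                    f"({label['Confidence']:.1f}% confidence)")
            triggered.append(label["Name"])
    return triggered


# Demo with a list acting as the publisher; with boto3 this could be
# lambda msg: sns.publish(TopicArn=topic_arn, Message=msg).
alerts = []
alert_on_labels(
    [{"Name": "Person", "Confidence": 98.7}, {"Name": "Tree", "Confidence": 95.0}],
    {"Person"},
    alerts.append,
)
```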

&lt;h2&gt;
  
  
Reference Output
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb6ua3flqwkcckf4qsu4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb6ua3flqwkcckf4qsu4.gif" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Clean Up
&lt;/h2&gt;

&lt;p&gt;If you undertook this workshop yourself, using your own AWS account, conclude by deleting the resources to avoid incurring unnecessary costs. Navigate to the &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-west-2" rel="noopener noreferrer"&gt;AWS CloudFormation&lt;/a&gt; console and delete the following stacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;iot304-lambda-stack&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;iot304-base-stack&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this project, I built an automated video monitoring system using AWS IoT and AI/ML services. By integrating AWS IoT Core, I connected video devices to the cloud for real-time data streaming.&lt;/p&gt;

&lt;p&gt;Leveraging Amazon Rekognition, I implemented AI-driven video analytics to automatically detect objects, faces, and critical events in video feeds. The system used AWS Lambda and CloudWatch to trigger automated responses based on analysis results, creating an efficient and scalable solution for real-time monitoring.&lt;/p&gt;

&lt;p&gt;This project demonstrated the potential of combining IoT and AI/ML to enhance video surveillance capabilities, providing practical benefits in security and operational efficiency.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s connect: &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>aws</category>
      <category>cloud</category>
      <category>iot</category>
    </item>
    <item>
      <title>The Cloud Resume Challenge — AWS</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Thu, 14 Nov 2024 11:48:08 +0000</pubDate>
      <link>https://forem.com/shubham_murti/the-cloud-resume-challenge-aws-1h93</link>
      <guid>https://forem.com/shubham_murti/the-cloud-resume-challenge-aws-1h93</guid>
      <description>&lt;h1&gt;
  
  
  Introduction to The Cloud Resume Challenge
&lt;/h1&gt;

&lt;p&gt;Hello, I’m Shubham Murti, a cloud enthusiast and recent graduate with a Master’s in Computer Science. My journey into cloud computing began with a conversation that sparked a new career path. One of my college friends was talking about cloud technology and encouraged me to look into it. After researching, I realized that cloud computing was becoming essential across industries, with every major company moving to the cloud. It became clear to me that cloud computing would be a valuable field to enter.&lt;/p&gt;

&lt;p&gt;I didn’t know where to start, but my friend suggested I try the Cloud Resume Challenge — a project designed to help people gain hands-on experience with cloud technologies. The challenge felt daunting initially since I was completely new to cloud. However, I decided to take it on as my first cloud project, determined to learn and complete it step by step. The challenge involves building a resume website while gaining experience with front-end development, creating a serverless backend, managing infrastructure with code, and automating deployments — all real-world cloud engineering skills.&lt;/p&gt;

&lt;p&gt;The Cloud Resume Challenge taught me that each chunk of the project brings unique problems to solve, from setting up infrastructure to handling databases and APIs. I’m excited to share my journey through this challenge and how it helped me develop foundational cloud skills.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chunk 0: Certification Prep
&lt;/h2&gt;

&lt;p&gt;When I began the Cloud Resume Challenge, I hadn’t completed any cloud certifications. I wanted to dive into the hands-on work first to gain real-world experience before taking an exam. After finishing the project, I decided to pursue the AWS Certified Cloud Practitioner (CCP) certification to formalize my knowledge and gain a foundational credential in cloud computing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Certification Matters
&lt;/h3&gt;

&lt;p&gt;Completing the CCP certification after the project turned out to be an excellent choice. The practical experience from the challenge gave me a solid understanding of the core concepts, which made the certification process smoother and more meaningful. The CCP exam is widely recognized as an introductory certification, and having it on my resume helps demonstrate both my commitment to cloud and my understanding of basic cloud principles.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Approach to Certification
&lt;/h3&gt;

&lt;p&gt;To prepare for the CCP exam, I primarily used Andrew Brown’s CCP course on freeCodeCamp. It’s a comprehensive, 14-hour video course that covers all key topics in-depth. I found Andrew’s approach to be thorough, and it complemented my project experience well. Alongside the video course, I also used practice sets from AWSboy to test my knowledge. This combination of watching videos and taking practice exams really helped reinforce what I had learned during the Cloud Resume Challenge.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources and Tips for Success
&lt;/h3&gt;

&lt;p&gt;For those preparing for the CCP exam, my advice is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Watch Andrew Brown’s CCP Course&lt;/strong&gt;: This 14-hour course covers all essential topics in a structured way. It’s ideal for beginners and aligns well with the exam content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Practice as Much as Possible&lt;/strong&gt;: Use practice sets like AWSboy or others you can find online. The more practice questions you do, the better prepared you’ll feel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hands-On Experience&lt;/strong&gt;: If possible, try working on a project like the Cloud Resume Challenge before taking the exam. Practical experience can make concepts easier to understand and retain.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Completing the CCP certification after the project gave me a more well-rounded understanding of cloud fundamentals. This experience has prepared me for further cloud challenges and more advanced certifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chunk 1: Building the Frontend
&lt;/h2&gt;

&lt;p&gt;The first step in the Cloud Resume Challenge was to create and deploy a static website that hosts my resume. I used HTML and CSS to structure the site and hosted it on Amazon S3, followed by configuring a custom domain, DNS, and HTTPS using AWS CloudFront and Route 53.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges and Solutions
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Deploying a Static Site with HTTPS
&lt;/h4&gt;

&lt;p&gt;Getting the static site deployed was relatively straightforward until I began configuring the domain and SSL certificate for HTTPS. I purchased a domain to personalize my site, and this required additional configuration steps to ensure secure communication (HTTPS) via AWS Certificate Manager and CloudFront. Setting up a secure HTTPS connection to my S3 bucket was particularly challenging, as I encountered issues with the SSL certificate integration and had to adjust CloudFront and Route 53 settings multiple times.&lt;/p&gt;

&lt;p&gt;One consistent problem was establishing a secure HTTPS connection for my custom domain. Even after adding the SSL certificate, the site wouldn’t load securely. After digging into resources, I realized I needed to fine-tune the CloudFront distribution settings and confirm that the certificate was referenced correctly. The moment my site, &lt;code&gt;murtishubham.click&lt;/code&gt;, finally loaded over HTTPS was a great relief and felt like a significant milestone.&lt;/p&gt;

&lt;h4&gt;
  
  
  Using My Third-Year Project Website
&lt;/h4&gt;

&lt;p&gt;Since I didn’t have a dedicated portfolio site ready, I chose to use a website I built for a project during my third year of college. The idea was that once I completed automation in Chunk 4, I could refresh the deployment and replace the third-year project with a custom portfolio site, showcasing my own design. I liked the flexibility that automation brought to this, allowing me to swap out sites seamlessly. Although it was initially just an idea, the potential to automate future deployments and update the site continuously was exciting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Looking Forward: Security Mods
&lt;/h3&gt;

&lt;p&gt;I haven’t explored mods yet, but I’m interested in the security-focused mods, as my career goal is to become a cloud security engineer. I plan to delve into these mods in future chunks to further enhance my security skills.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chunk 2: Building the API
&lt;/h2&gt;

&lt;p&gt;The next phase of the Cloud Resume Challenge was to build a backend API that supports a visitor counter on my resume site. This required setting up an Amazon DynamoDB table to store the visitor count, creating an API endpoint using API Gateway, and connecting everything with AWS Lambda using Python.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges and Solutions
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Setting Up DynamoDB and API Gateway
&lt;/h4&gt;

&lt;p&gt;Setting up the DynamoDB table to store the visitor count and configuring the API Gateway to handle requests was straightforward. Amazon’s documentation made it fairly easy to set up both services, and I was able to complete these parts without much trouble. Having a reliable storage solution like DynamoDB in place to record visitor data was a valuable introduction to using managed databases in the cloud.&lt;/p&gt;

&lt;h4&gt;
  
  
  Connecting DynamoDB and API Gateway with Python
&lt;/h4&gt;

&lt;p&gt;The real challenge came when I needed to connect these components with Python code in AWS Lambda. Since I was still getting comfortable with Python, this part took a bit longer. Writing the backend code to communicate between the API Gateway and DynamoDB helped me gain hands-on experience with APIs and Python. Debugging and testing the function gave me a solid understanding of how data flows between different services in the cloud.&lt;/p&gt;

&lt;p&gt;After a few days of trial and error, I was able to successfully connect the API to the database, retrieve visitor counts, and update the values in DynamoDB. Reflecting on this part of the project, working with APIs was a crucial learning experience; APIs are central to cloud projects because they let different tools and services talk to each other and work together.&lt;/p&gt;
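&lt;p&gt;To give a flavor of that Lambda-to-DynamoDB wiring, here is a minimal sketch of a visitor-counter handler. It is an illustration rather than my exact code: the key schema and attribute names are assumptions, and the in-memory table stands in for &lt;code&gt;boto3.resource("dynamodb").Table(...)&lt;/code&gt; so the snippet runs anywhere.&lt;/p&gt;

```python
import json


def lambda_handler(event, context, table):
    """Atomically increment the visitor count and return it as JSON."""
    resp = table.update_item(
        Key={"id": "visitor_count"},           # assumed partition key
        UpdateExpression="ADD visits :one",    # atomic counter update
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(resp["Attributes"]["visits"])
    return {"statusCode": 200, "body": json.dumps({"count": count})}


class _FakeTable:
    """In-memory stand-in for a boto3 DynamoDB Table resource."""

    def __init__(self):
        self.visits = 0

    def update_item(self, Key, UpdateExpression,
                    ExpressionAttributeValues, ReturnValues):
        self.visits += ExpressionAttributeValues[":one"]
        return {"Attributes": {"visits": self.visits}}


table = _FakeTable()
first = lambda_handler({}, None, table)   # body payload: {"count": 1}
second = lambda_handler({}, None, table)  # body payload: {"count": 2}
```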

&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;p&gt;This chunk helped me understand the importance of APIs in cloud projects. Learning to connect multiple cloud services via API calls has given me the confidence to tackle similar tasks in the future. Thanks to this hands-on experience, I’m now much more comfortable working with APIs in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chunk 3: Frontend and Backend Integration
&lt;/h2&gt;

&lt;p&gt;After completing the backend API and the frontend of my portfolio website, the next phase of the Cloud Resume Challenge was to integrate the two. This process involved creating a JavaScript visitor counter that would increment each time a user visited my site. Setting up the counter itself was a quick task, but designing the underlying logic took a bit more time. The real challenge was ensuring that the counter worked seamlessly with the backend.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges and Solutions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Creating the Visitor Counter&lt;/strong&gt; The first step was implementing a JavaScript counter to track the number of visitors on the site. This was a simple implementation, but the real work came when I had to figure out how to persist this data in a way that would work with the backend. This part required careful planning to ensure that the front-end could interact correctly with the server-side logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing and Debugging with Cypress&lt;/strong&gt; Once the visitor counter was set up, it was time to test the functionality. Before starting this project, I had no experience with testing and didn’t understand its importance. However, the Cloud Resume Challenge guidebook provided me with the resources to learn how to use Cypress for testing. I spent some time experimenting with it, and soon, I was able to run basic tests to ensure the integration worked as expected. Learning to use Cypress was a valuable skill, and it helped me ensure the reliability of my site.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dealing with CORS Issues&lt;/strong&gt; One of the toughest hurdles I faced during the frontend and backend integration was dealing with CORS (Cross-Origin Resource Sharing) issues. CORS proved to be a tricky problem to solve in this context. It took several days of troubleshooting to get everything working correctly. However, in the end, I managed to overcome the issues, ensuring smooth communication between the frontend and backend.&lt;/p&gt;
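&lt;p&gt;For anyone hitting the same wall: the fix generally involves returning CORS headers from the Lambda itself, alongside enabling &lt;code&gt;OPTIONS&lt;/code&gt; in API Gateway. A minimal sketch, with an assumed allowed origin and a placeholder payload rather than my actual handler:&lt;/p&gt;

```python
import json

# Allowed origin is an assumption; "*" is common while testing locally.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "https://murtishubham.click",
    "Access-Control-Allow-Methods": "GET,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
}


def lambda_handler(event, context):
    """Return the visitor count with CORS headers on every response."""
    if event.get("httpMethod") == "OPTIONS":  # CORS preflight request
        return {"statusCode": 204, "headers": CORS_HEADERS, "body": ""}
    return {
        "statusCode": 200,
        "headers": CORS_HEADERS,
        "body": json.dumps({"count": 42}),  # placeholder payload
    }
```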

&lt;p&gt;Here is the reference diagram I drew before starting the Cloud Resume Challenge:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54qu9thhz0vg27oas2h4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54qu9thhz0vg27oas2h4.jpeg" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;p&gt;This chunk of the project helped me understand the importance of seamless integration between the frontend and backend. Learning how to implement real-time features like the visitor counter and dealing with CORS taught me valuable lessons in troubleshooting and debugging. The hands-on experience with Cypress also gave me a strong foundation in web testing, which I’ll be able to apply in future cloud projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chunk 4: Automation and CI
&lt;/h2&gt;

&lt;p&gt;From the beginning of the Cloud Resume Challenge (CRC), I was excited about automation, particularly the idea of using Infrastructure as Code (IaC). When I thought about how I could refresh my website and automatically see the changes after deployment, it really motivated me to dive deeper into this aspect. I was especially intrigued by using Terraform, so I eagerly began exploring how it could be integrated with AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges and Solutions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;&lt;br&gt;
The first step in my automation journey was learning Infrastructure as Code (IaC) using Terraform to automate resource provisioning in AWS. I spent time going through Terraform’s documentation, learning how to integrate it with AWS, and managing AWS Access Keys and Secret Keys using the AWS CLI for secure access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD Pipeline with GitHub Actions&lt;/strong&gt;&lt;br&gt;
While working with Terraform, I set up a CI/CD pipeline using GitHub Actions to automate the deployment of both the frontend and backend of my portfolio site. This allowed me to automatically deploy updates whenever changes were made to the repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automating Deployment for the Frontend and Backend&lt;/strong&gt;&lt;br&gt;
Terraform enabled me to automate the deployment of both the frontend and backend infrastructure, eliminating the need for manual configuration and deployment. This took about a week or two, but the effort was worthwhile, as it freed me up to focus more on coding.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;p&gt;This phase taught me the importance of automation in cloud development. Using Terraform and GitHub Actions streamlined my deployment process, making it more efficient and reliable. I now feel confident in managing cloud infrastructure using Infrastructure as Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chunk 5: Writing the Blog Post
&lt;/h2&gt;

&lt;p&gt;As I reach the final part of my Cloud Resume Challenge (CRC) journey, you’re reading Chunk 5, which marks the conclusion of this experience. Writing this blog post has been an opportunity to reflect on what I’ve learned and share the key takeaways from each chunk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Learnings from Each Chunk
&lt;/h3&gt;

&lt;p&gt;The CRC has taught me the value of hands-on practice, especially when learning AWS and cloud technologies. I’ve gained practical experience with Terraform, AWS CLI, CI/CD pipelines, and much more. While the theory was essential, it was the real-world implementation that truly deepened my understanding. Each chunk in this challenge has built on the last, and I now feel more confident working with cloud technologies.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The most rewarding part of the CRC was solving problems on my own. It pushed me to troubleshoot and find solutions, while also helping me discover what I enjoy most about cloud development, like automating deployments and working with infrastructure as code.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion and Reflection
&lt;/h2&gt;

&lt;p&gt;To anyone new to the cloud or starting the Cloud Resume Challenge: don’t be discouraged if you feel overwhelmed. Stick with it, and in the end, you’ll complete the challenge and gain valuable skills that will serve you in the future. The guidebook has helped me tremendously, but the real learning came from doing.&lt;/p&gt;

&lt;p&gt;Lastly, I’d like to thank Forrest Brazeal for creating the Cloud Resume Challenge. It has helped me in so many ways, and I highly recommend it to anyone looking to learn cloud technologies. Good luck to those starting out — you’ve got this!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;URLs&lt;/strong&gt;&lt;br&gt;
Click &lt;a href="https://murtishubham.click/" rel="noopener noreferrer"&gt;here&lt;/a&gt; to view the portfolio I developed.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>learning</category>
      <category>cloud</category>
      <category>productivity</category>
      <category>data</category>
    </item>
    <item>
      <title>Serverless Data Processing on AWS : AWS Project</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Wed, 13 Nov 2024 14:54:59 +0000</pubDate>
      <link>https://forem.com/shubham_murti/serverless-data-processing-on-aws-aws-project-2cln</link>
      <guid>https://forem.com/shubham_murti/serverless-data-processing-on-aws-aws-project-2cln</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Businesses looking for meaningful insights in the big data world depend on effective real-time stream processing and analysis. This project uses AWS services including Amazon Kinesis, AWS Lambda, Amazon S3, Amazon DynamoDB, Amazon Cognito, and Amazon Athena to construct a serverless data processing architecture. The solution provides a robust system for real-time data insights by utilizing these services to ingest, process, and store data with high scalability and low infrastructure administration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;p&gt;The serverless solution uses the following AWS services and technologies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Triggers upon events from Kinesis streams, enabling real-time data processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Analytics&lt;/strong&gt;: Aggregates and transforms streaming data on the fly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt;: Stores processed data in a fast, scalable NoSQL database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: Serves as a scalable, durable storage location for raw and processed data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Firehose&lt;/strong&gt;: Streams raw data directly to S3 for archival and further processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Athena&lt;/strong&gt;: Allows ad-hoc querying directly on data stored in S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Cognito&lt;/strong&gt;: Manages user authentication and authorization for secure access to resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow along with this project, ensure you have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Basic AWS Knowledge&lt;/strong&gt;: Familiarity with Lambda, Kinesis, S3, and IAM services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS SDKs and CLI&lt;/strong&gt;: Installed and configured for command-line interactions with AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IAM Permissions&lt;/strong&gt;: Set up permissions for Lambda, Kinesis, S3, and DynamoDB.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Problem Statement or Use Case
&lt;/h3&gt;

&lt;p&gt;A fictional company, &lt;a href="http://wildrydes.com/" rel="noopener noreferrer"&gt;Wild Rydes&lt;/a&gt;, has introduced an innovative transportation service that offers unicorn rydes to help people get to their destinations faster and hassle-free. Each unicorn is equipped with a sensor that reports its location and vital signs. During this workshop, we’ll build infrastructure to enable operations personnel at Wild Rydes to monitor the health and status of their unicorn fleet. We’ll use AWS to build applications that process and visualize the unicorn data in real time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;Below is a visual representation of the serverless data processing architecture, showing data flow from ingestion through Kinesis, processing with Lambda, and storage in DynamoDB and S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqugzs3cvtuaw7l0pli15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqugzs3cvtuaw7l0pli15.png" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Component Breakdown
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Streams&lt;/strong&gt;: Collects data from various sources, enabling the flow of real-time data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Processes data in real-time when triggered by Kinesis events, transforming the data before it’s stored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Analytics&lt;/strong&gt;: Provides on-the-fly data aggregation and transformation to gain insights before storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt;: Stores processed data for fast retrieval, especially suitable for structured data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: Holds raw data and processed datasets, ensuring durability and scalability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Firehose&lt;/strong&gt;: Streams data directly to S3 for long-term storage and archiving.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Athena&lt;/strong&gt;: Allows for ad-hoc querying on the data stored in S3, ideal for data analysis without data movement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Cognito&lt;/strong&gt;: Ensures secure user authentication, managing permissions and access control.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
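&lt;p&gt;To make the ingestion step concrete, here is a hedged sketch of how one unicorn sensor reading could be shaped for a Kinesis &lt;code&gt;PutRecord&lt;/code&gt; call. The field names and stream name are illustrative assumptions; the workshop's producer defines the real schema.&lt;/p&gt;

```python
import json
import time


def build_record(unicorn_name, lat, lon, magic_points):
    """Shape one sensor reading as Kinesis PutRecord keyword arguments."""
    # Field names are illustrative, not the workshop's actual schema.
    return {
        "Data": json.dumps({
            "Name": unicorn_name,
            "Latitude": lat,
            "Longitude": lon,
            "MagicPoints": magic_points,
            "StatusTime": time.strftime("%Y-%m-%d %H:%M:%S"),
        }),
        # Partitioning by unicorn keeps each unicorn's readings ordered
        # within a shard.
        "PartitionKey": unicorn_name,
    }


record = build_record("Shadowfax", 47.6, -122.3, 142)
# Real usage (requires credentials and a stream):
#   kinesis = boto3.client("kinesis")
#   kinesis.put_record(StreamName="wildrydes", **record)
```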

&lt;h2&gt;
  
  
  Step-by-Step Implementation
&lt;/h2&gt;

&lt;h2&gt;
  
  
  AWS Cloud9 IDE
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/cloud9/" rel="noopener noreferrer"&gt;AWS Cloud9&lt;/a&gt; is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. Cloud9 comes pre-packaged with essential tools for popular programming languages and the AWS Command Line Interface (CLI) pre-installed so you don’t need to install files or configure your laptop for this workshop. Your Cloud9 environment will have access to the same AWS resources as the user with which you logged into the AWS Management Console.&lt;/p&gt;

&lt;p&gt;Take a moment now and set up your Cloud9 development environment.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, click Services then select Cloud9 under Developer Tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter Development into Name and optionally provide a Description.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Next step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You may leave Environment settings at their defaults, which launch a new t2.micro EC2 instance that is paused after 30 minutes of inactivity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Next step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the environment settings and click Create environment. It will take several minutes for your environment to be provisioned and prepared.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once ready, your IDE will open to a welcome screen. Below that, you should see a terminal prompt similar to:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8aoa90edhavgvdlzor1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8aoa90edhavgvdlzor1.png" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are running this workshop at an AWS event, paste the Credentials/CLI snippets you copied earlier to configure your environment with your credentials. If you are running it in your own account, you can use the default profile Cloud9 sets up for you; you can check it via cat ~/.aws/credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvei3gfrow1j4o5eeu3vg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvei3gfrow1j4o5eeu3vg.png" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can run AWS CLI commands in here just like you would on your local computer. Verify that your user is logged in by running aws sts get-caller-identity.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You’ll see output indicating your account and user information:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Admin:~/environment $ aws sts get-caller-identity

{
    "Account": "123456789012",
    "UserId": "AKIAI44QH8DHBEXAMPLE",
    "Arn": "arn:aws:iam::123456789012:user/Alice"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Keep your AWS Cloud9 IDE open in a tab throughout this workshop, as you’ll use it for activities like building and running a sample app in a Docker container and using the AWS CLI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Command Line Clients
&lt;/h2&gt;

&lt;p&gt;The modules utilize two command-line clients to simulate and display sensor data from the unicorns in the fleet. These are small programs written in the &lt;a href="https://golang.org/" rel="noopener noreferrer"&gt;Go Programming Language&lt;/a&gt;. The instructions in the &lt;a href="https://data-processing.serverlessworkshops.io/setup/03-cloud9-setup.html#installation" rel="noopener noreferrer"&gt;Installation&lt;/a&gt; section below walk through downloading pre-built binaries, but you can also download the source and build the clients manually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/client/producer.go" rel="noopener noreferrer"&gt;producer.go&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/client/consumer.go" rel="noopener noreferrer"&gt;consumer.go&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Producer
&lt;/h3&gt;

&lt;p&gt;The producer generates sensor data from a unicorn taking a passenger on a Wild Ryde. Each second, it emits the location of the unicorn as a latitude and longitude point, the distance traveled in meters in the previous second, and the unicorn’s current level of magic and health points.&lt;/p&gt;
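&lt;p&gt;To make the message shape concrete, here is a small Python sketch (not the actual Go producer) that assembles one sensor reading. The field names follow the JSON the consumer prints later in this workshop; the helper function itself is hypothetical:&lt;/p&gt;

```python
import json

def build_status_message(name, latitude, longitude, distance,
                         magic_points, health_points, status_time):
    """Assemble one sensor reading in the shape the producer emits each second."""
    return {
        "Name": name,                # unicorn name, also a natural partition key
        "StatusTime": status_time,   # e.g. "2017-06-05 09:17:08.189"
        "Latitude": latitude,
        "Longitude": longitude,
        "Distance": distance,        # meters traveled in the previous second
        "MagicPoints": magic_points,
        "HealthPoints": health_points,
    }

message = build_status_message("Shadowfax", 42.2644, -71.9758, 175, 110, 150,
                               "2017-06-05 09:17:08.189")
payload = json.dumps(message).encode("utf-8")  # bytes suitable for a Kinesis PutRecord call
```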

&lt;h3&gt;
  
  
  Consumer
&lt;/h3&gt;

&lt;p&gt;The consumer reads and displays formatted JSON messages from an Amazon Kinesis stream, which allows us to monitor in real time what’s being sent to the stream. Using the consumer, you can monitor the data the producer and your applications are sending.&lt;/p&gt;
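&lt;p&gt;The real consumer is written in Go, but the formatting step it performs can be sketched in a few lines of Python; format_record here is a hypothetical stand-in that decodes one record’s payload and pretty-prints it:&lt;/p&gt;

```python
import json

def format_record(raw_bytes):
    """Decode one Kinesis record payload and pretty-print it, as the consumer does."""
    message = json.loads(raw_bytes.decode("utf-8"))
    return json.dumps(message, indent=4)

# A record payload in the shape the producer sends:
sample = b'{"Name": "Shadowfax", "Distance": 175}'
print(format_record(sample))
```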

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Switch to the tab where you have your Cloud9 environment opened.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download and unpack the command line clients by running the following command in the Cloud9 terminal:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s https://data-processing.serverlessworkshops.io/client/client.tar | tar -xv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will unpack the consumer and producer files to your Cloud9 environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⭐ Tips
&lt;/h2&gt;

&lt;p&gt;💡 Keep an open scratch pad in Cloud9 or a text editor on your local computer for notes. When the step-by-step directions tell you to note something such as an ID or Amazon Resource Name (ARN), copy and paste that into the scratch pad.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⭐ Recap
&lt;/h2&gt;

&lt;p&gt;🔑 Use a unique personal development &lt;a href="https://data-processing.serverlessworkshops.io/setup/02-self-paced.html#self_paced" rel="noopener noreferrer"&gt;AWS Account&lt;/a&gt;, &lt;a href="https://data-processing.serverlessworkshops.io/setup/01-at-aws-event.html#event_engine" rel="noopener noreferrer"&gt;event engine&lt;/a&gt;, or &lt;a href="https://data-processing.serverlessworkshops.io/setup/01-at-aws-event.html#event_box" rel="noopener noreferrer"&gt;EventBox&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔑 Use one of the US East (N. Virginia), US West (Oregon), EU* (Ireland, London, Frankfurt) &lt;a href="https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/" rel="noopener noreferrer"&gt;Regions&lt;/a&gt; if using your own AWS account.&lt;/p&gt;

&lt;p&gt;🔑 Keep your &lt;a href="https://data-processing.serverlessworkshops.io/setup/03-cloud9-setup.html#aws-cloud9-ide" rel="noopener noreferrer"&gt;AWS Cloud9 IDE&lt;/a&gt; open in a tab&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-time Data Streaming
&lt;/h2&gt;

&lt;p&gt;In this module, you’ll create an Amazon Kinesis stream to collect and store sensor data from our unicorn fleet. Using the provided command-line clients, you’ll produce sensor data from a unicorn on a Wild Ryde and read from the stream. Lastly, you’ll use the unicorn dashboard to plot our unicorns on a map and watch their status in real-time. In subsequent modules you’ll add functionality to analyze and persist this data using Amazon Kinesis Data Analytics, AWS Lambda, and Amazon DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;The architecture for this module involves an Amazon Kinesis stream, a producer, and a consumer.&lt;/p&gt;

&lt;p&gt;Our producer is a sensor attached to a unicorn currently taking a passenger on a ride. This sensor emits data every second including the unicorn’s current location, distance traveled in the previous second, and magic points and hit points so that our operations team can monitor the health of the unicorn fleet from Wild Rydes headquarters.&lt;/p&gt;

&lt;p&gt;The Amazon Kinesis stream stores the data sent by the producer and provides an interface that allows consumers to process and analyze that data. Our consumer is a simple command-line utility that tails the stream and outputs its data points in effectively real time, so we can see what data is being stored in the stream. Once we send and receive data from the stream, we can use the &lt;a href="https://data-processing.serverlessworkshops.io/dashboard.html" rel="noopener noreferrer"&gt;unicorn dashboard&lt;/a&gt; to view the current position and vitals of our unicorn fleet in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3828v1cru86t1o7a1vq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3828v1cru86t1o7a1vq.jpg" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;❗ Ensure you’ve completed the &lt;a href="https://data-processing.serverlessworkshops.io/setup.html" rel="noopener noreferrer"&gt;setup guide&lt;/a&gt; before beginning the workshop.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Create an Amazon Kinesis stream
&lt;/h3&gt;

&lt;p&gt;Use the Amazon Kinesis Data Streams console to create a new provisioned stream named wildrydes with 1 shard.&lt;/p&gt;

&lt;p&gt;A Shard is the base throughput unit of an Amazon Kinesis data stream. One shard provides a capacity of 1MB/sec data input and 2MB/sec data output. One shard can support up to 1000 PUT records per second. You will specify the number of shards needed when you create a data stream. For example, if we create a data stream with four shards then this data stream has a throughput of 4MB/sec data input and 8MB/sec data output, and allows up to 4000 PUT records per second. You can monitor shard-level metrics in Amazon Kinesis Data Streams and add or remove shards from your data stream dynamically as your data throughput changes by resharding the data stream.&lt;/p&gt;
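&lt;p&gt;The shard arithmetic above is easy to capture in code. This small Python helper (hypothetical, for illustration) computes the aggregate capacity of a provisioned stream from its shard count:&lt;/p&gt;

```python
def stream_capacity(shards):
    """Compute the aggregate capacity of a provisioned Kinesis data stream.

    Each shard supports 1 MB/sec data input, 2 MB/sec data output,
    and up to 1,000 PUT records per second.
    """
    return {
        "write_mb_per_sec": shards * 1,
        "read_mb_per_sec": shards * 2,
        "put_records_per_sec": shards * 1000,
    }

# The four-shard example from the text: 4 MB/sec in, 8 MB/sec out, 4,000 PUTs/sec.
capacity = stream_capacity(4)
```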

&lt;p&gt;At re:Invent 2021, AWS introduced a new capacity mode for Kinesis Data Streams called &lt;a href="https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-kinesis-data-streams-on-demand/" rel="noopener noreferrer"&gt;Kinesis Data Streams On-Demand&lt;/a&gt;. The new mode is capable of serving gigabytes of write and read throughput per minute without capacity planning. During this workshop, we will use the provisioned mode with shard capacity planning for educational purposes. For your environments, you should consider using on-demand capacity mode based on your needs and cost considerations.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the &lt;a href="https://console.aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Management Console&lt;/a&gt;, click Services then select Kinesis under Analytics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Get started if prompted with an introductory screen.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create data stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter wildrydes into Kinesis stream name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the Provisioned option and enter 1 into Number of shards, then click Create Kinesis stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Within 60 seconds, your Kinesis stream will be ACTIVE and ready to store real-time streaming data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13xqzgzezucqx72ozgsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13xqzgzezucqx72ozgsg.png" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Produce messages into the stream
&lt;/h3&gt;

&lt;p&gt;Use the &lt;a href="https://data-processing.serverlessworkshops.io/setup.html#producer" rel="noopener noreferrer"&gt;command-line producer&lt;/a&gt; to produce messages into the stream.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Switch to the tab where you have your Cloud9 environment opened.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the terminal, run the producer to start emitting sensor data to the stream.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./producer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;The producer emits one message per second to the stream and prints a period to the screen for each message sent.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./producer
..................................................
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Amazon Kinesis Streams console, click on wildrydes and click on the Monitoring tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After several minutes, you will see the Put Record Success (percent) — Average graph begin to record a single put per second. Keep the producer running until the end of Lab 2 so that you can see the unicorns flying in action.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  3. Read messages from the stream
&lt;/h3&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;While the producer is running, switch to the tab where you have your Cloud9 environment opened.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hit the (+) button and click New Terminal to open a new terminal tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the consumer to start reading sensor data from the stream.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./consumer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;The consumer will print the messages being sent by the producer:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "Name": "Shadowfax", "StatusTime": "2017-06-05 09:17:08.189", "Latitude": 42.264444250051326, "Longitude": -71.97582884770408, "Distance": 175, "MagicPoints": 110, "HealthPoints": 150 }
{ "Name": "Shadowfax", "StatusTime": "2017-06-05 09:17:09.191", "Latitude": 42.265486935100476, "Longitude": -71.97442977859625, "Distance": 163, "MagicPoints": 110, "HealthPoints": 151 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  4. Create an identity pool for the unicorn dashboard
&lt;/h3&gt;

&lt;p&gt;Create an Amazon Cognito identity pool to grant unauthenticated users access to read from your Kinesis stream. Note the identity pool ID for use in the next step.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step directions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, click Services then select Cognito under Security, Identity &amp;amp; Compliance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Manage Identity Pools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create new identity pool. (This step is not necessary if you do not have any identity pools yet.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter wildrydes into Identity pool name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tick the Enable access to unauthenticated identities checkbox.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create Pool.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Allow which will create authenticated and unauthenticated roles for your identity pool.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Go to Dashboard.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Edit identity pool in the upper right hand corner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Note the Identity pool ID for use in a later step.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76lnsfk48m8qrm8hqu4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76lnsfk48m8qrm8hqu4v.png" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Grant the unauthenticated role access to the stream
&lt;/h3&gt;

&lt;p&gt;Add a new policy to the unauthenticated role to allow the dashboard to read from the stream to plot the unicorns on the map.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step directions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, click Services then select IAM under Security, Identity &amp;amp; Compliance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Roles in the left-hand navigation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the Cognito_wildrydesUnauth_Role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Add inline policy (Add permissions -&amp;gt; Create inline policy).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Choose a service and click Kinesis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tick the Read and List permissions checkboxes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Resources you will limit the role to the wildrydes stream and consumer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Add ARN next to consumer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Add ARN(s) dialog box, enter the following information:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the region you’re using in Region (e.g. us-east-1)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;your &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html" rel="noopener noreferrer"&gt;Account ID&lt;/a&gt; in Account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;* in Stream type&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;wildrydes in Stream name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;* in Consumer name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;* in Consumer creation timestamp&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Add.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Add ARN next to stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Add ARN(s) dialog box, enter the following information:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the region you’re using in Region (e.g. us-east-1)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;your &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html" rel="noopener noreferrer"&gt;Account ID&lt;/a&gt; in Account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;wildrydes in Stream name&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Click Add.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiteize7e34beiroflxab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiteize7e34beiroflxab.png" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Review policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter WildrydesDashboardPolicy in Name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create policy.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
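&lt;p&gt;The console clicks above produce an inline IAM policy roughly like the JSON below. This Python sketch constructs it from placeholder region and account values; the exact action list the console generates for the Read and List permission groups may differ slightly, so treat this as an illustration rather than the exact console output:&lt;/p&gt;

```python
import json

def dashboard_read_policy(region, account_id, stream_name="wildrydes"):
    """Sketch of an inline policy granting read/list access to one Kinesis stream."""
    stream_arn = f"arn:aws:kinesis:{region}:{account_id}:stream/{stream_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "kinesis:DescribeStream",
                    "kinesis:GetRecords",
                    "kinesis:GetShardIterator",
                    "kinesis:ListShards",
                    "kinesis:ListStreams",
                ],
                # Both the stream itself and its registered consumers.
                "Resource": [stream_arn, f"{stream_arn}/consumer/*"],
            }
        ],
    }

# Placeholder region and account ID:
policy = dashboard_read_policy("us-east-1", "123456789012")
print(json.dumps(policy, indent=2))
```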

&lt;h3&gt;
  
  
  6. View unicorn status on the dashboard
&lt;/h3&gt;

&lt;p&gt;Use the &lt;a href="https://data-processing.serverlessworkshops.io/dashboard.html" rel="noopener noreferrer"&gt;Unicorn Dashboard&lt;/a&gt; to see the unicorn on a real-time map. You may need to zoom out to find the unicorn. If you can’t find it, double-check that the producer and consumer are both running.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step directions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://data-processing.serverlessworkshops.io/dashboard.html" rel="noopener noreferrer"&gt;Unicorn Dashboard&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the Cognito Identity Pool ID you noted in step 4 and click Start.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgwzbfnhbesi6zlejeje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgwzbfnhbesi6zlejeje.png" width="530" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Validate that you can see the unicorn on the map. If you cannot see the unicorn, go back to Cloud9 and run ./producer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcybsz54lp77ber4qq26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcybsz54lp77ber4qq26.png" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the unicorn to see more details from the stream and compare with the consumer output.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1baujy5g3648iq4whugj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1baujy5g3648iq4whugj.png" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The speed is calculated internally from the difference between successive latitude and longitude values.&lt;/p&gt;
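&lt;p&gt;The workshop doesn’t show the exact formula, but the distance between two latitude/longitude fixes taken one second apart is conventionally computed with the haversine formula, from which speed follows directly. A Python sketch:&lt;/p&gt;

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two consecutive fixes from the consumer output shown earlier:
d = haversine_m(42.264444, -71.975829, 42.265487, -71.974430)
speed_m_per_s = d / 1.0  # the fixes are one second apart
```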

&lt;h3&gt;
  
  
  7. Experiment with the producer
&lt;/h3&gt;

&lt;p&gt;Stop and start the producer while watching the dashboard and the consumer. Start multiple producers with different unicorn names.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Stop the producer by pressing Control + C and notice that the messages stop and the unicorn disappears after 30 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start the producer again and notice that the messages resume and the unicorn reappears.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hit the (+) button and click New Terminal to open a new terminal tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start another instance of the producer in the new tab. Provide a specific unicorn name and notice data points for both unicorns in consumer’s output:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./producer -name Bucephalus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Check the dashboard and verify you see multiple unicorns.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnm4z7j2km5de11wyembs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnm4z7j2km5de11wyembs.png" width="608" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ⭐ Recap
&lt;/h2&gt;

&lt;p&gt;🔑 Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.&lt;/p&gt;

&lt;p&gt;🔧 In this module, you’ve created an Amazon Kinesis stream and used it to store and visualize data from a simulated fleet of unicorns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stream Processing and analytics with AWS Lambda
&lt;/h2&gt;

&lt;p&gt;In this module, you’ll use &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt; to process data from the wildrydes &lt;a href="https://aws.amazon.com/kinesis/data-streams/" rel="noopener noreferrer"&gt;Amazon Kinesis stream&lt;/a&gt; created earlier. We’ll create and configure a Lambda function to read from the stream and write records to an &lt;a href="https://aws.amazon.com/dynamodb" rel="noopener noreferrer"&gt;Amazon DynamoDB&lt;/a&gt; table as they arrive. We will also explore a few error-handling mechanisms for poison-pill messages in the stream. Finally, we will learn how to do stream analytics with AWS Lambda.&lt;/p&gt;
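&lt;p&gt;Kinesis hands records to Lambda base64-encoded inside an event object. The handler below is a minimal Python sketch of that processing path (the workshop’s actual function may be written differently); the DynamoDB write is left as a comment so the snippet makes no assumptions about live credentials:&lt;/p&gt;

```python
import base64
import json

def lambda_handler(event, context):
    """Decode each Kinesis record in the event and collect the unicorn readings.

    In the real function these items would be batch-written to the
    UnicornSensorData DynamoDB table.
    """
    items = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        items.append(json.loads(payload))
    # boto3.resource("dynamodb").Table("UnicornSensorData") with a batch_writer()
    # would persist the items here.
    return {"processed": len(items)}

# A hand-built event in the shape Lambda receives from a Kinesis event source:
sample_event = {
    "Records": [
        {"kinesis": {"data": base64.b64encode(b'{"Name": "Shadowfax", "Distance": 175}').decode()}}
    ]
}
result = lambda_handler(sample_event, None)
```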

&lt;p&gt;Our target architecture looks as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjb740uapvp20vcfwgtui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjb740uapvp20vcfwgtui.png" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;In this module, you’ll set up all of the resources needed to support processing records from the wildrydes Kinesis Data Stream, including a DynamoDB table, a Lambda function, an IAM role, and an SQS queue.&lt;/p&gt;

&lt;p&gt;Resources&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/01-setup.html#create_dynamo_db_table" rel="noopener noreferrer"&gt;Create a Dynamo DB Table&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/01-setup.html#create_sqs_dlq" rel="noopener noreferrer"&gt;Create an SQS On-Error Destination Queue&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/01-setup.html#create_iam_role" rel="noopener noreferrer"&gt;Create an IAM Role&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/01-setup.html#create_lambda" rel="noopener noreferrer"&gt;Create a Lambda Function&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  1. Create an Amazon DynamoDB table
&lt;/h3&gt;

&lt;p&gt;Use the &lt;a href="https://console.aws.amazon.com/dynamodbv2/home" rel="noopener noreferrer"&gt;Amazon DynamoDB&lt;/a&gt; console to create a new DynamoDB table. This table will store the unicorn data written by the AWS Lambda function. Name your table UnicornSensorData and give it a Partition key called Name of type String and a Sort key called StatusTime of type String. Use the defaults for all other settings.&lt;/p&gt;

&lt;p&gt;After you’ve created the table, note the Amazon Resource Name (ARN) for use in the next section.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, choose Services then select DynamoDB under Database. Alternatively, you can use the search bar and type DynamoDB in the search dialog box.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create table.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter UnicornSensorData for the Table name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter Name for the Partition key and select String for the key type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter StatusTime for the Sort key and select String for the key type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave the Use default settings box checked and choose Create.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
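&lt;p&gt;If you prefer the CLI or SDK, the same table can be described programmatically. The parameter shapes below follow the DynamoDB CreateTable API; the on-demand billing mode is an assumption standing in for the console defaults, and the actual call is left commented out so the snippet stays self-contained:&lt;/p&gt;

```python
# Parameters for a DynamoDB CreateTable call matching the console steps above.
create_table_params = {
    "TableName": "UnicornSensorData",
    "KeySchema": [
        {"AttributeName": "Name", "KeyType": "HASH"},        # partition key
        {"AttributeName": "StatusTime", "KeyType": "RANGE"}, # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "Name", "AttributeType": "S"},
        {"AttributeName": "StatusTime", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",  # assumed stand-in for the console defaults
}
# With credentials configured you could run:
#   import boto3
#   boto3.client("dynamodb").create_table(**create_table_params)
```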

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fglejohzl2pv49t5mzmwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fglejohzl2pv49t5mzmwq.png" width="800" height="737"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once the table is created, click the hyperlink on the table name. On the table details screen, under General information, you will see the Amazon Resource Name (ARN). Copy and save the ARN in the scratch pad; you will use it when creating the IAM role.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  2. Create an SQS On-Error Destination Queue
&lt;/h3&gt;

&lt;p&gt;Use the &lt;a href="https://console.aws.amazon.com/sqs" rel="noopener noreferrer"&gt;Amazon SQS&lt;/a&gt; console to create a new queue named wildrydes-queue. Your Lambda function will send messages to this queue when processing fails, based on the retry settings.&lt;/p&gt;

&lt;p&gt;After you’ve created the queue, note the Amazon Resource Name (ARN) for use in later sections.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, choose Services then select Simple Queue Service under Application Integration. Alternatively, you can use the search bar and type Simple Queue Service in the search dialog box.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the orange Create queue button&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the Name field enter wildrydes-queue&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave the rest of the options as the defaults and click “Create queue”&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6no9lraa9gb5v9snejix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6no9lraa9gb5v9snejix.png" width="800" height="673"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy and save the ARN of the SQS queue in the scratch pad; you will need it for the Lambda configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  3. Create an IAM role for your Lambda function
&lt;/h3&gt;

&lt;p&gt;Use the &lt;a href="https://console.aws.amazon.com/iamv2/home" rel="noopener noreferrer"&gt;IAM&lt;/a&gt; console to create a new role. Name it WildRydesStreamProcessorRole and select Lambda for the role type. Create a policy that allows dynamodb:BatchWriteItem access to the DynamoDB table created in the last section and sqs:SendMessage access to send failed messages to the dead-letter queue, and attach it to the new role. Also, attach the managed policy called AWSLambdaKinesisExecutionRole to this role to grant your function permission to read from Amazon Kinesis streams and to log to Amazon CloudWatch Logs.&lt;/p&gt;
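&lt;p&gt;The inline policy described above boils down to two statements. A hedged Python sketch that assembles it from the ARNs you saved to the scratch pad (the ARNs shown here are placeholders):&lt;/p&gt;

```python
import json

def stream_processor_policy(table_arn, queue_arn):
    """Sketch of the inline policy for WildRydesStreamProcessorRole:
    BatchWriteItem on the DynamoDB table plus SendMessage on the on-error queue."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "dynamodb:BatchWriteItem", "Resource": table_arn},
            {"Effect": "Allow", "Action": "sqs:SendMessage", "Resource": queue_arn},
        ],
    }

# Placeholder ARNs; substitute the ones from your scratch pad.
policy = stream_processor_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/UnicornSensorData",
    "arn:aws:sqs:us-east-1:123456789012:wildrydes-queue",
)
print(json.dumps(policy, indent=2))
```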

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;From the AWS Console, click on Services and then select IAM in the Security, Identity &amp;amp; Compliance section. Alternatively, you can use the search bar and type IAM in the search dialog box.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Policies from the left navigation and then click Create policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using the Visual editor, we’re going to create an IAM policy to allow our Lambda function access to the DynamoDB table created in the last section. To begin, click Service, begin typing DynamoDB in Find a service, and click DynamoDB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Action, begin typing BatchWriteItem in Filter actions, and tick the BatchWriteItem checkbox.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Resources, click Add ARN in table, copy the ARN of the DynamoDB table from the scratch pad, and paste it in Specify ARN for table. Alternatively, you can construct the ARN of the DynamoDB table you created in the previous section by specifying the Region, Account, and Table Name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In Region, enter the AWS Region in which you created the DynamoDB table in the previous section, e.g.: us-east-1.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In Table Name, enter UnicornSensorData.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You should see your ARN in the Specify ARN for table field and it should look similar to:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sbqpuy6pztu92hcubc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sbqpuy6pztu92hcubc8.png" width="751" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Add.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When completed, your console should look similar to this:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foaa8582zex61ch9y8cz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foaa8582zex61ch9y8cz7.png" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Next, we are going to add permissions to allow the Lambda function access to the SQS on-failure destination queue. Click Add additional permissions, click Service, begin typing SQS in Find a service, and click SQS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Action, begin typing SendMessage in Filter actions, and tick the SendMessage checkbox.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Resources, click Add ARN in queue. Copy the ARN of the SQS queue from the scratchpad and paste it into Specify ARN for queue. Alternatively, you can construct the ARN of the SQS queue you created in the previous section by specifying the Region, Account, and Queue Name.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;In &lt;strong&gt;Region&lt;/strong&gt;, enter the AWS Region in which you created the SQS queue in the previous section, e.g. us-east-1. In &lt;strong&gt;Queue Name&lt;/strong&gt;, enter &lt;code&gt;wildrydes-queue&lt;/code&gt;. You should see your ARN in the &lt;strong&gt;Specify ARN for queue&lt;/strong&gt; field.
&lt;/li&gt;
&lt;/ul&gt;
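&lt;p&gt;For reference, the combined policy you just built corresponds to a JSON document along these lines. It is sketched here as a JavaScript object; the account ID 123456789012 is a placeholder, and the Region and resource names match this module:&lt;/p&gt;

```javascript
// Sketch of the customer-managed policy created above. The account ID
// (123456789012) is a placeholder; substitute your own account and Region.
const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: "dynamodb:BatchWriteItem",
      Resource: "arn:aws:dynamodb:us-east-1:123456789012:table/UnicornSensorData",
    },
    {
      Effect: "Allow",
      Action: "sqs:SendMessage",
      Resource: "arn:aws:sqs:us-east-1:123456789012:wildrydes-queue",
    },
  ],
};

console.log(JSON.stringify(policy, null, 2));
```

&lt;p&gt;Together with the managed AWSLambdaKinesisExecutionRole policy, this is everything the function needs: write access to the table and send access to the DLQ.&lt;/p&gt;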

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Next: Tags.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Next: Review.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter WildRydesDynamoDBWritePolicy in the Name field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Roles from the left navigation and then click Create role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Lambda for the role type from the AWS service section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Next: Permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Begin typing AWSLambdaKinesisExecutionRole in the Filter text box and check the box next to that role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Begin typing WildRydesDynamoDBWritePolicy in the Filter text box and check the box next to that policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Next.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter WildRydesStreamProcessorRole for the Role name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Begin typing WildRydesStreamProcessorRole in the Search text box.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tlp4l9d5eqcjgskv0ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tlp4l9d5eqcjgskv0ae.png" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on WildRydesStreamProcessorRole and it should look similar to:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qmu5idzm83ae3zzdvkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qmu5idzm83ae3zzdvkl.png" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Create a Lambda function to process the stream
&lt;/h3&gt;

&lt;p&gt;Create a Lambda function called WildRydesStreamProcessor that will be triggered whenever a new record is available in the wildrydes stream. Use the provided index.js implementation for your function code. Create an environment variable with the key TABLE_NAME and the value UnicornSensorData. Configure the function to use the WildRydesStreamProcessorRole created in the previous section.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, choose Services then select Lambda under Compute. Alternatively, you can use the search bar and type Lambda in the search dialog box.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter WildRydesStreamProcessor in the Function name field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Node.js 14.x from Runtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select WildRydesStreamProcessorRole from the Existing role dropdown.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feskz4u5t9lju73zwjenw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feskz4u5t9lju73zwjenw.png" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Create function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll down to the Code source section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy and paste the JavaScript code below into the code editor.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"use strict";

const AWS = require("aws-sdk");

const dynamoDB = new AWS.DynamoDB.DocumentClient();
const tableName = process.env.TABLE_NAME;

// Entrypoint for Lambda Function
exports.handler = function (event, context, callback) {
  const requestItems = buildRequestItems(event.Records);
  const requests = buildRequests(requestItems);

  Promise.all(requests)
    .then(() =&amp;gt; callback(null, `Delivered ${event.Records.length} records`))
    .catch(callback);
};

// Build DynamoDB request payload
function buildRequestItems(records) {
  return records.map((record) =&amp;gt; {
    const json = Buffer.from(record.kinesis.data, "base64").toString("ascii");
    const item = JSON.parse(json);

    return {
      PutRequest: {
        Item: item,
      },
    };
  });
}

function buildRequests(requestItems) {
  const requests = [];

  // Batch write 25 request items from the beginning of the list at a time
  while (requestItems.length &amp;gt; 0) {
    const request = batchWrite(requestItems.splice(0, 25));
    requests.push(request);
  }

  return requests;
}

// Batch write items into the DynamoDB table, retrying unprocessed items
// with exponential backoff
function batchWrite(requestItems, attempt = 0) {
  const params = {
    RequestItems: {
      [tableName]: requestItems,
    },
  };

  let delay = 0;

  if (attempt &amp;gt; 0) {
    delay = 50 * Math.pow(2, attempt);
  }

  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      dynamoDB
        .batchWrite(params)
        .promise()
        .then(function (data) {
          if (data.UnprocessedItems.hasOwnProperty(tableName)) {
            return batchWrite(data.UnprocessedItems[tableName], attempt + 1);
          }
        })
        .then(resolve)
        .catch(reject);
    }, delay);
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
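&lt;p&gt;The buildRequests helper above splits the decoded items into DynamoDB BatchWriteItem calls of at most 25 items each, which is the API’s per-request limit. A standalone sketch of that chunking:&lt;/p&gt;

```javascript
// Standalone sketch of the chunking done by buildRequests: BatchWriteItem
// accepts at most 25 items per call, so a Kinesis batch is split into
// groups of 25. Note that splice mutates the input array.
function chunkRequestItems(requestItems) {
  const chunks = [];
  while (requestItems.length > 0) {
    chunks.push(requestItems.splice(0, 25));
  }
  return chunks;
}

// A batch of 60 decoded records yields three BatchWriteItem calls.
const items = Array.from({ length: 60 }, (v, i) => ({ PutRequest: { Item: { id: i } } }));
const chunks = chunkRequestItems(items);
console.log(chunks.map((c) => c.length)); // [ 25, 25, 10 ]
```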

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10sji26vbsbod6e9b0ij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10sji26vbsbod6e9b0ij.png" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now you will add the DynamoDB table name as an environment variable. In the Configuration tab, select the Environment variables section.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkklxfk8pwkz02mu7nux6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkklxfk8pwkz02mu7nux6.png" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click Edit and then Add environment variable.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enter TABLE_NAME in Key and UnicornSensorData in Value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add another environment variable with AWS_NODEJS_CONNECTION_REUSE_ENABLED in Key and 1 in Value. This setting enables TCP connection reuse. You can learn more &lt;a href="https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/node-reusing-connections.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Save.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Now you will add the event source mapping (ESM) that integrates AWS Lambda with Kinesis. Scroll up and click Add Trigger in the Function overview section.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6nkqucqn0k8ee7j6n3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6nkqucqn0k8ee7j6n3n.png" width="545" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Trigger configuration section, select Kinesis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select wildrydes from Kinesis stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave Batch size set to 100 and Starting position set to Latest.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the Additional settings section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under On-failure destination, add the ARN of the wildrydes-queue SQS queue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure Enable trigger is checked.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Add.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz85ajq63bdqr5k6ek09l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz85ajq63bdqr5k6ek09l.png" width="800" height="887"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go back to the Code tab and deploy the Lambda function; the screen will display a message on successful deployment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0qfss5m4m20kglqd9di.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0qfss5m4m20kglqd9di.png" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ⭐ Recap
&lt;/h2&gt;

&lt;p&gt;🔑 You can subscribe Lambda functions to automatically read batches of records off your Kinesis stream and process them if records are detected on the stream.&lt;/p&gt;

&lt;p&gt;🔧 In this module, you’ve set up a DynamoDB table for storing unicorn data, a dead-letter queue (DLQ) for receiving failed messages, and a Lambda function to read data from the Kinesis Data Stream and store it in DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stream Processing
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/01-setup.html" rel="noopener noreferrer"&gt;SetUp&lt;/a&gt; section, we set up all the necessary services and roles required for the Lambda function WildRydesStreamProcessor to read messages from the Amazon Kinesis Data Stream wildrydes. This function processes the records and inserts the data into the Amazon DynamoDB table UnicornSensorData.&lt;/p&gt;

&lt;p&gt;Lambda reads records from the data stream and invokes your function synchronously with an event that contains stream records. Lambda reads records in batches and invokes your function to process records from the batch. Each batch contains records from a single shard/data stream. Follow this &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html" rel="noopener noreferrer"&gt;link&lt;/a&gt; to learn more about this integration.&lt;/p&gt;
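&lt;p&gt;Each invocation receives an event whose Records array carries the payloads base64-encoded. A minimal sketch of decoding one record, with the event abbreviated to only the fields the stream processor reads:&lt;/p&gt;

```javascript
// Abbreviated Kinesis event as Lambda delivers it. Only the fields the
// stream processor uses are shown; the sample payload is illustrative.
const event = {
  Records: [
    {
      kinesis: {
        partitionKey: "Shadowfax",
        data: Buffer.from(
          JSON.stringify({ Name: "Shadowfax", StatusTime: "2024-11-16 11:25:28" })
        ).toString("base64"),
      },
    },
  ],
};

// Decode exactly as buildRequestItems does: base64 to JSON to object.
const json = Buffer.from(event.Records[0].kinesis.data, "base64").toString("ascii");
const item = JSON.parse(json);
console.log(item.Name); // Shadowfax
```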

&lt;p&gt;In this module, you’ll send streaming data to the Amazon Kinesis Data Stream wildrydes using the producer.go library, use the AWS Console to monitor Lambda’s processing of records from the wildrydes stream, and query the results in Amazon DynamoDB.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Produce streaming data
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Return to your AWS Cloud9 instance and run the producer to start emitting sensor data to the stream with a unique unicorn name.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./producer -name Shadowfax -stream wildrydes -msgs 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  2. Verify Lambda execution
&lt;/h3&gt;

&lt;p&gt;Verify that the trigger is properly executing the Lambda function. View the metrics emitted by the function and inspect the output from the Lambda function.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Return to the AWS Lambda function console. Click on the Monitor tab and explore the metrics available to monitor the function.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8imikohzpr7aod9ymr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8imikohzpr7aod9ymr4.png" width="800" height="721"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on Logs to explore the function’s log output.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscssud8tqjma6mu5e1jk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscssud8tqjma6mu5e1jk.png" width="800" height="666"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click View logs in CloudWatch to explore the logs for the log group /aws/lambda/WildRydesStreamProcessor.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkemuew88uchua8q6na4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkemuew88uchua8q6na4.png" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The log groups can take a while to create, so if you see “Log Group Does Not Exist” when clicking View logs in CloudWatch, wait a few minutes and refresh.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Query the DynamoDB table
&lt;/h3&gt;

&lt;p&gt;Using the AWS Management Console, query the DynamoDB table for data for a specific unicorn. Use the producer to create data from a distinct unicorn name and verify those records are persisted.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click on Services then select DynamoDB in the Database section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Tables from the left-hand navigation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on UnicornSensorData.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the Explore table items button at the top right. Here you should see the Unicorn data for which you’re running a producer.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28xnymvl04avrlw8bfel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28xnymvl04avrlw8bfel.png" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, there is a one-to-one mapping between a Kinesis shard and a concurrent Lambda invocation. You can configure the ParallelizationFactor setting to process one shard of a Kinesis Data Stream with more than one Lambda invocation simultaneously. If you increase the number of concurrent batches per shard, Lambda still ensures in-order processing at the partition-key level. Follow the &lt;a href="https://aws.amazon.com/blogs/compute/new-aws-lambda-scaling-controls-for-kinesis-and-dynamodb-event-sources/" rel="noopener noreferrer"&gt;link&lt;/a&gt; to learn more about parallelization.&lt;/p&gt;
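&lt;p&gt;Conceptually, with a ParallelizationFactor greater than 1, records from one shard are fanned out by partition key so that records sharing a key always land in the same concurrent batch, in order. A rough sketch of that routing; the hash below is purely illustrative, not Lambda’s actual algorithm:&lt;/p&gt;

```javascript
// Illustrative only: Lambda's real routing is internal. The point is that
// records with the same partition key always map to the same concurrent
// batch, preserving per-key ordering even with parallel invocations.
function routeToBatch(partitionKey, parallelizationFactor) {
  let hash = 0;
  for (const ch of partitionKey) {
    hash = (hash * 31 + ch.charCodeAt(0)) % 1000003;
  }
  return hash % parallelizationFactor;
}

const keys = ["Shadowfax", "Bucephalus", "Shadowfax"];
const batches = keys.map((k) => routeToBatch(k, 2));

// The two "Shadowfax" records land in the same batch.
console.log(batches[0] === batches[2]); // true
```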

&lt;h2&gt;
  
  
  ⭐ Recap
&lt;/h2&gt;

&lt;p&gt;🔑 You can subscribe Lambda functions to automatically read batches of records off your Kinesis stream and process them if records are detected on the stream.&lt;/p&gt;

&lt;p&gt;🔧 In this module, you’ve created a Lambda function that reads from the Kinesis Data Stream wildrydes and saves each row to DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Error Handling
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;In this module, you’ll set up AWS Lambda to process data from the wildrydes stream created earlier and handle errors encountered while processing it. There are a couple of approaches to error handling.&lt;/p&gt;

&lt;p&gt;Resources&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/03-error-handling.html#error_handling_with_retry" rel="noopener noreferrer"&gt;Error Handling with Retry Settings&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/03-error-handling.html#error_handling_with_bisect_on_batch" rel="noopener noreferrer"&gt;Error Handling with Bisect On Batch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  1. Error Handling with Retry Settings
&lt;/h3&gt;

&lt;p&gt;AWS Lambda can reprocess batches of messages from Kinesis Data Streams when an error occurs in one of the items in the batch. You can configure the number of retries via Retry attempts and/or Maximum age of record; the batch is retried until the retry attempts are exhausted or the records in the batch expire. You can also configure an On-failure destination, which Lambda uses to send metadata about your failed invocation to either an Amazon SQS queue or an Amazon SNS topic. There are typically two kinds of errors in a data stream. The first category is transient errors, which are temporary and are successfully processed with retry logic. The second category is poison pills (bad data quality, or data that raises an exception in the Lambda code), which are permanent. In that case, Lambda retries for the configured number of attempts and then discards the records to the On-failure destination.&lt;/p&gt;
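&lt;p&gt;The poller’s decision can be sketched as: retry the batch while attempts remain and the records are younger than the maximum age, otherwise discard to the on-failure destination. This is a simplified model, not Lambda’s internal code:&lt;/p&gt;

```javascript
// Simplified model of the retry decision (not Lambda's actual source):
// a failing batch is retried while attempts remain and the records are
// younger than the maximum age; otherwise the invocation metadata is
// sent to the on-failure destination.
function shouldRetry(attempt, recordAgeSeconds, maxRetryAttempts, maxRecordAgeSeconds) {
  if (attempt >= maxRetryAttempts) return false;
  if (recordAgeSeconds > maxRecordAgeSeconds) return false;
  return true;
}

// With Retry attempts = 2 and Maximum age of record = 60:
console.log(shouldRetry(0, 10, 2, 60)); // true  (retry allowed)
console.log(shouldRetry(2, 10, 2, 60)); // false (attempts exhausted)
console.log(shouldRetry(1, 90, 2, 60)); // false (records too old)
```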

&lt;p&gt;To simulate a poison pill message, we will introduce an error record in the streaming data and throw an error when it is found in a message. In the real world, this might be a validation step or a call to another service that expects certain information in the record. This is the code change that will be introduced in the Lambda function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (item.InputData.toLowerCase().includes(errorString)) {
    console.log("Error record is = ", item);
    throw new Error("kaboom");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, choose Services then select Lambda under Compute. Alternatively, you can use the search bar and type Lambda in the search dialog box.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the WildRydesStreamProcessor function&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll down to the Function Code section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Double click the index.js file to open it in the editor&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy and paste the JavaScript code below into the code editor, replacing all of the existing code.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"use strict";

const AWS = require("aws-sdk");

const dynamoDB = new AWS.DynamoDB.DocumentClient();
const tableName = process.env.TABLE_NAME;

// This is used to mimic poison pill messages
const errorString = "error";

// Entrypoint for Lambda Function
exports.handler = function (event, context, callback) {
  console.log(
    "Number of Records sent for each invocation of Lambda function = ",
    event.Records.length
  );

  const requestItems = buildRequestItems(event.Records);
  const requests = buildRequests(requestItems);

  Promise.all(requests)
    .then(() =&amp;gt; callback(null, `Delivered ${event.Records.length} records`))
    .catch(callback);
};

// Build DynamoDB request payload
function buildRequestItems(records) {
  return records.map((record) =&amp;gt; {
    const json = Buffer.from(record.kinesis.data, "base64").toString("ascii");
    const item = JSON.parse(json);

    // Check for the error marker and throw. In your use case this could
    // be a validation step.
    if (item.InputData.toLowerCase().includes(errorString)) {
      console.log("Error record is = ", item);
      throw new Error("kaboom");
    }

    return {
      PutRequest: {
        Item: item,
      },
    };
  });
}

function buildRequests(requestItems) {
  const requests = [];

  // Batch write 25 request items from the beginning of the list at a time
  while (requestItems.length &amp;gt; 0) {
    const request = batchWrite(requestItems.splice(0, 25));
    requests.push(request);
  }

  return requests;
}

// Batch write items into the DynamoDB table, retrying unprocessed items
// with exponential backoff
function batchWrite(requestItems, attempt = 0) {
  const params = {
    RequestItems: {
      [tableName]: requestItems,
    },
  };

  let delay = 0;

  if (attempt &amp;gt; 0) {
    delay = 50 * Math.pow(2, attempt);
  }

  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      dynamoDB
        .batchWrite(params)
        .promise()
        .then(function (data) {
          if (data.UnprocessedItems.hasOwnProperty(tableName)) {
            return batchWrite(data.UnprocessedItems[tableName], attempt + 1);
          }
        })
        .then(resolve)
        .catch(reject);
    }, delay);
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Deploy to deploy the changes to the WildRydesStreamProcessor Lambda function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove the existing Kinesis Data Stream mapping by clicking the Configuration tab above the code editor. (This step is needed only if there is an existing Kinesis Data Stream mapping or any other event source mapping present for the Lambda function.) In the Configuration tab, select Kinesis:wildrydes (Enabled); if the trigger is not shown as enabled, press refresh. Delete the trigger.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjeri1wfeo87hcp0rfjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjeri1wfeo87hcp0rfjk.png" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Add a new Kinesis Data Stream mapping by clicking the Configuration Tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the Triggers section and Add trigger button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Kinesis for the service and wildrydes from Kinesis Stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the Batch size to 10 and the Starting position to Latest. This small batch size will make AWS Lambda’s error handling easier to follow in the CloudWatch logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the Batch window to 15. This window batches incoming messages by waiting for up to 15 seconds. By default, AWS Lambda polls messages from the Amazon Kinesis Data Stream every second.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the Additional settings section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under On-failure destination add the ARN of the wildrydes-queue SQS queue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change Retry attempts to 2 and Maximum age of record to 60.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6xmvnjeqra5gbvdok4q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6xmvnjeqra5gbvdok4q.png" width="778" height="1223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Leave the rest of the fields to default values.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Add to create the trigger.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the refresh button until creation is complete and the trigger shows as Enabled.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchf7es7tksdt9otx13fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchf7es7tksdt9otx13fa.png" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Return to the AWS Cloud9 environment and insert data into the Kinesis Data Stream by running the producer binary.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./producer -stream wildrydes -error yes -msgs 9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Return to the AWS Lambda function console. Click on the Monitor tab and explore the metrics available to monitor the function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click View logs in CloudWatch to explore the logs for the log group /aws/lambda/WildRydesStreamProcessor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the logs, you can observe the error and see that the same batch is retried twice (as we configured Retry attempts to 2).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffknf6tw7xkfczhre06tp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffknf6tw7xkfczhre06tp.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Since the entire batch failed, you should not see any new records in the DynamoDB table UnicornSensorData. You will see only the 20 records that were inserted in the previous section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optionally, you can monitor the SQS queue by following these steps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, choose Services then search for Simple Queue Service and select the Simple Queue Service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There will be a message in the wildrydes-queue. This is the discarded batch that had one permanent error in the batch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click wildrydes-queue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Send and receive messages in the top right corner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Poll for messages in the bottom right corner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You’ll observe one message. Click the message ID and choose the Body tab. You can see all the details of the discarded batch. Notice that the entire batch of messages (size: 9) is discarded even though there was only one error message.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the checkbox beside the message ID.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the Delete button. This deletes the message from the SQS queue. This step is optional and only serves to keep the SQS queue empty.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  2. Error Handling with Bisect On Batch settings
&lt;/h3&gt;

&lt;p&gt;The retry setting above discards the entire batch of records even if there is only one bad record in the batch. The Bisect On Batch error-handling feature of AWS Lambda splits the batch in two and retries the half-batches separately. The process continues recursively until there is a single item in a batch or the messages are processed successfully.&lt;/p&gt;
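&lt;p&gt;The recursive splitting can be sketched as: process the batch; on failure, if it has more than one record, split it in half and recurse, so a single poison record is eventually isolated. This is a simplified model of the feature, not Lambda’s implementation:&lt;/p&gt;

```javascript
// Simplified model of Bisect On Batch: on failure, split the batch in
// half and retry each half, recursing until the poison record is
// isolated in a batch of one and discarded to the DLQ.
function processWithBisect(records, processBatch, discarded) {
  try {
    processBatch(records);
  } catch (err) {
    if (records.length === 1) {
      discarded.push(records[0]); // poison record goes to the on-failure destination
      return;
    }
    const mid = Math.ceil(records.length / 2);
    processWithBisect(records.slice(0, mid), processBatch, discarded);
    processWithBisect(records.slice(mid), processBatch, discarded);
  }
}

const stored = [];
const discarded = [];
const batch = ["ok1", "ok2", "error", "ok3", "ok4"];

processWithBisect(
  batch,
  (recs) => {
    if (recs.includes("error")) throw new Error("kaboom");
    recs.forEach((r) => stored.push(r));
  },
  discarded
);

console.log(stored);    // [ 'ok1', 'ok2', 'ok3', 'ok4' ]
console.log(discarded); // [ 'error' ]
```

&lt;p&gt;Note that records in a failing half may be reprocessed on later attempts, so processing should be idempotent (Kinesis delivery to Lambda is at-least-once).&lt;/p&gt;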

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;There are no code changes to the WildRydesStreamProcessor Lambda function. The only change is around setting Kinesis Data Stream configuration. Follow the below steps to remove and add a new Kinesis Data Stream mapping.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove the existing Kinesis Data Stream mapping by clicking the Configuration Tab above the code editor. (This step is needed only if there is an existing Kinesis Data Stream mapping or any other Event Source Mapping present for the Lambda function). In the Configuration Tab select the Kinesis:wildrydes (Enabled) and Delete the trigger.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw15le0yv1yfm8ba2ycwd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw15le0yv1yfm8ba2ycwd.png" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Add a new Kinesis Data Stream mapping by clicking the Configuration Tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the Triggers section and click the Add trigger button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Kinesis for the service and wildrydes from Kinesis Stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set Batch size to 10 and Starting position to Latest. This small batch size makes it easier to follow AWS Lambda’s error handling in the CloudWatch logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set Batch window to 15. This window batches the incoming messages by waiting for up to 15 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the Additional settings section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under On-failure destination add the ARN of the wildrydes-queue SQS queue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change Retry attempts to 2 and Maximum age of record to 60.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the box Split batch on error.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmscdbwode47cwl6is3y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmscdbwode47cwl6is3y1.png" width="800" height="1222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Leave the rest of the fields to default values.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Add to create the trigger.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the refresh button until creation is complete and the trigger shows as Enabled.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuuiecdf0f5ijdqrk8e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuuiecdf0f5ijdqrk8e8.png" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Return to the AWS Cloud9 environment and insert data into the Kinesis Data Stream by running the producer binary.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./producer -stream wildrydes -error yes -msgs 9&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Return to the AWS Lambda function console. Click on the Monitor tab and explore the metrics available to monitor the function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click View logs in CloudWatch to explore the log group /aws/lambda/WildRydesStreamProcessor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the logs you can observe that there is an error, and the same batch is split into two halves and processed. This splitting continues recursively until there is a single item or the messages are processed successfully.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn9zgex783mmzq6k3pkv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn9zgex783mmzq6k3pkv.png" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Since Bisect On Batch splits the batch and processes records, you should notice new records in the DynamoDB table UnicornSensorData. There should be a total of 28 items in UnicornSensorData (1 record is an error record).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optionally, you can monitor the SQS queue by following the steps below.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, choose Services, then search for Simple Queue Service and select it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There will be a message in the wildrydes-queue. This is the discarded batch that contained the permanently failing record.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click wildrydes-queue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Send and receive messages in the top right corner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Poll for messages in the bottom right corner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You’ll observe one message. Click the message ID and choose the Body tab. You can see all the details of the discarded batch. Notice that this time only one message is discarded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the checkbox beside the message ID.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the Delete button to delete the message from the SQS queue. This step is optional and is needed only to keep the SQS queue empty.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Analytics with Tumbling Windows
&lt;/h2&gt;

&lt;p&gt;In this module, you’ll use the tumbling window feature of AWS Lambda to aggregate sensor data from a unicorn in the fleet in real-time. The Lambda function will read from the Amazon Kinesis stream, calculate the total distance traveled per minute for a specific unicorn, and store the results in an Amazon DynamoDB table.&lt;/p&gt;

&lt;p&gt;Tumbling windows are distinct time windows that open and close at regular intervals. By default, Lambda invocations are stateless — you cannot use them for processing data across multiple continuous invocations without an external database. However, with tumbling windows, you can maintain your state across invocations. This state contains the aggregate result of the messages previously processed for the current window. Your state can be a maximum of 1 MB per shard. If it exceeds that size, Lambda terminates the window early. Each record of a stream belongs to a specific window. A record is processed only once, when Lambda processes the window that the record belongs to. In each window, you can perform calculations, such as a sum or average, at the partition key level within a shard.&lt;/p&gt;
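&lt;p&gt;The state hand-off can be sketched as follows. This is a simplified model for illustration only: the records carry a plain Distance field instead of base64-encoded Kinesis payloads, and the DynamoDB write on the final invocation is omitted:&lt;/p&gt;

```javascript
// Sketch of how Lambda threads state across tumbling-window invocations.
// Each invocation receives the state returned by the previous one; the last
// invocation of the window has isFinalInvokeForWindow set to true.
function handler(event) {
  // First invocation of a window starts with an empty state object.
  const state = Object.keys(event.state || {}).length
    ? event.state
    : { distance: 0 };

  for (const record of event.Records) {
    state.distance += record.Distance; // aggregate at the partition-key level
  }

  if (event.isFinalInvokeForWindow) {
    return { finalDistance: state.distance }; // would be written to DynamoDB
  }
  return { state }; // carried into the next invocation of the same window
}

// Three invocations within one 60-second window:
let out = handler({ state: {}, Records: [{ Distance: 3 }, { Distance: 4 }] });
out = handler({ state: out.state, Records: [{ Distance: 5 }] });
out = handler({
  state: out.state,
  Records: [{ Distance: 8 }],
  isFinalInvokeForWindow: true,
});
console.log(out.finalDistance); // 20
```

&lt;p&gt;The full handler used later in this module follows the same pattern, with the aggregate persisted to DynamoDB on the final invocation.&lt;/p&gt;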

&lt;p&gt;Resources&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/04-analytics-with-tumbling-window.html#create_dynamodb_table" rel="noopener noreferrer"&gt;Create a DynamoDB table&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/04-analytics-with-tumbling-window.html#create_iam_role" rel="noopener noreferrer"&gt;Create an IAM Role&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/04-analytics-with-tumbling-window.html#create_lambda" rel="noopener noreferrer"&gt;Create a Lambda function&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/04-analytics-with-tumbling-window.html#monitor_lambda" rel="noopener noreferrer"&gt;Monitor the Lambda function&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/04-analytics-with-tumbling-window.html#query_dynamodb_table" rel="noopener noreferrer"&gt;Query the DynamoDB table&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Create a DynamoDB table
&lt;/h3&gt;

&lt;p&gt;Use the &lt;a href="https://console.aws.amazon.com/dynamodbv2/home" rel="noopener noreferrer"&gt;Amazon DynamoDB&lt;/a&gt; console to create a new DynamoDB table. Call your table UnicornAggregation and give it a Partition key called name of type String and a Sort key called windowStart of type String. Use the defaults for all other settings.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, choose Services then select DynamoDB under Database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create table.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter UnicornAggregation for the Table name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter name for the Partition key and select String for the key type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter windowStart for the Sort key and select String for the key type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave the Default settings box checked and choose Create table.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr204x7e8dy3vzc7ujta8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr204x7e8dy3vzc7ujta8.png" width="800" height="756"&gt;&lt;/a&gt;&lt;/p&gt;
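&lt;p&gt;For reference, the console steps above correspond to a CreateTable request of roughly this shape. This is a sketch only: the BillingMode value is an assumption about what the Default settings option selects, and actually creating the table requires the AWS SDK and credentials:&lt;/p&gt;

```javascript
// Hypothetical request object mirroring the console wizard above.
const createTableParams = {
  TableName: "UnicornAggregation",
  KeySchema: [
    { AttributeName: "name", KeyType: "HASH" },         // partition key
    { AttributeName: "windowStart", KeyType: "RANGE" }, // sort key
  ],
  AttributeDefinitions: [
    { AttributeName: "name", AttributeType: "S" },
    { AttributeName: "windowStart", AttributeType: "S" },
  ],
  BillingMode: "PAY_PER_REQUEST", // assumption: console default capacity mode
};
console.log(createTableParams.TableName);
```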

&lt;h3&gt;
  
  
  2. Create an IAM Role for the Lambda function
&lt;/h3&gt;

&lt;p&gt;Use the &lt;a href="https://console.aws.amazon.com/iamv2/home" rel="noopener noreferrer"&gt;IAM&lt;/a&gt; console to create a new role. Name it unicorn-aggregation-role and select Lambda for the role type. Attach the managed policy called AWSLambdaKinesisExecutionRole to this role in order to grant permissions for your function to read from Amazon Kinesis streams and to log to Amazon CloudWatch Logs. Create a policy that allows dynamodb:PutItem access to the DynamoDB table created in the last section and attach it to the new role.&lt;/p&gt;
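&lt;p&gt;The policy you build in the visual editor below is equivalent to this JSON document. The Region and Account ID shown are the placeholder values used later in these instructions:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UnicornAggregation"
    }
  ]
}
```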

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;From the AWS Console, click on Services and then select IAM in the Security, Identity &amp;amp; Compliance section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Policies from the left navigation and then click Create Policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using the Visual editor, we’re going to create an IAM policy to allow our Lambda function access to the DynamoDB table created in the last section. To begin, click Service, begin typing DynamoDB in Find a service, and click DynamoDB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type PutItem in Filter actions, and tick the PutItem checkbox.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Resources, click Add ARN in table, and construct the ARN of the DynamoDB table you created in the previous section by specifying the Region, Account, and Table Name.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;In &lt;strong&gt;Region&lt;/strong&gt;, enter the AWS Region in which you created the DynamoDB table in the previous section, e.g.: us-east-1. In &lt;strong&gt;Account&lt;/strong&gt;, enter your AWS Account ID which is a twelve digit number, e.g.: 123456789012. To find your AWS account ID number in the AWS Management Console, click on &lt;strong&gt;Support&lt;/strong&gt; in the navigation bar in the upper-right, and then click &lt;strong&gt;Support Center&lt;/strong&gt;. Your currently signed in account ID appears in the upper-right corner below the Support menu. In &lt;strong&gt;Table Name&lt;/strong&gt;, enter &lt;code&gt;UnicornAggregation&lt;/code&gt;. You should see your ARN in the &lt;strong&gt;Specify ARN for table&lt;/strong&gt; field and it should look similar to:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2t3reaf1sk3kqzvrjdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2t3reaf1sk3kqzvrjdl.png" width="538" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click Add.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnz2rpzy9lj71lv92qdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnz2rpzy9lj71lv92qdd.png" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Next: Tags.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Next: Review.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter unicorn-aggregation-ddb-write-policy in the Name field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Roles from the left navigation and then click Create role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Lambda for the role type from the AWS service section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Next: Permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Begin typing AWSLambdaKinesisExecutionRole in the Filter text box and check the box next to that role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Begin typing unicorn-aggregation-ddb-write-policy in the Filter text box and check the box next to that role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Next.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter unicorn-aggregation-role for the Role name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Begin typing unicorn-aggregation-role in the Search text box.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkgym26e8an0l15c5cx1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkgym26e8an0l15c5cx1.png" width="800" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on unicorn-aggregation-role and it should look similar to:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon427ilydsc8icx9ipn8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon427ilydsc8icx9ipn8.png" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Create a Lambda function
&lt;/h3&gt;

&lt;p&gt;Use the Lambda console to create a Lambda function called WildRydesAggregator that will be triggered whenever a new record is available in the wildrydes stream. Use the provided index.js implementation for your function code. Create an environment variable with the key TABLE_NAME and the value UnicornAggregation. Configure the function to use the unicorn-aggregation-role role created in the previous sections.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, choose Services then select Lambda under Compute.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the three lines icon to expand the service menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter WildRydesAggregator in the Function name field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Node.js 14.x from Runtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select unicorn-aggregation-role from the Existing role dropdown.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqcmdsueczq92j3x8jju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqcmdsueczq92j3x8jju.png" width="800" height="650"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Create function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll down to the Code source section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Double click the index.js file to open it in the editor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy and paste the JavaScript code below into the code editor, replacing all of the existing code:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;const AWS = require("aws-sdk");
AWS.config.update({ region: process.env.AWS_REGION });
const docClient = new AWS.DynamoDB.DocumentClient();
const TableName = process.env.TABLE_NAME;

function isEmpty(obj) {
  return Object.keys(obj).length === 0;
}

exports.handler = async (event) =&amp;gt; {
  // Save aggregation result in the final invocation
  if (event.isFinalInvokeForWindow) {
    console.log("Final: ", event);
    const params = {
      TableName,
      Item: {
        windowStart: event.window.start,
        windowEnd: event.window.end,
        distance: Math.round(event.state.distance),
        shardId: event.shardId,
        name: event.state.name,
      },
    };
    console.log({ params });
    await docClient.put(params).promise();
  }
  console.log(JSON.stringify(event, null, 2));

  // Create the state object on first invocation or use state passed in
  let state = event.state;
  if (isEmpty(state)) {
    state = {
      distance: 0,
    };
  }
  console.log("Existing: ", state);

  // Process records with custom aggregation logic
  event.Records.map((record) =&amp;gt; {
    const payload = Buffer.from(record.kinesis.data, "base64").toString(
      "ascii"
    );
    const item = JSON.parse(payload);
    let value = item.Distance;
    console.log("Adding: ", value);
    state.distance += value;
    let unicorn = item.Name;
    console.log("Name: ", unicorn);
    state.name = unicorn;
  });

  // Return the state for the next invocation
  console.log("Returning state: ", state);
  return { state: state };
};&lt;/code&gt;&lt;/pre&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Configuration tab, select the Environment variables section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Edit and then Add environment variable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter TABLE_NAME in Key and UnicornAggregation in Value and click Save.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnmuhgyqlh8lhehtgobm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnmuhgyqlh8lhehtgobm.png" width="739" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scroll up and click on Add trigger from the Function overview section.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh03bmlj86isynxa0mcw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh03bmlj86isynxa0mcw.png" width="605" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the Trigger configuration section, select Kinesis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Kinesis stream, select wildrydes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave Batch size set to 100 and Starting position set to Latest.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the Additional settings section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is a best practice to set the retry attempts and bisect on batch settings when setting up your trigger. Change Retry attempts to 2 and select the checkbox for Split batch on error.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Tumbling window duration, enter 60. This sets the time interval for your aggregation in seconds.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkxxpvjh1pg870lj94j5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkxxpvjh1pg870lj94j5.png" width="533" height="795"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click Add and the trigger will start creating.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe2vgt6imgmwmv7ud262.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe2vgt6imgmwmv7ud262.png" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the refresh button until creation is complete and the trigger shows as Enabled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go back to the Code tab and Deploy the Lambda function. You will see a confirmation that the changes were deployed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbohk8a21ftzt917npcq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbohk8a21ftzt917npcq.png" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Monitor the Lambda function
&lt;/h3&gt;

&lt;p&gt;Verify that the trigger is properly executing the Lambda function and inspect the output from the Lambda function.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Return to your Cloud9 environment, and run the producer to start emitting sensor data to the stream.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;./producer&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click on the Monitor tab. Next, click on View logs in CloudWatch to explore the logs in CloudWatch for the log group /aws/lambda/WildRydesAggregator.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the most recent Log stream.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnn361nu5v3me718pw4h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnn361nu5v3me718pw4h.png" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You can use the Filter events bar at the top to quickly search for matching values within your logs. Use the filter bar, or scroll down, to examine the log events showing the Existing: distance state, Adding: a new distance count, and the Returning state: after the Lambda function is invoked.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni2rvnxq0djgpl8k0j6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni2rvnxq0djgpl8k0j6f.png" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Because we set the tumbling window to 60 seconds, every minute the Final: distance state is aggregated and passed to the DynamoDB table. To see the final distance count, use the filter bar to search for "Final:".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After expanding this log, you will see isFinalInvokeForWindow is set to true, along with the state data that will be passed to DynamoDB.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffykrh5b8rsaxikbbg6sg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffykrh5b8rsaxikbbg6sg.png" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Query the DynamoDB table
&lt;/h3&gt;

&lt;p&gt;Using the AWS Management Console, query the DynamoDB table to verify the per-minute distance totals are aggregated for the specified unicorn.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click on Services then select DynamoDB in the Database section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Tables from the left-hand navigation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on UnicornAggregation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the View Items button on the top right.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Run to return the items in the table. Here you should see each per-minute data point for the unicorn for which you’re running a producer. Verify the state information from the CloudWatch log you viewed was successfully passed to the DynamoDB table.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ctd1zd80gbslxhvxdtz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ctd1zd80gbslxhvxdtz.png" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ⭐ Recap
&lt;/h2&gt;

&lt;p&gt;🔑 The tumbling window feature allows a streaming data source to pass state between multiple Lambda invocations. During the window, a state is passed from one invocation to the next, until a final invocation at the end of the window. This allows developers to calculate aggregates in near-real time, and makes it easier to calculate sums, averages, and counts on values across multiple batches of data. This feature provides an alternative way to build analytics in addition to services like Amazon Kinesis Data Analytics.&lt;/p&gt;

&lt;p&gt;🔧 In this module, you’ve created a Lambda function that reads from the Kinesis stream of unicorn data, aggregates the distance traveled per-minute, and saves each row to DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Error Handling with Checkpoint and Bisect On Batch
&lt;/h2&gt;

&lt;p&gt;While Bisect On Batch is helpful in narrowing down to the problematic messages, it can result in processing previously successful messages more than once. With the Checkpoint feature you can return the sequence identifier of the failed message. This gives you more precise control over how processing of the stream continues. For example, in a batch of nine messages where the fifth message fails, Lambda processes the batch of messages, items 1–9. The fifth message fails and the function returns the failed sequence identifier. The batch is then retried only from message 5 to 9.&lt;/p&gt;
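&lt;p&gt;The checkpoint contract can be sketched in isolation. This is illustrative only: handler and processRecord here are hypothetical stand-ins, and the sequence numbers are simplified to small strings:&lt;/p&gt;

```javascript
// Sketch of a partial-batch response: return the sequence number of the
// first failed record, and retries resume from that record onward.
function handler(event, processRecord) {
  for (const record of event.Records) {
    try {
      processRecord(record);
    } catch (err) {
      // Checkpoint: everything before this record counts as successful.
      return {
        batchItemFailures: [
          { itemIdentifier: record.kinesis.sequenceNumber },
        ],
      };
    }
  }
  return { batchItemFailures: [] }; // whole batch succeeded
}

// Batch of 9 where the 5th record fails: only records 5–9 are retried.
const records = Array.from({ length: 9 }, (_, i) => ({
  kinesis: { sequenceNumber: String(i + 1), data: i === 4 ? "bad" : "ok" },
}));
const result = handler({ Records: records }, (r) => {
  if (r.kinesis.data === "bad") throw new Error("kaboom");
});
console.log(result); // { batchItemFailures: [ { itemIdentifier: '5' } ] }
```

&lt;p&gt;Every record before the returned identifier is treated as successfully processed, so only the tail of the batch is retried.&lt;/p&gt;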

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, choose Services then select Lambda under Compute. Alternatively, you can use the search bar and type Lambda in the search dialog box.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the changes to the WildRydesStreamProcessor function. The changes involve storing the Kinesis sequence number (kinesis.sequenceNumber) of the record being processed and returning the sequence number of the failed record in the catch block.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'use strict';

const AWS = require('aws-sdk');

const dynamoDB = new AWS.DynamoDB.DocumentClient();
const tableName = process.env.TABLE_NAME;
const errorString = 'error';

exports.handler = function(event, context, callback) {
  console.log('Number of Records sent for each invocation of Lambda function = ', event.Records.length);

  const requestItems = buildRequestItems(event.Records);
  const requests = buildRequests(requestItems);

  Promise.all(requests)
    .then(() =&amp;gt; callback(null, `Delivered ${event.Records.length} records`))
    .catch(callback);
};

function buildRequestItems(records) {
  let sequenceNumber = 0;
  try {
    return records.map((record) =&amp;gt; {
      sequenceNumber = record.kinesis.sequenceNumber;
      console.log('Processing record with Sequence Number = ', sequenceNumber);
      const json = Buffer.from(record.kinesis.data, 'base64').toString('ascii');
      const item = JSON.parse(json);

      if (item.InputData.toLowerCase().includes(errorString)) {
        console.log('Error record is = ', item);
        throw new Error('kaboom');
      }

      return {
        PutRequest: {
          Item: item,
        },
      };
    });
  } catch (err) {
    console.log('Returning Failure Sequence Number =', sequenceNumber);
    return { "batchItemFailures": [ { "itemIdentifier": sequenceNumber } ] };
  }
}

function buildRequests(requestItems) {
  const requests = [];

  while (requestItems.length &amp;gt; 0) {
    const request = batchWrite(requestItems.splice(0, 25));
    requests.push(request);
  }

  return requests;
}

function batchWrite(requestItems, attempt = 0) {
  const params = {
    RequestItems: {
      [tableName]: requestItems,
    },
  };

  let delay = 0;
  if (attempt &amp;gt; 0) {
    delay = 50 * Math.pow(2, attempt);
  }

  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      dynamoDB.batchWrite(params).promise()
        .then(function(data) {
          if (data.UnprocessedItems.hasOwnProperty(tableName)) {
            return batchWrite(data.UnprocessedItems[tableName], attempt + 1);
          }
        })
        .then(resolve)
        .catch(reject);
    }, delay);
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
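&lt;p&gt;The retry delay in batchWrite grows exponentially with each attempt (100 ms, 200 ms, 400 ms, and so on). As a hedged sketch, that backoff can be isolated into a small helper for experimentation; backoffDelay is an illustrative name, not part of the workshop code:&lt;/p&gt;

```javascript
'use strict';

// Hedged sketch: the exponential backoff used by batchWrite above,
// isolated as a helper. backoffDelay is illustrative, not workshop code.
function backoffDelay(attempt) {
  // First call (attempt 0) runs immediately; retries wait 50 * 2^attempt ms.
  return attempt > 0 ? 50 * Math.pow(2, attempt) : 0;
}

console.log([0, 1, 2, 3].map(backoffDelay)); // [ 0, 100, 200, 400 ]
```

&lt;p&gt;In production, capping the delay or the number of attempts would be a sensible extension.&lt;/p&gt;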

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Do not forget to click Deploy to deploy the changes to the WildRydesStreamProcessor Lambda function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Do not forget to remove the existing Kinesis Data Stream mapping by clicking the Configuration tab above the code editor. (This step is needed only if an existing Kinesis Data Stream mapping, or any other event source mapping, is present for the Lambda function.) In the Configuration tab, select the Kinesis: wildrydes (Enabled) trigger and delete it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a new Kinesis Data Stream mapping by clicking the Configuration tab. The mapping configuration is the same as before, except that you should additionally check the Report batch item failures option.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test your changes by inserting data into Kinesis Data Stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitor CloudWatch logs and Query DynamoDB by repeating steps in the sections &lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/04-analytics-with-tumbling-window.html#monitor_lambda" rel="noopener noreferrer"&gt;Monitor&lt;/a&gt; and &lt;a href="https://data-processing.serverlessworkshops.io/stream-processing/04-analytics-with-tumbling-window.html#query_dynamodb_table" rel="noopener noreferrer"&gt;Query&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
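&lt;p&gt;To insert a test record by hand, the request for &lt;code&gt;kinesis.putRecord&lt;/code&gt; can be built as below. This is a hedged sketch: buildPutRecordParams is an illustrative helper, the record shape simply mirrors the items the Lambda function expects, and actually sending it would additionally require the AWS SDK and credentials.&lt;/p&gt;

```javascript
'use strict';

// Hedged sketch: build a PutRecord request for the wildrydes stream.
// buildPutRecordParams is an illustrative helper, not part of the workshop code.
function buildPutRecordParams(streamName, item) {
  return {
    StreamName: streamName,
    PartitionKey: item.Name,    // keeps one unicorn's records on one shard
    Data: JSON.stringify(item), // the SDK base64-encodes this on the wire
  };
}

// An item whose InputData contains the word "error" triggers the kaboom path above:
const params = buildPutRecordParams('wildrydes', {
  Name: 'Shadowfax',
  InputData: 'error injected on purpose',
});

// Send with: new AWS.Kinesis().putRecord(params).promise()
// The Lambda consumer receives the payload base64-encoded, so decoding it
// mirrors what buildRequestItems does:
const encoded = Buffer.from(params.Data).toString('base64');
const item = JSON.parse(Buffer.from(encoded, 'base64').toString('ascii'));
console.log(item.InputData.toLowerCase().includes('error')); // true
```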

&lt;h2&gt;
  
  
  Enhanced Fan Out
&lt;/h2&gt;

&lt;p&gt;Enhanced fan-out enables consumers to receive records from a stream with throughput of up to 2 MB of data per second per shard. This throughput is dedicated, which means that consumers that use enhanced fan-out don’t have to contend with other consumers that are receiving data from the stream. Kinesis Data Streams pushes data records from the stream to consumers that use enhanced fan-out. Therefore, these consumers don’t need to poll for data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi07zxohigx1ajapzpmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi07zxohigx1ajapzpmh.png" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For standard iterators, Lambda polls each shard in your Kinesis stream for records at a base rate of once per second. When more records are available, Lambda keeps processing batches until the function catches up with the stream. The event source mapping shares read throughput with other consumers of the shard.&lt;/p&gt;

&lt;p&gt;To minimize latency and maximize read throughput, create a data stream consumer with enhanced fan-out. Enhanced fan-out consumers get a dedicated connection to each shard that doesn’t impact other applications reading from the stream. Stream consumers use HTTP/2 to reduce latency by pushing records to Lambda over a long-lived connection and by compressing request headers. You can create a stream consumer with the Kinesis RegisterStreamConsumer API.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kinesis register-stream-consumer --consumer-name con1 \
--stream-arn arn:aws:kinesis:us-east-2:123456789012:stream/lambda-stream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When you configure Event source mapping, use the consumer name created above as the consumer.&lt;/p&gt;
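&lt;p&gt;In code, the same wiring amounts to pointing the event source mapping at the consumer ARN returned by register-stream-consumer instead of the stream ARN. A hedged Node.js sketch (the ARN below is a placeholder derived from the CLI example above, and buildEfoMappingParams is an illustrative helper):&lt;/p&gt;

```javascript
'use strict';

// Hedged sketch: parameters for Lambda's CreateEventSourceMapping call when
// using an enhanced fan-out consumer. buildEfoMappingParams is illustrative,
// and the consumer ARN is a placeholder.
function buildEfoMappingParams(consumerArn, functionName) {
  return {
    EventSourceArn: consumerArn, // the consumer ARN, not the stream ARN
    FunctionName: functionName,
    StartingPosition: 'LATEST',
    BatchSize: 100,
  };
}

const params = buildEfoMappingParams(
  'arn:aws:kinesis:us-east-2:123456789012:stream/lambda-stream/consumer/con1:1234567890',
  'WildRydesStreamProcessor'
);

console.log(params.EventSourceArn.includes('/consumer/')); // true
// With the AWS SDK: new AWS.Lambda().createEventSourceMapping(params).promise()
```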

&lt;p&gt;You can also try the template from &lt;a href="https://serverlessland.com/patterns/kinesis-lambda-efo" rel="noopener noreferrer"&gt;serverlessland&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Stream Aggregation
&lt;/h2&gt;

&lt;p&gt;In this module, you’ll create an Amazon Kinesis Data Analytics application to aggregate sensor data from the unicorn fleet in real-time. The application will read from the Amazon Kinesis stream, calculate the total distance traveled and minimum and maximum health and magic points for each unicorn currently on a Wild Ryde and output these aggregated statistics to an Amazon Kinesis stream every minute. In the first section, you’ll run the application from a Flink Studio notebook. In the second, optional step, you’ll learn how to deploy the application to run outside the notebook.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;The architecture for this module involves an Amazon Kinesis Data Analytics application, source and destination Amazon Kinesis streams, and the producer and consumer command-line clients.&lt;/p&gt;

&lt;p&gt;The Amazon Kinesis Data Analytics application processes data from the source Amazon Kinesis stream that we created in the previous module and aggregates it on a per-minute basis. Each minute, the application will emit data including the total distance traveled in the last minute as well as the minimum and maximum readings from health and magic points for each unicorn in our fleet. These data points will be sent to a destination Amazon Kinesis stream for processing by other components in our system.&lt;/p&gt;
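&lt;p&gt;Before wiring this up in Kinesis Data Analytics, the per-minute aggregation itself is easy to reason about in plain code. A hedged Node.js sketch of the same logic (aggregateWindow is an illustrative helper; the record shape mirrors the producer's output):&lt;/p&gt;

```javascript
'use strict';

// Hedged sketch of the per-minute (tumbling window) aggregation the
// Kinesis Data Analytics application performs. aggregateWindow is illustrative.
function aggregateWindow(records) {
  const byName = {};
  for (const r of records) {
    const agg = byName[r.Name] || (byName[r.Name] = {
      Name: r.Name,
      Distance: 0,
      MinMagicPoints: Infinity, MaxMagicPoints: -Infinity,
      MinHealthPoints: Infinity, MaxHealthPoints: -Infinity,
    });
    agg.Distance += r.Distance;
    agg.MinMagicPoints = Math.min(agg.MinMagicPoints, r.MagicPoints);
    agg.MaxMagicPoints = Math.max(agg.MaxMagicPoints, r.MagicPoints);
    agg.MinHealthPoints = Math.min(agg.MinHealthPoints, r.HealthPoints);
    agg.MaxHealthPoints = Math.max(agg.MaxHealthPoints, r.HealthPoints);
  }
  return Object.values(byName);
}

// One minute's worth of sample readings for a single unicorn:
const summary = aggregateWindow([
  { Name: 'Shadowfax', Distance: 180, MagicPoints: 170, HealthPoints: 149 },
  { Name: 'Shadowfax', Distance: 182, MagicPoints: 172, HealthPoints: 146 },
]);

console.log(summary[0].Distance);       // 362
console.log(summary[0].MinMagicPoints); // 170
```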

&lt;p&gt;During the workshop, we will use the &lt;a href="https://data-processing.serverlessworkshops.io/client/consumer.go" rel="noopener noreferrer"&gt;consumer.go&lt;/a&gt; application to consume the resulting stream. To do so, the application leverages the AWS SDK and acts as Kinesis Consumer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehnux2yk6ow70stuxwfg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehnux2yk6ow70stuxwfg.jpg" width="800" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implement the Stream Aggregation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Create a Kinesis Data Stream for the summarized events
&lt;/h3&gt;

&lt;p&gt;Use the Amazon Kinesis Data Streams console to create a new stream named wildrydes-summary with 1 shard. This stream will serve as the destination for our Kinesis Data Analytics application.&lt;/p&gt;

&lt;p&gt;A Shard is the base throughput unit of an Amazon Kinesis data stream. One shard provides a capacity of 1MB/sec data input and 2MB/sec data output. One shard can support up to 1000 PUT records per second. You will specify the number of shards needed when you create a data stream. For example, if we create a data stream with four shards then this data stream has a throughput of 4MB/sec data input and 8MB/sec data output, and allows up to 4000 PUT records per second. You can monitor shard-level metrics in Amazon Kinesis Data Streams and add or remove shards from your data stream dynamically as your data throughput changes by resharding the data stream.&lt;/p&gt;
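&lt;p&gt;The shard arithmetic above can be expressed directly. A small hedged sketch (shardCapacity is an illustrative helper, not an AWS API):&lt;/p&gt;

```javascript
'use strict';

// Hedged sketch of the shard capacity arithmetic described above.
// shardCapacity is an illustrative helper, not an AWS API.
function shardCapacity(shards) {
  return {
    inputMBps: shards * 1,           // 1 MB/sec write capacity per shard
    outputMBps: shards * 2,          // 2 MB/sec read capacity per shard
    putRecordsPerSec: shards * 1000, // up to 1000 PUT records/sec per shard
  };
}

console.log(shardCapacity(4)); // { inputMBps: 4, outputMBps: 8, putRecordsPerSec: 4000 }
```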

&lt;p&gt;✅ Step-by-step Instructions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Management Console, click Services then select Kinesis under Analytics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Get started if prompted with an introductory screen.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create data stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter wildrydes-summary into Data stream name, select the Provisioned mode, and enter 1 into Number of open shards, then click Create Kinesis stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Within 60 seconds, your Kinesis stream will be ACTIVE and ready to store real-time streaming data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  2. Create an Amazon Kinesis Data Analytics application
&lt;/h3&gt;

&lt;p&gt;In this step, we are going to build an Amazon Kinesis Data Analytics application which reads from the wildrydes stream built in the &lt;a href="https://data-processing.serverlessworkshops.io/streaming-data.html" rel="noopener noreferrer"&gt;Real-time Data Streaming module&lt;/a&gt; and emits a JSON object with the following attributes for each minute:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Attribute&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Name&lt;/td&gt;&lt;td&gt;Unicorn name&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;StatusTime&lt;/td&gt;&lt;td&gt;ROWTIME provided by Amazon Kinesis Data Analytics&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Distance&lt;/td&gt;&lt;td&gt;The sum of distance traveled by the unicorn&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;MinMagicPoints&lt;/td&gt;&lt;td&gt;The minimum data point of the &lt;em&gt;MagicPoints&lt;/em&gt; attribute&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;MaxMagicPoints&lt;/td&gt;&lt;td&gt;The maximum data point of the &lt;em&gt;MagicPoints&lt;/em&gt; attribute&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;MinHealthPoints&lt;/td&gt;&lt;td&gt;The minimum data point of the &lt;em&gt;HealthPoints&lt;/em&gt; attribute&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;MaxHealthPoints&lt;/td&gt;&lt;td&gt;The maximum data point of the &lt;em&gt;HealthPoints&lt;/em&gt; attribute&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;To do so, we will use Kinesis Data Analytics to run an &lt;a href="https://flink.apache.org/" rel="noopener noreferrer"&gt;Apache Flink&lt;/a&gt; application. To enhance our development experience, we will use Studio notebooks for Kinesis Data Analytics that are powered by &lt;a href="https://zeppelin.apache.org/" rel="noopener noreferrer"&gt;Apache Zeppelin&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Kinesis Data Analytics provides the underlying infrastructure for your Apache Flink applications. It handles core capabilities like provisioning compute resources, parallel computation, automatic scaling, and application backups (implemented as checkpoints and snapshots). You can use the high-level Flink programming features (such as operators, functions, sources, and sinks) in the same way that you use them when hosting the Flink infrastructure yourself. You can learn more about Amazon Kinesis Data Analytics for Apache Flink by checking out the corresponding &lt;a href="https://docs.aws.amazon.com/kinesisanalytics/latest/java/what-is.html" rel="noopener noreferrer"&gt;developer guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step directions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the AWS Management Console, click Services, select the Kinesis service under Analytics&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the Analytics Applications tab in the sidebar on the left. If you cannot see the sidebar, click the Hamburger icon (three horizontal lines) to open it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2FNaN%2F1%2Ab31hiO4ynbDLRrXWEFF4aQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2FNaN%2F1%2Ab31hiO4ynbDLRrXWEFF4aQ.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select the Studio tab&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the Create Studio Notebook button. Keep the creation method to the default value.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Name the notebook flink-analytics&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Permissions panel, click the Create button to create a new AWS Glue database. The AWS Glue service console will open in a new browser tab.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the AWS Glue console, create a new database named default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Switch back to your tab with the Studio notebook creation process and click Create Studio notebook.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
&lt;li&gt;Once the notebook is created, click the Edit IAM permissions button in the Studio notebook details section.&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
&lt;li&gt;Make sure to select the default database in the AWS Glue database dropdown. Click the Choose source button in the Included sources in IAM policy section.&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
&lt;li&gt;Click Browse to select the wildrydes Kinesis data stream as a source. Afterwards, click Save changes.&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
&lt;li&gt;Click Choose destination followed by Browse to select the wildrydes-summary Kinesis data stream as an output. Afterwards, click Save changes.&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Save changes a second time to update the IAM policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the IAM Role link to open it in a separate tab. We need to attach an additional managed policy to the role that allows us to delete Glue tables; we will reuse this policy when deploying the notebook as a Flink application later.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click Add permissions, then Attach Policies, and then Create Policy. Use the Visual editor. Choose Glue as Service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the Actions, select GetPartitions from the Read subsection and DeleteTable from the Write subsection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the Resources, click Add ARN for catalog and enter your region (e.g. eu-west-1)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the database, click Add ARN, enter your region and specify default for database name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, for the table, click Add ARN, enter your region, specify default as database, and select Any for the Table name.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the Next: Tags button and then Next: Review. Specify kinesis-analytics-service-flink-analytics-glue as the policy name and click the Create Policy button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attach the policy you have just created to the notebook IAM role by selecting it from the list and clicking the Attach Policy button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Switch back to the KDA Studio notebook tab and click Run to run the notebook. As soon as it is in the Running state (this takes a few minutes), click the Open in Apache Zeppelin button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Apache Zeppelin notebook, create a new note. Name it flinkagg and insert the following SQL command. Replace the empty aws.region values with your actual region (e.g. eu-west-1).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%flink.ssql(type=update)

DROP TABLE IF EXISTS wildrydes;
CREATE TABLE wildrydes (
  Distance double,
  HealthPoints INT,
  Latitude double,
  Longitude double,
  MagicPoints INT,
  Name VARCHAR,
  StatusTime AS proctime()
) WITH (
  'connector' = 'kinesis',
  'stream' = 'wildrydes',
  'aws.region' = '',
  'scan.stream.initpos' = 'LATEST',
  'format' = 'json'
);

DROP TABLE IF EXISTS wildrydes_summary;
CREATE TABLE wildrydes_summary (
  Name VARCHAR,
  StatusTime TIMESTAMP,
  Distance double,
  MinMagicPoints INT,
  MaxMagicPoints INT,
  MinHealthPoints INT,
  MaxHealthPoints INT
) WITH (
  'connector' = 'kinesis',
  'stream' = 'wildrydes-summary',
  'aws.region' = '',
  'scan.stream.initpos' = 'LATEST',
  'format' = 'json'
);

INSERT INTO wildrydes_summary
SELECT
  Name,
  TUMBLE_START(StatusTime, INTERVAL '1' MINUTE) AS StatusTime,
  SUM(Distance) AS Distance,
  MIN(MagicPoints) AS MinMagicPoints,
  MAX(MagicPoints) AS MaxMagicPoints,
  MIN(HealthPoints) AS MinHealthPoints,
  MAX(HealthPoints) AS MaxHealthPoints
FROM wildrydes
GROUP BY TUMBLE(StatusTime, INTERVAL '1' MINUTE), Name;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Execute the code by clicking the Play button next to the READY statement on the right side of the cell.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Cloud9 development environment, run the producer to start emitting sensor data to the stream.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./producer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  3. Read messages from the stream
&lt;/h3&gt;

&lt;p&gt;Use the command line consumer to view messages from the Kinesis stream to see the aggregated data being sent every minute.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step directions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Switch to the tab where you have your Cloud9 environment opened.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the consumer to start reading sensor data from the stream. It can take up to a minute for the first message to appear, since the Analytics application aggregates the messages in one-minute intervals.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./consumer -stream wildrydes-summary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;The consumer will print the messages being sent by the Kinesis Data Analytics application every minute:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Name": "Shadowfax",
  "StatusTime": "2018-03-18 03:20:00.000",
  "Distance": 362,
  "MinMagicPoints": 170,
  "MaxMagicPoints": 172,
  "MinHealthPoints": 146,
  "MaxHealthPoints": 149
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  4. Experiment with the producer
&lt;/h3&gt;

&lt;p&gt;Stop and start the producer while watching the dashboard and the consumer. Start multiple producers with different unicorn names.&lt;/p&gt;

&lt;p&gt;✅ Step-by-step directions&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Switch to the tab where you have your Cloud9 environment opened.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stop the producer by pressing Control + C and notice how the consumer stops receiving messages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start the producer again and notice the messages resume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hit the (+) button and click New Terminal to open a new terminal tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start another instance of the producer in the new tab. Provide a specific unicorn name and notice data points for both unicorns in consumer’s output:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./producer -name Bucephalus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Verify you see multiple unicorns in the output:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Name": "Shadowfax",
  "StatusTime": "2018-03-18 03:20:00.000",
  "Distance": 362,
  "MinMagicPoints": 170,
  "MaxMagicPoints": 172,
  "MinHealthPoints": 146,
  "MaxHealthPoints": 149
}
{
  "Name": "Bucephalus",
  "StatusTime": "2018-03-18 03:20:00.000",
  "Distance": 1773,
  "MinMagicPoints": 140,
  "MaxMagicPoints": 148,
  "MinHealthPoints": 132,
  "MaxHealthPoints": 138
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  ⭐ Recap
&lt;/h2&gt;

&lt;p&gt;🔑 Amazon Kinesis Data Analytics enables you to query streaming data or build entire streaming applications using SQL, so that you can gain actionable insights and respond to your business and customer needs promptly.&lt;/p&gt;

&lt;p&gt;🔧 In this module, you’ve created a Kinesis Data Analytics application that reads from the Kinesis stream of unicorn data and emits a summary row each minute.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges Faced and Solutions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Challenge 1: Handling High Volumes of Real-Time Data&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Utilized Kinesis and Lambda to process data in small batches, ensuring efficient handling of high-throughput streams.&lt;/li&gt;
&lt;/ul&gt;
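&lt;p&gt;The small-batch approach mirrors what the stream processor's buildRequests function does when it splits items into DynamoDB batch writes of at most 25. A hedged Node.js sketch of that chunking (chunkIntoBatches is an illustrative name):&lt;/p&gt;

```javascript
'use strict';

// Hedged sketch: split a list of write requests into DynamoDB-sized batches
// (BatchWriteItem accepts at most 25 items per call). chunkIntoBatches is illustrative.
function chunkIntoBatches(items, batchSize = 25) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

const requests = Array.from({ length: 60 }, (_, i) => ({ PutRequest: { Item: { id: i } } }));
const batches = chunkIntoBatches(requests);

console.log(batches.length);    // 3
console.log(batches[2].length); // 10
```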

&lt;p&gt;&lt;strong&gt;Challenge 2: Ensuring Data Durability and Query Flexibility&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Stored raw data in S3 for archiving, leveraging Athena for querying, and DynamoDB for structured data storage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenge 3: Securing Access Across Services&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Implemented Cognito for secure, role-based access management, enforcing strict access control for users and applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This serverless data processing solution built on AWS demonstrates how serverless architecture can manage real-time data streams. By combining services such as Kinesis, Lambda, DynamoDB, and S3, the project provides a scalable, efficient, and secure platform for processing large volumes of data with minimal infrastructure administration. The architecture is flexible enough to serve a variety of use cases, including IoT telemetry, financial transactions, and online monitoring, and it exemplifies how serverless services can streamline complex data processing requirements while preserving cost-effectiveness and scalability.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>data</category>
      <category>learning</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Deploying a Complete Machine Learning Fraud Detection Solution Using Amazon SageMaker : AWS Project</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Wed, 13 Nov 2024 12:55:03 +0000</pubDate>
      <link>https://forem.com/shubham_murti/deploying-a-complete-machine-learning-fraud-detection-solution-using-amazon-sagemaker-aws-project-2252</link>
      <guid>https://forem.com/shubham_murti/deploying-a-complete-machine-learning-fraud-detection-solution-using-amazon-sagemaker-aws-project-2252</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;This project leverages Amazon SageMaker and key AWS services to build a scalable, real-time fraud detection solution. By utilizing Amazon SageMaker’s machine learning capabilities, combined with services such as AWS Lambda, S3, and API Gateway, this setup processes transaction data to identify fraudulent patterns efficiently. It provides a robust framework for secure, automated, and reliable fraud detection, designed for seamless integration into production environments where real-time insights are essential.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;p&gt;This solution leverages a range of AWS services, each playing a crucial role in creating a scalable, secure, and responsive machine learning infrastructure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon SageMaker&lt;/strong&gt;: Core platform for model training, deployment, and hosting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Automates resource creation, orchestrates SageMaker integrations, and handles infrastructure setup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: Stores training data, model artifacts, and log files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS IAM&lt;/strong&gt;: Manages access control with defined roles and policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EC2 and VPC&lt;/strong&gt;: Provides network isolation and backend processing capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon CloudWatch&lt;/strong&gt;: Enables monitoring and alerting for various system components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon SQS&lt;/strong&gt;: Manages asynchronous task queues for inter-service communication.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Additional Services&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Secrets Manager&lt;/strong&gt;: Safeguards sensitive information such as API keys.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CloudTrail&lt;/strong&gt;: Tracks account activity and resource changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Route 53&lt;/strong&gt;: Manages domain name resolution for the API endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Systems Manager (SSM)&lt;/strong&gt;: Provides parameter management and infrastructure automation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon API Gateway&lt;/strong&gt;: Exposes model predictions as RESTful APIs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon SNS&lt;/strong&gt;: Sends alerts and notifications based on specific events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon CloudFormation&lt;/strong&gt;: Automates infrastructure provisioning.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before beginning, ensure you have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Basic AWS Knowledge&lt;/strong&gt;: Familiarity with Amazon SageMaker, IAM, Lambda, and S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Python and ML Basics&lt;/strong&gt;: Knowledge of Python for model training and AWS SDK integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CLI and SDKs&lt;/strong&gt;: Installed and configured for seamless AWS service management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IAM Permissions&lt;/strong&gt;: Appropriate permissions to interact with SageMaker, S3, Lambda, and other services used in the project.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Problem Statement or Use Case
&lt;/h3&gt;

&lt;p&gt;Detecting fraudulent transactions is a challenge for financial institutions due to the high volume and complexity of real-time data. Fraud detection models need to be scalable, secure, and capable of integrating seamlessly with backend systems to process transactions in real-time. This project addresses these needs by developing a machine learning-based fraud detection model that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learns and Identifies Fraud Patterns&lt;/strong&gt;: Uses machine learning to analyze transaction data and classify transactions as legitimate or suspicious.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ensures Scalability and Efficiency&lt;/strong&gt;: Deploys a highly scalable, serverless architecture using SageMaker and Lambda.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enables Real-Time Monitoring and Notifications&lt;/strong&gt;: Implements CloudWatch, CloudTrail, and SNS for tracking and alerting on model performance and anomalies.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The solution is ideal for large-scale fraud detection in production environments, enabling real-time insights with minimal manual intervention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;The architecture diagram below shows the interaction between various AWS services, highlighting the flow of data from transaction storage to model inference and result distribution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpidai6rc1hdn0dmhrjdj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpidai6rc1hdn0dmhrjdj.png" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step Implementation
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Launch the CloudFormation Stack
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/22-selfpaced/cloudformation#local-computer" rel="noopener noreferrer"&gt;Local Computer&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Download the CloudFormation template to your local computer:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl 'https://static.us-east-1.prod.workshops.aws/public/aed5dc57-f15e-4afa-bbf4-9ff167491648/static/fraud-detection-workshop-selfpaced.yaml' --output fraud-detection-workshop-selfpaced.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/22-selfpaced/cloudformation#within-your-aws-account" rel="noopener noreferrer"&gt;Within your AWS account&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://console.aws.amazon.com/cloudformation/home#/stacks/create/template" rel="noopener noreferrer"&gt;AWS CloudFormation &lt;/a&gt;(AWS Console) to create a new stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Create stack&lt;/strong&gt; screen, under the &lt;strong&gt;Specify a template&lt;/strong&gt; section, select the &lt;strong&gt;Upload a template file&lt;/strong&gt; option and choose the fraud-detection-workshop-selfpaced.yaml file you downloaded earlier. Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rh3gc9lgbgasekpyzm3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rh3gc9lgbgasekpyzm3.png" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Specify stack details&lt;/strong&gt; screen, under the &lt;strong&gt;Stack name&lt;/strong&gt; section, provide a name for the CloudFormation stack, such as fraud-detection-workshop.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave the rest of the parameters unchanged, and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntktizmattbx9z918sf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntktizmattbx9z918sf7.png" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Configure stack options&lt;/strong&gt; screen, leave the default parameters unchanged, scroll to the bottom of the page, and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Review fraud-detection-workshop&lt;/strong&gt; screen, scroll to the bottom of the page and check the box “I acknowledge that AWS CloudFormation might create IAM resources.” Click &lt;strong&gt;Create Stack&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3fgohwt9cswira4a391.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3fgohwt9cswira4a391.png" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;STOP: CloudFormation will take a few minutes to run and set up your environment. Please wait for this step to finish.&lt;/p&gt;
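If you prefer to script the deployment rather than click through the console, here is a minimal sketch using boto3 (the AWS SDK for Python). It assumes boto3 is installed and your AWS credentials and region are already configured; the stack name and template filename mirror the walkthrough above.

```python
def stack_request(template_body: str,
                  stack_name: str = "fraud-detection-workshop") -> dict:
    """Build the create_stack parameters, acknowledging IAM resource creation."""
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        # Equivalent to the console checkbox "I acknowledge that AWS
        # CloudFormation might create IAM resources."
        "Capabilities": ["CAPABILITY_IAM"],
    }

def deploy(template_path: str = "fraud-detection-workshop-selfpaced.yaml") -> None:
    import boto3  # deferred import: requires the AWS SDK and valid credentials
    cfn = boto3.client("cloudformation")
    with open(template_path) as f:
        cfn.create_stack(**stack_request(f.read()))
    # Block until the stack finishes, mirroring the "please wait" note above.
    cfn.get_waiter("stack_create_complete").wait(StackName="fraud-detection-workshop")
```

Running `deploy()` performs the same work as the console steps; the waiter returns only once the stack reaches CREATE_COMPLETE.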

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You have successfully deployed the AWS environment for this workshop. Next, we will verify this environment.&lt;/p&gt;

&lt;p&gt;Click “Next” to go to the next section.&lt;/p&gt;
&lt;h2&gt;
  
  
  Overview of the Environment
&lt;/h2&gt;

&lt;p&gt;In this section, we’re going to go through a quick overview of the workshop and check if everything is set up correctly.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#workshop-overview" rel="noopener noreferrer"&gt;Workshop Overview&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#verify-sagemaker-studio-is-ready" rel="noopener noreferrer"&gt;Verify SageMaker Studio is ready&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#explanation-of-the-code-files" rel="noopener noreferrer"&gt;Explanation of the code files&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#explanation-of-the-data-files" rel="noopener noreferrer"&gt;Explanation of the data files&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#explanation-of-helper-scripts" rel="noopener noreferrer"&gt;Explanation of helper scripts&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#workshop-overview" rel="noopener noreferrer"&gt;Workshop Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this workshop, we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Explore, clean, visualize and prepare the data&lt;/strong&gt;: This step is all about understanding the auto insurance claims data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Select &amp;amp; engineer features&lt;/strong&gt;: Here we will get acquainted with Amazon SageMaker Feature Store.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build and train a model&lt;/strong&gt;: Train your model through the SageMaker API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment &amp;amp; Inference&lt;/strong&gt;: Learn to deploy your model through quick commands for Real-time inference.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;(Bonus) Transform data visually&lt;/strong&gt;: Learn to transform and visualize data through Amazon SageMaker DataWrangler.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;(Bonus) Detect bias in the dataset&lt;/strong&gt;: Learn to use Amazon SageMaker Clarify to detect bias in a bonus lab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;(Bonus) Batch transforms&lt;/strong&gt;: Learn to batch inference requests and use SageMaker Batch Transforms.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Finally&lt;/strong&gt;, put everything together into a production CI/CD pipeline using Amazon SageMaker Pipelines&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are going to make use of three core Jupyter notebooks.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The first notebook (Lab_1_and_2-Data-Exploration-and-Features.ipynb) demonstrates Exploratory Data Analysis (EDA): specifically, data visualization, manipulation, and transformation using the Pandas and Seaborn Python libraries. It then walks you through feature engineering and getting the data ready for training.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second notebook (Lab_3_and_4-Training_and_Deployment.ipynb) demonstrates training and deployment of the model followed by validation of the predictions using a subset of the data. Once deployed, the next step shows how to get predictions from the model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The third notebook (Lab_5-Pipelines.ipynb) showcases a pipeline that integrates all previous steps. This is a good example of how to operationalize a machine learning model into a production pipeline. It is a stand-alone lab that doesn't require executing the first two notebooks.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#verify-sagemaker-studio-is-ready" rel="noopener noreferrer"&gt;Verify SageMaker Studio is ready&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://console.aws.amazon.com/" rel="noopener noreferrer"&gt;AWS console &lt;/a&gt;browser window and type SageMaker in the search bar at the top of the AWS console home page. Click on &lt;a href="https://console.aws.amazon.com/sagemaker/home" rel="noopener noreferrer"&gt;Amazon SageMaker &lt;/a&gt;service page in the AWS console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the Studio link in the left navigation pane under Control Panel.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fknu78fq4kdznkgpu0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fknu78fq4kdznkgpu0q.png" width="271" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Next, you should see a user is already set up for you. Click on “Open Studio”.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i2m0db21kulks6dapdo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i2m0db21kulks6dapdo.png" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;This will open the SageMaker Studio UI in a new browser window.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Attention&lt;/p&gt;

&lt;p&gt;Amazon SageMaker Studio and Amazon SageMaker Studio Classic are two of the machine learning environments that you can use to interact with SageMaker. In this workshop, we will use the SageMaker Studio Classic experience.&lt;/p&gt;

&lt;p&gt;The Amazon SageMaker Studio UI extends the SageMaker Studio Classic interface. Click on the “Studio Classic” icon under Applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibqp7l7a49r78rxq2b11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibqp7l7a49r78rxq2b11.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Amazon SageMaker Studio Classic interface is based on JupyterLab, which is a web-based interactive development environment for notebooks, code, and data. Keep this window open.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiq4sydey5c7lyowpcv0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiq4sydey5c7lyowpcv0m.png" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s walk through the various files and resources pre-provisioned for you.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#explanation-of-the-code-files" rel="noopener noreferrer"&gt;Explanation of the code files&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;There are five notebooks in the folder FraudDetectionWorkshop:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmcf8z94wovtfz4ngrmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmcf8z94wovtfz4ngrmh.png" width="721" height="239"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#explanation-of-the-data-files" rel="noopener noreferrer"&gt;Explanation of the data files&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The target data exists in the data directory. Below is a list of files and their brief description.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk199ygvw6mnowghewk5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk199ygvw6mnowghewk5u.png" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#explanation-of-output-files" rel="noopener noreferrer"&gt;Explanation of output files&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The outputs directory contains two files that contain data transformations. We will use these files in Lab 5.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;claims_flow_template&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;customer_flow_template&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/23-overviewoftheenvironment#explanation-of-helper-scripts" rel="noopener noreferrer"&gt;Explanation of helper scripts&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;There are seven helper scripts in scripts directory:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs35isymqcl6qblkpb0pd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs35isymqcl6qblkpb0pd.png" width="756" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Attention&lt;/p&gt;

&lt;p&gt;If for some reason, you are unable to complete the workshop, please head to the &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/90-cleanup" rel="noopener noreferrer"&gt;clean up&lt;/a&gt; steps. Resources created during the workshop may incur minor charges. It is best practice to spin down resources when they’re not in use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You have successfully completed the Overview of the Environment.&lt;/p&gt;

&lt;p&gt;Click “Next” to go to the next section.&lt;/p&gt;
&lt;h2&gt;
  
  
  Running Jupyter Notebooks
&lt;/h2&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;If you already know how to execute Jupyter notebooks, skip this section.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/20-environmentsetup/24-jupyternotebook#running-notebook-cells" rel="noopener noreferrer"&gt;Running Notebook cells&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;When you open a notebook, you’ll see a popup that requires you to select a kernel and instance type. Please make sure that the Image is Data Science, the Kernel is Python 3, and the Instance Type is ml.t3.medium, as shown in the screenshot below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex5km7lzzlajpcl9e4m6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex5km7lzzlajpcl9e4m6.png" width="450" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If, for some reason, you see a capacity error for this particular instance type, it’s okay to scale up and choose the next available instance type.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you haven’t worked with Jupyter notebooks before, the following screenshots explain how to execute and run different cells.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft07451mvz8sqg94ijubc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft07451mvz8sqg94ijubc.png" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on the play button will execute the code within a selected cell.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you see a * sign next to a cell, it means that cell is still being executed, and you should wait. Once it finishes it will show a number where the * was.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zm61o0q4siovrzsm5y6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zm61o0q4siovrzsm5y6.png" width="600" height="94"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You have successfully completed the steps of how to run Jupyter Notebooks.&lt;/p&gt;

&lt;p&gt;Click “Next” to go to the next section.&lt;/p&gt;
&lt;h2&gt;
  
  
  Data Preparation
&lt;/h2&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, you will learn about the highlighted steps of the machine learning process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff49x67qr34nmq8qhlpws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff49x67qr34nmq8qhlpws.png" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  1 — Ingest, Transform And Preprocess Data
&lt;/h2&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;The following material provides contextual information about this lab. Please read through it before you refer to the Jupyter notebook for step-by-step code block instructions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#data-transformation" rel="noopener noreferrer"&gt;Data Transformation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#data-visualization" rel="noopener noreferrer"&gt;Data Visualization&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Exploratory Data Analysis (EDA) is an essential step in the machine learning process. Raw data cannot be consumed directly to create a model: data stakeholders understand, visualize, and manipulate the data before using it. Common transforms include (but aren’t limited to) removing symbols, one-hot encoding, removing outliers, and normalization.&lt;/p&gt;
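Two of the transforms just mentioned, outlier removal and normalization, can be sketched in a few lines of pandas. The column name and values below are purely illustrative, not taken from the workshop datasets.

```python
import pandas as pd

# Toy claims column with one obvious outlier; values are illustrative only.
df = pd.DataFrame({"total_claim_amount": [1200.0, 900.0, 1500.0, 250000.0]})

# Remove outliers using the interquartile-range (IQR) rule:
# keep values within [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = df["total_claim_amount"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = df["total_claim_amount"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
df = df[mask].copy()

# Min-max normalization rescales the remaining values to the [0, 1] range.
col = df["total_claim_amount"]
df["total_claim_amount_scaled"] = (col - col.min()) / (col.max() - col.min())
```

After filtering, the 250,000 row is gone and the surviving values map onto [0, 1], which keeps features on comparable scales for training.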
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You will be working on the first notebook in the series Lab_1_and_2-Data-Exploration-and-Features.ipynb. Please scroll down for important context before starting with the notebooks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The steps are outlined below:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Data visualization ~5m&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data transformation ~8m&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Total run time ~13 minutes&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#data-transformation" rel="noopener noreferrer"&gt;Data Transformation&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;For our use case, we have been provided with two datasets, claims.csv and customers.csv, containing the auto-insurance claims and customer information respectively. These datasets were generated synthetically. However, raw data is often non-numeric, which makes it hard to visualize and unusable for the machine learning process as-is.&lt;/p&gt;

&lt;p&gt;Consider the columns driver_relationship or incident_type in claims.csv. The values in these columns have the Object data type: strings that represent a feature. Data like this is hard to use directly, since models don't operate on strings or understand what they represent. It would be a lot easier to just mark a feature as a one or a zero.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1un1t9ocqw329g0pi3uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1un1t9ocqw329g0pi3uj.png" width="400" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So instead of saying:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;driver_relationship = 'Spouse'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It’s better to break it out into another feature like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;driver_relationship_spouse = 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We’ve elected to transform the data to get it ready for Machine Learning.&lt;/p&gt;

&lt;p&gt;These columns will be one-hot encoded so that each category (every type of collision, for instance) becomes a separate column.&lt;/p&gt;
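As a concrete illustration of this encoding (the category values here are hypothetical), pandas can one-hot encode a column with `get_dummies`:

```python
import pandas as pd

# Toy slice of claims data; the category values are illustrative.
claims = pd.DataFrame({
    "driver_relationship": ["Spouse", "Self", "Child"],
})

# get_dummies turns each category into its own 0/1 indicator column,
# e.g. driver_relationship_Spouse = 1 for the first row.
encoded = pd.get_dummies(claims, columns=["driver_relationship"], dtype=int)
```

The resulting frame has one indicator column per category, which is exactly the `driver_relationship_spouse = 1` form described above.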

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7hz7nfhvvt332vxykke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7hz7nfhvvt332vxykke.png" width="500" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly, many transformations are required before the data can be used for Machine Learning. Data stakeholders often iterate over datasets multiple times before they can be used. In this case, transformations are created using Amazon SageMaker Data Wrangler (see the hint below). With this context in mind the following files are available:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The .flow templates named customer_flow_template and claims_flow_template. These templates contain the transformations on the customer and claims datasets created through SageMaker Data Wrangler. The files are in standard JSON format and can be read in using the Python json module.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;These transformations are applied to the raw datasets. The final processed datasets are claims_preprocessed.csv and customers_preprocessed.csv. &lt;strong&gt;The notebook starts off with these preprocessed datasets.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
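Because the .flow templates are plain JSON, they can be inspected with the standard json module. The structure below is a deliberately simplified, hypothetical stand-in, not the real Data Wrangler schema, but the reading pattern is the same:

```python
import json

# Hypothetical, simplified stand-in for a .flow file's contents; the actual
# Data Wrangler schema is more involved.
flow_text = """
{
  "nodes": [
    {"type": "SOURCE", "parameters": {"dataset_name": "claims.csv"}},
    {"type": "TRANSFORM", "parameters": {"operator": "One-hot encode"}}
  ]
}
"""

# In the notebook you would use json.load(open("claims_flow_template")) instead.
flow = json.loads(flow_text)
transforms = [n for n in flow["nodes"] if n["type"] == "TRANSFORM"]
```

Iterating over the parsed nodes lets you see which transformations a flow applies before running it against the raw data.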

&lt;p&gt;Hint&lt;/p&gt;

&lt;p&gt;If you wish to learn how to make these transformations yourself, you can go through the &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/95-bonusmaterial/91-bonus-datawrangler" rel="noopener noreferrer"&gt;Bonus Labs section of this workshop titled — Data exploration using Amazon SageMaker Data Wrangler&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#data-visualization" rel="noopener noreferrer"&gt;Data Visualization&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;At this point, let’s head over to the first notebook. Navigate to the SageMaker Studio UI and click on the folder icon on the left navigation panel. Open the folder FraudDetectionWorkshop. Finally, open the first notebook titled Lab_1_and_2-Data-Exploration-and-Features.ipynb.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxe8ko5t225sjckwm615.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxe8ko5t225sjckwm615.png" width="400" height="615"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;Follow the Jupyter notebook instructions until you complete Lab 1, and navigate back here when done.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/31-ingesttranformpreprocessjupyter#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You’ve successfully learned how to visualize and pre-process the data and gather insights.&lt;/p&gt;

&lt;p&gt;In this lab we learned how to transform data easily using Amazon SageMaker Studio Notebooks.&lt;/p&gt;

&lt;p&gt;Click “Next” to go to the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  2 — Feature Engineering
&lt;/h2&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;The following material provides contextual information about this lab. Please read through it before you refer to the Jupyter notebook for step-by-step code block instructions.&lt;/p&gt;

&lt;p&gt;Prerequisite&lt;/p&gt;

&lt;p&gt;Please make sure Lab 1 is executed successfully before you proceed with this lab.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#creating-the-feature-store" rel="noopener noreferrer"&gt;Creating the Feature Store&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#split-the-dataset-and-upload-to-s3" rel="noopener noreferrer"&gt;Split the dataset and upload to S3&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/feature-store.html" rel="noopener noreferrer"&gt;Amazon SageMaker Feature Store &lt;/a&gt;provides a central repository for data features with low latency (milliseconds) reads and writes. Features can be stored, retrieved, discovered, and shared through SageMaker Feature Store for easy reuse across models and teams with secure access and control.&lt;/p&gt;

&lt;p&gt;SageMaker Feature Store keeps track of the metadata of stored features (e.g. feature name or version number) so that you can query the features for the right attributes in batches or in real time using &lt;a href="https://aws.amazon.com/athena/" rel="noopener noreferrer"&gt;Amazon Athena&lt;/a&gt;, an interactive query service.&lt;/p&gt;

&lt;p&gt;In this lab, you will learn how to use Amazon SageMaker Feature Store to store and retrieve machine learning (ML) features.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The steps are outlined below:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Creating the Feature Store ~6 min (including time to create and ingest data on Feature Store).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Visualize Feature Store ~2 min.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upload data to S3 ~1 min.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Total run time ~10 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#creating-the-feature-store" rel="noopener noreferrer"&gt;Creating the Feature Store&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The collected data, which we refer to as raw data, is typically not ready to be consumed by ML models. The data needs to be transformed, e.g. encoding, dealing with missing values, outliers, and aggregations. This process is known as feature engineering, and the signals extracted as part of this data preparation are referred to as features.&lt;/p&gt;

&lt;p&gt;A feature group is a logical grouping of features and these groups consist of features that are computed together, related by common parameters, or are all related to the same business domain entity.&lt;/p&gt;

&lt;p&gt;In this step, you are going to create two feature groups: customer and claims.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4dx77hdxq2a1zzhxdtz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4dx77hdxq2a1zzhxdtz.png" width="788" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the Feature Groups have been created, we can put data into each store by using the &lt;a href="https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_feature_store_PutRecord.html" rel="noopener noreferrer"&gt;PutRecord API&lt;/a&gt;. This API can handle high TPS (Transactions Per Second) and is designed to be called concurrently by different streams. The data from PUT requests is written to the offline store within a few minutes of ingestion.&lt;/p&gt;

&lt;p&gt;Hint&lt;/p&gt;

&lt;p&gt;It is possible to verify that the data is available offline by navigating to the S3 Bucket.&lt;/p&gt;
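&lt;p&gt;As a sketch of what PutRecord expects, each record is a list of feature name/value pairs, and every record must carry an event-time feature. The feature names and values below are illustrative, not the workshop’s actual claims/customer schema:&lt;/p&gt;

```python
import time

def to_feature_record(row: dict) -> list:
    """Shape a plain dict into the Record structure expected by the
    Feature Store PutRecord API: a list of FeatureName/ValueAsString pairs."""
    record = [{"FeatureName": k, "ValueAsString": str(v)} for k, v in row.items()]
    # Feature Store requires an event-time feature on every record.
    record.append({"FeatureName": "event_time", "ValueAsString": str(round(time.time()))})
    return record

# Illustrative row; the real feature groups ("claims", "customers") define their own schemas.
record = to_feature_record({"policy_id": 123, "num_claims_past_year": 2})

# The actual ingestion call would then be:
#   boto3.client("sagemaker-featurestore-runtime").put_record(
#       FeatureGroupName="claims", Record=record)
```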

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#split-the-dataset-and-upload-to-s3" rel="noopener noreferrer"&gt;Split the Dataset and upload to S3&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once the data is available in the offline store, it will automatically be cataloged and loaded into an &lt;a href="https://aws.amazon.com/pt/athena/" rel="noopener noreferrer"&gt;Amazon Athena &lt;/a&gt;table (this is done by default, but can be turned off). In order to build our training and test datasets, you will submit a SQL query to join the Claims and Customers tables created in Athena.&lt;/p&gt;

&lt;p&gt;The last step in this notebook is to upload newly created datasets into S3.&lt;/p&gt;
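&lt;p&gt;The split-and-upload step can be sketched in plain Python as follows; this is a minimal stand-in for the notebook’s code, and the bucket and key names in the comment are placeholders:&lt;/p&gt;

```python
import random

def train_test_split(rows, test_frac=0.2, seed=42):
    """Shuffle and split rows into train/test lists. In the lab, the rows
    come from the Athena query joining the claims and customers tables."""
    rng = random.Random(seed)           # fixed seed for a reproducible split
    shuffled = rows[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]

rows = list(range(100))                 # stand-in for the joined dataset
train, test = train_test_split(rows)

# The upload to S3 would then be e.g.:
#   boto3.client("s3").upload_file("train.csv", "my-bucket", "fraud-detect/train.csv")
```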

&lt;p&gt;At this point, let’s navigate back to the first notebook (Lab_1_and_2-Data-Exploration-and-Features.ipynb) and scroll down to &lt;strong&gt;Lab 2: Feature Engineering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;Follow the Jupyter notebook instructions until you complete Lab 2, and navigate back here when done.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/30-datapreparation/32-storefeaturesfeaturestore#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You have successfully prepared the data to train an XGBoost model.&lt;/p&gt;

&lt;p&gt;In this lab we learned how to ingest features into Amazon SageMaker Feature Store and prepare our data for training.&lt;/p&gt;

&lt;p&gt;Click “Next” to go to the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training and Deployment
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, you will learn about the following highlighted step of the Machine Learning process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0l6jmrjzq5qrff0wzvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0l6jmrjzq5qrff0wzvg.png" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3 — Train a Model using XGBoost
&lt;/h2&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;The following material provides contextual information about this lab. Please read through this information before you refer to the Jupyter notebook for step-by-step code block instructions.&lt;/p&gt;

&lt;p&gt;Prerequisite&lt;/p&gt;

&lt;p&gt;Please make sure Lab 2 is executed successfully before you proceed with this lab.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#data-handling" rel="noopener noreferrer"&gt;Data Handling&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#train-a-model-using-xgboost" rel="noopener noreferrer"&gt;Train a Model using XGBoost&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this lab, you will learn how to use an &lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html" rel="noopener noreferrer"&gt;Amazon SageMaker Training Job&lt;/a&gt; to build and train the ML model.&lt;/p&gt;

&lt;p&gt;To train a model using SageMaker, you create a training job. The training job includes the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The URL of the Amazon Simple Storage Service (Amazon S3) bucket where you’ve stored the training data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The compute resources that you want SageMaker to use for model training. Compute resources are ML compute instances that are managed by SageMaker.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The URL of the S3 bucket where you want to store the output of the job.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Amazon Elastic Container Registry (Amazon ECR) path where the Docker container image is stored.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this tutorial, you will use the &lt;a href="https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html" rel="noopener noreferrer"&gt;XGBoost Open Source Framework &lt;/a&gt;to train your model. This estimator is accessed via the SageMaker SDK, but mirrors the open source version of the &lt;a href="https://xgboost.readthedocs.io/en/latest/python/index.html" rel="noopener noreferrer"&gt;XGBoost Python package &lt;/a&gt;. Any functionality provided by the XGBoost Python package can be implemented in your training script. XGBoost is an extremely popular, open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has been successfully used in many machine learning competitions.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The steps are outlined below:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Data handling ~1 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Train a model using XGBoost ~8 min (including running the training code, ~4 min)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deposit the model in SageMaker Model Registry ~3 min&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Total run time ~ 12 mins.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#data-handling" rel="noopener noreferrer"&gt;Data handling&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;There are two ways to obtain the dataset:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Use the dataset you uploaded to the Amazon S3 bucket in the previous lab (Lab 2 — Feature Engineering).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upload the following datasets from the data folder to Amazon S3: train.csv, test.csv&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The following code uploads the datasets from the data folder to Amazon S3:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38kipplipdd5fqsn6yp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38kipplipdd5fqsn6yp7.png" width="800" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#train-a-model-using-xgboost" rel="noopener noreferrer"&gt;Train a model using XGBoost&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You will define a SageMaker Estimator using the &lt;a href="https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html" rel="noopener noreferrer"&gt;XGBoost Open Source Framework&lt;/a&gt; to train your model. The following code creates the Estimator object and starts the training job with the xgb_estimator.fit() call.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxs6yovz6cy2a1h6huyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxs6yovz6cy2a1h6huyi.png" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this example, we will use the following parameters for the XGBoost estimator:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;entry_point - Path to the Python source file which should be executed as the entry point to training.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;hyperparameters - Hyperparameters that will be used for training. The hyperparameters are made accessible as a dict[str, str] to the training code on SageMaker.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;output_path - S3 location for saving the training result (model artifacts and output files).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;framework_version - XGBoost version you want to use for executing your model training code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;instance_type - Type of EC2 instance to use for training.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
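&lt;p&gt;To illustrate how the entry_point script receives these values: in SageMaker script mode, hyperparameters arrive as command-line flags and data channels as SM_CHANNEL_* environment variables. The hyperparameter names below are illustrative; the lab’s training script defines its own set:&lt;/p&gt;

```python
import argparse
import os

def parse_args(argv=None):
    """Minimal entry-point argument parsing in the SageMaker script-mode style:
    hyperparameters come in as CLI flags, channel and model paths via env vars."""
    parser = argparse.ArgumentParser()
    # Illustrative XGBoost hyperparameters (names/defaults are an assumption).
    parser.add_argument("--max-depth", type=int, default=5)
    parser.add_argument("--eta", type=float, default=0.2)
    parser.add_argument("--num-round", type=int, default=100)
    # SageMaker injects these env vars inside the training container.
    parser.add_argument("--train", default=os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train"))
    parser.add_argument("--model-dir", default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    return parser.parse_args(argv)

# The hyperparameters dict passed to the Estimator becomes flags like these:
args = parse_args(["--max-depth", "6", "--eta", "0.1"])
```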

&lt;p&gt;If you want to explore the breadth of functionality offered by the SageMaker XGBoost Framework, you can read about all the configuration parameters by referencing the inheriting classes. The XGBoost class inherits from the Framework class, and Framework inherits from the EstimatorBase class:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html#sagemaker.xgboost.estimator.XGBoost" rel="noopener noreferrer"&gt;XGBoost Estimator documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html#sagemaker.estimator.Framework" rel="noopener noreferrer"&gt;Framework documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html#sagemaker.estimator.EstimatorBase" rel="noopener noreferrer"&gt;EstimatorBase documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Launching a training job and storing the trained model in S3 should take ~4 minutes. Notice that the output includes the value of Billable seconds, which is the amount of time you will actually be charged for.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5ulcr436hpm56fxvh79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5ulcr436hpm56fxvh79.png" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#deposit-the-model-in-sagemaker-model-registry" rel="noopener noreferrer"&gt;Deposit the model in SageMaker Model Registry&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;After the successful training job, you can register the trained model in &lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry.html" rel="noopener noreferrer"&gt;SageMaker Model Registry &lt;/a&gt;. SageMaker’s Model Registry is a metadata store for your machine learning models. Within the model registry, models are versioned and registered as model packages within model groups. Each model package contains an Amazon S3 URI to the model files associated with the trained model and an Amazon ECR URI that points to the container used while serving the model.&lt;/p&gt;
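&lt;p&gt;The shape of such a registration request can be sketched as follows. The field names follow the CreateModelPackage API, while the artifact location, image URI, and group name are placeholders:&lt;/p&gt;

```python
def model_package_input(model_data_url, image_uri, group_name):
    """Build the core of a CreateModelPackage request: the S3 model artifact
    plus the ECR serving image, registered under a model package group."""
    return {
        "ModelPackageGroupName": group_name,
        "ModelApprovalStatus": "PendingManualApproval",  # approved later, after evaluation
        "InferenceSpecification": {
            "Containers": [{"Image": image_uri, "ModelDataUrl": model_data_url}],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["text/csv"],
        },
    }

req = model_package_input(
    "s3://my-bucket/fraud/model.tar.gz",                        # placeholder artifact
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:1",   # placeholder image
    "fraud-detect-model-group",                                 # placeholder group
)
# boto3.client("sagemaker").create_model_package(**req) performs the registration.
```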

&lt;p&gt;At this point, let’s navigate back to the training notebook (Lab_3_and_4-Training_and_Deployment.ipynb) and scroll down to &lt;strong&gt;Lab 3: Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;Follow the Jupyter notebook instructions until you complete Lab 3, and navigate back here when done.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/41-trainandtune#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You have successfully built and trained your model.&lt;/p&gt;

&lt;p&gt;In this lab you walked through the process of building and training an XGBoost model using the Amazon SageMaker Estimator. You also used the SageMaker Python SDK to launch the training job.&lt;/p&gt;

&lt;p&gt;Click “Next” to go to the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  4 — Deploy and Serve the Model
&lt;/h2&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;The following material provides contextual information about this lab. Please read through this information before you refer to the Jupyter notebook for step-by-step code block instructions.&lt;/p&gt;

&lt;p&gt;Prerequisite&lt;/p&gt;

&lt;p&gt;Please make sure Lab 3 is executed successfully before you proceed with this lab.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;After you train your machine learning model, you can deploy it using Amazon SageMaker to get predictions in any of the following ways, depending on your use case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;For persistent, real-time endpoints that make one prediction at a time, use SageMaker real-time hosting services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For workloads that have idle periods between traffic spurts and can tolerate cold starts, use Serverless Inference.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For requests with large payload sizes (up to 1 GB), long processing times, and near real-time latency requirements, use Amazon SageMaker Asynchronous Inference.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To get predictions for an entire dataset, use SageMaker batch transform.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following image describes the different deployment options and their use cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvs79tu500mvyzwecfg1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvs79tu500mvyzwecfg1.png" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The steps are outlined below:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Evaluate trained model and update status in the model registry: ~3 mins&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Model deployment: ~1 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create/update endpoint: ~5 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Predictor interface: ~1 min&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total run time ~ 10 mins.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#evaluate-trained-model" rel="noopener noreferrer"&gt;Evaluate trained model&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;After you create a model version, you typically want to evaluate its performance before you deploy the model in production. If it performs to your requirements, you can update the approval status of the model version to Approved. In the real-life MLOps lifecycle, a model package gets approved after evaluation by data scientists, subject matter experts, and auditors.&lt;/p&gt;

&lt;p&gt;For the purpose of this lab, we will evaluate the model with the test dataset that was created during the training process. The lab contains an evaluate.py script that calculates the AUC (Area Under the ROC Curve) on the test dataset. The AUC threshold is set at 0.7; if the test-dataset AUC falls below the threshold, the approval status should be “Rejected” for that model version.&lt;/p&gt;
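&lt;p&gt;The gating logic can be sketched without any ML libraries: AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties count half). This is an illustrative stand-in for the lab’s evaluate.py, not its actual code:&lt;/p&gt;

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank interpretation: the fraction of
    (positive, negative) pairs where the positive gets the higher score."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def approval_status(labels, scores, threshold=0.7):
    """Mirror the lab's gating rule: approve only if test AUC clears the threshold."""
    return "Approved" if auc(labels, scores) >= threshold else "Rejected"

# Perfectly separated toy scores: AUC = 1.0, so the model version is approved.
status = approval_status([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])
```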

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#model-deployment" rel="noopener noreferrer"&gt;Model deployment&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To prepare the model for deployment, you will conduct the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Query the model registry and list all the model versions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define the endpoint configuration: specify the name of one or more models in production (variants) and the ML compute instances that you want SageMaker to launch to host each production variant.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hint&lt;/p&gt;

&lt;p&gt;For the purpose of this lab, we will get the latest version of the model from the model registry. However, you can apply different filtering criteria, such as listing only approved models or getting a specific version of the model. Please refer to the &lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry.html" rel="noopener noreferrer"&gt;Model Registry&lt;/a&gt; documentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvyz5hnxhfy2cuhis7rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvyz5hnxhfy2cuhis7rl.png" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When hosting models in production, you can configure the endpoint to elastically scale the deployed ML compute instances. For each production variant, you specify the number of ML compute instances that you want to deploy. When you specify two or more instances, SageMaker launches them in multiple Availability Zones, which ensures continuous availability. SageMaker manages deploying the instances.&lt;/p&gt;
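&lt;p&gt;A minimal sketch of such an endpoint configuration follows. The field names match the CreateEndpointConfig API; the config, model, and variant names are placeholders:&lt;/p&gt;

```python
def endpoint_config(name, model_name, instance_type="ml.m5.xlarge", instance_count=2):
    """Shape a CreateEndpointConfig request with one production variant.
    With instance_count of two or more, SageMaker spreads the instances
    across multiple Availability Zones for continuous availability."""
    return {
        "EndpointConfigName": name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": instance_type,
            "InitialInstanceCount": instance_count,
            "InitialVariantWeight": 1.0,
        }],
    }

cfg = endpoint_config("fraud-detect-config", "fraud-detect-model")
# boto3.client("sagemaker").create_endpoint_config(**cfg), then
# create_endpoint(EndpointName="fraud-detect-endpoint",
#                 EndpointConfigName=cfg["EndpointConfigName"])
```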

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#createupdate-endpoint" rel="noopener noreferrer"&gt;Create/update endpoint&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once you have your model and endpoint configuration, use the &lt;a href="https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpoint.html" rel="noopener noreferrer"&gt;CreateEndpoint API &lt;/a&gt;to create your endpoint. Provide the endpoint configuration to SageMaker. The service launches the ML compute instances and deploys the model or models as specified in the configuration. Please refer to the &lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html" rel="noopener noreferrer"&gt;documentation &lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#predictor-interface" rel="noopener noreferrer"&gt;Predictor interface&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this part of the workshop, you will use the data from dataset.csv to run inference against the newly deployed endpoint.&lt;/p&gt;
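&lt;p&gt;A real-time endpoint accepts a text/csv request body, so the feature rows from dataset.csv need to be serialized before calling invoke_endpoint. A minimal sketch; the endpoint name in the comment is a placeholder:&lt;/p&gt;

```python
import csv
import io

def csv_payload(rows):
    """Serialize feature rows into the text/csv body that a SageMaker
    real-time endpoint expects from invoke_endpoint."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    return buf.getvalue()

body = csv_payload([[0.1, 3, 1], [0.7, 1, 0]])  # illustrative feature rows

# The inference call would then be:
#   boto3.client("sagemaker-runtime").invoke_endpoint(
#       EndpointName="fraud-detect-endpoint", ContentType="text/csv", Body=body)
```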

&lt;p&gt;At this point, let’s navigate back to the training notebook (Lab_3_and_4-Training_and_Deployment.ipynb) and scroll down to &lt;strong&gt;Lab 4: Deploy and serve the model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;Follow the Jupyter notebook instructions until you complete Lab 4, and navigate back here when done.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/40-traininganddeployment/42-deploy#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You have successfully deployed an endpoint to get predictions from your model.&lt;/p&gt;

&lt;p&gt;In this lab, you created a low latency Endpoint using Amazon SageMaker and deployed your model to get predictions. In the next lab, you will learn how to integrate all of the steps you’ve learnt so far using SageMaker Pipelines.&lt;/p&gt;

&lt;p&gt;Click “Next” to go to the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Machine Learning Workflow
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, you will learn about the following highlighted step of the Machine Learning process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnmhnnydhpseyh4fm0pj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnmhnnydhpseyh4fm0pj.png" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5 — Pipelines
&lt;/h2&gt;

&lt;p&gt;Note&lt;/p&gt;

&lt;p&gt;The following material provides contextual information about this lab. Please read through this information before you refer to the Jupyter notebook for step-by-step code block instructions.&lt;/p&gt;

&lt;p&gt;Attention&lt;/p&gt;

&lt;p&gt;This lab demonstrates how to build an end-to-end machine learning workflow using SageMaker Pipelines. This is a stand-alone lab and can be run independently of the previous labs.&lt;/p&gt;

&lt;p&gt;If you have already executed the previous labs (Lab 1 and Lab 2), then you don’t need to run &lt;strong&gt;Step 0&lt;/strong&gt; in the Jupyter notebook.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#content" rel="noopener noreferrer"&gt;Content&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#create-automated-machine-learning-pipeline" rel="noopener noreferrer"&gt;Create Automated Machine Learning Pipeline&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-1-data-wrangler-preprocessing" rel="noopener noreferrer"&gt;Step 1 — Data Wrangler Preprocessing&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-2-create-dataset-and-traintest-split" rel="noopener noreferrer"&gt;Step 2 — Create Dataset and Train/Test Split&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-3-train-xgboost-model" rel="noopener noreferrer"&gt;Step 3 — Train XGBoost Model&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-4-model-pre-deployment" rel="noopener noreferrer"&gt;Step 4 — Model Pre-Deployment Step&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-5-register-model" rel="noopener noreferrer"&gt;Step 5 — Register Model&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-6-model-deployment" rel="noopener noreferrer"&gt;Step 6 — Model deployment&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-7-combine-the-pipeline-steps" rel="noopener noreferrer"&gt;Step 7 — Combine the Pipeline Steps&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#create-the-pipeline-definition" rel="noopener noreferrer"&gt;Create the pipeline definition&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#review-the-pipeline-definition" rel="noopener noreferrer"&gt;Review the pipeline definition&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#run-the-pipeline" rel="noopener noreferrer"&gt;Run the pipeline&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#overview" rel="noopener noreferrer"&gt;Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In previous labs, you built separate processes for data preparation, training, and deployment. In this lab, you will build a machine learning workflow using SageMaker Pipelines that automates the end-to-end process of data preparation, model training, and model deployment to detect fraudulent automobile insurance claims.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wbm1yke7s64kly1vpjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wbm1yke7s64kly1vpjd.png" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#instructions" rel="noopener noreferrer"&gt;Instructions&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The machine learning workflow steps are outlined below:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Step 1 — Data Wrangler Preprocessing ~2 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 2 — Create Dataset and Train/Test Split ~1 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 3 — Train XGBoost Model ~1 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 4 — Model Pre-Deployment Step ~1 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 5 — Register Model ~1 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 6 — Model deployment ~1 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step 7 — Combine and Run the Pipeline Steps ~1 min&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the pipeline ~15 mins&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total run time ~ 23 mins.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#create-automated-machine-learning-pipeline" rel="noopener noreferrer"&gt;Create Automated Machine Learning Pipeline&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The SageMaker pipeline is composed of the following steps. These steps define the actions that the pipeline takes, and the relationships between steps are expressed using step properties.&lt;/p&gt;
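&lt;p&gt;Conceptually, the pipeline is a dependency graph that SageMaker resolves into an execution order. The sketch below models that resolution with hypothetical step names; the workshop’s pipeline uses its own identifiers and the SageMaker Pipelines SDK rather than plain dicts:&lt;/p&gt;

```python
# Hypothetical step list mirroring the seven workshop steps (names/types illustrative).
steps = [
    {"Name": "DataWranglerPreprocess", "Type": "Processing", "DependsOn": []},
    {"Name": "CreateDataset", "Type": "Processing", "DependsOn": ["DataWranglerPreprocess"]},
    {"Name": "TrainXGBoost", "Type": "Training", "DependsOn": ["CreateDataset"]},
    {"Name": "PreDeployModel", "Type": "Model", "DependsOn": ["TrainXGBoost"]},
    {"Name": "RegisterModel", "Type": "RegisterModel", "DependsOn": ["PreDeployModel"]},
    {"Name": "DeployModel", "Type": "Deploy", "DependsOn": ["RegisterModel"]},
]

def execution_order(steps):
    """Resolve DependsOn edges into a valid run order (Kahn's algorithm):
    repeatedly run every step whose dependencies are all satisfied."""
    done, order, pending = set(), [], steps[:]
    while pending:
        ready = [s for s in pending if set(s["DependsOn"]) <= done]
        if not ready:
            raise ValueError("cycle in pipeline definition")
        for s in ready:
            order.append(s["Name"])
            done.add(s["Name"])
            pending.remove(s)
    return order

order = execution_order(steps)
```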

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-1-data-wrangler-preprocessing" rel="noopener noreferrer"&gt;Step 1 — Data Wrangler Preprocessing&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Define the Data Wrangler inputs using ProcessingInput, the outputs using ProcessingOutput, and a ProcessingStep to create a data processing job for the "claims" and "customer" data.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-2-create-dataset-and-traintest-split" rel="noopener noreferrer"&gt;Step 2 — Create Dataset and Train/Test Split&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Next, you will create an instance of the SKLearnProcessor processor. You can split the dataset without SKLearnProcessor as well, but if the dataset is larger than the one provided, the split will take more time and require local compute resources. Hence, it is recommended to use a managed processing job.&lt;/p&gt;
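&lt;p&gt;The processing script behind that job essentially shuffles the data and carves off a hold-out set. A minimal pure-Python sketch of that split logic (illustrative names, not the SageMaker SDK):&lt;/p&gt;

```python
import random

def train_test_split(rows, test_ratio=0.2, seed=42):
    """Shuffle rows deterministically and split off a hold-out test set."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_test = int(len(rows) * test_ratio)
    return rows[n_test:], rows[:n_test]

train, test = train_test_split(range(100))
print(len(train), len(test))  # 80 20
```

&lt;p&gt;The managed job runs the same idea at scale, reading from and writing back to S3 instead of local lists.&lt;/p&gt;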

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-3-train-xgboost-model" rel="noopener noreferrer"&gt;Step 3 — Train XGBoost Model&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You will use SageMaker’s XGBoost algorithm to train the dataset using the &lt;a href="https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html"&gt;Estimator&lt;/a&gt; interface. A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves the model to “model_dir”.&lt;/p&gt;
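&lt;p&gt;As a hedged sketch of that script structure: hyperparameters arrive as command-line arguments, and SageMaker supplies the model and channel paths via environment variables such as SM_MODEL_DIR and SM_CHANNEL_TRAIN (the hyperparameter names below are illustrative):&lt;/p&gt;

```python
import argparse
import os

def parse_args(argv=None):
    """Hyperparameters come in as CLI arguments; SageMaker sets the
    SM_MODEL_DIR and SM_CHANNEL_TRAIN environment variables in the container."""
    p = argparse.ArgumentParser()
    p.add_argument("--max-depth", type=int, default=5)  # illustrative hyperparameter
    p.add_argument("--eta", type=float, default=0.2)    # illustrative hyperparameter
    p.add_argument("--model-dir",
                   default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    p.add_argument("--train",
                   default=os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train"))
    return p.parse_args(argv)

args = parse_args(["--max-depth", "6"])
print(args.max_depth, args.eta)  # 6 0.2
```

&lt;p&gt;After training, the script serializes the fitted model into args.model_dir so SageMaker can package it as a model artifact.&lt;/p&gt;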

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-4-model-pre-deployment" rel="noopener noreferrer"&gt;Step 4 — Model Pre-Deployment&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;sagemaker.model.Model denotes a SageMaker Model that can be deployed to an Endpoint.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-5-register-model" rel="noopener noreferrer"&gt;Step 5 — Register Model&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Typically, customers create a ModelPackageGroup for SageMaker Pipelines so that a new model package version is added on every pipeline iteration.&lt;/p&gt;
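&lt;p&gt;The versioning behavior can be pictured with a toy stand-in for a model package group (not the real SDK; registration simply appends an auto-incremented version):&lt;/p&gt;

```python
class ModelPackageGroup:
    """Toy stand-in for a SageMaker model package group: every registration
    appends a new, auto-incremented model package version."""

    def __init__(self, name):
        self.name = name
        self.versions = []

    def register(self, model_artifact, approval_status="PendingManualApproval"):
        version = len(self.versions) + 1
        self.versions.append({"version": version,
                              "artifact": model_artifact,
                              "status": approval_status})
        return version

group = ModelPackageGroup("fraud-detect-demo")
print(group.register("s3://bucket/model-v1.tar.gz"))  # 1
print(group.register("s3://bucket/model-v2.tar.gz"))  # 2
```

&lt;p&gt;In the real registry, each version also carries metrics and an approval status, which downstream deployment steps can check before promoting a model.&lt;/p&gt;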

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-6-model-deployment" rel="noopener noreferrer"&gt;Step 6 — Model deployment&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Once the model is registered, the next step is deploying it. You will use a Lambda step to deploy the model as a real-time endpoint. The SageMaker SDK provides a Lambda helper class that can be used to create a Lambda function. This function is provided to the Lambda step for invocation via the pipeline. Alternatively, a predefined Lambda function can be provided to the Lambda step.&lt;/p&gt;
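&lt;p&gt;Conceptually, the Lambda function builds two payloads for the SageMaker API: an endpoint configuration and the endpoint itself. A sketch of that request-building logic (the actual boto3 calls, create_endpoint_config and create_endpoint, are omitted; names and instance type are illustrative):&lt;/p&gt;

```python
def build_deploy_requests(model_name, endpoint_name, instance_type="ml.m5.large"):
    """Build the payloads a Lambda would pass to create_endpoint_config and
    create_endpoint; key names follow the SageMaker API, values are examples."""
    config_name = endpoint_name + "-config"
    endpoint_config = {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": instance_type,
            "InitialInstanceCount": 1,
        }],
    }
    endpoint = {"EndpointName": endpoint_name, "EndpointConfigName": config_name}
    return endpoint_config, endpoint

cfg, ep = build_deploy_requests("fraud-detect-model", "fraud-detect-demo-endpoint")
print(ep["EndpointConfigName"])  # fraud-detect-demo-endpoint-config
```

&lt;p&gt;The Lambda handler would pass these dictionaries to the SageMaker client and return a status for the pipeline step to record.&lt;/p&gt;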

&lt;p&gt;&lt;strong&gt;Attention&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Please open the &lt;a href="https://console.aws.amazon.com/cloudformation/home" rel="noopener noreferrer"&gt;CloudFormation console&lt;/a&gt; and copy the ARN of the Lambda function (under the Outputs tab).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcyl1st37q6zyf1x6cot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcyl1st37q6zyf1x6cot.png" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the Lambda function ARN value and add it to cell #21.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2g7xgqqx4splfndz6fx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2g7xgqqx4splfndz6fx.png" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#step-7-combine-the-pipeline-steps" rel="noopener noreferrer"&gt;Step 7 — Combine the Pipeline Steps&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;A SageMaker pipeline is a series of interconnected workflow steps defined using the &lt;a href="https://sagemaker.readthedocs.io/en/stable/workflows/pipelines/sagemaker.workflow.pipelines.html" rel="noopener noreferrer"&gt;Pipelines SDK&lt;/a&gt;. The pipeline definition encodes the workflow as a directed acyclic graph (DAG) that can be exported as a JSON definition.&lt;/p&gt;
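&lt;p&gt;The DAG ordering can be illustrated with Python’s standard-library graphlib: each step lists the steps it depends on, and a topological sort yields a valid execution order (step names mirror this lab but are illustrative; SageMaker derives the same graph from step property references):&lt;/p&gt;

```python
from graphlib import TopologicalSorter

# Each step maps to the steps whose outputs it consumes.
dag = {
    "Preprocess": [],
    "CreateDataset": ["Preprocess"],
    "TrainXGB": ["CreateDataset"],
    "PreDeployModel": ["TrainXGB"],
    "RegisterModel": ["TrainXGB"],
    "DeployLambda": ["PreDeployModel", "RegisterModel"],
}
order = list(TopologicalSorter(dag).static_order())
print(order)  # every step appears after all of its dependencies
```

&lt;p&gt;This is why you never wire execution order by hand: referencing one step’s properties from another is enough for the service to schedule them correctly.&lt;/p&gt;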

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#create-the-pipeline-definition" rel="noopener noreferrer"&gt;Create the pipeline definition&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Submit the pipeline definition to the SageMaker Pipelines service to either create a new pipeline if it doesn’t exist, or update the existing pipeline if it does.&lt;/p&gt;
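&lt;p&gt;This create-or-update behavior is the classic upsert pattern; a toy in-memory sketch (not the SageMaker SDK itself):&lt;/p&gt;

```python
class PipelineRegistry:
    """Toy model of create-or-update (upsert) semantics: create the pipeline
    if the name is new, otherwise replace the stored definition."""

    def __init__(self):
        self.pipelines = {}

    def upsert(self, name, definition):
        action = "updated" if name in self.pipelines else "created"
        self.pipelines[name] = definition
        return action

reg = PipelineRegistry()
print(reg.upsert("FraudDetectDemo", "definition-v1"))  # created
print(reg.upsert("FraudDetectDemo", "definition-v2"))  # updated
```

&lt;p&gt;In the SDK this is roughly a single call, pipeline.upsert(role_arn=...), which is what the corresponding notebook cell executes.&lt;/p&gt;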

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9asguyizlm1z6ic9hl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9asguyizlm1z6ic9hl2.png" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#review-the-pipeline-definition" rel="noopener noreferrer"&gt;Review the pipeline definition&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Describing the pipeline status ensures that it has been created successfully. Viewing the pipeline definition with all the string variables interpolated can help debug pipeline issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1o710jd1e4dmpqn85nq2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1o710jd1e4dmpqn85nq2.png" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#run-the-pipeline" rel="noopener noreferrer"&gt;Run the pipeline&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Start a pipeline execution. Note that this will take about 15 minutes to complete. You can watch the progress of the pipeline job in the SageMaker Studio Pipelines panel.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the Home folder indicated by the arrow, then click on Pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You will see the available pipelines in the table on the right.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on FraudDetectDemo.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvezjd8in59zu71f3g2d2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvezjd8in59zu71f3g2d2.png" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you will see the executions listed on the page. Double-click the execution with status Executing to open its graph representation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w7lueqes5b5ux6nwctx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w7lueqes5b5ux6nwctx.png" width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see the nodes turn green when the corresponding steps are complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs2faf3d0o6mabmakxpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs2faf3d0o6mabmakxpn.png" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the Jupyter notebook instructions until you complete Lab 5, then navigate back here.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/50-pipelines/51-pipelines#conclusion" rel="noopener noreferrer"&gt;Conclusion&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You have successfully created an end-to-end machine learning workflow using SageMaker Pipelines.&lt;/p&gt;

&lt;p&gt;Click “Next” to go to the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/60-summary#what-you-have-learned" rel="noopener noreferrer"&gt;What you have learned&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this workshop, you have learned how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Inspect, analyze and transform an auto insurance fraud dataset&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ingest transformed data into SageMaker Feature Store using the SageMaker Python SDK&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Train an XGBoost model using SageMaker Training Jobs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a real-time endpoint for low-latency requests using SageMaker&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrate all previous steps into an MLOps workflow with SageMaker Pipelines&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.workshops.aws/sagemaker-fraud-detection/en-US/60-summary#thank-you!" rel="noopener noreferrer"&gt;Thank you!&lt;/a&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Clean Up
&lt;/h2&gt;

&lt;p&gt;Congratulations on developing an ML fraud detection solution and deploying automated pipelines!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attention&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are at an AWS event, such as re:Invent or an Immersion Day, and are using an AWS provided account, then you don’t need to worry about cleaning up the resources.&lt;/p&gt;

&lt;p&gt;To clean up resources, please do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Delete the CloudFormation stack to clean up the environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to the &lt;a href="https://console.aws.amazon.com/cloudformation/home" rel="noopener noreferrer"&gt;CloudFormation&lt;/a&gt; home page, click &lt;strong&gt;Stacks&lt;/strong&gt; in the left-hand side menu, select the &lt;strong&gt;fraud-detection-workshop&lt;/strong&gt; stack, and click the &lt;strong&gt;Delete&lt;/strong&gt; button to delete the stack.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp0b0674jingk8k4pb6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp0b0674jingk8k4pb6t.png" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Delete the Model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;During Lab 4, we deleted the SageMaker hosted endpoint but not the model; we needed that model for the Detect Bias bonus lab. Go to the &lt;a href="https://console.aws.amazon.com/sagemaker/home" rel="noopener noreferrer"&gt;SageMaker&lt;/a&gt; home page and expand &lt;strong&gt;Inference&lt;/strong&gt; in the SageMaker dashboard section of the left-hand side menu. Click &lt;strong&gt;Models&lt;/strong&gt;, and select &lt;strong&gt;fraud-detect-model-xxxxxxxxxxxx&lt;/strong&gt;. Click the &lt;strong&gt;Actions&lt;/strong&gt; button and select the &lt;strong&gt;Delete&lt;/strong&gt; option to delete the model.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0gezdu5b6riw11wgsp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0gezdu5b6riw11wgsp5.png" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Delete the Endpoint Configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In SageMaker home page left hand side menu, expand &lt;strong&gt;Inference&lt;/strong&gt; and click on &lt;strong&gt;Endpoint configurations&lt;/strong&gt;. Select &lt;strong&gt;fraud-detect-demo-endpoint-config-xxxxxxxxxxxx&lt;/strong&gt; configuration, click &lt;strong&gt;Actions&lt;/strong&gt; button, and select &lt;strong&gt;Delete&lt;/strong&gt; option to delete the configuration.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ugu8841nb6bb406wb9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ugu8841nb6bb406wb9x.png" width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Delete the lifecycle configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In SageMaker home page left hand side menu, click on &lt;strong&gt;Lifecycle configurations&lt;/strong&gt;. Select &lt;strong&gt;git-clone-step&lt;/strong&gt; lifecycle configuration and click &lt;strong&gt;Delete&lt;/strong&gt; button to delete the lifecycle configuration.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg4mg5il3kwl3yr5l85a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg4mg5il3kwl3yr5l85a.png" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Finally, delete the S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to &lt;a href="https://console.aws.amazon.com/s3/home" rel="noopener noreferrer"&gt;S3&lt;/a&gt;. To delete the bucket, you first need to delete the objects inside it. Click on the &lt;strong&gt;sagemaker-xxxxxxxxxxxx&lt;/strong&gt; bucket and select the checkbox at the top of the object table to select all objects.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyk28badh2hh7mqvvokk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyk28badh2hh7mqvvokk.png" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Delete&lt;/strong&gt; button to delete all objects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate back to the S3 buckets list, select the &lt;strong&gt;sagemaker-xxxxxxxxxxxx&lt;/strong&gt; bucket, and click the &lt;strong&gt;Delete&lt;/strong&gt; button to delete the bucket.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwgugns69e2ge2w2ru6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwgugns69e2ge2w2ru6j.png" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You have successfully cleaned up the environment.&lt;/p&gt;

&lt;p&gt;This brings us to the end of this workshop.&lt;/p&gt;

&lt;p&gt;Thank you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges Faced and Solutions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Challenge 1: Real-Time Model Inference and Latency Optimization&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Leveraged API Gateway and Lambda to manage requests, reducing latency by preprocessing data in Lambda and only sending necessary data to SageMaker.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenge 2: Managing Security for Sensitive Data&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Used AWS Secrets Manager to secure sensitive information such as database credentials, API keys, and integrated with IAM to enforce role-based access control.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenge 3: Monitoring and Troubleshooting Complex Workflows&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Integrated CloudWatch, CloudTrail, and X-Ray to gain visibility into all workflow steps, allowing for efficient troubleshooting and resource optimization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This end-to-end solution highlights the power of Amazon SageMaker in handling real-time fraud detection and anomaly classification. By integrating AWS tools for automation, monitoring, and security, this project demonstrates an adaptable, high-performance architecture that can scale to meet growing data demands. The setup is versatile and supports businesses in proactive fraud management, ensuring fast, accurate, and secure anomaly detection for production-grade applications.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>learning</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Large-scale Data Processing with Step Functions : AWS Project</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Wed, 13 Nov 2024 09:42:06 +0000</pubDate>
      <link>https://forem.com/shubham_murti/large-scale-data-processing-with-step-functions-aws-project-4fk</link>
      <guid>https://forem.com/shubham_murti/large-scale-data-processing-with-step-functions-aws-project-4fk</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;In this project, we tackle the challenge of orchestrating large-scale data processing using AWS Step Functions. By integrating key AWS services like Amazon S3, IAM, CloudWatch, and AWS X-Ray, we build a scalable, secure, and optimized workflow for handling vast amounts of data. This setup is designed to reduce operational complexity, enhance scalability, and provide robust monitoring, making it ideal for data-intensive industries requiring high levels of efficiency and control.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;p&gt;Key AWS services, tools, and technologies used in this project include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Step Functions&lt;/strong&gt;: Orchestrates the data processing workflow&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: Provides centralized storage for input and output data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IAM (Identity and Access Management)&lt;/strong&gt;: Ensures secure permissions and data access policies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudWatch&lt;/strong&gt;: Monitors each step of the workflow, logging actions and errors&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS X-Ray&lt;/strong&gt;: Provides detailed tracing and performance insights into the workflow’s execution&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow along with this project, you’ll need the following prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Knowledge&lt;/strong&gt;: Familiarity with AWS services like S3, IAM, and CloudWatch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Serverless and Workflow Concepts&lt;/strong&gt;: Basic understanding of serverless architecture and workflow orchestration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CLI and SDKs&lt;/strong&gt;: Installed and configured for managing AWS resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IAM Permissions&lt;/strong&gt;: Ensure the necessary IAM roles and policies for accessing and executing Step Functions.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Problem Statement or Use Case
&lt;/h3&gt;

&lt;p&gt;Organizations often deal with complex workflows that require efficient processing of large data sets. This project addresses the need for a scalable and automated solution to manage such workflows reliably. Key challenges include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Coordination of Multiple Services&lt;/strong&gt;: Processing large datasets often involves multiple tasks and services that need to be coordinated systematically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: As data size grows, the system must scale to handle the increased load.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Visibility and Monitoring&lt;/strong&gt;: Detailed logging, tracing, and monitoring are essential for troubleshooting and optimizing the workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS Step Functions, combined with other AWS services, provides an ideal solution for orchestrating complex workflows in a serverless environment. This architecture not only improves processing efficiency but also enhances visibility and simplifies troubleshooting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;Below is the architecture diagram for the data processing workflow, illustrating how AWS Step Functions orchestrates interactions among S3, IAM, CloudWatch, and AWS X-Ray to create a reliable and traceable workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzklagzecvyisbusmtj2p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzklagzecvyisbusmtj2p.png" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Component Breakdown
&lt;/h3&gt;

&lt;p&gt;Each component within this solution has a critical role:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Step Functions&lt;/strong&gt;: Manages the flow of data processing tasks, coordinating AWS services and monitoring the progress of each step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: Acts as a centralized storage solution for data inputs and outputs, making it easy to retrieve and store processed data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IAM&lt;/strong&gt;: Controls access to resources, ensuring only authorized services and users can interact with sensitive data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudWatch&lt;/strong&gt;: Provides insights into each task’s status, allowing us to set up alarms for failure states or performance bottlenecks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS X-Ray&lt;/strong&gt;: Traces and analyzes the performance of the workflow, helping to optimize and troubleshoot issues effectively.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step-by-Step Implementation
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Introduction to Distributed Map
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/hello-dmap#setup" rel="noopener noreferrer"&gt;Setup&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the instructions on this page only if you are executing this workshop in your own account. To skip these instructions &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/hello_dmap" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;strong&gt;Launch&lt;/strong&gt; link against any of the regions in the table below to start the deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Region / Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;US East (Northern Virginia)&lt;/strong&gt; us-east-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?stackName=sfw-hello-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_hellodmap.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Europe (Ireland)&lt;/strong&gt; eu-west-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/create/template?stackName=sfw-hello-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-dub-85e3be25bd827406.s3.eu-west-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_hellodmap.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Singapore)&lt;/strong&gt; ap-southeast-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/create/template?stackName=sfw-module-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-sin-694a125e41645312.s3.ap-southeast-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_hellodmap.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Sydney)&lt;/strong&gt; ap-southeast-2 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/create/template?stackName=sfw-hello-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-syd-b04c62a5f16f7b2e.s3.ap-southeast-2.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_hellodmap.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The location of the CloudFormation template will be auto-populated in the &lt;strong&gt;Amazon S3 URL&lt;/strong&gt; field, as shown in the image below. Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42ji8bxtq5kihsuduusu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42ji8bxtq5kihsuduusu.png" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the &lt;em&gt;Specify stack details&lt;/em&gt; page, &lt;strong&gt;Stack name&lt;/strong&gt; will be auto-populated to sfw-hello-distributed-map. (You can enter a different name if you want.) Click &lt;strong&gt;Next&lt;/strong&gt; two times.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih0ngca6kb9x91ea7fwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih0ngca6kb9x91ea7fwu.png" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the &lt;em&gt;Review&lt;/em&gt; page, scroll to the bottom, check the &lt;strong&gt;Capabilities&lt;/strong&gt; box if shown, then click &lt;strong&gt;Create stack&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex0vqf8zxvfyu50og8tq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex0vqf8zxvfyu50og8tq.png" width="800" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait until the stack shows &lt;em&gt;CREATE_COMPLETE&lt;/em&gt; status.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rzl181oaw51w00zv4yj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rzl181oaw51w00zv4yj.png" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a Distributed Map Workflow
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/process-multiple-files#setup" rel="noopener noreferrer"&gt;Setup&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the instructions on this page only if you are executing this workshop in your own account. To skip these instructions &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/process_multiple_files" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;strong&gt;Launch&lt;/strong&gt; link against any of the regions in the table below to start the deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Region / Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;US East (Northern Virginia)&lt;/strong&gt; us-east-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?stackName=sfw-processmulti-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_processmultifile.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Europe (Ireland)&lt;/strong&gt; eu-west-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/create/template?stackName=sfw-processmulti-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-dub-85e3be25bd827406.s3.eu-west-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_processmultifile.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Singapore)&lt;/strong&gt; ap-southeast-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/create/template?stackName=sfw-processmulti-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-sin-694a125e41645312.s3.ap-southeast-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_processmultifile.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Sydney)&lt;/strong&gt; ap-southeast-2 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/create/template?stackName=sfw-processmulti-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-syd-b04c62a5f16f7b2e.s3.ap-southeast-2.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_processmultifile.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The location of the CloudFormation template will be auto-populated in the &lt;strong&gt;Amazon S3 URL&lt;/strong&gt; field, as shown in the image below. Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi5l3zj10l7urka8dma3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi5l3zj10l7urka8dma3.png" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the &lt;em&gt;Specify stack details&lt;/em&gt; page, &lt;strong&gt;Stack name&lt;/strong&gt; will be auto-populated to sfw-processmulti-distributed-map. (You can enter a different name if you want.) Click &lt;strong&gt;Next&lt;/strong&gt; two times.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjfi37tubcqxcnxge39a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjfi37tubcqxcnxge39a.png" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the &lt;em&gt;Review&lt;/em&gt; page, scroll to the bottom, check the &lt;strong&gt;Capabilities&lt;/strong&gt; box if shown, then click &lt;strong&gt;Create stack&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxypje7wefeey6ubjxos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxypje7wefeey6ubjxos.png" width="800" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait until the stack shows &lt;em&gt;CREATE_COMPLETE&lt;/em&gt; status.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4yjnvpxzumac6lkm0ki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4yjnvpxzumac6lkm0ki.png" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;
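&lt;p&gt;All of the &lt;strong&gt;Launch&lt;/strong&gt; links above follow the same pattern: a CloudFormation quick-create console URL assembled from a region, a stack name, and a template URL. As a purely illustrative sketch (the helper function is hypothetical, not part of the workshop), the pattern can be expressed in Python:&lt;/p&gt;

```python
# Illustrative helper: builds a CloudFormation quick-create console link
# of the same shape as the Launch links in the tables above.
def launch_url(region: str, stack_name: str, template_url: str) -> str:
    return (
        f"https://console.aws.amazon.com/cloudformation/home?region={region}"
        f"#/stacks/create/template?stackName={stack_name}"
        f"&templateURL={template_url}"
    )

# Template URL taken from the us-east-1 row of the table above.
template = (
    "https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com"
    "/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_processmultifile.yml"
)
print(launch_url("us-east-1", "sfw-processmulti-distributed-map", template))
```

&lt;p&gt;Swapping in another row's region and template URL reproduces each of the Launch links in the table.&lt;/p&gt;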

&lt;h2&gt;
  
  
  Advanced — Optimization
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/optimization#setup" rel="noopener noreferrer"&gt;Setup&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the instructions on this page only if you are executing this workshop in your own account. To skip these instructions &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;strong&gt;Launch&lt;/strong&gt; link against any of the regions in the table below to start the deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Region / Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;US East (N. Virginia)&lt;/strong&gt; us-east-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?stackName=sfw-optimization-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_optimization.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Europe (Ireland)&lt;/strong&gt; eu-west-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/create/template?stackName=sfw-optimization-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-dub-85e3be25bd827406.s3.eu-west-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_optimization.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Singapore)&lt;/strong&gt; ap-southeast-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/create/template?stackName=sfw-optimization-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-sin-694a125e41645312.s3.ap-southeast-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_optimization.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific&lt;/strong&gt; (Sydney) ap-southeast-2 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/create/template?stackName=sfw-optimization-distributed-map&amp;amp;templateURL=https://ws-assets-prod-iad-r-syd-b04c62a5f16f7b2e.s3.ap-southeast-2.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_optimization.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The location of the CloudFormation template will be auto-populated in the &lt;strong&gt;Amazon S3 URL&lt;/strong&gt; field, as shown in the image below. Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tvtkq0q486wghorhpna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tvtkq0q486wghorhpna.png" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the &lt;em&gt;Specify stack details&lt;/em&gt; page, &lt;strong&gt;Stack name&lt;/strong&gt; will be auto-populated to sfw-optimization-distributed-map. (You can enter a different name if you want.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyttvih366hedmuu5gux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyttvih366hedmuu5gux.png" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt; two times. On the &lt;em&gt;Review&lt;/em&gt; page, scroll to the bottom, check the &lt;strong&gt;Capabilities&lt;/strong&gt; box if shown, then click &lt;strong&gt;Create stack&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t2udeprhfb6149aqo5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t2udeprhfb6149aqo5i.png" width="800" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait until the stack shows &lt;em&gt;CREATE_COMPLETE&lt;/em&gt; status.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbii06y70tfn563qwhr4p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbii06y70tfn563qwhr4p.png" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Case — Healthcare Claims Processing
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/healthcare-claims-processing#setup" rel="noopener noreferrer"&gt;Setup&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the instructions on this page only if you are executing this workshop in your own account. To skip these instructions &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;strong&gt;Launch&lt;/strong&gt; link against any of the regions in the table below to start the deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Region / Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;US East (N. Virginia)&lt;/strong&gt; us-east-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?stackName=sfw-healthcare-processing&amp;amp;templateURL=https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_healthcare_processing.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Europe (Ireland)&lt;/strong&gt; eu-west-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/create/template?stackName=sfw-healthcare-processing&amp;amp;templateURL=https://ws-assets-prod-iad-r-dub-85e3be25bd827406.s3.eu-west-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_healthcare_processing.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Singapore)&lt;/strong&gt; ap-southeast-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/create/template?stackName=sfw-healthcare-processing&amp;amp;templateURL=https://ws-assets-prod-iad-r-sin-694a125e41645312.s3.ap-southeast-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_healthcare_processing.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific&lt;/strong&gt; (Sydney) ap-southeast-2 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/create/template?stackName=sfw-healthcare-processing&amp;amp;templateURL=https://ws-assets-prod-iad-r-syd-b04c62a5f16f7b2e.s3.ap-southeast-2.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_healthcare_processing.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The location of the CloudFormation template will be auto-populated in the &lt;strong&gt;Amazon S3 URL&lt;/strong&gt; field, as shown in the image below. Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft27wtkhfygtip71u079u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft27wtkhfygtip71u079u.png" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the &lt;em&gt;Specify stack details&lt;/em&gt; page, &lt;strong&gt;Stack name&lt;/strong&gt; will be auto-populated to sfw-healthcare-processing. (You can enter a different name if you want.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxuch9bv7t2mc5czecmk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxuch9bv7t2mc5czecmk.png" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt; two times. On the &lt;em&gt;Review&lt;/em&gt; page, scroll to the bottom, check the &lt;strong&gt;Capabilities&lt;/strong&gt; box if shown, then click &lt;strong&gt;Create stack&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0oti9tngr4koskc828y3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0oti9tngr4koskc828y3.png" width="800" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait until the stack shows &lt;em&gt;CREATE_COMPLETE&lt;/em&gt; status.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v0wdygfgec0ruitrw9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v0wdygfgec0ruitrw9a.png" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Case — Security Vulnerability Scanning
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/security-vulnerability-scanning#setup" rel="noopener noreferrer"&gt;Setup&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the instructions on this page only if you are executing this workshop in your own account. To skip these instructions &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/security-vulnerability-scanning" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;strong&gt;Launch&lt;/strong&gt; link against any of the regions in the table below to start the deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Region / Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;US East (Northern Virginia)&lt;/strong&gt; us-east-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?stackName=vulnerability-scanning-module&amp;amp;templateURL=https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_securityscanning.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Europe (Ireland)&lt;/strong&gt; eu-west-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/create/template?stackName=vulnerability-scanning-module&amp;amp;templateURL=https://ws-assets-prod-iad-r-dub-85e3be25bd827406.s3.eu-west-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_securityscanning.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Singapore)&lt;/strong&gt; ap-southeast-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/create/template?stackName=vulnerability-scanning-module&amp;amp;templateURL=https://ws-assets-prod-iad-r-sin-694a125e41645312.s3.ap-southeast-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_securityscanning.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Sydney)&lt;/strong&gt; ap-southeast-2 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/create/template?stackName=vulnerability-scanning-module&amp;amp;templateURL=https://ws-assets-prod-iad-r-syd-b04c62a5f16f7b2e.s3.ap-southeast-2.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_securityscanning.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The location of the CloudFormation template will be auto-populated in the &lt;strong&gt;Amazon S3 URL&lt;/strong&gt; field. Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;On the &lt;em&gt;Specify stack details&lt;/em&gt; page, &lt;strong&gt;Stack name&lt;/strong&gt; will be auto-populated to vulnerability-scanning-module. (You can enter a different name if you want.) Click &lt;strong&gt;Next&lt;/strong&gt; two times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the &lt;em&gt;Review&lt;/em&gt; page, scroll to the bottom then select the &lt;strong&gt;Capabilities and transforms&lt;/strong&gt; boxes if shown.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Submit&lt;/strong&gt; and wait until the stack shows &lt;em&gt;CREATE_COMPLETE&lt;/em&gt; status.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use Case — Monte Carlo Simulation
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/monte-carlo-simulation#setup" rel="noopener noreferrer"&gt;Setup&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the instructions on this page only if you are executing this workshop in your own account. To skip these instructions &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/monte-carlo-simulation#about-the-stacks" rel="noopener noreferrer"&gt;About The Stacks&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The solution is deployed in two stacks. The first stack deploys the resources that generate the simulated dataset: a workflow orchestrated by Step Functions, consisting of three Lambda functions that generate the data as well as a simulated S3 Inventory Report. The second stack executes that first state machine and deploys the components for the module.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/monte-carlo-simulation#deploy-the-data-generation-stack" rel="noopener noreferrer"&gt;Deploy The Data Generation Stack&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;strong&gt;Launch&lt;/strong&gt; link against any of the regions in the table below to start the deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Region / Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;US East (N. Virginia)&lt;/strong&gt; us-east-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?stackName=sfn-datagen&amp;amp;templateURL=https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_montecarlosimulationdatagen.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Europe (Ireland)&lt;/strong&gt; eu-west-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/create/template?stackName=sfn-datagen&amp;amp;templateURL=https://ws-assets-prod-iad-r-dub-85e3be25bd827406.s3.eu-west-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_montecarlosimulationdatagen.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Singapore)&lt;/strong&gt; ap-southeast-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/create/template?stackName=sfn-datagen&amp;amp;templateURL=https://ws-assets-prod-iad-r-sin-694a125e41645312.s3.ap-southeast-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_montecarlosimulationdatagen.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Sydney)&lt;/strong&gt; ap-southeast-2 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/create/template?stackName=sfn-datagen&amp;amp;templateURL=https://ws-assets-prod-iad-r-syd-b04c62a5f16f7b2e.s3.ap-southeast-2.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_montecarlosimulationdatagen.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Take the defaults: click &lt;strong&gt;Next&lt;/strong&gt; three times, then click &lt;strong&gt;Submit&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment Screenshots&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/monte-carlo-simulation#deploy-the-data-processing-stack" rel="noopener noreferrer"&gt;Deploy The Data Processing Stack&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;strong&gt;Launch&lt;/strong&gt; link against any of the regions in the table below to start the deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Region / Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;US East (N. Virginia)&lt;/strong&gt; us-east-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/template?stackName=sfn-dataproc&amp;amp;templateURL=https://ws-assets-prod-iad-r-iad-ed304a55c2ca1aee.s3.us-east-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_montecarlosimulationdataproc.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Europe (Ireland)&lt;/strong&gt; eu-west-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/create/template?stackName=sfn-dataproc&amp;amp;templateURL=https://ws-assets-prod-iad-r-dub-85e3be25bd827406.s3.eu-west-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_montecarlosimulationdataproc.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Singapore)&lt;/strong&gt; ap-southeast-1 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/create/template?stackName=sfn-dataproc&amp;amp;templateURL=https://ws-assets-prod-iad-r-sin-694a125e41645312.s3.ap-southeast-1.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_montecarlosimulationdataproc.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asia Pacific (Sydney)&lt;/strong&gt; ap-southeast-2 &lt;a href="https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/create/template?stackName=sfn-dataproc&amp;amp;templateURL=https://ws-assets-prod-iad-r-syd-b04c62a5f16f7b2e.s3.ap-southeast-2.amazonaws.com/2a22e604-2f2e-4d7b-85a8-33b38c999234/templates/module_montecarlosimulationdataproc.yml" rel="noopener noreferrer"&gt;Launch&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Take the defaults: click &lt;strong&gt;Next&lt;/strong&gt; three times, then click &lt;strong&gt;Submit&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment Screenshots&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/monte-carlo-simulation#step-1" rel="noopener noreferrer"&gt;Step 1&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6clc1hf37fznpffanwei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6clc1hf37fznpffanwei.png" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/monte-carlo-simulation#step-2" rel="noopener noreferrer"&gt;Step 2&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5agkbqhqnbip3t9xwwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5agkbqhqnbip3t9xwwv.png" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/monte-carlo-simulation#step-3" rel="noopener noreferrer"&gt;Step 3&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje5x70digq1ysjmwl6v0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje5x70digq1ysjmwl6v0.png" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/workshop-setup/self-service/monte-carlo-simulation#step-4" rel="noopener noreferrer"&gt;Step 4&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbft65yymajvck1p7js6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbft65yymajvck1p7js6y.png" width="800" height="98"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Module 1 — Basics
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Introduction to Distributed Map
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/hello-dmap#what-is-distributed-map" rel="noopener noreferrer"&gt;What is Distributed Map?&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Distributed Map is a map state that executes the same processing steps for each entry in a dataset, with up to 10,000 concurrent executions. This means you can run a large-scale parallel data processing workload without worrying about how to parallelize the executions, workers, and data. Distributed Map can iterate over millions of objects, such as logs, images, or records inside .csv or .json files stored in Amazon S3, launching up to 10,000 parallel child workflows to process the data. The child workflows can include any combination of AWS services, such as AWS Lambda functions, Amazon ECS tasks, or AWS SDK calls.&lt;/p&gt;
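&lt;p&gt;As a rough sketch (not the workshop's exact definition), a minimal Distributed Map state in Amazon States Language can be written as follows; the bucket, key, and Lambda ARN are placeholders, and the JSON is embedded in Python only so it can be parsed and inspected:&lt;/p&gt;

```python
import json

# Sketch of a Distributed Map state in Amazon States Language (ASL).
# The bucket, key, and Lambda ARN are placeholders, not workshop values.
definition = json.loads("""
{
  "Type": "Map",
  "ItemReader": {
    "Resource": "arn:aws:states:::s3:getObject",
    "ReaderConfig": {"InputType": "CSV", "CSVHeaderLocation": "FIRST_ROW"},
    "Parameters": {"Bucket": "my-data-bucket", "Key": "reviews.csv"}
  },
  "MaxConcurrency": 1000,
  "ItemProcessor": {
    "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
    "StartAt": "ProcessRecord",
    "States": {
      "ProcessRecord": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process-record",
        "End": true
      }
    }
  },
  "End": true
}
""")

# The "DISTRIBUTED" processor mode is what distinguishes a distributed map
# from an inline map; the ItemReader streams items straight from S3.
print(definition["ItemProcessor"]["ProcessorConfig"]["Mode"])  # DISTRIBUTED
```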

&lt;p&gt;Distributed Map is the newer of the two map types available in Step Functions. The primary differences between the inline map and Distributed Map are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Distributed Map supports far higher concurrency (up to 10,000) than the inline map's limit of 40.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed Map can iterate directly over data stored in Amazon S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each iteration of Distributed Map runs as a separate child workflow, which avoids the 25,000-event execution history limit of a single Step Functions execution.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1lucosog6kn5tsr9k48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1lucosog6kn5tsr9k48.png" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/hello-dmap#what-you-do-in-the-module" rel="noopener noreferrer"&gt;What you do in the module&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this module, you will explore a pre-created workflow that uses distributed map. The sample workflow processes &lt;a href="https://cseweb.ucsd.edu/~jmcauley/datasets.html#amazon_reviews" rel="noopener noreferrer"&gt;Amazon reviews data&lt;/a&gt; from &lt;a href="https://github.com/MengtingWan/marketBias/tree/master/data" rel="noopener noreferrer"&gt;this repo&lt;/a&gt;. It is an 83 MB CSV file with 1,292,954 records.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89xjv2fphwpp3jfd4c3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89xjv2fphwpp3jfd4c3p.png" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow iterates on the electronics review data and filters highly rated reviews. You will download the data to S3, run the workflow, and verify the results. In the process, you will learn how to build and run a simple distributed map workflow yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/hello-dmap#services-in-this-module" rel="noopener noreferrer"&gt;Services in this module&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions &lt;/a&gt;— Serverless visual workflow service&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reviewing the Workflow
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/states/home" rel="noopener noreferrer"&gt;Step Functions console &lt;/a&gt;then select the state machine containing “HelloDmap” in its name.&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Edit&lt;/strong&gt; to edit the workflow in Workflow Studio.&lt;/p&gt;

&lt;p&gt;Review the definition by selecting the &lt;strong&gt;Definition&lt;/strong&gt; toggle at the right side of the page.&lt;/p&gt;

&lt;p&gt;Select each state in the design to review its definition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbej78qvy7tradvxk1w2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbej78qvy7tradvxk1w2n.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take a closer look at the map definition. You define it as &lt;em&gt;DISTRIBUTED&lt;/em&gt; to tell Step Functions to run the map state in distributed mode.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsjy8fm62avgjjk0j9lx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsjy8fm62avgjjk0j9lx.png" width="370" height="93"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Did you notice &lt;em&gt;ItemReader&lt;/em&gt; in the definition? This is how you tell distributed map to process a CSV or JSON file. Notice that the reader uses the &lt;em&gt;s3:getObject&lt;/em&gt; API to read the object. Yes! It reads the content of the CSV file and distributes the data to child workflows in batches, running at a concurrency of up to 10,000!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpage6aoh37v7l4zvx2zg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpage6aoh37v7l4zvx2zg.png" width="372" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Read ahead and you will notice that there are a few more settings.&lt;/p&gt;

&lt;p&gt;First, you can batch the input. Do you see &lt;em&gt;MaxItemsPerBatch&lt;/em&gt; set to 1000?&lt;/p&gt;

&lt;p&gt;Not only can you run 10,000 (10K) child workflows, you can also batch the data sent to each workflow, which means a single iteration can process &lt;strong&gt;10K * 1K = 10M&lt;/strong&gt; records from the CSV file!&lt;/p&gt;

&lt;p&gt;Second, you can write the output of the distributed map, that is, the child workflow execution results, to an S3 location in an aggregated fashion.&lt;/p&gt;

&lt;p&gt;Third, you can set a failure tolerance. What does that mean? You don’t want to keep processing 100M records when half of them are bad data; it is a waste of time and money. By default, the failure tolerance is set to 0, so any single child workflow failure results in the failure of the whole workflow.&lt;/p&gt;

&lt;p&gt;Data quality is a big challenge with large data processing. So, you can set a percentage or number of items that can be tolerated as failures. When failures exceed that tolerance, the Step Functions workflow fails, saving you time and money.&lt;/p&gt;
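&lt;p&gt;A minimal sketch of that tolerance behavior, in Python for illustration (the corresponding setting in the map definition is &lt;em&gt;ToleratedFailurePercentage&lt;/em&gt;; an absolute &lt;em&gt;ToleratedFailureCount&lt;/em&gt; is also available):&lt;/p&gt;

```python
def map_run_failed(total_items: int, failed_items: int,
                   tolerated_failure_percentage: float = 0.0) -> bool:
    """Decide whether the map run fails, mirroring the tolerance
    behavior conceptually: the run fails once the failure rate exceeds
    the configured percentage (default 0, so any failure fails the run)."""
    if total_items == 0:
        return False
    failure_rate = failed_items / total_items * 100
    return failure_rate > tolerated_failure_percentage

# With the default tolerance of 0, a single failed child fails the run.
```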

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqduir4z6g5kcflq5xgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqduir4z6g5kcflq5xgy.png" width="378" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you can see what is inside the distributed map. In this introduction module, we did not use any compute services like Lambda to process the records. Instead, you can see a &lt;em&gt;pass&lt;/em&gt; state filtering highly rated reviews. The pass state is really useful for transforming and filtering input, and it makes it simple to demonstrate the 10K concurrency without worrying about the scale of a downstream service. In a real-world scenario, however, you can include any combination of AWS services like AWS Lambda functions, Amazon ECS tasks, or AWS SDK calls in the child workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtm3rh8n7xwduad1x7xc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtm3rh8n7xwduad1x7xc.png" width="375" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The states inside the distributed map run as separate child workflows. The number of child workflows depends on the concurrency setting and the volume of records to process. For example, you might set the concurrency to 1000 and the batch size to 100, but if the total number of records in the file is just 20K, Step Functions only needs 200 child workflows (20,000 / 100 = 200). On the other hand, if the file has 200K records, Step Functions will spin up 1000 child workflows to reach the max concurrency and, as child workflows complete, keep spinning up new ones until all 2000 (200,000 / 100 = 2000) child workflows have completed.&lt;/p&gt;
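&lt;p&gt;The arithmetic above can be sketched with a couple of helper functions (illustrative only; Step Functions does this for you):&lt;/p&gt;

```python
import math

def total_child_workflows(total_records: int, batch_size: int) -> int:
    """One child workflow per batch of records."""
    return math.ceil(total_records / batch_size)

def concurrent_child_workflows(total_workflows: int, max_concurrency: int) -> int:
    """Step Functions never runs more than max_concurrency at once."""
    return min(total_workflows, max_concurrency)

# 20K records, batches of 100 -> 200 child workflows in total.
# 200K records -> 2000 in total, at most 1000 running at a time.
```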

&lt;h2&gt;
  
  
  Running the Workflow
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/hello-dmap/start-execution#prepare-data-set" rel="noopener noreferrer"&gt;Prepare data set&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You are going to download the data file to your local computer and upload it to S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the following command on your local machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Linux or macOS (bash)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/MengtingWan/marketBias/master/data/df_electronics.csv --output df_electronics.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Windows (PowerShell)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Invoke-WebRequest https://raw.githubusercontent.com/MengtingWan/marketBias/master/data/df_electronics.csv -OutFile df_electronics.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/s3" rel="noopener noreferrer"&gt;S3 console &lt;/a&gt;then select the bucket containing “hellodmapdatabucket” in its name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the downloaded file “df_electronics.csv” to the S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/hello-dmap/start-execution#run-the-workflow" rel="noopener noreferrer"&gt;Run the workflow&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/states/home" rel="noopener noreferrer"&gt;Step Functions Console &lt;/a&gt;, select and open the state machine containing &lt;strong&gt;HelloDmapStateMachine&lt;/strong&gt; in its name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;strong&gt;Start execution&lt;/strong&gt; in the top right corner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the popup, enter the following input:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "key": "df_electronics.csv",
    "output": "results"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You are providing the name of the file and the S3 prefix where you want the results of the distributed map to be stored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Start execution&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In a few seconds, you will see the execution start to run. It takes a couple of minutes to complete the processing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2r1qncenuoi1lwypfi8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2r1qncenuoi1lwypfi8.png" width="293" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! You have successfully run the hello distributed map workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Viewing the Workflow Results
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/hello-dmap/view-results#view-map-run" rel="noopener noreferrer"&gt;View map run&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Navigate to the bottom of the workflow execution page and click &lt;strong&gt;Map Run&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbuwlgddruz9hox7fh9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbuwlgddruz9hox7fh9d.png" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the status of the distributed processes in the &lt;strong&gt;Item processing status&lt;/strong&gt; card. It shows the number of records processed and the duration. 1,292,954 records processed in less than 90 seconds! At the bottom of the page, you can see the links to all the child workflow executions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsvjqy35bisi7ie1yupu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsvjqy35bisi7ie1yupu.png" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open one of the child workflows and view the execution input/output. You see in the output window that the &lt;em&gt;pass&lt;/em&gt; state filtered the records with ratings of 4 and above.&lt;/p&gt;
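&lt;p&gt;Conceptually, the pass state’s filter behaves like the following sketch (the &lt;em&gt;rating&lt;/em&gt; field name here is an assumption for illustration, not necessarily the dataset’s actual column name):&lt;/p&gt;

```python
def filter_high_ratings(records, min_rating=4.0):
    """Keep only highly rated reviews, analogous to the workflow's
    pass state. The 'rating' field name is assumed for this sketch."""
    return [r for r in records if float(r["rating"]) >= min_rating]

reviews = [
    {"item": "A", "rating": "5.0"},
    {"item": "B", "rating": "3.0"},
    {"item": "C", "rating": "4.0"},
]
high = filter_high_ratings(reviews)  # keeps items A and C
```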

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbl0q1vtwj01yyz8ezp8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbl0q1vtwj01yyz8ezp8c.png" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/hello-dmap/view-results#verifying-s3-results-bucket" rel="noopener noreferrer"&gt;Verifying S3 results bucket&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Recall you also stored the results of the workflow to an S3 bucket! Open the &lt;a href="https://console.aws.amazon.com/s3" rel="noopener noreferrer"&gt;S3 console &lt;/a&gt;then select the bucket containing &lt;strong&gt;hellodmapresultsbucket&lt;/strong&gt; in its name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijr1r4b2vc3b73whjkar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijr1r4b2vc3b73whjkar.png" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the contents inside the results prefix, select "SUCCEEDED_0.json", and download the file to view the results. Notice that the content of the file is the aggregated result of all the child workflows. If you are building &lt;em&gt;map reduce&lt;/em&gt; use cases, this content can be used downstream. To learn more about how the result writer works, follow the link &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/input-output-resultwriter.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
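&lt;p&gt;As a rough illustration of consuming the aggregated file downstream, assuming each entry carries the child workflow’s output as a JSON string in an &lt;em&gt;Output&lt;/em&gt; field (see the result writer docs for the exact format):&lt;/p&gt;

```python
import json

def load_child_outputs(succeeded_file_content: str):
    """Parse an aggregated results file. This sketch assumes each entry
    is an object whose 'Output' field holds the child workflow's output
    as a JSON string."""
    entries = json.loads(succeeded_file_content)
    return [json.loads(entry["Output"]) for entry in entries]

# Hypothetical sample content standing in for SUCCEEDED_0.json:
sample = json.dumps([
    {"Name": "child-0", "Output": json.dumps({"filtered": 120})},
    {"Name": "child-1", "Output": json.dumps({"filtered": 98})},
])
outputs = load_child_outputs(sample)
```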

&lt;p&gt;Fantastic! You processed &lt;strong&gt;1.3M&lt;/strong&gt; records in less than &lt;strong&gt;90 seconds&lt;/strong&gt; without managing servers or writing complex code. Distributed map is &lt;strong&gt;exciting&lt;/strong&gt;, right?!&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You have now reviewed a pre-created workflow with distributed map, learned some important attributes of the distributed map definition, and ran the workflow yourself using the AWS console.&lt;/p&gt;

&lt;p&gt;While the AWS console provides a convenient way to execute workflows at the click of a button, in real-world production environments, large-scale data processing workflows are commonly invoked on a schedule or in response to an event (e.g., a file upload).&lt;/p&gt;

&lt;p&gt;Two common patterns to invoke AWS Step Functions state machines are &lt;a href="https://docs.aws.amazon.com/scheduler/latest/UserGuide/schedule-types.html" rel="noopener noreferrer"&gt;Amazon EventBridge schedules &lt;/a&gt;and &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventNotifications.html" rel="noopener noreferrer"&gt;S3 event notifications &lt;/a&gt;. You can use the following links for instructions to set up event-driven or scheduled executions for the workflows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/using-eventbridge-scheduler.html" rel="noopener noreferrer"&gt;Periodically start a workflow execution using an Amazon EventBridge Scheduler&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html" rel="noopener noreferrer"&gt;Start a state machine execution in response to S3 events&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next module, you will build a distributed map workflow yourself. Get ready for some challenging questions as well!&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a Distributed Map Workflow
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/process-multiple-files#introduction" rel="noopener noreferrer"&gt;Introduction&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In the previous module, you saw an example of distributed processing with a single S3 object. Distributed map not only iterates on a single large object located in S3, it can also iterate on a collection of objects in S3. You can process each object in parallel and aggregate the results. This supports various use cases, such as processing thousands of log files, running a Monte Carlo simulation that applies the same processing to multiple inputs, or running a backfill process that scans millions of files for security vulnerabilities on past dates.&lt;/p&gt;

&lt;p&gt;The diagram below shows how distributed map works for multiple S3 objects. Notice that it uses &lt;em&gt;s3:listObjectsV2&lt;/em&gt; instead of the &lt;em&gt;s3:getObject&lt;/em&gt; used in the previous sub-module. When processing multiple objects, distributed map lists the metadata of the objects and distributes batches of that metadata to the child workflows. Because the child workflows receive only metadata, you can process any file format: structured, unstructured, or semi-structured.&lt;/p&gt;
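&lt;p&gt;The listing-and-batching behavior can be sketched as follows (illustrative; the service builds these batches for you, and the &lt;em&gt;Items&lt;/em&gt;/&lt;em&gt;BatchInput&lt;/em&gt; key names mirror the shape a child workflow receives):&lt;/p&gt;

```python
def batch_object_keys(keys, bucket, max_items_per_batch=100):
    """Mimic how distributed map hands out listObjectsV2 results: each
    child workflow receives a batch of object metadata plus a shared
    BatchInput (here, the bucket name)."""
    batches = []
    for i in range(0, len(keys), max_items_per_batch):
        batches.append({
            "BatchInput": {"Bucket": bucket},
            "Items": [{"Key": k} for k in keys[i:i + max_items_per_batch]],
        })
    return batches

# Hypothetical key names; 250 objects in batches of 100 -> 3 batches.
keys = ["csv/by_station/station-%04d.csv" % n for n in range(250)]
batches = batch_object_keys(keys, "my-data-bucket")
```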

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9zfa6bczjvcpreab7wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9zfa6bczjvcpreab7wj.png" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/process-multiple-files#what-you-do-in-the-module" rel="noopener noreferrer"&gt;What you do in the module&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this module, you are going to build a distributed map workflow that processes about a thousand weather data files from &lt;a href="https://docs.opendata.aws/noaa-ghcn-pds/readme.html" rel="noopener noreferrer"&gt;NOAA climatology data&lt;/a&gt;. The workflow will find the highest precipitation for each weather station and store the results in a DynamoDB table. Each weather station’s data is an individual S3 object containing various weather readings, and there are about 1,000 stations.&lt;/p&gt;
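&lt;p&gt;The per-station computation is conceptually simple. Here is a hedged sketch that assumes station,date,element,value rows with the element code &lt;em&gt;PRCP&lt;/em&gt; marking precipitation, in the style of the GHCN dataset; the pre-created Lambda function’s actual parsing may differ:&lt;/p&gt;

```python
import csv
import io

def highest_precipitation(station_csv: str):
    """Return (date, value) of the highest precipitation reading in one
    station's file. Assumes station,date,element,value rows where the
    element code 'PRCP' marks precipitation."""
    best = None
    for station, date, element, value in csv.reader(io.StringIO(station_csv)):
        if element == "PRCP":
            reading = float(value)
            if best is None or reading > best[1]:
                best = (date, reading)
    return best

# Hypothetical sample rows for one station:
data = "US1,20230101,PRCP,12.0\nUS1,20230102,TMAX,31.0\nUS1,20230103,PRCP,40.5\n"
```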

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/process-multiple-files#services-used" rel="noopener noreferrer"&gt;Services used&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions &lt;/a&gt;— Serverless visual workflow service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda &lt;/a&gt;— Compute service; functions in serverless runtimes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;Amazon DynamoDB &lt;/a&gt;is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data export tools.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/process-multiple-files#pre-created-resources" rel="noopener noreferrer"&gt;Pre-created resources&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To quickly build the workflow, we have created a few resources ahead of time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Lambda function to find the highest precipitation for the station&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;One S3 bucket for the dataset and another S3 bucket for storing distributed map results&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sample dataset of 1,000 S3 objects from NOAA climatology data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon DynamoDB table to store the precipitation data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building the Workflow
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/process-multiple-files/build#workflow-studio" rel="noopener noreferrer"&gt;Workflow Studio&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/states/home" rel="noopener noreferrer"&gt;Step Functions console &lt;/a&gt;then choose &lt;strong&gt;Create state machine&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;Choose the &lt;strong&gt;Blank&lt;/strong&gt; card and choose &lt;strong&gt;Select&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You are in Workflow Studio. Take a moment to explore it. You will see the actions, flows, and patterns on the left side. You can drag the states on the left to the center of the page where you see the workflow design. You can configure the input, output, errors, etc. on the right side of the UI. If you click the &lt;strong&gt;Definition&lt;/strong&gt; toggle, you can view the &lt;em&gt;ASL&lt;/em&gt; definition of the workflow.&lt;/p&gt;

&lt;p&gt;Take some time to explore the menus.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfy8j69q7vx1zf9gs6zx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfy8j69q7vx1zf9gs6zx.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alright! Let’s start building the workflow.&lt;/p&gt;

&lt;p&gt;Select the &lt;strong&gt;Flow&lt;/strong&gt; tab under the search text box on the left side.&lt;/p&gt;

&lt;p&gt;Drag the &lt;strong&gt;Map&lt;/strong&gt; state between the Start and End states.&lt;/p&gt;

&lt;p&gt;Change the &lt;strong&gt;Processing mode&lt;/strong&gt; to &lt;strong&gt;Distributed — new&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk6st6qhkd9tk55l9p9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk6st6qhkd9tk55l9p9b.png" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alright. You are now going to configure additional attributes of the distributed map. First, you will configure where to read the dataset from: you will pass the location of the pre-created dataset in S3 as input.&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Amazon S3&lt;/strong&gt; as the &lt;em&gt;Item source&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;S3 object list&lt;/strong&gt; in &lt;strong&gt;S3 item source&lt;/strong&gt; dropdown.&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Get bucket and prefix at runtime from state input&lt;/strong&gt; in &lt;strong&gt;S3 bucket&lt;/strong&gt; dropdown.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Bucket Name&lt;/strong&gt;, enter $.bucket&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Prefix&lt;/strong&gt;, enter $.prefix&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbkoaacmyiecaypbso54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbkoaacmyiecaypbso54.png" width="800" height="820"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enable batching&lt;/strong&gt; and set the &lt;strong&gt;Max items per batch&lt;/strong&gt; to 100.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmjjzbyq8pr5xo6d4hig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmjjzbyq8pr5xo6d4hig.png" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave everything else as default and move on to adding the child workflow components.&lt;/p&gt;

&lt;p&gt;Enter Lambda in the search textbox at the top left.&lt;/p&gt;

&lt;p&gt;Drag and drop the &lt;strong&gt;Lambda — Invoke&lt;/strong&gt; action to the center.&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;Function name&lt;/strong&gt; dropdown and select the function with &lt;strong&gt;HighPrecipitation&lt;/strong&gt; in its name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fduldcff1m8waxxcu9t0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fduldcff1m8waxxcu9t0j.png" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Review the definition in the workflow studio by clicking the &lt;strong&gt;Definition&lt;/strong&gt; toggle at the right.&lt;/p&gt;

&lt;p&gt;Notice that the &lt;em&gt;ItemReader&lt;/em&gt; object uses &lt;em&gt;listObjectsV2&lt;/em&gt;. In sub-module 1, you saw &lt;em&gt;getObject&lt;/em&gt; in the ItemReader. The reason is that you processed a single S3 object in sub-module 1, whereas you are processing multiple S3 objects in this sub-module. You can nest both patterns to read multiple CSV/JSON files in a highly parallel fashion.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      "ItemReader": {
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {
          "Bucket.$": "$.bucket",
          "Prefix.$": "$.prefix"
        }
      },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Select the &lt;strong&gt;Config&lt;/strong&gt; tab next to the state machine name at the top of the page and change the state machine name to &lt;strong&gt;FindHighPrecipitationWorkflow&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Choose the existing IAM role with &lt;strong&gt;StatesHighPrecipitation&lt;/strong&gt; in its name.&lt;/p&gt;

&lt;p&gt;Choose the &lt;strong&gt;Create&lt;/strong&gt; button at the top.&lt;/p&gt;

&lt;p&gt;Run the workflow by choosing the &lt;strong&gt;Start execution&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;Wait! You need the bucket and prefix as input to the workflow.&lt;/p&gt;

&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/s3" rel="noopener noreferrer"&gt;S3 console &lt;/a&gt;then copy the full name of the bucket containing &lt;strong&gt;MultiFileDataBucket&lt;/strong&gt; in its name.&lt;/p&gt;

&lt;p&gt;Return to the &lt;em&gt;Start execution&lt;/em&gt; popup and enter the following JSON as input, replacing the bucket name with your bucket name from S3:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "bucket": "bucketname",
  "prefix": "csv/by_station"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify, compare your ASL definition with the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Comment": "A description of my state machine",
  "StartAt": "Map",
  "States": {
    "Map": {
      "Type": "Map",
      "ItemProcessor": {
        "ProcessorConfig": {
          "Mode": "DISTRIBUTED",
          "ExecutionType": "STANDARD"
        },
        "StartAt": "Lambda Invoke",
        "States": {
          "Lambda Invoke": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "OutputPath": "$.Payload",
            "Parameters": {
              "Payload.$": "$",
              "FunctionName": "arn:aws:lambda:{region}:{account}:function:sfw-processmulti-distribu-HighPrecipitationFunctio-vH7XVagF8llI:$LATEST"
            },
            "Retry": [
              {
                "ErrorEquals": [
                  "Lambda.ServiceException",
                  "Lambda.AWSLambdaException",
                  "Lambda.SdkClientException",
                  "Lambda.TooManyRequestsException"
                ],
                "IntervalSeconds": 1,
                "MaxAttempts": 3,
                "BackoffRate": 2
              }
            ],
            "End": true
          }
        }
      },
      "End": true,
      "Label": "Map",
      "MaxConcurrency": 1000,
      "ItemReader": {
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {
          "Bucket.$": "$.bucket",
          "Prefix.$": "$.prefix"
        }
      },
      "ItemBatcher": {
        "MaxItemsPerBatch": 100,
        "BatchInput": {
          "Bucket.$": "$.bucket"
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/process-multiple-files/build#fix-the-workflow" rel="noopener noreferrer"&gt;Fix the workflow&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Expand the error ribbon to see the reason for failure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3507jqqb0f6bo8630ann.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3507jqqb0f6bo8630ann.png" width="800" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, distributed map fails when even a single child workflow fails. Let’s explore what actually caused the child workflow to fail.&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Map Run&lt;/strong&gt; to check the child workflow executions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi66hkafhwvdw4awkuhdc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi66hkafhwvdw4awkuhdc.png" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select a child workflow execution to see the error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0gj3b0sq8lw20o36i6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0gj3b0sq8lw20o36i6t.png" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It looks like the Lambda function expects &lt;em&gt;event["BatchInput"]["Bucket"]&lt;/em&gt;, and it is not found. Explore the input to the Lambda function by selecting &lt;strong&gt;Execution input and output&lt;/strong&gt; at the top.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixe8y7lbthe0ozrqv1sw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixe8y7lbthe0ozrqv1sw.png" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The input only contains the S3 key. It is missing the bucket name.&lt;/p&gt;

&lt;p&gt;Alright! Head back to Workflow Studio to pass the bucket name as input to the Lambda function.&lt;/p&gt;

&lt;p&gt;Close this tab and return to the previous tab, or you can browse to the workflow by following the breadcrumb at the top.&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Edit&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;Select the map state in the workflow design.&lt;/p&gt;

&lt;p&gt;Modify the Batch input under &lt;strong&gt;Item batching&lt;/strong&gt; to include the bucket name.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ 
  "Bucket.$": "$.bucket"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Batch input enables you to send additional input to the child workflow steps as global data. For example, the bucket name is global data and does not have to be repeated in each individual line item.&lt;/p&gt;
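&lt;p&gt;As a rough sketch of what each child workflow then receives (the field names &lt;em&gt;BatchInput&lt;/em&gt;, &lt;em&gt;Items&lt;/em&gt;, and &lt;em&gt;Key&lt;/em&gt; mirror the map configuration above; the handler itself is hypothetical, not the workshop’s actual code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical Lambda handler illustrating how a child workflow reads
# batched input: "BatchInput" carries global data shared by the whole
# batch, while "Items" carries the per-object entries from listObjectsV2.
def handler(event, context):
    bucket = event["BatchInput"]["Bucket"]
    keys = [item["Key"] for item in event["Items"]]
    return {"bucket": bucket, "count": len(keys)}

# A sample event shaped like the input a child workflow receives:
sample_event = {
    "BatchInput": {"Bucket": "my-dataset-bucket"},
    "Items": [{"Key": "csv/file-0001.csv"}, {"Key": "csv/file-0002.csv"}],
}
print(handler(sample_event, None))  # the shared bucket plus the item count
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;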

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhqvkqghdnisq6abcqu6n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhqvkqghdnisq6abcqu6n.png" width="800" height="915"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Save&lt;/strong&gt; the workflow and choose &lt;strong&gt;Execute&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;Do not forget to execute the workflow with the proper bucket and prefix input. If you would like, you can copy the input from the previous execution of the workflow.&lt;/p&gt;

&lt;p&gt;Voila! The execution succeeds now!&lt;/p&gt;

&lt;h2&gt;
  
  
  Viewing the Workflow Results
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/process-multiple-files/view-results#verify-results" rel="noopener noreferrer"&gt;Verify Results&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/lambda" rel="noopener noreferrer"&gt;Lambda console &lt;/a&gt;and select the function containing &lt;strong&gt;HighPrecipitation&lt;/strong&gt; in its name.&lt;/p&gt;

&lt;p&gt;Explore the &lt;strong&gt;Code&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The Lambda function writes the calculated value to a DynamoDB table.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def _write_results_to_ddb(high_by_station: Dict[str, Dict]):
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(os.environ["RESULTS_DYNAMODB_TABLE_NAME"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The table name comes from the environment variable &lt;strong&gt;RESULTS_DYNAMODB_TABLE_NAME&lt;/strong&gt;.&lt;/p&gt;
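&lt;p&gt;A minimal sketch of how the rest of such a function might write each station’s result (the attribute names and the stand-in table object below are illustrative assumptions, not the workshop’s actual schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch only: writes one DynamoDB item per station. A stand-in table
# object is used so the example runs without AWS credentials; in the
# real function, "table" is the boto3 Table resource shown above.
def write_results(table, high_by_station):
    for station, data in high_by_station.items():
        # "pk" and the merged data attributes are assumed names
        table.put_item(Item={"pk": station, **data})

class FakeTable:
    def __init__(self):
        self.items = []
    def put_item(self, Item):
        self.items.append(Item)

table = FakeTable()
write_results(table, {"USW00094846": {"PRCP": 872}})
print(table.items)  # one item per station
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;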

&lt;p&gt;Click &lt;strong&gt;Configuration&lt;/strong&gt; and select &lt;strong&gt;Environment variables&lt;/strong&gt; to view the table name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sab6xph37keecs2slus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sab6xph37keecs2slus.png" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/dynamodbv2" rel="noopener noreferrer"&gt;DynamoDB console &lt;/a&gt;then select &lt;strong&gt;Tables&lt;/strong&gt; from left side menu.&lt;/p&gt;

&lt;p&gt;Select the table name that you saw in the Lambda configuration and choose &lt;strong&gt;Explore table items&lt;/strong&gt;. You can now view the calculated highest precipitation across stations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7wnsvxqu3nniw7wxm7t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7wnsvxqu3nniw7wxm7t.png" width="682" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this module, you built a data processing workflow with distributed map, configuring Step Functions to distribute the S3 objects across multiple child workflows to process them in parallel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When processing large numbers of objects in S3 with Distributed Map, you have a couple of options for listing those objects: S3 &lt;em&gt;listObjectsV2&lt;/em&gt; and an &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-inventory.html" rel="noopener noreferrer"&gt;S3 Inventory List &lt;/a&gt;. With S3 &lt;em&gt;listObjectsV2&lt;/em&gt;, Step Functions makes &lt;em&gt;listObjectsV2&lt;/em&gt; API calls on your behalf to retrieve all of the items needed to run the Distributed Map. Each call to &lt;em&gt;listObjectsV2&lt;/em&gt; can return a maximum of 1,000 S3 objects, so if you have 2,000,000 objects to process, Step Functions has to make at least 2,000 API calls. This API is fast and won’t take too long, but if you have an S3 Inventory file that lists all the objects you need to process, you can use that as the input instead.&lt;/p&gt;

&lt;p&gt;Using an S3 Inventory file as the input for a Distributed Map is faster than S3 &lt;em&gt;listObjectsV2&lt;/em&gt; when processing large numbers of files. This is because, with an S3 Inventory ItemReader, there is a single S3 &lt;em&gt;getObject&lt;/em&gt; call to fetch the manifest file and then one call for each inventory file. If you know your Distributed Map will run on a set schedule, you can schedule the S3 Inventory to be created ahead of time.&lt;/p&gt;
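&lt;p&gt;A quick back-of-the-envelope comparison, using the 2,000,000-object figure from above (the helper name is my own, for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import math

# listObjectsV2 returns at most 1,000 keys per call, so enumerating
# the dataset takes ceil(objects / 1,000) API calls.
def list_calls_needed(num_objects, page_size=1000):
    return math.ceil(num_objects / page_size)

print(list_calls_needed(2_000_000))  # 2000 calls

# With an S3 Inventory, it is one getObject for the manifest plus one
# getObject per inventory file, regardless of the object count.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;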

&lt;h2&gt;
  
  
  Module 2 — Advanced
&lt;/h2&gt;

&lt;p&gt;Welcome to the Advanced module of the data processing workshop!&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics" rel="noopener noreferrer"&gt;basics&lt;/a&gt; module, you learned how to use distributed map to build a large-scale data processing solution. For a production-ready application, you need to make sure the solution is optimized for performance, cost, and more. In this module, we will focus on optimizing the cost and performance of the distributed map.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization#introduction" rel="noopener noreferrer"&gt;Introduction&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this module, we use the same example workflow from earlier sub module &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/basics/process-multiple-files" rel="noopener noreferrer"&gt;Building a distributed map workflow&lt;/a&gt;. But we pre-created the workflow to make it easier for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization#services-used" rel="noopener noreferrer"&gt;Services used&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions &lt;/a&gt;— Serverless visual workflow service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda &lt;/a&gt;— compute service; functions in serverless runtimes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;Amazon DynamoDB &lt;/a&gt;is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data export tools.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization#pre-created-resources" rel="noopener noreferrer"&gt;Pre-created resources&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Lambda function to find the highest precipitation for the station.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;One S3 bucket for data set and another S3 bucket for storing distributed map results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sample data set of 1000 S3 objects from &lt;a href="https://docs.opendata.aws/noaa-ghcn-pds/readme.html" rel="noopener noreferrer"&gt;NOAA climatology data &lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon DynamoDB table to store the precipitation data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Step Functions workflow that processes the sample data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization#what-you-do-in-the-module" rel="noopener noreferrer"&gt;What you do in the module&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Using the precreated Step Functions workflow, you will tune some attributes/fields of distributed map and understand the performance and cost impact of the change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Workflow Type
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/workflow-type#workflow-types" rel="noopener noreferrer"&gt;Workflow types&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Step Functions offers two types of workflows: Standard and Express. Standard workflows are ideal for long-running workflows; they can run for up to 365 days, whereas Express workflows can run for at most 5 minutes. Another important distinction is pricing: Standard workflows are priced by state transition, while Express workflows are priced by the number of requests and their duration. To learn more about the differences, &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/concepts-standard-vs-express.html" rel="noopener noreferrer"&gt;click here &lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When you use a distributed map, Step Functions spins up child workflows to run the states inside it. The number of child workflows depends on the number of objects or records to process, the batch size, and the concurrency. You can define the child workflows to run as either Standard or Express based on your use case.&lt;/p&gt;
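&lt;p&gt;The relationship can be sketched with a small helper (the function name is my own; the numbers are ones used elsewhere in this workshop):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import math

# One child workflow is started per batch of items; MaxConcurrency
# only caps how many of them run at the same time.
def child_workflow_count(num_items, max_items_per_batch):
    return math.ceil(num_items / max_items_per_batch)

print(child_workflow_count(1000, 100))     # 10 child workflows
print(child_workflow_count(500_000, 500))  # 1000 child workflows
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;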

&lt;p&gt;In the following sections, you will learn how to change the workflow type of distributed map child workflows, try out a technique to determine whether an Express workflow suits your use case, and compare the cost impact of running Standard versus Express.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/workflow-type#workflow-studio" rel="noopener noreferrer"&gt;Workflow studio&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Navigate to the &lt;a href="https://console.aws.amazon.com/states/home" rel="noopener noreferrer"&gt;Step Functions Console &lt;/a&gt;and select &lt;strong&gt;State machines&lt;/strong&gt; from the left menu.&lt;/p&gt;

&lt;p&gt;Select the workflow that starts with &lt;strong&gt;OptimizationStateMachine&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Choose the &lt;strong&gt;Edit&lt;/strong&gt; button to edit the workflow in Workflow Studio.&lt;/p&gt;

&lt;p&gt;Review the definition in Workflow Studio by enabling &lt;strong&gt;Definition&lt;/strong&gt; on the right.&lt;/p&gt;

&lt;p&gt;Highlight the &lt;strong&gt;Distributed map high precipitation&lt;/strong&gt; step in the workflow graphic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu563bkzff9gxv0gkqrbv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu563bkzff9gxv0gkqrbv.png" width="565" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the child workflow &lt;strong&gt;ExecutionType&lt;/strong&gt;. It is set as &lt;strong&gt;STANDARD&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      "ItemProcessor": {
        "ProcessorConfig": {
          "Mode": "DISTRIBUTED",
          "ExecutionType": "STANDARD"
        },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Also, explore the batch setting. Each child workflow receives a batch of 100 objects.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      "ItemBatcher": {
        "MaxItemsPerBatch": 100
      },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/workflow-type#identify-if-workflow-can-be-express" rel="noopener noreferrer"&gt;Identify if workflow can be Express&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Child workflows can run as either STANDARD or EXPRESS. Express workflows are generally less expensive and run faster than Standard workflows.&lt;/p&gt;

&lt;p&gt;Sometimes you may not be sure whether your workflow will finish within 5 minutes. In this section, you are going to use a feature of distributed map that lets you test your data with a small number of items. This technique is helpful in a couple of ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To determine the duration of the child workflow&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To gain confidence that the child workflow logic will run fine when running with full data set.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start making the changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Toggle &lt;strong&gt;Definition&lt;/strong&gt; button to edit the configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expand &lt;strong&gt;Additional configuration&lt;/strong&gt; and select &lt;strong&gt;Limit number of items&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type 1 in &lt;strong&gt;Max Items&lt;/strong&gt; textbox.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3kmvft6r1rknz54jn8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3kmvft6r1rknz54jn8k.png" width="423" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Save and execute with default input.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Map Run&lt;/strong&gt; from the execution page.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xjfourf00bqgcag0hml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xjfourf00bqgcag0hml.png" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Observe the duration for a single item. It is around 3 seconds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn4n9nc4gosww5o74kdg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn4n9nc4gosww5o74kdg.png" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Repeat the above steps with 100 items.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explore the &lt;strong&gt;Map Run&lt;/strong&gt; page to find the duration for 100 items. It is less than 30 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now that you know it takes 3 seconds to process 1 item and 26 seconds for 100 items, you could even run all 1000 items in a single child workflow with a concurrency of 1. But then you would not utilize any parallelism to speed up the process.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This simple test is really handy for finding the right batch size and choosing the workflow type without running the entire dataset!&lt;/p&gt;
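&lt;p&gt;One way to use those measurements, sketched as code (the 3 s and 26 s figures come from the test above; the linear extrapolation and helper name are my own assumptions, since real durations may not scale perfectly linearly):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Estimate what fraction of the 5-minute Express limit a batch would
# use, assuming duration grows roughly linearly with batch size.
EXPRESS_LIMIT_SECONDS = 5 * 60

def express_limit_fraction(seconds_per_100_items, batch_size):
    estimated_seconds = seconds_per_100_items * (batch_size / 100)
    return estimated_seconds / EXPRESS_LIMIT_SECONDS

print(round(express_limit_fraction(26, 100), 3))   # about 9% of the limit
print(round(express_limit_fraction(26, 1000), 3))  # about 87% of the limit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;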

&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/workflow-type#changing-the-workflow-type-to-express" rel="noopener noreferrer"&gt;Changing the workflow type to Express&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Return to workflow studio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change the workflow type to &lt;strong&gt;EXPRESS&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymn81vc6wo1206sm1yzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymn81vc6wo1206sm1yzj.png" width="418" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Unselect &lt;strong&gt;Limit number of items&lt;/strong&gt; under &lt;strong&gt;Additional configuration&lt;/strong&gt; to revert the setting to its pre-test state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save the workflow and run.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explore the &lt;strong&gt;Map Run&lt;/strong&gt; page and note down the duration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you now run this workflow as a Standard workflow type and compare the durations, you will notice the Express workflow execution was faster.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/workflow-type#review-cost-impact" rel="noopener noreferrer"&gt;Review Cost impact&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Suppose you are processing 500K objects and set the batch size to 500.&lt;/p&gt;

&lt;p&gt;500k objects / 500 objects per workflow = &lt;strong&gt;1000&lt;/strong&gt; child workflows&lt;/p&gt;

&lt;p&gt;Distributed map runs a total of 1000 child workflows to process 500K objects.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/workflow-type#standard-child-workflow-execution-cost" rel="noopener noreferrer"&gt;Standard child workflow execution cost&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In the example used in this module, we have one Lambda function inside the child workflow, so the number of state transitions per child workflow is 2: one for starting the child workflow and one for the Lambda function.&lt;/p&gt;

&lt;p&gt;Total cost = (number of transitions per execution x number of executions) x $0.000025&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total cost&lt;/strong&gt; = (2 * 1000) x $0.000025 = &lt;strong&gt;$0.05&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now assume you have 5 steps inside your child workflow; the number of state transitions per child workflow is then 6.&lt;/p&gt;

&lt;p&gt;Total cost = (number of transitions per execution x number of executions) x $0.000025&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total cost&lt;/strong&gt; = (6 * 1000) x $0.000025 = &lt;strong&gt;$0.15&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s take another scenario: you have 2 steps inside your child workflow, so the number of state transitions per child workflow is 3. Assume you cannot utilize batching, so the number of child workflows needed to complete the work is 500K.&lt;/p&gt;

&lt;p&gt;Total cost = (number of transitions per execution x number of executions) x $0.000025&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total cost&lt;/strong&gt; = (3 * 500K) x $0.000025 = &lt;strong&gt;$37.5&lt;/strong&gt;&lt;/p&gt;
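&lt;p&gt;The three calculations above follow the same formula and can be sketched for reference (the helper name is my own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Standard workflow cost: transitions per execution x executions,
# at $0.000025 per state transition (the rate quoted above).
PRICE_PER_TRANSITION = 0.000025

def standard_cost(transitions_per_execution, executions):
    return transitions_per_execution * executions * PRICE_PER_TRANSITION

print(round(standard_cost(2, 1000), 4))     # $0.05
print(round(standard_cost(6, 1000), 4))     # $0.15
print(round(standard_cost(3, 500_000), 4))  # $37.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;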
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/workflow-type#express-child-workflow-execution-cost" rel="noopener noreferrer"&gt;Express child workflow execution cost&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;With Express workflows, you pay for the number of requests and their duration. For the scenario outlined earlier under Review cost impact, we need one additional dimension, how long the workflow runs, to calculate the Express workflow cost. Let’s assume the Express child workflow runs for an average of 100 seconds to process 500 objects using 64 MB of memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Duration cost&lt;/strong&gt; = (average billed duration in ms / 100) x $0.0000001042&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Duration cost&lt;/strong&gt; = (100,000 ms / 100) x $0.0000001042 = $0.0001042&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Express request cost&lt;/strong&gt; = $0.000001 per request ($1.00 per 1M requests)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow cost&lt;/strong&gt; = (Express request cost + Duration cost) x Number of requests&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow cost&lt;/strong&gt; = ($0.000001 + $0.0001042) x 1000 = &lt;strong&gt;$0.10&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An Express child workflow incurs 1 state transition per child workflow, regardless of how many states are inside the workflow. This transition starts each child execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transition cost&lt;/strong&gt; = (1 * 1000) x $0.000025 = &lt;strong&gt;$0.025&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total cost&lt;/strong&gt; = $0.10 + $0.025 = &lt;strong&gt;$0.125&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If we repeat the calculation for an express workflow that runs for 30 seconds, the total cost = &lt;strong&gt;$0.057&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If we repeat the calculation for an express workflow that runs for 1 second to process 1 object because you cannot utilize batching, the total cost = &lt;strong&gt;$13.42&lt;/strong&gt;&lt;/p&gt;
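&lt;p&gt;The Express arithmetic above can be sketched as code (rates as quoted: $0.000001 per request, $0.0000001042 per 100 ms at 64 MB, $0.000025 per state transition; the function name is my own). Computing without rounding the intermediate workflow cost gives roughly $0.13 for the 100-second case, slightly above the $0.125 quoted above, where the workflow cost was first rounded to $0.10:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REQUEST_COST = 0.000001        # $ per request
DURATION_RATE = 0.0000001042   # $ per 100 ms at 64 MB
TRANSITION_COST = 0.000025     # $ per state transition (one per child)

def express_total_cost(avg_duration_ms, executions):
    duration_cost = (avg_duration_ms / 100) * DURATION_RATE
    workflow_cost = (REQUEST_COST + duration_cost) * executions
    return workflow_cost + executions * TRANSITION_COST

print(round(express_total_cost(100_000, 1000), 3))  # 100 s average: about $0.13
print(round(express_total_cost(30_000, 1000), 3))   # 30 s average: about $0.057
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;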

&lt;p&gt;&lt;strong&gt;What did you observe?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Express workflows are cheaper when the duration is shorter. They are also cost-effective when there are more steps in the child workflow or when your distributed map cannot make use of batching. Remember, Standard workflows are priced by state transitions, so cost increases as the number of steps and the number of child workflow executions increase.&lt;/p&gt;

&lt;p&gt;The chart below shows how Express workflow duration affects the cost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknpjo507v8t5xd8ilolt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknpjo507v8t5xd8ilolt.png" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Balancing Cost and Performance Using Concurrency and Batching
&lt;/h2&gt;

&lt;p&gt;Higher parallelism generally means you can run the workflows faster. However, higher parallelism with no or suboptimal batching results in higher cost, for a couple of reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;There are more state transitions for Standard workflows and more request costs for Express workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Services you use inside the child workflow may charge per request.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Higher parallelism can also cause scaling bottlenecks for downstream services used inside the child workflow. You can use the distributed map’s concurrency control to limit the number of parallel workflows. If you have multiple workflows and need to manage downstream scaling, you can use techniques such as &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/connect-sqs.html" rel="noopener noreferrer"&gt;queueing &lt;/a&gt;and &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/concepts-activities.html" rel="noopener noreferrer"&gt;activities &lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the following sections, you will run a few experiments with batch size and understand the performance and cost impact of batch size between standard and express workflows.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/batching#review-the-workflow" rel="noopener noreferrer"&gt;Review the workflow&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Navigate to the &lt;a href="https://console.aws.amazon.com/states/home" rel="noopener noreferrer"&gt;Step Functions Console &lt;/a&gt;and select &lt;strong&gt;State machines&lt;/strong&gt; from the left menu.&lt;/p&gt;

&lt;p&gt;Select the workflow that starts with &lt;strong&gt;OptimizationStateMachine&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Edit&lt;/strong&gt; to open the workflow in Workflow Studio.&lt;/p&gt;

&lt;p&gt;Review the definition in Workflow Studio by enabling &lt;strong&gt;Definition&lt;/strong&gt; on the right.&lt;/p&gt;

&lt;p&gt;Highlight the &lt;strong&gt;Distributed map high precipitation&lt;/strong&gt; step in the workflow graphic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiu0pbr4tx61dp58uqec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiu0pbr4tx61dp58uqec.png" width="565" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each child workflow receives a batch of 100 objects, and the concurrency (parallelism) of the map is set to 1000.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      "ItemBatcher": {
        "MaxItemsPerBatch": 100
      },
      "MaxConcurrency": 1000,
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/batching#run-the-workflow" rel="noopener noreferrer"&gt;Run the workflow&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Execute the workflow with default input.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Map Run&lt;/strong&gt; from the Execution page&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Examine the map run results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Note down the duration. You can see there are 10 child workflow executions: with only 1000 objects and a batch size of 100, only 10 parallel workflows are triggered.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Close all the tabs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/batching#change-batch-setting" rel="noopener noreferrer"&gt;Change batch setting&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://console.aws.amazon.com/states/home" rel="noopener noreferrer"&gt;Step Functions Console &lt;/a&gt;, select State machines from the right menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the workflow that starts with &lt;strong&gt;OptimizationStateMachine&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;strong&gt;edit&lt;/strong&gt; to edit the workflow in workflow studio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Highlight the Distributed map high precipitation step and view the configurations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Modify the &lt;strong&gt;MaxItemsPerBatch&lt;/strong&gt; to 1&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbksyu0u5jfhs1fa70klk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbksyu0u5jfhs1fa70klk.png" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Save and Execute the workflow with default input&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explore the map run results. You can see 1000 child workflow executions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All the child workflows complete in a little under 25 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repeat the exercise with different batch settings. What did you observe?&lt;/p&gt;

&lt;p&gt;Yes: the total duration increases as you increase the batch size, because a single Lambda invocation loops through the array passed to it, which increases the duration of each Lambda execution.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/batching#review-the-cost-impact" rel="noopener noreferrer"&gt;Review the cost impact&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You are now going to review the cost impact of different batch sizes. Assume your workflow is processing 50M S3 objects. Let’s compare the cost of the Distributed Map when you define the child workflow execution type as Standard versus Express.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/batching#standard-child-workflow" rel="noopener noreferrer"&gt;Standard child workflow&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Assume you need to process 50M objects and you have 2 steps inside your child workflow. Each child workflow processes a batch of 100 objects, so the number of state transitions per child workflow is 3, and the total number of child workflows needed to process 50M objects is 500,000.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total cost&lt;/strong&gt; = (number of transitions per execution x number of executions) x $0.000025&lt;/p&gt;

&lt;p&gt;Total number of child workflows to process 50M objects = &lt;strong&gt;500,000&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total cost&lt;/strong&gt; = (3 x 500,000) x $0.000025 = &lt;strong&gt;$37.50&lt;/strong&gt;&lt;/p&gt;
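&lt;p&gt;The Standard-workflow pricing above can be captured as a small helper; the per-transition price of $0.000025 is the one used in this example:&lt;/p&gt;

```python
def standard_workflow_cost(transitions_per_execution: int,
                           executions: int,
                           price_per_transition: float = 0.000025) -> float:
    """Total cost = total number of state transitions x price per transition."""
    return transitions_per_execution * executions * price_per_transition

# 50M objects in batches of 100 gives 500,000 child workflows,
# each making 3 state transitions:
cost = standard_workflow_cost(3, 500_000)
assert round(cost, 2) == 37.50
```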
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/batching#express-vs-standard-vs-batch-size" rel="noopener noreferrer"&gt;Express vs Standard vs Batch size&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwl7w7cb8z82dgy6hgvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwl7w7cb8z82dgy6hgvw.png" width="712" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a visual representation of the same information:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh18xwbsmedo4o2qlyth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh18xwbsmedo4o2qlyth.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What did you observe?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The degree of parallelism is determined by the Concurrency setting in Distributed Map, which sets the maximum number of child workflows you want to execute in parallel at once. A key consideration here is the service quotas of the AWS services called in your child workflow. For example, AWS Lambda in most large regions has a default concurrency quota of 1,500 and a default burst limit of 3,000, while other services such as Amazon Rekognition or Amazon Textract have much lower default quotas.&lt;/p&gt;

&lt;p&gt;The other thing to keep in mind is any performance limitation of the other systems your child workflow interacts with. An example is an on-premises relational database that a Lambda function within a child workflow connects to. This database might support only a limited number of connections, so you would need to cap your concurrency accordingly. Once you identify all of the AWS service quotas and any additional concurrency limitations, test various combinations of batch size and concurrency to find the best performance within your constraints.&lt;/p&gt;
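&lt;p&gt;One way to reason about this: the realized parallelism is the minimum of all the limits in play, i.e. the Map Concurrency setting, downstream service quotas, and any external system limits. A minimal sketch, using hypothetical numbers:&lt;/p&gt;

```python
def effective_concurrency(map_concurrency: int, *downstream_limits: int) -> int:
    """The realized parallelism is capped by the tightest constraint."""
    return min(map_concurrency, *downstream_limits)

# Hypothetical: Map Concurrency of 1000, a Lambda concurrency quota
# of 1500, and an on-premises database that allows 200 connections.
assert effective_concurrency(1000, 1500, 200) == 200
```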

&lt;p&gt;Review the documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/" rel="noopener noreferrer"&gt;Lambda performance optimization&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Module 3 — Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing" rel="noopener noreferrer"&gt;HealthCare Claims Processing&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In the US healthcare system, claims are typically categorized as professional, institutional, or dental claims when they are submitted to health insurance payers. Health plans are responsible for validating these claims, responding to the provider, assessing the claims, making payments to the provider, and providing an explanation of benefits to the member. In this module, we focus on the validation phase of the claims process, which occurs after the claims data has already been converted to comply with the FHIR specification. During the validation phase, various business rules are applied to validate and enrich the claims. This represents the final step in the incoming flow of claims before they are transformed into custom data formats required by backend claims adjudication systems.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/security-vulnerability-scanning" rel="noopener noreferrer"&gt;Vulnerability Scanning&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Discovering and reporting vulnerabilities and security issues by scanning documents is a common process. If you are a security partner, when a new customer is onboarded, there can be hundreds of thousands of files to scan. Similarly, if a security procedure changes, previously scanned files may need to be rescanned. Scanning a large number of files is both a time-consuming and expensive process. In this module, you will learn to scale a vulnerability scanning application to quickly and efficiently handle hundreds of thousands of files.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation" rel="noopener noreferrer"&gt;Monte Carlo Simulation&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A Monte Carlo simulation is a mathematical technique that allows us to predict different outcomes for various changes to a given system. In financial portfolio analysis, the technique can be used to predict likely outcomes for an aggregate portfolio across a range of potential conditions, such as the aggregate rate of return or default rate in various market conditions. The technique is also valuable in scenarios where your business case requires predicting the likely outcome of individual portfolio assets, such as detailed portfolio analysis or stress tests.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Healthcare Claims Processing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Estimated Duration: 30 minutes&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing#introduction" rel="noopener noreferrer"&gt;Introduction&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In the US healthcare system, claims are typically categorized as professional, institutional, or dental claims when they are submitted to health insurance payers. Health plans are responsible for validating these claims, responding to the provider, assessing the claims, making payments to the provider, and providing an explanation of benefits to the member. In this module, we focus on the validation phase of the claims process, which occurs after the claims data has already been converted to comply with the FHIR specification. During the validation phase, various business rules are applied to validate and enrich the claims. This represents the final step in the incoming flow of claims before they are transformed into custom data formats required by backend claims adjudication systems.&lt;/p&gt;

&lt;p&gt;You will build a Step Functions workflow that processes healthcare claims data in a highly parallel fashion. The workflow uses the Distributed Map state to run multiple child workflows, each processing a batch of the overall claims data. Each child workflow picks a set of individual claims files and processes them using &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda &lt;/a&gt;functions that load the data into an &lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;Amazon DynamoDB &lt;/a&gt;table and then apply rules to determine the validity of the claims. Upon processing the claims, the functions return their output to the workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx4tvll68az887hkcr7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx4tvll68az887hkcr7w.png" width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing#what-you-will-accomplish" rel="noopener noreferrer"&gt;What you will accomplish&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Learn how to use and configure a Distributed Map state for Healthcare claims data processing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analyze the results of Distributed Map run&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Challenge yourself to optimize the solution&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing#services-in-this-module" rel="noopener noreferrer"&gt;Services in this module&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;Amazon S3 &lt;/a&gt;— Object storage built to retrieve any amount of data from anywhere&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions &lt;/a&gt;— Visual workflows for distributed applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda &lt;/a&gt;— Serverless compute service; Run code without thinking about servers or clusters&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;Amazon DynamoDB &lt;/a&gt;— Fully managed, serverless, key-value NoSQL database designed to run high-performance applications&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing#what's-included-in-this-module" rel="noopener noreferrer"&gt;What’s included in this module&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Data processing code in the following &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda &lt;/a&gt;functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DMapHealthCareClaimProcessingFunction&lt;/strong&gt;: This function reads a claims file and stores data in an &lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;Amazon DynamoDB &lt;/a&gt;table (&lt;em&gt;DMapHealthCareClaimTable&lt;/em&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DMapHealthCareRuleEngineLambdaFunction&lt;/strong&gt;: This function reads data from the &lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;Amazon DynamoDB &lt;/a&gt;table and applies rules to determine whether each claim should be accepted or rejected, returning the result along with the reason for any rejection.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
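&lt;p&gt;As a rough illustration of the rule-engine step, a validation function might look like the sketch below. The rules shown are hypothetical, not the workshop’s actual rules, and the field names follow the FHIR Claim excerpt shown later in this module:&lt;/p&gt;

```python
def validate_claim(claim: dict) -> dict:
    """Hypothetical claim rules: returns Approved/Rejected plus reasons."""
    reasons = []
    if claim.get("status") != "active":
        reasons.append("claim is not active")
    if not claim.get("total", {}).get("value"):
        reasons.append("missing or zero total amount")
    if not claim.get("insurance"):
        reasons.append("no insurance coverage listed")
    status = "Rejected" if reasons else "Approved"
    return {"id": claim.get("id"), "claimStatus": status, "rejectedReason": reasons}

result = validate_claim({
    "id": "c1",
    "status": "active",
    "total": {"value": 14297.1},
    "insurance": [{"coverage": {"display": "Medicaid"}}],
})
assert result["claimStatus"] == "Approved"
```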
&lt;h2&gt;
  
  
  Exploring the Dataset
&lt;/h2&gt;

&lt;p&gt;Navigate to the S3 console and search for the &lt;strong&gt;dmapworkshophealthcare&lt;/strong&gt; bucket. This bucket contains 1,026 JSON files holding around 60,000 records, with a total size of 270 MB. These JSON files were generated using the &lt;a href="https://github.com/synthetichealth/synthea" rel="noopener noreferrer"&gt;Synthea Health Library &lt;/a&gt;to simulate patient claims data in &lt;a href="https://www.hl7.org/fhir/overview.html" rel="noopener noreferrer"&gt;FHIR &lt;/a&gt;format.&lt;/p&gt;

&lt;p&gt;The excerpt below shows one such claim record.&lt;/p&gt;

&lt;p&gt;Each claim has a claim coding, the service dates, the provider details, the procedure details, an item-wise bill, the total amount and any additional information required for processing.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
        "resourceType": "Claim",
        "id": "d6c1872c-9b97-0fc1-161a-5c2c9b3f54bf",
        "status": "active",
        "type": {
            "coding": [
                {
                    "system": "http://terminology.hl7.org/CodeSystem/claim-type",
                    "code": "institutional"
                }
            ]
        },
        "use": "claim",
        "patient": {
            "reference": "urn:uuid:86ad439b-3df7-8882-2458-af0d0743d12b",
            "display": "Alvin56 Zulauf375"
        },
        "billablePeriod": {
            "start": "2013-09-09T11:01:44+00:00",
            "end": "2013-09-09T12:01:44+00:00"
        },
        "created": "2013-09-09T12:01:44+00:00",
        "provider": {
            "reference": "Organization?identifier=https://github.com/synthetichealth/synthea|5c896155-eb9a-383e-9162-a43ebb7f1cc5",
            "display": "LINDEN PONDS"
        },
        "priority": {
            "coding": [
                {
                    "system": "http://terminology.hl7.org/CodeSystem/processpriority",
                    "code": "normal"
                }
            ]
        },
        "facility": {
            "reference": "Location?identifier=https://github.com/synthetichealth/synthea|de4402eb-c9e7-3723-9584-345f665c5f5c",
            "display": "LINDEN PONDS"
        },
        "procedure": [
            {
                "sequence": 1,
                "procedureReference": {
                    "reference": "urn:uuid:ece5738e-95ce-4dc6-1df0-95f4dcccce9d"
                }
            }
        ],
        "insurance": [
            {
                "sequence": 1,
                "focal": true,
                "coverage": {
                    "display": "Medicaid"
                }
            }
        ],
        "item": [
            {
                "sequence": 1,
                "productOrService": {
                    "coding": [
                        {
                            "system": "http://snomed.info/sct",
                            "code": "182813001",
                            "display": "Emergency treatment (procedure)"
                        }
                    ],
                    "text": "Emergency treatment (procedure)"
                },
                "encounter": [
                    {
                        "reference": "urn:uuid:c6f74fed-bdb9-6f50-29c5-3519fd948936"
                    }
                ]
            },
            {
                "sequence": 2,
                "procedureSequence": [
                    1
                ],
                "productOrService": {
                    "coding": [
                        {
                            "system": "http://snomed.info/sct",
                            "code": "65546002",
                            "display": "Extraction of wisdom tooth"
                        }
                    ],
                    "text": "Extraction of wisdom tooth"
                },
                "net": {
                    "value": 14150.92,
                    "currency": "USD"
                }
            }
        ],
        "total": {
            "value": 14297.1,
            "currency": "USD"
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
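&lt;p&gt;To make the structure concrete, the handful of fields the processing functions care about can be pulled out of such a record like this (the claim below is a trimmed version of the excerpt above):&lt;/p&gt;

```python
# A FHIR Claim resource trimmed to a few keys from the excerpt above.
claim = {
    "resourceType": "Claim",
    "id": "d6c1872c-9b97-0fc1-161a-5c2c9b3f54bf",
    "type": {"coding": [{"code": "institutional"}]},
    "provider": {"display": "LINDEN PONDS"},
    "total": {"value": 14297.1, "currency": "USD"},
}

# Flatten the nested FHIR structure into a simple summary record.
summary = {
    "id": claim["id"],
    "claim_type": claim["type"]["coding"][0]["code"],
    "provider": claim["provider"]["display"],
    "total": claim["total"]["value"],
}
assert summary["claim_type"] == "institutional"
assert summary["total"] == 14297.1
```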

&lt;p&gt;In the next step, you will build a workflow with a Distributed Map state to analyze this dataset.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Data Analysis Workflow
&lt;/h2&gt;

&lt;p&gt;In this step, you’ll create a workflow in Workflow Studio to analyze the Healthcare claims data using a Distributed Map state.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing/step-3#creating-the-workflow" rel="noopener noreferrer"&gt;Creating the Workflow&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://console.aws.amazon.com/states/home" rel="noopener noreferrer"&gt;AWS Step Functions &lt;/a&gt;in your AWS console. Make sure you are in the correct region.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you are not on the State machines page, choose &lt;strong&gt;State machines&lt;/strong&gt; from the hamburger menu on the left, then select &lt;strong&gt;Create state machine&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the Choose a template overlay, choose the &lt;strong&gt;Blank&lt;/strong&gt; template, then choose &lt;strong&gt;Select&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42i19rc5b8h8s8vrnun8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42i19rc5b8h8s8vrnun8.png" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;Patterns&lt;/strong&gt; tab and drag &lt;strong&gt;Process S3 objects&lt;/strong&gt; onto the Workflow Studio canvas.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbz9blakboc0697fb2tk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbz9blakboc0697fb2tk.png" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure the Distributed Map state with the following values:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zmwkanjh5ydjnbxmbnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zmwkanjh5ydjnbxmbnc.png" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbyao0m9nig0akms3nnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbyao0m9nig0akms3nnz.png" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;Lambda Invoke&lt;/strong&gt; state within the Distributed Map state. Configure the state with the following values.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh30t7al20fige5jb7jis.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh30t7al20fige5jb7jis.png" width="510" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeaterqcek6ey5ddn0ca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyeaterqcek6ey5ddn0ca.png" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search for &lt;strong&gt;AWS Lambda&lt;/strong&gt; and drag the &lt;em&gt;Invoke&lt;/em&gt; state onto the canvas within the Distributed Map under the existing Lambda state. Configure the state with the following values:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0po3u2qhvgkdhl7wkry8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0po3u2qhvgkdhl7wkry8.png" width="506" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp21uhjfb7lqekzq59cii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp21uhjfb7lqekzq59cii.png" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;Config&lt;/strong&gt; tab next to the state machine name at the top of the page and edit the state machine name to &lt;strong&gt;HealthCareClaimProcessingStateMachine&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you don’t use this exact name, you may receive an IAM error in the next step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjz9guy3mxngrt752xyx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjz9guy3mxngrt752xyx3.png" width="615" height="46"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzd1k5xkzz65yc7764pu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzd1k5xkzz65yc7764pu.png" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;For the Execution role, choose the existing role whose name contains &lt;strong&gt;HealthCareClaimProcessingStateMachineRole&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave the rest of the defaults and select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the next step, you’ll execute the workflow and view the results of the data processing job.&lt;/p&gt;
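&lt;p&gt;For reference, the console steps above produce an Amazon States Language (ASL) definition roughly like the sketch below, built here as a Python dict. The bucket name and the omitted Lambda function ARNs are placeholders, not the workshop’s actual resource names:&lt;/p&gt;

```python
import json

definition = {
    "StartAt": "Process claims files",
    "States": {
        "Process claims files": {
            "Type": "Map",
            "MaxConcurrency": 500,  # illustrative concurrency cap
            "ItemReader": {
                "Resource": "arn:aws:states:::s3:listObjectsV2",
                "Parameters": {"Bucket": "dmapworkshophealthcare-placeholder"},
            },
            "ItemBatcher": {"MaxItemsPerBatch": 50},
            "ItemProcessor": {
                "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
                "StartAt": "Load claims",
                "States": {
                    # Function ARNs omitted; the workshop wires these to
                    # DMapHealthCareClaimProcessingFunction and
                    # DMapHealthCareRuleEngineLambdaFunction.
                    "Load claims": {
                        "Type": "Task",
                        "Resource": "arn:aws:states:::lambda:invoke",
                        "Next": "Validate claims",
                    },
                    "Validate claims": {
                        "Type": "Task",
                        "Resource": "arn:aws:states:::lambda:invoke",
                        "End": True,
                    },
                },
            },
            "End": True,
        }
    },
}

# The definition serializes to the JSON you would paste into the console.
asl_json = json.dumps(definition, indent=2)
```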

&lt;h2&gt;
  
  
  Executing the Workflow and Viewing the Results
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing/step-4#view-map-run" rel="noopener noreferrer"&gt;View map run&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Start execution&lt;/strong&gt; and use the default input payload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The execution will take up to 5 minutes to complete successfully.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the execution details page, select the Distributed Map state in the Graph view, then select the Details tab.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwmd9lim9qbmr2h8m65u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwmd9lim9qbmr2h8m65u.png" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select the Map Run link to view details of the Distributed Map execution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This page provides a summary of the Distributed Map job.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21frcalej8etpijg5tcl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21frcalej8etpijg5tcl.png" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We can see that 21 child workflow executions completed successfully with 0 failures. Each child workflow processed 50 files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We can view the duration of each child workflow execution. You can see overlapping timestamps for the start and end times, indicating that the data was processed in parallel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you select the execution name, you can use the Execution Input and output tab to view the input files for a child workflow execution and the execution output with details.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq807m8q2e8e6j1314q43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq807m8q2e8e6j1314q43.png" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing/step-4#verifying-dynamodb-results" rel="noopener noreferrer"&gt;Verifying DynamoDB results&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The Validate Claim function applies the rules to the claims and stores the claim status (&lt;strong&gt;Approved&lt;/strong&gt; / &lt;strong&gt;Rejected&lt;/strong&gt;), along with any rejection reason, in the DynamoDB table &lt;strong&gt;DMapHealthCareClaimTable&lt;/strong&gt;. You can use the gear icon on the right side of the screen to select which columns to view.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgvq77ctlwogljoe1yc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgvq77ctlwogljoe1yc1.png" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/healthcare-claims-processing/step-4#summary" rel="noopener noreferrer"&gt;Summary&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You have come to the end of the Healthcare Claims Processing module. In this module, you created a workflow with a Distributed Map state, learned some important attributes of the Distributed Map definition, and ran the workflow yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You used Distributed Map state to quickly process a large dataset using parallel processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extra Credits
&lt;/h2&gt;

&lt;p&gt;Great! You have now executed and analyzed the results of the workflow. &lt;strong&gt;Well done!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But don’t stop there! You still need to optimize for performance and cost.&lt;/p&gt;

&lt;p&gt;Here is a list of things to try to understand the various levers you have at your disposal when optimizing a workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Increase the concurrency limit to 1000 and execute it again. Does it change the duration of the execution?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What happens if you decrease the Item Batching size to 25 and execute the workflow? What is the impact on duration as well as cost?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What combination of concurrency limit and batching size would be optimal?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What happens if you change the type of the workflow to ‘Express’ and execute it? What is the impact on cost? Would this workflow type work for any batching size of the provided data set?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Review what you learned earlier in this workshop about &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/advanced/optimization/batching" rel="noopener noreferrer"&gt;concurrency and batching&lt;/a&gt;.&lt;/p&gt;
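&lt;p&gt;Using the pricing formula from the batching module, you can estimate how the batch-size experiments above change the Standard-workflow cost before you run them. This assumes 3 state transitions per child workflow, as in the earlier example:&lt;/p&gt;

```python
import math

def standard_map_cost(objects: int, batch_size: int,
                      transitions_per_child: int = 3,
                      price_per_transition: float = 0.000025) -> float:
    """Standard Distributed Map cost, per the earlier pricing formula."""
    children = math.ceil(objects / batch_size)
    return children * transitions_per_child * price_per_transition

# Smaller batches mean more child workflows, hence more state
# transitions and a higher Standard-workflow cost:
assert round(standard_map_cost(50_000_000, 100), 2) == 37.50
assert round(standard_map_cost(50_000_000, 25), 2) == 150.00
```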

&lt;h2&gt;
  
  
  Security Vulnerability Scanning
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Estimated Duration: 30 minutes&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/security-vulnerability-scanning#introduction" rel="noopener noreferrer"&gt;Introduction&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You are developing a security vulnerability scanning application that alerts you to sensitive information in plain text files. The application you have built executes a workflow that scans a single file for exposed social security numbers (SSNs) and, if one is detected, sends a message to a queue for further downstream processing.&lt;/p&gt;
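&lt;p&gt;The per-file scan step can be sketched with a simple regular expression. This assumes SSNs appear in the common NNN-NN-NNNN form; the workshop’s actual detection logic may differ:&lt;/p&gt;

```python
import re

# Match SSN-shaped strings like 123-45-6789 on word boundaries.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_text_for_ssns(text: str) -> list:
    """Return any exposed SSN-like strings found in a plain text file."""
    return SSN_PATTERN.findall(text)

findings = scan_text_for_ssns("name: Jane Doe\nssn: 123-45-6789\n")
assert findings == ["123-45-6789"]
# A non-empty result is what would trigger the message to the SQS queue.
```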

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq9016vbgbymdif9twke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq9016vbgbymdif9twke.png" width="544" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You intended to scan files individually as they were uploaded to your S3 bucket but, due to delays in development, you have accumulated a large backlog of unscanned, potentially vulnerable files in S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/security-vulnerability-scanning#what-you-do-in-the-module" rel="noopener noreferrer"&gt;What you do in the module&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this module, you will scale your security vulnerability scanning application using Step Functions Distributed Map to quickly address this backlog by processing multiple files concurrently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsbdpin7ul9ax3y105pv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcsbdpin7ul9ax3y105pv.png" width="584" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/security-vulnerability-scanning#services-in-this-module" rel="noopener noreferrer"&gt;Services in this module&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;Amazon S3 &lt;/a&gt;— Object storage built to retrieve any amount of data from anywhere&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions &lt;/a&gt;— Visual workflows for distributed applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda &lt;/a&gt;— Run code without thinking about servers or clusters&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/sqs/" rel="noopener noreferrer"&gt;Amazon SQS &lt;/a&gt;— Fully managed message queuing for microservices, distributed systems, and serverless applications&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reviewing the Workflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://us-east-1.console.aws.amazon.com/states/home" rel="noopener noreferrer"&gt;Step Functions console &lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the state machine containing “&lt;strong&gt;VulnerabilityScanningStateMachine&lt;/strong&gt;” in its name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Edit&lt;/strong&gt; to review the design in Workflow Studio.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd6y8urpupcusftsymul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd6y8urpupcusftsymul.png" width="544" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select the Lambda state titled “&lt;strong&gt;Scan&lt;/strong&gt;”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;API Parameters&lt;/strong&gt;, click &lt;strong&gt;View function&lt;/strong&gt; to review the &lt;strong&gt;Code&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For reference, the function code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kqpm08w05em5ndf49ic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kqpm08w05em5ndf49ic.png" width="501" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Return to Workflow Studio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Choice&lt;/strong&gt; state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Choice Rules&lt;/strong&gt;, click the edit icons to expand the rule logic.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The state machine orchestrates a Lambda function which scans a single file and, if an exposed SSN is detected, sends an SQS message with the location of the SSN and its serial number (the last four numbers).&lt;/p&gt;
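&lt;p&gt;The per-file scan logic can be sketched in a few lines of Python. This is an illustrative stand-in for the workshop's Lambda function, assuming files store SSNs as &lt;em&gt;ssn=&lt;/em&gt; tokens (the pattern the provided function searches for); the name &lt;em&gt;scan_text&lt;/em&gt; is hypothetical.&lt;/p&gt;

```python
import re

def scan_text(key, body):
    # Illustrative single-file scan: report the file key and the serial
    # (last four digits) of each exposed SSN token found in the text.
    findings = []
    for token in re.findall(r"ssn=[^\s]+", body):
        _, number = token.split("=")  # e.g. "ssn=123-45-6789"
        findings.append({"key": key, "serial": number[-4:]})
    return findings

print(scan_text("records/1.txt", "name=jo ssn=123-45-6789"))
# → [{'key': 'records/1.txt', 'serial': '6789'}]
```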

&lt;h2&gt;
  
  
  Executing the Workflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://s3.console.aws.amazon.com/s3/home" rel="noopener noreferrer"&gt;S3 console &lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the name of the bucket containing “&lt;strong&gt;vulnerabilitydatabucket&lt;/strong&gt;”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy both the full name of the bucket and the name of a file in that bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Return to Workflow Studio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Execute&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter the following input, replacing [bucket name] and [file name] with the names you copied:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "detail": {
        "bucket": {
            "name": "[bucket name]"
        },
        "object": {
            "key": "[file name]"
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0301qyrp06pa0xxmnmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0301qyrp06pa0xxmnmd.png" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Start execution&lt;/strong&gt; then wait a moment for the execution to succeed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcb6tn6p085mtxyx4cbla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcb6tn6p085mtxyx4cbla.png" width="206" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your execution graph will vary depending on the file name you copied. Only executions that process a file with an exposed SSN will send a message to the queue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling the Workflow with Distributed Map
&lt;/h2&gt;

&lt;p&gt;In this section, you will add a Distributed Map state around the existing workflow to process multiple files concurrently.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Edit state machine&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Code&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Overwrite the JSON definition with the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "StartAt": "Map",
  "States": {
    "Map": {
      "Type": "Map",
      "ItemProcessor": {
        "ProcessorConfig": {
          "Mode": "DISTRIBUTED",
          "ExecutionType": "STANDARD"
        },
        "StartAt": "Scan",
        "States": {
          "Scan": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "OutputPath": "$.Payload",
            "Parameters": {
              "Payload.$": "$",
              "FunctionName": ""
            },
            "Retry": [
              {
                "ErrorEquals": [
                  "Lambda.ServiceException",
                  "Lambda.AWSLambdaException",
                  "Lambda.SdkClientException",
                  "Lambda.TooManyRequestsException"
                ],
                "IntervalSeconds": 2,
                "MaxAttempts": 6,
                "BackoffRate": 2
              }
            ],
            "Next": "Choice"
          },
          "Choice": {
            "Type": "Choice",
            "Choices": [
              {
                "Variable": "$.ssns",
                "IsPresent": true,
                "Next": "Queue"
              }
            ],
            "Default": "Pass"
          },
          "Queue": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sqs:sendMessage",
            "Parameters": {
              "MessageBody.$": "$",
              "QueueUrl": ""
            },
            "End": true
          },
          "Pass": {
            "Type": "Pass",
            "End": true
          }
        }
      },
      "End": true,
      "Label": "Map",
      "MaxConcurrency": 1000,
      "ItemReader": {
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {
          "Bucket.$": "$.bucket",
          "Prefix.$": "$.prefix"
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Design&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F107xjjoyw0ioyqs3clwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F107xjjoyw0ioyqs3clwz.png" width="584" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select the Lambda state titled “&lt;strong&gt;Scan&lt;/strong&gt;”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;API Parameters&lt;/strong&gt;, click on the &lt;strong&gt;Enter function name&lt;/strong&gt; dropdown then select the name containing “&lt;strong&gt;VulnerabilityScanning&lt;/strong&gt;”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the SQS state titled “&lt;strong&gt;Queue&lt;/strong&gt;”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;API Parameters&lt;/strong&gt;, click on the &lt;strong&gt;Enter queue URL&lt;/strong&gt; dropdown then select the URL containing “&lt;strong&gt;VulnerabilitiesQueue&lt;/strong&gt;”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Map&lt;/strong&gt; state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Item source&lt;/strong&gt;, expand &lt;strong&gt;Additional configuration&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Limiting the number of items that are sent to the Map state is useful for performing trial executions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Limit number of items&lt;/strong&gt; then enter 1000 under &lt;strong&gt;Max items&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With &lt;em&gt;ItemSelector&lt;/em&gt;, you can modify items before they are passed to child workflow executions, selecting only the relevant fields and adding input where needed.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Modify Items with ItemSelector&lt;/strong&gt; then enter the following JSON:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "detail": {
        "bucket": {
            "name.$": "$.bucket"
        },
        "object": {
            "key.$": "$$.Map.Item.Value.Key"
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
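&lt;p&gt;Conceptually, the &lt;em&gt;ItemSelector&lt;/em&gt; template reshapes each object returned by the S3 &lt;em&gt;listObjectsV2&lt;/em&gt; reader into the event format the existing workflow already expects. A rough Python analogy (not the actual Step Functions engine; the function name is illustrative):&lt;/p&gt;

```python
def apply_item_selector(bucket, s3_items):
    # Rough analogy: "$.bucket" resolves against the state input, while
    # "$$.Map.Item.Value.Key" resolves against the current item.
    return [
        {"detail": {"bucket": {"name": bucket},
                    "object": {"key": item["Key"]}}}
        for item in s3_items
    ]

inputs = apply_item_selector("my-bucket", [{"Key": "a.txt"}, {"Key": "b.txt"}])
print(inputs[0]["detail"]["object"]["key"])  # → a.txt
```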

&lt;p&gt;Express Workflows are ideal for short, high-volume workloads like this one; each execution can run for up to five minutes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Child execution type&lt;/strong&gt;, choose &lt;strong&gt;Express&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Code&lt;/strong&gt; tab again to review the JSON definition.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your &lt;em&gt;FunctionName&lt;/em&gt; and &lt;em&gt;QueueUrl&lt;/em&gt; will be different.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "StartAt": "Map",
  "States": {
    "Map": {
      "Type": "Map",
      "ItemProcessor": {
        "ProcessorConfig": {
          "Mode": "DISTRIBUTED",
          "ExecutionType": "EXPRESS"
        },
        "StartAt": "Scan",
        "States": {
          "Scan": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "OutputPath": "$.Payload",
            "Parameters": {
              "Payload.$": "$",
              "FunctionName": "arn:aws:lambda:us-east-1:172187416625:function:vulnerability-scanning-mo-VulnerabilityScanningFun-G786d6ZNTaPI:$LATEST"
            },
            "Retry": [
              {
                "ErrorEquals": [
                  "Lambda.ServiceException",
                  "Lambda.AWSLambdaException",
                  "Lambda.SdkClientException",
                  "Lambda.TooManyRequestsException"
                ],
                "IntervalSeconds": 2,
                "MaxAttempts": 6,
                "BackoffRate": 2
              }
            ],
            "Next": "Choice"
          },
          "Choice": {
            "Type": "Choice",
            "Choices": [
              {
                "Variable": "$.ssns",
                "IsPresent": true,
                "Next": "Queue"
              }
            ],
            "Default": "Pass"
          },
          "Queue": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sqs:sendMessage",
            "Parameters": {
              "MessageBody.$": "$",
              "QueueUrl": "https://sqs.us-east-1.amazonaws.com/172187416625/vulnerability-scanning-module-VulnerabilitiesQueue-UtFIvCLDH5Lh"
            },
            "End": true
          },
          "Pass": {
            "Type": "Pass",
            "End": true
          }
        }
      },
      "End": true,
      "Label": "Map",
      "MaxConcurrency": 1000,
      "ItemReader": {
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {
          "Bucket.$": "$.bucket",
          "Prefix.$": "$.prefix"
        },
        "ReaderConfig": {
          "MaxItems": 1000
        }
      },
      "ItemSelector": {
        "detail": {
          "bucket": {
            "name.$": "$.bucket"
          },
          "object": {
            "key.$": "$$.Map.Item.Value.Key"
          }
        }
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;If the changes to the JSON definition look accurate, click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Executing the Map Workflow
&lt;/h2&gt;

&lt;p&gt;In this section, you will test the workflow you saved in the last section. You will execute the workflow with the necessary input, explore child workflow executions, and verify the outputs by polling messages in the SQS queue.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Execute&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter the following input, replacing [bucket name] with the bucket name you copied:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "bucket": "[bucket name]",
    "prefix": ""
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcupy2r8ureoba2fgtirv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcupy2r8ureoba2fgtirv.png" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Start execution&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpa6bfx1tgzeri78akpu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpa6bfx1tgzeri78akpu6.png" width="270" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Events&lt;/strong&gt;, click &lt;strong&gt;Map Run&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because you limited the number of items processed, under &lt;strong&gt;Executions&lt;/strong&gt;, only 1,000 child workflows run, even though there are more files in S3.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click a given child execution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After a successful execution, select the &lt;strong&gt;Execution input and output&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9ny3qg3auj4cmywtffk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9ny3qg3auj4cmywtffk.png" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again, your output will vary depending on the file that was processed. Executions that process a clean file with no exposed SSN will return an empty object.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Return to the parent execution view.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0r938dod96kwytt3ndmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0r938dod96kwytt3ndmb.png" width="270" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final graph view should look like this upon completion.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://us-east-1.console.aws.amazon.com/sqs/v2/home" rel="noopener noreferrer"&gt;SQS console &lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the queue containing “&lt;strong&gt;VulnerabilitiesQueue&lt;/strong&gt;” in its name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Send and receive messages&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Poll for messages&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click a given message to view the &lt;strong&gt;Body&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The body of the message contains the location and serial number of the SSN(s).&lt;/p&gt;
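&lt;p&gt;For illustration, one polled message body might look like the following; the structure mirrors the Lambda function's output, and the file name is hypothetical:&lt;/p&gt;

```python
import json

# Hypothetical body of one SQS message: the Lambda output forwarded by the Queue state.
body = {"ssns": [{"key": "records/17.txt", "serial": "6789"}]}
print(json.dumps(body, indent=2))
```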

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Done&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Notice the number of &lt;em&gt;Messages available&lt;/em&gt; in the queue. You will now purge the queue in anticipation of the next full state machine execution.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the queue name.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8wt43d4i2qpvauersas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8wt43d4i2qpvauersas.png" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Under the queue name, click &lt;strong&gt;Purge&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter purge then click &lt;strong&gt;Purge&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Optimizing the Workflow with Batching
&lt;/h2&gt;

&lt;p&gt;In this section, you will modify your Step Functions workflow and Lambda function to enable batch processing, optimizing the performance and cost of your workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/security-vulnerability-scanning/edit-workflow-2#modify-the-workflow" rel="noopener noreferrer"&gt;Modify the Workflow&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Return to the Step Functions execution page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Edit state machine&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Map&lt;/strong&gt; state.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can optimize the performance and cost of your workflow by selecting a batch size that balances the number of items against each item's processing time. If you use batching, Step Functions adds the items to an Items array and passes that array as input to each child workflow execution.&lt;/p&gt;
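&lt;p&gt;The grouping can be pictured with a small Python analogy. The real Map state limits batches by payload size (&lt;em&gt;MaxInputBytesPerBatch&lt;/em&gt;) or item count; this sketch batches by count only, and every name in it is illustrative:&lt;/p&gt;

```python
def make_batches(items, batch_input, max_items_per_batch):
    # Rough analogy: each child execution receives the shared BatchInput
    # plus an Items array holding its slice of the data set.
    return [
        {"BatchInput": batch_input, "Items": items[i:i + max_items_per_batch]}
        for i in range(0, len(items), max_items_per_batch)
    ]

batches = make_batches([{"key": f"{n}.txt"} for n in range(5)],
                       {"bucket": "my-bucket"}, 2)
print([len(b["Items"]) for b in batches])  # → [2, 2, 1]
```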

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Item batching&lt;/strong&gt;, select &lt;strong&gt;Enable batching&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the &lt;strong&gt;Max MBs per batch&lt;/strong&gt; to 50 KB (51,200 bytes).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With batch input, you can pass a global JSON input to each child execution, merged with the per-item inputs. In the last section, the bucket name was included in every single item, increasing the total input size. Instead of including the bucket name repeatedly with ItemSelector, you will now include it once in the batch input.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Enter the following &lt;strong&gt;Batch input&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "bucket.$": "$.bucket"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Item source&lt;/strong&gt;, expand &lt;strong&gt;Additional configuration&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Under &lt;strong&gt;Modify items with ItemSelector&lt;/strong&gt;, overwrite the JSON with the following, removing details of the bucket:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "key.$": "$$.Map.Item.Value.Key"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deselect &lt;strong&gt;Limit number of items&lt;/strong&gt; to process all of your files.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Express&lt;/em&gt; Workflows only run for up to five minutes and batch processing increases the number of files processed per child execution. If you configure a sufficiently large batch size, you may need to use a &lt;em&gt;Standard&lt;/em&gt; Workflow.&lt;/p&gt;
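&lt;p&gt;A back-of-envelope check makes the trade-off concrete. All figures below are assumptions for illustration, not measurements from the workshop:&lt;/p&gt;

```python
# Could one Express child finish a full 50 KB batch within the five-minute limit?
seconds_per_file = 1.0    # assumed average scan time per file
avg_item_bytes = 120      # assumed payload bytes per item in a batch
max_batch_bytes = 51200   # 50 KB, the batch size configured in this module
files_per_batch = max_batch_bytes // avg_item_bytes
batch_seconds = files_per_batch * seconds_per_file
print(files_per_batch, batch_seconds <= 300)  # → 426 False
```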

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Child execution type&lt;/strong&gt;, select &lt;strong&gt;Standard&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Code&lt;/strong&gt; tab to review the JSON definition.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your &lt;em&gt;FunctionName&lt;/em&gt; and &lt;em&gt;QueueUrl&lt;/em&gt; will be different.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "StartAt": "Map",
  "States": {
    "Map": {
      "Type": "Map",
      "ItemProcessor": {
        "ProcessorConfig": {
          "Mode": "DISTRIBUTED",
          "ExecutionType": "STANDARD"
        },
        "StartAt": "Scan",
        "States": {
          "Scan": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "OutputPath": "$.Payload",
            "Parameters": {
              "Payload.$": "$",
              "FunctionName": "arn:aws:lambda:us-east-1:172187416625:function:vulnerability-scanning-mo-VulnerabilityScanningFun-G786d6ZNTaPI:$LATEST"
            },
            "Retry": [
              {
                "ErrorEquals": [
                  "Lambda.ServiceException",
                  "Lambda.AWSLambdaException",
                  "Lambda.SdkClientException",
                  "Lambda.TooManyRequestsException"
                ],
                "IntervalSeconds": 2,
                "MaxAttempts": 6,
                "BackoffRate": 2
              }
            ],
            "Next": "Choice"
          },
          "Choice": {
            "Type": "Choice",
            "Choices": [
              {
                "Variable": "$.ssns",
                "IsPresent": true,
                "Next": "Queue"
              }
            ],
            "Default": "Pass"
          },
          "Queue": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sqs:sendMessage",
            "Parameters": {
              "MessageBody.$": "$",
              "QueueUrl": "https://sqs.us-east-1.amazonaws.com/172187416625/vulnerability-scanning-module-VulnerabilitiesQueue-UtFIvCLDH5Lh"
            },
            "End": true
          },
          "Pass": {
            "Type": "Pass",
            "End": true
          }
        }
      },
      "End": true,
      "Label": "Map",
      "MaxConcurrency": 1000,
      "ItemReader": {
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {
          "Bucket.$": "$.bucket",
          "Prefix.$": "$.prefix"
        },
        "ReaderConfig": {}
      },
      "ItemSelector": {
        "key.$": "$$.Map.Item.Value.Key"
      },
      "ItemBatcher": {
        "MaxInputBytesPerBatch": 51200,
        "BatchInput": {
          "bucket.$": "$.bucket"
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;If the changes to the JSON definition look accurate, click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/security-vulnerability-scanning/edit-workflow-2#modify-the-lambda-function" rel="noopener noreferrer"&gt;Modify the Lambda Function&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Since you have enabled batching and changed the input to each child execution, you will need to refactor your Lambda function code to read in the bucket name from BatchInput and process multiple files.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Design&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the Lambda state titled “&lt;strong&gt;Scan&lt;/strong&gt;”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;API Parameters&lt;/strong&gt;, click &lt;strong&gt;View function&lt;/strong&gt; to edit the &lt;strong&gt;Code&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzpc41kojbk16vagrpp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzpc41kojbk16vagrpp2.png" width="501" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Overwrite the code with the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
import re

def handler(event, context):

    bucket = event["BatchInput"]["bucket"]

    ssns = []
    for item in event["Items"]:
        key = item["key"]

        obj = boto3.client('s3').get_object(
            Bucket=bucket,
            Key=key
        )
        body = obj['Body'].read().decode()

        searches = re.findall(r"ssn=[^\s]+", body)
        if searches:
            ssns.extend([{"key": key, "serial": number[-4:]}
                for ssn, number in (search.split("=") for search in searches)
            ])

    if ssns:
        return {"ssns": ssns}
    else:
        return {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Deploy&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Executing the Batch Workflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Return to Workflow Studio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Execute&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter the following input, replacing [bucket name] with the bucket name you copied:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "bucket": "[bucket name]",
    "prefix": ""
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cx629uuwdnnqsufyxig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cx629uuwdnnqsufyxig.png" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Start execution&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqslsrxb2mw4xc5urslj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqslsrxb2mw4xc5urslj8.png" width="270" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Events&lt;/strong&gt;, click &lt;strong&gt;Map Run&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait a moment, if necessary, then click a given child execution.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwznlg90sxi55865t9139.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwznlg90sxi55865t9139.png" width="206" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Events&lt;/strong&gt;, expand the &lt;strong&gt;ID&lt;/strong&gt; 3 dropdown to view “Payload”, the new input to the Lambda function.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6omvmdoryioy1wf69vir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6omvmdoryioy1wf69vir.png" width="206" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;After a successful invocation, expand the &lt;strong&gt;ID&lt;/strong&gt; 6 dropdown to see the output of the Lambda function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Return to the parent execution view.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdb480jq4dzj1ozw1dpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdb480jq4dzj1ozw1dpp.png" width="270" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final graph view should look like this upon completion.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://us-east-1.console.aws.amazon.com/sqs/v2/home" rel="noopener noreferrer"&gt;SQS console &lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the refresh icon, if necessary.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Notice the number of &lt;em&gt;Messages available&lt;/em&gt; in the queue. Since you removed the limitation on the number of items processed, it is much higher than before.&lt;/p&gt;
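&lt;p&gt;If you prefer to script this check, the queue depth is available through the SQS &lt;code&gt;GetQueueAttributes&lt;/code&gt; API. A minimal boto3 sketch (the queue URL is a placeholder for the one in your account):&lt;/p&gt;

```python
def get_message_count(sqs_client, queue_url):
    """Return the approximate number of visible messages in an SQS queue."""
    attrs = sqs_client.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Usage (requires boto3 and AWS credentials):
#   import boto3
#   sqs = boto3.client("sqs", region_name="us-east-1")
#   print(get_message_count(sqs, "https://sqs.us-east-1.amazonaws.com/111122223333/your-queue"))
```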

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this module, you added a Distributed Map state to your workflow to scale your security vulnerability scanning application across multiple files concurrently. By refactoring your application and enabling batch processing, you further optimized the performance and cost of your workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oq862mrc3z09ya7smdk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oq862mrc3z09ya7smdk.png" width="584" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Beyond security vulnerability scanning, AWS Step Functions can be used for other data processing use cases, from typical cleaning and normalization to healthcare claims processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monte Carlo Simulation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Estimated Duration: 30 minutes&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation#introduction" rel="noopener noreferrer"&gt;Introduction&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A Monte Carlo simulation is a mathematical technique that allows us to predict different outcomes for various changes to a given system. In financial portfolio analysis, the technique can be used to predict likely outcomes for an aggregate portfolio across a range of potential conditions, such as the aggregate rate of return or default rate in various market conditions. The technique is also valuable when your business case requires predicting the likely outcome of individual portfolio assets, such as in detailed portfolio analysis or stress tests.&lt;/p&gt;

&lt;p&gt;For this fictitious use case we will be working with a portfolio of personal and commercial loans owned by our company. Each loan is represented by a subset of data housed in individual S3 objects. Our company has tasked us with trying to predict which loans will default in the event of a Federal Reserve rate increase.&lt;/p&gt;

&lt;p&gt;Loan defaults occur when the borrower fails to repay the loan. Predicting which loans in a portfolio would default in various scenarios helps companies understand their risk and plan for future events.&lt;/p&gt;
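&lt;p&gt;As a toy illustration of the idea (not the workshop’s actual model), a Monte Carlo estimate for a single loan can sample many hypothetical rate-hike scenarios and count how often the loan crosses an affordability threshold. Every rate and threshold below is invented for the sketch:&lt;/p&gt;

```python
import random

def estimate_default_probability(balance, income, hike_mean_bps, trials=10_000, seed=7):
    """Toy Monte Carlo: sample Fed rate hikes (in basis points) and flag a
    'default' when the loan's annual payment would exceed 40% of income.
    All rates and thresholds here are invented for illustration."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    defaults = 0
    for _ in range(trials):
        hike_bps = max(0.0, rng.gauss(hike_mean_bps, 50))      # one sampled scenario
        annual_payment = balance * (0.06 + hike_bps / 10_000)  # 6% assumed base rate
        if annual_payment > 0.40 * income:
            defaults += 1
    return defaults / trials
```

&lt;p&gt;Repeating this estimate for every loan in the portfolio, with scenario parameters drawn from the market conditions of interest, is exactly the kind of embarrassingly parallel work the rest of this module distributes across workers.&lt;/p&gt;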

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation#what-is-a-worker" rel="noopener noreferrer"&gt;What is a Worker?&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this solution we are distributing the data using a Step Functions &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/concepts-activities.html" rel="noopener noreferrer"&gt;Activity&lt;/a&gt;. Activities are an AWS Step Functions feature that lets you have a task in your state machine where the work is performed by a worker hosted on Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS), mobile devices — basically anywhere. Think of an activity as a Step Functions-managed internal queue: you use an activity state to send data to the queue, and one or more workers consume the data from it. For this solution we utilize Amazon ECS on AWS Fargate to run our Activity Workers.&lt;/p&gt;
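&lt;p&gt;The poll/acknowledge cycle a worker runs can be sketched as follows. This is a hedged outline only — the function and parameter names are illustrative, though &lt;code&gt;get_activity_task&lt;/code&gt;, &lt;code&gt;send_task_success&lt;/code&gt;, and &lt;code&gt;send_task_failure&lt;/code&gt; are the real boto3 Step Functions calls, and the workshop’s full worker script appears later in this module:&lt;/p&gt;

```python
import json

def poll_once(sfn_client, activity_arn, worker_name, handler):
    """Fetch one task from a Step Functions Activity and report the result.
    Returns True if a task was processed, False if the poll came back empty."""
    task = sfn_client.get_activity_task(activityArn=activity_arn, workerName=worker_name)
    if not task.get("taskToken"):  # long poll timed out with no work available
        return False
    try:
        result = handler(json.loads(task["input"]))
        sfn_client.send_task_success(taskToken=task["taskToken"],
                                     output=json.dumps(result))
    except Exception as exc:
        sfn_client.send_task_failure(taskToken=task["taskToken"],
                                     error=type(exc).__name__, cause=str(exc))
    return True
```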

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation#solution-overview" rel="noopener noreferrer"&gt;Solution Overview&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The solution uses &lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions&lt;/a&gt; to provide end-to-end orchestration for processing billions of records with your simulation or transformation logic, using the Step Functions &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/concepts-asl-use-map-state-distributed.html" rel="noopener noreferrer"&gt;Distributed Map&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/concepts-activities.html" rel="noopener noreferrer"&gt;Activity&lt;/a&gt; features. At the start of the workflow, Step Functions scales the number of workers to a configurable predefined count. It then reads in the dataset and distributes metadata about the dataset in &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/input-output-itembatcher.html" rel="noopener noreferrer"&gt;batches&lt;/a&gt; to the Activity. The workers poll the Activity looking for data to process. Upon receiving a batch, a worker processes the data and reports back to Step Functions that the batch has been completed. This cycle continues until all records from the dataset have been processed. Upon completion, Step Functions scales the workers back to zero.&lt;/p&gt;

&lt;p&gt;The workers in this example are containers running in &lt;a href="https://aws.amazon.com/ecs/" rel="noopener noreferrer"&gt;Amazon Elastic Container Service (ECS)&lt;/a&gt; with an &lt;a href="https://aws.amazon.com/fargate/" rel="noopener noreferrer"&gt;Amazon Fargate&lt;/a&gt; &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-capacity-providers.html" rel="noopener noreferrer"&gt;Capacity Provider&lt;/a&gt;, though they could run almost anywhere, so long as they have access to poll the Step Functions Activity and report SUCCESS/FAILURE back to Step Functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0wecakaftfvoz92cy05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0wecakaftfvoz92cy05.png" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation#module-goals" rel="noopener noreferrer"&gt;Module Goals&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Learn how Step Functions Distributed Map can use a Step Functions Activity to distribute work to workers almost anywhere&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn how Step Functions can manage workers through its built-in Amazon ECS integrations&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Updating the ECS Service — Part 1
&lt;/h2&gt;

&lt;p&gt;The solution uses Amazon ECS to run the workers that handle the actual data processing. In this example we have created an ECS Service that will run a variable number of ECS Tasks, controlled by our Step Functions workflow. The workers run asynchronously from the Distributed Map, which uses an Activity to distribute the dataset. In this step you will configure that ECS Service to use a Task Definition that was predefined in CloudFormation. Let’s get started.&lt;/p&gt;

&lt;p&gt;Important&lt;/p&gt;

&lt;p&gt;A task definition is a blueprint for your application. It is a text file in JSON format that describes the parameters and one or more containers that form your application. You can learn more &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;a href="https://console.aws.amazon.com/ecs/home" rel="noopener noreferrer"&gt;ECS Console Page&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the ECS Cluster named “sfn-fargate-dataproc-xxxxxxxx” (note the similar Cluster named sfn-fargate-datagen-xxxxxxxx, please choose the one ending in dataproc)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose ECS Cluster&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flikt8fznos5zxk9535dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flikt8fznos5zxk9535dw.png" width="800" height="89"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Choose the ECS Service named “sfn-fargate-dataproc-xxxxxxxx”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose ECS Service&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc79x2v78z0au8640ipbm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc79x2v78z0au8640ipbm.png" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the Update Service button in the top right of the ECS Service page&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update Service&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ztdh6vqcj01esqcccg6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ztdh6vqcj01esqcccg6.png" width="800" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the “Family” field, click the dropdown menu and change the current Task Definition from sfn-fargate-dataproc-placeholder-xxxxxxxx to the Task Definition named sfn-fargate-dataproc-&lt;em&gt;small&lt;/em&gt;-xxxxxxxx&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Task Definition&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7sfjvq3j5vchp4l2wb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7sfjvq3j5vchp4l2wb8.png" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Leave all other fields default. Scroll to the bottom of the page and click Update.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s it! You have successfully updated your ECS Service to use a new Task Definition. Now let’s run our Step Functions state machine.&lt;/p&gt;
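&lt;p&gt;The console steps above boil down to a single &lt;code&gt;UpdateService&lt;/code&gt; API call; a minimal boto3 sketch (the cluster, service, and family names are the placeholder patterns from this walkthrough):&lt;/p&gt;

```python
def update_service_task_definition(ecs_client, cluster, service, family):
    """Point an ECS service at a different task definition family.
    Passing just the family name selects its latest ACTIVE revision."""
    return ecs_client.update_service(
        cluster=cluster,
        service=service,
        taskDefinition=family,
    )

# Usage (requires boto3 and AWS credentials):
#   import boto3
#   ecs = boto3.client("ecs")
#   update_service_task_definition(ecs, "sfn-fargate-dataproc-xxxxxxxx",
#                                  "sfn-fargate-dataproc-xxxxxxxx",
#                                  "sfn-fargate-dataproc-small-xxxxxxxx")
```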

&lt;h2&gt;
  
  
  Executing the Workflow
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Let’s go ahead and run the state machine, and then we will walk through each step as well as some optimizations for you to consider.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your AWS Console navigate to the AWS Step Functions Console by using the search bar in the upper left corner of your screen, typing “step functions” and clicking the Step Functions icon.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Console Navigation&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci7d915ulo6edikbg6bv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci7d915ulo6edikbg6bv.png" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the “sfn-fargate-dataproc-xxxxxxxxxxxx” Step Function by clicking on the Link and then click the “Start Execution” button in the upper right-hand corner of the details screen.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step Function Selection&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0py17uf4v0i0fscx0tw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0py17uf4v0i0fscx0tw.png" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the “&lt;strong&gt;Start execution&lt;/strong&gt;” button to start the workflow.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step Function Execution&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2umvcddh858f1cw7l3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2umvcddh858f1cw7l3q.png" width="800" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3a. You will be prompted for JSON input; leave the default and click “&lt;strong&gt;Start Execution&lt;/strong&gt;”.&lt;/p&gt;

&lt;p&gt;Step Function Execution — Take Defaults&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15s0t9b5chc8kogg23te.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15s0t9b5chc8kogg23te.png" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We can then monitor the progress of the state machine from the execution status screen. Processing the records in the simulated dataset takes just a few minutes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhajuqesr12bcgfkegzti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhajuqesr12bcgfkegzti.png" width="293" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, that will take about five minutes to complete. While it’s running, let’s dive a little deeper into each step in the workflow.&lt;/p&gt;
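&lt;p&gt;The same monitoring can be scripted with the &lt;code&gt;DescribeExecution&lt;/code&gt; API; a small sketch (the execution ARN comes from your own run):&lt;/p&gt;

```python
import time

def wait_for_execution(sfn_client, execution_arn, poll_seconds=10):
    """Poll a Step Functions execution until it leaves the RUNNING state,
    then return its final status (SUCCEEDED, FAILED, TIMED_OUT, or ABORTED)."""
    while True:
        status = sfn_client.describe_execution(executionArn=execution_arn)["status"]
        if status != "RUNNING":
            return status
        time.sleep(poll_seconds)

# Usage (requires boto3 and AWS credentials):
#   import boto3
#   sfn = boto3.client("stepfunctions")
#   print(wait_for_execution(sfn, "arn:aws:states:...:execution:...:..."))
```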

&lt;h2&gt;
  
  
  Reviewing the Workflow
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation/reviewing-the-workflow#distributed-map" rel="noopener noreferrer"&gt;Distributed Map&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The Processing DMap step is an AWS Step Functions Distributed Map step that reads the S3 Inventory manifest provided by the Parent Map and processes the referenced &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-inventory.html" rel="noopener noreferrer"&gt;S3 Inventory&lt;/a&gt; files. For this use case, each line of the S3 Inventory file contains metadata referencing an object in S3 containing a single customer loan. Per our configuration, the Distributed Map feature creates batches of 400 loan files for concurrent distribution to the Step Functions Activity step. Each Distributed Map step supports up to ten thousand concurrent workers; for this example, the runtime concurrency is set to 1000. As Step Functions adds messages to the Activity, the workers poll to pull batches for processing. Once complete, a worker reports back to Step Functions to acknowledge the batch has been completed. Step Functions then removes that message from the Activity and adds a new batch, repeating this process until all batches have been completed.&lt;/p&gt;

&lt;p&gt;Distributed Map Configuration Example&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq04nvorpub5erwe5riuk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq04nvorpub5erwe5riuk.png" width="455" height="818"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfk3xfvxfpkomcm22msg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfk3xfvxfpkomcm22msg.png" width="471" height="778"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As Step Functions Distributed Map can scale to massive concurrency very quickly, it is advisable to configure back-off and retries within our process to allow downstream systems to scale to meet the processing needs. We utilize the Step Functions Distributed Map retry feature to implement graceful back-off without any code. In this example we have configured retry logic for the S3 bucket. Each new S3 bucket allocates a single throughput partition of 5,500 reads and 3,500 writes per second, which auto-scales based on usage patterns. To allow S3 to auto-scale to meet our workload’s write demands, we configure the base retry delay, maximum retries, and back-off rate within the state machine.&lt;/p&gt;

&lt;p&gt;Step Functions Retry Configuration&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotj8xg14kxp9b989kydq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotj8xg14kxp9b989kydq.png" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation/reviewing-the-workflow#amazon-ecs-workers" rel="noopener noreferrer"&gt;Amazon ECS Workers&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The Amazon Elastic Container Service (ECS) Cluster is using a Fargate Spot &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-capacity-providers.html" rel="noopener noreferrer"&gt;Capacity Provider &lt;/a&gt;to reduce costs and eliminate maintaining EC2 instances. Fargate provides us with AWS managed compute for scheduling our containers.&lt;/p&gt;

&lt;p&gt;ECS Cluster / Capacity Provider&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2fr2ey6pixqwbvi90no.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2fr2ey6pixqwbvi90no.png" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Task Definition in the solution is where we define the resources our containers require, such as network configuration, number of vCPUs, and amount of RAM. In this solution, to avoid building a container, we’re simply using an unmodified Amazon Linux 2023 container but specifying a bootstrap sequence. Bootstrapping lets us give the container a series of commands to execute on start. When the container comes up, it downloads a pre-generated Python script from our S3 bucket and executes it. This Python script has the sample logic we want to run against our dataset.&lt;/p&gt;

&lt;p&gt;Task Definition&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "taskDefinitionArn": "arn:aws:ecs:us-east-2:123456789101:task-definition/sfn-fargate-dataproc-06f7c24a2819:1",
    "containerDefinitions": [
        {
            "name": "sfn-fargate-dataproc-06f7c24a2819",
            "image": "public.ecr.aws/amazonlinux/amazonlinux:2023",
            "cpu": 256,
            "memory": 512,
            "links": [],
            "portMappings": [],
            "essential": true,
            "entryPoint": [],
            "command": [
                "/bin/sh",
                "-c",
                "yum -y update &amp;amp;&amp;amp; yum -y install awscli python3-pip &amp;amp;&amp;amp; python3 -m pip install boto3 &amp;amp;&amp;amp; aws s3 cp s3://${SOURCEBUCKET}/script/fargate.py . &amp;amp;&amp;amp; python3 fargate.py"
            ],
            "environment": [
                {
                    "name": "ACTIVITY_ARN",
                    "value": "arn:aws:states:us-east-2:123456789101:activity:sfn-fargate-activity-06f7c24a2819"
                },
                {
                    "name": "DESTINATIONBUCKET",
                    "value": "sfn-datagen-destination-060aa4adf045"
                },
                {
                    "name": "RECORDCOUNT",
                    "value": "105000"
                },
                {
                    "name": "SOURCEBUCKET",
                    "value": "sfn-datagen-source-060aa4adf045"
                },
                {
                    "name": "REGION",
                    "value": "us-east-2"
                }
            ],
            "environmentFiles": [],
            "mountPoints": [],
            "volumesFrom": [],
            "secrets": [],
            "dnsServers": [],
            "dnsSearchDomains": [],
            "extraHosts": [],
            "dockerSecurityOptions": [],
            "dockerLabels": {},
            "ulimits": [],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "sfn-fargate-dataproc-06f7c24a2819",
                    "awslogs-region": "us-east-2",
                    "awslogs-stream-prefix": "sfn-fargate"
                },
                "secretOptions": []
            },
            "systemControls": []
        }
    ],
    "family": "sfn-fargate-dataproc-06f7c24a2819",
    "taskRoleArn": "arn:aws:iam::123456789101:role/sfn-fargate-ecs-task-role-06f7c24a2819",
    "executionRoleArn": "arn:aws:iam::123456789101:role/sfn-fargate-ecs-exec-role-06f7c24a2819",
    "networkMode": "awsvpc",
    "revision": 1,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
        {
            "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
        },
        {
            "name": "ecs.capability.execution-role-awslogs"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
        },
        {
            "name": "com.amazonaws.ecs.capability.task-iam-role"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
        },
        {
            "name": "ecs.capability.task-eni"
        }
    ],
    "placementConstraints": [],
    "compatibilities": [
        "EC2",
        "FARGATE"
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "256",
    "memory": "512",
    "registeredAt": "2023-09-02T15:36:54.377Z",
    "registeredBy": "arn:aws:sts::123456789101:assumed-role/haughtmx-vscode-remote-ssh-role/i-00c5182010adb45a3",
    "tags": [
        {
            "key": "Name",
            "value": "sfn-fargate-dataproc-06f7c24a2819"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The tasks are managed with an ECS Service. A Service allows for integration with other services, such as Elastic Load Balancers, but in our case we’re using it to manage how many tasks are running at any given time. In the “Scale Out Workers” step of the workflow, we use Step Functions’ built-in integration with ECS to scale the Service out to 50 tasks. This way, once Distributed Map begins filling the Activity with batches of work, the containers are already running and immediately begin picking up batches for processing.&lt;/p&gt;

&lt;p&gt;ECS Service&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3izbwvenhklxayh10bfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3izbwvenhklxayh10bfs.png" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation/reviewing-the-workflow#data-processing" rel="noopener noreferrer"&gt;Data Processing&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The Run Activity step consists of a single Step Functions Activity which accepts a JSON payload from Distributed Map containing a batch of loan objects stored in S3. The workers pick up these batches and process them by reading the contents of each S3 object. After a worker reports that a batch was successfully processed, Step Functions removes the batch from the Activity and adds a new one in its place, maintaining our concurrency limit until all batches are processed. In this example the output files contain batched loans to facilitate more efficient reads for analytics and ML workloads.&lt;/p&gt;

&lt;p&gt;Step Functions’ error-handling features remove the need to implement catch-and-retry logic within the Python code. If a batch fails processing, or a worker fails to report the batch as complete, Step Functions will wait the allotted time and re-add the batch for another worker to pick up and process.&lt;/p&gt;

&lt;p&gt;Example Processing Python Code&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3&lt;br&gt;
import json&lt;br&gt;
import csv&lt;br&gt;
from io import StringIO&lt;br&gt;
import os&lt;br&gt;
import time&lt;br&gt;
from random import randint&lt;br&gt;
from botocore.client import Config
&lt;h1&gt;
  
  
  set a few variables we'll use to get our data
&lt;/h1&gt;

&lt;p&gt;activity_arn = os.getenv('ACTIVITY_ARN')&lt;br&gt;
worker_name = os.getenv('HOSTNAME')&lt;br&gt;
region = os.getenv('REGION')&lt;/p&gt;

&lt;p&gt;print('starting job...')&lt;/p&gt;
&lt;h1&gt;
  
  
  setup our client
&lt;/h1&gt;

&lt;p&gt;config = Config(&lt;br&gt;
  connect_timeout=65,&lt;br&gt;
  read_timeout=65,&lt;br&gt;
  retries={'max_attempts': 0}&lt;br&gt;
)&lt;br&gt;
client = boto3.client('stepfunctions', region_name=region, config=config)&lt;br&gt;
s3_client = boto3.client('s3', region_name=region)&lt;br&gt;
s3 = boto3.resource('s3')&lt;/p&gt;
&lt;h1&gt;
  
  
  now we start polling until we have nothing left to do. i realize this should
&lt;/h1&gt;
&lt;h1&gt;
  
  
  be more functions and it's pretty gross but it works for a demo :)
&lt;/h1&gt;

&lt;p&gt;while True:&lt;br&gt;
  response = client.get_activity_task(&lt;br&gt;
    activityArn = activity_arn,&lt;br&gt;
    workerName = worker_name&lt;br&gt;
  )&lt;/p&gt;

&lt;p&gt;if 'input' not in response.keys() or 'taskToken' not in response.keys():&lt;br&gt;
    print('no tasks to process...waiting 30 seconds to try again')&lt;br&gt;
    time.sleep(30)&lt;br&gt;
    continue&lt;br&gt;
    # break&lt;/p&gt;

&lt;p&gt;token = response['taskToken']&lt;br&gt;
  data = json.loads(response['input'])&lt;br&gt;
  items = data['Items']&lt;br&gt;
  other = data['BatchInput']&lt;br&gt;
  rndbkt = other['dstbkt'] &lt;br&gt;
  success = True&lt;br&gt;
  cause = ""&lt;br&gt;
  error = ""&lt;br&gt;
  results = ["NO", "NO", "NO", "NO", "YES", "NO", "NO", "NO", "NO", "NO", "NO", "NO", "NO", "NO", "NO", "NO", "NO", "NO", "NO", "NO"]&lt;br&gt;
  for item in items:&lt;br&gt;
    try:&lt;br&gt;
      source = s3_client.get_object(Bucket=other['srcbkt'], Key=item['Key'])&lt;br&gt;
      content = source.get('Body').read().decode('utf-8')&lt;br&gt;
      buf = StringIO(content)&lt;br&gt;
      reader = csv.DictReader(buf)&lt;br&gt;
      objects = list(reader)&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # just randomly assign a value with a theoretical ballpark of 5% of the values being 'YES'
  objects[0]['WillDefault'] = results[randint(0,19)]

  stream = StringIO()
  headers = list(objects[0].keys())
  writer = csv.DictWriter(stream, fieldnames=headers)
  writer.writeheader()
  writer.writerows(objects)
  body = stream.getvalue()

  dst = s3.Object(rndbkt, other['dstkey'] + '/' + item['Key'].split('/')[1])
  dst.put(Body=body)

except Exception as e:
  cause = "failed to process object " + item['Key']
  error = str(e)
  success = False
  break
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;if success:&lt;br&gt;
    client.send_task_success(&lt;br&gt;
      taskToken = token,&lt;br&gt;
      output = "{\"message\": \"success\"}"&lt;br&gt;
    )&lt;br&gt;
  else:&lt;br&gt;
    client.send_task_failure(&lt;br&gt;
      taskToken = token,&lt;br&gt;
      cause = cause,&lt;br&gt;
      error = error&lt;br&gt;
    )&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
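The per-object transform in the worker above (read a CSV object, stamp a random WillDefault value with a theoretical 5% chance of "YES", re-serialize) can be sketched locally, with StringIO standing in for the S3 objects; the column names here are illustrative.

```python
import csv
from io import StringIO
from random import randint

# One "YES" out of 20 slots gives a theoretical 5% hit rate,
# mirroring the results list in the worker above.
RESULTS = ["YES"] + ["NO"] * 19

def transform(csv_text):
    """Parse a CSV document, stamp the first row with a random
    WillDefault value, and return the re-serialized CSV."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    rows[0]["WillDefault"] = RESULTS[randint(0, 19)]

    out = StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Illustrative input; the real worker reads each object from the source bucket.
print(transform("LoanId,Amount\n1001,2500\n1002,4800\n"))
```

In the real worker the return value is written back to the destination bucket with `dst.put(Body=body)`.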
&lt;h2&gt;
  
  
  Updating the ECS Service — Part 2
&lt;/h2&gt;

&lt;p&gt;The update you made in the “Updating the ECS Service — Part 1” step used a task definition with a relatively low resource setting: 0.25 vCPU and 0.5 GB of memory. In this step you will choose a Task Definition that uses the same container as the previous step, but with double the resources. Then you will re-run the State Machine and compare the executions. Let’s get started.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;a href="https://console.aws.amazon.com/ecs/home" rel="noopener noreferrer"&gt;ECS Console Page&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the ECS Cluster named “sfn-fargate-dataproc-xxxxxxxx” (note the similar Cluster named sfn-fargate-datagen-xxxxxxxx, please choose the one ending in dataproc)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose ECS Cluster&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrb1nphla3ll2dw3fzia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrb1nphla3ll2dw3fzia.png" width="800" height="89"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Choose the ECS Service named “sfn-fargate-dataproc-xxxxxxxx”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose ECS Service&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9g01saagq6kggakplg47.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9g01saagq6kggakplg47.png" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the Update Service button in the top right of the ECS Service page&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update Service&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l5522i1o1ii70kdq15q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l5522i1o1ii70kdq15q.png" width="800" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the field “Family”, click the Dropdown menu and change the current Task Definition, sfn-fargate-dataproc-small-xxxxxxxx, and choose the Task Definition named sfn-fargate-dataproc-&lt;em&gt;large&lt;/em&gt;-xxxxxxxx&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Task Definition&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35wk98ysdy1f4shcz0hc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35wk98ysdy1f4shcz0hc.png" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Leave all other fields default. Scroll to the bottom of the page and click Update.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now that you have updated the Service, lets run the State Machine again. If you need instructions please refer back to &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/2a22e604-2f2e-4d7b-85a8-33b38c999234/en-US/use-cases/monte-carlo-simulation/executing-the-workflow/" rel="noopener noreferrer"&gt;Execute The State Machine&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
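The console steps above can also be scripted with the ECS `update_service` API. A minimal sketch of the request parameters, assuming the xxxxxxxx-suffixed names from your own account (the actual call is commented out because it needs AWS credentials):

```python
# Build the parameters for ecs.update_service, switching the service
# over to the larger task definition. The names below are placeholders
# for the xxxxxxxx-suffixed resources in your account.
def update_service_params(cluster, service, task_definition):
    return {
        "cluster": cluster,
        "service": service,
        "taskDefinition": task_definition,   # family name resolves to latest revision
        "forceNewDeployment": True,          # roll tasks onto the new definition
    }

params = update_service_params(
    "sfn-fargate-dataproc-xxxxxxxx",
    "sfn-fargate-dataproc-xxxxxxxx",
    "sfn-fargate-dataproc-large-xxxxxxxx",
)
# With an ECS client this would apply the update:
# import boto3
# boto3.client("ecs").update_service(**params)
print(params["taskDefinition"])
```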

&lt;p&gt;Give it approximately 4 minutes to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reviewing the Results
&lt;/h2&gt;

&lt;p&gt;In the first execution you used a Task Definition that allocated 0.25 vCPU and 0.5 GB of RAM to each container. In the second execution you used a Task Definition that allocated 0.5 vCPU and 1 GB of RAM to each container. Let’s check out the results and see how they differ.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the &lt;a href="https://console.aws.amazon.com/states/home" rel="noopener noreferrer"&gt;Step Functions Console&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the State Machine named “sfn-fargate-dataproc-xxxxxxxx”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step Function Selection&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00h4ej1o4hhmlztgh7tn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00h4ej1o4hhmlztgh7tn.png" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On the State Machine Details page you will see 2 executions, open each in a separate tab. (right-click each link and choose “Open Link in New Tab”)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open Executions&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxryi89kuoc7fzmvnsmi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxryi89kuoc7fzmvnsmi.png" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On each tab view the details of the execution and find the Duration field to see how long each execution required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Find Duration&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwta16d17dlvg6h1ht0ul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwta16d17dlvg6h1ht0ul.png" width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do they differ? For this simulated dataset, which would be considered small for a Monte Carlo simulation, the difference is typically less than a minute, but overall the increased vCPU/RAM leads to roughly 25% time savings. However, to get that ~25% savings you had to double the resources for each container, effectively doubling your Fargate cost per hour. If time is your most critical factor, this may be worth it; if cost is your most critical factor, perhaps not. The variation in both cost and time is minimal with a dataset this small, but if you extrapolate to a dataset of millions or even billions of objects, both variations become considerable. You will want to experiment with your actual workloads to find the settings that work best.&lt;/p&gt;
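The tradeoff can be made concrete with a little arithmetic. The per-vCPU-hour and per-GB-hour rates below are illustrative, not current Fargate pricing; the point is the ratio, not the absolute dollars.

```python
# Illustrative (not current) Fargate rates.
VCPU_RATE = 0.04048   # USD per vCPU-hour
MEM_RATE = 0.004445   # USD per GB-hour

def run_cost(vcpu, mem_gb, minutes, tasks=1):
    """Cost of running `tasks` Fargate tasks at the given size for `minutes`."""
    hours = minutes / 60
    return tasks * hours * (vcpu * VCPU_RATE + mem_gb * MEM_RATE)

# Small task definition: 0.25 vCPU / 0.5 GB, roughly 4 minutes.
small = run_cost(0.25, 0.5, 4)
# Large task definition: double the resources, roughly 25% faster (3 minutes).
large = run_cost(0.5, 1.0, 3)

# Doubling the hourly rate while saving 25% of the time nets a 1.5x cost.
print(f"small: ${small:.6f}  large: ${large:.6f}  ratio: {large / small:.2f}")
```

At a few objects the dollar difference is noise; multiplied across millions of tasks, that 1.5x ratio is the number you are trading against wall-clock time.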

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Login to the AWS account where you deployed the module.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://s3.console.aws.amazon.com/" rel="noopener noreferrer"&gt;S3 console &lt;/a&gt;then empty both buckets containing “hellodmap” in the name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/cloudformation" rel="noopener noreferrer"&gt;CloudFormation console &lt;/a&gt;, select the stack with a name containing “sfw-hello-distributed-map” (or with the name you entered earlier), then click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure the stack deletion completes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Follow the instructions on this page only if you are executing this workshop in your own account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://s3.console.aws.amazon.com/" rel="noopener noreferrer"&gt;S3 &lt;/a&gt;. Search for &lt;strong&gt;sfw-optimization&lt;/strong&gt;. Empty both buckets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://console.aws.amazon.com/cloudformation" rel="noopener noreferrer"&gt;CloudFormation &lt;/a&gt;console. Search for &lt;strong&gt;sfw-optimization-distributed-map&lt;/strong&gt;. Delete the stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://s3.console.aws.amazon.com/" rel="noopener noreferrer"&gt;S3 &lt;/a&gt;. Search for &lt;strong&gt;dmapworkshophealthcare&lt;/strong&gt;. Empty the bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://console.aws.amazon.com/cloudformation" rel="noopener noreferrer"&gt;CloudFormation &lt;/a&gt;console. Search for &lt;strong&gt;sfw-healthcare-processing&lt;/strong&gt;. Delete the stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/cloudformation/home" rel="noopener noreferrer"&gt;CloudFormation console &lt;/a&gt;then select the stack with a name containing “&lt;strong&gt;vulnerability-scanning-module&lt;/strong&gt;” (or with the name you entered earlier).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure the stack deletion completes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://s3.console.aws.amazon.com/" rel="noopener noreferrer"&gt;S3 &lt;/a&gt;. Search for &lt;strong&gt;sfn-datagen&lt;/strong&gt;. Empty both buckets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;a href="https://console.aws.amazon.com/cloudformation" rel="noopener noreferrer"&gt;CloudFormation &lt;/a&gt;console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search for &lt;strong&gt;sfn-fargate&lt;/strong&gt;. Delete the stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search for &lt;strong&gt;sfn-datagen&lt;/strong&gt;. Delete the stack.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Challenges Faced and Solutions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Challenge 1: Complex Workflow Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Leveraged AWS Step Functions’ visual editor to design and troubleshoot each state transition, ensuring a seamless workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenge 2: Detailed Monitoring Across Multiple Services&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Integrated AWS CloudWatch and X-Ray to gain a comprehensive view of workflow execution, enabling more effective troubleshooting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenge 3: Ensuring Data Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Implemented strict IAM policies to ensure secure access control, preventing unauthorized access to sensitive data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project showcases how AWS Step Functions can effectively orchestrate complex data workflows across multiple services, enabling seamless scalability and enhanced reliability. By combining automated error handling, real-time monitoring, and optimized processing techniques, this architecture demonstrates a highly adaptable solution for data-driven organizations. The end result is a streamlined, resilient system capable of handling large datasets efficiently, supporting businesses in making data-informed decisions with minimal operational overhead.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learning</category>
      <category>data</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Building Web Applications Using Amazon EKS : AWS Project</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Wed, 13 Nov 2024 08:07:42 +0000</pubDate>
      <link>https://forem.com/shubham_murti/building-web-applications-using-amazon-eks-aws-project-mnb</link>
      <guid>https://forem.com/shubham_murti/building-web-applications-using-amazon-eks-aws-project-mnb</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;This project focuses on building a highly scalable, resilient web application using Amazon EKS (Elastic Kubernetes Service) for container orchestration. By leveraging key AWS services like Amazon ECR (Elastic Container Registry), Cloud9, and AWS Fargate, along with CI/CD automation and monitoring tools like CloudWatch Container Insights, the setup ensures efficient deployment, auto-scaling, and robust application management. Ideal for DevOps teams, this architecture integrates Kubernetes best practices, providing automated deployments, scaling, and resource management to meet dynamic application demands with high reliability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;p&gt;Here’s a rundown of the AWS services, tools, and technologies used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Cloud9&lt;/strong&gt;: For an integrated development environment (IDE) on the cloud&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Elastic Container Registry (ECR)&lt;/strong&gt;: To store Docker images for easy deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Elastic Kubernetes Service (EKS)&lt;/strong&gt;: The primary Kubernetes cluster manager&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container Insights&lt;/strong&gt;: For real-time monitoring and insights into the application&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Fargate&lt;/strong&gt;: For serverless deployment and resource optimization&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CI/CD Pipeline&lt;/strong&gt;: To automate code deployment, enhancing release efficiency&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before diving in, ensure you meet these prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Basic Knowledge&lt;/strong&gt;: Familiarity with AWS core services and IAM (Identity and Access Management) permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker &amp;amp; Kubernetes Basics&lt;/strong&gt;: Knowledge of containerization and Kubernetes concepts will be helpful.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CLI &amp;amp; eksctl Installed&lt;/strong&gt;: Both AWS CLI and eksctl should be configured on your local environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IAM Setup&lt;/strong&gt;: Permissions to access AWS resources, especially EKS, ECR, Cloud9, and IAM roles.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Problem Statement or Use Case
&lt;/h3&gt;

&lt;p&gt;Person A, a member of the DevOps team of a famous Korean company, will be in charge of a project to develop a new web application. The application must satisfy the following requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Quickly reflect changes when new requests occur&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easy scaling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Operate and develop the application with fewer people&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After confirming the above requirements, Person A decided to build a &lt;strong&gt;Modern Application&lt;/strong&gt;, and after discussing with team members, A wants to build the web application using &lt;strong&gt;MSA, containers, and CI/CD&lt;/strong&gt;. They chose Kubernetes as the container orchestration tool based on majority opinion.&lt;/p&gt;

&lt;p&gt;The team is small, and there is not enough time to build everything directly from open source. Expanding infrastructure is also a pain point, and some team members don’t know Kubernetes well.&lt;/p&gt;

&lt;p&gt;At this point, Person A discovered the managed Kubernetes service, &lt;a href="https://aws.amazon.com/ko/kubernetes/" rel="noopener noreferrer"&gt;Amazon Elastic Kubernetes Service&lt;/a&gt;. To understand the advantages and characteristics of &lt;strong&gt;Amazon EKS&lt;/strong&gt;, A decided to do a simple PoC (Proof of Concept)!&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;Below is the architecture diagram for the web application built on Amazon EKS. This high-level view showcases how different AWS services interact to deliver a cohesive deployment environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv026bjrysdkeg653qml4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv026bjrysdkeg653qml4.png" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Component Breakdown
&lt;/h3&gt;

&lt;p&gt;Each component plays a vital role in the solution architecture:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon ECR&lt;/strong&gt;: Stores Docker container images for EKS to pull from and deploy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EKS&lt;/strong&gt;: Manages Kubernetes clusters that house the application, making it scalable and robust.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container Insights&lt;/strong&gt;: Monitors the health and performance of the application and clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Fargate&lt;/strong&gt;: Enables serverless container deployment, reducing infrastructure management needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CI/CD Pipeline&lt;/strong&gt;: Automates code deployment to EKS, enhancing deployment frequency and reliability.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step-by-Step Implementation
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Setting workspace
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting#build-a-workspace-with-aws-cloud9" rel="noopener noreferrer"&gt;Build a workspace with AWS Cloud9&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This workshop is conducted through AWS Cloud9, a cloud-based integrated development environment (IDE).&lt;/p&gt;

&lt;p&gt;AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. Cloud9 comes prepackaged with essential tools for popular programming languages, including JavaScript, Python, PHP, and more, so you don’t need to install files or configure your development machine to start new projects.&lt;/p&gt;

&lt;p&gt;Click &lt;a href="https://aws.amazon.com/cloud9/?nc1=h_ls" rel="noopener noreferrer"&gt;here &lt;/a&gt;to learn more about the features and characteristics of &lt;strong&gt;AWS Cloud9&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2av26iphob1oithxi67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2av26iphob1oithxi67.png" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Cloud9
&lt;/h2&gt;

&lt;p&gt;The order in which you build a workspace with AWS Cloud9 is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;IDE configuration with AWS Cloud9&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create IAM Role&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grant IAM Role to an AWS Cloud9 instance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update IAM settings in IDE&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are attending an AWS Event, &lt;strong&gt;accounts are already prepared with the configuration below&lt;/strong&gt;, and you can skip this section.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/100-aws-cloud9#ide-configuration-with-aws-cloud9" rel="noopener noreferrer"&gt;IDE configuration with AWS Cloud9&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Access &lt;a href="https://console.aws.amazon.com/cloud9" rel="noopener noreferrer"&gt;AWS Cloud9 console &lt;/a&gt;and click the &lt;strong&gt;Create environment&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write down the IDE name and click Next step. In this lab, type eks-workspace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the other instance type radio button, select &lt;strong&gt;t3.medium&lt;/strong&gt;. In case of platform, select &lt;strong&gt;Amazon Linux 2 (recommended)&lt;/strong&gt; and click Next step button. Check the property value you set, then click &lt;strong&gt;Create environment&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the creation is completed, the screen below appears.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio50kavxg0m6tngnpqmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fio50kavxg0m6tngnpqmm.png" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Cloud9 requires third-party-cookies. If the screen above does not appear, refer to &lt;a href="https://docs.aws.amazon.com/cloud9/latest/user-guide/troubleshooting.html#troubleshooting-env-loading" rel="noopener noreferrer"&gt;here &lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/100-aws-cloud9#create-iam-role" rel="noopener noreferrer"&gt;Create IAM Role&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;An IAM role is an IAM identity that you can create in your account with specific permissions. An IAM role can be assumed by IAM users and by AWS services. If you grant an IAM Role to an AWS service, the service performs the delegated actions on your behalf.&lt;/p&gt;

&lt;p&gt;In this lab, we create an IAM Role with &lt;strong&gt;Administrator access&lt;/strong&gt; policy and attach it to AWS Cloud9.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click &lt;a href="https://console.aws.amazon.com/iam/home#/roles$new?step=type&amp;amp;commonUseCase=EC2%2BEC2&amp;amp;selectedUseCase=EC2&amp;amp;policies=arn:aws:iam::aws:policy%2FAdministratorAccess" rel="noopener noreferrer"&gt;here &lt;/a&gt;to enter into IAM Role console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check that &lt;strong&gt;AWS service&lt;/strong&gt; and &lt;strong&gt;EC2&lt;/strong&gt; are selected and click &lt;strong&gt;Next: Permissions&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check that the &lt;strong&gt;AdministratorAccess&lt;/strong&gt; policy is selected and click &lt;strong&gt;Next: Tags&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Add Tag (optional) step, click &lt;strong&gt;Next: Review&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In &lt;strong&gt;Role name&lt;/strong&gt;, type eksworkspace-admin, confirm that the AdministratorAccess managed policy has been added, and click &lt;strong&gt;Create role&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvcez7vpezuucrk1t77p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvcez7vpezuucrk1t77p.png" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this workshop, the AdministratorAccess policy is used to facilitate the workshop, but it is appropriate to grant minimum privileges when running a production environment.&lt;/p&gt;
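For reference, the role the console wizard creates amounts to an EC2 trust policy plus the AdministratorAccess managed policy. A hedged boto3 sketch of the equivalent API calls (commented out, since they need AWS credentials):

```python
import json

# EC2 trust policy: this is what lets an EC2-backed Cloud9 instance
# assume the eksworkspace-admin role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# With an IAM client this would create the role and attach the policy:
# import boto3
# iam = boto3.client("iam")
# iam.create_role(RoleName="eksworkspace-admin",
#                 AssumeRolePolicyDocument=json.dumps(trust_policy))
# iam.attach_role_policy(RoleName="eksworkspace-admin",
#                        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess")
print(json.dumps(trust_policy, indent=2))
```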

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/100-aws-cloud9#grant-iam-role-to-an-aws-cloud9-instance" rel="noopener noreferrer"&gt;Grant IAM Role to an AWS Cloud9 instance&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The AWS Cloud9 environment is powered by an EC2 instance. Therefore, grant the IAM Role that you just created to the AWS Cloud9 instance in EC2 console.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click &lt;a href="https://console.aws.amazon.com/ec2/v2/home?#Instances:sort=desc:launchTime" rel="noopener noreferrer"&gt;here &lt;/a&gt;to enter the EC2 instance console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select AWS Cloud9 instance, then click &lt;strong&gt;Actions &amp;gt; Security &amp;gt; Modify IAM Role&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpq1fac74wik3rb6g5oc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpq1fac74wik3rb6g5oc.png" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select eksworkspace-admin in IAM Role section, then click the &lt;strong&gt;Save&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11uvdfu3onz0wmum46lc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11uvdfu3onz0wmum46lc.png" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/100-aws-cloud9#update-iam-settings-in-ide" rel="noopener noreferrer"&gt;Update IAM settings in IDE&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;AWS Cloud9 dynamically manages IAM credentials. Disable these credentials because they are not compatible with &lt;strong&gt;EKS IAM authentication&lt;/strong&gt;, then &lt;strong&gt;attach the IAM Role&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Reconnect to the IDE you created in AWS Cloud9 console, click the gear icon in the upper right corner, and click &lt;strong&gt;AWS SETTINGS&lt;/strong&gt; in the sidebar.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disable the &lt;strong&gt;AWS managed temporary credentials&lt;/strong&gt; setting in the &lt;strong&gt;Credentials&lt;/strong&gt; topic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Close the Preferences tab.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uites9vvhbaipp6xqgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uites9vvhbaipp6xqgo.png" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Remove existing credential files to ensure &lt;strong&gt;Temporary credentials&lt;/strong&gt; are not present.&lt;/p&gt;

&lt;p&gt;rm -vf ${HOME}/.aws/credentials&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the &lt;strong&gt;GetCallerIdentity&lt;/strong&gt; CLI command to check that the Cloud9 IDE is using the correct IAM Role. &lt;strong&gt;If the command returns a result&lt;/strong&gt;, it is set correctly.&lt;/p&gt;

&lt;p&gt;aws sts get-caller-identity --query Arn | grep eksworkspace-admin&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  kubectl
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/300-kubectl#install-kubectl" rel="noopener noreferrer"&gt;Install kubectl&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;kubectl&lt;/strong&gt; is the CLI that commands a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Kubernetes uses &lt;strong&gt;Kubernetes API&lt;/strong&gt; to perform actions related to creating, modifying, or deleting objects. When you use the kubectl CLI, the command invokes the Kubernetes API to perform the associated actions.&lt;/p&gt;

&lt;p&gt;Click &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html" rel="noopener noreferrer"&gt;here &lt;/a&gt;to &lt;strong&gt;install the kubectl version corresponding to the Amazon EKS version you want to deploy&lt;/strong&gt;. In this workshop, we will install the latest kubectl binary (as of August 2023).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo curl -o /usr/local/bin/kubectl  \
   https://s3.us-west-2.amazonaws.com/amazon-eks/1.27.4/2023-08-16/bin/linux/amd64/kubectl

sudo chmod +x /usr/local/bin/kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Use the command below to check that the latest kubectl is installed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version --client=true --short=true

# Output Sample
Client Version: v1.27.4-eks-8ccc7ba
Kustomize Version: v5.0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  ETC
&lt;/h2&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/400-etc#install-jq" rel="noopener noreferrer"&gt;Install jq&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;jq is a command-line utility for working with data in JSON format. Install jq using the command below.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install -y jq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
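&lt;p&gt;As a quick sanity check, jq can pull individual fields out of a JSON document. The snippet below runs on a made-up JSON string whose field names merely mimic the EC2 instance identity document used later in this setup:&lt;/p&gt;

```shell
# Hypothetical JSON resembling the instance identity document
DOC='{"region":"ap-northeast-2","accountId":"123456789012"}'

# -r prints the raw string without surrounding quotes
echo "$DOC" | jq -r '.region'
# prints: ap-northeast-2
```

&lt;p&gt;The same &lt;strong&gt;jq -r&lt;/strong&gt; pattern is what extracts the Region from the instance metadata during the Cloud9 additional settings.&lt;/p&gt;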
&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/400-etc#install-bash-completion" rel="noopener noreferrer"&gt;Install bash-completion&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In the Bash shell, the kubectl completion script can be created using the &lt;strong&gt;kubectl completion bash&lt;/strong&gt; command. Sourcing the completion script to the shell enables automatic completion of the kubectl command. However, because these completion scripts rely on bash-completion, you must install &lt;a href="https://github.com/scop/bash-completion#installation" rel="noopener noreferrer"&gt;bash-completion &lt;/a&gt;through the command below.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install -y bash-completion
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The steps below are only necessary if you plan to explore &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/300-cicd" rel="noopener noreferrer"&gt;CI/CD for EKS Cluster&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/400-etc#install-git" rel="noopener noreferrer"&gt;Install Git&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Click the &lt;a href="https://git-scm.com/downloads" rel="noopener noreferrer"&gt;Git Downloader &lt;/a&gt;link and install the git.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/400-etc#install-python" rel="noopener noreferrer"&gt;Install Python&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Install Python, because CDK for Python is used. Python is installed by default in the Cloud9 environment; verify it with the commands below. If you need to install it yourself, select the appropriate package from the &lt;a href="https://www.python.org/downloads/" rel="noopener noreferrer"&gt;Python Installer&lt;/a&gt; link to download and install it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python --version
python3 --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/400-etc#check-pip" rel="noopener noreferrer"&gt;Check PIP&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Check whether pip, the manager that installs and manages Python packages, is installed. It is included by default with recent versions of Python.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip
pip3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Since pip version 9.0.3 or higher is required to use CodeCommit, update pip by executing the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -O https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py --user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If pip is not installed, it is recommended to proceed with the installation according to the guide on the &lt;a href="https://pip.pypa.io/en/stable/cli/pip_install/" rel="noopener noreferrer"&gt;pip install page&lt;/a&gt;, or to install the latest version of Python.&lt;/p&gt;
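&lt;p&gt;To verify the 9.0.3 requirement programmatically, you can compare the installed pip version as a tuple; a minimal sketch (assumes pip is importable from python3):&lt;/p&gt;

```shell
# Show the installed pip version
pip3 --version

# Compare (major, minor, patch) tuples; prints True when the 9.0.3 minimum is met
python3 -c 'import pip; print(tuple(map(int, pip.__version__.split(".")[:3])) >= (9, 0, 3))'
```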

&lt;h2&gt;
  
  
  Install eksctl
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/30-setting/500-eksctl#install-eksctl" rel="noopener noreferrer"&gt;Install eksctl&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;There are various ways to deploy an Amazon EKS cluster. AWS console, CloudFormation, CDK, eksctl, and Terraform are examples.&lt;/p&gt;

&lt;p&gt;In this lab, we will &lt;strong&gt;deploy the cluster using eksctl&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5dk1gkpx3ltzwzyc2c4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5dk1gkpx3ltzwzyc2c4.png" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://eksctl.io/" rel="noopener noreferrer"&gt;eksctl &lt;/a&gt;is a CLI tool for easily creating and managing EKS clusters. It is written in Go language and deployed in CloudFormation form.&lt;/p&gt;

&lt;p&gt;Download the latest eksctl binary using the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Move the binary to the location /usr/local/bin.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv -v /tmp/eksctl /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Use the command below to check the installation.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  AWS Cloud9 Additional Settings
&lt;/h2&gt;

&lt;p&gt;In the previous chapter, we &lt;strong&gt;deployed the AWS Cloud9 IDE&lt;/strong&gt; and &lt;strong&gt;installed the required tools&lt;/strong&gt;. Now proceed with the additional setup below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Set the default value to the AWS Region currently in use.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

export AWS_REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')

echo "export AWS_REGION=${AWS_REGION}" | tee -a ~/.bash_profile

aws configure set default.region ${AWS_REGION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Check AWS region.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure get default.region
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;p&gt;Register the account ID you are currently working on as an environment variable.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)

echo "export ACCOUNT_ID=${ACCOUNT_ID}" | tee -a ~/.bash_profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;During the docker image build, the AWS Cloud9 environment may experience a capacity shortage issue. To resolve this, run a shell script that extends the disk size.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://gist.githubusercontent.com/joozero/b48ee68e2174a4f1ead93aaf2b582090/raw/2dda79390a10328df66e5f6162846017c682bef5/resize.sh

sh resize.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After completion, use the command below to check that the increased volume size is reflected in the file system.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you do not set the AWS Region, errors will occur when you deploy the cluster and query the related information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container Image
&lt;/h2&gt;

&lt;p&gt;In this lab, you will learn how to create a container image using the Docker platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/40-container#docker" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/docker/?nc1=h_ls" rel="noopener noreferrer"&gt;Docker &lt;/a&gt;is a software &lt;strong&gt;platform **that allows you to build, test and deploy **containerized applications&lt;/strong&gt;. Docker packages software into standardized units called containers, which contain everything you need to run the software, including libraries, system tools, code, runtime, and so on.&lt;/p&gt;

&lt;p&gt;To learn more about &lt;strong&gt;Docker&lt;/strong&gt;, click &lt;a href="https://www.docker.com/resources/what-container" rel="noopener noreferrer"&gt;here &lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/40-container#container-image" rel="noopener noreferrer"&gt;Container Image&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;container image&lt;/strong&gt; is a bundle of the files and settings required to run a container. These images can be uploaded to and downloaded from a repository, and a running instance of an image is a &lt;strong&gt;container&lt;/strong&gt;. Container images can be pulled from official image repositories such as &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt; or built directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nh9614ncuxgoohf58dv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nh9614ncuxgoohf58dv.png" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Container Image
&lt;/h2&gt;

&lt;p&gt;This part is an independent lab for those who want to learn more about containers and container images. Skipping it causes no problems with &lt;strong&gt;configuring the web application with Amazon EKS&lt;/strong&gt;. If you want to skip this lab, move on to the &lt;strong&gt;Upload Image to Amazon ECR&lt;/strong&gt; chapter.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/40-container/100-build-image#create-container-image-yourself" rel="noopener noreferrer"&gt;Create Container Image Yourself&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrtefastf0usyfbqdrvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrtefastf0usyfbqdrvx.png" width="378" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Dockerfile&lt;/strong&gt; is a &lt;strong&gt;setup file for building container images&lt;/strong&gt;. That is, think of it as a blueprint for the image to be built. When such an image is run as a container, the application actually starts.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Paste the values below in the root folder (/home/ec2-user/environment).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/

cat &amp;lt;&amp;lt; EOF &amp;gt; Dockerfile
FROM nginx:latest
RUN echo '&amp;lt;h1&amp;gt; test nginx web page &amp;lt;/h1&amp;gt;' &amp;gt;&amp;gt; index.html
RUN cp /index.html /usr/share/nginx/html
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The main instructions used in a Dockerfile are as follows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;FROM&lt;/strong&gt;: Set the base image (specify the OS or version)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;RUN&lt;/strong&gt;: Execute commands in a new layer on top of the current image and commit the results&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;WORKDIR&lt;/strong&gt;: Set where instructions such as RUN, CMD, ENTRYPOINT, COPY, and ADD are performed in the Dockerfile&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EXPOSE&lt;/strong&gt;: Specify the port number on which the container listens&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CMD&lt;/strong&gt;: Specify the command for running the application&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Create an image with the &lt;strong&gt;docker build&lt;/strong&gt; command. For the name, enter the container image name; for the tag, if not specified, it defaults to &lt;strong&gt;latest&lt;/strong&gt;. In this lab, the container image is named test-image.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t test-image .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
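&lt;p&gt;To further illustrate the Dockerfile instructions listed above, the sketch below writes out a small hypothetical Dockerfile (not part of this lab) that also uses WORKDIR, EXPOSE, and CMD:&lt;/p&gt;

```shell
# Write an illustrative Dockerfile; printf is used instead of a heredoc
printf '%s\n' \
  'FROM nginx:latest' \
  'WORKDIR /usr/share/nginx/html' \
  'RUN echo "hello from a custom page" | tee index.html' \
  'EXPOSE 80' \
  'CMD ["nginx", "-g", "daemon off;"]' \
  > Dockerfile.example

cat Dockerfile.example
```

&lt;p&gt;Building this file with &lt;strong&gt;docker build&lt;/strong&gt; would produce an image that serves the generated index.html on port 80.&lt;/p&gt;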

&lt;ol start="3"&gt;
&lt;li&gt;Check the created images with the &lt;strong&gt;docker images&lt;/strong&gt; command.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Run the image as a container with the &lt;strong&gt;docker run&lt;/strong&gt; command. The command below uses the container image named test-image to run a container named test-nginx, mapping port 8080 on the host to port 80 in the container.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 8080:80 --name test-nginx test-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In other words, traffic sent to port 8080 on the host is forwarded by Docker to port 80 in the container.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;You can use the &lt;strong&gt;docker ps&lt;/strong&gt; command to check which containers are running on the current host. &lt;strong&gt;Open a new terminal in AWS Cloud9&lt;/strong&gt; and type the command below.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4oiu9f1gyndb9e0sewz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4oiu9f1gyndb9e0sewz.png" width="800" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;You can check the container's status by printing its logs with the &lt;strong&gt;docker logs&lt;/strong&gt; command.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker logs -f test-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="7"&gt;
&lt;li&gt;You can open a shell inside the container with the &lt;strong&gt;docker exec&lt;/strong&gt; command. After connecting, you can inspect the internal structure and leave with the exit command.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it test-nginx /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="8"&gt;
&lt;li&gt;In AWS Cloud9, you can see which applications are currently running by clicking Tools &amp;gt; Preview &amp;gt; Preview Running Application at the top.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9sswjl6na8vuxbnu6ku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9sswjl6na8vuxbnu6ku.png" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqevn8mlit8kfxctu3jkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqevn8mlit8kfxctu3jkg.png" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="9"&gt;
&lt;li&gt;Stop the running container with the &lt;strong&gt;docker stop&lt;/strong&gt; command.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stop test-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you type the docker ps command again, you can see that the container that was just running has disappeared from the list.&lt;/p&gt;

&lt;ol start="10"&gt;
&lt;li&gt;Delete the container with the &lt;strong&gt;docker rm&lt;/strong&gt; command. A container can be deleted only when it is stopped.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rm test-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="11"&gt;
&lt;li&gt;Delete the container image with the &lt;strong&gt;docker rmi&lt;/strong&gt; command.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rmi test-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When you type the &lt;strong&gt;docker images&lt;/strong&gt; command, you can check that the container image that was created is not listed.&lt;/p&gt;

&lt;p&gt;You can also use Docker commands to perform operations such as limiting CPU or memory and sharing directories with the host.&lt;/p&gt;
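&lt;p&gt;As an illustration of those options, the flags below are standard docker run options, but the container name and values here are hypothetical, and the command is only printed rather than executed:&lt;/p&gt;

```shell
# Limit the container to half a CPU and 256 MiB of RAM, and share the
# current directory with the container at /share (read-only)
CMD="docker run --cpus 0.5 --memory 256m -v $PWD:/share:ro --name limited-nginx test-image"
echo "$CMD"
```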

&lt;h2&gt;
  
  
  Upload container image to Amazon ECR
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/40-container/200-eks#create-amazon-ecr-repository-and-upload-image" rel="noopener noreferrer"&gt;Create Amazon ECR Repository and Upload Image&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Create a repository and upload the container image to &lt;strong&gt;Amazon Elastic Container Registry (ECR)&lt;/strong&gt;, a Docker container registry.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d9kdlmpcvmxv4ieo8dt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d9kdlmpcvmxv4ieo8dt.png" width="112" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Elastic Container Registry (&lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html" rel="noopener noreferrer"&gt;Amazon ECR&lt;/a&gt;) is an AWS managed container image registry service that is secure, scalable, and reliable. Amazon ECR supports private container image repositories with resource-based permissions using AWS IAM, so that specified users or Amazon EC2 instances can access your container repositories and images. You can use your preferred CLI to push, pull, and manage Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download the source code to be containerized through the command below.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/joozero/amazon-eks-flask.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Through the AWS CLI, create an image repository. In this lab, we will set the repository name to &lt;strong&gt;demo-flask-backend&lt;/strong&gt;. Also, specify the AWS Region code (for example, ap-northeast-2) in which to deploy the EKS cluster as the value of &lt;strong&gt;--region&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository \
--repository-name demo-flask-backend \
--image-scanning-configuration scanOnPush=true \
--region ${AWS_REGION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When you run this CLI command, information about the repository is returned in the output. You can also find the created repository in the &lt;a href="https://console.aws.amazon.com/ecr/home" rel="noopener noreferrer"&gt;Amazon ECR Console&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xduvl877auwx425l0iu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xduvl877auwx425l0iu.png" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the tasks below, your personal account information will be included. Click on the repository you just created on the &lt;a href="https://console.aws.amazon.com/ecr/home" rel="noopener noreferrer"&gt;Amazon ECR Console &lt;/a&gt;, then click &lt;strong&gt;View push commands&lt;/strong&gt; in the upper right corner to find the guide below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hgv7i1q4e0g54vnyiyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5hgv7i1q4e0g54vnyiyl.png" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kckfw9c2oz0ohm79b5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kckfw9c2oz0ohm79b5d.png" width="797" height="809"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;To push the container image to the repository, retrieve an authentication token and pass it to the &lt;strong&gt;docker login&lt;/strong&gt; command. At this point, specify the user name as AWS and specify the Amazon ECR registry URI that you want to authenticate with.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;[!] If the above command does not work properly, check whether the environment variable ACCOUNT_ID is set in the terminal.&lt;/p&gt;
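&lt;p&gt;One way to perform that check is with shell parameter expansion, which aborts with a message when the variable is empty or unset. The value below is hypothetical; in the lab it comes from the export during the Cloud9 additional settings:&lt;/p&gt;

```shell
# Hypothetical value; in the lab this is exported during Cloud9 setup
ACCOUNT_ID=123456789012

# ${VAR:?message} exits with an error if VAR is unset or empty
echo "${ACCOUNT_ID:?ACCOUNT_ID is not set - re-run the export from the Cloud9 additional settings}"
```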

&lt;ol start="4"&gt;
&lt;li&gt;Move to the &lt;strong&gt;downloaded source code location (for example, /home/ec2-user/environment/amazon-eks-flask)&lt;/strong&gt; and enter the command below to build the docker image.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-flask

docker build -t demo-flask-backend .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="5"&gt;
&lt;li&gt;When the image is built, use the &lt;strong&gt;docker tag&lt;/strong&gt; command to enable it to be pushed to a specific repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag demo-flask-backend:latest $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/demo-flask-backend:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="6"&gt;
&lt;li&gt;Push the image to the repository via the &lt;strong&gt;docker push&lt;/strong&gt; command.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/demo-flask-backend:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="7"&gt;
&lt;li&gt;In the Amazon ECR Console, click on the repository you just created to see the uploaded image as shown in the screen below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi1rl47dscbpzuo0k0p5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi1rl47dscbpzuo0k0p5.png" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have now created a &lt;strong&gt;container image&lt;/strong&gt; to deploy on the EKS cluster and pushed it to the &lt;strong&gt;repository&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create EKS Cluster
&lt;/h2&gt;

&lt;p&gt;Amazon EKS clusters can be deployed in various ways.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Deploy by clicking on &lt;a href="https://console.aws.amazon.com/eks/home#/" rel="noopener noreferrer"&gt;AWS console&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy by using IaC(Infrastructure as Code) tool such as &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html" rel="noopener noreferrer"&gt;AWS CloudFormation &lt;/a&gt;or &lt;a href="https://docs.aws.amazon.com/cdk/api/latest/" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy by using &lt;a href="https://eksctl.io/" rel="noopener noreferrer"&gt;eksctl&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy to Terraform, Pulumi, Rancher, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbvp51n5edjdmm9u507i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbvp51n5edjdmm9u507i.png" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this lab, we will create an EKS cluster using &lt;strong&gt;eksctl&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create EKS Cluster with eksctl
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/50-eks-cluster/100-launch-cluster#create-eks-cluster-with-eksctl" rel="noopener noreferrer"&gt;Create EKS Cluster with eksctl&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;If you use eksctl to execute this command (eksctl create cluster) without giving any setting values, the cluster is deployed with default parameters.&lt;/p&gt;

&lt;p&gt;However, we will &lt;strong&gt;create a configuration file to customize some values&lt;/strong&gt; and deploy with it. In later labs, when you create Kubernetes objects, you will likewise write configuration files rather than relying only on the kubectl CLI. This has the advantage of making it easy to identify and manage the desired state of the objects you specify.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Paste the values below in the root folder (/home/ec2-user/environment) location.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment

cat &amp;lt;&amp;lt; EOF &amp;gt; eks-demo-cluster.yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-demo # EKS Cluster name
  region: ${AWS_REGION} # Region Code to place EKS Cluster
  version: "1.27"

vpc:
  cidr: "10.0.0.0/16" # CIDR of VPC for use in EKS Cluster
  nat:
    gateway: HighlyAvailable

managedNodeGroups:
  - name: node-group # Name of node group in EKS Cluster
    instanceType: m5.large # Instance type for node group
    desiredCapacity: 3 # The number of worker node in EKS Cluster
    volumeSize: 20 # EBS Volume for worker node (unit: GiB)
    privateNetworking: true
    iam:
      withAddonPolicies:
        imageBuilder: true # Add permission for Amazon ECR
        albIngress: true # Add permission for ALB Ingress
        cloudWatch: true # Add permission for CloudWatch
        autoScaler: true # Add permission for Auto Scaling
        ebs: true # Add permission for EBS CSI driver

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you look at the cluster configuration file, you can define policies through &lt;strong&gt;iam.attachPolicyARNs&lt;/strong&gt;, and through &lt;strong&gt;iam.withAddonPolicies&lt;/strong&gt; you can define add-on policies. After the EKS cluster is deployed, you can check the IAM role of the worker node instances in the EC2 console to see the added policies.&lt;/p&gt;
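&lt;p&gt;For reference, a hypothetical node-group fragment using &lt;strong&gt;iam.attachPolicyARNs&lt;/strong&gt; might look like the following; when this field is set, eksctl expects the default node policies to be listed explicitly alongside any extras:&lt;/p&gt;

```yaml
managedNodeGroups:
  - name: node-group
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
```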

&lt;p&gt;Click &lt;a href="https://eksctl.io/usage/creating-and-managing-clusters/" rel="noopener noreferrer"&gt;here &lt;/a&gt;to see the various property values that you can give to the configuration file.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Using the commands below, deploy the cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster -f eks-demo-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The cluster takes &lt;strong&gt;approximately 15 to 20 minutes&lt;/strong&gt; to fully be deployed. You can see the progress of your cluster deployment in AWS Cloud9 terminal and also can see the status of events and resources in AWS CloudFormation console.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;When the deployment is completed, use command below to check that the node is properly deployed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Also, you can see the cluster credentials added in &lt;strong&gt;~/.kube/config&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/50-eks-cluster/100-launch-cluster#the-architecture-as-of-now" rel="noopener noreferrer"&gt;The architecture as of now&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon5dys8meeqowot1y64u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon5dys8meeqowot1y64u.png" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating a Kubernetes cluster with eksctl, the architecture of the services configured so far is shown above.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add Console Credential
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/50-eks-cluster/200-option-console#attach-console-credential" rel="noopener noreferrer"&gt;Attach Console Credential&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The EKS cluster uses IAM entities (users or roles) for cluster access control. The rules are stored in a ConfigMap named &lt;strong&gt;aws-auth&lt;/strong&gt;. By default, the IAM entity used to create the cluster is automatically granted &lt;strong&gt;system:masters&lt;/strong&gt; permissions in the cluster's RBAC configuration in the control plane.&lt;/p&gt;
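
&lt;p&gt;As a sketch, an &lt;strong&gt;aws-auth&lt;/strong&gt; ConfigMap that maps an IAM role to Kubernetes groups typically looks like the following (the account ID and role name here are placeholders, not values from this lab):&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Maps a node IAM role to the groups worker nodes need
    - rolearn: arn:aws:iam::123456789012:role/example-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```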

&lt;p&gt;If you access the Amazon EKS console in this state, you cannot see any cluster information, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiif6j5xr69e1czixs2pd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiif6j5xr69e1czixs2pd.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because you created the cluster through the IAM credentials on Cloud9 in the &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/50-eks-cluster/100-launch-cluster.html" rel="noopener noreferrer"&gt;Create EKS Cluster with eksctl&lt;/a&gt; chapter, you need to determine the correct credential (such as your IAM role, not the Cloud9 credentials) to add for &lt;a href="https://console.aws.amazon.com/eks" rel="noopener noreferrer"&gt;AWS EKS Console&lt;/a&gt; access.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Use the command below to define the role ARN (Amazon Resource Name).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rolearn=$(aws cloud9 describe-environment-memberships --environment-id=$C9_PID | jq -r '.memberships[].userArn')

echo ${rolearn}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;[!] If the &lt;strong&gt;echo&lt;/strong&gt; command's result contains &lt;strong&gt;assumed-role&lt;/strong&gt;, perform the additional steps below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;assumedrolename=$(echo ${rolearn} | awk -F/ '{print $(NF-1)}')
rolearn=$(aws iam get-role --role-name ${assumedrolename} --query Role.Arn --output text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
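
&lt;p&gt;To see what those two lines do, here is a minimal sketch using a hypothetical assumed-role ARN (the account ID and role name are made up): awk splits the ARN on "/" and takes the next-to-last field, which is the underlying role name.&lt;/p&gt;

```shell
# Hypothetical assumed-role ARN (illustrative only, not from this lab)
rolearn="arn:aws:sts::123456789012:assumed-role/my-cloud9-role/i-0abcd1234"

# Split on "/" and print the next-to-last field: the role name
assumedrolename=$(echo ${rolearn} | awk -F/ '{print $(NF-1)}')
echo ${assumedrolename}   # prints: my-cloud9-role
```

The second command in the original snippet then resolves that role name back to the full IAM role ARN with aws iam get-role.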

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create an identity mapping.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create iamidentitymapping --cluster eks-demo --arn ${rolearn} --group system:masters --username admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You can check &lt;strong&gt;aws-auth&lt;/strong&gt; config map information through the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe configmap -n kube-system aws-auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the above operations are complete, you will be able to see information about the control plane, worker nodes, logging activation, and updates in the Amazon EKS console.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz22agecust2lz8moi8jd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz22agecust2lz8moi8jd.png" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the Workloads tab, you can see the applications placed in the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd00qhewmmbbdtl6w5a1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd00qhewmmbbdtl6w5a1.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the Configuration tab, you can get cluster configuration detail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfxc2rhv4cvpgufgcbf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfxc2rhv4cvpgufgcbf9.png" width="800" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Ingress Controller
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/60-ingress-controller#ingress-controller" rel="noopener noreferrer"&gt;Ingress Controller&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In this lab, we will use the &lt;a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/" rel="noopener noreferrer"&gt;AWS Load Balancer Controller&lt;/a&gt; as the Ingress Controller.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;AWS ALB Ingress Controller&lt;/strong&gt; has been rebranded to &lt;strong&gt;AWS Load Balancer Controller&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ingress&lt;/strong&gt; is a rule and resource object that defines how to handle requests, primarily requests coming from outside the cluster to services inside the Kubernetes cluster. In short, it serves as a gateway through which external requests reach the inside of the cluster. You can configure it to load-balance external requests, terminate TLS/SSL, route HTTP paths, and so on. Ingress processes requests at layer 7 (L7).&lt;/p&gt;

&lt;p&gt;In Kubernetes, you can also expose services externally with the NodePort or LoadBalancer Service types, but if you use a Service object without an Ingress, you must configure details such as routing rules and TLS/SSL for every service. That's why Ingress is needed in a Kubernetes environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwr2brj37wqj0ydcj5d3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwr2brj37wqj0ydcj5d3.png" width="291" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An Ingress is the object in which you set up rules for handling external requests, and an &lt;strong&gt;Ingress Controller&lt;/strong&gt; is needed for these rules to take effect. Unlike other controllers that run as part of the kube-controller-manager, the Ingress controller is not created with the cluster by default. Therefore, you need to install it yourself.&lt;/p&gt;
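
&lt;p&gt;For illustration, a minimal Ingress that routes an HTTP path to a Service might look like the following (the names here are hypothetical; the lab's own manifests appear in a later section):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      # Requests whose path starts with /api go to example-backend:8080
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: example-backend
            port:
              number: 8080
```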

&lt;h2&gt;
  
  
  Create AWS Load Balancer Controller
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/60-ingress-controller/100-launch-alb#create-aws-load-balancer-controller" rel="noopener noreferrer"&gt;Create AWS Load Balancer Controller&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="noopener noreferrer"&gt;AWS Load Balancer Controller &lt;/a&gt;manages AWS Elastic Load Balancers for a Kubernetes cluster. The controller provisions the following resources.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It satisfies Kubernetes Service resources by provisioning Network Load Balancers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The controller was formerly named the AWS ALB Ingress Controller. The AWS Load Balancer Controller supports two &lt;strong&gt;traffic modes&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Instance (default): Registers the cluster's nodes as targets for the ALB. Traffic reaching the ALB is routed to a NodePort and then proxied to the Pod.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IP: Registers the Pod as an ALB target. Traffic reaching the ALB is routed directly to the Pod. To use this traffic mode, you must explicitly specify it with an annotation in the ingress.yaml file.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
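
&lt;p&gt;Concretely, IP mode is selected with an annotation on the Ingress metadata, as the lab's own manifests do later:&lt;/p&gt;

```yaml
metadata:
  annotations:
    # Route ALB traffic directly to Pod IPs instead of NodePorts
    alb.ingress.kubernetes.io/target-type: ip
```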

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35iamum7ikqz7o34wlcn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35iamum7ikqz7o34wlcn.png" width="728" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a folder named &lt;strong&gt;manifests&lt;/strong&gt; in the root folder (for example, /home/ec2-user/environment/) to manage manifests. Then, inside the manifests folder, create a folder named &lt;strong&gt;alb-ingress-controller&lt;/strong&gt; to manage the manifests associated with the ALB Ingress Controller.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment

mkdir -p manifests/alb-ingress-controller &amp;amp;&amp;amp; cd manifests/alb-ingress-controller

# Final location: /home/ec2-user/environment/manifests/alb-ingress-controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before deploying the AWS Load Balancer Controller, some preparation is needed. Because the controller runs on the worker nodes, it must be granted access to AWS ALB/NLB resources through IAM permissions. These permissions can be granted either through IAM Roles for Service Accounts (IRSA) or by attaching them directly to the worker node's IAM role.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;First, create an &lt;strong&gt;IAM OpenID Connect (OIDC) identity provider&lt;/strong&gt; for the cluster. An &lt;strong&gt;IAM OIDC provider&lt;/strong&gt; must exist in the cluster (in this lab, &lt;em&gt;eks-demo&lt;/em&gt;) in order for objects created by Kubernetes to use a &lt;a href="https://kubernetes.io/ko/docs/reference/access-authn-authz/service-accounts-admin/" rel="noopener noreferrer"&gt;service account&lt;/a&gt;, whose purpose is to authenticate to the API server or external services.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl utils associate-iam-oidc-provider \
    --region ${AWS_REGION} \
    --cluster eks-demo \
    --approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;[!] Let’s find out a little more here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The IAM OIDC identity provider you created can be found in the Identity providers menu of the IAM console or with the commands below.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the OIDC provider URL of the cluster through the commands below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks describe-cluster --name eks-demo --query "cluster.identity.oidc.issuer" --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result of the command has the following format:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://oidc.eks.ap-northeast-2.amazonaws.com/id/8A6E78112D7F1C4DC352B1B511DD13CF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
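
&lt;p&gt;The value after &lt;strong&gt;/id/&lt;/strong&gt; is the OIDC provider ID. As a small sketch, you can extract it from the issuer URL like this (the URL below is the example output shown above):&lt;/p&gt;

```shell
issuer="https://oidc.eks.ap-northeast-2.amazonaws.com/id/8A6E78112D7F1C4DC352B1B511DD13CF"

# The provider ID is the last path segment of the issuer URL
oidc_id=$(echo ${issuer} | awk -F/ '{print $NF}')
echo ${oidc_id}   # prints: 8A6E78112D7F1C4DC352B1B511DD13CF
```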

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Copy the value after &lt;strong&gt;/id/&lt;/strong&gt; from the output above, then execute the command as shown below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam list-open-id-connect-providers | grep 8A6E78112D7F1C4DC352B1B511DD13CF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If a result appears, the &lt;strong&gt;IAM OIDC identity provider&lt;/strong&gt; exists in the cluster; if no value appears, you must run the creation command again.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create an IAM Policy to grant to the AWS Load Balancer Controller.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create ServiceAccount for AWS Load Balancer Controller.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create iamserviceaccount \
    --cluster eks-demo \
    --namespace kube-system \
    --name aws-load-balancer-controller \
    --attach-policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
    --override-existing-serviceaccounts \
    --approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When deploying an EKS cluster, you can also add the IAM policy associated with the AWS Load Balancer Controller to the worker nodes in the form of an add-on. However, in this lab, we will follow the installation guide &lt;a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/deploy/installation/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Also, refer to a simple hands-on lab about IAM Roles for Service Accounts (&lt;a href="https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/" rel="noopener noreferrer"&gt;IRSA&lt;/a&gt;) &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-restrict-s3-bucket/?nc1=h_ls" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/60-ingress-controller/100-launch-alb#add-controller-to-cluster" rel="noopener noreferrer"&gt;Add Controller to Cluster&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Add the AWS Load Balancer Controller to the cluster. First, install &lt;a href="https://github.com/jetstack/cert-manager" rel="noopener noreferrer"&gt;&lt;em&gt;cert-manager&lt;/em&gt;&lt;/a&gt; to insert the certificate configuration into the webhook. &lt;strong&gt;Cert-manager&lt;/strong&gt; is an open-source tool that automatically provisions and manages TLS certificates within a Kubernetes cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download the Load Balancer Controller yaml file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -Lo v2_5_4_full.yaml https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.5.4/v2_5_4_full.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following command to remove the &lt;strong&gt;ServiceAccount&lt;/strong&gt; section in the manifest. If you don’t remove this section, the required annotation that you made to the service account in a previous step is overwritten.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sed -i.bak -e '596,604d' ./v2_5_4_full.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the &lt;strong&gt;Deployment&lt;/strong&gt; spec section of the file, replace the placeholder cluster name with the name of your cluster (in this lab, eks-demo).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sed -i.bak -e 's|your-cluster-name|eks-demo|' ./v2_5_4_full.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
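
&lt;p&gt;As a sketch of what this sed invocation does, the following demonstrates the substitution on a throwaway file; -i.bak edits the file in place and keeps the original with a .bak suffix (the file path and contents here are illustrative only):&lt;/p&gt;

```shell
# Create a throwaway file containing the placeholder
printf 'clusterName: your-cluster-name\n' > /tmp/demo-sed.yaml

# Replace the placeholder in place, keeping a backup copy
sed -i.bak -e 's|your-cluster-name|eks-demo|' /tmp/demo-sed.yaml

cat /tmp/demo-sed.yaml       # prints: clusterName: eks-demo
cat /tmp/demo-sed.yaml.bak   # prints: clusterName: your-cluster-name
```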

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Deploy the &lt;strong&gt;AWS Load Balancer Controller&lt;/strong&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f v2_5_4_full.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Download the IngressClass and IngressClassParams manifest, then apply it to your cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -Lo v2_5_4_ingclass.yaml https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.5.4/v2_5_4_ingclass.yaml

kubectl apply -f v2_5_4_ingclass.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Check that the deployment succeeded and the controller is running with the command below. If a result is returned, it means success.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployment -n kube-system aws-load-balancer-controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In addition, the command below shows that the &lt;strong&gt;service account&lt;/strong&gt; has been created.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get sa aws-load-balancer-controller -n kube-system -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Pods that run inside the cluster to provide supporting functions are called &lt;strong&gt;add-ons&lt;/strong&gt;. Add-on pods are managed by a Deployment, ReplicationController, and so on, and the namespace this add-on uses is &lt;strong&gt;kube-system&lt;/strong&gt;. Because the namespace is specified as kube-system in the yaml file, the deployment succeeded if the pod name appears in the command above. You can also check the relevant logs with the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "aws-load-balancer[a-zA-Z0-9-]+")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Detailed property values are available with the commands below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALBPOD=$(kubectl get pod -n kube-system | egrep -o "aws-load-balancer[a-zA-Z0-9-]+")

kubectl describe pod -n kube-system ${ALBPOD}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Deploy Microservices
&lt;/h2&gt;

&lt;p&gt;In this lab, you will learn how to deploy the backend and frontend services that make up the web service to Amazon EKS. Each service is deployed in the following order.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7oklj5s542uqf35haf9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7oklj5s542uqf35haf9t.png" width="735" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Download source code from git repository&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a repository for each container image in Amazon ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build container image from source code location, including Dockerfile, and push to repository&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create and deploy Deployment, Service, Ingress manifest files for each service.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The figure below shows the order in which end users access the web service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpr1k4asmi085wu8m530.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpr1k4asmi085wu8m530.png" width="537" height="175"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Deploy First Backend Service
&lt;/h2&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/70-deploy-service/100-flask-backend#deploy-flask-backend" rel="noopener noreferrer"&gt;Deploy flask backend&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;To proceed with this lab, the &lt;strong&gt;Upload container image to Amazon ECR&lt;/strong&gt; part must be completed first.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Move to the &lt;strong&gt;manifests folder&lt;/strong&gt; (/home/ec2-user/environment/manifests).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/manifests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create the &lt;strong&gt;deploy manifest&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; flask-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-flask-backend
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-flask-backend
  template:
    metadata:
      labels:
        app: demo-flask-backend
    spec:
      containers:
        - name: demo-flask-backend
          image: $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/demo-flask-backend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Next, create the &lt;strong&gt;service manifest&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; flask-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: demo-flask-backend
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: "/contents/aws"
spec:
  selector:
    app: demo-flask-backend
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Finally, create the &lt;strong&gt;ingress manifest&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; flask-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "flask-backend-ingress"
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: eks-demo-group
    alb.ingress.kubernetes.io/group.order: '1'
spec:
  ingressClassName: alb
  rules:
  - http:
        paths:
          - path: /contents
            pathType: Prefix
            backend:
              service:
                name: "demo-flask-backend"
                port:
                  number: 8080
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Deploy the manifests created above in the order shown below. The Ingress provisions an Application Load Balancer (ALB).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f flask-deployment.yaml
kubectl apply -f flask-service.yaml
kubectl apply -f flask-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Paste the result of the following command into a web browser or an API platform (like Postman) to check:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo http://$(kubectl get ingress/flask-backend-ingress -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')/contents/aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It will take some time for the Ingress object to be deployed. Wait for the &lt;strong&gt;Load Balancer&lt;/strong&gt; status to become active in the &lt;a href="https://console.aws.amazon.com/ec2/v2/home#LoadBalancers:" rel="noopener noreferrer"&gt;EC2 console&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The architecture as of now is shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l0e7xg27785188k96j2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l0e7xg27785188k96j2.png" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Deploy Second Backend Service
&lt;/h2&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/70-deploy-service/200-nodejs-backend#deploy-express-backend" rel="noopener noreferrer"&gt;Deploy Express backend&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Deploy the express backend in the same order as the flask backend.&lt;/p&gt;

&lt;p&gt;The lab below will deploy pre-built container images to skip the image build and repository push process conducted in &lt;strong&gt;Upload container image to Amazon ECR&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Move on to &lt;strong&gt;manifests folder&lt;/strong&gt;(/home/ec2-user/environment/manifests).&lt;/p&gt;

&lt;p&gt;cd ~/environment/manifests/&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create the &lt;strong&gt;deploy manifest&lt;/strong&gt;, which uses a pre-built container image.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; nodejs-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-nodejs-backend
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-nodejs-backend
  template:
    metadata:
      labels:
        app: demo-nodejs-backend
    spec:
      containers:
        - name: demo-nodejs-backend
          image: public.ecr.aws/y7c9e1d2/joozero-repo:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, create the &lt;strong&gt;service manifest&lt;/strong&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; nodejs-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: demo-nodejs-backend
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: "/services/all"
spec:
  selector:
    app: demo-nodejs-backend
  type: NodePort
  ports:
    - port: 8080
      targetPort: 3000
      protocol: TCP
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, create the &lt;strong&gt;ingress manifest&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; nodejs-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "nodejs-backend-ingress"
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: eks-demo-group
    alb.ingress.kubernetes.io/group.order: '2'
spec:
  ingressClassName: alb
  rules:
  - http:
        paths:
          - path: /services
            pathType: Prefix
            backend:
              service:
                name: "demo-nodejs-backend"
                port:
                  number: 8080
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Deploy the manifest files.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f nodejs-deployment.yaml
kubectl apply -f nodejs-service.yaml
kubectl apply -f nodejs-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Paste the result of the following command into a web browser or an API platform (like Postman) to check.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo http://$(kubectl get ingress/nodejs-backend-ingress -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')/services/all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The architecture as of now is shown below.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gn8wf8lqv6rdib7ifp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gn8wf8lqv6rdib7ifp2.png" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy Frontend Service
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/70-deploy-service/300-frontend#deploy-react-frontend" rel="noopener noreferrer"&gt;Deploy React Frontend&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Once you have deployed two backend services, you will now deploy the frontend to configure the web page’s screen.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Download the source code to be containerized through the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/ec2-user/environment
git clone https://github.com/joozero/amazon-eks-frontend.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create an image repository through the AWS CLI. In this lab, we will set the repository name to &lt;strong&gt;demo-frontend&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository \
--repository-name demo-frontend \
--image-scanning-configuration scanOnPush=true \
--region ${AWS_REGION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To display data from the two backend APIs on the web page, we have to change the source code. Change the url values in the &lt;strong&gt;App.js&lt;/strong&gt; file and the &lt;strong&gt;page/upperPage.js&lt;/strong&gt; file in the frontend source code (location: /home/ec2-user/environment/amazon-eks-frontend/src).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtf5r7bj97wr4ko7w3x6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtf5r7bj97wr4ko7w3x6.png" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above source code, paste the values derived from the result (ingress addresses) below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo http://$(kubectl get ingress/flask-backend-ingress -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')/contents/'${search}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4n9547lvrvppmwgndpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4n9547lvrvppmwgndpd.png" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above source code, paste the values derived from the result (ingress addresses) below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo http://$(kubectl get ingress/nodejs-backend-ingress -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')/services/all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Execute the following command in the location of the &lt;strong&gt;amazon-eks-frontend&lt;/strong&gt; folder.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/ec2-user/environment/amazon-eks-frontend
npm install
npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;[!] If &lt;strong&gt;npm install&lt;/strong&gt; reports severity vulnerabilities, run the npm audit fix command and then run &lt;strong&gt;npm run build&lt;/strong&gt; again.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Refer to the &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/40-container/200-eks" rel="noopener noreferrer"&gt;Upload container image to Amazon ECR&lt;/a&gt; guide to create the container image repository and push the image. In this lab, set the image repository name to &lt;strong&gt;demo-frontend&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t demo-frontend .

docker tag demo-frontend:latest $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/demo-frontend:latest

docker push $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/demo-frontend:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;[!] If the commands above fail with the message &lt;strong&gt;denied: Your authorization token has expired. Reauthenticate and try again.&lt;/strong&gt;, run the command below and then retry.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Move to the &lt;strong&gt;manifests&lt;/strong&gt; folder. At this point, set the image value to the &lt;strong&gt;demo-frontend&lt;/strong&gt; repository URI.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/ec2-user/environment/manifests

cat &amp;lt;&amp;lt;EOF&amp;gt; frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-frontend
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-frontend
  template:
    metadata:
      labels:
        app: demo-frontend
    spec:
      containers:
        - name: demo-frontend
          image: $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/demo-frontend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-frontend
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: "/"
spec:
  selector:
    app: demo-frontend
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; frontend-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "frontend-ingress"
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: eks-demo-group
    alb.ingress.kubernetes.io/group.order: '3'
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: "demo-frontend"
                port:
                  number: 80
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deploy the manifest files.&lt;/p&gt;

&lt;p&gt;kubectl apply -f frontend-deployment.yaml&lt;br&gt;
kubectl apply -f frontend-service.yaml&lt;br&gt;
kubectl apply -f frontend-ingress.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy and paste the result of the command below into the web browser.&lt;/p&gt;

&lt;p&gt;echo http://$(kubectl get ingress/frontend-ingress -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you see the screen below, all the containers are working successfully.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62fthozg3kp51j1t4aux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62fthozg3kp51j1t4aux.png" width="736" height="678"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/70-deploy-service/300-frontend#the-architecture-as-of-now" rel="noopener noreferrer"&gt;The architecture as of now&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;After deploying Ingress Controller and Service objects, the architecture configured is shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexeghh5ysjwbguofrjg6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexeghh5ysjwbguofrjg6.png" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Fargate
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/80-fargate#aws-fargate" rel="noopener noreferrer"&gt;AWS Fargate&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvpusrz8n2msrp9thuny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvpusrz8n2msrp9thuny.png" width="727" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Fargate&lt;/strong&gt; is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy service with AWS Fargate
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/80-fargate/100-fargate-pod#deploy-pod-with-aws-fargate" rel="noopener noreferrer"&gt;Deploy pod with AWS Fargate&lt;/a&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;To deploy pods to Fargate in a cluster, you must define at least one Fargate profile that the pods use when they run. In other words, a Fargate profile specifies the conditions under which pods are created as the AWS Fargate type.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/ec2-user/environment/manifests

cat &amp;lt;&amp;lt;EOF&amp;gt; eks-demo-fargate-profile.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-demo
  region: ${AWS_REGION}
fargateProfiles:
  - name: frontend-fargate-profile
    selectors:
      - namespace: default
        labels:
          app: frontend-fargate
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Pods that meet the conditions listed under &lt;strong&gt;selectors&lt;/strong&gt; in the yaml file above will be deployed as the AWS Fargate type.&lt;/p&gt;
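&lt;p&gt;Conceptually, the profile's selectors behave as a match on namespace plus labels. The sketch below illustrates that matching rule in Python; it is an illustration of the concept, not eksctl's actual implementation:&lt;/p&gt;

```python
# Illustrative sketch of Fargate profile selector matching (not the real scheduler).
profile_selector = {"namespace": "default", "labels": {"app": "frontend-fargate"}}

def matches_fargate_profile(pod_namespace, pod_labels, selector):
    """A pod lands on Fargate when its namespace matches and it carries
    every label listed in the profile's selector."""
    if pod_namespace != selector["namespace"]:
        return False
    return all(pod_labels.get(k) == v for k, v in selector.get("labels", {}).items())

print(matches_fargate_profile("default", {"app": "frontend-fargate"}, profile_selector))  # True
print(matches_fargate_profile("default", {"app": "demo-frontend"}, profile_selector))     # False
```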

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Deploy fargate profile.&lt;/p&gt;

&lt;p&gt;eksctl create fargateprofile -f eks-demo-fargate-profile.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check whether fargate profile was deployed successfully.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl get fargateprofile --cluster eks-demo -o json

# Example of command result
{
    "name": "frontend-fargate-profile",
    "podExecutionRoleARN": "arn:aws:iam::account-id:role/eksctl-eks-demo-test-farga-FargatePodExecutionRole-OLC3P21AD5DX",
    "selectors": [
        {
            "namespace": "default",
            "labels": {
                "app": "frontend-fargate"
            }
        }
    ],
    "subnets": [
        "subnet-07e2d55650225419c",
        "subnet-0ac4a7fdbd803039c",
        "subnet-046a3dcfabce11b5f"
    ],
    "status": "ACTIVE"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In this lab, we will provision the frontend pods as the Fargate type. First, delete the existing frontend pods. Run the command below in the folder where the yaml file is located.&lt;/p&gt;

&lt;p&gt;kubectl delete -f frontend-deployment.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Modify the frontend-deployment.yaml file. Compared with the previous yaml file, the value of the label changed from &lt;strong&gt;demo-frontend&lt;/strong&gt; to &lt;strong&gt;frontend-fargate&lt;/strong&gt;. Per the profile in step 1, when a pod has the label app=frontend-fargate in the default namespace, the EKS cluster deploys it as the Fargate type.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/ec2-user/environment/manifests

cat &amp;lt;&amp;lt;EOF&amp;gt; frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-frontend
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend-fargate
  template:
    metadata:
      labels:
        app: frontend-fargate
    spec:
      containers:
        - name: demo-frontend
          image: $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/demo-frontend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Modify frontend-service.yaml file.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-frontend
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: "/"
spec:
  selector:
    app: frontend-fargate
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deploy the manifest files.&lt;/p&gt;

&lt;p&gt;kubectl apply -f frontend-deployment.yaml&lt;br&gt;
kubectl apply -f frontend-service.yaml&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;With the command below, you can see that the demo-frontend pods are provisioned on Fargate (see the NOMINATED NODE column).&lt;/p&gt;

&lt;p&gt;kubectl get pod -o wide&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Or, you can check the list of Fargate worker nodes with the following command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes -l eks.amazonaws.com/compute-type=fargate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You can also paste the results of the command below into the web browser to see the same screen as before.&lt;/p&gt;

&lt;p&gt;echo http://$(kubectl get ingress/frontend-ingress -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Explore Container Insights
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/90-monitoring#amazon-cloudwatch-container-insight" rel="noopener noreferrer"&gt;Amazon CloudWatch Container Insight&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cooq9peu457kdad5lsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cooq9peu457kdad5lsv.png" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use &lt;strong&gt;CloudWatch Container Insights&lt;/strong&gt; to collect, aggregate, and summarize metrics and logs from your containerized applications and microservices. Container Insights is available for Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and Kubernetes platforms on Amazon EC2. Amazon ECS support includes support for Fargate.&lt;/p&gt;

&lt;p&gt;CloudWatch automatically collects metrics for many resources, such as CPU, memory, disk, and network. Container Insights also provides diagnostic information, such as container restart failures, to help you isolate issues and resolve them quickly. You can also set CloudWatch alarms on metrics that Container Insights collects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Explore EKS CloudWatch Container Insights
&lt;/h2&gt;

&lt;p&gt;In this lab, you will use &lt;a href="https://fluentbit.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Fluent Bit&lt;/strong&gt;&lt;/a&gt; to route logs. You will install the &lt;strong&gt;CloudWatch Agent&lt;/strong&gt; to collect metrics of the cluster and &lt;strong&gt;Fluent Bit&lt;/strong&gt; to send logs to CloudWatch Logs, both as DaemonSets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pdp3p60yo13vep1uv02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pdp3p60yo13vep1uv02.png" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, create a folder to manage manifest files.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment
mkdir -p manifests/cloudwatch-insight &amp;amp;&amp;amp; cd manifests/cloudwatch-insight
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/90-monitoring/100-build-insight#install-cloudwatch-agent-fluent-bit" rel="noopener noreferrer"&gt;Install CloudWatch agent, Fluent Bit&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/50-eks-cluster/100-launch-cluster" rel="noopener noreferrer"&gt;Create EKS Cluster with eksctl&lt;/a&gt;, &lt;strong&gt;CloudWatch&lt;/strong&gt;-related permissions were attached to the worker nodes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a namespace named amazon-cloudwatch with the following command.&lt;/p&gt;

&lt;p&gt;kubectl create ns amazon-cloudwatch&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;[!] If the namespace was created successfully, it will appear in the list produced by the command below.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Set the following values, then install the CloudWatch agent and Fluent Bit. Copy and paste one line at a time.&lt;/p&gt;

&lt;p&gt;ClusterName=eks-demo&lt;br&gt;
RegionName=$AWS_REGION&lt;br&gt;
FluentBitHttpPort='2020'&lt;br&gt;
FluentBitReadFromHead='Off'&lt;br&gt;
[[ ${FluentBitReadFromHead} = 'On' ]] &amp;amp;&amp;amp; FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On'&lt;br&gt;
[[ -z ${FluentBitHttpPort} ]] &amp;amp;&amp;amp; FluentBitHttpServer='Off' || FluentBitHttpServer='On'&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
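&lt;p&gt;The two bash conditionals above derive the remaining Fluent Bit settings from the values you set: ReadFromTail is the inverse of ReadFromHead, and the HTTP server is enabled only when a port is given. A sketch of the same logic in Python, for illustration only:&lt;/p&gt;

```python
# Mirrors the bash one-liners: ReadFromTail inverts ReadFromHead,
# and the HTTP server is 'On' only when a port value is present.
FluentBitHttpPort = "2020"
FluentBitReadFromHead = "Off"

FluentBitReadFromTail = "Off" if FluentBitReadFromHead == "On" else "On"
FluentBitHttpServer = "Off" if not FluentBitHttpPort else "On"

print(FluentBitReadFromTail, FluentBitHttpServer)  # On On
```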

&lt;p&gt;For &lt;strong&gt;RegionName&lt;/strong&gt;, make sure it contains the AWS Region code you are currently working in. For instance, &lt;strong&gt;ap-northeast-2&lt;/strong&gt; if you are working in the Seoul Region.&lt;/p&gt;

&lt;p&gt;Then download the yaml file with the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then substitute the environment variables into the yaml file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sed -i 's/{{cluster_name}}/'${ClusterName}'/;s/{{region_name}}/'${RegionName}'/;s/{{http_server_toggle}}/"'${FluentBitHttpServer}'"/;s/{{http_server_port}}/"'${FluentBitHttpPort}'"/;s/{{read_from_head}}/"'${FluentBitReadFromHead}'"/;s/{{read_from_tail}}/"'${FluentBitReadFromTail}'"/' cwagent-fluent-bit-quickstart.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
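&lt;p&gt;The sed command above fills the {{placeholder}} tokens in the downloaded template with your values. A small Python sketch of the same substitution, using a hypothetical template fragment and assumed region:&lt;/p&gt;

```python
# Sketch of the template substitution the sed command performs.
template = "cluster_name: {{cluster_name}}\nregion: {{region_name}}\nhttp_server: {{http_server_toggle}}"

values = {
    "cluster_name": "eks-demo",
    "region_name": "us-east-2",   # assumption: your working region
    "http_server_toggle": '"On"',
}

rendered = template
for key, val in values.items():
    rendered = rendered.replace("{{%s}}" % key, val)

print(rendered)
```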

&lt;p&gt;Open the yaml file, find the &lt;strong&gt;DaemonSet&lt;/strong&gt; object named fluent-bit, and add the values below under its &lt;em&gt;spec&lt;/em&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: eks.amazonaws.com/compute-type
          operator: NotIn
          values:
          - fargate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;An excerpt of the edited file is shown below. Take care with indentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5fao4gtri567r7k22z0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5fao4gtri567r7k22z0.png" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deploy yaml file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cwagent-fluent-bit-quickstart.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Use the command below to check that the installation succeeded. You should see three cloudwatch-agent pods and three fluent-bit pods.&lt;/p&gt;

&lt;p&gt;kubectl get po -n amazon-cloudwatch&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can also check it with the command below; two DaemonSets should be listed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get daemonsets -n amazon-cloudwatch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/90-monitoring/100-build-insight#dive-into-the-cloudwatch-console" rel="noopener noreferrer"&gt;Dive into the CloudWatch console&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Log in to the &lt;a href="https://console.aws.amazon.com/cloudwatch" rel="noopener noreferrer"&gt;Amazon CloudWatch console&lt;/a&gt;, then click &lt;strong&gt;Container Insights&lt;/strong&gt; under the &lt;strong&gt;Insights&lt;/strong&gt; menu in the left sidebar.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86zphetwfjfi6282waxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86zphetwfjfi6282waxy.png" width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you click &lt;strong&gt;Map View&lt;/strong&gt; in the upper right corner, the cluster’s resources are displayed in tree form. You can also click on a particular object to see the associated metric values as shown below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwuy35282i7kxemrlso3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwuy35282i7kxemrlso3y.png" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Then select &lt;strong&gt;Performance monitoring&lt;/strong&gt; in the above select bar. And if you click &lt;strong&gt;EKS Services&lt;/strong&gt; at the top, you can see the metric values in terms of service as shown below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvim2nk642yhomduo3024.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvim2nk642yhomduo3024.png" width="800" height="657"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Also, select a specific pod from the &lt;strong&gt;Pod performance&lt;/strong&gt; section, then click &lt;strong&gt;View performance logs&lt;/strong&gt; in the dropdown to the right.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr957yomga93hlqyk5rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr957yomga93hlqyk5rq.png" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will be redirected to the &lt;strong&gt;CloudWatch Logs Insights&lt;/strong&gt; page as shown below. The query allows you to view the logs you want.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpja7crmy69bhts21dr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpja7crmy69bhts21dr9.png" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Autoscaling Pod &amp;amp; Cluster
&lt;/h2&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/100-scaling#kubernetes-auto-scaling" rel="noopener noreferrer"&gt;Kubernetes Auto Scaling&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Auto scaling is the ability to automatically create or delete servers based on user-defined schedules and events. It enables applications to respond flexibly to traffic.&lt;/p&gt;

&lt;p&gt;Kubernetes has two main auto scaling capabilities.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;HPA (Horizontal Pod Autoscaler)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cluster Autoscaler&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;HPA automatically scales the number of pods based on observed CPU usage or custom metrics. However, if the EKS cluster itself runs out of resources to schedule additional pods onto, consider Cluster Autoscaler.&lt;/p&gt;

&lt;p&gt;Applying these auto-scaling capabilities to a cluster allows you to configure a more resilient and scalable environment.&lt;/p&gt;
&lt;h2&gt;
  
  
  Apply HPA
&lt;/h2&gt;
&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/100-scaling/100-pod-scaling#applying-pod-scaling-with-hpa" rel="noopener noreferrer"&gt;Applying Pod Scaling with HPA&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The HPA (Horizontal Pod Autoscaler) controller adjusts the number of pods based on metrics. To apply pod scaling, you must specify the amount of resources required by the container and create the conditions to scale through HPA.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd2s3gqd6sqx309oo161.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftd2s3gqd6sqx309oo161.png" width="562" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the &lt;strong&gt;metrics server&lt;/strong&gt;. &lt;strong&gt;Metrics Server&lt;/strong&gt; aggregates resource usage data across the Kubernetes cluster. It collects metrics such as the CPU and memory usage of worker nodes and containers through the kubelet installed on each worker node.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;kubectl apply -f &lt;a href="https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml" rel="noopener noreferrer"&gt;https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Use the command below to check that the metrics server is created successfully.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;kubectl get deployment metrics-server -n kube-system&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;And then, modify flask deployment yaml file that you created in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/70-deploy-service/100-flask-backend" rel="noopener noreferrer"&gt;Deploy First Backend Service&lt;/a&gt;. Change replicas to 1 and set the amount of resources required for the container.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/ec2-user/environment/manifests

cat &amp;lt;&amp;lt;EOF&amp;gt; flask-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-flask-backend
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-flask-backend
  template:
    metadata:
      labels:
        app: demo-flask-backend
    spec:
      containers:
        - name: demo-flask-backend
          image: $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/demo-flask-backend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;1 vCPU = 1000m (millicores)&lt;/p&gt;
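&lt;p&gt;The requests and limits above use Kubernetes CPU units, where one vCPU equals 1000m. A quick sketch of the conversion (the helper name is illustrative):&lt;/p&gt;

```python
# Kubernetes CPU quantities: 1 vCPU = 1000m (millicores).
def to_millicores(quantity):
    """Parse a Kubernetes CPU quantity like '250m' or '0.5' into millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

print(to_millicores("250m"))  # 250
print(to_millicores("0.5"))   # 500
```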

&lt;ol&gt;
&lt;li&gt;Apply the yaml file to reflect the changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;kubectl apply -f flask-deployment.yaml&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;To set up HPA, create the yaml file below as well.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; flask-hpa.yaml
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: demo-flask-backend-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-flask-backend
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 30
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deploy yaml file.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f flask-hpa.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Alternatively, you can set this up with a single kubectl command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl autoscale deployment demo-flask-backend --cpu-percent=30 --min=1 --max=5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
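&lt;p&gt;Either way, the HPA control loop scales on the standard formula desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped between minReplicas and maxReplicas. A sketch of that calculation, using this lab's 30% target and 1–5 replica bounds:&lt;/p&gt;

```python
import math

def desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct,
                     min_replicas=1, max_replicas=5):
    """Standard HPA formula: ceil(current * observed/target), clamped to bounds."""
    desired = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(1, 90, 30))  # 3 -- scale out under load
print(desired_replicas(3, 5, 30))   # 1 -- scale back in when idle
```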

&lt;ol&gt;
&lt;li&gt;After creating the HPA, you can check its status with the command below. If the target shows CPU usage as unknown, wait a moment and check again.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;kubectl get hpa&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Perform a simple load test to check that the autoscaling functionality works properly. First, enter the command below to watch the change in the number of pods.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;kubectl get hpa -w&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, open another terminal in AWS Cloud9 for load testing, and generate HTTP load with the siege tool.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum -y install siege

export flask_api=$(kubectl get ingress/flask-backend-ingress -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')/contents/aws

siege -c 200 -i http://$flask_api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can generate load in one terminal and observe the resulting change in the number of pods in the other. You can see the &lt;strong&gt;REPLICAS&lt;/strong&gt; value increase up to 5 depending on the load.&lt;/p&gt;

&lt;h2&gt;
  
  
  Apply Cluster Autoscaler
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/100-scaling/200-cluster-scaling#applying-cluster-scaling-with-cluster-autoscaler" rel="noopener noreferrer"&gt;Applying Cluster Scaling with Cluster Autoscaler&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Auto scaling was applied to the pods in the previous chapter. However, depending on traffic, there may be insufficient worker node resources for the pods to scale into. In other words, the worker nodes are at full capacity and no more pods can be scheduled. This is where &lt;strong&gt;Cluster Autoscaler (CA)&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkoqzz1ef0f6w6zlypz8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkoqzz1ef0f6w6zlypz8i.png" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cluster Autoscaler (CA) scales out the worker nodes when pods are stuck in the Pending state. It performs scale-in/out by checking utilization at regular intervals. On AWS, Cluster Autoscaler works through an Auto Scaling Group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.toOptional"&gt;!&lt;/a&gt; To visualize the status of the current cluster, see &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/100-scaling/300-kube-ops-view" rel="noopener noreferrer"&gt;kube-ops-view&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the command below to check &lt;strong&gt;the value of ASG(Auto Scaling Group)&lt;/strong&gt; applied to the current cluster’s worker nodes.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling \
  describe-auto-scaling-groups \
  --query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') &amp;amp;&amp;amp; Value=='eks-demo']].[AutoScalingGroupName, MinSize, MaxSize, DesiredCapacity]" \
  --output table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this lab, when deploying an EKS cluster, we performed the task of attaching IAM policies related to autoscaling. However, if you have not done so, click the hidden folder below to create the relevant IAM policy and attach it to the IAM role.&lt;/p&gt;

&lt;p&gt;Creating an Auto Scaling IAM policy and attaching it to the worker nodes’ IAM role&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On the &lt;a href="http://console.aws.amazon.com/ec2/" rel="noopener noreferrer"&gt;Auto Scaling Groups&lt;/a&gt; page, click the ASG applied to the worker nodes and update the &lt;strong&gt;Group details&lt;/strong&gt; values as below.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Return to your Cloud9 environment and download the deployment example file provided by the &lt;strong&gt;Cluster Autoscaler&lt;/strong&gt; project.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/ec2-user/environment/manifests
wget https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Open the downloaded YAML file, set the cluster name to eks-demo, and deploy it.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
command:
  - ./cluster-autoscaler
  - --v=4
  - --stderrthreshold=info
  - --cloud-provider=aws
  - --skip-nodes-with-local-storage=false
  - --expander=least-waste
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/eks-demo
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster-autoscaler-autodiscover.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Perform a simple load test to check that the autoscaling functionality is working properly. First, enter the command below to watch the change in the number of worker nodes.&lt;/p&gt;

&lt;p&gt;kubectl get nodes -w&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then open a new terminal and run the commands below to deploy 100 pods, which will increase the number of worker nodes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment autoscaler-demo --image=nginx
kubectl scale deployment autoscaler-demo --replicas=100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To check the progress of the pod’s deployment, perform the following command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployment autoscaler-demo --watch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If &lt;strong&gt;kube-ops-view&lt;/strong&gt; is installed, you can visually see the results below. This shows that two additional worker nodes were created and 100 pods were deployed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you delete the previously created pods with the command below, you can see the worker nodes scale in.&lt;/p&gt;

&lt;p&gt;kubectl delete deployment autoscaler-demo&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Install Kubernetes Operational View
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/100-scaling/300-kube-ops-view#install-kubernetes-operational-view" rel="noopener noreferrer"&gt;Install Kubernetes Operational View&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://codeberg.org/hjacobs/kube-ops-view" rel="noopener noreferrer"&gt;Kubernetes Operational View &lt;/a&gt;is a simple web page that provides a visual view of the health of multiple Kubernetes clusters. Although not used for monitoring and operations management purposes, you can visually observe the process of scale-in/out during cluster autoscaling operations, such as &lt;strong&gt;Cluster Autoscaler&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this lab, we deploy kube-ops-view via &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt;. Helm is a tool for managing Kubernetes charts; a chart is a preconfigured package of Kubernetes resources. Managing charts with Helm makes it easy to handle a variety of manifest files.&lt;/p&gt;
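&lt;p&gt;To make the idea of a chart concrete, here is a minimal sketch of the directory layout Helm expects. The chart name demo-chart is hypothetical and only for illustration.&lt;/p&gt;

```shell
# Minimal skeleton of a Helm chart: Chart.yaml plus a templates/ directory.
# "demo-chart" is a made-up name for illustration.
mkdir -p demo-chart/templates
printf 'apiVersion: v2\nname: demo-chart\nversion: 0.1.0\n' | tee demo-chart/Chart.yaml
ls demo-chart
```

&lt;p&gt;A real chart would add values.yaml and manifest templates under templates/; helm install then renders and deploys the whole package as one unit.&lt;/p&gt;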

&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/100-scaling/300-kube-ops-view#install-helm" rel="noopener noreferrer"&gt;Install Helm&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Before configuring Helm, install the helm cli tool.&lt;/p&gt;

&lt;p&gt;curl &lt;a href="https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3&lt;/a&gt; | bash&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Check the current version through the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm version --short
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Use the command below to add a stable repository.&lt;/p&gt;

&lt;p&gt;helm repo add stable &lt;a href="https://charts.helm.sh/stable" rel="noopener noreferrer"&gt;https://charts.helm.sh/stable&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;helm repo add k8s-at-home &lt;a href="https://k8s-at-home.com/charts/" rel="noopener noreferrer"&gt;https://k8s-at-home.com/charts/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;helm repo update&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Optional) Configure Bash completion for the helm command.&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm completion bash &amp;gt;&amp;gt; ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
source &amp;lt;(helm completion bash)
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/100-scaling/300-kube-ops-view#install-kube-ops-view" rel="noopener noreferrer"&gt;Install kube-ops-view&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install kube-ops-view through helm.&lt;/p&gt;

&lt;p&gt;helm install kube-ops-view k8s-at-home/kube-ops-view&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check that the chart was installed successfully.&lt;/p&gt;

&lt;p&gt;helm list&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get the application URL by running these commands.&lt;/p&gt;

&lt;p&gt;export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=kube-ops-view,app.kubernetes.io/instance=kube-ops-view" -o jsonpath="{.items[0].metadata.name}")&lt;/p&gt;

&lt;p&gt;echo "Visit &lt;a href="http://127.0.0.1:8080" rel="noopener noreferrer"&gt;http://127.0.0.1:8080&lt;/a&gt; to use your application"&lt;/p&gt;

&lt;p&gt;kubectl port-forward $POD_NAME 8080:8080&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In Cloud9, click &lt;strong&gt;Preview &amp;gt; Preview Running Application&lt;/strong&gt; at the top of the screen. You will see a screen like the one below.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  CI/CD for EKS cluster
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd#cicd-pipeline-for-eks-cluster-kubernetes-cluster" rel="noopener noreferrer"&gt;CI/CD pipeline for EKS Cluster / Kubernetes Cluster&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The shape of a CI/CD pipeline for EKS or Kubernetes varies, for two reasons: 1/ many different CI/CD tools are out there, and 2/ each dev/ops team’s culture around adopting and using those tools is also diverse.&lt;/p&gt;

&lt;p&gt;Given that, this tutorial aims to introduce a CI/CD pipeline that is&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;easy and speedy to implement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;automated, minimizing manual tasks&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On top of that, the CI/CD pipeline we build in this tutorial will automatically detect application code changes in &lt;strong&gt;GitHub&lt;/strong&gt;, then trigger &lt;strong&gt;GitHub Actions&lt;/strong&gt; to integrate and build the changes. At the end, &lt;strong&gt;ArgoCD&lt;/strong&gt; deploys the built artifacts to the target EKS cluster. As building blocks that help automate this flow, we will introduce &lt;strong&gt;Kustomize&lt;/strong&gt;, a tool for packaging Kubernetes manifests, and &lt;strong&gt;Checkov&lt;/strong&gt; and &lt;strong&gt;Trivy&lt;/strong&gt; for static analysis to secure the EKS cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;GitHub&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Actions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kustomize&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ArgoCD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Checkov&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trivy&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The resulting CI/CD pipeline looks like the diagram below, a flow also known as &lt;strong&gt;gitops&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd#cicd-pipeline-for-eks-cluster-kubernetes-cluster-with-cdk-helm" rel="noopener noreferrer"&gt;CI/CD pipeline for EKS Cluster / Kubernetes Cluster with cdk helm&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The shape of a CI/CD pipeline for EKS or Kubernetes varies, for two reasons: 1/ many different CI/CD tools are out there, and 2/ each dev/ops team’s culture around adopting and using those tools is also diverse.&lt;/p&gt;

&lt;p&gt;Given that, this tutorial aims to introduce a CI/CD pipeline that is&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;easy and speedy to implement&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;automated, minimizing manual tasks&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On top of that, the CI/CD pipeline we build in this tutorial will automatically detect application code changes in &lt;strong&gt;CodeCommit&lt;/strong&gt;, then trigger &lt;strong&gt;CodeBuild&lt;/strong&gt; to integrate and build the changes. At the end, &lt;strong&gt;ArgoCD&lt;/strong&gt; deploys the built artifacts to the target EKS cluster. As building blocks that help automate this flow, we will introduce &lt;strong&gt;Helm&lt;/strong&gt;, a tool for packaging Kubernetes manifests, and &lt;strong&gt;Checkov&lt;/strong&gt; and &lt;strong&gt;Trivy&lt;/strong&gt; for static analysis to secure the EKS cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CodeCommit&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CodeBuild&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CodePipeline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ArgoCD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Checkov&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trivy&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The resulting CI/CD pipeline looks like the diagram below, a flow also known as &lt;strong&gt;gitops&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create CI/CD pipeline
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/100-cicd#build-up-cicd-pipeline" rel="noopener noreferrer"&gt;Build up CI/CD pipeline&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Target CI/CD pipeline looks like this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create two git repositories: one for the application, one for the Kubernetes manifests&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need two GitHub repositories in place.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;front-app-repo&lt;/em&gt;&lt;/strong&gt;: holds the front-end application source code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt;: holds the Kubernetes manifest files&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Create &lt;strong&gt;&lt;em&gt;front-app-repo&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; initialize local code directory&lt;/p&gt;

&lt;p&gt;You should change &lt;strong&gt;“your-github-username”&lt;/strong&gt; to your own GitHub username.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend
rm -rf .git
export GITHUB_USERNAME=your-github-username
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(3)&lt;/strong&gt; Configure git remote repository locally&lt;/p&gt;

&lt;p&gt;Push front-end source code to &lt;strong&gt;&lt;em&gt;front-app-repo&lt;/em&gt;&lt;/strong&gt; you just created.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend
git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin https://github.com/$GITHUB_USERNAME/front-app-repo.git
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Confirm that it is all set.&lt;/p&gt;

&lt;p&gt;If you'd rather not enter your username and password on every login, you can cache credentials as below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git config --global user.name USERNAME
git config --global user.email EMAIL
git config credential.helper store
git config --global credential.helper 'cache --timeout TIME YOU WANT'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
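&lt;p&gt;For example, to cache credentials for one hour (3600 seconds; the value is just an illustration, pick whatever timeout you want):&lt;/p&gt;

```shell
# Cache git credentials in memory for 3600 seconds (1 hour).
git config --global credential.helper 'cache --timeout 3600'
# Print the setting back to confirm it took effect.
git config --global credential.helper
```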

&lt;p&gt;If you use GitHub MFA, you will be asked to use a personal access token as the password. To generate one, follow &lt;a href="https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/creating-a-personal-access-token" rel="noopener noreferrer"&gt;this GitHub guide&lt;/a&gt;. Once you have the token, use it whenever you are asked for a password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Prepare a least-privilege IAM user for the CI/CD pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitHub Actions plays the main role here: it builds the front-end application, builds its Docker container image, and pushes it to ECR. To let GitHub Actions access ECR securely, it is strongly recommended to use a separate least-privilege IAM user that limits GitHub Actions to ECR access only.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Create IAM user&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-user --user-name github-action
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Create ECR policy&lt;/p&gt;

&lt;p&gt;Make a JSON file containing the ECR policy.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment
cat &amp;lt;&amp;lt;EOF&amp;gt; ecr-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPush",
            "Effect": "Allow",
            "Action": [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload"
            ],
            "Resource": "arn:aws:ecr:${AWS_REGION}:${ACCOUNT_ID}:repository/demo-frontend"
        },
        {
            "Sid": "GetAuthorizationToken",
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken"
            ],
            "Resource": "*"
        }
    ]
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Using the file you made, create the IAM policy. The recommended policy name is ecr-policy.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-policy --policy-name ecr-policy --policy-document file://ecr-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(3)&lt;/strong&gt; Attach ECR policy to IAM user&lt;/p&gt;

&lt;p&gt;Attach ecr-policy to the IAM user you created previously.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam attach-user-policy --user-name github-action --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/ecr-policy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;3. Create GitHub secrets (AWS credential, GitHub token)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The GitHub Action we create will store its AWS credential and GitHub token in GitHub secrets. This way, those secrets are protected from unintentional exposure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Generate AWS Credential&lt;/p&gt;

&lt;p&gt;When GitHub Actions pushes the front-end application's Docker image, it uses an AWS credential. For this, we created github-action, the least-privilege IAM user. Now, create an access key and secret key for that user.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-access-key --user-name github-action
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make a note of the "SecretAccessKey" and "AccessKeyId" values in the output. They will be used later in this tutorial.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "AccessKey": {
    "UserName": "github-action",
    "Status": "Active",
    "CreateDate": "2021-07-29T08:41:04Z",
    "SecretAccessKey": "***",
    "AccessKeyId": "***"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Generate gitHub personal token&lt;/p&gt;

&lt;p&gt;Once logged in to github.com, navigate to &lt;strong&gt;profile &amp;gt; Settings &amp;gt; Developer settings &amp;gt; Personal access tokens&lt;/strong&gt;. Then click &lt;strong&gt;Generate new token&lt;/strong&gt; in the top right corner.&lt;/p&gt;

&lt;p&gt;Type a note such as access token for github action in &lt;strong&gt;Note&lt;/strong&gt;, select &lt;strong&gt;repo&lt;/strong&gt; in &lt;strong&gt;Select scopes&lt;/strong&gt;, and click &lt;strong&gt;Generate token&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Copy value of token in the output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(3)&lt;/strong&gt; Set up gitHub secret&lt;/p&gt;

&lt;p&gt;Go back to the &lt;strong&gt;front-app-repo&lt;/strong&gt; repository and navigate to &lt;strong&gt;Settings &amp;gt; Secrets&lt;/strong&gt;. Then click &lt;strong&gt;New repository secret&lt;/strong&gt; in the top right corner.&lt;/p&gt;

&lt;p&gt;As in the screenshot below, put ACTION_TOKEN in &lt;strong&gt;Name&lt;/strong&gt; and your personal access token in &lt;strong&gt;Value&lt;/strong&gt; (you copied the token in the previous step). Finally, click &lt;strong&gt;Add secret&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the same way, store the AccessKeyId and SecretAccessKey that github-action will use as GitHub secrets. Note that their &lt;strong&gt;Name&lt;/strong&gt; values must be AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Make a build script for GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Make .github directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend
mkdir -p ./.github/workflows
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Make build.yaml for gitHub Action to use&lt;/p&gt;

&lt;p&gt;GitHub Actions runs the tasks declared in build.yaml, so we need to declare there the tasks it should execute for us. build.yaml will check out the code, build the front-end application, build the Docker image, and push it to ECR.&lt;/p&gt;

&lt;p&gt;The most notable part of the script is the step that dynamically sets the &lt;strong&gt;docker image tag&lt;/strong&gt;: an &lt;strong&gt;$IMAGE_TAG&lt;/strong&gt; is generated dynamically for each build and attached to the Docker image.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend/.github/workflows
cat &amp;gt; build.yaml &amp;lt;&amp;lt;EOF
name: Build Front
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout source code
        uses: actions/checkout@v2
      - name: Check Node v
        run: node -v
      - name: Build front
        run: |
          npm install
          npm run build
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: \${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: \${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: $AWS_REGION
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Get image tag(verion)
        id: image
        run: |
          VERSION=\$(echo \${{ github.sha }} | cut -c1-8)
          echo VERSION=\$VERSION
          echo "::set-output name=version::\$VERSION"
      - name: Build, tag, and push image to Amazon ECR
        id: image-info
        env:
          ECR_REGISTRY: \${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: demo-frontend
          IMAGE_TAG: \${{ steps.image.outputs.version }}
        run: |
          echo "::set-output name=ecr_repository::\$ECR_REPOSITORY"
          echo "::set-output name=image_tag::\$IMAGE_TAG"
          docker build -t \$ECR_REGISTRY/\$ECR_REPOSITORY:\$IMAGE_TAG .
          docker push \$ECR_REGISTRY/\$ECR_REPOSITORY:\$IMAGE_TAG
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
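&lt;p&gt;The "Get image tag" step above just takes the first 8 characters of the commit SHA. A self-contained sketch of that derivation (the SHA here is a sample value, not one from the lab):&lt;/p&gt;

```shell
# GitHub Actions exposes the commit SHA as github.sha; here we use a sample value.
GITHUB_SHA="f7c3bc1d808e04732adf679965ccc34ca7ae3441"
VERSION=$(echo "$GITHUB_SHA" | cut -c1-8)
echo "$VERSION"  # prints f7c3bc1d
```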

&lt;p&gt;&lt;strong&gt;(3)&lt;/strong&gt; Run gitHub Action workflow&lt;/p&gt;

&lt;p&gt;Push the code to &lt;em&gt;front-app-repo&lt;/em&gt;. The GitHub Actions workflow is automatically triggered on push and runs step by step according to the build.yaml we declared.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend
git add .
git commit -m "Add github action build script"
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Return to the GitHub page and confirm the push completed. Also confirm that the GitHub Actions workflow runs as expected, as in the screenshots below.&lt;/p&gt;

&lt;p&gt;Once the workflow finishes successfully, go to the ECR repository demo-frontend and check that a new Docker image was pushed with the new $IMAGE_TAG.&lt;/p&gt;

&lt;p&gt;Check that the image tag contains part of the commit SHA.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Kustomize Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this tutorial, we use Kustomize to inject shared values (labels, metadata, etc.) into Kubernetes Deployment objects, which saves us from manually editing each object. Most importantly, we use Kustomize to assign the image tag to the Kubernetes Deployment automatically and dynamically.&lt;/p&gt;

&lt;p&gt;For more details on Kustomize, see the &lt;a href="https://kustomize.io/" rel="noopener noreferrer"&gt;Kustomize official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Structure directories for Kustomize&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Make directories&lt;/p&gt;

&lt;p&gt;The Kubernetes manifest now gets its own separate GitHub repository, and we will package it for deployment using Kustomize. For this we need a directory layout that Kustomize can operate on; the structure must follow a predefined naming rule.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment
mkdir -p ./k8s-manifest-repo/base
mkdir -p ./k8s-manifest-repo/overlays/dev
cd ~/environment/manifests
cp *.yaml ../k8s-manifest-repo/base
cd ../k8s-manifest-repo/base
ls -rlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The resulting layout has &lt;em&gt;base&lt;/em&gt; and &lt;em&gt;overlays/dev&lt;/em&gt; under &lt;em&gt;k8s-manifest-repo&lt;/em&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;base&lt;/em&gt; : raw Kubernetes manifest files live here. During the kustomize build process, they are automatically modified according to the user-customized content in &lt;strong&gt;kustomization.yaml&lt;/strong&gt; under &lt;em&gt;overlays&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;overlays&lt;/em&gt; : the &lt;strong&gt;user-customized content&lt;/strong&gt; lives in &lt;strong&gt;kustomization.yaml&lt;/strong&gt; under this directory. The &lt;em&gt;dev&lt;/em&gt; subdirectory holds the files for deploying to the dev environment, which is the environment we assume in this tutorial.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
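&lt;p&gt;The skeleton above can be rebuilt with plain mkdir; the commands below sketch it under a throwaway name (k8s-manifest-repo-demo, so it won't collide with the real repository directory):&lt;/p&gt;

```shell
# Recreate the Kustomize skeleton: base/ for raw manifests, overlays/dev for patches.
mkdir -p k8s-manifest-repo-demo/base
mkdir -p k8s-manifest-repo-demo/overlays/dev
find k8s-manifest-repo-demo -type d | sort
```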

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Make Kustomize manifest files&lt;/p&gt;

&lt;p&gt;Remember, the goal of this tutorial is a deployment pipeline for the front-end application. So we will replace some values in frontend-deployment.yaml and frontend-service.yaml with values injected during the deployment step (e.g. the image tag). These are the values we inject dynamically into the associated Kubernetes manifest files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;metadata.labels : "env: dev" will be reflected in frontend-deployment.yaml and frontend-service.yaml&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;spec.selector : "app: frontend-fargate" will be reflected in frontend-deployment.yaml and frontend-service.yaml&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;spec.template.spec.containers.image : the image name with the newly created image tag will be reflected in frontend-deployment.yaml&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make kustomization.yaml as below. The main purpose of this file is to define the target files that kustomize will inject values into.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/k8s-manifest-repo/base
cat &amp;lt;&amp;lt;EOF&amp;gt; kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - frontend-deployment.yaml
  - frontend-service.yaml
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, make the files containing what to inject into the target files that kustomization.yaml defined in the previous step. First, make a patch file for frontend-deployment.yaml.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/k8s-manifest-repo/overlays/dev
cat &amp;lt;&amp;lt;EOF&amp;gt; front-deployment-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-frontend
  namespace: default
  labels:
    env: dev
spec:
  selector:
    matchLabels:
      app: frontend-fargate
  template:
    metadata:
      labels:
        app: frontend-fargate
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Also, make a patch file for frontend-service.yaml.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/k8s-manifest-repo/overlays/dev
cat &amp;lt;&amp;lt;EOF&amp;gt; front-service-patch.yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-frontend
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: "/"
  labels:
    env: dev
spec:
  selector:
    app: frontend-fargate
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Lastly, extend kustomization.yaml to replace the image name that the Kubernetes Deployment object refers to. The image name must carry the &lt;strong&gt;image tag&lt;/strong&gt; generated during the front-end build in the GitHub Actions workflow.&lt;/p&gt;

&lt;p&gt;To be specific, the value assigned to name will be replaced with the combination of newName and newTag.&lt;/p&gt;

&lt;p&gt;Run this code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/k8s-manifest-repo/overlays/dev
cat &amp;lt;&amp;lt;EOF&amp;gt; kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/demo-frontend
  newName: ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/demo-frontend
  newTag: abcdefg
resources:
- ../../base
patchesStrategicMerge:
- front-deployment-patch.yaml
- front-service-patch.yaml
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As a result, the content of the XXXX-patch.yaml files and the images value in kustomization.yaml are automatically applied to the Kubernetes manifests on deployment to the EKS cluster.&lt;/p&gt;
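&lt;p&gt;In the CD step, the pipeline only needs to rewrite newTag with the image tag produced by the build. One common way to do that without extra tooling is a plain sed replacement; the snippet below is a sketch against a cut-down kustomization.yaml, and the image name and tags are sample values, not the lab's exact ones.&lt;/p&gt;

```shell
# Write a cut-down kustomization.yaml with a placeholder tag, then bump newTag.
printf 'images:\n- name: demo-frontend\n  newName: demo-frontend\n  newTag: abcdefg\n' | tee kustomization.yaml
# Replace the placeholder with the tag the build produced (sample value here).
sed -i 's/newTag: .*/newTag: f7c3bc1d/' kustomization.yaml
grep newTag kustomization.yaml
```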

&lt;p&gt;&lt;strong&gt;7. Create gitHub repository for kubernetes manifest&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; gitHub repo for kubernetes manifest&lt;/p&gt;

&lt;p&gt;Create a repository in GitHub named &lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt;. It will hold the Kubernetes manifest files we’ve created so far.&lt;/p&gt;

&lt;p&gt;Push the Kubernetes manifest files to &lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/k8s-manifest-repo/
git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin https://github.com/$GITHUB_USERNAME/k8s-manifest-repo.git
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;8. Set up ArgoCD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Install ArgoCD in EKS cluster&lt;/p&gt;

&lt;p&gt;Run this code to install ArgoCD in the EKS cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;ArgoCD also provides a CLI. Install the ArgoCD CLI, although we will not use it further in this tutorial.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment
VERSION=$(curl --silent "https://api.github.com/repos/argoproj/argo-cd/releases/latest" | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/')

sudo curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/$VERSION/argocd-linux-amd64

sudo chmod +x /usr/local/bin/argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;By default, ArgoCD is not exposed externally, so we need to set up an ELB in front of ArgoCD to accept incoming traffic.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It may take 3~4 minutes for the service to become reachable via the ELB. Run this command to get the ELB URL.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json | jq --raw-output .status.loadBalancer.ingress[0].hostname`
echo $ARGOCD_SERVER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The default ArgoCD username is admin. Retrieve the corresponding password with this command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARGO_PWD=`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
echo $ARGO_PWD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Open $ARGOCD_SERVER in your local web browser and log in with Username = admin and Password = $ARGO_PWD.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Configure ArgoCD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Configure ArgoCD&lt;/p&gt;

&lt;p&gt;After logging in, click &lt;strong&gt;Applications&lt;/strong&gt; in the top left corner.&lt;/p&gt;

&lt;p&gt;Next, enter basic information about the target application deployment. &lt;strong&gt;Application Name&lt;/strong&gt; and &lt;strong&gt;Project&lt;/strong&gt; should be eksworkshop-cd-pipeline and default, respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repository URL&lt;/strong&gt;, &lt;strong&gt;Revision&lt;/strong&gt;, and &lt;strong&gt;Path&lt;/strong&gt; in the &lt;strong&gt;SOURCE&lt;/strong&gt; section must be the Git URL of &lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt;, main, and overlays/dev, respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster URL&lt;/strong&gt; and &lt;strong&gt;Namespace&lt;/strong&gt; in the &lt;strong&gt;DESTINATION&lt;/strong&gt; section must be &lt;a href="https://kubernetes.default.svc" rel="noopener noreferrer"&gt;https://kubernetes.default.svc&lt;/a&gt; and default, respectively. After entering these values, click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When creation succeeds, an application named &lt;strong&gt;eksworkshop-cd-pipeline&lt;/strong&gt; appears as below.&lt;/p&gt;
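For reference, the same settings can also be expressed declaratively as an ArgoCD Application manifest. This is only a sketch of the UI inputs above; the repoURL placeholder must be replaced with your actual k8s-manifest-repo URL:

```yaml
# Declarative equivalent of the UI inputs (sketch; repoURL is a placeholder).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: eksworkshop-cd-pipeline
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<GITHUB_USERNAME>/k8s-manifest-repo.git
    targetRevision: main
    path: overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: default
```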

&lt;p&gt;&lt;strong&gt;10. Add Kustomize build step&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Improve the GitHub Actions build script. Append this code to build.yaml in &lt;strong&gt;&lt;em&gt;front-app-repo&lt;/em&gt;&lt;/strong&gt;. It updates the container image tag in the Kubernetes manifest files using Kustomize, then commits and pushes those files to &lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When it finishes successfully, &lt;strong&gt;ArgoCD&lt;/strong&gt;, which watches &lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt;, will detect the update and start the deployment process.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend/.github/workflows
cat &amp;lt;&amp;lt;EOF&amp;gt;&amp;gt; build.yaml
      - name: Setup Kustomize
        uses: imranismail/setup-kustomize@v1
      - name: Checkout kustomize repository
        uses: actions/checkout@v2
        with:
          repository: $GITHUB_USERNAME/k8s-manifest-repo
          ref: main
          token: \${{ secrets.ACTION_TOKEN }}
          path: k8s-manifest-repo
      - name: Update Kubernetes resources
        run: |
          echo \${{ steps.login-ecr.outputs.registry }}
          echo \${{ steps.image-info.outputs.ecr_repository }}
          echo \${{ steps.image-info.outputs.image_tag }}
          cd k8s-manifest-repo/overlays/dev/
          kustomize edit set image \${{ steps.login-ecr.outputs.registry}}/\${{ steps.image-info.outputs.ecr_repository }}=\${{ steps.login-ecr.outputs.registry}}/\${{ steps.image-info.outputs.ecr_repository }}:\${{ steps.image-info.outputs.image_tag }}
          cat kustomization.yaml
      - name: Commit files
        run: |
          cd k8s-manifest-repo
          git config --global user.email "github-actions@github.com"
          git config --global user.name "github-actions"
          git commit -am "Update image tag"
          git push -u origin main
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Commit &amp;amp; push to front-app-repo&lt;/p&gt;

&lt;p&gt;Commit and push the newly improved build.yaml to &lt;strong&gt;&lt;em&gt;front-app-repo&lt;/em&gt;&lt;/strong&gt; to run the GitHub Actions workflow.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend
git add .
git commit -m "Add kustomize image edit"
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(3)&lt;/strong&gt; Check GitHub Actions&lt;/p&gt;

&lt;p&gt;Check that the GitHub Actions workflow runs successfully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(4)&lt;/strong&gt; Check k8s-manifest-repo&lt;/p&gt;

&lt;p&gt;Check that the latest commit in &lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt; was produced by the GitHub Actions workflow of &lt;strong&gt;&lt;em&gt;front-app-repo&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(5)&lt;/strong&gt; Check ArgoCD&lt;/p&gt;

&lt;p&gt;Return to the ArgoCD UI and navigate to &lt;strong&gt;Applications &amp;gt; eksworkshop-cd-pipeline&lt;/strong&gt;. The &lt;strong&gt;CURRENT SYNC STATUS&lt;/strong&gt; is now &lt;strong&gt;OutOfSync&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To run the sync job automatically, we need to enable &lt;strong&gt;Auto-Sync&lt;/strong&gt;. To do so, go to &lt;strong&gt;APP DETAILS&lt;/strong&gt; and click &lt;strong&gt;ENABLE AUTO-SYNC&lt;/strong&gt;.&lt;/p&gt;
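If you prefer the declarative form, the same Auto-Sync setting corresponds to a syncPolicy block in the Application spec. This is a sketch based on ArgoCD's documented Application layout, not something the workshop itself applies:

```yaml
# Declarative equivalent of clicking ENABLE AUTO-SYNC in the UI.
spec:
  syncPolicy:
    automated: {}
```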

&lt;p&gt;As a result, the commit in &lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt; will be deployed to the EKS cluster.&lt;/p&gt;

&lt;p&gt;To confirm that the new image tag is deployed, check the &lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt; commit history for the image tag, and compare it with the image tag used by frontend-deployment in ArgoCD.&lt;/p&gt;

&lt;p&gt;Note: you can see detailed information by clicking the &lt;strong&gt;pod&lt;/strong&gt; whose name starts with demo-frontend-. To get there, first navigate to &lt;strong&gt;Applications &amp;gt; eksworkshop-cd-pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;From now on, whenever a commit lands in &lt;strong&gt;&lt;em&gt;k8s-manifest-repo&lt;/em&gt;&lt;/strong&gt;, ArgoCD automatically deploys it to the EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. Check CI/CD pipeline working from end to end&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s test the whole GitOps pipeline we’ve built by making a code change in the front-end application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Change code&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;Cloud9&lt;/strong&gt;, move to &lt;strong&gt;amazon-eks-frontend/src/&lt;/strong&gt;, and open App.js in the folder tree of the left pane.&lt;/p&gt;

&lt;p&gt;Replace the text at &lt;strong&gt;line 67&lt;/strong&gt; with EKS DEMO Blog version 1 and save the file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  return (
    &amp;lt;div className={classes.root}&amp;gt;
      &amp;lt;AppBar position="static" style={{ background: '#2E3B55' }}&amp;gt;
        &amp;lt;Toolbar&amp;gt;
          &amp;lt;IconButton edge="start" className={classes.menuButton} color="inherit" aria-label="menu"&amp;gt;
            &amp;lt;CloudIcon /&amp;gt;
          &amp;lt;/IconButton&amp;gt;
          &amp;lt;Typography
            variant="h6"
            align="center"
            className={classes.title}
          &amp;gt;
            EKS DEMO Blog version 1
          &amp;lt;/Typography&amp;gt;
          {new Date().toLocaleTimeString()}
        &amp;lt;/Toolbar&amp;gt;
      &amp;lt;/AppBar&amp;gt;
      &amp;lt;br/&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Commit and push&lt;/p&gt;

&lt;p&gt;Commit and push changed code to git repository.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend
git add .
git commit -m "Add new blog version"
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(3)&lt;/strong&gt; Check CI/CD pipeline and application&lt;/p&gt;

&lt;p&gt;Wait until the ArgoCD sync job completes as below.&lt;/p&gt;

&lt;p&gt;When everything is in place, open the application at the URL printed by the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo http://$(kubectl get ingress/frontend-ingress -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It should show the new code change as below.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD with security
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/200-cicd-security#improve-cicd-pipeline-with-security-implementation" rel="noopener noreferrer"&gt;Improve CI/CD pipeline with security implementation&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Prior to deploying the Kubernetes manifest files to the EKS cluster, supplementary steps need to be added to prevent security and misconfiguration issues, using both &lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;&lt;strong&gt;Checkov&lt;/strong&gt;&lt;/a&gt; and &lt;a href="https://github.com/aquasecurity/trivy" rel="noopener noreferrer"&gt;&lt;strong&gt;Trivy&lt;/strong&gt;&lt;/a&gt;. We will also use a separate ArgoCD account instead of the admin user from the previous lab, applying ArgoCD RBAC rules to secure ArgoCD and, ultimately, the EKS cluster.&lt;/p&gt;

&lt;p&gt;For this, we will improve the CD (Continuous Deployment) process as follows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;On an application code change, a new Docker image with a new image tag is created&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trivy scans the new image for security vulnerabilities&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kustomize generates the Kubernetes manifest files with the new image information&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Checkov inspects the Kubernetes manifest files for security vulnerabilities and misconfiguration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If no issues are found, ArgoCD starts a sync job to deploy&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of the steps above runs in a different GitHub Actions workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;1~2 : &lt;strong&gt;GitHub Actions workflow of the application repository&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;3~5 : &lt;strong&gt;GitHub Actions workflow of the k8s manifest repository&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will carry out the following steps to build it up.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Improve the &lt;strong&gt;GitHub Actions build script&lt;/strong&gt; in the &lt;strong&gt;frontend application repository&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improve the &lt;strong&gt;GitHub Actions build script&lt;/strong&gt; in the &lt;strong&gt;k8s manifest repository&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deactivate &lt;strong&gt;ArgoCD&lt;/strong&gt; &lt;strong&gt;&lt;em&gt;AUTO-SYNC&lt;/em&gt;&lt;/strong&gt; (&lt;strong&gt;&lt;em&gt;Manual&lt;/em&gt;&lt;/strong&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create new &lt;strong&gt;ArgoCD&lt;/strong&gt; &lt;strong&gt;account&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create &lt;strong&gt;&lt;em&gt;auth-token&lt;/em&gt;&lt;/strong&gt; for new &lt;strong&gt;ArgoCD&lt;/strong&gt; &lt;strong&gt;account&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure &lt;strong&gt;Argo RBAC&lt;/strong&gt; for new &lt;strong&gt;ArgoCD&lt;/strong&gt; &lt;strong&gt;account&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;1. Improve GitHub Actions build script in frontend application repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need an additional step to ensure that the newly created Docker image has no security vulnerabilities before pushing it to ECR. For this, we will modify build.yaml to add an image-scanning step using &lt;a href="https://github.com/aquasecurity/trivy" rel="noopener noreferrer"&gt;&lt;strong&gt;Trivy&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Run this code to replace build.yaml in the frontend application repo.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend/.github/workflows
cat &amp;lt;&amp;lt;EOF&amp;gt; build.yaml
name: Build Front

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout source code
        uses: actions/checkout@v2

      - name: Check Node v
        run: node -v

      - name: Build front
        run: |
          npm install
          npm run build

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: \${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: \${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: $AWS_REGION

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Get image tag (version)
        id: image
        run: |
          VERSION=\$(echo \${{ github.sha }} | cut -c1-8)
          echo VERSION=\$VERSION
          echo "::set-output name=version::\$VERSION"

      - name: Build, tag, and push image to Amazon ECR
        id: image-info
        env:
          ECR_REGISTRY: \${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: demo-frontend
          IMAGE_TAG: \${{ steps.image.outputs.version }}
        run: |
          echo "::set-output name=ecr_repository::\$ECR_REPOSITORY"
          echo "::set-output name=image_tag::\$IMAGE_TAG"
          docker build -t \$ECR_REGISTRY/\$ECR_REPOSITORY:\$IMAGE_TAG .

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: '\${{ steps.login-ecr.outputs.registry}}/\${{ steps.image-info.outputs.ecr_repository }}:\${{ steps.image-info.outputs.image_tag }}'
          format: 'table'
          exit-code: '0'
          ignore-unfixed: true
          vuln-type: 'os,library'
          severity: 'CRITICAL,HIGH'

      - name: Push image to Amazon ECR
        run: |
          docker push \${{ steps.login-ecr.outputs.registry}}/\${{ steps.image-info.outputs.ecr_repository }}:\${{ steps.image-info.outputs.image_tag }}

      - name: Setup Kustomize
        uses: imranismail/setup-kustomize@v1

      - name: Checkout kustomize repository
        uses: actions/checkout@v2
        with:
          repository: $GITHUB_USERNAME/k8s-manifest-repo
          ref: main
          token: \${{ secrets.ACTION_TOKEN }}
          path: k8s-manifest-repo

      - name: Update Kubernetes resources
        run: |
          echo \${{ steps.login-ecr.outputs.registry }}
          echo \${{ steps.image-info.outputs.ecr_repository }}
          echo \${{ steps.image-info.outputs.image_tag }}
          cd k8s-manifest-repo/overlays/dev/
          kustomize edit set image \${{ steps.login-ecr.outputs.registry}}/\${{ steps.image-info.outputs.ecr_repository }}=\${{ steps.login-ecr.outputs.registry}}/\${{ steps.image-info.outputs.ecr_repository }}:\${{ steps.image-info.outputs.image_tag }}
          cat kustomization.yaml

      - name: Commit files
        run: |
          cd k8s-manifest-repo
          git config --global user.email "github-actions@github.com"
          git config --global user.name "github-actions"
          git commit -am "Update image tag"
          git push -u origin main

EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
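As a quick sanity check, the "Get image tag" step above shortens the commit SHA to its first 8 characters. You can reproduce that behavior locally; the SHA below is a made-up example, not a real commit:

```shell
# Simulate the GitHub Actions "Get image tag" step locally.
GITHUB_SHA="f4ca9d6e1b2a3c4d5e6f708192a3b4c5d6e7f809"  # example value only
VERSION=$(echo "$GITHUB_SHA" | cut -c1-8)               # keep first 8 characters
echo "$VERSION"                                         # prints f4ca9d6e
```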

&lt;p&gt;Commit &amp;amp; push:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
git commit -m "Add Image Scanning in build.yaml"
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Afterwards, the GitHub Actions workflow runs and shows the result of the image scan.&lt;/p&gt;

&lt;p&gt;We intentionally let the GitHub Actions workflow continue and complete even if the image scan step finds issues. In the real world, you might want to stop the workflow when the image scan fails. To do that, set exit-code: '1' instead of exit-code: '0' in the &lt;strong&gt;Trivy&lt;/strong&gt; step of build.yaml. For more details, please refer to the &lt;a href="https://github.com/aquasecurity/trivy-action" rel="noopener noreferrer"&gt;Trivy documentation&lt;/a&gt;.&lt;/p&gt;
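For example, the same Trivy step from build.yaml configured to fail the workflow on findings would look like this:

```yaml
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: '${{ steps.login-ecr.outputs.registry }}/${{ steps.image-info.outputs.ecr_repository }}:${{ steps.image-info.outputs.image_tag }}'
          format: 'table'
          exit-code: '1'   # stop the workflow when vulnerabilities are found
          ignore-unfixed: true
          vuln-type: 'os,library'
          severity: 'CRITICAL,HIGH'
```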

&lt;p&gt;&lt;strong&gt;2. Improve GitHub Actions build script in k8s manifest repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run this code to create build.yaml for the k8s manifest repo.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/k8s-manifest-repo
mkdir -p ./.github/workflows
cd ~/environment/k8s-manifest-repo/.github/workflows
cat &amp;lt;&amp;lt;EOF&amp;gt; build.yaml
name: "ArgoCD sync"
on: "push"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:

      - name: Checkout source code
        uses: actions/checkout@v2

      - name: Setup Kustomize
        uses: imranismail/setup-kustomize@v1

      - name: Build Kustomize
        run: |
          pwd
          mkdir kustomize-build
          kustomize build ./overlays/dev &amp;gt; ./kustomize-build/kustomize-build-output.yaml
          ls -rlt
          cd kustomize-build
          cat kustomize-build-output.yaml

      - name: Run Checkov action
        id: checkov
        uses: bridgecrewio/checkov-action@master
        with:
          directory: kustomize-build/
          framework: kubernetes

      - name: Install ArgoCD and execute Sync in ArgoCD
        run: |
          curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
          chmod +x /usr/local/bin/argocd
          ARGO_SERVER=$ARGOCD_SERVER
          argocd app sync eksworkshop-cd-pipeline --auth-token \${{ secrets.ARGOCD_TOKEN }} --server $ARGOCD_SERVER --insecure

EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;3. Deactivate ArgoCD &lt;em&gt;AUTO-SYNC&lt;/em&gt; (&lt;em&gt;Manual&lt;/em&gt;)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;Application &amp;gt; eksworkshop-cd-pipeline&lt;/strong&gt; and click &lt;strong&gt;APP DETAILS&lt;/strong&gt;. Then change &lt;strong&gt;SYNC POLICY&lt;/strong&gt; to &lt;strong&gt;DISABLE AUTO-SYNC&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Create new ArgoCD account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To increase the security of ArgoCD, we will use a separate ArgoCD account instead of the admin user, and add a role on top of that account.&lt;/p&gt;

&lt;p&gt;The new ArgoCD account for the CI/CD pipeline is named devops.&lt;/p&gt;

&lt;p&gt;ArgoCD allows us to add accounts via the ConfigMap that ArgoCD uses in the cluster.&lt;/p&gt;

&lt;p&gt;Run this code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n argocd edit configmap argocd-cm -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, add this code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data:
  accounts.devops: apiKey,login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The final ConfigMap should look like this (creationTimestamp, resourceVersion, etc. will differ depending on your environment).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
data:
  accounts.devops: apiKey,login
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"argocd-cm","app.kubernetes.io/part-of":"argocd"},"name":"argocd-cm","namespace":"argocd"}}
  creationTimestamp: "2021-07-28T07:45:53Z"
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
  namespace: argocd
  resourceVersion: "153620981"
  selfLink: /api/v1/namespaces/argocd/configmaps/argocd-cm
  uid: a8bb80e7-577c-4f10-b3de-359e83ccee20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, type :wq! to save and exit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Create &lt;em&gt;auth-token&lt;/em&gt; for new ArgoCD account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s generate an &lt;em&gt;auth-token&lt;/em&gt; for the new ArgoCD account, devops. It will serve as the authentication token when we make API calls to ArgoCD, so it is a different credential from the login password for the ArgoCD UI.&lt;/p&gt;

&lt;p&gt;Run this command and make a note of the output so that &lt;strong&gt;we can continue to use it&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;argocd account generate-token --account devops
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you encounter “Failed to establish connection”, log in to ArgoCD with the following command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;argocd login $ARGOCD_SERVER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To save the token value in &lt;strong&gt;Secrets&lt;/strong&gt; of the Kubernetes manifest repository, go to &lt;strong&gt;Settings &amp;gt; Secrets&lt;/strong&gt; and click &lt;strong&gt;New repository secret&lt;/strong&gt;. Enter &lt;strong&gt;&lt;em&gt;ARGOCD_TOKEN&lt;/em&gt;&lt;/strong&gt; as the &lt;strong&gt;Name&lt;/strong&gt; and the saved token value as the &lt;strong&gt;Value&lt;/strong&gt;, then click &lt;strong&gt;Add secret&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Configure Argo RBAC for new ArgoCD account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The new ArgoCD account we’ve created has no permission to make the sync API call, so we need to grant it permissions according to ArgoCD RBAC.&lt;/p&gt;

&lt;p&gt;To grant permission, run this command to modify the argocd-rbac-cm ConfigMap.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n argocd edit configmap argocd-rbac-cm -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add this content. For lab purposes only, we grant broad permissions, so please be mindful when doing this in a production environment.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data:
  policy.csv: |
    p, role:devops, applications, *, */*, allow
    p, role:devops, clusters, get, *, allow
    p, role:devops, repositories, get, *, allow
    p, role:devops, repositories, create, *, allow
    p, role:devops, repositories, update, *, allow
    p, role:devops, repositories, delete, *, allow

    g, devops, role:devops
  policy.default: role:readonly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The final ConfigMap should look like this after adding the content (creationTimestamp, resourceVersion, etc. will differ depending on your environment).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
data:
  policy.csv: |
    p, role:devops, applications, *, */*, allow
    p, role:devops, clusters, get, *, allow
    p, role:devops, repositories, get, *, allow
    p, role:devops, repositories, create, *, allow
    p, role:devops, repositories, update, *, allow
    p, role:devops, repositories, delete, *, allow

    g, devops, role:devops
  policy.default: role:readonly
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"argocd-rbac-cm","app.kubernetes.io/part-of":"argocd"},"name":"argocd-rbac-cm","namespace":"argocd"}}
  creationTimestamp: "2021-07-28T07:45:53Z"
  labels:
    app.kubernetes.io/name: argocd-rbac-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-rbac-cm
  namespace: argocd
  resourceVersion: "153629591"
  selfLink: /api/v1/namespaces/argocd/configmaps/argocd-rbac-cm
  uid: 1fe0d735-f3a0-4867-9357-7a9e766fef22
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;7. Check new implementation working&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Commit and push the code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/k8s-manifest-repo
git add .
git commit -m "Add github action with ArgoCD"
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check whether the GitHub Actions workflow completes and triggers the ArgoCD deployment process.&lt;/p&gt;

&lt;p&gt;The GitHub Actions workflow fails with an error as below. This is the result of &lt;strong&gt;Checkov&lt;/strong&gt;’s static analysis of the Kubernetes manifest files; the output includes warning messages based on security best practices predefined in Checkov.&lt;/p&gt;

&lt;p&gt;Since we confirmed that &lt;strong&gt;Checkov&lt;/strong&gt; works as expected, we will narrow the scope of the analysis for lab purposes.&lt;/p&gt;

&lt;p&gt;Run this code to limit the &lt;strong&gt;Checkov&lt;/strong&gt; analysis to check CKV_K8S_17.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/k8s-manifest-repo/.github/workflows
cat &amp;lt;&amp;lt;EOF&amp;gt; build.yaml
name: "ArgoCD sync"
on: "push"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:

      - name: Checkout source code
        uses: actions/checkout@v2

      - name: Setup Kustomize
        uses: imranismail/setup-kustomize@v1

      - name: Build Kustomize
        run: |
          pwd
          mkdir kustomize-build
          kustomize build ./overlays/dev &amp;gt; ./kustomize-build/kustomize-build-output.yaml
          ls -rlt
          cd kustomize-build
          cat kustomize-build-output.yaml

      - name: Run Checkov action
        id: checkov
        uses: bridgecrewio/checkov-action@master
        with:
          directory: kustomize-build/
          framework: kubernetes
          check: CKV_K8S_17

      - name: Install ArgoCD and execute Sync in ArgoCD
        run: |
          curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
          chmod +x /usr/local/bin/argocd
          ARGO_SERVER=$ARGOCD_SERVER
          argocd app sync eksworkshop-cd-pipeline --auth-token \${{ secrets.ARGOCD_TOKEN }} --server $ARGOCD_SERVER --insecure

EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Commit and push code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/k8s-manifest-repo
git add .
git commit -m "Change Checkov check scope"
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check that the GitHub Actions workflow completes and triggers the ArgoCD deployment process. This time, the ArgoCD sync completes without interruption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Test out end-to-end pipeline working&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s test the end-to-end pipeline with a code change in the front-end application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; front-end application code change&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;Cloud9&lt;/strong&gt;, move to &lt;strong&gt;amazon-eks-frontend/src/&lt;/strong&gt;, and open App.js in the folder tree of the left pane.&lt;/p&gt;

&lt;p&gt;Replace the text at &lt;strong&gt;line 67&lt;/strong&gt; with EKS DEMO Blog version 2 and save the file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  return (
    &amp;lt;div className={classes.root}&amp;gt;
      &amp;lt;AppBar position="static" style={{ background: '#2E3B55' }}&amp;gt;
        &amp;lt;Toolbar&amp;gt;
          &amp;lt;IconButton edge="start" className={classes.menuButton} color="inherit" aria-label="menu"&amp;gt;
            &amp;lt;CloudIcon /&amp;gt;
          &amp;lt;/IconButton&amp;gt;
          &amp;lt;Typography
            variant="h6"
            align="center"
            className={classes.title}
          &amp;gt;
            EKS DEMO Blog version 2
          &amp;lt;/Typography&amp;gt;
          {new Date().toLocaleTimeString()}
        &amp;lt;/Toolbar&amp;gt;
      &amp;lt;/AppBar&amp;gt;
      &amp;lt;br/&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; commit and push&lt;/p&gt;

&lt;p&gt;Commit and push changed code to git repository.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/amazon-eks-frontend
git add .
git commit -m "Add new blog version 2"
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check that the GitHub Actions workflow completes and triggers the ArgoCD deployment process. This time, the ArgoCD sync completes without interruption.&lt;/p&gt;

&lt;p&gt;After the end-to-end pipeline finishes successfully, open the application URL from this command in your local browser.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo http://$(kubectl get ingress/backend-ingress -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It should show &lt;strong&gt;&lt;em&gt;EKS DEMO Blog version 2&lt;/em&gt;&lt;/strong&gt; at the top of the page.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create CI/CD with HELM &amp;amp; CDK
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/300-cicd#build-up-cicd-pipeline" rel="noopener noreferrer"&gt;Build up CI/CD pipeline&lt;/a&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/300-cicd#1.-deploy-cdk-stack" rel="noopener noreferrer"&gt;1. Deploy CDK stack&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Deploy the pipeline with &lt;a href="https://aws.amazon.com/cdk/" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt; for the workshop. The following resources will be deployed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;CodeCommit-App-Repo&lt;/em&gt;&lt;/strong&gt;: App source repo&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;CodeCommit-Helm-Repo&lt;/em&gt;&lt;/strong&gt;: Helm source repo&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;CodeBuild-App&lt;/em&gt;&lt;/strong&gt;: Performs the app build and pushes the image to ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;CodeBuild-Helm&lt;/em&gt;&lt;/strong&gt;: Updates the image tag and pushes to the Helm repo&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;CodePipeline&lt;/em&gt;&lt;/strong&gt;: Pipeline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;ECR-Repo&lt;/em&gt;&lt;/strong&gt;: Container image repo for storing the app container image&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;(0)&lt;/strong&gt; Prerequisite&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To install the AWS CDK, follow &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install" rel="noopener noreferrer"&gt;these instructions&lt;/a&gt;. The latest version is recommended.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Check the AWS account:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the AWS region:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
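&lt;p&gt;The region lookup above parses the EC2 instance-identity document with jq. The extraction step can be checked locally against a sample document; the values below are hypothetical, since on an instance the document comes from the metadata endpoint 169.254.169.254.&lt;/p&gt;

```shell
# Hypothetical instance-identity document (a real one is served by the
# EC2 metadata endpoint; these values are for illustration only).
DOC='{"region":"us-east-1","accountId":"123456789012"}'

# Same jq filter as in the prerequisite step.
echo "$DOC" | jq -r '.region'
```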

&lt;p&gt;If the account configured in your AWS profile differs from the workshop account, please re-configure the account ID to match the workshop account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Download the CDK source for deploying the infrastructure and install the Python packages.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl 'https://static.us-east-1.prod.workshops.aws/public/d0e72c6e-904d-4933-beec-4c908d928217/static/images/110-cicd/code-pipeline-cdk.zip' --output cdk.zip
unzip cdk.zip
cd cdk
pip install .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you use Windows, please download the source from the following link: &lt;a href="https://static.us-east-1.prod.workshops.aws/public/d0e72c6e-904d-4933-beec-4c908d928217/static/images/110-cicd/code-pipeline-cdk.zip" rel="noopener noreferrer"&gt;CDK Source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; CDK Bootstrap and deploy&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk bootstrap
cdk synth
cdk deploy --require-approval never
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(3)&lt;/strong&gt; Check the deployed resources.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws codecommit list-repositories --region ${AWS_REGION}
aws codepipeline list-pipelines --region ${AWS_REGION}
aws codebuild list-projects --region ${AWS_REGION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/300-cicd#2.-deploy-application" rel="noopener noreferrer"&gt;2. Deploy Application&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;(0)&lt;/strong&gt; Prerequisite for CodeCommit.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install git-remote-codecommit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If you use the Instance Profile in Cloud9, you can access CodeCommit through the credential helper.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git config --global user.email "test@aaa.com" # put your email
git config --global --replace-all credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you access with an &lt;a href="https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html" rel="noopener noreferrer"&gt;AWS IAM user&lt;/a&gt;, generate a username and password from the IAM user’s HTTPS Git credentials.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Download the application source and unzip it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl 'https://static.us-east-1.prod.workshops.aws/public/d0e72c6e-904d-4933-beec-4c908d928217/static/images/110-cicd/app.zip' --output app.zip
unzip app.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you use Windows, please download the source from the following link: &lt;a href="https://static.us-east-1.prod.workshops.aws/public/d0e72c6e-904d-4933-beec-4c908d928217/static/images/110-cicd/app.zip" rel="noopener noreferrer"&gt;App Source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Push the source code into the eks-workshop-app repository.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export APP_CODECOMMIT_URL=$(aws codecommit get-repository --repository-name eks-workshop-app --region ${AWS_REGION} | grep -o '"cloneUrlHttp": "[^"]*'|grep -o '[^"]*$')

git clone $APP_CODECOMMIT_URL
cd eks-workshop-app

cp -R ../app/* ./
git add .
git commit -m "init app Source"
git push origin master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
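&lt;p&gt;The chained greps above are fragile against formatting changes in the CLI output. A jq-based sketch, assuming jq is installed and using a hypothetical sample response, extracts the same field more robustly:&lt;/p&gt;

```shell
# Hypothetical get-repository response; the real one comes from
# `aws codecommit get-repository --repository-name eks-workshop-app`.
SAMPLE='{"repositoryMetadata":{"repositoryName":"eks-workshop-app","cloneUrlHttp":"https://git-codecommit.us-east-1.amazonaws.com/v1/repos/eks-workshop-app"}}'

# One jq filter replaces the two chained greps.
echo "$SAMPLE" | jq -r '.repositoryMetadata.cloneUrlHttp'
```

&lt;p&gt;Alternatively, the AWS CLI’s own --query 'repositoryMetadata.cloneUrlHttp' --output text option avoids post-processing entirely.&lt;/p&gt;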
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/300-cicd#3.-deploy-helm" rel="noopener noreferrer"&gt;3. Deploy Helm&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Download the Helm source and unzip it.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl 'https://static.us-east-1.prod.workshops.aws/public/d0e72c6e-904d-4933-beec-4c908d928217/static/images/110-cicd/helm.zip' --output helm.zip
unzip helm.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you use Windows, please download the source from the following link: &lt;a href="https://static.us-east-1.prod.workshops.aws/public/d0e72c6e-904d-4933-beec-4c908d928217/static/images/110-cicd/helm.zip" rel="noopener noreferrer"&gt;Helm Source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Push to the Helm CodeCommit repo.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export HELM_CODECOMMIT_URL=$(aws codecommit get-repository --repository-name eks-workshop-helm --region ${AWS_REGION} | grep -o '"cloneUrlHttp": "[^"]*'|grep -o '[^"]*$')
cd helm
git init
git checkout -b master
git add .
git commit -m "init"
git remote add origin $HELM_CODECOMMIT_URL
git push origin master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/300-cicd#4.-check-code-pipeline" rel="noopener noreferrer"&gt;4. Check Code Pipeline&lt;/a&gt;
&lt;/h2&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/300-cicd#5.-install-argocd" rel="noopener noreferrer"&gt;5. Install ArgoCD&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Install ArgoCD&lt;/p&gt;

&lt;p&gt;Install ArgoCD into the EKS cluster with the commands below.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install the ArgoCD CLI with the commands below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment
VERSION=$(curl --silent "https://api.github.com/repos/argoproj/argo-cd/releases/latest" | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/')

sudo curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/$VERSION/argocd-linux-amd64
sudo chmod +x /usr/local/bin/argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On macOS, install with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Exposing ArgoCD publicly is not recommended, but we do so here for the workshop.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check the ArgoCD server host:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ARGOCD_SERVER=`kubectl get svc argocd-server -n argocd -o json | jq --raw-output .status.loadBalancer.ingress[0].hostname`
echo $ARGOCD_SERVER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The ArgoCD username is admin; get the password with the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARGO_PWD=`kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`
echo $ARGO_PWD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
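&lt;p&gt;Kubernetes stores secret values base64-encoded, which is why the command above pipes through base64 -d. A minimal local sketch of that decoding step, using a made-up password value:&lt;/p&gt;

```shell
# Encode a made-up password the way Kubernetes stores secret data ...
ENCODED=$(printf 'my-initial-password' | base64)

# ... then decode it exactly as the kubectl pipeline above does.
printf '%s' "$ENCODED" | base64 -d
```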

&lt;p&gt;Open $ARGOCD_SERVER in a browser and log in with admin and $ARGO_PWD.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/300-cicd#6.-configure-argocd" rel="noopener noreferrer"&gt;6. Configure ArgoCD&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;(0)&lt;/strong&gt; Configure an IAM user for ArgoCD access&lt;/p&gt;

&lt;p&gt;Create a user from the IAM console.&lt;/p&gt;

&lt;p&gt;Set the username and check the Access key checkbox.&lt;/p&gt;

&lt;p&gt;Choose the AWSCodeCommitPowerUser policy.&lt;/p&gt;

&lt;p&gt;Then return to the IAM user console, click the Security credentials tab, and generate credentials under HTTPS Git credentials for AWS CodeCommit.&lt;/p&gt;

&lt;p&gt;Download the credentials or note them down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Configure ArgoCD&lt;/p&gt;

&lt;p&gt;Log in to ArgoCD, click the Settings icon on the left side, and click the Repositories menu.&lt;/p&gt;

&lt;p&gt;Click the Connect Repo button: &lt;strong&gt;Method&lt;/strong&gt; -&amp;gt; VIA HTTPS, &lt;strong&gt;Project&lt;/strong&gt; -&amp;gt; default&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repository URL&lt;/strong&gt; -&amp;gt; the &lt;strong&gt;CodeCommit Helm repo&lt;/strong&gt;’s HTTPS address, then enter the &lt;strong&gt;username and password&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cvm1r454tv7js99i5v6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cvm1r454tv7js99i5v6.png" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the New App button in the Applications tab: &lt;strong&gt;Application Name&lt;/strong&gt; -&amp;gt; any name, &lt;strong&gt;Project&lt;/strong&gt; -&amp;gt; default&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sync Policy&lt;/strong&gt; -&amp;gt; AUTOMATIC, &lt;strong&gt;Repository URL&lt;/strong&gt; -&amp;gt; the repository created earlier, &lt;strong&gt;Path&lt;/strong&gt; -&amp;gt; . , &lt;strong&gt;DESTINATION&lt;/strong&gt; section’s &lt;strong&gt;Cluster URL&lt;/strong&gt; -&amp;gt; &lt;a href="https://kubernetes.default.svc" rel="noopener noreferrer"&gt;https://kubernetes.default.svc&lt;/a&gt;, &lt;strong&gt;Namespace&lt;/strong&gt; -&amp;gt; default, and click &lt;strong&gt;Create&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Check ArgoCD&lt;/p&gt;

&lt;p&gt;Check the deployment status in the ArgoCD console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Applications &amp;gt; APP Name&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After the new pipeline run, check the image tag from the &lt;strong&gt;&lt;em&gt;Helm&lt;/em&gt;&lt;/strong&gt; repo’s commit history.&lt;/p&gt;

&lt;p&gt;Then check the image tag of the eks-workshop pod in the ArgoCD console.&lt;/p&gt;

&lt;p&gt;Move to &lt;strong&gt;Applications &amp;gt; eks-workshop&lt;/strong&gt;, click the eks-workshop &lt;strong&gt;pod&lt;/strong&gt;, and check its status.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Helm repo&lt;/strong&gt; and ArgoCD will be continuously synced whenever a commit occurs in the &lt;strong&gt;Helm repo&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/en-US/110-cicd/300-cicd#7.-check-pipeline" rel="noopener noreferrer"&gt;7. Check Pipeline&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Commit the Java code and check the deployed app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Create source code&lt;/p&gt;

&lt;p&gt;Create &lt;strong&gt;app/src/main/java/com/aws/samples/HelloWorldController.java&lt;/strong&gt; from the Cloud9 console.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.aws.samples;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.servlet.ModelAndView;

@Controller

public class HelloWorldController {

    @GetMapping("/hello-world")

    public ModelAndView hello() {

        ModelAndView modelAndView = new ModelAndView("hello-world");
        modelAndView.addObject("str", "Hello World");
        return modelAndView;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create &lt;strong&gt;app/src/main/resources/templates/hello-world.html&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"
      xmlns="http://www.w3.org/1999/xhtml"
      xmlns:th="http://www.thymeleaf.org"&amp;gt;
&amp;lt;head&amp;gt;
    &amp;lt;meta charset="utf-8"&amp;gt;
    &amp;lt;meta content="width=device-width, initial-scale=1" name="viewport"/&amp;gt;
    &amp;lt;title&amp;gt;SampleApp&amp;lt;/title&amp;gt;
    &amp;lt;link href="/favicon.ico" rel="icon"&amp;gt;
    &amp;lt;link href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.1/dist/css/bootstrap.min.css" rel="stylesheet"&amp;gt;
    &amp;lt;link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.3/css/all.css" rel="stylesheet"
          type="text/css"/&amp;gt;
    &amp;lt;script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"
            integrity="sha512-894YE6QWD5I59HgZOGReFYm4dnWc1Qt5NtvYSaNcOP+u1T9qYdvdihz0PPSiiqn/+/3e7Jo4EaG7TubfWGUrMQ=="
            crossorigin="anonymous" referrerpolicy="no-referrer"&amp;gt;&amp;lt;/script&amp;gt;
    &amp;lt;script src="https://cdn.jsdelivr.net/npm/bootstrap@4.6.1/dist/js/bootstrap.min.js"
            integrity="sha256-SyTu6CwrfOhaznYZPoolVw2rxoY7lKYKQvqbtqN93HI=" crossorigin="anonymous"&amp;gt;&amp;lt;/script&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;div&amp;gt;
    &amp;lt;div class="container"&amp;gt;
        &amp;lt;h1 style="text-align: center; margin-top: 10px" th:text=" ${str} + '&amp;amp;#127811;'"&amp;gt;&amp;lt;/h1&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/div&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Commit and push&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd app
git add .
git commit -m "Add hello-world page"
git push origin master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;(3)&lt;/strong&gt; Check the result&lt;/p&gt;

&lt;p&gt;Check the CodePipeline status in the AWS console.&lt;/p&gt;

&lt;p&gt;Check the ArgoCD console.&lt;/p&gt;

&lt;p&gt;Access the sample app:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo http://$(kubectl get ingress/eks-workshop-workshop-example-app -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the deployment succeeded, the Hello-world page is displayed normally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optional : IRSA setting for APP Cluster Menu&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Policy Generate&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt; eks-workshop-test-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "eks:DescribeCluster",
                "eks:ListClusters"
            ],
            "Resource": "*"
        }
    ]
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create the policy:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-policy \
    --policy-name EKSWorkshopTestPolicy \
    --policy-document file://eks-workshop-test-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check the OIDC URL:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export OIDC_URL=$(aws eks describe-cluster --name [my-cluster] --query "cluster.identity.oidc.issuer" --output text)
export ACCOUNT_ID=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.accountId')
export FEDERATED=arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_URL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create the trust policy (replace the example OIDC issuer in the condition keys with your own cluster’s issuer):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt;eks-workshop-test-trust-policy.json &amp;lt;&amp;lt;EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "$FEDERATED"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com",
                    "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:default:eks-workshop-test-role"
                }
            }
        }
    ]
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
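&lt;p&gt;Note that the trust policy’s condition keys use the OIDC issuer host without the https:// scheme, while $FEDERATED embeds the full issuer path. A small sketch of deriving that host from $OIDC_URL, using the documentation’s example issuer rather than a real cluster:&lt;/p&gt;

```shell
# Example issuer URL; a real one comes from `aws eks describe-cluster`.
OIDC_URL="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

# Strip the scheme to get the provider host used in the condition keys.
OIDC_PROVIDER=${OIDC_URL#https://}
echo "$OIDC_PROVIDER"
```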

&lt;p&gt;Create IAM Role&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-role \
  --role-name eks-workshop-test-role \
  --assume-role-policy-document file://"eks-workshop-test-trust-policy.json"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Attach policy to role&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam attach-role-policy \
  --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/EKSWorkshopTestPolicy \
  --role-name eks-workshop-test-role
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Edit the role-arn in the values YAML and push it to the Helm repo.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/eks-workshop-test-role
  name: workshop-example-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
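&lt;p&gt;The ACCOUNT_ID placeholder in the annotation must be replaced with your real account ID before pushing. A sed-based sketch of that substitution on the single annotation line, with a fictitious account ID:&lt;/p&gt;

```shell
# Fictitious account ID; use your own from `aws sts get-caller-identity`.
ACCOUNT_ID=123456789012

# The annotation line from the values YAML, with the placeholder substituted.
LINE='eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/eks-workshop-test-role'
echo "$LINE" | sed "s/ACCOUNT_ID/$ACCOUNT_ID/"
```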

&lt;p&gt;&lt;strong&gt;Pull the recent commits from the Helm repo, then commit and push:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd helm
git pull
git add .
git commit -m "update irsa role arn"
git push origin master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If a conflict occurs, resolve it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git pull
git merge origin/master
git push origin master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check the deployment in ArgoCD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean up resources
&lt;/h2&gt;

&lt;p&gt;At the end of this workshop, delete the resources you used to avoid additional costs to your AWS account.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Delete the Ingress resources. Perform the commands in the folder where the YAML files are located (/home/ec2-user/environment/manifests).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/manifests/
kubectl delete -f flask-ingress.yaml
kubectl delete -f nodejs-ingress.yaml
kubectl delete -f frontend-ingress.yaml
kubectl delete -f alb-ingress-controller/v2_5_4_full.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Delete the EKS cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl delete cluster --name=eks-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;[!] Check that all related stacks have been deleted from AWS CloudFormation console.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Remove the Amazon ECR repositories. With the command below, load the list of repositories you created, then delete them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr describe-repositories
aws ecr delete-repository --repository-name demo-flask-backend --force
aws ecr delete-repository --repository-name demo-frontend --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Delete the collected metrics.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws logs describe-log-groups --query 'logGroups[*].logGroupName' --output table | \
awk '{print $2}' | grep ^/aws/containerinsights/eks-demo | while read x; do echo "deleting $x"; aws logs delete-log-group --log-group-name $x; done

aws logs describe-log-groups --query 'logGroups[*].logGroupName' --output table | \
awk '{print $2}' | grep ^/aws/eks/eks-demo | while read x; do echo "deleting $x"; aws logs delete-log-group --log-group-name $x; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete the Cloud9 IDE environment you created.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
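&lt;p&gt;The log-group cleanup above deletes every group whose name matches the grep prefix, so it is worth dry-running the filter before letting the loop call delete-log-group. A local sketch on made-up group names, with no AWS calls:&lt;/p&gt;

```shell
# Made-up log-group names standing in for the describe-log-groups output.
printf '%s\n' "/aws/containerinsights/eks-demo/application" "/aws/eks/other-cluster/cluster" |
  grep '^/aws/containerinsights/eks-demo' |
  while read -r x; do echo "would delete $x"; done
```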

&lt;h2&gt;
  
  
  Challenges Faced and Solutions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Challenge 1: Managing Kubernetes Configurations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Used eksctl and pre-configured YAML templates to manage and deploy configurations easily.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenge 2: Monitoring Application and Cluster Performance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: AWS Container Insights was critical in providing visibility into resource utilization and application health.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Challenge 3: Automating Deployment without Downtime&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: CI/CD with blue-green deployment strategies helped ensure smooth transitions with minimal downtime during updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrates the power and flexibility of Amazon EKS and AWS services in building a scalable, high-performance web application infrastructure. By using EKS for container orchestration and integrating tools like Amazon ECR, AWS Fargate, and CloudWatch Container Insights, the application achieves automated deployment, efficient resource management, and robust monitoring. This setup is ideal for production environments where scalability, resilience, and operational efficiency are crucial, equipping DevOps teams with an automated and adaptable solution that can seamlessly handle dynamic workloads and application demands.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>web</category>
      <category>learning</category>
    </item>
    <item>
      <title>Create a Continuous Delivery Pipeline : AWS Project</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Tue, 12 Nov 2024 12:40:25 +0000</pubDate>
      <link>https://forem.com/shubham_murti/create-a-continuous-delivery-pipeline-aws-project-6mn</link>
      <guid>https://forem.com/shubham_murti/create-a-continuous-delivery-pipeline-aws-project-6mn</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;This project implements a Continuous Delivery (CD) pipeline using AWS CodePipeline, AWS CodeBuild, and AWS Elastic Beanstalk to automate deployments. The setup provides a structured approach to code deployment, enhancing reliability and minimizing manual processes—ideal for agile development teams aiming for efficient, high-frequency deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CodePipeline&lt;/strong&gt;: Manages the entire deployment flow, coordinating the build, test, and deployment stages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CodeBuild&lt;/strong&gt;: Automates the building and testing of code, ensuring that each change is thoroughly validated before deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Elastic Beanstalk&lt;/strong&gt;: Manages the deployment of the web application, ensuring it is hosted in a high-availability, scalable environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EC2 with Auto Scaling&lt;/strong&gt;: Ensures scalable, fault-tolerant infrastructure to support application load, with ALB (Application Load Balancer) for even traffic distribution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Account&lt;/strong&gt;: Required for configuring the CD pipeline and associated services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Repository&lt;/strong&gt;: A source code repository such as &lt;strong&gt;GitHub&lt;/strong&gt; or &lt;strong&gt;CodeCommit&lt;/strong&gt; that integrates with CodePipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Basic CI/CD Knowledge&lt;/strong&gt;: Familiarity with the principles of continuous integration and continuous delivery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CLI&lt;/strong&gt;: To facilitate configuration and command-line management.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Problem Statement or Use Case
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt;: Manually deploying updates to web applications is error-prone, time-consuming, and can lead to inconsistent deployment practices, especially in fast-paced development environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: The Continuous Delivery Pipeline ensures that each code change is automatically built, tested, and deployed to a managed environment, reducing manual intervention. This setup allows developers to push code more frequently, get faster feedback, and minimize deployment risks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Relevance&lt;/strong&gt;: In production settings, a CD pipeline is crucial for &lt;strong&gt;agile teams&lt;/strong&gt; who need reliable, frequent deployments without impacting application uptime or performance. This project demonstrates how AWS can automate and manage application deployments, making it ideal for high-availability and fast-paced development scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjdequ0wxt6ogd848gl1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjdequ0wxt6ogd848gl1.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Component Breakdown
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Repository&lt;/strong&gt;: A source control repository (GitHub or CodeCommit) triggers the pipeline when new code is committed, enabling continuous integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CodePipeline&lt;/strong&gt;: Automates the entire CI/CD process, orchestrating each stage from source to build to deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CodeBuild&lt;/strong&gt;: Builds and tests the code. CodeBuild compiles, runs unit tests, and verifies the application to ensure it is ready for deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Elastic Beanstalk&lt;/strong&gt;: Deploys the built application onto a highly available environment with Auto Scaling capabilities, which provides a robust infrastructure layer for the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step-by-Step Implementation
&lt;/h3&gt;

&lt;h2&gt;
  
  
  Module 1: Set Up Git Repo
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Fork the starter repo
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;This tutorial assumes you have an existing GitHub account and Git installed on your computer. If you don’t have either of these two installed, you can follow these &lt;a href="https://docs.github.com/en/github/getting-started-with-github/quickstart" rel="noopener noreferrer"&gt;step-by-step instructions&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In a new browser tab, navigate to &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and make sure you are logged into your account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In that same tab, open the &lt;a href="https://github.com/aws-samples/aws-elastic-beanstalk-express-js-sample" rel="noopener noreferrer"&gt;aws-elastic-beanstalk-express-js-sample&lt;/a&gt; repo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the white Fork button on the top right corner of the screen. Next, you will see a small window asking you where you would like to fork the repo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify it is showing your account and choose Create a fork. After a few seconds, your browser will display a copy of the repo in your account under Repositories.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Push a change to your new repo
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the &lt;a href="https://github.com/aws-samples/aws-elastic-beanstalk-express-js-sample" rel="noopener noreferrer"&gt;repository&lt;/a&gt; and choose the green Code button near the top of the page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To clone the repository using HTTPS, confirm that the heading says &lt;em&gt;Clone with HTTPS.&lt;/em&gt; If not, select the Use HTTPS link.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the white button with a clipboard icon on it (to the right of the URL).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvck1mq986glw00lf1ini.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvck1mq986glw00lf1ini.png" width="720" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If you’re on a Mac or Linux computer, open your terminal. If you’re on Windows, launch Git Bash.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the terminal or Bash platform, whichever you are using, enter the following command and paste the URL you just copied in Step 2 when you clicked the clipboard icon. Be sure to change “YOUR-USERNAME” to your GitHub username. You should see a message in your terminal that starts with &lt;em&gt;Cloning into.&lt;/em&gt; This command creates a new folder that has a copy of the files from the GitHub repo.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/YOUR-USERNAME/aws-elastic-beanstalk-express-js-sample
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the new folder there is a file named app.js. Open app.js in your favorite code editor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change the message in line 5 to say something other than “Hello World!” and save the file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Go to the folder created with the name aws-elastic-beanstalk-express-js-sample/ and Commit the change with the following commands:&lt;/p&gt;

&lt;p&gt;git add app.js&lt;br&gt;
git commit -m "change message"&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Push the local changes to the remote repo hosted on GitHub with the following command. Note that you need to configure Personal access tokens (classic) under Developer Settings in GitHub for remote authentication.&lt;/p&gt;

&lt;p&gt;git push&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
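&lt;p&gt;The clone-edit-commit-push loop above can be sketched end to end. The snippet below is a minimal local simulation in which a bare repository stands in for the GitHub remote, so it runs without network access or a personal access token; with a real fork you would clone your GitHub URL instead.&lt;/p&gt;

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)
cd "$workdir"

# A local bare repository stands in for the GitHub remote.
git init -q --bare remote.git
git clone -q remote.git aws-elastic-beanstalk-express-js-sample
cd aws-elastic-beanstalk-express-js-sample
git config user.email "you@example.com"
git config user.name "Your Name"

# Stand-in for editing the greeting on line 5 of app.js.
echo 'res.send("Hello from my pipeline!");' > app.js

# Stage, commit, and push exactly as in the steps above.
git add app.js
git commit -q -m "change message"
git push -q origin HEAD
echo "pushed to $(git remote get-url origin)"
```

With a real fork, the push is what will later trigger the pipeline's source stage.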

&lt;h3&gt;
  
  
  Test your changes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In your browser window, open &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left navigation panel, under Repositories, select the one named aws-elastic-beanstalk-express-js-sample.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the app.js file. The contents of the file, including your change, should be displayed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Application architecture
&lt;/h2&gt;

&lt;p&gt;Here is what our architecture looks like right now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymqm5xqol9r9ur08kymf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymqm5xqol9r9ur08kymf.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have created a code repository containing a simple web app. This repository will serve as the starting point of our continuous delivery pipeline, so it is important that it is set up correctly and that we can push code to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Module 2: Deploy Web App
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Configure an AWS Elastic Beanstalk app
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In a new browser tab, open the &lt;a href="https://console.aws.amazon.com/elasticbeanstalk/home?region=us-west-2#/welcome" rel="noopener noreferrer"&gt;AWS Elastic Beanstalk console&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Create Application button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Web server environment under the Configure environment heading.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the text box under the heading Application name, enter &lt;em&gt;DevOpsGettingStarted&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Platform dropdown menu, under the Platform heading, select Node.js. Platform branch and Platform version will automatically populate with default selections.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that the radio button next to Sample application under the Application code heading is selected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that the radio button next to Single instance (free tier eligible) under the Presets heading is selected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Next.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3308%2F0%2An0ygHvVoW6KOYkx7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3308%2F0%2An0ygHvVoW6KOYkx7.png" width="800" height="1992"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On the Configure service access screen, choose Use an existing service role for Service Role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the EC2 instance profile dropdown list, the values displayed may vary depending on whether your account has previously created a new environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose one of the following, based on the values displayed in your list.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If &lt;em&gt;aws-elasticbeanstalk-ec2-role&lt;/em&gt; displays in the dropdown list, select it from the EC2 instance profile dropdown list.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If another value displays in the list, and it’s the default EC2 instance profile intended for your environments, select it from the EC2 instance profile dropdown list.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the EC2 instance profile dropdown list doesn’t list any values to choose from, expand the procedure that follows, &lt;em&gt;Create IAM Role for EC2 instance profile&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Complete the steps in &lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-iam-instance-profile.html" rel="noopener noreferrer"&gt;Create IAM Role for EC2 instance profile&lt;/a&gt; to create an IAM Role that you can subsequently select for the EC2 instance profile. Then return to this step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now that you’ve created an IAM Role, and refreshed the list, it displays as a choice in the dropdown list. Select the IAM Role you just created from the EC2 instance profile dropdown list.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Choose Skip to Review on the Configure service access page.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This will select the default values for this step and skip the optional steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhqaaw2bh98qimmlkigf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhqaaw2bh98qimmlkigf.png" width="800" height="680"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The Review page displays a summary of all your choices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Submit at the bottom of the page to initialize the creation of your new environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrxeprxw8mhba5neifp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrxeprxw8mhba5neifp1.png" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While waiting for deployment, you should see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A screen that will display status messages for your environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After a few minutes have passed, you will see a green banner with a checkmark at the top of the environment screen.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you see the banner, you have successfully created an AWS Elastic Beanstalk application and deployed it to an environment.&lt;/p&gt;
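&lt;p&gt;For readers who prefer the command line, the same application and environment can be sketched with the AWS CLI. The snippet below is echo-guarded (it prints the commands instead of executing them), so it is safe to run as-is; the solution stack name is a placeholder you should resolve first, since valid names change over time.&lt;/p&gt;

```shell
#!/bin/sh
# Echo-guarded sketch: the commands are printed, not executed.
# NODE_STACK is a placeholder; list valid stacks with:
#   aws elasticbeanstalk list-available-solution-stacks
cmd_create_app="aws elasticbeanstalk create-application \
--application-name DevOpsGettingStarted"
cmd_create_env="aws elasticbeanstalk create-environment \
--application-name DevOpsGettingStarted \
--environment-name DevOpsGettingStarted-env \
--solution-stack-name NODE_STACK"
echo "$cmd_create_app"
echo "$cmd_create_env"
```

To actually create the resources, paste the printed commands into a shell with AWS credentials configured.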

&lt;h3&gt;
  
  
  Test your web app
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;To test your sample web app, select the link under the name of your environment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwg9x8p1yv5pnpo6kk7gt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwg9x8p1yv5pnpo6kk7gt.png" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once the test is complete, a new browser tab should open with a webpage congratulating you!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwbn4wtj47kio2jw6rbu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwbn4wtj47kio2jw6rbu.png" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Application architecture
&lt;/h2&gt;

&lt;p&gt;Now that we are done with this module, our architecture will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2p4xj6h4c5t4b15hhrw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2p4xj6h4c5t4b15hhrw.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have created an AWS Elastic Beanstalk environment and sample application. We will be using this environment and our continuous delivery pipeline to deploy the Hello World! web app we created in the previous module.&lt;/p&gt;

&lt;h2&gt;
  
  
  Module 3: Create Build Project
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Configure the AWS CodeBuild project
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In a new browser tab, open the &lt;a href="https://console.aws.amazon.com/codesuite/codebuild/start?region=us-west-2" rel="noopener noreferrer"&gt;AWS CodeBuild console&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Create project button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Project name field, enter &lt;em&gt;Build-DevOpsGettingStarted.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select GitHub from the Source provider dropdown menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that the Connect using OAuth radio button is selected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the white Connect to GitHub button. A new browser tab will open asking you to give AWS CodeBuild access to your GitHub repo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the green Authorize aws-codesuite button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter your GitHub password.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Confirm button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Repository in my GitHub account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter &lt;em&gt;aws-elastic-beanstalk-express-js-sample&lt;/em&gt; in the search field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the repo you forked in Module 1. After selecting your repo, your screen should look like this:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g96j1mwg147bb6zn3wk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g96j1mwg147bb6zn3wk.png" width="800" height="643"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Confirm that Managed Image is selected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Amazon Linux 2 from the Operating system dropdown menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Standard from the Runtime(s) dropdown menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select aws/codebuild/amazonlinux2-x86_64-standard:3.0 from the Image dropdown menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that Always use the latest image for this runtime version is selected for Image version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that Linux is selected for Environment type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that New service role is selected.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Create a Buildspec file for the project
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select Insert build commands.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Switch to editor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Replace the Buildspec in the editor with the code below:&lt;/p&gt;

&lt;p&gt;version: 0.2&lt;br&gt;
phases:&lt;br&gt;
    build:&lt;br&gt;
        commands:&lt;br&gt;
            - npm i --save&lt;br&gt;
artifacts:&lt;br&gt;
    files:&lt;br&gt;
        - '**/*'&lt;/p&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Create build project button. You should now see a dashboard for your project.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
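&lt;p&gt;Because buildspec files are YAML, indentation is significant, and the inline view above flattens it. Written out with its intended nesting, the same Buildspec is:&lt;/p&gt;

```yaml
version: 0.2
phases:
  build:
    commands:
      # Install the app's dependencies during the build stage.
      - npm i --save
artifacts:
  files:
    # Package every file in the build directory into the output artifact.
    - '**/*'
```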

&lt;h3&gt;
  
  
  Test the CodeBuild project
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Start build button. This will load a page to configure the build process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that the loaded page references the correct GitHub repo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Start build button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait for the build to complete. As you are waiting you should see a green bar at the top of the page with the message &lt;em&gt;Build started,&lt;/em&gt; the progress for your build under Build log, and, after a couple minutes, a green checkmark and a &lt;em&gt;Succeeded&lt;/em&gt; message confirming the build worked.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Application architecture
&lt;/h2&gt;

&lt;p&gt;Here’s what our architecture looks like now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszj99z274jee8acqgks8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszj99z274jee8acqgks8.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have created a build project on AWS CodeBuild to run the build process of the Hello World! web app from our GitHub repository. We will be using this build project as the build step in our continuous delivery pipeline, which we will create in the next module.&lt;/p&gt;

&lt;h2&gt;
  
  
  Module 4: Create Delivery Pipeline
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create a new pipeline
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In a browser window, open the &lt;a href="https://console.aws.amazon.com/codesuite/codepipeline/start?region=us-west-2" rel="noopener noreferrer"&gt;AWS CodePipeline console&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Create pipeline button. A new screen will open up so you can set up the pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Pipeline name field, enter &lt;em&gt;Pipeline-DevOpsGettingStarted.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that New service role is selected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Next button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Configure the source stage
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select GitHub version 1 from the Source provider dropdown menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the white Connect to GitHub button. A new browser tab will open asking you to give AWS CodePipeline access to your GitHub repo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the green Authorize aws-codesuite button. Next, you will see a green box with the message &lt;em&gt;You have successfully configured the action with the provider.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From the Repository dropdown, select the repo you created in Module 1.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select main from the branch dropdown menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that GitHub webhooks is selected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Next button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Configure the build stage
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;From the Build provider dropdown menu, select AWS CodeBuild.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Region confirm that the US West (Oregon) Region is selected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Build-DevOpsGettingStarted under Project name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Next button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Configure the deploy stage
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select AWS Elastic Beanstalk from the Deploy provider dropdown menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Region, confirm that the US West (Oregon) Region is selected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the field under Application name and confirm you can see the app DevOpsGettingStarted created in Module 2.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select DevOpsGettingStarted-env from the Environment name textbox.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Next button. You will now see a page where you can review the pipeline configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Create pipeline button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Watch first pipeline execution
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;While watching the pipeline execution, you will see a page with a green bar at the top. This page shows all the steps defined for the pipeline and, after a few minutes, each will change from blue to green.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Once the Deploy stage has switched to green and it says &lt;em&gt;Succeeded,&lt;/em&gt; choose AWS Elastic Beanstalk. A new tab listing your AWS Elastic Beanstalk environments will open.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the URL in the Devopsgettingstarted-env row. You should see a webpage with a white background and the text you included in your GitHub commit in Module 1.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Application architecture
&lt;/h2&gt;

&lt;p&gt;Here’s what our architecture looks like now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eniasg54bifs8huj6s7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eniasg54bifs8huj6s7.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have created a continuous delivery pipeline on AWS CodePipeline with three stages: source, build, and deploy. The source code from the GitHub repo created in Module 1 is part of the source stage. That source code is then built by AWS CodeBuild in the build stage. Finally, the built code is deployed to the AWS Elastic Beanstalk environment created in Module 3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Module 5: Finalize Pipeline and Test
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create a review stage in pipeline
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;a href="https://console.aws.amazon.com/codesuite/codepipeline/pipelines?region=us-west-2" rel="noopener noreferrer"&gt;AWS CodePipeline console&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You should see the pipeline we created in Module 4, which was called Pipeline-DevOpsGettingStarted. Select this pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the white Edit button near the top of the page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the white Add stage button between the Build and Deploy stages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Stage name field, enter &lt;em&gt;Review.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Add stage button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Review stage, choose the white Add action group button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Action name, enter &lt;em&gt;Manual_Review.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From the Action provider dropdown, select Manual approval.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that the optional fields have been left blank.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Done button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Save button at the top of the page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Save button to confirm the changes. You will now see your pipeline with four stages: Source, Build, Review, and Deploy.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Push a new commit to your repo
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In your favorite code editor, open the app.js file from Module 1.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change the message in Line 5.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save the file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open your preferred Git client.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the folder created in Module 1.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Commit the change with the following commands:&lt;/p&gt;

&lt;p&gt;git add app.js&lt;br&gt;
git commit -m "Full pipeline test"&lt;/p&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Push the local changes to the remote repo hosted on GitHub with the following command:&lt;/p&gt;

&lt;p&gt;git push&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Monitor the pipeline and manually approve the change
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;a href="https://console.aws.amazon.com/codesuite/codepipeline/pipelines?region=us-west-2" rel="noopener noreferrer"&gt;AWS CodePipeline console&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the pipeline named Pipeline-DevOpsGettingStarted. You should see the Source and Build stages switch from blue to green.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the Review stage switches to blue, choose the white Review button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write an approval comment in the Comments textbox.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the orange Approve button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait for the Review and Deploy stages to switch to green.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the AWS Elastic Beanstalk link in the Deploy stage. A new tab listing your Elastic Beanstalk environments will open.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the URL in the Devopsgettingstarted-env row. You should see a webpage with a white background and the text you had in your most recent GitHub commit.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
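&lt;p&gt;Manual approvals can also be granted from the AWS CLI rather than the console. As an echo-guarded sketch (safe to run as-is; TOKEN is a placeholder for the approval token reported by aws codepipeline get-pipeline-state):&lt;/p&gt;

```shell
#!/bin/sh
# Echo-guarded sketch: printed, not executed. TOKEN is a placeholder
# for the token returned by `aws codepipeline get-pipeline-state`.
cmd_approve="aws codepipeline put-approval-result \
--pipeline-name Pipeline-DevOpsGettingStarted \
--stage-name Review \
--action-name Manual_Review \
--result summary=Reviewed,status=Approved \
--token TOKEN"
echo "$cmd_approve"
```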

&lt;ul&gt;
&lt;li&gt;Congratulations! You have a fully functional continuous delivery pipeline hosted on AWS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Application architecture
&lt;/h2&gt;

&lt;p&gt;With all modules now completed, here is the architecture of what you built:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrseepgciwwtr77vjnt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrseepgciwwtr77vjnt0.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have used AWS CodePipeline to add a review stage with manual approval to our continuous delivery pipeline. Now, our code changes will have to be reviewed and approved before they are deployed to AWS Elastic Beanstalk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean up resources
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Delete AWS Elastic Beanstalk application
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In a new browser window, open the &lt;a href="https://console.aws.amazon.com/elasticbeanstalk/home?region=us-west-2#/applications" rel="noopener noreferrer"&gt;AWS Elastic Beanstalk Console&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left navigation menu, click on “Applications.” You should see the “DevOpsGettingStarted” application listed under “All applications.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the radio button next to “DevOpsGettingStarted.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the white dropdown “Actions” button at the top of the page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select “Delete application” under the dropdown menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type “DevOpsGettingStarted” in the text box to confirm deletion.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the orange “Delete” button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Delete pipeline in AWS CodePipeline
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In a new browser window, open the &lt;a href="https://console.aws.amazon.com/codesuite/codepipeline/pipelines?region=us-west-2" rel="noopener noreferrer"&gt;AWS CodePipeline Console&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the radio button next to “Pipeline-DevOpsGettingStarted.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the white “Delete pipeline” button at the top of the page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type “delete” in the text box to confirm deletion.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the orange “Delete” button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Delete pipeline resources from Amazon S3 bucket
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In a new browser window, open the &lt;a href="https://s3.console.aws.amazon.com/s3/home?region-us-west-2" rel="noopener noreferrer"&gt;Amazon S3 Console&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You should see a bucket named “codepipeline-us-west-2” followed by your AWS account number. Click on this bucket. Inside this bucket, you should see a folder named “Pipeline-DevOpsGettingStarted.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the checkbox next to the “Pipeline-DevOpsGettingStarted” folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the white “Actions” dropdown button at the top of the page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select “Delete” under the dropdown menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the blue “Delete” button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Delete build project in AWS CodeBuild
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In a new browser window, open the &lt;a href="https://console.aws.amazon.com/codesuite/codebuild/projects?region=us-west-2" rel="noopener noreferrer"&gt;AWS CodeBuild Console&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left navigation, click on “Build projects” under “Build.” You should see the “Build-DevOpsGettingStarted” build project listed under “Build project.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the radio button next to “Build-DevOpsGettingStarted.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the white “Delete build project” button at the top of the page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type “delete” in the text box to confirm deletion.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the orange “Delete” button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
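&lt;p&gt;The cleanup steps above also have CLI equivalents, shown here as an echo-guarded sketch so the snippet is safe to run as-is:&lt;/p&gt;

```shell
#!/bin/sh
# Echo-guarded sketch of the cleanup: the commands are printed, not
# executed. Paste the printed output into a shell with credentials
# configured to actually delete the resources.
cmd_del_app="aws elasticbeanstalk delete-application \
--application-name DevOpsGettingStarted --terminate-env-by-force"
cmd_del_pipe="aws codepipeline delete-pipeline \
--name Pipeline-DevOpsGettingStarted"
cmd_del_build="aws codebuild delete-project \
--name Build-DevOpsGettingStarted"
echo "$cmd_del_app"
echo "$cmd_del_pipe"
echo "$cmd_del_build"
```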

&lt;h2&gt;
  
  
  Congratulations!
&lt;/h2&gt;

&lt;p&gt;You successfully built a continuous delivery pipeline on AWS! As a great next step, dive deeper into specific AWS technologies and take your application to the next level.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges Faced and Solutions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Failures Due to Environment Variables&lt;/strong&gt;: At times, missing environment variables caused builds to fail.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Ensured that the required environment variables were securely stored in Elastic Beanstalk and made accessible to the application during deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CodePipeline Integration with External Repositories&lt;/strong&gt;: Faced issues when integrating CodePipeline with GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Configured OAuth permissions carefully and verified GitHub webhook functionality to trigger pipeline events.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The project demonstrates an efficient AWS Continuous Delivery pipeline that automates deployments from commit to production. By integrating CodePipeline, CodeBuild, and Elastic Beanstalk, development teams can focus on code quality and agility, resulting in faster releases and robust application performance.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cicd</category>
      <category>learning</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Create a Highly Available WordPress Web Application : AWS Project</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Tue, 12 Nov 2024 10:57:39 +0000</pubDate>
      <link>https://forem.com/shubham_murti/create-a-highly-available-wordpress-web-application-aws-project-3a1g</link>
      <guid>https://forem.com/shubham_murti/create-a-highly-available-wordpress-web-application-aws-project-3a1g</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;This project focuses on designing a resilient, scalable WordPress web application on AWS. Using key AWS services like Amazon VPC, Amazon RDS, Amazon EFS, EC2, and Application Load Balancer (ALB), it establishes a robust architecture that ensures high availability, scalability, and fault tolerance for a WordPress site, making it suitable for high-traffic applications.&lt;/p&gt;

&lt;p&gt;By deploying WordPress in a multi-tier architecture, I learned how to leverage AWS infrastructure to meet the needs of high-traffic applications, ensuring both availability and fault tolerance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon VPC&lt;/strong&gt;: Provides isolated network environments and security controls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon RDS&lt;/strong&gt;: Hosts a reliable, highly available MySQL database for WordPress.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EFS&lt;/strong&gt;: A scalable, shared storage solution for dynamic content, enabling multiple EC2 instances to access the same data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EC2&lt;/strong&gt;: Hosts WordPress instances and dynamically scales with Auto Scaling groups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Application Load Balancer (ALB)&lt;/strong&gt;: Distributes incoming traffic across multiple instances for enhanced availability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Account&lt;/strong&gt;: Required to access and configure AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CLI&lt;/strong&gt;: For resource management and deployment tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Basic AWS Networking Knowledge&lt;/strong&gt;: Understanding of VPCs, subnets, and security groups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;WordPress Setup Knowledge&lt;/strong&gt;: Familiarity with WordPress installation and configuration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Problem Statement or Use Case
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt;: WordPress applications often face challenges around &lt;strong&gt;scalability&lt;/strong&gt; and &lt;strong&gt;availability&lt;/strong&gt;, especially with traditional hosting environments where capacity may not automatically adjust to traffic demands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Using AWS, this project demonstrates how to set up WordPress in a highly available, scalable architecture that can handle fluctuations in traffic and ensure minimal downtime. AWS’s managed services enable the environment to scale automatically, handle failover, and improve the user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Relevance&lt;/strong&gt;: This approach is ideal for production-grade WordPress applications with high traffic, such as e-commerce sites, news platforms, and corporate blogs. By implementing this architecture, companies can reduce manual management, lower costs, and improve their WordPress application’s reliability and speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step Implementation
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Configure the network
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/foundations/lab1#create-a-new-virtual-private-cloud-(vpc)" rel="noopener noreferrer"&gt;Create a new Virtual Private Cloud (VPC)&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;As a starting point for the workshop, you will need to log in to your AWS account, select the region of your choice, and create a new VPC.&lt;/p&gt;

&lt;p&gt;To do this, click on &lt;strong&gt;Your VPCs&lt;/strong&gt; on the left-hand side of the console and click &lt;strong&gt;Create VPC&lt;/strong&gt;. Enter wordpress-workshop as the name for your VPC and a CIDR range such as the one below. When you're finished, click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjkij6b6059utabc77eql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjkij6b6059utabc77eql.png" width="800" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the VPC, on the VPC details page click on &lt;strong&gt;Actions&lt;/strong&gt; and then select &lt;strong&gt;Edit VPC Settings&lt;/strong&gt;.&lt;br&gt;
Make sure to enable both &lt;strong&gt;DNS resolution&lt;/strong&gt; and &lt;strong&gt;DNS hostnames&lt;/strong&gt; under &lt;strong&gt;DNS Settings&lt;/strong&gt; and click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxziwignp3rm7do6l5k88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxziwignp3rm7do6l5k88.png" width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/foundations/lab1#create-public-and-private-subnets-in-the-new-vpc" rel="noopener noreferrer"&gt;Create public and private subnets in the new VPC&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once the VPC has been created, the next step is to create the subnets that will be used to host the application across two different Availability Zones. We are going to create six subnets in total, three for each AZ, as shown in the following diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6xbsdmekbzhndytgzy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6xbsdmekbzhndytgzy0.png" width="612" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first pair of subnets, &lt;em&gt;Public&lt;/em&gt;, will be accessible from the Internet and contain load balancers and NAT gateways. The second pair, &lt;em&gt;Application&lt;/em&gt;, will contain application servers and your shared EFS filesystem. Your application servers will be able to communicate with the Internet via the NAT gateways but will only be addressable from the load balancers. Finally the &lt;em&gt;Database&lt;/em&gt; pair of subnets will hold your active / passive relational database. It will be accessible to other resources in the VPC but will have no access to the Internet and cannot be addressed by the Internet or the load balancers.&lt;/p&gt;

&lt;p&gt;To create each of the six subnets please select &lt;strong&gt;Subnets&lt;/strong&gt; on the left of the AWS VPC console, then click on &lt;strong&gt;Create subnet&lt;/strong&gt; and use the details in the table below to define the characteristics of each of your subnets. Make sure to always select the &lt;strong&gt;Wordpress-workshop&lt;/strong&gt; VPC when creating the subnets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3u26jjoweiwo6n0l1ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3u26jjoweiwo6n0l1ti.png" width="800" height="1117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screenshots in this lab were taken from a deployment in the &lt;strong&gt;Ireland (eu-west-1)&lt;/strong&gt; region. If you are building in a different AWS region, just ensure that you create your subnets in 2 different availability zones in the same region, such as &lt;em&gt;us-west-2a&lt;/em&gt; and &lt;em&gt;us-west-2b&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;For each subnet, specify a name and a CIDR range. Be sure to create a public, application, and data subnet in each of the two availability zones, as detailed in the table below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzt4whfgza1njvitamnw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzt4whfgza1njvitamnw.png" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point all the correct subnets have been created and they can route network traffic between them. In the next set of steps you will create an Internet Gateway, allowing communication between your VPC and the Internet. You will also configure your routing tables to only allow Internet communication with your public subnets and not the private application or data subnets.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/foundations/lab1#create-an-internet-gateway-and-set-up-routing" rel="noopener noreferrer"&gt;Create an Internet Gateway and set up routing&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The following steps will allow connectivity from the Internet to the public subnets and also connectivity from the private subnets to the Internet via NAT gateways.&lt;/p&gt;

&lt;p&gt;First you need to create a new Internet Gateway (IGW) from your VPC dashboard and attach it to the wordpress-workshop VPC. Start by clicking &lt;strong&gt;Internet gateways&lt;/strong&gt; on the left hand side of the VPC console and then click the &lt;strong&gt;Create Internet gateway&lt;/strong&gt; button. Enter a name for your IGW such as WP Internet Gateway and click &lt;strong&gt;Create Internet gateway&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24ot51ttieffjpon526l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24ot51ttieffjpon526l.png" width="800" height="627"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the IGW has been created, you need to attach it to your VPC.&lt;br&gt;
Select &lt;strong&gt;Attach to VPC&lt;/strong&gt; from the &lt;strong&gt;Actions&lt;/strong&gt; drop-down menu, then select the wordpress-workshop VPC from the drop-down list of available VPCs and click on &lt;strong&gt;Attach Internet gateway&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffe2snfpmnnsn6fk9txnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffe2snfpmnnsn6fk9txnl.png" width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The gateway will be used by instances and services running in the public subnets (e.g. Public Subnet A and Public Subnet B) to communicate to the Internet.&lt;/p&gt;

&lt;p&gt;Once the gateway is created you will need to create a new routing table and associate it with the public subnets.&lt;br&gt;
Create a new route table by selecting &lt;strong&gt;Route tables&lt;/strong&gt; in the left-hand menu of the console and then clicking on the &lt;strong&gt;Create route table&lt;/strong&gt; button.&lt;br&gt;
Give it a &lt;strong&gt;Name&lt;/strong&gt;, select the wordpress-workshop VPC from the drop-down, then click on &lt;strong&gt;Create route table&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgi1ypftge2gp67skuamq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgi1ypftge2gp67skuamq.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the route table, select it from the &lt;strong&gt;Route tables&lt;/strong&gt; section of your VPC dashboard, then click on &lt;strong&gt;Actions&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Edit routes&lt;/strong&gt; and add a default route via the Internet Gateway created in the previous step and click on &lt;strong&gt;Save changes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fftv4lvxwjonkczp8ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fftv4lvxwjonkczp8ui.png" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, you need to associate the newly created route table with the public subnets. To do that, click on the public route table, open &lt;strong&gt;Subnet associations&lt;/strong&gt;, click on &lt;strong&gt;Edit subnet associations&lt;/strong&gt;, select the two public subnets created earlier, and click on &lt;strong&gt;Save associations&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ya16ufzl0wf3txy7eau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ya16ufzl0wf3txy7eau.png" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/foundations/lab1#create-one-nat-gateway-in-each-public-subnet" rel="noopener noreferrer"&gt;Create one NAT gateway in each public subnet&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The WordPress instances will need to be able to connect to the Internet to download application and OS updates. To avoid dependencies across Availability Zones, you are going to create two NAT gateways, one in each Availability Zone where the application is deployed.&lt;/p&gt;

&lt;p&gt;To do this, you will create one NAT Gateway in each Availability Zone, then create one route table for each application subnet, update the route table with a default route through the NAT gateway in the same AZ, and then associate the route table to the respective application subnet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fga8bvxh2o8oprto1415c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fga8bvxh2o8oprto1415c.png" width="668" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the VPC dashboard in your account, select &lt;strong&gt;NAT gateways&lt;/strong&gt;, and create one gateway in each of the two public subnets (i.e. Public Subnet A and Public Subnet B). Always make sure you have selected the correct public subnet when creating the gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fjv6l0t3algiwwcbzq3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fjv6l0t3algiwwcbzq3.png" width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to create route tables for each of the two Application subnets and use the NAT gateways created earlier as the default gateway:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65ghvioa8147hy6r469p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65ghvioa8147hy6r469p.png" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Edit the route table and add the default route via the NAT gateway in Application subnet A:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74yqe7k12xqjw1sfdola.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74yqe7k12xqjw1sfdola.png" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Associate the route table with Application Subnet A:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypdyxidljkqqufs81dyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypdyxidljkqqufs81dyg.png" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repeat the last three steps to also create a route table for Application Subnet B which uses the NAT gateway deployed in the second availability zone.&lt;/p&gt;
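&lt;p&gt;For AZ "A", the NAT gateway and its application route table can be sketched with the AWS CLI as follows (repeat with the "B" resources for the second availability zone). The subnet ID variables are hypothetical placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# One Elastic IP and NAT gateway per AZ (shown for AZ "A")
EIP_A=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)
NAT_A=$(aws ec2 create-nat-gateway --subnet-id "$PUBLIC_SUBNET_A" \
  --allocation-id "$EIP_A" --query NatGateway.NatGatewayId --output text)
aws ec2 wait nat-gateway-available --nat-gateway-ids "$NAT_A"

# Route table for Application Subnet A with a default route via its NAT gateway
APP_RTB_A=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query RouteTable.RouteTableId --output text)
aws ec2 create-route --route-table-id "$APP_RTB_A" \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT_A"
aws ec2 associate-route-table --route-table-id "$APP_RTB_A" --subnet-id "$APP_SUBNET_A"
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Keeping one NAT gateway per AZ means an outage in one availability zone does not take down outbound connectivity for instances in the other.&lt;/p&gt;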

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/foundations/lab1#verify-your-configuration" rel="noopener noreferrer"&gt;Verify your configuration&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You have now created a virtual private cloud network across two availability zones within an AWS region. You have created six subnets, three in each availability zone, and have configured a route so that the Internet can communicate with resources in the public subnets and vice versa. The application subnets have been configured, via routing table, to communicate with the Internet via NAT gateways in the public subnets, and the data subnets can only communicate with resources in the six subnets, but not the Internet.&lt;/p&gt;

&lt;p&gt;Please note that the information below is based on a VPC deployed in the &lt;em&gt;Ireland (eu-west-1)&lt;/em&gt; region. If you chose a different region for your setup, you need to adjust the region name accordingly.&lt;/p&gt;

&lt;p&gt;You can compare your own configuration based on the screenshot below and move along when you have verified your setup.&lt;/p&gt;

&lt;p&gt;Check the &lt;strong&gt;Resource map&lt;/strong&gt; section of your VPC, which shows your VPC, subnets, route tables, Internet gateways, and NAT gateways, helping you visualise the resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwy117yiof7j2cq4dlaek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwy117yiof7j2cq4dlaek.png" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Data Tier
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Set up the RDS database
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/datatier/lab2#create-database-security-groups" rel="noopener noreferrer"&gt;Create database security groups&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You will create 2 security groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;WP Database Clients will be attached to the EC2 instances running the web servers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WP Database will be attached to the RDS DB instance&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Visit the &lt;a href="https://console.aws.amazon.com/vpc/home?#SecurityGroups" rel="noopener noreferrer"&gt;Amazon VPC console&lt;/a&gt; and create 2 security groups.&lt;/p&gt;

&lt;p&gt;First, create the WP Database Clients security group:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Create security group&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fill in the &lt;em&gt;Security group name&lt;/em&gt; and &lt;em&gt;Description&lt;/em&gt; fields&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the wordpress-workshop VPC from the drop-down&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll to the bottom of the page and click on &lt;strong&gt;Create security group&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyq50x08aycc8iy5peiv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyq50x08aycc8iy5peiv.png" width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now create the WP Database security group:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Inbound Rules&lt;/strong&gt; section, click on &lt;strong&gt;Add rule&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Type &lt;strong&gt;MySQL/Aurora&lt;/strong&gt; which allows traffic on port 3306 from &lt;strong&gt;Custom&lt;/strong&gt; source &lt;em&gt;WP Database Clients&lt;/em&gt; security group.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r1sykfgk2eqcynquvkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r1sykfgk2eqcynquvkg.png" width="800" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please note that you can search for the security group by name in the source field of the security group rule.&lt;/p&gt;
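&lt;p&gt;The two security groups and the MySQL ingress rule can be sketched with the AWS CLI as well; the variable names are placeholders for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Security group for the web-server EC2 instances
CLIENT_SG=$(aws ec2 create-security-group \
  --group-name "WP Database Clients" \
  --description "Attached to the EC2 instances running the web servers" \
  --vpc-id "$VPC_ID" --query GroupId --output text)

# Security group for the RDS DB instance
DB_SG=$(aws ec2 create-security-group \
  --group-name "WP Database" \
  --description "Attached to the RDS DB instance" \
  --vpc-id "$VPC_ID" --query GroupId --output text)

# Allow MySQL/Aurora (TCP 3306) only from members of the clients group
aws ec2 authorize-security-group-ingress --group-id "$DB_SG" \
  --protocol tcp --port 3306 --source-group "$CLIENT_SG"
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Referencing the clients group as the source, rather than a CIDR range, means any instance that later joins the Auto Scaling group is automatically allowed to reach the database.&lt;/p&gt;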

&lt;p&gt;Now you are ready to create your RDS database.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/datatier/lab2#create-an-rds-subnet-group" rel="noopener noreferrer"&gt;Create an RDS subnet group&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon RDS is an easy-to-manage relational database service. When you use Amazon RDS to deploy a database in a highly available setup, it creates 2 instances in 2 different availability zones. To support this, when you create a database you specify a subnet group, which tells RDS in which subnets it can deploy your database instances.&lt;/p&gt;

&lt;p&gt;To create a DB subnet group browse to the &lt;a href="https://console.aws.amazon.com/rds/home" rel="noopener noreferrer"&gt;Amazon RDS console&lt;/a&gt;, click on &lt;strong&gt;Subnet groups&lt;/strong&gt; in the panel on your left, click on the &lt;strong&gt;Create DB Subnet Group&lt;/strong&gt; button and use the following details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Name: Aurora-Wordpress&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Description: RDS subnet group used by Wordpress&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VPC: wordpress-workshop&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz7bxvyrrwppmc7edqk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz7bxvyrrwppmc7edqk0.png" width="788" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down and add the two &lt;strong&gt;Data subnets&lt;/strong&gt; created earlier (one for each AZ) to your new subnet group and click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F018ocnvrcetxtxavs86m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F018ocnvrcetxtxavs86m.png" width="777" height="669"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please note, in order to get the ID of the data subnets, you can open a second tab and navigate to the &lt;a href="https://console.aws.amazon.com/vpcconsole/home?#subnets:" rel="noopener noreferrer"&gt;&lt;strong&gt;Subnets&lt;/strong&gt; section&lt;/a&gt; of the VPC console. From the list of subnets, select the one you are interested in. At the bottom of the screen you will then be able to copy the &lt;strong&gt;Subnet ID&lt;/strong&gt; by clicking the copy-to-clipboard icon beside the ID.&lt;/p&gt;
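&lt;p&gt;The same subnet group can be created with one AWS CLI call; &lt;code&gt;$DATA_SUBNET_A&lt;/code&gt; and &lt;code&gt;$DATA_SUBNET_B&lt;/code&gt; are placeholders for the IDs of the two data subnets:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Subnet group spanning the two data subnets (one per AZ)
aws rds create-db-subnet-group \
  --db-subnet-group-name aurora-wordpress \
  --db-subnet-group-description "RDS subnet group used by Wordpress" \
  --subnet-ids "$DATA_SUBNET_A" "$DATA_SUBNET_B"
&lt;/code&gt;&lt;/pre&gt;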

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/datatier/lab2#create-the-aurora-database-instance" rel="noopener noreferrer"&gt;Create the Aurora database instance&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once the subnet group has been created you are ready to launch the RDS-managed database.&lt;br&gt;
Go to the &lt;a href="https://console.aws.amazon.com/rds/home" rel="noopener noreferrer"&gt;Amazon RDS Console&lt;/a&gt;, select &lt;strong&gt;Databases&lt;/strong&gt; from the menu on the left and click &lt;strong&gt;Create database&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Enter the following details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Database creation method: &lt;strong&gt;Standard create&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Engine options: &lt;strong&gt;Aurora (MySQL Compatible)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keep the default Engine Version&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7inkm3t5n647ph2hbrtd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7inkm3t5n647ph2hbrtd.png" width="773" height="765"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use wpadmin as the &lt;strong&gt;Master username&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When prompted for the Master password, you can either click on Auto generate a password or create your own. In either case, make sure you write down the password, as it will be required a few steps later when setting up the connectivity of the WordPress instances to the database.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkvz6n814vv6m56j689n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkvz6n814vv6m56j689n.png" width="697" height="710"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the DB instance size together with a Multi-AZ deployment, which is required for high availability. To keep costs low for this workshop, we recommend using a burstable instance class (db.t4g.medium or similar). Burstable instances might not be suited for production environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc09jc257hn5xogg05o74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc09jc257hn5xogg05o74.png" width="779" height="603"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the connectivity section, make sure you select the wordpress-workshop VPC, together with the aurora-wordpress DB subnet group, and WP Database security group created earlier:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq712lti34a9he8w9p1wy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq712lti34a9he8w9p1wy.png" width="773" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvviiuo1qjvbfdhvxuz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvviiuo1qjvbfdhvxuz2.png" width="783" height="863"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Monitoring&lt;/strong&gt; section, uncheck &lt;strong&gt;Turn on DevOps Guru&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa160gojn2filgop3vfrm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa160gojn2filgop3vfrm.png" width="775" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Expand the &lt;strong&gt;Additional configuration&lt;/strong&gt; section and specify an &lt;strong&gt;Initial database name&lt;/strong&gt; of &lt;em&gt;wordpress&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hadfsl8759s7zw1c5ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hadfsl8759s7zw1c5ja.png" width="787" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, click on &lt;strong&gt;Create database&lt;/strong&gt; to start building the cluster.&lt;br&gt;
The database will take a few minutes to be provisioned and made available.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/datatier/lab2#verify-your-configuration" rel="noopener noreferrer"&gt;Verify your configuration&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The active/passive database should now be available and running in two different Availability Zones, waiting for connections from any EC2 resource that has the client security group associated with it.&lt;br&gt;
Compare your own configuration against the screenshots below and move on once you have verified your setup.&lt;/p&gt;
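&lt;p&gt;If you prefer to verify from the command line, a quick check with the AWS CLI is sketched below. It assumes the cluster identifier is wordpress-workshop and that the CLI is configured for the workshop region; adjust both to match your environment:&lt;/p&gt;

```shell
# List the cluster's members and whether each one is the writer
# (cluster identifier "wordpress-workshop" is an assumption; use your own)
aws rds describe-db-clusters \
  --db-cluster-identifier wordpress-workshop \
  --query 'DBClusters[0].DBClusterMembers[].[DBInstanceIdentifier,IsClusterWriter]' \
  --output table

# Confirm the writer and reader run in different Availability Zones
aws rds describe-db-instances \
  --query 'DBInstances[?DBClusterIdentifier==`wordpress-workshop`].[DBInstanceIdentifier,AvailabilityZone]' \
  --output table
```

&lt;p&gt;The two instances should report different Availability Zones.&lt;/p&gt;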

&lt;p&gt;&lt;strong&gt;Security Groups&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya07xp866enuw9a0tn3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya07xp866enuw9a0tn3t.png" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subnet group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zyt384k31gbrux1yjug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zyt384k31gbrux1yjug.png" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46oup4tdckzle126ek5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46oup4tdckzle126ek5k.png" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the shared filesystem
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic File System (Amazon EFS) provides a simple, scalable, elastic file system for general purpose workloads, for use with AWS Cloud services and on-premises resources. In this lab you will create an EFS file system that provides shared storage for your Wordpress content.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/datatier/lab3#create-filesystem-security-groups" rel="noopener noreferrer"&gt;Create filesystem security groups&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;When using Amazon EFS, you specify Amazon EC2 security groups for your EC2 instances and security groups for the EFS mount targets associated with the file system. A security group acts as a firewall, and the rules that you add define the traffic flow.&lt;/p&gt;

&lt;p&gt;In this workshop, you will create 2 security groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;WP EFS Clients will be attached to the EC2 instances running the web servers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WP EFS will be attached to the EFS mount targets&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Visit the &lt;a href="https://console.aws.amazon.com/vpc/home?#SecurityGroups" rel="noopener noreferrer"&gt;Amazon VPC console &lt;/a&gt;to create the 2 security groups.&lt;/p&gt;

&lt;p&gt;First, create the WP EFS Clients security group:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Create security group&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fill in the &lt;em&gt;Security group name&lt;/em&gt; and &lt;em&gt;Description&lt;/em&gt; fields&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the wordpress-workshop VPC from the drop-down&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll to the bottom of the page and click on &lt;strong&gt;Create security group&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq06xjz41rifzhcrs7fk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq06xjz41rifzhcrs7fk5.png" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon EFS creates a shared file system and exposes it as an NFS share. The security group attached to the EFS mount targets will need to allow inbound connections on the NFS TCP port 2049.&lt;/p&gt;

&lt;p&gt;Create the WP EFS security group:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Inbound Rules&lt;/strong&gt; section, click on &lt;strong&gt;Add rule&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Type &lt;strong&gt;NFS&lt;/strong&gt; and set the &lt;strong&gt;Custom&lt;/strong&gt; source to the &lt;em&gt;WP EFS Clients&lt;/em&gt; security group.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzqggif0bzl4u5v3lvwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvzqggif0bzl4u5v3lvwv.png" width="800" height="586"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that you can search for the security group by name in the source field of the security group rule.&lt;/p&gt;
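&lt;p&gt;The two security groups above can also be created with the AWS CLI. The sketch below uses a placeholder VPC ID and mirrors the console steps, including the NFS rule that references the clients group as its source:&lt;/p&gt;

```shell
# Placeholder ID; substitute your wordpress-workshop VPC ID
VPC_ID="vpc-xxxxxxxx"

# Security group for the EC2 instances that will mount the share
CLIENTS_SG=$(aws ec2 create-security-group \
  --group-name "WP EFS Clients" \
  --description "Attached to web servers that mount the EFS share" \
  --vpc-id "$VPC_ID" \
  --query GroupId --output text)

# Security group for the EFS mount targets
EFS_SG=$(aws ec2 create-security-group \
  --group-name "WP EFS" \
  --description "Attached to the EFS mount targets" \
  --vpc-id "$VPC_ID" \
  --query GroupId --output text)

# Allow NFS (TCP 2049) only from members of the clients group
aws ec2 authorize-security-group-ingress \
  --group-id "$EFS_SG" \
  --protocol tcp --port 2049 \
  --source-group "$CLIENTS_SG"
```

&lt;p&gt;Referencing the clients group as the source (rather than a CIDR range) means any instance carrying WP EFS Clients can reach the mount targets, which is exactly what the console rule does.&lt;/p&gt;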

&lt;p&gt;Now you are ready to create the EFS file system.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/datatier/lab3#create-the-efs-file-system" rel="noopener noreferrer"&gt;Create the EFS file system&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To create an EFS file system visit the &lt;a href="https://console.aws.amazon.com/efs/home" rel="noopener noreferrer"&gt;Amazon EFS console &lt;/a&gt;and click &lt;strong&gt;Create file system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Enter Wordpress-EFS in the &lt;strong&gt;Name&lt;/strong&gt; field.&lt;br&gt;
From the VPC drop-down select the wordpress-workshop VPC and click &lt;strong&gt;Customize&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnb2hyq2qsi0zy0t2qer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnb2hyq2qsi0zy0t2qer.png" width="617" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the creation page, uncheck &lt;strong&gt;Enable automatic backups&lt;/strong&gt; to avoid backing up the contents of the file system. It’s recommended to keep it enabled when deploying a production environment.&lt;/p&gt;

&lt;p&gt;Keep all other settings unchanged and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdy41nvmx1s0iq91obia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdy41nvmx1s0iq91obia.png" width="800" height="688"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the &lt;strong&gt;Network access&lt;/strong&gt; page, under &lt;strong&gt;Mount targets&lt;/strong&gt;, choose the two subnets created for the Data tier (Data subnet A and B). On the right side, under &lt;strong&gt;Security groups&lt;/strong&gt;, associate the WP EFS security group created above to each mount target and remove the association with the &lt;em&gt;Default&lt;/em&gt; security group.&lt;br&gt;
Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j0nz8b2ovd92m3u17i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j0nz8b2ovd92m3u17i0.png" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Accept the defaults on the next screen for &lt;strong&gt;File system policy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw2ub352dfzy0u9im0qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw2ub352dfzy0u9im0qd.png" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Next&lt;/strong&gt;, review and confirm the file system creation by clicking &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This will create two mount targets in the Data subnets, and after a few moments the file system will become &lt;em&gt;Available&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7xydoq6g2z4e09p0mv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7xydoq6g2z4e09p0mv8.png" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;
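&lt;p&gt;You can also confirm the mount targets and their state from the CLI; the file system ID below is a placeholder:&lt;/p&gt;

```shell
# Each mount target should reach the "available" LifeCycleState
aws efs describe-mount-targets \
  --file-system-id fs-xxxxxxxxx \
  --query 'MountTargets[].[MountTargetId,AvailabilityZoneName,LifeCycleState]' \
  --output table
```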

&lt;h2&gt;
  
  
  Build the Application Tier
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Create the load balancer
&lt;/h2&gt;

&lt;p&gt;To distribute traffic across your Wordpress application servers you will need a load balancer. In this lab you will create an Application Load Balancer.&lt;/p&gt;

&lt;p&gt;Application Load Balancer operates at the request level (layer 7), routing traffic to targets (EC2 instances, containers, IP addresses, and Lambda functions) based on the content of the request.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab4#create-load-balancer-and-application-security-groups" rel="noopener noreferrer"&gt;Create load balancer and application security groups&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Visit the &lt;a href="https://console.aws.amazon.com/vpc/home?#SecurityGroups" rel="noopener noreferrer"&gt;Amazon VPC console &lt;/a&gt;to create the security groups.&lt;/p&gt;

&lt;p&gt;First, create the WP Load Balancer security group:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Create security group&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fill in the &lt;em&gt;Security group name&lt;/em&gt; and &lt;em&gt;Description&lt;/em&gt; fields&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the wordpress-workshop VPC from the drop-down&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Inbound Rules&lt;/strong&gt; section, click on &lt;strong&gt;Add rule&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Type &lt;strong&gt;HTTP&lt;/strong&gt; to allow traffic on port 80, and choose &lt;strong&gt;My IP&lt;/strong&gt; as the source to limit access to your current public IP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll to the bottom of the page and click on &lt;strong&gt;Create security group&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Outside of a workshop environment you would likely want to modify the security group to allow access from any IP address. To learn more about security in and of the cloud please visit the &lt;a href="https://aws.amazon.com/security/" rel="noopener noreferrer"&gt;AWS Cloud Security &lt;/a&gt;website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1loj6nw3wz6stx2x8r33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1loj6nw3wz6stx2x8r33.png" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now create the WP Web Servers security group:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Inbound Rules&lt;/strong&gt; section, click on &lt;strong&gt;Add rule&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Type &lt;strong&gt;HTTP&lt;/strong&gt; to allow traffic on port 80, with the &lt;strong&gt;Custom&lt;/strong&gt; source set to the &lt;em&gt;WP Load Balancer&lt;/em&gt; security group.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx3e3ecjfk3d0rwbo814.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx3e3ecjfk3d0rwbo814.png" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that you can search for the security group by name in the source field of the security group rule.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab4#create-a-load-balancer" rel="noopener noreferrer"&gt;Create a load balancer&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones, increasing the availability of the Wordpress platform.&lt;/p&gt;

&lt;p&gt;From the &lt;a href="https://console.aws.amazon.com/ec2/home" rel="noopener noreferrer"&gt;EC2 console &lt;/a&gt;click &lt;strong&gt;Load Balancers&lt;/strong&gt; on the left-hand menu and then click &lt;strong&gt;Create load balancer&lt;/strong&gt;.&lt;br&gt;
Click &lt;strong&gt;Create&lt;/strong&gt; under &lt;strong&gt;Application Load Balancer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogrlxrtg04zfhj0uk36n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogrlxrtg04zfhj0uk36n.png" width="800" height="954"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give your load balancer a name and under &lt;strong&gt;Network mapping&lt;/strong&gt; select the wordpress-workshop VPC.&lt;br&gt;
Then tick the checkbox for both availability zones and select the public subnets created in the first lab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenmlz8o65cyivkzn7o2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenmlz8o65cyivkzn7o2b.png" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8o4cjle16ocm4n0rjgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8o4cjle16ocm4n0rjgf.png" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Security groups&lt;/strong&gt;, select the &lt;strong&gt;WP Load Balancer&lt;/strong&gt; security group created earlier and remove any default security group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figkvo4xo311o78xfb6ed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figkvo4xo311o78xfb6ed.png" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Listeners and routing&lt;/strong&gt; click on the link &lt;strong&gt;Create target group&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iawr38efzybulgh1mdz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iawr38efzybulgh1mdz.png" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This opens a new window for you to create a new target group. Use the following details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Wordpress-TargetGroup as &lt;strong&gt;Target group name&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;wordpress-workshop as &lt;strong&gt;VPC&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf61o3hmmq5cha0jpp99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf61o3hmmq5cha0jpp99.png" width="800" height="922"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, in the &lt;strong&gt;Health checks&lt;/strong&gt; section, enter the following path:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;/phpinfo.php&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next lab you will create the launch template for the web application servers, which creates phpinfo.php as part of the User Data script executed at instance boot. If health checks fail, the User Data script has most likely failed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncxsuhwvvc1d6r822yu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncxsuhwvvc1d6r822yu7.png" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Next&lt;/strong&gt; and then on &lt;strong&gt;Create target group&lt;/strong&gt; without defining any targets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv5kg2sfr73kwzzw92r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv5kg2sfr73kwzzw92r8.png" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Close the window, then refresh the listener's target group list and choose the newly created target group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j6mpe5ixle2sdio81uq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j6mpe5ixle2sdio81uq.png" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Summary&lt;/strong&gt; section, review your settings and click &lt;strong&gt;Create load balancer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9h4ltdmw4cm81msip6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9h4ltdmw4cm81msip6z.png" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22k7b8p33ocr09tm2tln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22k7b8p33ocr09tm2tln.png" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make a note of the &lt;strong&gt;DNS name&lt;/strong&gt; created for your load balancer as you will need this in the following steps.&lt;/p&gt;
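&lt;p&gt;If you lose track of it, the DNS name can always be retrieved with the CLI (the load balancer name below is an assumption; substitute the name you chose):&lt;/p&gt;

```shell
# Print the ALB's DNS name; "wordpress-workshop-alb" is a placeholder name
aws elbv2 describe-load-balancers \
  --names wordpress-workshop-alb \
  --query 'LoadBalancers[0].DNSName' \
  --output text
```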

&lt;h2&gt;
  
  
  Create a launch template
&lt;/h2&gt;

&lt;p&gt;You have created a software-defined network across multiple fault-isolated Availability Zones, deployed a highly-available Aurora MySQL database and an EFS file system for shared storage. In this lab you will define the templates for the application servers running PHP as part of a scalable Wordpress installation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab5#create-a-launch-template-for-the-auto-scaling-groups-(asg)" rel="noopener noreferrer"&gt;Create a launch template for the Auto Scaling Groups (ASG)&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A launch template stores instance configuration information, letting you save a configuration once and reuse it to launch instances at a later time. It includes the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and other parameters used to launch EC2 instances. A launch template can also hold multiple versions.&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Launch Templates&lt;/strong&gt; on the left panel of your &lt;a href="https://console.aws.amazon.com/ec2/home" rel="noopener noreferrer"&gt;EC2 console &lt;/a&gt;, then click on &lt;strong&gt;Create launch template&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give the Launch template the name WP-WebServers-LT&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pzhtt5ug2cn6p58b8kq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pzhtt5ug2cn6p58b8kq.png" width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose the &lt;strong&gt;Amazon Linux&lt;/strong&gt; AMI after selecting &lt;strong&gt;Quick Start&lt;/strong&gt; in the &lt;strong&gt;Application and OS Images (Amazon Machine Image)&lt;/strong&gt; section&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxv428dgwwgp0vir63o7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxv428dgwwgp0vir63o7.png" width="797" height="741"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the t3.micro instance type:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs6r4yihtjhbca889hpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs6r4yihtjhbca889hpi.png" width="796" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Don’t include a key pair and select the following &lt;strong&gt;Security groups&lt;/strong&gt; to attach to the instances launched from this Launch Template:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WP Web Servers to allow connections from the Application Load Balancer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WP Database Clients to allow instances to connect to the Aurora MySQL DB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WP EFS Clients to allow instances to mount the NFS export of the EFS file system&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mfkwhxm15jo8clvf42f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mfkwhxm15jo8clvf42f.png" width="792" height="922"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Expand &lt;strong&gt;Advanced details&lt;/strong&gt; and use the script below to populate the User Data field as text.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr9sp0w5bqot18c3t725.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr9sp0w5bqot18c3t725.png" width="796" height="875"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Update the variables in the Bash script below with the values from your environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EFS_FS_ID&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This should be set to the file system ID of the EFS file system deployed in the previous lab. To obtain the file system ID visit the &lt;a href="https://console.aws.amazon.com/efs/home?#/file-systems" rel="noopener noreferrer"&gt;EFS console &lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DB_NAME&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is the name of the database which Wordpress should use to store its data. If you entered the default values in Lab 2 this should be wordpress. To confirm, visit the details page for your RDS database and look for &lt;em&gt;DB name&lt;/em&gt; under &lt;em&gt;Configuration&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DB_HOST&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is the hostname of your database Writer instance. To obtain this visit the details page for your RDS database and look under &lt;em&gt;Connectivity &amp;amp; Security&lt;/em&gt;. Use the &lt;em&gt;Writer&lt;/em&gt; type instance hostname, a value such as wordpress-workshop.cluster-ctdnyvvewl6s.eu-west-1.rds.amazonaws.com.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DB_USERNAME&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This will be the database username you specified in Lab 2. It can be found as &lt;em&gt;Master username&lt;/em&gt; under &lt;em&gt;Configuration&lt;/em&gt; on the details page for your RDS instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DB_PASSWORD&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is the password for the database user created in Lab 2.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    #!/bin/bash

    DB_NAME="wordpress"
    DB_USERNAME="wpadmin"
    DB_PASSWORD=""
    DB_HOST="wordpress-workshop.cluster-xxxxxxxxxx.eu-west-1.rds.amazonaws.com"
    EFS_FS_ID="fs-xxxxxxxxx"

    dnf update -y

    #install wget, apache server, php and efs utils
    dnf install -y httpd wget php-fpm php-mysqli php-json php amazon-efs-utils

    #create wp-content mountpoint
    mkdir -p /var/www/html/wp-content
    mount -t efs $EFS_FS_ID:/ /var/www/html/wp-content

    #install wordpress
    cd /var/www
    wget https://wordpress.org/latest.tar.gz
    tar -xzf latest.tar.gz
    cp wordpress/wp-config-sample.php wordpress/wp-config.php
    rm -f latest.tar.gz

    #change wp-config with DB details
    cp -rn wordpress/* /var/www/html/
    sed -i "s/database_name_here/$DB_NAME/g" /var/www/html/wp-config.php
    sed -i "s/username_here/$DB_USERNAME/g" /var/www/html/wp-config.php
    sed -i "s/password_here/$DB_PASSWORD/g" /var/www/html/wp-config.php
    sed -i "s/localhost/$DB_HOST/g" /var/www/html/wp-config.php
    #change httpd.conf to AllowOverride All
    #enable .htaccess files by editing the /var/www/html Directory block
    sed -i '/&amp;lt;Directory "\/var\/www\/html"&amp;gt;/,/&amp;lt;\/Directory&amp;gt;/ s/AllowOverride None/AllowOverride All/' /etc/httpd/conf/httpd.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create phpinfo file
echo "&amp;lt;?php phpinfo(); ?&amp;gt;" &amp;gt; /var/www/html/phpinfo.php

# Recursively change OWNER of directory /var/www and all its contents
chown -R apache:apache /var/www

systemctl restart httpd
systemctl enable httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
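&lt;p&gt;Before baking the script into the launch template, you can dry-run its sed substitutions locally against throwaway copies of the files. The sketch below uses placeholder values and miniature stand-ins for wp-config-sample.php and httpd.conf; only the sed commands themselves are taken from the script above:&lt;/p&gt;

```shell
#!/bin/bash
set -e

# Placeholder values standing in for your real environment
DB_NAME="wordpress"
DB_USERNAME="wpadmin"
DB_PASSWORD="example-password"
DB_HOST="wordpress-workshop.cluster-xxxxxxxxxx.eu-west-1.rds.amazonaws.com"

workdir=$(mktemp -d)

# Miniature stand-in for wp-config-sample.php
cat > "$workdir/wp-config.php" <<'EOF'
define( 'DB_NAME', 'database_name_here' );
define( 'DB_USER', 'username_here' );
define( 'DB_PASSWORD', 'password_here' );
define( 'DB_HOST', 'localhost' );
EOF

# The same substitutions the User Data script performs
sed -i "s/database_name_here/$DB_NAME/g" "$workdir/wp-config.php"
sed -i "s/username_here/$DB_USERNAME/g" "$workdir/wp-config.php"
sed -i "s/password_here/$DB_PASSWORD/g" "$workdir/wp-config.php"
sed -i "s/localhost/$DB_HOST/g" "$workdir/wp-config.php"

# Miniature stand-in for the Directory block in httpd.conf
cat > "$workdir/httpd.conf" <<'EOF'
<Directory "/var/www/html">
    AllowOverride None
</Directory>
EOF

# Flip AllowOverride None to All, but only inside the Directory block
sed -i '/<Directory "\/var\/www\/html">/,/<\/Directory>/ s/AllowOverride None/AllowOverride All/' "$workdir/httpd.conf"

cat "$workdir/wp-config.php"
cat "$workdir/httpd.conf"
```

&lt;p&gt;The DB_PASSWORD value here is a stand-in; in the real script use the password you set in Lab 2.&lt;/p&gt;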

&lt;p&gt;Review the final configuration under &lt;strong&gt;Summary&lt;/strong&gt; and click &lt;strong&gt;Create launch template&lt;/strong&gt;. You can disregard the warning about not being able to SSH into the servers and choose &lt;em&gt;Proceed without keypair&lt;/em&gt;, as you will not need remote access to them.&lt;/p&gt;
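&lt;p&gt;For repeatable setups, roughly the same launch template can be created with the AWS CLI. The sketch below assumes the User Data script has been saved to user-data.sh; the AMI and security group IDs are placeholders:&lt;/p&gt;

```shell
# Hypothetical IDs; substitute the AMI and the three security groups
# (WP Web Servers, WP Database Clients, WP EFS Clients) from your account
aws ec2 create-launch-template \
  --launch-template-name WP-WebServers-LT \
  --launch-template-data "{
    \"ImageId\": \"ami-xxxxxxxx\",
    \"InstanceType\": \"t3.micro\",
    \"SecurityGroupIds\": [\"sg-aaaaaaaa\", \"sg-bbbbbbbb\", \"sg-cccccccc\"],
    \"UserData\": \"$(base64 -w0 user-data.sh)\"
  }"
```

&lt;p&gt;The UserData field must be base64-encoded, which is what the embedded base64 call handles.&lt;/p&gt;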

&lt;h2&gt;
  
  
  Create the app server
&lt;/h2&gt;

&lt;p&gt;In this lab you will use the load balancer and launch template from the previous two labs to create an auto-scaling fleet of WordPress application servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab6#create-the-asg-for-the-back-end-web-servers" rel="noopener noreferrer"&gt;Create the ASG for the back-end web servers&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once you have created the launch template, you can proceed to creating the Auto Scaling group for the WordPress web servers.&lt;br&gt;
To do that, select &lt;strong&gt;Auto Scaling Groups&lt;/strong&gt; in the EC2 console, click &lt;strong&gt;Create an Auto Scaling Group&lt;/strong&gt;, specify a name, and select the previously created launch template:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg8zsztqffqlxnzu8xco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg8zsztqffqlxnzu8xco.png" width="733" height="712"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next screen make sure that the wordpress-workshop VPC is selected, together with the Application Subnet A and Application Subnet B subnets for the web servers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22k79cvyu6ccq8o7kzqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22k79cvyu6ccq8o7kzqh.png" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next screen &lt;strong&gt;Configure advanced options&lt;/strong&gt; choose the option &lt;strong&gt;Attach to an existing load balancer&lt;/strong&gt; and choose the target group you created earlier from the &lt;strong&gt;Existing load balancer target groups&lt;/strong&gt; list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h7q01nrl11xb65taw0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h7q01nrl11xb65taw0n.png" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Health checks&lt;/strong&gt; section, make sure to &lt;strong&gt;Turn on Elastic Load Balancing health checks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenb9g56ge21ufh9bfvil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenb9g56ge21ufh9bfvil.png" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Next&lt;/strong&gt; to configure group size and scaling policies with the following values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Group size&lt;/strong&gt; section, enter 2 in the &lt;strong&gt;Desired capacity&lt;/strong&gt; field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Scaling&lt;/strong&gt; section, enter 2 as &lt;strong&gt;Min desired capacity&lt;/strong&gt; and 4 as &lt;strong&gt;Max desired capacity&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Target tracking scaling policy&lt;/strong&gt; and enter 80 as &lt;strong&gt;Target value&lt;/strong&gt; for the &lt;strong&gt;Average CPU utilization&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5gijjnspxhkxkk0ad45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5gijjnspxhkxkk0ad45.png" width="800" height="1183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click through and accept the remaining defaults, then complete the creation of the Auto Scaling group by clicking &lt;strong&gt;Create Auto Scaling group&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The autoscaling group will now begin creating the desired number of EC2 instances based on the launch template you created. As the systems come online, the target group is updated with the instance details for your EC2 instances and the load balancer will begin distributing traffic across the instances. As instances are added or removed, the autoscaling group and load balancer will work in concert with one another to ensure that only healthy instances receive traffic.&lt;/p&gt;
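&lt;p&gt;If you prefer the AWS CLI, the console steps above can be sketched as follows. The group name, launch template name, subnet IDs, and target group ARN are illustrative placeholders — substitute the values from your own account:&lt;/p&gt;

```shell
# Create the Auto Scaling group (desired 2, min 2, max 4) attached to
# the existing target group, with ELB health checks enabled
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name wordpress-asg \
  --launch-template LaunchTemplateName=wordpress-launch-template,Version='$Latest' \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --target-group-arns arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/wordpress-tg/abc123 \
  --health-check-type ELB --health-check-grace-period 300

# Target tracking scaling policy: keep average CPU around 80%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name wordpress-asg \
  --policy-name cpu80-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":80.0}'
```

&lt;p&gt;Setting &lt;code&gt;--health-check-type ELB&lt;/code&gt; corresponds to turning on Elastic Load Balancing health checks in the console, so unhealthy instances are replaced rather than left in the group.&lt;/p&gt;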

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s4k859xyet7o6uze4go.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s4k859xyet7o6uze4go.png" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When your targets are deemed healthy in your target group, you can open the DNS name of your Application Load Balancer in your web browser to view your newly created WordPress installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf2l2ifbxx6sgkgx7s52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyf2l2ifbxx6sgkgx7s52.png" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab6#next-steps" rel="noopener noreferrer"&gt;Next steps&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You have now created a highly available, auto-scaling deployment of WordPress that will scale in and out in response to client traffic hitting the website.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Up
&lt;/h2&gt;

&lt;p&gt;If you used your own account to follow this workshop, please perform the actions below to remove all resources you created and stop incurring charges for them.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#delete-the-auto-scaling-group" rel="noopener noreferrer"&gt;Delete the Auto Scaling Group&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Go to the &lt;strong&gt;Auto Scaling Groups&lt;/strong&gt; section of the EC2 Console, select the Auto Scaling group created in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab6.md" rel="noopener noreferrer"&gt;Lab 6&lt;/a&gt;, open the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Delete&lt;/strong&gt;. To confirm deletion, type &lt;em&gt;delete&lt;/em&gt; in the text field of the dialog that opens and click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b7jinbsnbvfjuj9g22u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b7jinbsnbvfjuj9g22u.png" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#delete-the-load-balancer" rel="noopener noreferrer"&gt;Delete the Load Balancer&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;On the EC2 Console, go to the &lt;strong&gt;Load Balancers&lt;/strong&gt; section, select the Application Load Balancer created in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab4.md" rel="noopener noreferrer"&gt;Lab 4&lt;/a&gt;, open the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Delete load balancer&lt;/strong&gt;. To confirm deletion, type &lt;em&gt;confirm&lt;/em&gt; in the text field of the dialog that opens and click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdbcb5j4vxnw0fkv8j06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdbcb5j4vxnw0fkv8j06.png" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#delete-the-target-group" rel="noopener noreferrer"&gt;Delete the Target Group&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Go to the &lt;strong&gt;Target Groups&lt;/strong&gt; section, select the Target Group created in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab4.md" rel="noopener noreferrer"&gt;Lab 4&lt;/a&gt;, open the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Delete&lt;/strong&gt;. Click &lt;strong&gt;Yes, delete&lt;/strong&gt; in the dialog that opens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88crha9w4zm8dku2n96a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88crha9w4zm8dku2n96a.png" width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#delete-the-launch-template" rel="noopener noreferrer"&gt;Delete the Launch Template&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Move to the &lt;strong&gt;Launch Templates&lt;/strong&gt; section, select the Launch Template created in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab5.md" rel="noopener noreferrer"&gt;Lab 5&lt;/a&gt;, open the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Delete template&lt;/strong&gt;.&lt;br&gt;
To confirm deletion, type &lt;em&gt;Delete&lt;/em&gt; in the field and click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwng53n3cd72fdxr0gn6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwng53n3cd72fdxr0gn6k.png" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#verify-instances" rel="noopener noreferrer"&gt;Verify instances&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once you delete the Auto Scaling Group, the instances will begin shutting down and eventually they will be terminated. Verify that all instances launched by the Auto Scaling Group have been terminated correctly.&lt;br&gt;
You can use the &lt;strong&gt;aws:autoscaling:groupName&lt;/strong&gt; attribute to filter instances launched by the Auto Scaling Group created in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/application/lab6.md" rel="noopener noreferrer"&gt;Lab 6&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyswm6phpykvb4qo0nrnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyswm6phpykvb4qo0nrnr.png" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;
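&lt;p&gt;From the CLI, the same check can be done by filtering on the &lt;code&gt;aws:autoscaling:groupName&lt;/code&gt; tag. The group name below is an illustrative placeholder; an empty result means all of the group's instances have been terminated:&lt;/p&gt;

```shell
# List any non-terminated instances still tagged by the Auto Scaling group
aws ec2 describe-instances \
  --filters "Name=tag:aws:autoscaling:groupName,Values=wordpress-asg" \
            "Name=instance-state-name,Values=pending,running,shutting-down,stopping,stopped" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text
```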

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#delete-the-aurora-cluster" rel="noopener noreferrer"&gt;Delete the Aurora cluster&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the &lt;a href="https://console.aws.amazon.com/rds/home?#databases:" rel="noopener noreferrer"&gt;RDS console&lt;/a&gt;, select the &lt;strong&gt;wordpress-workshop&lt;/strong&gt; &lt;em&gt;Regional cluster&lt;/em&gt; and click &lt;strong&gt;Modify&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll to the bottom of the page, uncheck &lt;strong&gt;Enable deletion protection&lt;/strong&gt; and click &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffo8vxfoebszvgvdwype9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffo8vxfoebszvgvdwype9.png" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the option &lt;strong&gt;Apply immediately&lt;/strong&gt; and click &lt;strong&gt;Modify cluster&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z7gy8ipo26l1v3jw5t3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z7gy8ipo26l1v3jw5t3.png" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On the &lt;a href="https://console.aws.amazon.com/rds/home?#databases:" rel="noopener noreferrer"&gt;RDS console&lt;/a&gt;, select the &lt;em&gt;Reader instance&lt;/em&gt;, go to the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Delete&lt;/strong&gt;.
To confirm deletion, type &lt;em&gt;delete me&lt;/em&gt; into the field and click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxeg78rjh7hed0cr9itf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxeg78rjh7hed0cr9itf.png" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Now select the &lt;em&gt;Writer instance&lt;/em&gt;, go to the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Delete&lt;/strong&gt;.&lt;br&gt;
To confirm deletion, type &lt;em&gt;delete me&lt;/em&gt; into the field and click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can delete the &lt;em&gt;Regional cluster&lt;/em&gt; now. Select it, go to the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Delete&lt;/strong&gt;.&lt;br&gt;
Make sure to:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;uncheck &lt;strong&gt;Create final snapshot&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;check the acknowledgement&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To confirm deletion, type &lt;em&gt;delete me&lt;/em&gt; into the field and click &lt;strong&gt;Delete DB cluster&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62kmm9rbh4gnn6j6uh28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62kmm9rbh4gnn6j6uh28.png" width="613" height="725"&gt;&lt;/a&gt;&lt;/p&gt;
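&lt;p&gt;The same clean-up sequence can be driven from the CLI; the ordering matters, since deletion protection must be removed first and the instances must be deleted before the cluster. The instance identifiers below are illustrative placeholders:&lt;/p&gt;

```shell
# 1. Turn off deletion protection on the cluster
aws rds modify-db-cluster --db-cluster-identifier wordpress-workshop \
  --no-deletion-protection --apply-immediately

# 2. Delete the reader and writer instances (use your own identifiers)
aws rds delete-db-instance --db-instance-identifier wordpress-workshop-reader
aws rds delete-db-instance --db-instance-identifier wordpress-workshop-writer

# 3. Delete the cluster itself without keeping a final snapshot
aws rds delete-db-cluster --db-cluster-identifier wordpress-workshop \
  --skip-final-snapshot
```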

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#verify-rds-snapshots" rel="noopener noreferrer"&gt;Verify RDS Snapshots&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Once the RDS Aurora Cluster has been completely deleted, move to the &lt;a href="https://console.aws.amazon.com/rds/home?#snapshots-list:tab=automated" rel="noopener noreferrer"&gt;RDS Snapshots &lt;/a&gt;page, select the &lt;strong&gt;System&lt;/strong&gt; tab and make sure no automated snapshots are present for the Aurora cluster created during the workshop.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#delete-efs-filesystem" rel="noopener noreferrer"&gt;Delete EFS filesystem&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Go to the &lt;a href="https://console.aws.amazon.com/efs/home?#/file-systems" rel="noopener noreferrer"&gt;EFS Console &lt;/a&gt;, select the file system created in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/datatier/lab3.md" rel="noopener noreferrer"&gt;Lab 3&lt;/a&gt; and click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;br&gt;
Confirm the deletion by entering the file system’s ID in the dialog that will appear and click &lt;strong&gt;Confirm&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefo5zfl6vhh8pkifooqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefo5zfl6vhh8pkifooqr.png" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;
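&lt;p&gt;Note for CLI users: unlike the console, the &lt;code&gt;aws efs&lt;/code&gt; commands require you to delete the file system's mount targets before the file system itself. A hedged sketch, reusing the placeholder file system ID from the user-data script:&lt;/p&gt;

```shell
FS_ID="fs-xxxxxxxxx"   # placeholder -- use your own file system ID

# Delete every mount target of the file system
for mt in $(aws efs describe-mount-targets --file-system-id "$FS_ID" \
              --query "MountTargets[].MountTargetId" --output text); do
  aws efs delete-mount-target --mount-target-id "$mt"
done

# Once the mount targets are gone, delete the file system
aws efs delete-file-system --file-system-id "$FS_ID"
```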

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#delete-vpc" rel="noopener noreferrer"&gt;Delete VPC&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#delete-nat-gateways" rel="noopener noreferrer"&gt;Delete NAT Gateways&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Go to the &lt;a href="https://console.aws.amazon.com/vpcconsole/home?#NatGateways:" rel="noopener noreferrer"&gt;NAT gateways&lt;/a&gt; page of the VPC Console, select one NAT Gateway at a time, go to the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Delete NAT gateway&lt;/strong&gt;.&lt;br&gt;
To confirm deletion, type &lt;em&gt;delete&lt;/em&gt; in the field and click &lt;strong&gt;Delete&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm52adxus8xatcgtq6ctg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm52adxus8xatcgtq6ctg.png" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#delete-vpc" rel="noopener noreferrer"&gt;Delete VPC&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Select the wordpress-workshop VPC from the &lt;a href="https://console.aws.amazon.com/vpcconsole/home?#vpcs:" rel="noopener noreferrer"&gt;Your VPCs&lt;/a&gt; page in the VPC Console. Go to the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Delete VPC&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc77i8zcbegmzmw1a8f2p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc77i8zcbegmzmw1a8f2p.png" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will be able to delete the VPC only if no network interfaces are still present in any of the subnets of the VPC. The dialog that will appear when you click on &lt;strong&gt;Delete VPC&lt;/strong&gt; will show any remaining ENIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat9xwkqhz0klb1utqo1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat9xwkqhz0klb1utqo1w.png" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;network interfaces&lt;/strong&gt; link to check which services are still using the VPC and take appropriate actions.&lt;/p&gt;

&lt;p&gt;Once all ENIs have been removed, you will be able to delete the VPC.&lt;br&gt;
The dialog that will appear shows which resources will be deleted once you delete the VPC.&lt;br&gt;
To confirm deletion, type delete in the field and click &lt;strong&gt;Delete&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi3ras5majrf3ax4ua1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi3ras5majrf3ax4ua1l.png" width="800" height="629"&gt;&lt;/a&gt;&lt;/p&gt;
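&lt;p&gt;The same ENI check can be scripted: list any network interfaces still present in the VPC, and only attempt deletion once the list is empty. The VPC ID below is an illustrative placeholder:&lt;/p&gt;

```shell
# Find ENIs still attached inside the VPC before attempting deletion
aws ec2 describe-network-interfaces \
  --filters "Name=vpc-id,Values=vpc-0123456789abcdef0" \
  --query "NetworkInterfaces[].[NetworkInterfaceId,Description,Status]" \
  --output table

# Once no interfaces remain, the VPC can be deleted
aws ec2 delete-vpc --vpc-id vpc-0123456789abcdef0
```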

&lt;h3&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#release-elastic-ips" rel="noopener noreferrer"&gt;Release Elastic IPs&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Go to the &lt;a href="https://console.aws.amazon.com/vpcconsole/home?#Addresses:" rel="noopener noreferrer"&gt;Elastic IPs&lt;/a&gt; page of the VPC Console, select all unassociated Elastic IPs (the &lt;em&gt;Association ID&lt;/em&gt; value is -), open the &lt;strong&gt;Actions&lt;/strong&gt; menu and select &lt;strong&gt;Release Elastic IP addresses&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7cy79808uauiegzxacg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7cy79808uauiegzxacg7.png" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/3de93ad5-ebbe-4258-b977-b45cdfe661f1/en-US/summary/clean-up#remove-the-iam-user" rel="noopener noreferrer"&gt;Remove the IAM User&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;If you created the workshop-user IAM User to follow the workshop labs, make sure to delete it from the &lt;a href="https://us-east-1.console.aws.amazon.com/iam/home?#/users" rel="noopener noreferrer"&gt;IAM Console&lt;/a&gt;, logging in with an IAM user or IAM role that has the appropriate permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j4gw9r91jy3fxt8s4ez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j4gw9r91jy3fxt8s4ez.png" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges Faced and Solutions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database Connection Issues&lt;/strong&gt;: When setting up RDS, there were some configuration issues with access control.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Adjusted VPC security groups and ensured that EC2 instances had the correct permissions to access RDS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;WordPress File Management&lt;/strong&gt;: Managing WordPress media files on multiple instances was initially challenging.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Implemented Amazon EFS as a shared file system, enabling seamless media management across instances.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This AWS project demonstrates how to create a highly available, scalable WordPress environment, ideal for production-grade sites with high traffic. It showcases AWS’s capabilities in managing infrastructure to minimize manual intervention while maximizing uptime.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learning</category>
      <category>wordpress</category>
      <category>web</category>
    </item>
    <item>
      <title>Building a High-Availability Multi-Tier Web App on AWS with Amazon VPC, EC2, and Aurora RDS</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Tue, 12 Nov 2024 09:15:38 +0000</pubDate>
      <link>https://forem.com/shubham_murti/building-a-high-availability-multi-tier-web-app-on-aws-with-amazon-vpc-ec2-and-aurora-rds-2lla</link>
      <guid>https://forem.com/shubham_murti/building-a-high-availability-multi-tier-web-app-on-aws-with-amazon-vpc-ec2-and-aurora-rds-2lla</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;In this project, I developed a highly available, multi-tier, and fault-tolerant web application on AWS, focusing on uptime, scalability, and security - making it suitable for production use. This experience allowed me to work hands-on with essential AWS services, like Amazon VPC, Amazon EC2, Amazon Aurora, and Amazon S3, to build an architecture that provides high performance, resilience, and cost efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon VPC&lt;/strong&gt;: Provides isolated networking environments for secure data flow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EC2&lt;/strong&gt;: Hosts scalable web and application server instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Aurora&lt;/strong&gt;: A managed, high-performance relational database with automated failover.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: Stores and serves static content with durability and low latency.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Account&lt;/strong&gt;: Required to access and configure all necessary AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS CLI&lt;/strong&gt;: For managing resources, configurations, and deployment tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Basic Networking Knowledge&lt;/strong&gt;: Familiarity with networking concepts like subnets, load balancing, and security groups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Console Proficiency&lt;/strong&gt;: Experience using the AWS Console for deploying and configuring services.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Problem Statement or Use Case
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt;: Traditional on-premises infrastructure often fails to meet the needs of applications requiring high availability and fault tolerance, especially under varying loads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: This project implements a &lt;strong&gt;multi-tier architecture&lt;/strong&gt; with &lt;strong&gt;high availability&lt;/strong&gt; and &lt;strong&gt;fault tolerance&lt;/strong&gt; using AWS. By setting up a web application across multiple tiers (frontend, application logic, and database), the solution ensures seamless user experience, even in case of server failure or maintenance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Relevance&lt;/strong&gt;: The solution suits production environments where uptime is crucial, such as e-commerce platforms, content-driven websites, and customer-facing applications. The architecture can dynamically adjust resources to accommodate fluctuating traffic, making it scalable and cost-effective.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Diagram
&lt;/h3&gt;

&lt;p&gt;Below is a high-level overview of the architecture used:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8t2jrcavh6ezf1bpnnj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8t2jrcavh6ezf1bpnnj0.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Component Breakdown
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon VPC&lt;/strong&gt;: Provides network isolation, enabling private and public subnets to securely route traffic between the internet and internal services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon EC2&lt;/strong&gt;: Hosts web and application server instances, with auto-scaling groups to dynamically adjust resources as traffic demands change.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load Balancer&lt;/strong&gt;: Manages incoming requests and distributes them to healthy EC2 instances across different availability zones, ensuring high availability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Aurora&lt;/strong&gt;: A managed relational database that automatically replicates data and performs failovers, providing a resilient storage solution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: Stores and delivers static content like images, CSS, and JavaScript files, reducing load on EC2 instances and improving performance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step-by-Step Implementation
&lt;/h3&gt;

&lt;h2&gt;
  
  
  Network — Amazon VPC
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Create VPC
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon Virtual Private Cloud (Amazon VPC)&lt;/strong&gt; lets you launch AWS resources into a virtual network that you define. This virtual network closely resembles a traditional network that you would operate in your own data center, with the added benefit of AWS’s scalable infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/network/10-index#move-on-to-vpc-service" rel="noopener noreferrer"&gt;Move on to VPC service&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;After logging in to the AWS console, select &lt;strong&gt;VPC&lt;/strong&gt; from the service menu.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90y0d151cg1nel5oz7wp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90y0d151cg1nel5oz7wp.png" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If your screen differs from the screenshot below, switch the &lt;strong&gt;New VPC Experience&lt;/strong&gt; toggle on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1n9gp0hxmvela6thepm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1n9gp0hxmvela6thepm.png" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/network/10-index#create-vpc-through-vpc-wizard" rel="noopener noreferrer"&gt;Create VPC through VPC Wizard&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;VPC Dashboard&lt;/strong&gt; and click &lt;strong&gt;Create VPC&lt;/strong&gt; to create your own VPC.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0ya2cbup8j3rt1ruh8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0ya2cbup8j3rt1ruh8k.png" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To provision the AWS resources used in this lab, we will create a VPC and its subnets. Select &lt;strong&gt;VPC and more&lt;/strong&gt; under the &lt;strong&gt;Resources to create&lt;/strong&gt; tab and change the name tag to &lt;strong&gt;VPC-Lab&lt;/strong&gt;. Leave the IPv4 CIDR block at its default setting.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5x6fmpyanamkmguknlg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5x6fmpyanamkmguknlg.png" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a best practice to deploy resources across multiple Availability Zones for high availability and fault tolerance.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To design a highly available architecture, create &lt;strong&gt;2&lt;/strong&gt; subnets per tier and select &lt;strong&gt;2a&lt;/strong&gt; and &lt;strong&gt;2c&lt;/strong&gt; under &lt;strong&gt;Customize AZs&lt;/strong&gt;. Then set the CIDR values of the public subnets, which communicate directly with the internet, and the private subnets as shown in the screens below.&lt;/li&gt;
&lt;/ol&gt;
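&lt;p&gt;The wizard’s default &lt;strong&gt;10.0.0.0/16&lt;/strong&gt; block is split into &lt;strong&gt;/20&lt;/strong&gt; subnets. As a rough sketch (assuming you kept the wizard defaults; adjust the octets if you chose different CIDRs), the subnet values follow simple arithmetic:&lt;/p&gt;

```shell
#!/bin/sh
# Derive four /20 subnet CIDRs from a 10.0.0.0/16 VPC block, as the
# wizard's default split does. Each /20 holds 4096 addresses, so each
# successive subnet advances the third octet by 16.
for i in 0 1 2 3; do
  echo "10.0.$((i * 16)).0/20"
done
# prints:
# 10.0.0.0/20
# 10.0.16.0/20
# 10.0.32.0/20
# 10.0.48.0/20
```

&lt;p&gt;Knowing this layout makes it easy to confirm that the public and private subnet CIDRs shown in the wizard do not overlap.&lt;/p&gt;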

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dxz0elxl7fam4y0qrs1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dxz0elxl7fam4y0qrs1.png" width="426" height="858"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mqqlfayjd1ci1gzuck6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mqqlfayjd1ci1gzuck6.png" width="369" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You can use a &lt;strong&gt;NAT gateway&lt;/strong&gt; so that instances in your private subnets can connect to services outside your VPC, but external services cannot initiate direct connections to these instances. In this lab, we will create a NAT gateway in only one Availability Zone to save cost. Also, for DNS options, &lt;strong&gt;enable&lt;/strong&gt; both &lt;strong&gt;DNS hostnames&lt;/strong&gt; and &lt;strong&gt;DNS resolution&lt;/strong&gt;. After confirming the setting value, click the &lt;strong&gt;Create VPC&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;
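&lt;p&gt;The same VPC settings can also be expressed with the AWS CLI. The sketch below is illustrative only: it defines helper functions without calling them, and assumes the CLI is configured with valid credentials and a default region.&lt;/p&gt;

```shell
#!/bin/sh
# Hedged CLI sketch of the wizard's VPC step: create a /16 VPC tagged
# VPC-Lab, then enable DNS resolution and DNS hostnames on it.
# Functions are defined but not invoked here.
create_vpc() {
  aws ec2 create-vpc \
    --cidr-block 10.0.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=VPC-Lab}]' \
    --query 'Vpc.VpcId' --output text
}

enable_vpc_dns() {   # usage: enable_vpc_dns VPC_ID
  aws ec2 modify-vpc-attribute --vpc-id "$1" --enable-dns-support '{"Value":true}'
  aws ec2 modify-vpc-attribute --vpc-id "$1" --enable-dns-hostnames '{"Value":true}'
}
```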

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Far0voi0txch7t1ie11gq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Far0voi0txch7t1ie11gq.png" width="463" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;While the VPC is being created, you can watch the network-related resources being provisioned, as shown below. The NAT gateway may take longer to provision than the other resources.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fxu2t90pxtmsctbw2qj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fxu2t90pxtmsctbw2qj.png" width="800" height="935"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Review the details of the newly created VPC, such as its &lt;strong&gt;CIDR&lt;/strong&gt; block, route tables, and network ACLs, and confirm that the values you just set are correct.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvfil699rvyngl24b7jp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvfil699rvyngl24b7jp.png" width="800" height="706"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/network/10-index#architecture-configured-so-far" rel="noopener noreferrer"&gt;Architecture Configured So Far&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;With the VPC created through the VPC Wizard, the environment configured so far looks like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9pyf022horivjsi84lk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9pyf022horivjsi84lk.png" width="800" height="651"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges Faced and Solutions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-AZ Latency&lt;/strong&gt;: Replicating data across availability zones resulted in some latency in data access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Relied on Aurora’s automated replication across Availability Zones, which keeps replicas in sync with minimal lag while maintaining data consistency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auto Scaling Configuration&lt;/strong&gt;: Initially faced challenges with EC2 instances not scaling back down after load reduction.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Adjusted &lt;strong&gt;Auto Scaling policies&lt;/strong&gt; to ensure smoother scaling transitions, keeping resource usage cost-effective.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create VPC Endpoint
&lt;/h2&gt;

&lt;p&gt;In this section, you create an S3 endpoint to learn how VPC endpoints work. Skipping this step will not affect your progress in the next lab.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/network/20-index#vpc-endpoint" rel="noopener noreferrer"&gt;VPC Endpoint&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the &lt;strong&gt;VPC Dashboard&lt;/strong&gt;, select &lt;strong&gt;Endpoints&lt;/strong&gt;, then click the &lt;strong&gt;Create endpoint&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fava2ux0ouiif9mqsq592.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fava2ux0ouiif9mqsq592.png" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter &lt;strong&gt;s3 endpoint&lt;/strong&gt; as the name and select &lt;strong&gt;AWS services&lt;/strong&gt; in the Service category tab. In the search bar below, type &lt;strong&gt;s3&lt;/strong&gt; and select the first entry in the list.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekhc3hl26rbzv1cge4ir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekhc3hl26rbzv1cge4ir.png" width="800" height="748"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;S3 VPC endpoints come in two types: &lt;strong&gt;gateway&lt;/strong&gt; and &lt;strong&gt;interface&lt;/strong&gt;. For this lab, select the &lt;strong&gt;gateway&lt;/strong&gt; type, and for the deployment location select the &lt;strong&gt;VPC-Lab-vpc&lt;/strong&gt; created earlier in this lab.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokb8mxyujpqi6svkwoep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokb8mxyujpqi6svkwoep.png" width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Choose a route table to reflect the endpoint. Select the &lt;strong&gt;two private subnets&lt;/strong&gt; as shown below. Additional routing information for using the endpoint is automatically added to the selected route table.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ejs6n57rh13k9ndq1zw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ejs6n57rh13k9ndq1zw.png" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You can also configure policies to control access to endpoints as shown below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwkifpxj0ugxefkzuasn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwkifpxj0ugxefkzuasn.png" width="513" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use VPC endpoint policies to allow full access to AWS services or to create custom policies. See &lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html#vpc-endpoint-policies" rel="noopener noreferrer"&gt;Use VPC endpoint policies&lt;/a&gt; for details.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Confirm that the route to access Amazon S3 through the gateway endpoint has been automatically added to the &lt;strong&gt;private route table&lt;/strong&gt; specified earlier.&lt;/li&gt;
&lt;/ol&gt;
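&lt;p&gt;For reference, the same gateway endpoint could be created with the AWS CLI. This is a hedged sketch: the function is defined but not invoked, the region in the service name falls back to a placeholder, and the VPC and route table IDs are supplied by the caller.&lt;/p&gt;

```shell
#!/bin/sh
# Hedged CLI sketch of the S3 gateway endpoint step. The service name is
# region-specific; AWS_REGION here is an assumed environment variable with
# a placeholder fallback. Defined but not invoked.
create_s3_gateway_endpoint() {   # usage: create_s3_gateway_endpoint VPC_ID RTB_ID...
  vpc_id="$1"
  shift
  aws ec2 create-vpc-endpoint \
    --vpc-id "$vpc_id" \
    --vpc-endpoint-type Gateway \
    --service-name "com.amazonaws.${AWS_REGION:-us-east-1}.s3" \
    --route-table-ids "$@"
}
```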

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsldklwb0mn601imm1bzr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsldklwb0mn601imm1bzr.png" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Traffic through a VPC endpoint &lt;strong&gt;stays within the AWS network&lt;/strong&gt;, which brings &lt;strong&gt;security and compliance&lt;/strong&gt; advantages: you control exactly what traffic passes through the endpoint. You can also reduce &lt;strong&gt;data processing costs&lt;/strong&gt; by transferring data through a VPC endpoint rather than a NAT gateway.&lt;/p&gt;

&lt;p&gt;In this section, you created an S3 gateway endpoint to allow private S3 access from within the VPC without needing an internet gateway. This keeps S3 traffic private within the AWS network.&lt;/p&gt;

&lt;h2&gt;
  
  
  Compute — Amazon EC2
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Launch a web server instance
&lt;/h2&gt;

&lt;p&gt;This chapter starts with a default Amazon Linux instance and automatically configures an Apache/PHP web server during the initial boot.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/launching#launch-instance-and-connect-to-web-service" rel="noopener noreferrer"&gt;Launch instance and connect to web service&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the AWS console search bar, type &lt;a href="http://console.aws.amazon.com/ec2" rel="noopener noreferrer"&gt;EC2&lt;/a&gt; and select it. Then click &lt;strong&gt;EC2 Dashboard&lt;/strong&gt; at the top of the left menu. Press the &lt;strong&gt;Launch instance&lt;/strong&gt; button and select &lt;strong&gt;Launch instance&lt;/strong&gt; from the menu.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx16q9ehf4fm281vz180.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx16q9ehf4fm281vz180.png" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;strong&gt;Web server for custom AMI&lt;/strong&gt;, and keep the default selection under Amazon Machine Image below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmnpr7ckaw4156u1t4w1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmnpr7ckaw4156u1t4w1.png" width="800" height="903"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqvh4pltqzrt1j1sijai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqvh4pltqzrt1j1sijai.png" width="297" height="77"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select t2.micro in &lt;strong&gt;Instance Type&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdfa0bceuz5a8l6z8ke1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdfa0bceuz5a8l6z8ke1.png" width="791" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;For &lt;strong&gt;Key pair&lt;/strong&gt;, choose &lt;strong&gt;Proceed without a key pair&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4kulkvhhy5ygc5zvret.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4kulkvhhy5ygc5zvret.png" width="795" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;Edit&lt;/strong&gt; button in &lt;strong&gt;Network settings&lt;/strong&gt; to configure where the EC2 instance will be placed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi28w06k44o40lfuq0vu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi28w06k44o40lfuq0vu.png" width="795" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the &lt;strong&gt;VPC-Lab-vpc&lt;/strong&gt; created in the previous lab, and for the subnet choose a &lt;strong&gt;public subnet&lt;/strong&gt;. Set &lt;strong&gt;Auto-assign public IP&lt;/strong&gt; to &lt;strong&gt;Enable&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jd62289p8capyaql74s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jd62289p8capyaql74s.png" width="752" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk3aw8cmfpuxcv3znmw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk3aw8cmfpuxcv3znmw5.png" width="530" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Right below, create a &lt;strong&gt;security group&lt;/strong&gt; to act as a network firewall. A security group specifies the protocols and source addresses allowed by your firewall policy; the one you create here applies to the EC2 instance about to be launched. Enter &lt;strong&gt;Immersion Day - Web Server&lt;/strong&gt; in both &lt;strong&gt;Security group name&lt;/strong&gt; and &lt;strong&gt;Description&lt;/strong&gt;, select &lt;strong&gt;Add security group rule&lt;/strong&gt;, and set &lt;strong&gt;Type&lt;/strong&gt; to &lt;strong&gt;HTTP&lt;/strong&gt;, which allows TCP/80 for the web service. Select &lt;strong&gt;My IP&lt;/strong&gt; as the source.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr9a55p3lrilt396hzq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flr9a55p3lrilt396hzq1.png" width="766" height="802"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh570qnyh5kt3wj288gib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh570qnyh5kt3wj288gib.png" width="750" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a best practice to configure security groups following the principle of least privilege, allowing only the minimum required traffic.&lt;/p&gt;
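&lt;p&gt;The console steps above map to two CLI calls. The sketch below defines (but does not invoke) helper functions; the checkip.amazonaws.com lookup approximates the console’s &lt;strong&gt;My IP&lt;/strong&gt; option and is my own assumption, not part of the workshop.&lt;/p&gt;

```shell
#!/bin/sh
# Hedged CLI sketch of the security-group step: create the group in the
# lab VPC, then allow TCP/80 only from the caller's current public IP.
# Functions are defined but not invoked here.
create_web_sg() {            # usage: create_web_sg VPC_ID
  aws ec2 create-security-group \
    --group-name "Immersion Day - Web Server" \
    --description "Immersion Day - Web Server" \
    --vpc-id "$1" \
    --query 'GroupId' --output text
}

allow_http_from_my_ip() {    # usage: allow_http_from_my_ip SG_ID
  my_ip=$(curl -s https://checkip.amazonaws.com)
  aws ec2 authorize-security-group-ingress \
    --group-id "$1" --protocol tcp --port 80 --cidr "${my_ip}/32"
}
```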

&lt;ol&gt;
&lt;li&gt;Accept the default values for everything else, then expand the &lt;strong&gt;Advanced Details&lt;/strong&gt; section at the bottom of the screen.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9hwcgsxtxd5dognbjyj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9hwcgsxtxd5dognbjyj.png" width="796" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;strong&gt;Metadata version&lt;/strong&gt; dropdown and select &lt;strong&gt;V2 only (token required)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha4q59viepo3m4pp89ay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha4q59viepo3m4pp89ay.png" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;
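&lt;p&gt;With &lt;strong&gt;V2 only&lt;/strong&gt; selected, every metadata request from inside the instance must present a session token. A minimal sketch of what that looks like on the instance itself (the helper is defined but not called, since it only works on EC2):&lt;/p&gt;

```shell
#!/bin/sh
# IMDSv2 requires a session token before any metadata read: first PUT a
# token request, then pass the token as a header on each GET. This only
# works from inside an EC2 instance, so the function is not invoked here.
imds_get() {   # usage: imds_get meta-data/instance-id
  token=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
  curl -s -H "X-aws-ec2-metadata-token: $token" \
    "http://169.254.169.254/latest/$1"
}
```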

&lt;p&gt;Enter the following values in the &lt;strong&gt;User data&lt;/strong&gt; field and select &lt;strong&gt;Launch instance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc6rxq7m8pbrpkdbx3ua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsc6rxq7m8pbrpkdbx3ua.png" width="492" height="872"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh

# Install a LAMP stack
dnf install -y httpd wget php-fpm php-mysqli php-json php php-devel
dnf install -y mariadb105-server
dnf install -y httpd php-mbstring

# Start the web server
chkconfig httpd on
systemctl start httpd

# Install the web pages for our lab
if [ ! -f /var/www/html/immersion-day-app-php7.zip ]; then
   cd /var/www/html
   wget -O 'immersion-day-app-php7.zip' 'https://static.us-east-1.prod.workshops.aws/public/2e449d3a-fc13-44c9-8c99-35a37735e7f5/assets/immersion-day-app-php7.zip'
   unzip immersion-day-app-php7.zip
fi

# Install the AWS SDK for PHP
if [ ! -f /var/www/html/aws.zip ]; then
   cd /var/www/html
   mkdir vendor
   cd vendor
   wget https://docs.aws.amazon.com/aws-sdk-php/v3/download/aws.zip
   unzip aws.zip
fi

# Update existing packages
dnf update -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;User data is a user-defined initialization script that runs when the instance first boots.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Information indicating that the instance creation is in progress is displayed on the screen. You can view the list of EC2 instances by selecting &lt;strong&gt;View Instances&lt;/strong&gt; in the lower right corner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After the instance configuration is complete, you can check the Availability Zone in which the instance is running, along with its externally accessible IP and DNS information.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
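&lt;p&gt;The launch itself can also be scripted. This is a hedged sketch, not the workshop’s method: the AMI, subnet, and security group IDs are caller-supplied placeholders, and userdata.sh is assumed to contain the user data script above.&lt;/p&gt;

```shell
#!/bin/sh
# Hedged CLI sketch of the launch step: a t2.micro in a public subnet with
# a public IP, IMDSv2 enforced, and the user-data file applied at first
# boot. All IDs are caller-supplied placeholders; defined but not invoked.
launch_web_server() {   # usage: launch_web_server AMI_ID SUBNET_ID SG_ID
  aws ec2 run-instances \
    --image-id "$1" \
    --instance-type t2.micro \
    --subnet-id "$2" \
    --security-group-ids "$3" \
    --associate-public-ip-address \
    --metadata-options "HttpTokens=required,HttpEndpoint=enabled" \
    --user-data file://userdata.sh \
    --query 'Instances[0].InstanceId' --output text
}
```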

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwio744dgoz7nqbosozl0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwio744dgoz7nqbosozl0.png" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Wait for the instance’s &lt;strong&gt;Instance state&lt;/strong&gt; result to be &lt;strong&gt;Running&lt;/strong&gt;. Open a new web browser tab and enter the &lt;strong&gt;Public DNS or IPv4 Public IP&lt;/strong&gt; of your EC2 instance in the URL address field. If the page is displayed as shown below, the web server instance is configured normally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you are using the Chrome web browser, it may automatically prepend &lt;strong&gt;https://&lt;/strong&gt; to the &lt;strong&gt;Public IPv4 DNS&lt;/strong&gt; value, which prevents the page from loading. In that case, enter the address with an explicit &lt;strong&gt;http://&lt;/strong&gt; prefix.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmmr9ckhucjvth485rxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmmr9ckhucjvth485rxm.png" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/launching#access-the-web-service" rel="noopener noreferrer"&gt;Access the web service&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to the EC2 instance console. Select the instance you want to connect to and click the &lt;strong&gt;Connect&lt;/strong&gt; button in the center.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf4y1y9v0up3khbgnnn0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf4y1y9v0up3khbgnnn0.png" width="800" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the &lt;strong&gt;Connect your instance&lt;/strong&gt; window, select the EC2 Instance Connect tab, then click the &lt;strong&gt;Connect&lt;/strong&gt; button in the lower right corner.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9pq6wmniri6swhaltsi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9pq6wmniri6swhaltsi.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After a moment, the browser-based SSH console opens, as shown below. Close the window once you have finished testing the CLI.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzhrh2ls51tsaf7991o4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzhrh2ls51tsaf7991o4.png" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/launching#connect-to-the-linux-instance-using-session-manager" rel="noopener noreferrer"&gt;Connect to the Linux instance using Session Manager&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You must click the &lt;strong&gt;Access your Linux instance using Session Manager&lt;/strong&gt; link below to proceed with the exercise.&lt;/p&gt;

&lt;p&gt;In the upcoming database lab, we will connect to the RDS database using the IAM role granted to the web server. Therefore, refer to &lt;a href="https://catalog.workshops.aws/general-immersionday/en-US/basic-modules/10-ec2/ec2-linux/3-ec2-1" rel="noopener noreferrer"&gt;Accessing Linux instance using Session Manager&lt;/a&gt; to assign an IAM role to the EC2 instance and connect to your Linux instance using Session Manager.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h3yqixgi55elgvtndla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h3yqixgi55elgvtndla.png" width="127" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/launching#create-a-custom-ami" rel="noopener noreferrer"&gt;Create a custom AMI&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In the AWS EC2 console, you can create a custom AMI to meet your needs, which can then be used to launch future EC2 instances. On this page, let’s create an AMI from the web server instance that we built earlier.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the EC2 console, select the instance that we made earlier in this lab, and click &lt;strong&gt;Actions&lt;/strong&gt; &amp;gt; &lt;strong&gt;Image and templates&lt;/strong&gt; &amp;gt; &lt;strong&gt;Create Image&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws10g8v782pju5exzyic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws10g8v782pju5exzyic.png" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the Create Image console, fill in the fields as shown below and press &lt;strong&gt;Create image&lt;/strong&gt; to create the custom image.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fremtn95eepuwf229625f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fremtn95eepuwf229625f.png" width="800" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppcz9ravymjhb5isu0e6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppcz9ravymjhb5isu0e6.png" width="325" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Verify in the console that the image creation request is completed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left navigation panel, click the &lt;strong&gt;AMIs&lt;/strong&gt; button located under &lt;strong&gt;IMAGES&lt;/strong&gt;. You can see the &lt;strong&gt;Status&lt;/strong&gt; of the AMI you just created; it will show either &lt;strong&gt;Pending&lt;/strong&gt; or &lt;strong&gt;Available&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k4ft2yzgc2mltjmwzz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k4ft2yzgc2mltjmwzz4.png" width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/launching#terminate-the-instance" rel="noopener noreferrer"&gt;Terminate the instance&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Creation of the custom AMI (golden image) for auto scaling, based on the EC2 instance you just created, is now complete.&lt;/strong&gt; The running EC2 instance is therefore no longer needed, so let’s terminate it. (In &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/launching/compute/auto-scaling" rel="noopener noreferrer"&gt;Deploy auto scaling web service&lt;/a&gt;, we will use the custom AMI to create a new web server.)&lt;/p&gt;

&lt;p&gt;Do &lt;strong&gt;not&lt;/strong&gt; terminate the “Web server for custom AMI” Instance until the AMI creation process is fully completed. Ensure the AMI status shows as &lt;strong&gt;Available&lt;/strong&gt; before proceeding.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the left navigation panel of the EC2 dashboard, select &lt;strong&gt;Instances&lt;/strong&gt;. Then select the instance that should be deleted. From there, click &lt;strong&gt;Instance state&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Terminate instance&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwu0zo8sclzoqbtuff2e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwu0zo8sclzoqbtuff2e.png" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When the alert message appears, click &lt;strong&gt;Terminate&lt;/strong&gt; to delete.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxltmkaeeolw42yyl7tas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxltmkaeeolw42yyl7tas.png" width="590" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The instance status changes to &lt;strong&gt;Shutting down&lt;/strong&gt; and then to &lt;strong&gt;Terminated&lt;/strong&gt;, completing the deletion. The instance may remain visible for a short period for deletion logging.&lt;/li&gt;
&lt;/ol&gt;
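&lt;p&gt;The ordering rule above (terminate the source instance only after the AMI is Available) can be sketched in code; the IDs in the comments are placeholders.&lt;/p&gt;

```python
# Sketch of the gate for this step: the source instance should be terminated
# only once the AMI has left the "pending" state.

def safe_to_terminate(image_state):
    """DescribeImages reports states such as 'pending', 'available', and
    'failed'; only 'available' means the snapshot no longer depends on the
    source instance."""
    return image_state == "available"

# With boto3 and valid credentials (placeholder IDs):
#   import boto3
#   ec2 = boto3.client("ec2")
#   state = ec2.describe_images(
#       ImageIds=["ami-0123456789abcdef0"])["Images"][0]["State"]
#   if safe_to_terminate(state):
#       ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"])
print(safe_to_terminate("pending"), safe_to_terminate("available"))
```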

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/launching#architecture-configured-so-far" rel="noopener noreferrer"&gt;Architecture Configured So Far&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ctare84ag7upclhet2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ctare84ag7upclhet2a.png" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conceptually, the resources configured so far correspond to the architecture shown in the diagram above.&lt;/p&gt;

&lt;p&gt;Congratulations! You have successfully created a Custom AMI (Golden Image) using the EC2 web server, which can be utilized for deploying an auto-scaling web service in the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy auto scaling web service
&lt;/h2&gt;

&lt;p&gt;Using the network infrastructure created in the Network - Amazon VPC lab, we will deploy a web service that can automatically scale out and in under load and ensure high availability. We will use the web server AMI created in the previous chapter and the network infrastructure named VPC-Lab.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/auto-scaling#configure-application-load-balancer" rel="noopener noreferrer"&gt;Configure Application Load Balancer&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;AWS Elastic Load Balancer supports three types of load balancers: Application Load Balancer, Network Load Balancer, and Gateway Load Balancer. In this lab, you will configure and set up the Application Load Balancer to handle load balancing HTTP requests.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the &lt;strong&gt;EC2 Management Console&lt;/strong&gt;, in the left navigation panel, click &lt;strong&gt;Load Balancers&lt;/strong&gt; under &lt;strong&gt;Load Balancing&lt;/strong&gt;, then click &lt;strong&gt;Create Load Balancer&lt;/strong&gt;. On the Select load balancer type page, click the &lt;strong&gt;Create&lt;/strong&gt; button under &lt;strong&gt;Application Load Balancer&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdwmly29csmsjqz97yrw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdwmly29csmsjqz97yrw.png" width="800" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Name the load balancer: enter Web-ALB for &lt;strong&gt;Name&lt;/strong&gt;. Leave the other settings at their default values.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjwi3otv8x7yo1pgj46c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjwi3otv8x7yo1pgj46c.png" width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a best practice to deploy resources across multiple Availability Zones for fault tolerance and high availability.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scroll down a little to the section for selecting availability zones. First, select the VPC-Lab-vpc created previously. For Availability Zones, select the two public subnets that were created previously: &lt;strong&gt;Public Subnet&lt;/strong&gt; for ap-northeast-2a and &lt;strong&gt;Public Subnet C&lt;/strong&gt; for ap-northeast-2c.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F213nm6wsggcnjsu63yrb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F213nm6wsggcnjsu63yrb.png" width="800" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the &lt;strong&gt;Security groups&lt;/strong&gt; section, click the &lt;strong&gt;Create new security group&lt;/strong&gt; hyperlink. Enter web-ALB-SG as the security group name and check the VPC information. Scroll down to modify the Inbound rules: click the &lt;strong&gt;Add rule&lt;/strong&gt; button and select &lt;strong&gt;HTTP&lt;/strong&gt; as the Type and &lt;strong&gt;Anywhere-IPv4&lt;/strong&gt; as the Source. Then create the security group.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtcva765zsixekkto36v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtcva765zsixekkto36v.png" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Return to the load balancer page again, click the refresh button, and select the &lt;strong&gt;web-ALB-SG&lt;/strong&gt; you just created. &lt;strong&gt;Remove the default security group.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff47uc2yttaky5ltv2k2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff47uc2yttaky5ltv2k2b.png" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the &lt;strong&gt;Listeners and routing&lt;/strong&gt; section, click &lt;strong&gt;Create target group&lt;/strong&gt;. Enter Web-TG for Target group name and verify that all settings match the screens below. Then click the &lt;strong&gt;Next&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wowm02zrwfjdys0cnx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wowm02zrwfjdys0cnx3.png" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9aiktcu18ohgo0b4efs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9aiktcu18ohgo0b4efs.png" width="800" height="1009"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;This is where we would register our instances. However, as mentioned earlier, there are no instances to register at this moment. Click &lt;strong&gt;Create target group&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxttwsjsx582c3qtyi4r7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxttwsjsx582c3qtyi4r7.png" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Move back to the Load balancers page, click the refresh button, and select Web-TG. Then click &lt;strong&gt;Create load balancer&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45wytcvz5g8003vr7nqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45wytcvz5g8003vr7nqk.png" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa20f8k9qy3wp82vqtvc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa20f8k9qy3wp82vqtvc3.png" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/auto-scaling#configure-launch-template" rel="noopener noreferrer"&gt;Configure launch template&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Now that the ALB has been created, it’s time to place instances behind the load balancer. To configure how Amazon EC2 instances start within an Auto Scaling group, you can use a &lt;strong&gt;Launch Template&lt;/strong&gt;, a &lt;strong&gt;Launch Configuration&lt;/strong&gt;, or an existing &lt;strong&gt;EC2 Instance&lt;/strong&gt;. In this workshop, we will use a &lt;strong&gt;Launch Template&lt;/strong&gt; to create the Auto Scaling group.&lt;/p&gt;

&lt;p&gt;The launch template configures all parameters within a resource at once, reducing the number of steps required to create an instance. Launch templates make it easier to implement best practices with support for Auto Scaling and spot fleets, as well as spot and on-demand instances. This helps you manage costs more conveniently, improve security, and minimize the risk of deployment errors.&lt;/p&gt;

&lt;p&gt;The launch template contains information that Amazon EC2 needs to start an instance, such as AMI and instance type. The Auto Scaling group refers to this and adds new instances when a scaling out event occurs. If you need to change the configuration of the EC2 instance to start in the Auto Scaling group, you can create a new version of the launch template and assign it to the Auto Scaling group. You can also select a specific version of the launch template that you use to start an EC2 instance in the Auto Scaling group, if necessary. You can change this setting at any time.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/auto-scaling#create-security-group" rel="noopener noreferrer"&gt;Create security group&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Before creating a launch template, let’s create a security group for the instances created through the launch template to use.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the left navigation panel of the EC2 console, select &lt;strong&gt;Security Groups&lt;/strong&gt; under the &lt;strong&gt;Network &amp;amp; Security&lt;/strong&gt; heading and click &lt;strong&gt;Create Security Group&lt;/strong&gt; in the upper right corner.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffncwzh5pf61pdw5qywq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffncwzh5pf61pdw5qywq5.png" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1m4be84rkuqj2cpqmlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1m4be84rkuqj2cpqmlz.png" width="315" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down to modify the Inbound rules. First, click the &lt;strong&gt;Add rule&lt;/strong&gt; button to add an inbound rule and select HTTP as the &lt;strong&gt;Type&lt;/strong&gt;. For &lt;strong&gt;Source&lt;/strong&gt;, type ALB in the search bar to find the Web-ALB-SG security group created earlier. This will &lt;strong&gt;configure the security group to accept only HTTP traffic coming from the ALB&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4edegydk5y6fwdujmt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4edegydk5y6fwdujmt6.png" width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenm30om1wv08knljdoyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenm30om1wv08knljdoyz.png" width="480" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Leave the outbound rules at their default settings and click &lt;strong&gt;Create Security Group&lt;/strong&gt; to create the new security group. This creates a security group that allows only HTTP connections (TCP 80) that reach the instance via the ALB from the Internet.&lt;/li&gt;
&lt;/ol&gt;
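&lt;p&gt;The “HTTP only from the ALB” rule works because the inbound rule references the ALB’s security group instead of a CIDR range. A sketch with placeholder group IDs:&lt;/p&gt;

```python
# Sketch of an ingress rule whose source is another security group
# (ec2 authorize-security-group-ingress). Group IDs are placeholders.

def build_ingress_from_sg(target_group_id, source_group_id):
    return {
        "GroupId": target_group_id,          # the instances' SG (ASG-Web-Inst-SG)
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # Referencing a security group means "traffic from any resource
            # that carries this group" -- here, only the ALB nodes.
            "UserIdGroupPairs": [{"GroupId": source_group_id}],  # Web-ALB-SG
        }],
    }

rule = build_ingress_from_sg("sg-instances", "sg-alb")
# With boto3: boto3.client("ec2").authorize_security_group_ingress(**rule)
print(rule["IpPermissions"][0]["FromPort"])
```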

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/auto-scaling#create-launch-template" rel="noopener noreferrer"&gt;Create launch template&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the EC2 console, select &lt;strong&gt;Launch Templates&lt;/strong&gt; from the left navigation panel. Then click &lt;strong&gt;Create Launch Template&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6596xhbqrdkpxlwaszw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6596xhbqrdkpxlwaszw.png" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Let’s proceed with setting up the launch template step by step. First, set &lt;strong&gt;Launch template name&lt;/strong&gt; and &lt;strong&gt;Template version description&lt;/strong&gt; as shown below, and select the checkbox for &lt;strong&gt;Provide guidance&lt;/strong&gt; under Auto Scaling guidance. Selecting this checkbox enables the template you create to be used by Amazon EC2 Auto Scaling.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z165cela431y1wqxlw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z165cela431y1wqxlw7.png" width="796" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvychkoj8wc7d1cs5lm9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvychkoj8wc7d1cs5lm9.png" width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scroll down to set the launch template contents. In &lt;strong&gt;Amazon Machine Image (AMI)&lt;/strong&gt;, set the AMI to Web Server v1, which was created in the previous EC2 lab. You can find it by typing Web Server v1 in the search section, or by scrolling down to the My AMIs section. Next, select t2.micro for the instance type. We will not configure SSH access because this instance serves only as a web service server; therefore, we do not use a key pair.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovjbv3v9w0kw0608d8eq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovjbv3v9w0kw0608d8eq.png" width="800" height="1040"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Leave the other parts at their defaults. Let’s take a look at the &lt;strong&gt;Network settings&lt;/strong&gt; section. In the security group dropdown, find and apply the ASG-Web-Inst-SG created earlier.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92y1nqslw0aczzco1a8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92y1nqslw0aczzco1a8y.png" width="797" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Leave Storage at its default values. Scroll down to define the instance tags: click &lt;strong&gt;Add tag&lt;/strong&gt;, enter Name for &lt;strong&gt;Key&lt;/strong&gt; and Web Instance for &lt;strong&gt;Value&lt;/strong&gt;, and select &lt;strong&gt;Instances and Volumes&lt;/strong&gt; as the Resource types.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgbq7i6q40fjpju8w2wh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgbq7i6q40fjpju8w2wh.png" width="796" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8v76vm2dfrwoy7ufye4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8v76vm2dfrwoy7ufye4j.png" width="326" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Finally, in the &lt;strong&gt;Advanced details&lt;/strong&gt; tab, set the &lt;strong&gt;IAM instance profile&lt;/strong&gt; to &lt;strong&gt;SSMInstanceProfile&lt;/strong&gt;. If the IAM role was not created earlier, refer to &lt;a href="https://catalog.workshops.aws/general-immersionday/en-US/basic-modules/10-ec2/ec2-linux/3-ec2-1#create-an-iam-instance-profile-for-systems-manager" rel="noopener noreferrer"&gt;Create an IAM instance profile for Systems Manager&lt;/a&gt; to create the SSMInstanceProfile IAM role.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Leave all other settings as default, and click the &lt;strong&gt;Create launch template&lt;/strong&gt; button at the bottom right to create a launch template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gke0vdxptdm6ukddv1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gke0vdxptdm6ukddv1z.png" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After checking the values set in &lt;strong&gt;Summary&lt;/strong&gt; on the right, click &lt;strong&gt;Create launch template&lt;/strong&gt; to create a template.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jxsncbjvsxs3rbw4ml3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jxsncbjvsxs3rbw4ml3.png" width="399" height="654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/auto-scaling#set-auto-scaling-group" rel="noopener noreferrer"&gt;Set Auto Scaling Group&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Now, let’s create the Auto Scaling Group.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter the EC2 console and select &lt;strong&gt;Auto Scaling Groups&lt;/strong&gt; at the bottom of the left navigation panel. Then click the &lt;strong&gt;Create Auto Scaling group&lt;/strong&gt; button to create an &lt;em&gt;Auto Scaling Group&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax8gle5lvmkld39ctp3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax8gle5lvmkld39ctp3s.png" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In &lt;strong&gt;&lt;em&gt;[Step 1: Choose launch template or configuration]&lt;/em&gt;&lt;/strong&gt;, specify the name of the Auto Scaling group. In this workshop, we will designate it as Web-ASG. Then select the launch template that you just created named Web. The default settings for the launch template will be displayed. Confirm and click the lower right &lt;strong&gt;Next&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;
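&lt;p&gt;For readers who prefer automation, the same Auto Scaling group can be sketched with boto3. This is a minimal sketch, not the workshop’s own method: the group and launch template names (Web-ASG, Web) and the 2/2/4 capacities come from this lab, while the subnet IDs are placeholders.&lt;/p&gt;

```python
# Sketch: parameters for creating the workshop's Auto Scaling group
# programmatically. The subnet IDs are placeholders; replace them
# with your two private subnet IDs.
params = {
    "AutoScalingGroupName": "Web-ASG",
    "LaunchTemplate": {"LaunchTemplateName": "Web", "Version": "$Latest"},
    "MinSize": 2,
    "DesiredCapacity": 2,
    "MaxSize": 4,
    # Comma-separated private subnets (placeholders):
    "VPCZoneIdentifier": "subnet-aaaa1111,subnet-bbbb2222",
}

# import boto3
# boto3.client("autoscaling").create_auto_scaling_group(**params)
```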

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ngzk1hir7fcuhndoxpm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ngzk1hir7fcuhndoxpm.png" width="800" height="809"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set the network configuration, leaving the purchasing options and instance types as default. Choose VPC-Lab-vpc for &lt;strong&gt;VPC&lt;/strong&gt;, and select &lt;strong&gt;Private subnet 1&lt;/strong&gt; and &lt;strong&gt;Private subnet 2&lt;/strong&gt; for &lt;strong&gt;Subnets&lt;/strong&gt;. When the setup is completed, click the &lt;strong&gt;Next&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38f5udelvx7g28gqiqov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38f5udelvx7g28gqiqov.png" width="273" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezzfvn9l8a5ksv5272rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezzfvn9l8a5ksv5272rl.png" width="800" height="945"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Next, set up load balancing. First, select Attach to an existing load balancer. Then, in &lt;strong&gt;Choose a target group for your load balancer&lt;/strong&gt;, select Web-TG, which was created during ALB creation. Under &lt;strong&gt;Monitoring&lt;/strong&gt;, check the box for &lt;strong&gt;Enable group metrics collection within CloudWatch&lt;/strong&gt;. This lets CloudWatch collect group metrics that show the status of the Auto Scaling group. Click the &lt;strong&gt;Next&lt;/strong&gt; button at the bottom right.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4lysbla4pr63otv21tz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4lysbla4pr63otv21tz.png" width="800" height="701"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvyqh1oegaeu8tx22xdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvyqh1oegaeu8tx22xdi.png" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the Configure group size and scaling policies step, set the scaling policy for the Auto Scaling group. In the &lt;strong&gt;Group size&lt;/strong&gt; column, set &lt;strong&gt;Desired capacity&lt;/strong&gt; and &lt;strong&gt;Minimum capacity&lt;/strong&gt; to 2, and &lt;strong&gt;Maximum capacity&lt;/strong&gt; to 4. This keeps two instances running under normal conditions and lets the group scale between two and four instances depending on the policy.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73r5nu73af004r6dr0an.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73r5nu73af004r6dr0an.png" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the Scaling policies section, select &lt;strong&gt;Target tracking scaling policy&lt;/strong&gt; and type 30 in &lt;strong&gt;Target value&lt;/strong&gt;. This scaling policy adjusts the number of instances to keep the group’s average CPU utilization at 30%. Leave all other settings as default and click the &lt;strong&gt;Next&lt;/strong&gt; button in the lower right corner.&lt;/li&gt;
&lt;/ol&gt;
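&lt;p&gt;The target tracking policy described above maps directly onto the Auto Scaling API. The following is a hedged sketch, assuming the Web-ASG name from this workshop and a hypothetical policy name.&lt;/p&gt;

```python
# Sketch: the 30% CPU target tracking policy as put_scaling_policy
# parameters. "cpu30-target-tracking" is a hypothetical policy name.
policy = {
    "AutoScalingGroupName": "Web-ASG",
    "PolicyName": "cpu30-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 30.0,  # keep average CPU utilization at 30%
    },
}

# import boto3
# boto3.client("autoscaling").put_scaling_policy(**policy)
```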

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27atcl4scqpdgqp40nzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27atcl4scqpdgqp40nzu.png" width="800" height="788"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We will not &lt;strong&gt;Add notifications&lt;/strong&gt;. Click the &lt;strong&gt;Next&lt;/strong&gt; button to move to the next step. In the Add tags step, we will simply assign a Name tag. Click &lt;strong&gt;Add tag&lt;/strong&gt;, type Name in &lt;strong&gt;Key&lt;/strong&gt; and ASG-Web-Instance in &lt;strong&gt;Value&lt;/strong&gt;, and then click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kdhnkjsz3xqdazssv3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kdhnkjsz3xqdazssv3e.png" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8deb7ahlpxiu8hobw01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8deb7ahlpxiu8hobw01.png" width="223" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Now we are at the final review stage. After checking all the settings, click the &lt;strong&gt;Create Auto Scaling Group&lt;/strong&gt; button at the bottom right.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Auto Scaling group has been created. You can see it in the Auto Scaling group console as shown below.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9l0d06ep73uouv907em.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9l0d06ep73uouv907em.png" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Instances created through the Auto Scaling group can also be viewed from the EC2 Instance menu.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24hy2np3pqfr8fsplnu9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24hy2np3pqfr8fsplnu9.png" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/auto-scaling#architecture-configured-so-far" rel="noopener noreferrer"&gt;Architecture Configured So Far&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Now we’ve built a web service that is highly available and automatically scales under load! The services we have created so far are configured as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1clqmcayzi5g3g21yohf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1clqmcayzi5g3g21yohf.png" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! You successfully deployed a scalable and highly available web service using an Application Load Balancer, security groups, launch template, and Auto Scaling group.&lt;/p&gt;

&lt;h2&gt;
  
  
  Check web service and test
&lt;/h2&gt;

&lt;p&gt;Now, let’s test that the service you have configured operates correctly. First, we’ll check that the website is reachable and that the load balancer works; then we’ll put load on the web servers to see whether Auto Scaling kicks in.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/test-service#check-web-service-and-load-balancer" rel="noopener noreferrer"&gt;Check web service and load balancer&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;To access through the Application Load Balancer configured for the web service, click the &lt;strong&gt;Load Balancers&lt;/strong&gt; menu in the EC2 console and select the Web-ALB you created earlier. Copy &lt;strong&gt;&lt;em&gt;DNS name&lt;/em&gt;&lt;/strong&gt; from the basic configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58vaahyb3p6yua82ppdz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58vaahyb3p6yua82ppdz.png" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a new tab in your web browser and paste the &lt;strong&gt;copied DNS name&lt;/strong&gt;. You can see that the web service is working as shown below. In the figure below, the web instance placed in &lt;strong&gt;ap-northeast-2a&lt;/strong&gt; is serving this web page.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxynle2szdrm53oc1tik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxynle2szdrm53oc1tik.png" width="703" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you click the refresh button here, you can see that the host serving the web page has been replaced with &lt;strong&gt;an instance in another Availability Zone&lt;/strong&gt; (ap-northeast-2c), as shown below. This is because ALB target groups use the &lt;strong&gt;Round Robin&lt;/strong&gt; routing algorithm by default.&lt;/li&gt;
&lt;/ol&gt;
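&lt;p&gt;The alternation you see when refreshing can be illustrated with a toy round-robin selector over the two instances (the target names below are hypothetical):&lt;/p&gt;

```python
from itertools import cycle

# Toy illustration of round-robin routing, the ALB target group
# default: consecutive requests alternate between the two targets.
targets = cycle(["web-ap-northeast-2a", "web-ap-northeast-2c"])
served = [next(targets) for _ in range(4)]
print(served)
```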

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovw6qyk0cdl799fslxwf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovw6qyk0cdl799fslxwf.png" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Currently, the Auto Scaling group’s scaling policy baseline is set to 30% CPU utilization per instance.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If the average &lt;strong&gt;&lt;em&gt;CPU utilization of the instances is below 30%&lt;/em&gt;&lt;/strong&gt;, the number of instances is reduced.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the average &lt;strong&gt;&lt;em&gt;CPU utilization of the instances is over 30%&lt;/em&gt;&lt;/strong&gt;, additional instances are deployed and the load is distributed so that the average CPU utilization of the instances returns to 30%.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
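&lt;p&gt;The two rules above boil down to comparing average CPU with the 30% target. A toy decision function, which is not the real Auto Scaling algorithm (that also applies cooldowns and the 2 to 4 capacity bounds), might look like this:&lt;/p&gt;

```python
# Toy sketch of the scaling direction implied by the 30% CPU target.
# Real target tracking also respects cooldowns and min/max capacity.
def scaling_direction(avg_cpu: float, target: float = 30.0) -> str:
    if avg_cpu > target:
        return "scale-out"  # deploy additional instances
    if avg_cpu < target:
        return "scale-in"   # reduce the number of instances
    return "steady"

print(scaling_direction(55.0))  # scale-out
print(scaling_direction(10.0))  # scale-in
```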

&lt;ol&gt;
&lt;li&gt;Now, let’s generate load to see whether Auto Scaling works well. On the web page above, click the &lt;strong&gt;LOAD TEST&lt;/strong&gt; menu. The web page changes and the applied load becomes visible. Click the logo at the top left of the page to see that each instance is under load.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Before load:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o40hcp63dmfziy4f6iq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o40hcp63dmfziy4f6iq.png" width="674" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The load is generated as follows: whenever the CPU idle value is over 50, PHP code runs every five seconds to create, compress, and decompress arbitrary files. Because the ALB distributes the traffic, the load is applied to the other instances continuously as well.&lt;/p&gt;
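&lt;p&gt;As a rough illustration of what one iteration of that load generator does (the workshop uses PHP on a five-second timer; this Python sketch shows only the create/compress/decompress work, with an arbitrary buffer size):&lt;/p&gt;

```python
import os
import zlib

# One iteration of the load pattern described above: create an
# arbitrary buffer, compress it, then decompress it again.
def burn_once(size: int = 1_000_000) -> bool:
    data = os.urandom(size)                 # create arbitrary bytes
    packed = zlib.compress(data)            # compress them
    return zlib.decompress(packed) == data  # decompress and verify

print(burn_once())  # True
```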

&lt;ol&gt;
&lt;li&gt;Enter &lt;strong&gt;Auto Scaling Groups&lt;/strong&gt; from the left side menu of the EC2 console and click the &lt;strong&gt;Monitoring&lt;/strong&gt; tab. Under &lt;strong&gt;Enabled metrics&lt;/strong&gt;, click &lt;strong&gt;EC2&lt;/strong&gt; and set the time range on the right to &lt;strong&gt;1 hour&lt;/strong&gt;. After a few seconds, you’ll see the &lt;strong&gt;CPU Utilization (Percent)&lt;/strong&gt; graph change.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg33g7atnci3btpl67r79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg33g7atnci3btpl67r79.png" width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Wait for about 5 minutes (300 seconds) and click the &lt;strong&gt;Activity&lt;/strong&gt; tab to see the additional EC2 instances deployed according to the scaling policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you click on the &lt;strong&gt;Instance management&lt;/strong&gt; tab, you can see that two additional instances have sprung up and a total of four are up and running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you use the ALB DNS name that you copied earlier to access and refresh the web page, you can see it hosting the web page on two instances that were not there before. The current CPU load is 0% because these are new instances, and each of them was created in a different Availability Zone. If it is not 0%, it may even show more than 100%, because load is being applied continuously.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So far, we’ve confirmed through a load test that the Auto Scaling group is working. If the page that generates the CPU load is still open, close it to prevent additional load.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database — Amazon Aurora
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Create VPC security group
&lt;/h2&gt;

&lt;p&gt;The RDS service uses the same security model as EC2. The most common pattern is to serve data to an EC2 instance operating as an application server within the same VPC, or to make the database accessible to DB application clients outside of the VPC. In either case, a VPC security group must be applied for proper access control.&lt;/p&gt;

&lt;p&gt;In the previous &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute" rel="noopener noreferrer"&gt;Compute — Amazon EC2&lt;/a&gt; lab, we created web server EC2 instances using a launch template and an Auto Scaling group. Those instances apply the security group &lt;strong&gt;ASG-Web-Inst-SG&lt;/strong&gt; through the launch template. Using this information, we will create a security group so that only the web server instances within the Auto Scaling group can access the RDS instances.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On the left side of the VPC dashboard, select &lt;strong&gt;Security Groups&lt;/strong&gt; and then select &lt;strong&gt;Create Security Group&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter &lt;strong&gt;Security group name&lt;/strong&gt; and &lt;strong&gt;Description&lt;/strong&gt; as shown below. Choose the &lt;strong&gt;VPC&lt;/strong&gt; that was created in the first lab. It should be named VPC-Lab.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r8x89g63mr4jc0f4kya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r8x89g63mr4jc0f4kya.png" width="372" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Following the principle of least privilege, it is a best practice to allow inbound traffic to your database only from trusted sources, such as your application servers.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scroll down to the Inbound rules column. Click Add rule to create a security group policy that allows access to RDS from the EC2 web servers that you previously created through the Auto Scaling group. Under &lt;strong&gt;Type&lt;/strong&gt;, select &lt;strong&gt;MySQL/Aurora&lt;/strong&gt;; the protocol and port range are specified automatically, with the port defaulting to &lt;strong&gt;3306&lt;/strong&gt;. The &lt;strong&gt;Source type&lt;/strong&gt; entry can specify either the IP range (CIDR) that you want to allow access from, or a security group that the permitted EC2 instances already use. Select the security group (named &lt;strong&gt;&lt;em&gt;ASG-Web-Inst-SG&lt;/em&gt;&lt;/strong&gt;) that is applied to the web instances of the Auto Scaling group in the &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute" rel="noopener noreferrer"&gt;Compute — Amazon EC2&lt;/a&gt; lab.&lt;/li&gt;
&lt;/ol&gt;
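&lt;p&gt;The inbound rule above can also be expressed as EC2 API parameters. This is a sketch with placeholder group IDs; the real IDs come from the security groups you created.&lt;/p&gt;

```python
# Sketch: allow MySQL/Aurora (TCP 3306) into the RDS security group
# only from the web servers' group (ASG-Web-Inst-SG). Both group IDs
# below are placeholders.
ingress = {
    "GroupId": "sg-0000rds0000000000",  # the new RDS security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0000web0000000000"}  # ASG-Web-Inst-SG
            ],
        }
    ],
}

# import boto3
# boto3.client("ec2").authorize_security_group_ingress(**ingress)
```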

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0z32a64x73mhq517a9nv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0z32a64x73mhq517a9nv.png" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When settings are completed, click &lt;strong&gt;Create Security Group&lt;/strong&gt; at the bottom of the list to create this security group.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F490el599izsqxfzgf8a9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F490el599izsqxfzgf8a9.png" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create RDS instance
&lt;/h2&gt;

&lt;p&gt;Since the security group that RDS will use has been created, let’s create an instance of &lt;strong&gt;RDS Aurora (MySQL compatible)&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the AWS Management Console, go to the &lt;a href="https://console.aws.amazon.com/rds" rel="noopener noreferrer"&gt;RDS (Relational Database Service)&lt;/a&gt; console.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy988uk36j8wekh5xexmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy988uk36j8wekh5xexmg.png" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Create Database&lt;/strong&gt; in the dashboard to start creating an RDS instance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1kwnptmnc1uoig3xv29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1kwnptmnc1uoig3xv29.png" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Next, select the database engine for the RDS instance. Amazon RDS offers both open-source and commercial database engines. In this lab, we will use &lt;strong&gt;Amazon Aurora with MySQL compatibility&lt;/strong&gt;. Select &lt;strong&gt;Standard Create&lt;/strong&gt; in the database creation method section. Set &lt;strong&gt;&lt;em&gt;Engine type&lt;/em&gt;&lt;/strong&gt; to &lt;strong&gt;Aurora (MySQL Compatible)&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;Version&lt;/em&gt;&lt;/strong&gt; to &lt;strong&gt;Aurora (MySQL 5.7) 2.11.4&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwptbvee6rwffyv407e2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwptbvee6rwffyv407e2.png" width="800" height="971"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Production&lt;/strong&gt; in &lt;strong&gt;&lt;em&gt;Template&lt;/em&gt;&lt;/strong&gt;. Under &lt;strong&gt;Settings&lt;/strong&gt;, specify the administrator information that identifies the RDS instance. Enter the information as it appears below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yi9nctdf2ozw2itwxqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yi9nctdf2ozw2itwxqu.png" width="778" height="834"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdsqawly1magstmd05ky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdsqawly1magstmd05ky.png" width="283" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For production workloads, it is a best practice to enable high availability and fault tolerance by creating read replicas in different Availability Zones.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;DB instance size&lt;/strong&gt;, select the &lt;strong&gt;Memory Optimized class&lt;/strong&gt; and choose &lt;strong&gt;db.r5.large&lt;/strong&gt; as the instance type. Under &lt;strong&gt;Availability &amp;amp; durability&lt;/strong&gt;, select &lt;strong&gt;Create an Aurora Replica or reader node in a different AZ&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekst501y1738t0qcj2rd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekst501y1738t0qcj2rd.png" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a best practice to deploy databases within a private subnet of a VPC for better security and network isolation.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up the network and security on the &lt;strong&gt;Connectivity&lt;/strong&gt; page. Select the VPC-Lab VPC that you created earlier under Virtual private cloud (VPC), then specify the subnets the RDS instance will be placed in, public access, and the security groups. Enter the information as it appears below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F446zen77mw7a83tjuzg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F446zen77mw7a83tjuzg0.png" width="778" height="838"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhzdvj1q4hm35czdhksx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhzdvj1q4hm35czdhksx.png" width="708" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scroll down and click &lt;strong&gt;Additional configuration&lt;/strong&gt;. Set the database options as shown below. Note that the &lt;strong&gt;Initial database name&lt;/strong&gt; is case-sensitive.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytbw4879aa2skwu0lhy3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytbw4879aa2skwu0lhy3.png" width="785" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nikpnlflttmnh7b0kq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nikpnlflttmnh7b0kq4.png" width="424" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Accept the default values for the subsequent items such as &lt;strong&gt;Backup&lt;/strong&gt;, &lt;strong&gt;Encryption&lt;/strong&gt;, &lt;strong&gt;Backtrack&lt;/strong&gt;, &lt;strong&gt;Monitoring&lt;/strong&gt;, and &lt;strong&gt;Log exports&lt;/strong&gt;, and press &lt;strong&gt;Create database&lt;/strong&gt; to create the database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The new RDS instance is now being created. This may take more than 5 minutes. You can use the RDS instance once the DB instance’s status changes to &lt;strong&gt;&lt;em&gt;Available&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
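&lt;p&gt;For reference, the cluster configured in the console above could be sketched through the RDS API as follows. The identifier, credentials, and security group ID are placeholders; only the engine choice and version follow this lab.&lt;/p&gt;

```python
# Sketch: an Aurora MySQL 5.7 (2.11.4) cluster as create_db_cluster
# parameters. Identifier, credentials, and security group ID are
# placeholders.
cluster = {
    "DBClusterIdentifier": "workshop-aurora",
    "Engine": "aurora-mysql",
    "EngineVersion": "5.7.mysql_aurora.2.11.4",
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",
    "VpcSecurityGroupIds": ["sg-0000rds0000000000"],
}

# import boto3
# boto3.client("rds").create_db_cluster(**cluster)
```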

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mxeg3oqzeqpqoq7ps02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mxeg3oqzeqpqoq7ps02.png" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/database/create-rds#architecture-configured-so-far" rel="noopener noreferrer"&gt;Architecture Configured So Far&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The configuration of the services we have created so far is as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4caaijkkqj8ytft4ta1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4caaijkkqj8ytft4ta1d.png" width="800" height="733"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect RDS with Web App server
&lt;/h2&gt;

&lt;p&gt;The web server instance that you created in the previous compute lab contains code that stores a simple address book in RDS. To use RDS from the EC2 web server, you must first look up the endpoint URL of the RDS instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/database/connect-app#storing-rds-credentials-in-aws-secrets-manager" rel="noopener noreferrer"&gt;Storing RDS Credentials in AWS Secrets Manager&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The web server we built includes sample code for our address book. In this lab, you specify which database the sample code should use and how to connect to it. We will store that information in AWS Secrets Manager.&lt;/p&gt;

&lt;p&gt;In this chapter, we will create a secret containing the database connection information. Later, we will give the web server the appropriate permission to retrieve the secret.&lt;/p&gt;
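&lt;p&gt;As a sketch of how application code can consume this secret later, the snippet below parses the JSON &lt;strong&gt;SecretString&lt;/strong&gt; that Secrets Manager returns. The field names are assumptions based on the Secrets Manager RDS credential format, and the boto3 retrieval call is shown only as a comment.&lt;/p&gt;

```python
import json

# In the application, the secret would be fetched with boto3, roughly:
#   secret = boto3.client("secretsmanager").get_secret_value(SecretId="mysecret")
#   conn_info = parse_db_secret(secret["SecretString"])

def parse_db_secret(secret_string):
    """Parse the JSON SecretString into the fields the address-book app needs.

    Field names follow the Secrets Manager RDS credential format
    (username/password/host) plus the dbname key added in this lab.
    """
    data = json.loads(secret_string)
    return {
        "host": data["host"],
        "user": data["username"],
        "password": data["password"],
        "dbname": data.get("dbname", "immersionday"),
    }
```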

&lt;ol&gt;
&lt;li&gt;In the console window, open AWS Secrets Manager (&lt;a href="https://console.aws.amazon.com/secretsmanager/" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/secretsmanager/ &lt;/a&gt;) and click the &lt;strong&gt;Store a new secret&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F598vnlqfrfrf4dadk2s3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F598vnlqfrfrf4dadk2s3.png" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a best practice to store database credentials and other sensitive information securely using AWS Secrets Manager, instead of hard-coding them in application code.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Secret type&lt;/strong&gt;, choose &lt;strong&gt;Credentials for Amazon RDS database&lt;/strong&gt;. Enter the user name and password you specified when creating the database, and under &lt;strong&gt;Database&lt;/strong&gt;, select the database you just created. Then click the &lt;strong&gt;Next&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5sfgxwy219jh5ksgegw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5sfgxwy219jh5ksgegw.png" width="222" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhocm9za0zihcp7338xs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhocm9za0zihcp7338xs.png" width="800" height="918"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Name your secret &lt;strong&gt;mysecret&lt;/strong&gt;. The sample code is written to request the secret by this specific name. Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuy8ystguc5bmsfm8e2if.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuy8ystguc5bmsfm8e2if.png" width="758" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Leave &lt;strong&gt;Secret rotation&lt;/strong&gt; at the default values. Click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvabqi45o5wxualht4kau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvabqi45o5wxualht4kau.png" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Review your choices. Click &lt;strong&gt;Store&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6jnoa0or9gfzk82i5v3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6jnoa0or9gfzk82i5v3.png" width="800" height="654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You can check the list of secret values with the name &lt;strong&gt;mysecret&lt;/strong&gt; as shown below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufldk95tpsjkpk69bvyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufldk95tpsjkpk69bvyq.png" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;mysecret&lt;/strong&gt; hyperlink and open the &lt;strong&gt;Secret value&lt;/strong&gt; tab. Then click the &lt;strong&gt;Retrieve secret value&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplktytcu12yo03u0tyq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplktytcu12yo03u0tyq5.png" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;Edit&lt;/strong&gt; button and check whether a &lt;strong&gt;dbname&lt;/strong&gt; key with the value &lt;strong&gt;immersionday&lt;/strong&gt; exists in the key/value section. If it does not, click the &lt;strong&gt;Add&lt;/strong&gt; button, fill in the key and value, and click the &lt;strong&gt;Save&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g80nyzp6vt19cwji04h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g80nyzp6vt19cwji04h.png" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Access RDS from EC2
&lt;/h2&gt;

&lt;p&gt;Now that you have created a secret, you must give your web server permission to use it. To do this, we will create a Policy that allows the web server to read a secret. We will add this policy to the Role you previously assigned to the web server.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/database/update-asg#allow-the-web-server-to-access-the-secret" rel="noopener noreferrer"&gt;Allow the web server to access the secret&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To follow the principle of least privilege, it is a best practice to grant the minimum required permissions to resources. In this case, you will grant permissions for the web server instances to access the specific secret containing the database credentials.&lt;/p&gt;
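&lt;p&gt;For reference, the visual-editor steps that follow produce a policy equivalent to the JSON document below. The &lt;strong&gt;Resource&lt;/strong&gt; is a wildcard only because this is a lab; a real workload should list the specific secret ARNs instead.&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*"
    }
  ]
}
```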

&lt;ol&gt;
&lt;li&gt;Sign in to the AWS Management Console and open the &lt;a href="https://console.aws.amazon.com/iamv2/home#/home" rel="noopener noreferrer"&gt;IAM console &lt;/a&gt;. In the navigation pane, choose &lt;strong&gt;Policies&lt;/strong&gt;, and then choose &lt;strong&gt;Create Policy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxacasku62fabo4te28ue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxacasku62fabo4te28ue.png" width="800" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Choose a service&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hrk1w4snxfaai2yoy23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hrk1w4snxfaai2yoy23.png" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Type &lt;strong&gt;Secrets Manager&lt;/strong&gt; into the search box. Click &lt;strong&gt;Secrets Manager&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b7pokn496ikn74l62b5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b7pokn496ikn74l62b5.png" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Access level&lt;/strong&gt;, click the caret next to &lt;strong&gt;Read&lt;/strong&gt;, then check the box next to &lt;strong&gt;GetSecretValue&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57qs3nkw2le9kasmqb7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57qs3nkw2le9kasmqb7n.png" width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the caret next to &lt;strong&gt;Resources&lt;/strong&gt;. For this lab, select &lt;strong&gt;All resources&lt;/strong&gt;. Click &lt;strong&gt;Next: Tags&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For the lab, we’re allowing EC2 to access all secrets. With a real workload, you should consider allowing access to specific secrets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr25wkthvi4guapro6s9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr25wkthvi4guapro6s9j.png" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Next: Review&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbu7t0614nz9kvlw8drs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbu7t0614nz9kvlw8drs.png" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On the &lt;strong&gt;Review Policy&lt;/strong&gt; screen, give your new policy the name &lt;strong&gt;ReadSecrets&lt;/strong&gt;. Click &lt;strong&gt;Create policy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojepb9sbsey3yx2jxgdy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojepb9sbsey3yx2jxgdy.png" width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the navigation pane, choose &lt;strong&gt;Roles&lt;/strong&gt; and type &lt;strong&gt;SSMInstanceProfile&lt;/strong&gt; into the search box. This is the role you created previously in &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/basic-modules/10-ec2/ec2-linux/3-ec2-1" rel="noopener noreferrer"&gt;Connect to your Linux instance using Session Manager&lt;/a&gt;. Click &lt;strong&gt;SSMInstanceProfile&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifx2fc63f13w33gkgunj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifx2fc63f13w33gkgunj.png" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Permissions policies&lt;/strong&gt;, click &lt;strong&gt;Attach policies&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddr3eefptpgybmmkakbc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddr3eefptpgybmmkakbc.png" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search for the policy you created called &lt;strong&gt;ReadSecrets&lt;/strong&gt;. Check the box and click &lt;strong&gt;Attach policy&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx93t305nenatc7ea30fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx93t305nenatc7ea30fa.png" width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under &lt;strong&gt;Permissions policies&lt;/strong&gt;, verify that &lt;strong&gt;AmazonSSMManagedInstanceCore&lt;/strong&gt; and &lt;strong&gt;ReadSecrets&lt;/strong&gt; are both listed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7cirisi90m6ajzsi32p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7cirisi90m6ajzsi32p.png" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/database/update-asg#try-the-address-book" rel="noopener noreferrer"&gt;Try the Address Book&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://console.aws.amazon.com/ec2/v2/home?instanceState=running" rel="noopener noreferrer"&gt;EC2 Console&lt;/a&gt; and click &lt;strong&gt;Load Balancers&lt;/strong&gt;. Copy the &lt;strong&gt;DNS name&lt;/strong&gt; of the load balancer created in the compute lab, then open a new tab in your browser and paste it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4vxyoe22srogr1ms6qv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4vxyoe22srogr1ms6qv.png" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After connecting to the web server, go to the RDS tab.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdos0kwlyypdfu2fper98.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdos0kwlyypdfu2fper98.jpg" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now you can check the data in the database you created.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dkkfgqxrl7ctmkijs7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dkkfgqxrl7ctmkijs7w.png" width="691" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a very basic exercise in interacting with a MySQL database managed by AWS. RDS can support much more complex relational database scenarios, but hopefully this simple example will make the point clear. You are free to add/edit/delete content from the RDS database using the &lt;strong&gt;Add Contact, Edit&lt;/strong&gt; and &lt;strong&gt;Remove&lt;/strong&gt; links in the address book.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/database/update-asg#architecture-configured-so-far" rel="noopener noreferrer"&gt;Architecture Configured So Far&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;With the work done so far, you have built a highly available web service. The infrastructure architecture we have constructed so far is as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50uszgurntd89wpo8200.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50uszgurntd89wpo8200.png" width="800" height="731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  RDS Management Features
&lt;/h2&gt;

&lt;p&gt;In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated to the standby replica to provide data redundancy.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/database/manage-rds#rds-failover-tests" rel="noopener noreferrer"&gt;RDS Failover Tests&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;When Multi-AZ is enabled, Amazon RDS automatically fails over to the standby replica in another Availability Zone if the DB instance has a planned or unplanned outage. The time failover takes depends on the database activity and other conditions at the moment the primary DB instance becomes unavailable, and is typically 60–120 seconds. However, large transactions or a lengthy recovery process can increase the failover time. After failover completes, the RDS console can take additional time to reflect the new Availability Zone.&lt;/p&gt;
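&lt;p&gt;Because client connections drop during the failover window, applications should retry and reconnect rather than fail outright. The sketch below is a minimal, hypothetical retry wrapper and is not part of the lab code:&lt;/p&gt;

```python
import time

def call_with_retry(operation, attempts=5, base_delay=1.0):
    """Run a database operation, retrying with exponential backoff.

    Retrying gives the driver time to re-resolve the RDS endpoint to
    the new writer after a Multi-AZ failover.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```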

&lt;ol&gt;
&lt;li&gt;From the RDS management console, select &lt;strong&gt;Databases&lt;/strong&gt;, select the instance you want to fail over, and click &lt;strong&gt;Failover&lt;/strong&gt; in the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa21dl332yjzj11ldunb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa21dl332yjzj11ldunb7.png" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A message appears asking whether you want to fail over the rdscluster. Press the &lt;strong&gt;Failover&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1so1hgrvgzspjrf45ujf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1so1hgrvgzspjrf45ujf.png" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Press the &lt;strong&gt;Refresh&lt;/strong&gt; button; the status of &lt;strong&gt;rdscluster&lt;/strong&gt; in the DB identifier column changes to &lt;strong&gt;Failing-over&lt;/strong&gt;. After a few minutes, press &lt;strong&gt;Refresh&lt;/strong&gt; again and you will see that the &lt;strong&gt;Reader and Writer roles have changed&lt;/strong&gt;. The failover is complete.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faub4225clb6i1ugiqlp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faub4225clb6i1ugiqlp1.png" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/database/manage-rds#create-rds-snapshot" rel="noopener noreferrer"&gt;Create RDS Snapshot&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s take a snapshot of the RDS database in production. Snapshots can be created at any frequency to back up a database instance, and the database can be restored at any time from the snapshots you have created.&lt;/p&gt;
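&lt;p&gt;The same operation can be scripted. The helper below builds a timestamped snapshot identifier that satisfies RDS naming rules; the boto3 call is shown only as a comment, and the cluster identifier in it is an assumption based on this lab:&lt;/p&gt;

```python
from datetime import datetime, timezone

def snapshot_identifier(prefix="immersionday"):
    """Build an RDS-safe snapshot identifier (letters, digits, hyphens)."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H-%M")
    return f"{prefix}-snapshot-{stamp}"

# With AWS credentials configured, the console step corresponds roughly to:
#   boto3.client("rds").create_db_cluster_snapshot(
#       DBClusterIdentifier="rdscluster",                 # assumed cluster name
#       DBClusterSnapshotIdentifier=snapshot_identifier(),
#   )
```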

&lt;ol&gt;
&lt;li&gt;From the RDS management console, select &lt;strong&gt;Databases&lt;/strong&gt;, and &lt;strong&gt;Select the instance&lt;/strong&gt; on which you want to perform the snapshot operation. Select &lt;strong&gt;Actions&lt;/strong&gt; &amp;gt; &lt;strong&gt;Take snapshot&lt;/strong&gt; in the upper right corner.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nmygsv4b6njc9z0t570.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nmygsv4b6njc9z0t570.png" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter &lt;strong&gt;immersionday-snapshot&lt;/strong&gt; as the snapshot name. Press the &lt;strong&gt;Take snapshot&lt;/strong&gt; button to complete the creation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiae3v8qv7qcszve17th.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiae3v8qv7qcszve17th.png" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the left RDS menu, select &lt;strong&gt;Snapshots&lt;/strong&gt; and check the creation status of the snapshot. The snapshot’s status is initially &lt;strong&gt;&lt;em&gt;Creating&lt;/em&gt;&lt;/strong&gt;; once it becomes &lt;strong&gt;&lt;em&gt;Available&lt;/em&gt;&lt;/strong&gt;, you can use it to restore the database. To explore, select the &lt;strong&gt;snapshot&lt;/strong&gt; and open &lt;strong&gt;Actions&lt;/strong&gt; to see what you can do with it. &lt;strong&gt;Restore snapshot&lt;/strong&gt; creates an RDS instance with the same data from the snapshot taken. &lt;strong&gt;&lt;em&gt;This lab will not perform a restore.&lt;/em&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F908ca6le9k9smv0venid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F908ca6le9k9smv0venid.png" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/database/manage-rds#change-rds-instance-type" rel="noopener noreferrer"&gt;Change RDS Instance Type&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Scaling an RDS instance up or down can be done very simply through the RDS management console.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Let’s change the specification of the RDS instance by &lt;strong&gt;selecting the instance&lt;/strong&gt; you want to change and clicking &lt;strong&gt;Modify&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the new instance specification from the &lt;strong&gt;Instance class&lt;/strong&gt; list box. Let’s choose &lt;strong&gt;db.r6g.large&lt;/strong&gt; here.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll to the bottom and select &lt;strong&gt;Continue&lt;/strong&gt; to reach the page where you review the instance’s current and new values and choose when to apply the change.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Apply immediately&lt;/strong&gt;. In this case, RDS changes the instance immediately after performing a backup task. Then click &lt;strong&gt;Modify DB Instance&lt;/strong&gt;. Depending on the instance type and the amount of data to back up, this can take several minutes, so you should expect some downtime for the RDS service (a redundant configuration minimizes downtime).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you select &lt;strong&gt;Apply during the next scheduled maintenance window&lt;/strong&gt; instead, the change is made during your maintenance window, which is specified on a weekly basis.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You can see that the status of the instance has changed to &lt;strong&gt;&lt;em&gt;Modifying&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you click the refresh button again, you can see that the Writer instance has changed. This is because the instance you selected for the size change was the Writer instance; RDS minimizes downtime by failing over before resizing. If you wait a moment, you will see the status change to &lt;strong&gt;&lt;em&gt;Available&lt;/em&gt;&lt;/strong&gt;, as shown below.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;RDS can change the instance size at any time. However, allocated storage cannot be shrunk after it has been scaled up.&lt;/p&gt;
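&lt;p&gt;For scripting the same change, a minimal sketch of the request parameters is shown below. The instance identifier is a placeholder, and the boto3 call appears only as a comment:&lt;/p&gt;

```python
def build_modify_request(instance_id, instance_class, apply_immediately=True):
    """Assemble keyword arguments for rds.modify_db_instance.

    ApplyImmediately=False defers the change to the next maintenance window.
    """
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": instance_class,
        "ApplyImmediately": apply_immediately,
    }

# With AWS credentials configured (identifier below is a placeholder):
#   boto3.client("rds").modify_db_instance(
#       **build_modify_request("rdscluster-instance-1", "db.r6g.large"))
```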

&lt;h2&gt;
  
  
  Connect RDS Aurora
&lt;/h2&gt;

&lt;p&gt;Let’s make an RDS connection through the MySQL CLI, which is commonly used for database management and operations.&lt;/p&gt;

&lt;p&gt;To do this,&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create an EC2 instance in the Public Subnet within the VPC-Lab, using the AMI created earlier. The networking options should assign a public IP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change the security group settings for RDS Aurora so that the newly created EC2 instance’s security group is allowed as a source.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log in to the EC2 instance you just created over SSH and connect to RDS Aurora through the MySQL client. The MySQL client was already installed on the EC2 web server during deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Completing the items above is left as a challenge. Once the setup succeeds, you can connect to the CLI environment and run mysql commands as shown below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh -i AWS-ImmersionDay-Lab.pem ec2-user@”EC2 Host FQDN or IP”
Last login: Sun Feb 18 14:41:59 2018 from 112.148.83.236

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2017.09-release-notes/


$ mysql -u awsuser -pawspassword -h awsdb.ccjlcjlrtga1.ap-northeast-2.rds.amazonaws.com

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 34
Server version: 5.6.10 MySQL Community Server (GPL)


Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql&amp;gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| immersionday       |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.01 sec)

mysql&amp;gt; use immersionday;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql&amp;gt; show tables;
+------------------------+
| Tables_in_immersionday |
+------------------------+
| address                |
+------------------------+
1 row in set (0.01 sec)

mysql&amp;gt; select * from address;
+----+-------+--------------+---------------------+
| id | name  | phone        | email               |
+----+-------+--------------+---------------------+
|  1 | Bob   | 630-555-1254 | bob@fakeaddress.com |
|  2 | Alice | 571-555-4875 | alice@address2.us   |
+----+-------+--------------+---------------------+
2 rows in set (0.00 sec)

mysql&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Storage — Amazon S3
&lt;/h2&gt;
&lt;h2&gt;
  
  
  Create Bucket on S3
&lt;/h2&gt;

&lt;p&gt;All objects in Amazon S3 are stored within a bucket, so you must create a bucket before you can store data in Amazon S3.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/storage/create-bucket#create-bucket" rel="noopener noreferrer"&gt;Create Bucket&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;From the AWS Management Console, connect to &lt;a href="https://console.aws.amazon.com/s3" rel="noopener noreferrer"&gt;S3 &lt;/a&gt;. Press &lt;strong&gt;Create bucket&lt;/strong&gt; to create a bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jbb357k463k0qe3n5wh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jbb357k463k0qe3n5wh.png" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a best practice to use S3 Bucket Policy and Access Control Lists (ACLs) to control access to your S3 buckets and objects, following the principle of least privilege.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter a unique bucket name in the &lt;strong&gt;&lt;em&gt;Bucket name&lt;/em&gt;&lt;/strong&gt; field. For this lab, type immersion-day-user-name, substituting user-name with your name. &lt;strong&gt;&lt;em&gt;All bucket names in Amazon S3 must be globally unique&lt;/em&gt;&lt;/strong&gt;. In the &lt;strong&gt;Region&lt;/strong&gt; drop-down box, specify the region in which to create the bucket. For this lab, select the region closest to you; the images show the &lt;strong&gt;Asia Pacific (Seoul)&lt;/strong&gt; region. Change &lt;strong&gt;Object Ownership&lt;/strong&gt; to &lt;strong&gt;ACLs enabled&lt;/strong&gt;. Leave the Block Public Access bucket settings at their default values, and select &lt;strong&gt;Create bucket&lt;/strong&gt; in the lower right corner.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foog4ry1wsjfvrl1t1ya2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foog4ry1wsjfvrl1t1ya2.png" width="800" height="890"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bucket names must comply with these rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Can contain lowercase letters, numbers, dots (.), and dashes (-).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Must begin and end with a letter or number.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Must be between 3 and 63 characters in length.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cannot be formatted like an IP address (e.g., 192.168.5.4).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
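The rules above can be checked programmatically before you attempt to create a bucket. Here is a minimal sketch in Python (the helper function and sample names are illustrative, not part of the lab):

```python
import re

# Allowed characters, begin/end with a letter or number, 3-63 chars total.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
# Rough check for names shaped like an IP address, which are rejected.
_IP_RE = re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` satisfies the basic S3 bucket naming rules."""
    return bool(_BUCKET_RE.match(name)) and not _IP_RE.match(name)

print(is_valid_bucket_name("immersion-day-alice"))  # True
print(is_valid_bucket_name("Immersion-Day"))        # False: uppercase letters
print(is_valid_bucket_name("192.168.5.4"))          # False: IP address format
```

This only covers the rules listed here; the S3 service applies a few more (for example, no consecutive dots), so treat it as a first-pass check.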

&lt;p&gt;There may be additional restrictions depending on the region in which the bucket is created. A bucket’s name cannot be changed once it is created, and it is included in the URL used to address objects stored within the bucket, so choose the name carefully.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The bucket has now been created on Amazon S3.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7matt8jyej2wzdrvv3t5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7matt8jyej2wzdrvv3t5.png" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are no costs incurred for creating a bucket; you pay only for storing objects in it. The rate you’re charged depends on the region, your objects’ size, how long you stored the objects during the month, and the storage class. There are also per-request fees. &lt;a href="https://aws.amazon.com/s3/pricing/" rel="noopener noreferrer"&gt;Click for more information&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Adding objects to buckets
&lt;/h2&gt;

&lt;p&gt;Once the bucket has been created successfully, you are ready to add objects. An object can be any kind of file, including text files, image files, and video files. When you add a file to Amazon S3, you can also set the permissions and access settings for that file along with its metadata.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/storage/put-object#adding-objects-for-static-web-hosting" rel="noopener noreferrer"&gt;Adding objects for static Web hosting&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This lab hosts a static website through S3. The static website redirects to an instance created in the VPC lab when you click on a particular image. Therefore, prepare one image file, one HTML file, and the ALB DNS name.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Download the image file &lt;a href="https://static.us-east-1.prod.workshops.aws/public/2e449d3a-fc13-44c9-8c99-35a37735e7f5/static/common/s3_advanced_lab/aws.png" rel="noopener noreferrer"&gt;aws.png&lt;/a&gt; and save it as aws.png.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write index.html using the source code below.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;html&amp;gt;
        &amp;lt;head&amp;gt;
            &amp;lt;meta charset="utf-8"&amp;gt;
            &amp;lt;title&amp;gt; AWS General Immersion Day S3 HoL &amp;lt;/title&amp;gt;
        &amp;lt;/head&amp;gt;
        &amp;lt;body&amp;gt;
            &amp;lt;center&amp;gt;
            &amp;lt;br&amp;gt;
            &amp;lt;h2&amp;gt; Click image to be redirected to the EC2 instance that you created &amp;lt;/h2&amp;gt;
            &amp;lt;img src="{{Replace with your S3 URL Address}}" onclick="window.location='DNS Name'"/&amp;gt;
            &amp;lt;/center&amp;gt;
        &amp;lt;/body&amp;gt;
    &amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Upload the aws.png file to S3. Click the &lt;strong&gt;&lt;em&gt;S3 Bucket&lt;/em&gt;&lt;/strong&gt; that you just created.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foekzln5a2p99164856ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foekzln5a2p99164856ao.png" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;Upload&lt;/strong&gt; button, then click the &lt;strong&gt;Add files&lt;/strong&gt; button. Select the pre-downloaded aws.png file through File Explorer, or alternatively drag and drop the file onto the screen.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex92xhw6lawv4pw7l9ot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex92xhw6lawv4pw7l9ot.png" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3dolbfs4snuirkpdhe5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3dolbfs4snuirkpdhe5.png" width="800" height="745"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Confirm the file information for aws.png, then click the &lt;strong&gt;Upload&lt;/strong&gt; button at the bottom.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5vlmp6vol0a2oybcflw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5vlmp6vol0a2oybcflw.png" width="800" height="1058"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check the URL that will fill in the image URL in the index.html file: select the uploaded aws.png file and copy the &lt;strong&gt;&lt;em&gt;Object URL&lt;/em&gt;&lt;/strong&gt; from the details on the right.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frykiibcgcxhm9c20ec8s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frykiibcgcxhm9c20ec8s.png" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Paste &lt;strong&gt;&lt;em&gt;Object URL&lt;/em&gt;&lt;/strong&gt; into the image URL part of the index.html. Then specify the &lt;strong&gt;&lt;em&gt;ALB DNS Name&lt;/em&gt;&lt;/strong&gt; of the load balancer created by &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/compute/auto-scaling" rel="noopener noreferrer"&gt;Deploy auto scaling web service&lt;/a&gt; to redirect to ALB when you click on the image.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frco08t7bndv6laedetlu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frco08t7bndv6laedetlu.png" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload the index.html file to S3 following the same instructions as you did to upload the image.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmcvvubajuvx1yvxul0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmcvvubajuvx1yvxul0e.png" width="800" height="826"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you check the objects in your S3 bucket, you should see 2 files.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn26kqp9y8ezlgij27x2l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn26kqp9y8ezlgij27x2l.png" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! You have successfully created an S3 bucket and uploaded objects into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  View objects
&lt;/h2&gt;

&lt;p&gt;Now that you’ve added an object to your bucket, let’s check it out in your web browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/storage/view-object#view-objects" rel="noopener noreferrer"&gt;View Objects&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the Amazon S3 Console, please &lt;strong&gt;&lt;em&gt;click the object&lt;/em&gt;&lt;/strong&gt; you want to see. You can see detailed information about the object as shown below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0mofsvyknec36hu8edf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0mofsvyknec36hu8edf.png" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, all objects in an S3 bucket are owner-only (private). To access an object through a URL of the form &lt;strong&gt;&lt;em&gt;https://{Bucket}.s3.{region}.amazonaws.com/{Object}&lt;/em&gt;&lt;/strong&gt;, you must grant &lt;strong&gt;&lt;em&gt;Read&lt;/em&gt;&lt;/strong&gt; permission so external users can read it. Alternatively, you can create a signature-based signed URL that embeds credentials for that object, allowing users without permissions to access it temporarily.&lt;/p&gt;
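The URL format above can be assembled from the bucket name, region, and object key. A small sketch (the bucket and region are example values; the key is URL-encoded so names with spaces still resolve):

```python
from urllib.parse import quote

def object_url(bucket: str, region: str, key: str) -> str:
    """Virtual-hosted-style URL: https://{Bucket}.s3.{region}.amazonaws.com/{Object}."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key)}"

print(object_url("immersion-day-alice", "ap-northeast-2", "aws.png"))
# https://immersion-day-alice.s3.ap-northeast-2.amazonaws.com/aws.png
```

For temporary access without making the object public, the AWS CLI can generate a signed URL with `aws s3 presign s3://your-bucket/your-key`.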

&lt;ol&gt;
&lt;li&gt;Return to the previous page and select the &lt;strong&gt;Permissions&lt;/strong&gt; tab in the bucket. To modify the &lt;strong&gt;Block public access (bucket settings)&lt;/strong&gt; configuration, press the Edit button on the right.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ys3kaz8nud49up4r18f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ys3kaz8nud49up4r18f.png" width="800" height="633"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Uncheck the box&lt;/strong&gt; and press the &lt;strong&gt;Save changes&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoukdk4sehpjmko37l2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoukdk4sehpjmko37l2f.png" width="800" height="686"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter &lt;em&gt;confirm&lt;/em&gt; in the bucket’s &lt;strong&gt;Edit Block public access&lt;/strong&gt; pop-up window and press the &lt;strong&gt;Confirm&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x64dti5bj1jrbcfv09e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x64dti5bj1jrbcfv09e.png" width="577" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;Objects&lt;/strong&gt; tab, select the uploaded &lt;strong&gt;files&lt;/strong&gt;, click the &lt;strong&gt;Actions&lt;/strong&gt; drop-down button, and press the &lt;strong&gt;Make public&lt;/strong&gt; button to set them to public.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0eei7u3ytkwng9de4y78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0eei7u3ytkwng9de4y78.png" width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When the confirmation window pops up, press the &lt;strong&gt;Make public&lt;/strong&gt; button again to confirm.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9wmqbxltue3botr9nem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9wmqbxltue3botr9nem.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a best practice to periodically review and audit the permissions and access settings for your S3 buckets and objects to ensure they align with your security requirements and the principle of least privilege.&lt;/p&gt;
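As an alternative to per-object ACLs, public read access can also be granted once at the bucket level with a bucket policy. A minimal example (the bucket name in the ARN is a placeholder to replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::immersion-day-user-name/*"
    }
  ]
}
```

A policy like this makes every object in the bucket readable without touching individual object ACLs, so use it only for buckets that are intentionally public.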

&lt;ol&gt;
&lt;li&gt;Return to the bucket page, select index.html, and click the &lt;strong&gt;&lt;em&gt;Object URL&lt;/em&gt;&lt;/strong&gt; link in the Show Details entry.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntzmsvnewuwibo1o4s1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntzmsvnewuwibo1o4s1l.png" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When you access the HTML object’s URL, the following screen is displayed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdnh8tqsbixsogqtjkj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdnh8tqsbixsogqtjkj0.png" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When you click the image, you are redirected to the web page of the instance you created.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaq6r7ekqipv734m7w51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaq6r7ekqipv734m7w51.png" width="742" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Enable Static Web Site Hosting
&lt;/h2&gt;

&lt;p&gt;You can use Amazon S3 to host static websites.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/storage/static-web-hosting#static-web-site-settings" rel="noopener noreferrer"&gt;Static Web Site Settings&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;A static website is a website that contains static content (HTML, images, video) or client-side scripts (JavaScript) on a web page. In contrast, dynamic websites require server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Server-side scripting is not supported on Amazon S3. If you want to host a dynamic website, you can use other AWS services such as EC2.&lt;/p&gt;
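The website endpoint that S3 generates for a hosting-enabled bucket is different from the object URL used earlier. A sketch of the common format (the bucket and region are example values; some older regions use a dash, `s3-website-{region}`, instead of a dot, so confirm the exact endpoint shown in the console):

```python
def website_endpoint(bucket: str, region: str) -> str:
    # S3 static website endpoints are served over plain HTTP, not HTTPS.
    return f"http://{bucket}.s3-website.{region}.amazonaws.com"

print(website_endpoint("immersion-day-alice", "ap-northeast-2"))
# http://immersion-day-alice.s3-website.ap-northeast-2.amazonaws.com
```

If you need HTTPS or a custom domain, the usual approach is to put CloudFront in front of the website endpoint.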

&lt;ol&gt;
&lt;li&gt;In the S3 console, select the bucket you just created, and click the &lt;strong&gt;Properties&lt;/strong&gt; tab. Scroll down and click the Edit button on &lt;strong&gt;Static website hosting&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2sef538f7w7t2zaufc9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2sef538f7w7t2zaufc9.png" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafc501pcgcw63lpf2ct2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafc501pcgcw63lpf2ct2.png" width="800" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable static website hosting, select the hosting type, and enter index.html as the &lt;strong&gt;Index document&lt;/strong&gt; value, then click the &lt;strong&gt;Save changes&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezrsiesnvyyay8ob9hj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezrsiesnvyyay8ob9hj9.png" width="800" height="702"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Bucket website endpoint&lt;/strong&gt; created in the &lt;strong&gt;Static website hosting&lt;/strong&gt; entry to access the static website.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzmgujykuiiq4jerzqgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzmgujykuiiq4jerzqgb.png" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You are now hosting a static website using Amazon S3.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3btnaiy41155ppjfdvxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3btnaiy41155ppjfdvxg.png" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Move objects
&lt;/h2&gt;

&lt;p&gt;So far you have added objects to buckets and verified them. Now let’s see how to move objects to different buckets or folders.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/storage/move-object#move-objects" rel="noopener noreferrer"&gt;Move Objects&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create a temporary bucket for moving objects between buckets (bucket name: immersion-day-myname-target). Substitute &lt;strong&gt;myname&lt;/strong&gt; with your name, and remember the naming rules for buckets. For quick configuration, uncheck the &lt;strong&gt;Block all public access&lt;/strong&gt; box.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97kkab00pnrxwzy4xttm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97kkab00pnrxwzy4xttm.png" width="800" height="939"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check the notification window below and select &lt;strong&gt;Create bucket&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fied7xmcballbr0y30xfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fied7xmcballbr0y30xfz.png" width="800" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the Amazon S3 Console, select the bucket that contains the object (the first bucket you created) and click the checkbox for the object you want to move. Select the &lt;strong&gt;Actions&lt;/strong&gt; menu at the top to see the various functions you can perform on that object. Select &lt;strong&gt;Move&lt;/strong&gt; from the listed features.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fan5k26q8x2dtnc6n36hh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fan5k26q8x2dtnc6n36hh.png" width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Bucket&lt;/strong&gt; as the destination type, then click the &lt;strong&gt;Browse S3&lt;/strong&gt; button to find the new bucket you just created.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7n6b8e5gxvgdnh0gubmu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7n6b8e5gxvgdnh0gubmu.png" width="800" height="705"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the bucket name in the pop-up window and select the destination bucket. Click the &lt;strong&gt;Choose destination&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk64bipf9ct01hkadqso8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk64bipf9ct01hkadqso8.png" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlgcs67la95j3nkbdoxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlgcs67la95j3nkbdoxa.png" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check that the object has moved to the target bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaw9gy2150zkidcdioiy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaw9gy2150zkidcdioiy.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even when you move an object, its existing permissions remain intact.&lt;/p&gt;
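Behind the scenes, an S3 move is a copy to the destination followed by a delete from the source. A toy in-memory model of those semantics (plain Python dictionaries, not an AWS API call):

```python
def move_object(src_bucket: dict, dst_bucket: dict, key: str) -> None:
    """Move = copy then delete; the object's stored settings travel with it."""
    dst_bucket[key] = src_bucket[key]  # copy to the destination bucket
    del src_bucket[key]                # delete from the source bucket

# Each toy bucket maps object keys to (data, permission) pairs.
source = {"aws.png": (b"png bytes", "public-read")}
target = {}

move_object(source, target, "aws.png")
print("aws.png" in source)   # False: gone from the source bucket
print(target["aws.png"][1])  # public-read: settings came along with it
```

Because the real operation is copy-then-delete, a move between buckets briefly stores the object in both places.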

&lt;h2&gt;
  
  
  Enable Bucket versioning
&lt;/h2&gt;

&lt;p&gt;You can use &lt;strong&gt;&lt;em&gt;Bucket Versioning&lt;/em&gt;&lt;/strong&gt; if you want to update existing files to the latest version within the same bucket, but still want to keep the existing version.&lt;/p&gt;

&lt;p&gt;It is a best practice to enable versioning on your S3 buckets to protect against accidental deletion or overwrites of objects, and to maintain a history of changes to your data.&lt;/p&gt;
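Conceptually, a version-enabled bucket keeps a list of versions per key instead of a single object: a new upload appends rather than overwrites. A toy model of that behavior (S3 itself identifies versions with opaque version IDs rather than list indexes):

```python
from collections import defaultdict

bucket = defaultdict(list)  # key -> list of versions, oldest first

def put_object(key: str, body: str) -> None:
    bucket[key].append(body)  # a new upload becomes the current version

def get_object(key: str, version: int = -1) -> str:
    return bucket[key][version]  # default: the latest (current) version

put_object("index.html", "<html>v1</html>")
put_object("index.html", "<html>v2</html>")
print(get_object("index.html"))     # <html>v2</html> (current version)
print(get_object("index.html", 0))  # <html>v1</html> (still recoverable)
```

This is why versioning protects against accidental overwrites: the older copy is retained and can be restored, at the cost of storing every version.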

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/storage/enable-versioning#enable-versioning" rel="noopener noreferrer"&gt;Enable versioning&lt;/a&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the Amazon S3 Console, select the first S3 bucket we created. Select the &lt;strong&gt;Properties&lt;/strong&gt; menu. Click the Edit button in &lt;strong&gt;Bucket Versioning&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsppy7j1i7muxgs3nhbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsppy7j1i7muxgs3nhbj.png" width="800" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the enable radio button on &lt;strong&gt;Bucket Versioning&lt;/strong&gt;, then click &lt;strong&gt;Save changes&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd9am9hj7pmzv4uxurz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd9am9hj7pmzv4uxurz2.png" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In this lab, the index.html file will be modified and re-uploaded with the same name. Make some changes to the &lt;strong&gt;index.html&lt;/strong&gt; file. Then upload the modified file to the same S3 bucket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the changed file is completely uploaded, click the object in the S3 Console. You can view &lt;strong&gt;&lt;em&gt;current version&lt;/em&gt;&lt;/strong&gt; information by clicking the &lt;strong&gt;Versions&lt;/strong&gt; tab on the page that contains object details.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y7tv5zeqm07iv2ioddt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y7tv5zeqm07iv2ioddt.png" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations on your progress! You’ve successfully learned how to add and verify objects in Amazon S3 buckets, move objects between buckets or folders, and utilize bucket versioning to update files while preserving existing versions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deleting objects and buckets
&lt;/h2&gt;

&lt;p&gt;Delete objects and buckets you no longer need to avoid unnecessary costs.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the Amazon S3 Console, select the &lt;strong&gt;Bucket&lt;/strong&gt; that you want to delete. Then click &lt;strong&gt;Delete&lt;/strong&gt;. A deletion confirmation dialog appears.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wygolv5rp8yqrq2kj96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wygolv5rp8yqrq2kj96.png" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A warning appears that the bucket cannot be deleted because it is not empty. Select &lt;strong&gt;empty bucket configuration&lt;/strong&gt; to empty the bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdwvl8bk3lumt1s5ujd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdwvl8bk3lumt1s5ujd9.png" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Empty bucket&lt;/strong&gt; performs a one-time deletion of all objects in the bucket. Confirm by typing &lt;strong&gt;permanently delete&lt;/strong&gt; in the box. Then click the &lt;strong&gt;Empty&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v4ii2hz6d0kmyrx3zjq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v4ii2hz6d0kmyrx3zjq.png" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now that the bucket is empty, perform step 1 again. &lt;strong&gt;Enter the bucket name&lt;/strong&gt; and press the &lt;strong&gt;Delete bucket&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqmid52gkb0oxtnp6qa7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqmid52gkb0oxtnp6qa7.png" width="792" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Congratulations! You have completed the entire workshop. Thank you for your efforts.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean up resources
&lt;/h2&gt;

&lt;p&gt;If you participated in an AWS event using an AWS-provisioned account, no cleanup is necessary. However, if you completed this workshop with &lt;strong&gt;your own account&lt;/strong&gt;, we strongly recommend following this guide to delete the resources and avoid incurring costs.&lt;/p&gt;

&lt;p&gt;Delete the resources you created for the lab in reverse order.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/cleanup#database" rel="noopener noreferrer"&gt;Database&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Delete an Amazon RDS Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After accessing the Amazon RDS console, select &lt;strong&gt;DB Instances&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gnoazzbi16nfec1f3va.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gnoazzbi16nfec1f3va.png" width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;By default, an &lt;strong&gt;Amazon RDS cluster&lt;/strong&gt; has deletion protection enabled to prevent accidental deletion. To disable it, select the &lt;strong&gt;Cluster&lt;/strong&gt; and click the &lt;strong&gt;Modify&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5y5smn2dd03yf3iw27u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5y5smn2dd03yf3iw27u.png" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Uncheck the &lt;strong&gt;Enable deletion protection&lt;/strong&gt; checkbox and click the &lt;strong&gt;Continue&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For immediate deletion, select &lt;strong&gt;Apply immediately&lt;/strong&gt; and click the &lt;strong&gt;Modify cluster&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In order to delete a DB Cluster, you must first delete the DB instances included in the cluster. They can be deleted in any order, but we will delete the &lt;strong&gt;Writer instance&lt;/strong&gt; first. Select the &lt;strong&gt;Writer instance&lt;/strong&gt;, and click the &lt;strong&gt;Delete&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxc2yi0g5wwnf74z2x4d3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxc2yi0g5wwnf74z2x4d3.png" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Type &lt;strong&gt;delete me&lt;/strong&gt; in the blank and click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This time, we will delete the &lt;strong&gt;Reader instance&lt;/strong&gt;. Select the &lt;strong&gt;Reader instance&lt;/strong&gt; and click the &lt;strong&gt;Delete&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type &lt;strong&gt;delete me&lt;/strong&gt; in the blank and click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lastly, we will delete the &lt;strong&gt;DB Cluster&lt;/strong&gt;. Click the &lt;strong&gt;Delete&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uncheck the &lt;strong&gt;Take a final snapshot&lt;/strong&gt; checkbox, check the &lt;strong&gt;I acknowledge that automatic backups, including system snapshots and point-in-time recovery, are no longer available when I delete an instance&lt;/strong&gt; checkbox, and type &lt;strong&gt;delete me&lt;/strong&gt; in the blank. Click &lt;strong&gt;Delete DB Cluster&lt;/strong&gt; and the DB cluster will be deleted.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
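&lt;p&gt;The same cluster teardown can be scripted with the AWS CLI. A sketch; the cluster and instance identifiers below (&lt;code&gt;immersionday-cluster&lt;/code&gt;, &lt;code&gt;immersionday-writer&lt;/code&gt;, &lt;code&gt;immersionday-reader&lt;/code&gt;) are placeholders for whatever names you used in the lab:&lt;/p&gt;

```shell
# Turn off deletion protection first, applying the change immediately.
aws rds modify-db-cluster \
  --db-cluster-identifier immersionday-cluster \
  --no-deletion-protection --apply-immediately

# Delete the writer and reader instances (any order works).
aws rds delete-db-instance --db-instance-identifier immersionday-writer
aws rds delete-db-instance --db-instance-identifier immersionday-reader

# Finally delete the cluster itself, skipping the final snapshot.
aws rds delete-db-cluster \
  --db-cluster-identifier immersionday-cluster \
  --skip-final-snapshot
```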

&lt;p&gt;&lt;strong&gt;Delete an Amazon RDS Snapshot&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;To delete the snapshot of the DB Cluster created during the lab, select &lt;strong&gt;immersionday-snapshot&lt;/strong&gt; and click the &lt;strong&gt;Delete snapshot&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Delete a secret in AWS Secrets Manager&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We’re going to delete the secret that stored the &lt;strong&gt;RDS credentials&lt;/strong&gt; during the lab. Type &lt;strong&gt;Secrets Manager&lt;/strong&gt; in the AWS console search bar and then select it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;mysecret&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Delete secret&lt;/strong&gt; on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To prevent accidental deletion of secrets, AWS Secrets Manager has a deletion wait time of &lt;strong&gt;minimum 7 days&lt;/strong&gt; and &lt;strong&gt;maximum 30 days&lt;/strong&gt;. Enter the minimum time of 7 days and press the &lt;strong&gt;Schedule deletion&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
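&lt;p&gt;The equivalent AWS CLI call schedules the secret for deletion with the same 7-day recovery window (the secret name &lt;code&gt;mysecret&lt;/code&gt; comes from the lab):&lt;/p&gt;

```shell
# Schedule the secret for deletion; it remains recoverable for 7 days.
aws secretsmanager delete-secret \
  --secret-id mysecret \
  --recovery-window-in-days 7
```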

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/cleanup#compute" rel="noopener noreferrer"&gt;Compute&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Delete an Auto Scaling Group&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We’re going to delete the &lt;strong&gt;Auto Scaling Group&lt;/strong&gt; that we used during the lab. Type &lt;strong&gt;EC2&lt;/strong&gt; in the AWS Console search bar and select it. Select &lt;strong&gt;Auto Scaling Groups&lt;/strong&gt; from the left menu. Select the &lt;strong&gt;Web-ASG&lt;/strong&gt; that we created in the lab and click the &lt;strong&gt;Delete&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type &lt;strong&gt;delete&lt;/strong&gt; in the blank and click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Delete an Application Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Next, we’re going to delete the &lt;strong&gt;Application Load Balancers&lt;/strong&gt;. Select &lt;strong&gt;Load Balancers&lt;/strong&gt; from the left menu. Then select the &lt;strong&gt;Web-ALB&lt;/strong&gt; that we created in the lab and click the &lt;strong&gt;Delete load balancer&lt;/strong&gt; button in the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type &lt;strong&gt;confirm&lt;/strong&gt; in the blank and click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Delete a Target Group&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We’re going to delete the &lt;strong&gt;Target Group&lt;/strong&gt; we created when we created the Application Load Balancer. Select &lt;strong&gt;Target Groups&lt;/strong&gt; from the left menu. Select the Target Group we created in the lab, &lt;strong&gt;web-TG&lt;/strong&gt;, and click the &lt;strong&gt;Delete&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Yes, delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
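&lt;p&gt;The Auto Scaling group, load balancer, and target group can also be removed with the AWS CLI. A sketch using the resource names from the lab (&lt;code&gt;Web-ASG&lt;/code&gt;, &lt;code&gt;Web-ALB&lt;/code&gt;, &lt;code&gt;web-TG&lt;/code&gt;); load balancers and target groups are addressed by ARN, so each is looked up first:&lt;/p&gt;

```shell
# Delete the Auto Scaling group, terminating its instances.
aws autoscaling delete-auto-scaling-group \
  --auto-scaling-group-name Web-ASG --force-delete

# Look up the load balancer ARN by name, then delete it.
ALB_ARN=$(aws elbv2 describe-load-balancers --names Web-ALB \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)
aws elbv2 delete-load-balancer --load-balancer-arn "$ALB_ARN"

# Look up and delete the target group.
TG_ARN=$(aws elbv2 describe-target-groups --names web-TG \
  --query 'TargetGroups[0].TargetGroupArn' --output text)
aws elbv2 delete-target-group --target-group-arn "$TG_ARN"
```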

&lt;p&gt;&lt;strong&gt;Delete EC2 AMIs&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;AMIs&lt;/strong&gt; from the left menu. Select the AMI named &lt;strong&gt;Web Server v1&lt;/strong&gt; that you created in the lab. Click the &lt;strong&gt;Deregister AMI&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Deregister AMI&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Delete EC2 Snapshots&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You’ve just deleted an AMI, but this action doesn’t automatically remove the associated snapshot. So you need to remove it manually. From the left menu, choose &lt;strong&gt;Snapshots&lt;/strong&gt;. Be sure to note the snapshot’s creation date. Then, select the snapshot you created in the lab, and click the &lt;strong&gt;Delete snapshot&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Launch Templates&lt;/strong&gt; from the left menu. Select the template named &lt;strong&gt;Web&lt;/strong&gt; that you created in the lab. Click the &lt;strong&gt;Delete template&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type &lt;strong&gt;Delete&lt;/strong&gt; in the blank and click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
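&lt;p&gt;Steps 1–4 above map to three AWS CLI calls. A sketch that looks up the IDs by name rather than hard-coding them, since AMI and snapshot IDs differ per account:&lt;/p&gt;

```shell
# Find and deregister the AMI named "Web Server v1".
AMI_ID=$(aws ec2 describe-images --owners self \
  --filters 'Name=name,Values=Web Server v1' \
  --query 'Images[0].ImageId' --output text)
aws ec2 deregister-image --image-id "$AMI_ID"

# Deregistering does not delete the backing snapshot; its description
# references the AMI ID, so it can be found and removed separately.
SNAP_ID=$(aws ec2 describe-snapshots --owner-ids self \
  --filters "Name=description,Values=*${AMI_ID}*" \
  --query 'Snapshots[0].SnapshotId' --output text)
aws ec2 delete-snapshot --snapshot-id "$SNAP_ID"

# Delete the launch template named "Web".
aws ec2 delete-launch-template --launch-template-name Web
```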

&lt;p&gt;&lt;strong&gt;(Optional) Delete an EC2 instance&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If you went through the &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/database/challenge-aurora" rel="noopener noreferrer"&gt;(Optional) Connect RDS Aurora&lt;/a&gt; section during the database lab, you need to delete the EC2 instance you created in the lab. Select &lt;strong&gt;Instances&lt;/strong&gt; from the left menu. Select the EC2 instance you created during the lab, and click the &lt;strong&gt;Terminate instance&lt;/strong&gt; button on the &lt;strong&gt;Instance state&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Terminate&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/869a0a06-1f98-4e19-b5ac-cbb1abdfc041/en-US/advanced-modules/cleanup#network" rel="noopener noreferrer"&gt;Network&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Delete VPC endpoints&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You’re almost there. Type &lt;strong&gt;VPC&lt;/strong&gt; in the AWS Console search bar and select it. Select &lt;strong&gt;Endpoints&lt;/strong&gt; from the left menu. Select &lt;strong&gt;S3 endpoint&lt;/strong&gt;, the endpoint you created in the lab, and click the &lt;strong&gt;Delete VPC endpoints&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type &lt;strong&gt;delete&lt;/strong&gt; in the blank, and click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Delete a NAT gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;NAT gateways&lt;/strong&gt; from the left menu and select &lt;strong&gt;VPC-Lab-nat-public&lt;/strong&gt; you created during the lab. Click the &lt;strong&gt;Delete NAT gateway&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type &lt;strong&gt;delete&lt;/strong&gt; in the blank and click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Delete an Elastic IP&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You’ve just deleted the NAT gateway, but this action doesn’t automatically delete the Elastic IP that the NAT gateway used, so you need to remove it manually. Select &lt;strong&gt;Elastic IPs&lt;/strong&gt; from the left menu, and select &lt;strong&gt;VPC-Lab-eip-ap-northeast-2a&lt;/strong&gt;. (The name after VPC-Lab-eip may vary depending on your region.) Click the &lt;strong&gt;Release Elastic IP addresses&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu. If it says it is still associated with the NAT gateway and cannot be deleted, refresh the webpage and try again.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Release&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
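&lt;p&gt;Deleting the NAT gateway and releasing its Elastic IP can be done in one short CLI script. A sketch, assuming a recent AWS CLI and that the resources carry the Name tags used in the lab; waiting for the NAT gateway to finish deleting avoids the "still associated" error mentioned above:&lt;/p&gt;

```shell
# Look up the NAT gateway by its Name tag and delete it.
NAT_ID=$(aws ec2 describe-nat-gateways \
  --filter 'Name=tag:Name,Values=VPC-Lab-nat-public' \
  --query 'NatGateways[0].NatGatewayId' --output text)
aws ec2 delete-nat-gateway --nat-gateway-id "$NAT_ID"

# Wait until deletion completes; releasing the address earlier fails.
aws ec2 wait nat-gateway-deleted --nat-gateway-ids "$NAT_ID"

# Release the Elastic IP the NAT gateway was using.
ALLOC_ID=$(aws ec2 describe-addresses \
  --filters 'Name=tag:Name,Values=VPC-Lab-eip-*' \
  --query 'Addresses[0].AllocationId' --output text)
aws ec2 release-address --allocation-id "$ALLOC_ID"
```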

&lt;p&gt;&lt;strong&gt;Delete a Security Group&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We’re going to delete the &lt;strong&gt;Security Group&lt;/strong&gt; you created during the lab. Select &lt;strong&gt;Security Groups&lt;/strong&gt; from the left menu. Select &lt;strong&gt;Immersion Day — Web Server and DB-SG&lt;/strong&gt; first, and then click the &lt;strong&gt;Delete security groups&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu. The reason for not deleting all security groups at once is that some security groups reference other security groups in their inbound rules. A security group that is being referenced cannot be deleted until the security group that is referencing it is deleted. Therefore, delete the security groups in the following order: &lt;strong&gt;Immersion Day — Web Server, DB-SG -&amp;gt; ASG-Web-Inst-SG -&amp;gt; web-ALB-SG&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type &lt;strong&gt;delete&lt;/strong&gt; in the blank and click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;ASG-Web-Inst-SG&lt;/strong&gt; and click the &lt;strong&gt;Delete security groups&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;web-ALB-SG&lt;/strong&gt; and click the &lt;strong&gt;Delete security groups&lt;/strong&gt; button on the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Delete a VPC&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Finally, select &lt;strong&gt;Your VPCs&lt;/strong&gt; from the left menu, and select the &lt;strong&gt;VPC-Lab-vpc&lt;/strong&gt; that you created during the lab. Click the &lt;strong&gt;Delete VPC&lt;/strong&gt; button in the &lt;strong&gt;Actions&lt;/strong&gt; menu.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type &lt;strong&gt;delete&lt;/strong&gt; in the blank and click the &lt;strong&gt;Delete&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
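&lt;p&gt;And the final step from the CLI, looking the VPC up by its Name tag; this only succeeds once the dependent resources above have been removed:&lt;/p&gt;

```shell
# Find the lab VPC by its Name tag and delete it.
VPC_ID=$(aws ec2 describe-vpcs \
  --filters 'Name=tag:Name,Values=VPC-Lab-vpc' \
  --query 'Vpcs[0].VpcId' --output text)
aws ec2 delete-vpc --vpc-id "$VPC_ID"
```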

&lt;p&gt;We strongly recommend that you double-check to make sure you haven’t missed anything, as some resources that weren’t cleared may incur costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrates the power of AWS for building scalable, resilient applications with best practices in networking, compute, database, and storage services. From implementing VPC security to auto-scaling EC2 instances and configuring a fault-tolerant Aurora database, this architecture is well-suited for real-world applications that demand reliability and flexibility.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>learning</category>
      <category>aws</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Building with Generative AI on AWS : PartyRock, Amazon Bedrock, and Retrieval-Augmented Generation (RAG)</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Mon, 11 Nov 2024 12:24:16 +0000</pubDate>
      <link>https://forem.com/shubham_murti/building-with-generative-ai-on-aws-partyrock-amazon-bedrock-and-retrieval-augmented-generation-rag-7kf</link>
      <guid>https://forem.com/shubham_murti/building-with-generative-ai-on-aws-partyrock-amazon-bedrock-and-retrieval-augmented-generation-rag-7kf</guid>
      <description>&lt;h1&gt;
  
  
  Generative AI on AWS: PartyRock, Amazon Bedrock, and Amazon Titan
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This project explores three distinct applications of &lt;strong&gt;Generative AI on AWS&lt;/strong&gt; utilizing &lt;strong&gt;PartyRock&lt;/strong&gt;, &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;, and &lt;strong&gt;Amazon Titan&lt;/strong&gt;. Through practical experiments, we delve into no-code app development, advanced AI model integration, and retrieval-augmented generation (RAG) workflows. These projects demonstrate scalable AI solutions for various needs, from a book recommendation chatbot to context-aware response systems, all powered by AWS services.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Tech Stack&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PartyRock&lt;/strong&gt;: A no-code AI app builder with pre-configured widgets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Bedrock&lt;/strong&gt;: Access to cutting-edge AI models, including:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt;: For chat functionalities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Titan&lt;/strong&gt;: For text generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Titan Image Generator&lt;/strong&gt;: Creates images based on prompts.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;FAISS&lt;/strong&gt;: Enables similarity searches and vector storage for RAG.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Amazon Titan Text Embeddings&lt;/strong&gt;: Converts text to vectors for document-based AI models.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Account&lt;/strong&gt;: Access to Bedrock, PartyRock, and Titan.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI&lt;/strong&gt;: Used for configuration management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Basic AWS Knowledge&lt;/strong&gt;: Familiarity with IAM roles, Bedrock, and AI model concepts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No-Code Access&lt;/strong&gt;: PartyRock allows non-developers to build apps.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Use Case Overview&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This project highlights the potential of &lt;strong&gt;Generative AI&lt;/strong&gt; in creating real-time content, context-aware responses, and AI-driven images. Here are three main applications:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;No-code Book Recommendation Chatbot&lt;/strong&gt;: Uses &lt;strong&gt;PartyRock&lt;/strong&gt; to deliver personalized recommendations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Foundation Model Integration&lt;/strong&gt;: Powered by &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;, supporting real-time text, chat, and image generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval-Augmented Generation (RAG)&lt;/strong&gt;: Combines &lt;strong&gt;Amazon Titan&lt;/strong&gt;, &lt;strong&gt;FAISS&lt;/strong&gt;, and &lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt; to provide accurate responses based on stored knowledge.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Industries Benefiting from AI-driven Solutions&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Industries such as customer service, e-commerce, and education can benefit significantly from these scalable and AI-driven applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-by-Step Implementation&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Project 1: Build Generative AI Applications with PartyRock&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In this section, we’ll learn how to use &lt;strong&gt;PartyRock&lt;/strong&gt; to generate AI apps without any code.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;What is PartyRock?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;PartyRock&lt;/strong&gt; is a shareable Generative AI app-building playground that allows you to experiment with prompt engineering in a hands-on and fun way. In just a few clicks, you can build, share, and remix apps, getting inspired while playing with Generative AI. Some examples of what you can do with PartyRock include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build an app to generate dad jokes on any topic of your choice.&lt;/li&gt;
&lt;li&gt;Create and play a virtual trivia game with friends around the world.&lt;/li&gt;
&lt;li&gt;Build an AI storyteller to guide your next fantasy roleplaying campaign.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpo3hqg0sdm6xvb974dq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpo3hqg0sdm6xvb974dq6.png" alt="Image description" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By building and playing with PartyRock apps, you will learn the fundamental techniques and capabilities needed to get started with Generative AI. This includes understanding how a foundational model responds to prompts, experimenting with different text-based inputs, and chaining prompts together to create more dynamic outputs.&lt;/p&gt;

&lt;p&gt;Anyone can experiment with PartyRock by creating a profile using a social login from &lt;strong&gt;Amazon.com&lt;/strong&gt;, &lt;strong&gt;Apple&lt;/strong&gt;, or &lt;strong&gt;Google&lt;/strong&gt;. PartyRock is separate from the AWS console and does not require an AWS account to get started.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Exercise 1: Building a PartyRock Application&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;To highlight the power of PartyRock, we’re going to build an application that can provide book recommendations based on your mood.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lnzdpigvc4ug3kyu5a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lnzdpigvc4ug3kyu5a3.png" alt="Image description" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Head over to the &lt;strong&gt;PartyRock website&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Log in with a social account (Amazon, Apple, or Google).&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Build your own app&lt;/strong&gt; and enter the following prompt: &lt;em&gt;"Provide book recommendations based on your mood and a chatbot to talk about the books."&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Generate app&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Using the App&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;PartyRock will create the interface needed to take in user input, provide recommendations, and create a chatbot—just from your prompt! Try the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter a mood, like "Happy."&lt;/li&gt;
&lt;li&gt;Ask the chatbot for more information about the book recommendations by typing: &lt;em&gt;"Can you tell me more about one of the books that was listed?"&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also share your app by clicking the &lt;strong&gt;Make public&lt;/strong&gt; and &lt;strong&gt;Share&lt;/strong&gt; buttons.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Updating Your App&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In PartyRock, each &lt;strong&gt;UI element&lt;/strong&gt; is a &lt;em&gt;widget&lt;/em&gt;, which displays content, takes input, connects to other widgets, and generates output. Widgets that take input allow users to interact with the app, while widgets that create output use prompts and references to generate something like an image or text.&lt;/p&gt;




&lt;h5&gt;
  
  
  &lt;strong&gt;Types of Widgets&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;There are three types of AI-powered widgets:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Image Generation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Chatbot&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Text Generation&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can edit these widgets to connect them to others and change their outputs.&lt;/p&gt;

&lt;p&gt;Additionally, there are three other widgets:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User Input&lt;/strong&gt;: Allows users to change the output by connecting it to AI-powered widgets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static Text&lt;/strong&gt;: Provides a space for text descriptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document Upload&lt;/strong&gt;: Lets users upload documents that can be processed by the app.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For more details, check out the PartyRock Guide.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Exercise 2: Playtime with PartyRock&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Now that you have a basic app, it’s time to explore! Try the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the prompts in your app.&lt;/li&gt;
&lt;li&gt;Play with the settings.&lt;/li&gt;
&lt;li&gt;Chain outputs together to create new workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Get creative and explore what PartyRock can do. For example, try adding a widget that can draw an image of the book based on its description.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Remixing an Application&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;With PartyRock, you can &lt;strong&gt;Remix&lt;/strong&gt; applications, which lets you make a copy of an app and edit it to your liking. You can remix your own apps, or remix public apps from friends or from the PartyRock Discover page. &lt;/p&gt;

&lt;p&gt;Try remixing one of the apps from the Discover page to create new variations!&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Creating a Snapshot&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Did you get a funny or interesting response from an app you’re using? You can share a snapshot with others! Here's how:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make sure the app is in &lt;strong&gt;public mode&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Snapshot&lt;/strong&gt; in the top right corner of the app page.&lt;/li&gt;
&lt;li&gt;A URL containing the current input and output of your app will be copied to your clipboard, ready to share with others.&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Wrap Up&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;With &lt;strong&gt;PartyRock&lt;/strong&gt;, you can demo and propose ideas that leverage &lt;strong&gt;Generative AI&lt;/strong&gt; in a fun, easy-to-use environment. When you're ready to build apps for production, you can implement those ideas using &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Project 2: Use Foundation Models in Amazon Bedrock&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Amazon Bedrock&lt;/strong&gt; is a fully managed service that offers access to high-performing foundation models (FMs) from leading AI companies like Stability AI, Anthropic, and Meta, all via a single API. With Amazon Bedrock, you can securely integrate and deploy Generative AI capabilities into your applications, using the AWS services you’re already familiar with—without the need to manage infrastructure. &lt;/p&gt;

&lt;p&gt;In this module, we'll explore how to use &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; through the console and the API to generate both text and images.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Model Access&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Before you can start building with Bedrock, you will need to grant model access to your AWS account.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Head to the &lt;strong&gt;Model Access page&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;Enable specific models&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;Check the models you wish to activate. If you're running this from your own account, there’s no cost to activate the models—you only pay for what you use during the labs. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s a list of supported models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon&lt;/strong&gt; (select to automatically activate all Amazon Titan models)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anthropic&lt;/strong&gt;: Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Haiku&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Meta&lt;/strong&gt;: Llama 3.1 405B Instruct, Llama 3.1 70B Instruct, Llama 3.1 8B Instruct&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mistral AI&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stability AI&lt;/strong&gt;: SDXL 1.0&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After selecting the models, click &lt;strong&gt;Request model access&lt;/strong&gt; to activate them in your account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcm3one9auz1h68olk7xp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcm3one9auz1h68olk7xp.png" alt="Model Access" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Monitor Model Access Status&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;It may take a few minutes for the models to transition from "In Progress" to "Access granted" status. You can use the &lt;strong&gt;Refresh&lt;/strong&gt; button to periodically check for updates.&lt;/p&gt;

&lt;p&gt;Once the status shows &lt;strong&gt;Access granted&lt;/strong&gt;, you're ready to begin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t6meg3v770ylq37kuzm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t6meg3v770ylq37kuzm.png" alt="Access Granted" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;
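&lt;p&gt;Once access is granted, you can also confirm which models are visible to your account from code. The sketch below uses the boto3 &lt;code&gt;bedrock&lt;/code&gt; control-plane client (assumptions: AWS credentials are configured and &lt;code&gt;us-east-1&lt;/code&gt; is a region where you enabled Bedrock).&lt;/p&gt;

```python
# Sketch: list Bedrock foundation models for a provider with boto3.
# Assumes configured AWS credentials and a region with Bedrock enabled.
def model_ids_by_provider(summaries, provider):
    """Filter list_foundation_models modelSummaries entries by providerName."""
    return [m["modelId"] for m in summaries if m.get("providerName") == provider]

if __name__ == "__main__":
    import boto3  # imported here so the helper stays usable without boto3
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    resp = bedrock.list_foundation_models()
    print(model_ids_by_provider(resp["modelSummaries"], "Anthropic"))
```

&lt;p&gt;Note that appearing in this list does not by itself mean access was granted; the Model Access page remains the source of truth for activation status.&lt;/p&gt;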




&lt;h4&gt;
  
  
  &lt;strong&gt;Using the Amazon Bedrock Playground&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;Amazon Bedrock Playground&lt;/strong&gt; is a great way to experiment with different foundation models directly inside the AWS Console. You can compare model outputs, load example prompts, and even export API requests.&lt;/p&gt;

&lt;p&gt;The playground currently supports three modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Chat&lt;/strong&gt;: Experiment with a wide range of language processing tasks in a turn-by-turn interface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text&lt;/strong&gt;: Test fast iterations on a variety of language processing tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image&lt;/strong&gt;: Generate compelling images by providing text prompts to pre-trained models.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can access the playground via the links above or from the &lt;strong&gt;Amazon Bedrock Console&lt;/strong&gt; under the &lt;strong&gt;Playgrounds&lt;/strong&gt; side menu. Take a few minutes to explore the examples.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Playground Examples&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Here are some examples you can try in each playground mode:&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Chat Mode&lt;/strong&gt;
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;Select model&lt;/strong&gt; button to open the Model Selection popup.&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Anthropic Claude 3 Sonnet&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qg14ee3lfyrhv6213ci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qg14ee3lfyrhv6213ci.png" alt="Select Model" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Load examples&lt;/strong&gt; and select &lt;strong&gt;Advanced Q&amp;amp;A with Citations&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Once the example is loaded, click &lt;strong&gt;Run&lt;/strong&gt; to start the chat.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rweqnq1i5a0ch9o3y0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rweqnq1i5a0ch9o3y0c.png" alt="Advanced Q&amp;amp;A" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can adjust the model’s configuration in the sidebar. Try changing the &lt;strong&gt;Temperature&lt;/strong&gt; to &lt;strong&gt;1&lt;/strong&gt; to make the model more creative in its responses.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Text Mode&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;In this example, we selected &lt;strong&gt;Amazon Titan Text G1 - Express&lt;/strong&gt; as the model and loaded the &lt;strong&gt;JSON creation&lt;/strong&gt; example.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Try changing the model by selecting &lt;strong&gt;Change&lt;/strong&gt; and choosing &lt;strong&gt;Mistral&lt;/strong&gt; → &lt;strong&gt;Mistral Large 2&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Clear the output and run the prompt again.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83cls06zk61s3a228kuz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83cls06zk61s3a228kuz.png" alt="Text Mode" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice how the output differs. It's important to experiment with different foundation models to find the one that best fits your use case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej5ox9evm9arrqabikdw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej5ox9evm9arrqabikdw.png" alt="Model Output" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs300d8c99k0h8vx689do.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs300d8c99k0h8vx689do.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Image Mode&lt;/strong&gt;
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Titan Image Generator G1&lt;/strong&gt; as the model and load the &lt;strong&gt;Generate images from a text prompt&lt;/strong&gt; example.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7lt1t8yg0b0e7zv1div.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7lt1t8yg0b0e7zv1div.png" alt="Image Mode" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Try changing the &lt;strong&gt;Prompt Strength&lt;/strong&gt; to &lt;strong&gt;10&lt;/strong&gt; and experiment with different prompts, such as:

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;"Unicorns in a magical forest. Lots of trees and animals around. The mood is bright, and there is lots of natural lighting."&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;"Downtown City, with lots of skyscrapers. At night time, lots of lights in the buildings."&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Play around with different prompts to see how the model responds to variations in input.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Wrap Up: Using Amazon Bedrock in Applications via API&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Now that you’ve explored the Bedrock Playground, let’s see how we can bring the power of &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; to applications using the API.&lt;/p&gt;

&lt;p&gt;The playground is a great starting point, but integrating these models into your applications will give you the flexibility to create production-level solutions. Whether you’re generating text, images, or handling interactive chat, Amazon Bedrock offers a serverless, scalable way to leverage powerful foundation models.&lt;/p&gt;
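&lt;p&gt;As a minimal sketch of that API path, the snippet below calls a text model through the &lt;code&gt;bedrock-runtime&lt;/code&gt; client. Assumptions: AWS credentials are configured, the &lt;code&gt;amazon.titan-text-express-v1&lt;/code&gt; model is activated in your account, and the request body follows the Amazon Titan Text format (other providers use different body shapes).&lt;/p&gt;

```python
# Sketch: invoke a Bedrock text model from application code.
# Assumptions: Titan Text Express is activated; region is us-east-1.
import json

def build_titan_body(prompt, temperature=0.5, max_tokens=512):
    """Build the JSON request body used by Amazon Titan Text models."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "temperature": temperature,
            "maxTokenCount": max_tokens,
        },
    })

if __name__ == "__main__":
    import boto3  # requires AWS credentials with Bedrock access
    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = runtime.invoke_model(
        modelId="amazon.titan-text-express-v1",
        body=build_titan_body("Summarize what Amazon Bedrock is in one sentence."),
    )
    result = json.loads(resp["body"].read())
    print(result["results"][0]["outputText"])
```

&lt;p&gt;The playground's &lt;strong&gt;Export API request&lt;/strong&gt; feature is a convenient way to get the exact body shape for whichever model you tested.&lt;/p&gt;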

&lt;h3&gt;
  
  
  Project 3: Chat with your Documents
&lt;/h3&gt;

&lt;p&gt;This module teaches how to use &lt;strong&gt;Retrieval Augmented Generation (RAG)&lt;/strong&gt;, a powerful approach that combines document retrieval with generative AI models to answer questions based on the content of documents. We'll learn how to set up a &lt;strong&gt;RAG&lt;/strong&gt; system using Amazon Bedrock and work with various types of documents, including PDFs and knowledge bases, to generate accurate responses based on the context of the document.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Architecture Overview&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In this module, the &lt;strong&gt;Retrieval Augmented Generation (RAG)&lt;/strong&gt; system consists of several key components:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk57y319qgfpomugt5ykk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk57y319qgfpomugt5ykk.gif" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Embeddings&lt;/strong&gt;: Text is converted into vector representations (embeddings) using a model such as Amazon Titan Text Embeddings. These embeddings capture the meaning and context of the text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vector Database&lt;/strong&gt;: Once converted into embeddings, the text data is stored in a vector database, which enables quick and relevant retrieval of documents using similarity searches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bedrock for RAG&lt;/strong&gt;: With embeddings and a vector store, we use Amazon Bedrock to retrieve the most relevant documents based on the user query and then pass those documents to a language model (like Claude or Titan) to generate a relevant answer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LangChain&lt;/strong&gt;: This Python framework simplifies the development of applications with LLMs and helps with managing the RAG process.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
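&lt;p&gt;The retrieval step in components 1 and 2 boils down to a similarity search over vectors. The toy sketch below uses tiny hand-made vectors to show the idea; in the labs, the embeddings come from a Bedrock embeddings model and the search is handled by the vector database.&lt;/p&gt;

```python
# Toy illustration of retrieval by cosine similarity.
# The 3-dimensional embeddings here are made up for demonstration only.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(query_vec, docs):
    """Return the document whose embedding is closest to the query vector."""
    return max(docs, key=lambda d: cosine_similarity(query_vec, d["embedding"]))

docs = [
    {"text": "I work in New York City.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Your dog is so cute.", "embedding": [0.1, 0.9, 0.0]},
]
print(most_similar([0.8, 0.2, 0.0], docs)["text"])  # → I work in New York City.
```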

&lt;h3&gt;
  
  
  &lt;strong&gt;Exercise 1: Getting Started with RAG&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;base_rag.py&lt;/code&gt; script demonstrates how RAG works by using a small set of documents (sentences) and querying them to generate answers based on context.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Document Setup&lt;/strong&gt;: Define a set of sentences representing different topics, such as pets, cities, and colors.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;sentences&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Your dog is so cute.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;How cute your dog is!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You have such a cute dog!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;New York City is the place where I work.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I work in New York City.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What color do you like the most?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is your favorite color?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Embedding with Amazon Bedrock&lt;/strong&gt;: Use the &lt;code&gt;Amazon Titan Text Embeddings&lt;/code&gt; model to convert these sentences into embeddings.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BedrockEmbeddings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bedrock_runtime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;amazon.titan-embed-text-v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Vector Store with FAISS&lt;/strong&gt;: Store the embeddings in a &lt;strong&gt;FAISS&lt;/strong&gt; vector store, which allows for efficient similarity searches.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;local_vector_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;FAISS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_texts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sentences&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;RAG Workflow&lt;/strong&gt;: Given a user query, convert the query into an embedding, retrieve the relevant documents, and combine the context of the documents to generate a coherent answer using a model like &lt;strong&gt;Claude&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt;

&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Use the following pieces of context to answer the question at the end.

&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;

Question: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
Answer:&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Run the Code&lt;/strong&gt;: Test the RAG system by running the script in the terminal.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 rag_examples/base_rag.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can experiment by modifying the query in the code. For instance, try asking, "What city do I work in?" and see how the system responds based on the context.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Exercise 2: Chat with a PDF&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now let's work with a PDF document. In the file &lt;code&gt;chat_with_pdf.py&lt;/code&gt;, the function &lt;code&gt;chunk_doc_to_text&lt;/code&gt; reads the PDF and splits its content into 1,000-character chunks before storing them in the vector database.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PDF Chunking&lt;/strong&gt;: The process of chunking the document allows you to process large files and query sections of the document efficiently. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Querying the PDF&lt;/strong&gt;: After the PDF has been chunked, you can use a query like &lt;strong&gt;"What are some good use cases for non-SQL databases?"&lt;/strong&gt; to test the retrieval and response capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
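&lt;p&gt;To make the chunking step concrete, here is a minimal fixed-size splitter. This is a simplified sketch: the lab's &lt;code&gt;chunk_doc_to_text&lt;/code&gt; also extracts text from the PDF first, and production splitters typically overlap chunks and respect sentence boundaries.&lt;/p&gt;

```python
# Minimal sketch of fixed-size chunking (simplified vs. the lab's function).
def chunk_text(text, chunk_size=1000):
    """Split text into consecutive chunks of at most chunk_size characters."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

chunks = chunk_text("a" * 2500, chunk_size=1000)
print([len(c) for c in chunks])  # → [1000, 1000, 500]
```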

&lt;p&gt;Run the code as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 rag_examples/chat_with_pdf.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Play around with the queries and see how the model answers questions based on the PDF context.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Creating a Knowledge Base&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A Knowledge Base (KB) in Amazon Bedrock helps store and query documents in a structured manner. By uploading a dataset (e.g., AWS Well-Architected Framework) to Amazon S3, you can automatically create a KB, which will handle the embeddings and vector database setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyow0p9tsmrkmx31ajaav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyow0p9tsmrkmx31ajaav.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create Knowledge Base&lt;/strong&gt;: Navigate to the &lt;strong&gt;Knowledge Base Console&lt;/strong&gt; and create a new Knowledge Base by uploading documents to an S3 bucket and selecting the embeddings model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sync Data&lt;/strong&gt;: Once the Knowledge Base is created, click the &lt;strong&gt;Sync&lt;/strong&gt; button to load the data. You can then query the KB for information like &lt;strong&gt;"What is a VPC?"&lt;/strong&gt; and retrieve relevant responses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Query via Console&lt;/strong&gt;: Use the console to enter a query and see how the system responds. For example:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Can you explain what a VPC is?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30s5ukr9bk33tyj8oli8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30s5ukr9bk33tyj8oli8.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Exercise 3: Using the Knowledge Base API&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You can also query your Knowledge Base programmatically via the API using the &lt;strong&gt;retrieve&lt;/strong&gt; or &lt;strong&gt;retrieve_and_generate&lt;/strong&gt; methods. These methods can be invoked to fetch documents or generate answers using the RAG process.&lt;/p&gt;
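&lt;p&gt;A hedged sketch of what such a call looks like with boto3's &lt;code&gt;bedrock-agent-runtime&lt;/code&gt; client is shown below. The &lt;code&gt;YOUR_KB_ID&lt;/code&gt; placeholder and the model ARN are assumptions; substitute your own Knowledge Base ID and a model you have activated.&lt;/p&gt;

```python
# Sketch: query a Bedrock Knowledge Base with retrieve_and_generate.
# YOUR_KB_ID and the model ARN are placeholders, not real values.
def build_rag_config(kb_id, model_arn):
    """Build the retrieveAndGenerateConfiguration for a Knowledge Base query."""
    return {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": kb_id,
            "modelArn": model_arn,
        },
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials with Bedrock access
    agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
    resp = agent_runtime.retrieve_and_generate(
        input={"text": "What is a VPC?"},
        retrieveAndGenerateConfiguration=build_rag_config(
            "YOUR_KB_ID",
            "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        ),
    )
    print(resp["output"]["text"])
```

&lt;p&gt;The &lt;strong&gt;retrieve&lt;/strong&gt; method, by contrast, returns only the matching document chunks, leaving the generation step to you.&lt;/p&gt;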

&lt;ol&gt;
&lt;li&gt;Open the file &lt;code&gt;kb_rag.py&lt;/code&gt; and update the &lt;strong&gt;KB_ID&lt;/strong&gt; with your created Knowledge Base ID.&lt;/li&gt;
&lt;li&gt;Run the script using:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 rag_examples/kb_rag.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ayc22s2uh1amn2khk4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ayc22s2uh1amn2khk4m.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuat94bl9noci2aco821.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuat94bl9noci2aco821.png" alt="Image description" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can modify the &lt;code&gt;QUERY&lt;/code&gt; variable to test different questions and see how the KB API performs.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Building Agents for Amazon Bedrock&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In this section, we'll build an &lt;strong&gt;Agent&lt;/strong&gt; using Amazon Bedrock. An agent is a task-oriented assistant, built on a foundation model, that can interact with other services, such as querying Knowledge Bases or invoking AWS Lambda functions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create an Agent&lt;/strong&gt;: In the &lt;strong&gt;Agent Console&lt;/strong&gt;, create a new agent named &lt;code&gt;Agent-AWS&lt;/code&gt;, select the model (Claude 3 Sonnet), and provide a role description (e.g., AWS Certified Solutions Architect).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf4pks9ndu8marleldk7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf4pks9ndu8marleldk7.png" alt="Image description" width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Define Action Groups&lt;/strong&gt;: Action groups are predefined tasks that the agent can perform. For instance, you can define an action to process data by reading records from a database.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F617rez9fm6sc8nfuqz7v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F617rez9fm6sc8nfuqz7v.png" alt="Image description" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Knowledge Base Integration&lt;/strong&gt;: You can add the Knowledge Base you created earlier to the agent. This allows the agent to query the KB and use it to answer AWS-related questions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test the Agent&lt;/strong&gt;: Once the agent is created, use the console to test it. Ask questions like &lt;strong&gt;"What can you tell me about S3 buckets?"&lt;/strong&gt; and inspect the agent’s responses. You can also simulate errors to test the agent's capabilities to troubleshoot issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07ksicatndz3onqfe3x8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07ksicatndz3onqfe3x8.png" alt="Image description" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7db608roffuq95bwujau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7db608roffuq95bwujau.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;
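&lt;p&gt;Outside the console, the same test can be scripted with the &lt;code&gt;invoke_agent&lt;/code&gt; API, which streams the answer back as chunk events. The sketch below assumes placeholder values: &lt;code&gt;YOUR_AGENT_ID&lt;/code&gt; comes from your agent's details page, and &lt;code&gt;TSTALIASID&lt;/code&gt; is the built-in draft alias.&lt;/p&gt;

```python
# Sketch: invoke a Bedrock agent programmatically and assemble the streamed
# response. YOUR_AGENT_ID is a placeholder from your agent's details page.
def assemble_completion(events):
    """Concatenate the streamed chunk bytes into the final answer text."""
    return "".join(
        e["chunk"]["bytes"].decode("utf-8") for e in events if "chunk" in e
    )

if __name__ == "__main__":
    import boto3, uuid  # requires AWS credentials with Bedrock access
    agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
    resp = agent_runtime.invoke_agent(
        agentId="YOUR_AGENT_ID",
        agentAliasId="TSTALIASID",
        sessionId=str(uuid.uuid4()),
        inputText="What can you tell me about S3 buckets?",
    )
    print(assemble_completion(resp["completion"]))
```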

&lt;h3&gt;
  
  
  &lt;strong&gt;Debugging Lambda Functions with Amazon Q&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Use the following test event JSON to mimic the agent calling the function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"alias"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TSTALIASID"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Agent-AWS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DRAFT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ADI6ICMMZZ"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"sessionId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"975786472213626"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"httpMethod"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"sessionAttributes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"inputText"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Can you get the number of records in the databse"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"promptSessionAttributes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"apiPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/get_num_records"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"messageVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"actionGroup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"agent_action_group"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
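&lt;p&gt;For context, a minimal Python handler that accepts this test event might look like the sketch below. The field names follow the event above, and the response envelope follows the Bedrock agent action-group contract (&lt;code&gt;messageVersion&lt;/code&gt; 1.0); the record count is a placeholder, and the project's actual function may differ.&lt;/p&gt;

```python
import json

def lambda_handler(event, context):
    """Sketch of a Lambda handler behind a Bedrock agent action group."""
    api_path = event.get("apiPath", "")
    http_method = event.get("httpMethod", "GET")

    if api_path == "/get_num_records":
        # Placeholder result; the real function would query the database here.
        body = {"num_records": 0}
        status_code = 200
    else:
        body = {"error": f"Unsupported apiPath: {api_path}"}
        status_code = 404

    # Bedrock agents expect the result wrapped in this response envelope.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup", ""),
            "apiPath": api_path,
            "httpMethod": http_method,
            "httpStatusCode": status_code,
            "responseBody": {
                "application/json": {"body": json.dumps(body)}
            },
        },
    }
```

&lt;p&gt;Invoking the function with the test event above should then produce a 200 envelope for &lt;code&gt;/get_num_records&lt;/code&gt;, which makes it easy to tell an agent-side problem from a function-side one.&lt;/p&gt;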



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0n2fwkyfcoa4mw7ls62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0n2fwkyfcoa4mw7ls62.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you encounter errors in Lambda functions triggered by the agent, you can use &lt;strong&gt;Amazon Q&lt;/strong&gt; to debug and resolve issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8eoe6berjku3fikc2nur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8eoe6berjku3fikc2nur.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Invoke Lambda Function&lt;/strong&gt;: In the Lambda console, invoke your function and intentionally trigger an error.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8d27wb91zpguhbhvz8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8d27wb91zpguhbhvz8w.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4dorvy7char0ob8j2pp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4dorvy7char0ob8j2pp.png" alt="Image description" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshoot with Amazon Q&lt;/strong&gt;: Use &lt;strong&gt;Amazon Q&lt;/strong&gt; to debug the error and resolve issues by following the troubleshooting steps.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv5jkl16jannyzw2on3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frv5jkl16jannyzw2on3n.png" alt="Image description" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Clean-up&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once you've finished, make sure to clean up your resources by deleting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 Objects&lt;/strong&gt;: Delete any objects in the S3 buckets (e.g., &lt;code&gt;awsdocsbucket&lt;/code&gt;, &lt;code&gt;openapiBucket&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Roles&lt;/strong&gt;: Delete IAM roles associated with Bedrock and other services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge Base&lt;/strong&gt;: Delete the Knowledge Base and any related data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent&lt;/strong&gt;: Delete the agent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenSearch Collection&lt;/strong&gt;: Delete the vector store.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudFormation Stack&lt;/strong&gt;: Delete the CloudFormation stack to remove all resources.&lt;/li&gt;
&lt;/ul&gt;
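&lt;p&gt;If you prefer to script part of the clean-up, the helpers below sketch how the S3 and CloudFormation steps could be automated with boto3. The bucket and stack names are placeholders from this walkthrough, and deletion is irreversible, so verify every name before running anything.&lt;/p&gt;

```python
def empty_buckets(s3, bucket_names):
    """Delete every object in the given buckets (the buckets themselves remain)."""
    for name in bucket_names:
        s3.Bucket(name).objects.all().delete()

def delete_stack(cloudformation, stack_name):
    """Delete the CloudFormation stack, removing the resources it created."""
    cloudformation.delete_stack(StackName=stack_name)

# Example wiring (names below are placeholders; double-check before running):
# import boto3
# empty_buckets(boto3.resource("s3"), ["awsdocsbucket", "openapiBucket"])
# delete_stack(boto3.client("cloudformation"), "my-bedrock-agent-stack")
```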

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This project demonstrates the flexibility of Generative AI on AWS to build dynamic applications. Whether it’s no-code development with PartyRock, scalable AI integrations with Bedrock, or context-driven responses through RAG, AWS offers the tools to create efficient and powerful AI solutions. These applications highlight the potential for AI-driven solutions in customer support, knowledge management, and interactive content creation.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>learning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Building a Serverless Recipe Generator : AWS Project</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Mon, 11 Nov 2024 09:14:04 +0000</pubDate>
      <link>https://forem.com/shubham_murti/building-a-serverless-recipe-generator-aws-project-o7a</link>
      <guid>https://forem.com/shubham_murti/building-a-serverless-recipe-generator-aws-project-o7a</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This project walks through creating a serverless recipe generator application using AWS Amplify and Amazon Bedrock. Users can input ingredients and receive AI-generated recipes via a simple web interface. By combining Generative AI with a serverless architecture, this application offers a scalable and interactive experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Tech Stack&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Amplify&lt;/strong&gt;: Provides full-stack hosting and deployment services for the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Bedrock&lt;/strong&gt;: Provides access to the foundation model used to generate recipes from user input.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS AppSync&lt;/strong&gt;: Manages real-time API connections between the frontend and backend.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Cognito&lt;/strong&gt;: Provides secure user authentication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Handles serverless backend functionality for processing requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js, npm, Git, GitHub&lt;/strong&gt;: Essential tools for frontend dependencies and version control.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AWS Account&lt;/strong&gt;: Permissions for Amplify, Cognito, Bedrock, and Lambda.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AWS CLI and Amplify CLI&lt;/strong&gt;: For managing resources and initializing the Amplify project.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Node.js &amp;amp; npm&lt;/strong&gt;: For running the React app.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GitHub Repository&lt;/strong&gt;: Enables Amplify’s continuous deployment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Basic AWS Knowledge&lt;/strong&gt;: Familiarity with Amplify, Cognito, AppSync, and Lambda.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Use Case&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt;: Searching for suitable recipes based on available ingredients can be time-consuming, and manually sifting through results often yields non-personalized options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: This serverless recipe generator offers a &lt;strong&gt;real-time AI-powered solution&lt;/strong&gt;, allowing users to input ingredients and receive tailored recipes instantly. The use of &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; makes it possible to generate diverse, personalized recipes on demand, saving users time and enhancing their cooking experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Relevance&lt;/strong&gt;: This type of application has potential in e-commerce, cooking platforms, and smart home solutions. By integrating &lt;strong&gt;Generative AI&lt;/strong&gt; and &lt;strong&gt;serverless architecture&lt;/strong&gt;, we demonstrate a scalable, real-time solution for creating personalized recipes or content.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Architecture Diagram&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppdfoxymhxa27bw6zfki.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppdfoxymhxa27bw6zfki.gif" alt="Architecture Diagram" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;





&lt;h4&gt;
  
  
  &lt;strong&gt;Component Breakdown&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Frontend (React Application)&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
User-facing interface where users input ingredients and receive generated recipes. Hosted on &lt;strong&gt;AWS Amplify&lt;/strong&gt; with continuous deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Cognito&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Manages secure authentication for users, ensuring only authorized access to backend resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS AppSync&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Provides the GraphQL API to handle communication between the frontend and backend services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Serverless function that takes user input and forwards it to &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; for AI-powered recipe generation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Bedrock (Claude 3 Sonnet)&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Utilizes generative AI to create unique recipes based on input ingredients.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Step-by-Step Implementation&lt;/strong&gt;
&lt;/h3&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Task 1: Host a Static Website&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In this task, you will create a React application and deploy it to the Cloud using &lt;strong&gt;AWS Amplify&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 1: Create a New React Application&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In a new terminal or command line window, run the following command to use Vite to create a React application:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm create vite@latest ai-recipe-generator &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;--template&lt;/span&gt; react-ts &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Navigate to the project directory:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;cd &lt;/span&gt;ai-recipe-generator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Install the necessary dependencies:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Start the development server:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start your React application locally. You should see the Vite + React app running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3cg5232jkrcbdjh4i1i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3cg5232jkrcbdjh4i1i.png" alt="React App" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;In the terminal window, open the local link to view the application.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qj1cejzg9rr5kif8y4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qj1cejzg9rr5kif8y4m.png" alt="Local Link" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 2: Initialize a GitHub Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this step, you will create a GitHub repository and commit your code to it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign in to GitHub at &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Repository name&lt;/strong&gt;, enter &lt;code&gt;ai-recipe-generator&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Choose the &lt;strong&gt;Public&lt;/strong&gt; radio button.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Create a new repository&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open a new terminal window, navigate to your project’s root folder (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), and run the following commands to initialize Git and push the application to your GitHub repository:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git init
   git add &lt;span class="nb"&gt;.&lt;/span&gt;
   git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"first commit"&lt;/span&gt;
   git remote add origin git@github.com:&amp;lt;your-username&amp;gt;/ai-recipe-generator.git
   git branch &lt;span class="nt"&gt;-M&lt;/span&gt; main
   git push &lt;span class="nt"&gt;-u&lt;/span&gt; origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the &lt;code&gt;git@github.com:&amp;lt;your-username&amp;gt;/ai-recipe-generator.git&lt;/code&gt; with your own GitHub SSH URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzy6481oamgjn8vavj0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzy6481oamgjn8vavj0z.png" alt="GitHub Commit" width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 3: Install the Amplify Packages&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a new terminal window, navigate to your app's root folder (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), and run the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm create amplify@latest &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will scaffold a lightweight Amplify project in your app’s directory.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;After installation, push the changes to GitHub:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git add &lt;span class="nb"&gt;.&lt;/span&gt;
   git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s1"&gt;'installing amplify'&lt;/span&gt;
   git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja94euu0euwt1khbnmsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja94euu0euwt1khbnmsh.png" alt="Amplify Install" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 4: Deploy Your App with AWS Amplify&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign in to the &lt;strong&gt;AWS Management Console&lt;/strong&gt; and open the &lt;strong&gt;AWS Amplify&lt;/strong&gt; console at &lt;a href="https://console.aws.amazon.com/amplify/apps" rel="noopener noreferrer"&gt;AWS Amplify&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Create new app&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Start building with Amplify&lt;/strong&gt; page, select &lt;strong&gt;GitHub&lt;/strong&gt; for &lt;strong&gt;Deploy your app&lt;/strong&gt; and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authenticate with GitHub. You will be automatically redirected back to the Amplify console. Choose the repository and main branch you created earlier, then select &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave the default build settings and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review your inputs and click &lt;strong&gt;Save and deploy&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS Amplify will now build your source code and deploy your app at &lt;code&gt;https://...amplifyapp.com&lt;/code&gt;. Your app will automatically update with every Git push. It may take up to 5 minutes to deploy.&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;Once the build is complete, click &lt;strong&gt;Visit deployed URL&lt;/strong&gt; to see your web app live.&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Task 2: Manage Users&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In this task, you will configure &lt;strong&gt;Amplify Auth&lt;/strong&gt; for user authentication and enable &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; foundation model access.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 1: Set up Amplify Auth&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The app uses email as the default login mechanism. When users sign up, they will receive a verification email. Customize the verification email by updating the following code in the &lt;code&gt;ai-recipe-generator/amplify/auth/resource.ts&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;defineAuth&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/backend&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;auth&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;defineAuth&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="na"&gt;loginWith&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="na"&gt;verificationEmailStyle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;CODE&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="na"&gt;verificationEmailSubject&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Welcome to the AI-Powered Recipe Generator!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="na"&gt;verificationEmailBody&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;createCode&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
           &lt;span class="s2"&gt;`Use this code to confirm your account: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;createCode&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
     &lt;span class="p"&gt;},&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nkluiubpy8qm052hxpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nkluiubpy8qm052hxpv.png" alt="Custom Verification Email" width="592" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows an example of the customized verification email.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 2: Set up Amazon Bedrock Model Access&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to the &lt;strong&gt;AWS Management Console&lt;/strong&gt; and open the &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; console at &lt;a href="https://console.aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;AWS Amazon Bedrock&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Verify that you are in the &lt;strong&gt;N. Virginia (us-east-1)&lt;/strong&gt; region, and select &lt;strong&gt;Get started&lt;/strong&gt;.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Foundation models&lt;/strong&gt; section, choose the &lt;strong&gt;Claude model&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll down to the &lt;strong&gt;Claude models&lt;/strong&gt; section, select the &lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt; tab, and click &lt;strong&gt;Request model access&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;If you already have access to some models, the button will show &lt;strong&gt;Manage model access&lt;/strong&gt; instead.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Base models&lt;/strong&gt; section, for &lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt;, choose &lt;strong&gt;Available to request&lt;/strong&gt;, then select &lt;strong&gt;Request model access&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Edit model access&lt;/strong&gt; page, click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Review and Submit&lt;/strong&gt; page, click &lt;strong&gt;Submit&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
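&lt;p&gt;Once the access request is approved, you can sanity-check it with a one-off invocation. The sketch below builds the request body Bedrock expects for Anthropic messages-API models (the same shape the app's resolver uses later); the commented lines show how it could be sent with boto3, assuming AWS credentials and us-east-1 access.&lt;/p&gt;

```python
import json

# Model ID used throughout this project.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_invoke_body(prompt, max_tokens=1000):
    """Build the JSON body for invoking an Anthropic messages-API model on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    })

# To run the check for real (requires AWS credentials and model access):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(modelId=MODEL_ID, body=build_invoke_body("Say hello"))
# print(json.loads(response["body"].read())["content"][0]["text"])
```

&lt;p&gt;If the call returns text instead of an access-denied error, model access is in place for the app.&lt;/p&gt;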




&lt;p&gt;Now your application should be up and running, with user authentication and AI-powered recipe generation using &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;'s &lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt;.&lt;/p&gt;





&lt;h3&gt;
  
  
  &lt;strong&gt;Task 3: Build a Serverless Backend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In this task, you will use &lt;strong&gt;AWS Amplify&lt;/strong&gt; and &lt;strong&gt;AWS Lambda&lt;/strong&gt; to build a serverless function.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 1: Create a Lambda Function for Handling Requests&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On your local machine, navigate to the &lt;code&gt;ai-recipe-generator/amplify/data&lt;/code&gt; folder, and create a new file named &lt;code&gt;bedrock.js&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the file with the following code:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The code defines a request function that constructs the HTTP request to invoke the &lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt; foundation model in &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;. The response function parses the response and returns the generated recipe.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ingredients&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

     &lt;span class="c1"&gt;// Construct the prompt with the provided ingredients&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Suggest a recipe idea using these ingredients: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ingredients&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;.`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

     &lt;span class="c1"&gt;// Return the request configuration&lt;/span&gt;
     &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;resourcePath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`/model/anthropic.claude-3-sonnet-20240229-v1:0/invoke`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="p"&gt;},&lt;/span&gt;
         &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
           &lt;span class="na"&gt;anthropic_version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrock-2023-05-31&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
             &lt;span class="p"&gt;{&lt;/span&gt;
               &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
               &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                 &lt;span class="p"&gt;{&lt;/span&gt;
                   &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                   &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`\n\nHuman: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;\n\nAssistant:`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                 &lt;span class="p"&gt;},&lt;/span&gt;
               &lt;span class="p"&gt;],&lt;/span&gt;
             &lt;span class="p"&gt;},&lt;/span&gt;
           &lt;span class="p"&gt;],&lt;/span&gt;
         &lt;span class="p"&gt;}),&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
     &lt;span class="p"&gt;};&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="c1"&gt;// Parse the response body&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;parsedBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

     &lt;span class="c1"&gt;// Extract the text content from the response&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;parsedBody&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="p"&gt;};&lt;/span&gt;

     &lt;span class="c1"&gt;// Return the response&lt;/span&gt;
     &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;request&lt;/code&gt; function constructs the AI prompt and sends it to Amazon Bedrock. The &lt;code&gt;response&lt;/code&gt; function parses the generated recipe from the returned JSON.&lt;/p&gt;
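<p>To see what the <code>response</code> function is doing, here is a plain TypeScript sketch that parses a body in the shape the Bedrock Messages API returns and extracts the first text block, mirroring the resolver logic above (the sample recipe text is invented for illustration):</p>

```typescript
// Sample body in the shape returned by the Bedrock Messages API
// (the recipe text here is invented for illustration).
const sampleBody = JSON.stringify({
  content: [{ type: "text", text: "Tomato soup: simmer tomatoes with basil." }],
});

// Mirror of the resolver's response() logic: parse the JSON body
// and pull out the first text block as the recipe.
function extractRecipe(body: string): string {
  const parsed = JSON.parse(body);
  return parsed.content[0].text;
}

console.log(extractRecipe(sampleBody));
```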




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 2: Add Amazon Bedrock as a Data Source&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;code&gt;amplify/backend.ts&lt;/code&gt; file and add the following code. Save the file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The code adds an HTTP data source for &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; to your API and grants it the necessary permissions to invoke the Claude model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;defineBackend&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/backend&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./data/resource&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PolicyStatement&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws-cdk-lib/aws-iam&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;auth&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./auth/resource&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;defineBackend&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;

   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bedrockDataSource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;backend&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;graphqlApi&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addHttpDataSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrockDS&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://bedrock-runtime.us-east-1.amazonaws.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;authorizationConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="na"&gt;signingRegion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="na"&gt;signingServiceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrock&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
     &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="p"&gt;);&lt;/span&gt;

   &lt;span class="nx"&gt;bedrockDataSource&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;grantPrincipal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addToPrincipalPolicy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PolicyStatement&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
       &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
         &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrock:InvokeModel&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
     &lt;span class="p"&gt;})&lt;/span&gt;
   &lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code integrates Amazon Bedrock into your backend, enabling the serverless app to invoke the Claude 3 Sonnet model.&lt;/p&gt;
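<p>Note that the policy is scoped to a single foundation model in a single Region. If you later change the model or Region, the ARN must be updated consistently with the data source endpoint; a small helper like the following can keep the two in sync (this helper is our own illustration, not part of the Amplify API):</p>

```typescript
// Build a Bedrock foundation-model ARN for a given Region and model ID.
// Foundation-model ARNs have an empty account field, hence the "::".
// Illustrative helper only; not part of the Amplify or CDK API.
function foundationModelArn(region: string, modelId: string): string {
  return "arn:aws:bedrock:" + region + "::foundation-model/" + modelId;
}

console.log(
  foundationModelArn("us-east-1", "anthropic.claude-3-sonnet-20240229-v1:0")
);
```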




&lt;h3&gt;
  
  
  &lt;strong&gt;Task 4: Deploy the Backend API&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Step 1: Set Up Amplify Data&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;In the &lt;code&gt;amplify/data/resource.ts&lt;/code&gt; file, define the schema and data resource as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;ClientSchema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;defineData&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/backend&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="na"&gt;BedrockResponse&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;customType&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
       &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
       &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
     &lt;span class="p"&gt;}),&lt;/span&gt;

     &lt;span class="na"&gt;askBedrock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;arguments&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ingredients&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;returns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;BedrockResponse&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;authorization&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;authenticated&lt;/span&gt;&lt;span class="p"&gt;()])&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
         &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;custom&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./bedrock.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrockDS&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
       &lt;span class="p"&gt;),&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;Schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ClientSchema&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;schema&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;defineData&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="nx"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="na"&gt;authorizationModes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;defaultAuthorizationMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;apiKey&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="na"&gt;apiKeyAuthorizationMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="na"&gt;expiresInDays&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
     &lt;span class="p"&gt;},&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this step, you define a &lt;strong&gt;GraphQL&lt;/strong&gt; schema and link the &lt;code&gt;askBedrock&lt;/code&gt; query to the custom resolver you created earlier (&lt;code&gt;bedrock.js&lt;/code&gt;). The schema allows authenticated users to invoke the query and retrieve a recipe suggestion.&lt;/p&gt;
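<p>The query takes an array of ingredient strings as its argument. Before they reach the model, the resolver collapses them into a single prompt string; a minimal sketch of that step is shown below (the exact wording of the prompt is up to you, this phrasing is only an example):</p>

```typescript
// Turn the askBedrock ingredients argument into a single prompt string.
// The phrasing here is illustrative; adjust it to taste.
function buildPrompt(ingredients: string[]): string {
  return (
    "Suggest a recipe idea using these ingredients: " + ingredients.join(", ")
  );
}

console.log(buildPrompt(["tomato", "basil", "garlic"]));
```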

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs3tc08p2l8eoz8bdqmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs3tc08p2l8eoz8bdqmv.png" alt="Amplify Schema" width="770" height="572"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 2: Deploy Cloud Resources&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Open a new terminal window, navigate to your app's project folder (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), and run the following command to deploy cloud resources into an isolated development space:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npx ampx sandbox
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command sets up a sandbox environment where you can quickly iterate on your changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4qv9mmq04j2rubgtdsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4qv9mmq04j2rubgtdsh.png" alt="Sandbox Deployment" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After the sandbox has been fully deployed, you will see a confirmation message in the terminal, and an &lt;code&gt;amplify_outputs.json&lt;/code&gt; file will be generated and added to your project.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnngqeowg8yujnrghraqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnngqeowg8yujnrghraqx.png" alt="Deployment Confirmation" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The deployment process is now complete, and you can begin interacting with your serverless backend.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxs5y4th00wcuusq0jt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxs5y4th00wcuusq0jt1.png" alt="Deployment Output" width="586" height="586"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;The serverless backend is now set up and ready to handle recipe generation requests using &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;. You've created a custom resolver that calls the Claude 3 Sonnet model through an HTTP data source, integrated it with AWS Amplify, and deployed it to the cloud.&lt;/p&gt;





&lt;h3&gt;
  
  
  &lt;strong&gt;Task 5: Build the Frontend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that the serverless backend is set up, it's time to create the frontend. This step will guide you through building the user interface (UI) using &lt;strong&gt;AWS Amplify&lt;/strong&gt; libraries, styled components, and React. We’ll also implement authentication to ensure that only authorized users can access the app.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 1: Install the Amplify Libraries&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;To get started with integrating AWS Amplify into your frontend, you'll need to install the core Amplify libraries. The &lt;code&gt;aws-amplify&lt;/code&gt; library provides all the necessary APIs to interact with your backend, while &lt;code&gt;@aws-amplify/ui-react&lt;/code&gt; contains pre-built UI components that help you scaffold the authentication flow and other UI elements.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a new terminal window, navigate to your project’s root folder (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), and run the following command to install the libraries:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm &lt;span class="nb"&gt;install &lt;/span&gt;aws-amplify @aws-amplify/ui-react
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd176go3ct67bdigeb6o3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd176go3ct67bdigeb6o3.png" alt="Install Amplify" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 2: Style the App UI&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Next, we’ll style the frontend to create a clean, modern interface. We'll focus on centering the layout and styling the form where users will input their ingredients.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;code&gt;ai-recipe-generator/src/index.css&lt;/code&gt;, and update it with the following code to set global styles and center the UI:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;   &lt;span class="nd"&gt;:root&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;font-family&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Inter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;system-ui&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Avenir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Helvetica&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Arial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;sans-serif&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;line-height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1.5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0.87&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
     &lt;span class="nl"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1280px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="nb"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.card&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2em&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.box&lt;/span&gt;&lt;span class="nd"&gt;:nth-child&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;3&lt;/span&gt;&lt;span class="nt"&gt;n&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="err"&gt;1&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;grid-column&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="nc"&gt;.box&lt;/span&gt;&lt;span class="nd"&gt;:nth-child&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;3&lt;/span&gt;&lt;span class="nt"&gt;n&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="err"&gt;2&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;grid-column&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="nc"&gt;.box&lt;/span&gt;&lt;span class="nd"&gt;:nth-child&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;3&lt;/span&gt;&lt;span class="nt"&gt;n&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="err"&gt;3&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;grid-column&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This CSS centers the app's layout, keeps the font legible, and gives the UI components a clean, consistent style.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieqh5uzxrmiva5x1x9dc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieqh5uzxrmiva5x1x9dc.png" alt="CSS" width="692" height="986"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Next, open &lt;code&gt;ai-recipe-generator/src/App.css&lt;/code&gt; and update it with the following code to style the ingredient input form:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;   &lt;span class="nc"&gt;.app-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="nb"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;text-align&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.header-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;padding-bottom&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.5rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;text-align&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.main-header&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.25rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bold&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#1a202c&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.main-header&lt;/span&gt; &lt;span class="nc"&gt;.highlight&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#2563eb&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.description&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;font-weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1.125rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;65ch&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#1a202c&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.form-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;margin-bottom&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.search-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;flex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;flex-direction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;column&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="py"&gt;gap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;align-items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.wide-input&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1px&lt;/span&gt; &lt;span class="nb"&gt;solid&lt;/span&gt; &lt;span class="m"&gt;#ccc&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.search-button&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;300px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#007bff&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;white&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;none&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;pointer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.search-button&lt;/span&gt;&lt;span class="nd"&gt;:hover&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#0056b3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.result-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;margin-top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;transition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;height&lt;/span&gt; &lt;span class="m"&gt;0.3s&lt;/span&gt; &lt;span class="n"&gt;ease-out&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;overflow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;hidden&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.loader-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;flex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;flex-direction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;column&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;align-items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="py"&gt;gap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.result&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#f8f9fa&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1px&lt;/span&gt; &lt;span class="nb"&gt;solid&lt;/span&gt; &lt;span class="m"&gt;#e9ecef&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;white-space&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;pre-wrap&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;word-wrap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;break-word&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;black&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bold&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;text-align&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;left&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will ensure that the ingredients form and result display are properly styled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjcna5yvnsucsp74in5b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjcna5yvnsucsp74in5b.png" alt="Form Styles" width="800" height="651"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 3: Implement the UI&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Now, let’s build the main React component for the app. We’ll integrate the &lt;strong&gt;AWS Amplify Authentication&lt;/strong&gt; components for user sign-up, sign-in, and password recovery.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;code&gt;ai-recipe-generator/src/main.tsx&lt;/code&gt; and update it with the following code. This will use the Amplify &lt;code&gt;Authenticator&lt;/code&gt; component to wrap your app and provide a complete authentication flow:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;ReactDOM&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-dom/client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./App.jsx&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./index.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Authenticator&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/ui-react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="nx"&gt;ReactDOM&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createRoot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;root&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StrictMode&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
       &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Authenticator&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;App&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
       &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Authenticator&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
     &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StrictMode&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
   &lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Authenticator&lt;/code&gt; component will handle user authentication, including sign-up, sign-in, and MFA (Multi-Factor Authentication).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlc5kkyu52xhakwlcx0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlc5kkyu52xhakwlcx0s.png" alt="Authenticator" width="800" height="744"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Next, open &lt;code&gt;ai-recipe-generator/src/App.tsx&lt;/code&gt; and update it with the following code to implement the form for ingredient submission and the logic for querying the &lt;strong&gt;askBedrock&lt;/strong&gt; function.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;FormEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Loader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Placeholder&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/ui-react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./App.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Amplify&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws-amplify&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Schema&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../amplify/data/resource&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;generateClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws-amplify/data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;outputs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../amplify_outputs.json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/ui-react/styles.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="nx"&gt;Amplify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;amplifyClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;generateClient&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Schema&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="na"&gt;authMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;userPool&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;

   &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setResult&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;loading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;onSubmit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FormEvent&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;HTMLFormElement&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;preventDefault&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
       &lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

       &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FormData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;currentTarget&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

         &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;errors&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;amplifyClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;queries&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;askBedrock&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
           &lt;span class="na"&gt;ingredients&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ingredients&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
         &lt;span class="p"&gt;});&lt;/span&gt;

         &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="nf"&gt;setResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;No data returned&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
         &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
         &lt;span class="p"&gt;}&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="nf"&gt;alert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`An error occurred: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt;
     &lt;span class="p"&gt;};&lt;/span&gt;

     &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
       &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"app-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"header-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"main-header"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
             Meet Your Personal
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;br&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"highlight"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Recipe AI&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
             Simply type a few ingredients using the format ingredient1,
             ingredient2, etc., and Recipe AI will generate an all-new recipe on
             demand...
           &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt; &lt;span class="na"&gt;onSubmit&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;onSubmit&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"form-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"search-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt;
               &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"text"&lt;/span&gt;
               &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"wide-input"&lt;/span&gt;
               &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"ingredients"&lt;/span&gt;
               &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"ingredients"&lt;/span&gt;
               &lt;span class="na"&gt;placeholder&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Ingredient1, Ingredient2, Ingredient3,...etc"&lt;/span&gt;
             &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"submit"&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"search-button"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
               Generate
             &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"result-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;loading&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"loader-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Loading...&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Loader&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"large"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Placeholder&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"large"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Placeholder&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"large"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Placeholder&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"large"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
             &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"result"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
       &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
     &lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The app allows users to input a list of ingredients, submits the request to the backend via &lt;strong&gt;Amplify&lt;/strong&gt; and &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;, and then displays the generated recipe.&lt;/p&gt;
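&lt;p&gt;Before the ingredient text reaches &lt;strong&gt;askBedrock&lt;/strong&gt;, the comma-separated input can be normalized into a clean array. The helper below is only a sketch of that idea; &lt;code&gt;parseIngredients&lt;/code&gt; is a hypothetical name, not part of the tutorial code, which passes the raw string through as a single array element:&lt;/p&gt;

```typescript
// Hypothetical helper (name and behavior are illustrative, not from the
// tutorial): turn the raw "Ingredient1, Ingredient2" text field value into
// a clean array before passing it to the askBedrock query.
function parseIngredients(raw: string): string[] {
  return raw
    .split(",")                          // break the comma-separated list apart
    .map((item) => item.trim())          // strip surrounding whitespace
    .filter((item) => item.length > 0);  // drop empty entries (e.g. ",,")
}

// Example: parseIngredients(" chicken , rice ,, basil") yields
// ["chicken", "rice", "basil"]
```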

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foat3sk46j8c2f6svey5q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foat3sk46j8c2f6svey5q.png" alt="Recipe Generation UI" width="800" height="744"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 4: Run and Test the App&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Open a new terminal window and navigate to your project’s root directory (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), then run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Visit the localhost URL shown in the terminal to open the app in your browser and test it.&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 5: Deploy the App&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Once you’ve confirmed that the app is working as expected locally, it’s time to deploy it to AWS Amplify.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the terminal, commit your changes:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git add &lt;span class="nb"&gt;.&lt;/span&gt;
   git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s1"&gt;'connect to bedrock'&lt;/span&gt;
   git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Go to the &lt;strong&gt;AWS Amplify&lt;/strong&gt; console; the push to &lt;code&gt;main&lt;/code&gt; triggers an automatic build and deployment of your app.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After deployment, you can access your live app at the provided Amplify URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3hki55r9i6tdwdid556.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3hki55r9i6tdwdid556.png" alt="Deployed App" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Challenges and Solutions&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Handling Bedrock Latency&lt;/strong&gt;: Managed response delays with retry logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Data Sync&lt;/strong&gt;: AppSync ensures consistent updates across all instances.&lt;/li&gt;
&lt;/ul&gt;
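&lt;p&gt;The retry logic mentioned above can be sketched as a small wrapper around the Bedrock call. Note that &lt;code&gt;withRetry&lt;/code&gt;, its parameters, and the backoff schedule are assumptions for illustration, not the project's actual implementation:&lt;/p&gt;

```typescript
// Illustrative sketch of retry logic with exponential backoff; the function
// name, parameters, and delay values are assumptions, not the project's code.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts: number = 3,
  baseDelayMs: number = 200,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn(); // success: return immediately
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Exponential backoff: baseDelayMs, then 2x, 4x, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)),
        );
      }
    }
  }
  throw lastError; // all attempts failed: surface the last error
}
```

&lt;p&gt;A call such as &lt;code&gt;withRetry(() =&amp;gt; amplifyClient.queries.askBedrock({ ingredients }))&lt;/code&gt; would then absorb a transient Bedrock timeout before surfacing the error to the user.&lt;/p&gt;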

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This project combines AWS Amplify, Amazon Bedrock, and Amazon Cognito to build a scalable, AI-powered recipe generator. It highlights the potential of serverless applications with Generative AI, making it ideal for real-time, interactive user experiences.&lt;/p&gt;

&lt;p&gt;Explore my &lt;a href="https://github.com/shubhammurti/AWS-Projects-Portfolio/" rel="noopener noreferrer"&gt;GitHub repository.&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>learning</category>
      <category>aws</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Full-Stack Web Application with AWS Amplify: AWS Project</title>
      <dc:creator>Shubham Murti</dc:creator>
      <pubDate>Sun, 10 Nov 2024 16:57:39 +0000</pubDate>
      <link>https://forem.com/shubham_murti/full-stack-web-application-with-aws-amplify-aws-project-5f8e</link>
      <guid>https://forem.com/shubham_murti/full-stack-web-application-with-aws-amplify-aws-project-5f8e</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This project demonstrates the creation of a full-stack web application using AWS Amplify, featuring a React frontend with user authentication, a serverless function for user sign-ups, and DynamoDB for data storage. AWS Amplify’s managed services make it easy to build a scalable and secure web application with seamless backend integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why AWS Amplify?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AWS Amplify provides backend services like hosting, authentication, and data storage, allowing developers to focus on application functionality without managing infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Learning Outcomes&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hosting&lt;/strong&gt;: Deploy a React app on AWS’s global CDN.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt;: Enable user sign-in and sign-out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Integration&lt;/strong&gt;: Use a real-time API and DynamoDB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Function Execution&lt;/strong&gt;: Trigger Lambda functions on user sign-up.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Tech Stack&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Amplify&lt;/strong&gt;: Hosting and backend services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS AppSync&lt;/strong&gt;: Real-time API management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Serverless function for user data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt;: NoSQL database for storing user emails.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;React&lt;/strong&gt;: Frontend framework.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js &amp;amp; npm&lt;/strong&gt;: For dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git &amp;amp; GitHub&lt;/strong&gt;: Version control.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Familiarity with AWS services like Amplify, Lambda, and DynamoDB.&lt;/li&gt;
&lt;li&gt;AWS CLI configured with appropriate IAM permissions.&lt;/li&gt;
&lt;li&gt;Node.js, npm, and Git installed locally.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Problem Statement&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Building scalable web applications often requires authentication, data storage, and API integration. AWS Amplify simplifies these needs with a managed backend that integrates seamlessly with the frontend.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Architecture Diagram&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni8uwzl9hw8rucnefyr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni8uwzl9hw8rucnefyr9.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Implementation Steps&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Tasks Overview&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This project is structured into six main tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create Web App&lt;/strong&gt;: Deploy a React app using AWS Amplify Console.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build Serverless Function&lt;/strong&gt;: Create a Lambda function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Data Table&lt;/strong&gt;: Set up DynamoDB for data persistence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Link Function to Web App&lt;/strong&gt;: Deploy function with API Gateway.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add Web App Interactivity&lt;/strong&gt;: Update the frontend to call the API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clean Up Resources&lt;/strong&gt;: Delete resources to avoid charges.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-by-Step Implementation&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Task 1: Host a Static Website&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In this task, you will create a React application and deploy it to the cloud using &lt;strong&gt;AWS Amplify&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a New React Application&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In a new terminal or command line window, run the following command to use Vite to create a React application:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm create vite@latest ai-recipe-generator &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;--template&lt;/span&gt; react-ts &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Navigate to the project directory:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;cd &lt;/span&gt;ai-recipe-generator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Install the necessary dependencies:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Start the development server:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start your React application locally. You should see the Vite + React app running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3cg5232jkrcbdjh4i1i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3cg5232jkrcbdjh4i1i.png" alt="React App" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;In the terminal window, open the local link to view the application.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qj1cejzg9rr5kif8y4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qj1cejzg9rr5kif8y4m.png" alt="Local Link" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 2: Initialize a GitHub Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this step, you will create a GitHub repository and commit your code to it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign in to GitHub at &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Repository name&lt;/strong&gt;, enter &lt;code&gt;ai-recipe-generator&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Choose the &lt;strong&gt;Public&lt;/strong&gt; radio button.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Create a new repository&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open a new terminal window, navigate to your project’s root folder (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), and run the following commands to initialize Git and push the application to your GitHub repository:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git init
   git add &lt;span class="nb"&gt;.&lt;/span&gt;
   git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"first commit"&lt;/span&gt;
   git remote add origin git@github.com:&amp;lt;your-username&amp;gt;/ai-recipe-generator.git
   git branch &lt;span class="nt"&gt;-M&lt;/span&gt; main
   git push &lt;span class="nt"&gt;-u&lt;/span&gt; origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the &lt;code&gt;git@github.com:&amp;lt;your-username&amp;gt;/ai-recipe-generator.git&lt;/code&gt; with your own GitHub SSH URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzy6481oamgjn8vavj0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzy6481oamgjn8vavj0z.png" alt="GitHub Commit" width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 3: Install the Amplify Packages&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a new terminal window, navigate to your app's root folder (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), and run the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm create amplify@latest &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will scaffold a lightweight Amplify project in your app’s directory.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;After installation, push the changes to GitHub:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git add &lt;span class="nb"&gt;.&lt;/span&gt;
   git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s1"&gt;'installing amplify'&lt;/span&gt;
   git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja94euu0euwt1khbnmsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja94euu0euwt1khbnmsh.png" alt="Amplify Install" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 4: Deploy Your App with AWS Amplify&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign in to the &lt;strong&gt;AWS Management Console&lt;/strong&gt; and open the &lt;strong&gt;AWS Amplify&lt;/strong&gt; console at &lt;a href="https://console.aws.amazon.com/amplify/apps" rel="noopener noreferrer"&gt;AWS Amplify&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Create new app&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Start building with Amplify&lt;/strong&gt; page, select &lt;strong&gt;GitHub&lt;/strong&gt; for &lt;strong&gt;Deploy your app&lt;/strong&gt; and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authenticate with GitHub. You will be automatically redirected back to the Amplify console. Choose the repository and main branch you created earlier, then select &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leave the default build settings and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review your inputs and click &lt;strong&gt;Save and deploy&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS Amplify will now build your source code and deploy your app at &lt;code&gt;https://...amplifyapp.com&lt;/code&gt;. Your app will automatically update with every Git push. It may take up to 5 minutes to deploy.&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;Once the build is complete, click &lt;strong&gt;Visit deployed URL&lt;/strong&gt; to see your web app live.&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Task 2: Manage Users&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In this task, you will configure &lt;strong&gt;Amplify Auth&lt;/strong&gt; for user authentication and enable &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; foundation model access.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 1: Set up Amplify Auth&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The app uses email as the default login mechanism. When users sign up, they will receive a verification email. Customize the verification email by updating the following code in the &lt;code&gt;ai-recipe-generator/amplify/auth/resource.ts&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;defineAuth&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/backend&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;auth&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;defineAuth&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="na"&gt;loginWith&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="na"&gt;verificationEmailStyle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;CODE&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="na"&gt;verificationEmailSubject&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Welcome to the AI-Powered Recipe Generator!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="na"&gt;verificationEmailBody&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;createCode&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
           &lt;span class="s2"&gt;`Use this code to confirm your account: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;createCode&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
     &lt;span class="p"&gt;},&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nkluiubpy8qm052hxpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nkluiubpy8qm052hxpv.png" alt="Custom Verification Email" width="592" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows an example of the customized verification email.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 2: Set up Amazon Bedrock Model Access&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to the &lt;strong&gt;AWS Management Console&lt;/strong&gt; and open the &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; console at &lt;a href="https://console.aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;AWS Amazon Bedrock&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Verify that you are in the &lt;strong&gt;N. Virginia (us-east-1)&lt;/strong&gt; region, and select &lt;strong&gt;Get started&lt;/strong&gt;.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Foundation models&lt;/strong&gt; section, choose the &lt;strong&gt;Claude model&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll down to the &lt;strong&gt;Claude models&lt;/strong&gt; section, select the &lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt; tab, and click &lt;strong&gt;Request model access&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;If you already have access to some models, the button will show &lt;strong&gt;Manage model access&lt;/strong&gt; instead.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Base models&lt;/strong&gt; section, for &lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt;, choose &lt;strong&gt;Available to request&lt;/strong&gt;, then select &lt;strong&gt;Request model access&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Edit model access&lt;/strong&gt; page, click &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the &lt;strong&gt;Review and Submit&lt;/strong&gt; page, click &lt;strong&gt;Submit&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;Now your application should be up and running, with user authentication and AI-powered recipe generation using &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;'s &lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt;.&lt;/p&gt;





&lt;h3&gt;
  
  
  &lt;strong&gt;Task 3: Build a Serverless Backend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In this task, you will use &lt;strong&gt;AWS Amplify&lt;/strong&gt; and &lt;strong&gt;AWS Lambda&lt;/strong&gt; to build a serverless function.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 1: Create a Lambda Function for Handling Requests&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;On your local machine, navigate to the &lt;code&gt;ai-recipe-generator/amplify/data&lt;/code&gt; folder, and create a new file named &lt;code&gt;bedrock.js&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the file with the following code:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The code defines a request function that constructs the HTTP request to invoke the &lt;strong&gt;Claude 3 Sonnet&lt;/strong&gt; foundation model in &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;. The response function parses the response and returns the generated recipe.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ingredients&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

     &lt;span class="c1"&gt;// Construct the prompt with the provided ingredients&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Suggest a recipe idea using these ingredients: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ingredients&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;, &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;.`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

     &lt;span class="c1"&gt;// Return the request configuration&lt;/span&gt;
     &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;resourcePath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`/model/anthropic.claude-3-sonnet-20240229-v1:0/invoke`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="p"&gt;},&lt;/span&gt;
         &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
           &lt;span class="na"&gt;anthropic_version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrock-2023-05-31&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
             &lt;span class="p"&gt;{&lt;/span&gt;
               &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
               &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                 &lt;span class="p"&gt;{&lt;/span&gt;
                   &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                   &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`\n\nHuman: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;\n\nAssistant:`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                 &lt;span class="p"&gt;},&lt;/span&gt;
               &lt;span class="p"&gt;],&lt;/span&gt;
             &lt;span class="p"&gt;},&lt;/span&gt;
           &lt;span class="p"&gt;],&lt;/span&gt;
         &lt;span class="p"&gt;}),&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
     &lt;span class="p"&gt;};&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="c1"&gt;// Parse the response body&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;parsedBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

     &lt;span class="c1"&gt;// Extract the text content from the response&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;parsedBody&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="p"&gt;};&lt;/span&gt;

     &lt;span class="c1"&gt;// Return the response&lt;/span&gt;
     &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;request&lt;/code&gt; function constructs the AI prompt and sends it to Amazon Bedrock. The &lt;code&gt;response&lt;/code&gt; function parses the generated recipe from the returned JSON.&lt;/p&gt;
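Because the request/response pair above is pure data transformation, its core logic can be exercised locally before wiring it into AppSync. The sketch below mirrors the handlers (the names `buildPrompt` and `parseRecipe`, and the hand-made `sampleResult` stand-in for Bedrock's Messages API payload, are introduced here for illustration and are not part of the tutorial code):

```javascript
// Local sketch of the bedrock.js data flow, runnable without AWS.
// buildPrompt mirrors the prompt string in request(); parseRecipe mirrors
// the body extraction in response(). sampleResult fakes ctx.result with
// the Anthropic Messages API response shape that response() parses.
function buildPrompt(ingredients = []) {
  return `Suggest a recipe idea using these ingredients: ${ingredients.join(", ")}.`;
}

function parseRecipe(result) {
  const parsedBody = JSON.parse(result.body);
  return { body: parsedBody.content[0].text };
}

const sampleResult = {
  body: JSON.stringify({
    content: [{ type: "text", text: "Try a tomato-basil bruschetta." }],
  }),
};

console.log(buildPrompt(["tomato", "basil"]));
// → Suggest a recipe idea using these ingredients: tomato, basil.
console.log(parseRecipe(sampleResult).body);
// → Try a tomato-basil bruschetta.
```

Testing the parsing against a canned payload like this makes it easy to catch shape mismatches (for example, a missing `content[0].text`) before deploying.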




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 2: Add Amazon Bedrock as a Data Source&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;code&gt;amplify/backend.ts&lt;/code&gt; file and add the following code. Save the file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The code adds an HTTP data source for &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; to your API and grants it the necessary permissions to invoke the Claude model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;defineBackend&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/backend&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./data/resource&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PolicyStatement&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws-cdk-lib/aws-iam&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;auth&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./auth/resource&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;defineBackend&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;

   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bedrockDataSource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;backend&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;graphqlApi&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addHttpDataSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrockDS&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://bedrock-runtime.us-east-1.amazonaws.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;authorizationConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="na"&gt;signingRegion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="na"&gt;signingServiceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrock&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
     &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="p"&gt;);&lt;/span&gt;

   &lt;span class="nx"&gt;bedrockDataSource&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;grantPrincipal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addToPrincipalPolicy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PolicyStatement&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
       &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
         &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrock:InvokeModel&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
     &lt;span class="p"&gt;})&lt;/span&gt;
   &lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code integrates Amazon Bedrock into your backend, enabling the serverless app to invoke the Claude 3 Sonnet model.&lt;/p&gt;
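For reference, the permission granted by `addToPrincipalPolicy` above corresponds to a policy statement like the following. This is a hand-written sketch of what the CDK construct synthesizes, based only on the resource ARN and action in the snippet, not captured CloudFormation output:

```javascript
// Illustrative IAM policy statement equivalent to the PolicyStatement
// built in amplify/backend.ts above: allow the data source's role to
// invoke only the Claude 3 Sonnet foundation model in us-east-1.
const statement = {
  Effect: "Allow",
  Action: ["bedrock:InvokeModel"],
  Resource: [
    "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
  ],
};

console.log(JSON.stringify(statement, null, 2));
```

Scoping `Resource` to a single model ARN, rather than `*`, keeps the AppSync data source's role least-privilege.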




&lt;h3&gt;
  
  
  &lt;strong&gt;Task 4: Deploy the Backend API&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Step 1: Set Up Amplify Data&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;In the &lt;code&gt;amplify/backend.ts&lt;/code&gt; file, define the schema and backend resources as follows:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;ClientSchema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;defineData&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/backend&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="na"&gt;BedrockResponse&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;customType&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
       &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
       &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
     &lt;span class="p"&gt;}),&lt;/span&gt;

     &lt;span class="na"&gt;askBedrock&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;arguments&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ingredients&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;returns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;BedrockResponse&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;authorization&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;allow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;authenticated&lt;/span&gt;&lt;span class="p"&gt;()])&lt;/span&gt;
       &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
         &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;custom&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./bedrock.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;dataSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bedrockDS&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
       &lt;span class="p"&gt;),&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;Schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ClientSchema&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;schema&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;defineData&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="nx"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="na"&gt;authorizationModes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="na"&gt;defaultAuthorizationMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;apiKey&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="na"&gt;apiKeyAuthorizationMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="na"&gt;expiresInDays&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
     &lt;span class="p"&gt;},&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this step, you define a &lt;strong&gt;GraphQL&lt;/strong&gt; schema and link the &lt;code&gt;askBedrock&lt;/code&gt; query to the custom handler you created earlier (&lt;code&gt;bedrock.js&lt;/code&gt;), which calls Amazon Bedrock through the HTTP data source. The schema allows authenticated users to invoke the query and retrieve a recipe suggestion.&lt;/p&gt;
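&lt;p&gt;Under the hood, Amplify generates a GraphQL operation for this query. The sketch below is an approximation of that operation for illustration; the exact document Amplify emits may differ:&lt;/p&gt;

```typescript
// Approximation of the GraphQL operation Amplify generates for the
// askBedrock query defined in the schema above (exact output may differ).
const askBedrockQuery = `
  query AskBedrock($ingredients: [String]) {
    askBedrock(ingredients: $ingredients) {
      body
      error
    }
  }
`;

// The variables object mirrors the .arguments() definition in the schema.
const variables = { ingredients: ["chicken", "rice", "broccoli"] };

console.log(askBedrockQuery.includes("askBedrock")); // true
```

&lt;p&gt;The selection set (&lt;code&gt;body&lt;/code&gt;, &lt;code&gt;error&lt;/code&gt;) comes straight from the &lt;code&gt;BedrockResponse&lt;/code&gt; custom type.&lt;/p&gt;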

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs3tc08p2l8eoz8bdqmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frs3tc08p2l8eoz8bdqmv.png" alt="Amplify Schema" width="770" height="572"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 2: Deploy Cloud Resources&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Open a new terminal window, navigate to your app's project folder (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), and run the following command to deploy cloud resources into an isolated development space:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npx ampx sandbox
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command provisions a per-developer cloud sandbox that watches your project for file changes and redeploys them automatically, so you can iterate quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4qv9mmq04j2rubgtdsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4qv9mmq04j2rubgtdsh.png" alt="Sandbox Deployment" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After the sandbox has been fully deployed, you will see a confirmation message in the terminal, and an &lt;code&gt;amplify_outputs.json&lt;/code&gt; file will be generated and added to your project.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnngqeowg8yujnrghraqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnngqeowg8yujnrghraqx.png" alt="Deployment Confirmation" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The deployment process is now complete, and you can begin interacting with your serverless backend.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxs5y4th00wcuusq0jt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxs5y4th00wcuusq0jt1.png" alt="Deployment Output" width="586" height="586"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;The serverless backend is now set up and ready to handle recipe generation requests using &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;. You've defined a custom handler (&lt;code&gt;bedrock.js&lt;/code&gt;) for interactions with the Claude 3 Sonnet model, integrated it with AWS Amplify, and deployed it to the cloud.&lt;/p&gt;





&lt;h3&gt;
  
  
  &lt;strong&gt;Task 5: Build the Frontend&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that the serverless backend is set up, it's time to create the frontend. This step walks through building the user interface with React, &lt;strong&gt;AWS Amplify&lt;/strong&gt; UI components, and custom CSS. We’ll also wire up authentication so that only signed-in users can access the app.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 1: Install the Amplify Libraries&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;To get started with integrating AWS Amplify into your frontend, you'll need to install the core Amplify libraries. The &lt;code&gt;aws-amplify&lt;/code&gt; library provides all the necessary APIs to interact with your backend, while &lt;code&gt;@aws-amplify/ui-react&lt;/code&gt; contains pre-built UI components that help you scaffold the authentication flow and other UI elements.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a new terminal window, navigate to your project’s root folder (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), and run the following command to install the libraries:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm &lt;span class="nb"&gt;install &lt;/span&gt;aws-amplify @aws-amplify/ui-react
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd176go3ct67bdigeb6o3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd176go3ct67bdigeb6o3.png" alt="Install Amplify" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 2: Style the App UI&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Next, we’ll style the frontend to create a clean, modern interface. We'll focus on centering the layout and styling the form where users will input their ingredients.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;code&gt;ai-recipe-generator/src/index.css&lt;/code&gt;, and update it with the following code to set global styles and center the UI:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;   &lt;span class="nd"&gt;:root&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;font-family&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Inter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;system-ui&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Avenir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Helvetica&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Arial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;sans-serif&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;line-height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1.5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0.87&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
     &lt;span class="nl"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1280px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="nb"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.card&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2em&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.box&lt;/span&gt;&lt;span class="nd"&gt;:nth-child&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;3&lt;/span&gt;&lt;span class="nt"&gt;n&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="err"&gt;1&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;grid-column&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="nc"&gt;.box&lt;/span&gt;&lt;span class="nd"&gt;:nth-child&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;3&lt;/span&gt;&lt;span class="nt"&gt;n&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="err"&gt;2&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;grid-column&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="nc"&gt;.box&lt;/span&gt;&lt;span class="nd"&gt;:nth-child&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="err"&gt;3&lt;/span&gt;&lt;span class="nt"&gt;n&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="err"&gt;3&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;grid-column&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This CSS will help ensure that the app's layout is centered, the font is legible, and the UI components have a clean, consistent style.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieqh5uzxrmiva5x1x9dc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieqh5uzxrmiva5x1x9dc.png" alt="CSS" width="692" height="986"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Next, open &lt;code&gt;ai-recipe-generator/src/App.css&lt;/code&gt; and update it with the following code to style the ingredient input form:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;   &lt;span class="nc"&gt;.app-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="nb"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;text-align&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.header-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;padding-bottom&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.5rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;text-align&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.main-header&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.25rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bold&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#1a202c&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.main-header&lt;/span&gt; &lt;span class="nc"&gt;.highlight&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#2563eb&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.description&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;font-weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1.125rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;65ch&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#1a202c&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.form-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;margin-bottom&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.search-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;flex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;flex-direction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;column&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="py"&gt;gap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;align-items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.wide-input&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1px&lt;/span&gt; &lt;span class="nb"&gt;solid&lt;/span&gt; &lt;span class="m"&gt;#ccc&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.search-button&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;300px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#007bff&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;white&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;none&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;pointer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.search-button&lt;/span&gt;&lt;span class="nd"&gt;:hover&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#0056b3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.result-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;margin-top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;transition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;height&lt;/span&gt; &lt;span class="m"&gt;0.3s&lt;/span&gt; &lt;span class="n"&gt;ease-out&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;overflow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;hidden&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.loader-container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;flex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;flex-direction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;column&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;align-items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="py"&gt;gap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nc"&gt;.result&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#f8f9fa&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1px&lt;/span&gt; &lt;span class="nb"&gt;solid&lt;/span&gt; &lt;span class="m"&gt;#e9ecef&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;white-space&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;pre-wrap&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;word-wrap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;break-word&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;black&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;font-weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bold&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="nl"&gt;text-align&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;left&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will ensure that the ingredients form and result display are properly styled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjcna5yvnsucsp74in5b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjcna5yvnsucsp74in5b.png" alt="Form Styles" width="800" height="651"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 3: Implement the UI&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Now, let’s build the main React component for the app. We’ll integrate the &lt;strong&gt;AWS Amplify Authentication&lt;/strong&gt; components for user sign-up, sign-in, and password recovery.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;code&gt;ai-recipe-generator/src/main.tsx&lt;/code&gt; and update it with the following code. This will use the Amplify &lt;code&gt;Authenticator&lt;/code&gt; component to wrap your app and provide a complete authentication flow:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;ReactDOM&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-dom/client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./App.tsx&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./index.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Authenticator&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/ui-react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="nx"&gt;ReactDOM&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createRoot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;root&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StrictMode&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
       &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Authenticator&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;App&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
       &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Authenticator&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
     &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;StrictMode&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
   &lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Authenticator&lt;/code&gt; component will handle user authentication, including sign-up, sign-in, and MFA (Multi-Factor Authentication).&lt;/p&gt;
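&lt;p&gt;MFA is not turned on by default. In an Amplify Gen 2 backend it is configured in the auth resource definition rather than in the React code. A minimal sketch, assuming the standard &lt;code&gt;amplify/auth/resource.ts&lt;/code&gt; layout from the Amplify starter template (the exact option names depend on your Amplify version):&lt;/p&gt;

```typescript
// amplify/auth/resource.ts -- sketch: email sign-in with optional
// authenticator-app (TOTP) MFA. Option names follow the Amplify Gen 2
// defineAuth API; verify against your installed @aws-amplify/backend version.
import { defineAuth } from "@aws-amplify/backend";

export const auth = defineAuth({
  loginWith: {
    email: true,
  },
  multifactor: {
    mode: "OPTIONAL", // users may opt in to MFA after signing up
    totp: true,       // codes from an authenticator app
  },
});
```

&lt;p&gt;With this in place, the &lt;code&gt;Authenticator&lt;/code&gt; component surfaces the MFA setup and challenge screens automatically.&lt;/p&gt;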

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlc5kkyu52xhakwlcx0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlc5kkyu52xhakwlcx0s.png" alt="Authenticator" width="800" height="744"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Next, open &lt;code&gt;ai-recipe-generator/src/App.tsx&lt;/code&gt; and update it with the following code to implement the form for ingredient submission and the logic for querying the &lt;strong&gt;askBedrock&lt;/strong&gt; function.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;FormEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Loader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Placeholder&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/ui-react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./App.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Amplify&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws-amplify&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Schema&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../amplify/data/resource&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;generateClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws-amplify/data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;outputs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../amplify_outputs.json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-amplify/ui-react/styles.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

   &lt;span class="nx"&gt;Amplify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;amplifyClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;generateClient&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Schema&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="na"&gt;authMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;userPool&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;

   &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setResult&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;loading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;onSubmit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FormEvent&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;HTMLFormElement&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;preventDefault&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
       &lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

       &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;formData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FormData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;currentTarget&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

         &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;errors&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;amplifyClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;queries&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;askBedrock&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
           &lt;span class="na"&gt;ingredients&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ingredients&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
         &lt;span class="p"&gt;});&lt;/span&gt;

         &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="nf"&gt;setResult&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;No data returned&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
         &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
         &lt;span class="p"&gt;}&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="nf"&gt;alert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`An error occurred: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt;
     &lt;span class="p"&gt;};&lt;/span&gt;

     &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
       &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"app-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"header-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"main-header"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
             Meet Your Personal
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;br&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"highlight"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Recipe AI&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
             Simply type a few ingredients using the format ingredient1,
             ingredient2, etc., and Recipe AI will generate an all-new recipe on
             demand...
           &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt; &lt;span class="na"&gt;onSubmit&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;onSubmit&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"form-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"search-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt;
               &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"text"&lt;/span&gt;
               &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"wide-input"&lt;/span&gt;
               &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"ingredients"&lt;/span&gt;
               &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"ingredients"&lt;/span&gt;
               &lt;span class="na"&gt;placeholder&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Ingredient1, Ingredient2, Ingredient3,...etc"&lt;/span&gt;
             &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"submit"&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"search-button"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
               Generate
             &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"result-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;loading&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"loader-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Loading...&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Loader&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"large"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Placeholder&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"large"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Placeholder&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"large"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
               &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Placeholder&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"large"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
             &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
             &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"result"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
         &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
       &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
     &lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The app allows users to input a list of ingredients, submits the request to the backend via &lt;strong&gt;Amplify&lt;/strong&gt; and &lt;strong&gt;Amazon Bedrock&lt;/strong&gt;, and then displays the generated recipe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foat3sk46j8c2f6svey5q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foat3sk46j8c2f6svey5q.png" alt="Recipe Generation UI" width="800" height="744"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 4: Run and Test the App&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Open a new terminal window and navigate to your project’s root directory (&lt;code&gt;ai-recipe-generator&lt;/code&gt;), then run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Visit the localhost URL shown in the terminal to open the app in your browser and test it.&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Step 5: Deploy the App&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Once you’ve confirmed that the app is working as expected locally, it’s time to deploy it to AWS Amplify.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the terminal, commit your changes:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git add &lt;span class="nb"&gt;.&lt;/span&gt;
   git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s1"&gt;'connect to bedrock'&lt;/span&gt;
   git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Go to the &lt;strong&gt;AWS Amplify&lt;/strong&gt; console; the push to &lt;code&gt;main&lt;/code&gt; triggers an automatic build and deployment of your app.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After deployment, you can access your live app at the provided Amplify URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3hki55r9i6tdwdid556.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3hki55r9i6tdwdid556.png" alt="Deployed App" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Challenges Faced and Solutions&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Integrating Bedrock API with Lambda&lt;/strong&gt;: Occasional high response latency from Amazon Bedrock delayed results reaching users.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Error handling and retry logic were added to absorb API delays and keep the user experience smooth.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Frontend and Amplify Configuration&lt;/strong&gt;: Understanding the integration of Amplify DataStore with AppSync for real-time data updates was tricky.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: Amplify’s documentation and UI components helped speed up the integration process and simplified backend connectivity.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
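&lt;p&gt;The retry logic mentioned above can be sketched as a small wrapper around the Bedrock query. This is a generic helper with exponential backoff, not the project's exact code; the attempt count and delay values are illustrative:&lt;/p&gt;

```typescript
// Generic retry helper with exponential backoff for slow or flaky
// async calls. Retries up to `attempts` times, doubling the delay
// after each failure, and rethrows the last error if all attempts fail.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
      if (i < attempts - 1) {
        // Wait 500 ms, 1000 ms, 2000 ms, ... before the next attempt.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage (hypothetical): wrap the askBedrock query from App.tsx.
// const { data, errors } = await withRetry(() =>
//   amplifyClient.queries.askBedrock({ ingredients: ["tomato, basil"] })
// );
```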




&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This project provided valuable experience in building a serverless web application with user authentication and generative AI integration using AWS Amplify and Amazon Bedrock. The serverless setup supports rapid development and scaling, making it suitable for everything from prototypes to production-level solutions, while Amplify keeps frontend hosting and backend connectivity manageable across a range of use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Shubham Murti — Aspiring Cloud Security Engineer | Weekly Cloud Learning !!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="http://www.linkedin.com/in/shubham-murti-b6486a1aa" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/murti_shubham" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/shubhammurti" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>learning</category>
      <category>cloud</category>
      <category>awschallenge</category>
    </item>
  </channel>
</rss>
