<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Minoltan Issack</title>
    <description>The latest articles on Forem by Minoltan Issack (@minoltan).</description>
    <link>https://forem.com/minoltan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3235769%2F5f962c85-543b-49f6-96ad-ee703f5a70a8.jpeg</url>
      <title>Forem: Minoltan Issack</title>
      <link>https://forem.com/minoltan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/minoltan"/>
    <language>en</language>
    <item>
      <title>AWS Cloud Practitioner Questions | Security &amp; Encryption</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Tue, 14 Apr 2026 11:26:56 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-security-encryption-311n</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-security-encryption-311n</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;To enable In-flight Encryption (In-Transit Encryption), we need to have ........................&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgam2lpdjd3uv5t4awpby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgam2lpdjd3uv5t4awpby.png" alt=" " width="777" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; The correct answer, "an HTTPS endpoint with an SSL certificate," is right because HTTPS encrypts data in transit. HTTPS cannot be used without an SSL/TLS certificate, which verifies the server's identity. The other options are incorrect because they lack encryption or proper security measures. SSL certificates are essential for establishing trust and secure communication, ensuring data integrity and confidentiality during transmission.&lt;/p&gt;
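As a concrete, hedged illustration of enforcing in-flight encryption: S3 can reject any request that is not sent over HTTPS via the aws:SecureTransport condition key. The bucket name below is a placeholder; this is a minimal sketch, not the only way to require TLS.

```python
import json

def https_only_bucket_policy(bucket_name):
    """Build an S3 bucket policy that denies any request not sent over HTTPS.

    aws:SecureTransport evaluates to "false" for plain-HTTP requests,
    so this Deny statement forces clients to use in-flight (TLS) encryption.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::" + bucket_name,
                    "arn:aws:s3:::" + bucket_name + "/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }

# In practice the JSON string would be passed to s3 put_bucket_policy.
policy_json = json.dumps(https_only_bucket_policy("example-bucket"))
```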




&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;Server-Side Encryption means that the data is sent encrypted to the server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswjgjfp6pbam1012bz4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswjgjfp6pbam1012bz4u.png" alt=" " width="781" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Server-Side Encryption means the data is encrypted by the server after it's received, not while it's being sent. The statement is false because encryption during transmission is handled by protocols like TLS, known as in-flight encryption. Server-Side Encryption specifically refers to encrypting stored data, ensuring it is protected at rest. Other options that suggest encryption during transfer would refer to client-side or in-transit encryption, not server-side. This distinction helps ensure data security both in transit and at rest.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;In Server-Side Encryption, where do the encryption and decryption happen?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg16jzbkk16aoyakf12ms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg16jzbkk16aoyakf12ms.png" alt=" " width="780" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; The correct answer, "Both Encryption and Decryption happen on the server," is right because server-side encryption manages encryption keys and processes on the server side, meaning the server handles both tasks. The other options are incorrect because they involve the client performing encryption or decryption, which isn't the case with server-side encryption. In server-side encryption, the user doesn't have access to the keys, so they cannot encrypt or decrypt data themselves. This setup ensures secure handling of data by the server.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;In Client-Side Encryption, the server must know our encryption scheme before we can upload the data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax1wtsyejsayqggvj754.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax1wtsyejsayqggvj754.png" alt=" " width="785" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; In client-side encryption, the server acts as a "blind" storage provider and does not need to know the encryption scheme or keys to store the data. The data is fully encrypted before it leaves your device, ensuring the server only manages opaque blobs of information without any insight into the underlying cryptographic methods.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;You need to create KMS Keys in AWS KMS before you are able to use the encryption features for EBS, S3, RDS …&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrmdrcjli2mpj85mgupu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrmdrcjli2mpj85mgupu.png" alt=" " width="785" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; AWS provides managed keys that can be used for encryption without creating your own KMS keys. You only need to create custom keys if you have specific security requirements. The other options are incorrect because creating your own keys is optional, not mandatory, to enable encryption for services like EBS, S3, or RDS. AWS Managed Keys simplify the process and are ready to use. Therefore, creating KMS keys in advance is not a required step.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 6:
&lt;/h2&gt;

&lt;p&gt;AWS KMS supports both symmetric and asymmetric KMS keys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n8xmuofiezhe27cqmzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n8xmuofiezhe27cqmzt.png" alt=" " width="785" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; AWS KMS supports both symmetric and asymmetric keys. Symmetric keys are used for encryption and decryption with a single key. Asymmetric keys involve a key pair (RSA or ECC) used for encryption/decryption or signing/verification. The other option, "False," is incorrect because KMS indeed supports both types of keys. This allows flexible cryptographic operations for different security needs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 7:
&lt;/h2&gt;

&lt;p&gt;When you enable Automatic Rotation on your KMS Key, the backing key is rotated every ……………&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09s4x0v2yf65cld5xvc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09s4x0v2yf65cld5xvc5.png" alt=" " width="781" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; When Automatic Rotation is enabled on a KMS key, the backing key is rotated every 12 months by default. The "90 days" option is incorrect because AWS does not rotate keys that frequently by default. The other options, "2 years" and "3 years," are incorrect because they exceed the standard rotation period set by AWS, which is one year. This rotation frequency balances security and operational consistency.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 8:
&lt;/h2&gt;

&lt;p&gt;You have an AMI that has an encrypted EBS snapshot using KMS CMK. You want to share this AMI with another AWS account. You have shared the AMI with the desired AWS account, but the other AWS account still can't use it. How would you solve this problem?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru0q9zrjqquc5ifq4mxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru0q9zrjqquc5ifq4mxl.png" alt=" " width="788" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Sharing the AMI alone does not grant access to the KMS key that encrypts its snapshot; the other account must also be given permission to use the CMK before it can access the encrypted snapshot. The first option, "logout and login," is incorrect because refreshing credentials doesn't resolve key-sharing issues. The third option, "you can't share an encrypted AMI," is incorrect because encrypted AMIs can be shared when the CMK permissions are properly configured. Sharing the CMK ensures the other account can decrypt and use the AMI.&lt;/p&gt;
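The extra step can be sketched as a key policy statement on the CMK that lets the target account use, but not manage, the key. The account ID below is a placeholder, and this is a minimal sketch rather than a complete key policy:

```python
def cross_account_cmk_statement(account_id):
    """Key policy statement granting another AWS account use of the CMK.

    This is needed in addition to sharing the AMI itself, so the other
    account can decrypt the encrypted EBS snapshot. kms:CreateGrant is
    included so the target account's EC2 service can work with the
    encrypted volumes.
    """
    return {
        "Sid": "AllowUseOfTheKeyByTargetAccount",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::" + account_id + ":root"},
        "Action": [
            "kms:Decrypt",
            "kms:DescribeKey",
            "kms:ReEncrypt*",
            "kms:CreateGrant",
        ],
        "Resource": "*",
    }
```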




&lt;h2&gt;
  
  
  Question 9:
&lt;/h2&gt;

&lt;p&gt;You have created a Customer-managed CMK in KMS that you use to encrypt both S3 buckets and EBS snapshots. Your company policy mandates that your encryption keys be rotated every 6 months. What should you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnblsm8xx98gztqp9n0ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnblsm8xx98gztqp9n0ui.png" alt=" " width="786" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; AWS KMS automatic key rotation runs once a year. Since your policy requires rotation every 6 months, you need to rotate the key manually, as automatic rotation is annual. Using AWS Managed Keys isn't suitable because you don't control their rotation schedule. Manually creating and rotating keys gives you control over the exact 6-month cadence. The other options do not meet the specific 6-month rotation requirement.&lt;/p&gt;
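One hedged way to implement the 6-month schedule is to create a fresh key and repoint the alias that applications reference. The alias name is a placeholder, and `kms` would be `boto3.client("kms")` in practice (injected here so the sketch is testable):

```python
def rotate_cmk_manually(kms, alias_name):
    """Manual CMK rotation: create a new customer-managed key and
    repoint the existing alias to it.

    Applications that reference the alias transparently pick up the
    new key; old ciphertexts remain decryptable under the old key.
    Run this from a job scheduled every 6 months.
    """
    new_key = kms.create_key(Description="Rotated application CMK")
    new_key_id = new_key["KeyMetadata"]["KeyId"]
    kms.update_alias(AliasName=alias_name, TargetKeyId=new_key_id)
    return new_key_id
```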




&lt;h2&gt;
  
  
  Question 10:
&lt;/h2&gt;

&lt;p&gt;What should you use to control access to your KMS CMKs?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15pokonmbyycf12nuzf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15pokonmbyycf12nuzf9.png" alt=" " width="786" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; KMS Key Policies directly define and control access permissions for each CMK. "KMS IAM Policy" is incorrect because IAM policies manage permissions at the user or role level, not per key. "AWS GuardDuty" is incorrect because it is a threat detection service, not an access control tool. "KMS Access Control List (KMS ACL)" is incorrect because KMS does not support ACLs for controlling access. Key policies are the primary method for managing access to KMS CMKs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 11:
&lt;/h2&gt;

&lt;p&gt;You have a Lambda function used to process some data in the database. You would like to give your Lambda function access to the database password. Which of the following options is the most secure?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed6coqexssyz0vc9wq3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed6coqexssyz0vc9wq3j.png" alt=" " width="775" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; Encrypting the password with KMS and decrypting it at runtime keeps the secret protected at rest while still letting the Lambda function use it during execution. Embedding the password in the code is insecure because anyone with access to the code can read it. Storing it as a plaintext environment variable is also insecure, as it is visible in the function's configuration. Decrypting only at runtime means the plaintext exists solely in memory while the function runs. This approach balances security and accessibility effectively.&lt;/p&gt;
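A minimal sketch of this pattern, assuming the ciphertext is stored base64-encoded in a hypothetical DB_PASSWORD_ENCRYPTED environment variable; `kms` would be `boto3.client("kms")` inside a real Lambda (injected here so the sketch is testable):

```python
import base64
import os

def load_db_password(kms, env_var="DB_PASSWORD_ENCRYPTED"):
    """Decrypt a KMS-encrypted, base64-encoded environment variable.

    The plaintext password only ever exists in the function's memory;
    at rest, the environment variable holds only ciphertext.
    """
    ciphertext = base64.b64decode(os.environ[env_var])
    response = kms.decrypt(CiphertextBlob=ciphertext)
    return response["Plaintext"].decode("utf-8")
```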




&lt;h2&gt;
  
  
  Question 12:
&lt;/h2&gt;

&lt;p&gt;You have a secret value that you use for encryption purposes, and you want to store and track the values of this secret over time. Which AWS service should you use?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9cp1fe5cpqjyjn6twkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9cp1fe5cpqjyjn6twkb.png" alt=" " width="783" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; SSM Parameter Store allows secure storage of secrets with built-in version tracking, enabling you to see historical values. "AWS KMS" can rotate encryption keys but doesn't track or store different secret values over time. "Amazon S3" offers versioning and encryption but is not specifically designed for secret management or audit tracking of secret values. SSM Parameter Store provides dedicated secret storage with version history, making it the best fit.&lt;/p&gt;
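A small sketch of reading a secret's version history from Parameter Store; the parameter name is a placeholder and `ssm` would be `boto3.client("ssm")` in practice (injected here so the sketch is testable):

```python
def secret_versions(ssm, name):
    """Return the (version, value) history of a SecureString parameter.

    Parameter Store keeps every version a parameter has had, which is
    exactly the tracking-over-time requirement in the question.
    """
    history = ssm.get_parameter_history(Name=name, WithDecryption=True)
    return [(p["Version"], p["Value"]) for p in history["Parameters"]]
```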




&lt;h2&gt;
  
  
  Question 13:
&lt;/h2&gt;

&lt;p&gt;Your user-facing website is a high-risk target for DDoS attacks and you would like to get 24/7 support in case they happen and AWS bill reimbursement for the incurred costs during the attack. What AWS service should you use?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu0hwjz3zx5urs6clcft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu0hwjz3zx5urs6clcft.png" alt=" " width="774" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; AWS Shield Advanced provides 24/7 access to AWS's DDoS Response Team and offers cost protection (bill reimbursement) for charges incurred during an attack. "AWS WAF" helps protect web applications from common web exploits but does not offer 24/7 support or billing reimbursement. "AWS Shield" (Standard) provides basic DDoS protection but lacks the dedicated support and cost reimbursement features of Shield Advanced. "AWS DDoS OpsTeam" is not an AWS service.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 14:
&lt;/h2&gt;

&lt;p&gt;You would like to externally maintain the configuration values of your main database, to be picked up at runtime by your application. What's the best place to store them to maintain control and version history?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ydkoku6rdafmxqb5x64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ydkoku6rdafmxqb5x64.png" alt=" " width="774" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; SSM Parameter Store securely stores configuration values with version history, making it easy to update and track changes picked up at runtime. "Amazon DynamoDB" is a NoSQL database suited to application data, not configuration management or versioning. "Amazon S3" can store and version files, but it lacks built-in secret and configuration management features, making it less ideal for sensitive configuration values. "Amazon EBS" provides block storage for EC2 instances and is not suitable for externally managing or versioning configuration data.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 15:
&lt;/h2&gt;

&lt;p&gt;AWS GuardDuty scans the following data sources, EXCEPT …………….&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furo37zcyi0au05hbojxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furo37zcyi0au05hbojxe.png" alt=" " width="789" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; Amazon GuardDuty does not scan CloudWatch Logs; it analyzes a specific set of data sources. "CloudTrail Logs" are monitored because they record API activity for security analysis. "VPC Flow Logs" document network traffic, which GuardDuty analyzes for suspicious activity. "DNS Logs" are also scanned, since they help detect requests to malicious domains. CloudWatch Logs is the exception, so it is the correct answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 16:
&lt;/h2&gt;

&lt;p&gt;You have a website hosted on a fleet of EC2 instances fronted by an Application Load Balancer. What should you use to protect your website from common web application attacks (e.g., SQL Injection)?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc49xg5tdzna04ulupj8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc49xg5tdzna04ulupj8a.png" alt=" " width="774" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; AWS WAF lets you create rules that block common web application attacks such as SQL Injection and Cross-Site Scripting, and it can be attached directly to the Application Load Balancer. "AWS Shield" provides protection against DDoS attacks but does not specifically target application-layer threats. "AWS Security Hub" is a centralized security management service and does not directly protect against web attacks. "Amazon GuardDuty" detects malicious activity but is focused on threat detection rather than web application protection.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 17:
&lt;/h2&gt;

&lt;p&gt;You would like to analyze OS vulnerabilities from within EC2 instances. You need these analyses to occur weekly and provide you with concrete recommendations in case vulnerabilities are found. Which AWS service should you use?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqr0hliir6t4pig876nc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqr0hliir6t4pig876nc.png" alt=" " width="774" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; Amazon Inspector automatically analyzes EC2 instances for OS vulnerabilities, provides detailed findings and recommendations, and its assessments can be run on a schedule. "AWS Shield" focuses on protecting against DDoS attacks and does not analyze OS vulnerabilities. "Amazon GuardDuty" detects threats and malicious activity but does not perform vulnerability assessments. "AWS Config" monitors configuration compliance but does not provide detailed vulnerability analysis or recommendations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 18:
&lt;/h2&gt;

&lt;p&gt;What is the most suitable AWS service for storing RDS DB passwords which also provides you automatic rotation?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j5u8tfidn59khe8euul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j5u8tfidn59khe8euul.png" alt=" " width="779" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; AWS Secrets Manager securely stores database passwords and provides automatic rotation, reducing manual management. "AWS KMS" is a key management service and does not store or rotate passwords directly. "AWS SSM Parameter Store" can store passwords but lacks built-in automatic rotation. Secrets Manager is specifically designed for secret management and automated credential rotation, including native integration with RDS.&lt;/p&gt;
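A minimal sketch of reading the current credentials, assuming the secret stores a JSON document with hypothetical username/password fields; `secrets` would be `boto3.client("secretsmanager")` in practice (injected here so the sketch is testable):

```python
import json

def get_db_credentials(secrets, secret_id):
    """Fetch the current RDS credentials from Secrets Manager.

    Because rotation is handled by the service, callers always receive
    the latest version of the secret without any code changes.
    """
    value = secrets.get_secret_value(SecretId=secret_id)
    creds = json.loads(value["SecretString"])
    return creds["username"], creds["password"]
```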




&lt;h2&gt;
  
  
  Question 19:
&lt;/h2&gt;

&lt;p&gt;Which AWS service allows you to centrally manage EC2 Security Groups and AWS Shield Advanced across all AWS accounts in your AWS Organization?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzkzmtfitcqo6lny2jfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzkzmtfitcqo6lny2jfc.png" alt=" " width="784" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; AWS Firewall Manager centrally manages security policies, including EC2 Security Groups and AWS Shield Advanced, across all accounts in an AWS Organization. "Amazon GuardDuty" detects security threats but does not handle centralized management of security groups or Shield. "AWS Config" monitors resource compliance and tracks changes, but it does not enforce security policies across accounts.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 20:
&lt;/h2&gt;

&lt;p&gt;Which AWS service helps you protect your sensitive data stored in S3 buckets?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fson9syuige9r1ygj9gfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fson9syuige9r1ygj9gfp.png" alt=" " width="784" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; Amazon Macie uses machine learning to discover, classify, and protect sensitive data in S3 buckets. "AWS KMS" is a key management service that encrypts data but does not identify or classify sensitive information in S3. "Amazon GuardDuty" detects security threats but doesn't specifically identify sensitive data. "AWS Shield" focuses on DDoS protection and does not manage or analyze data stored in S3.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 21:
&lt;/h2&gt;

&lt;p&gt;An online-payment company is using AWS to host its infrastructure. The frontend is created using VueJS and is hosted on an S3 bucket, and the backend is developed using PHP and is hosted on EC2 instances in an Auto Scaling Group. As their customers are worldwide, they use both CloudFront and Aurora Global Database to implement multi-region deployments that provide the lowest latency, availability, and resiliency. A new feature is required that gives customers the ability to store encrypted data in the database, and this data must not be disclosed even to the company admins. The data should be encrypted on the client side and stored in an encrypted format. What do you recommend to implement this?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uat6otnx3w7aqr39ouq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uat6otnx3w7aqr39ouq.png" alt=" " width="784" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; The correct option performs the encryption on the client side with KMS-managed keys, so the data is already ciphertext before it reaches Aurora and company admins can never read the plaintext. "Using Aurora Client-side Encryption and CloudHSM" is incorrect because CloudHSM, while providing dedicated hardware security modules, is not the managed key service integrated for this client-side encryption workflow. "Using Lambda Client-side Encryption and CloudHSM" is incorrect because Lambda runs on the server side and is not designed to perform client-side encryption of database data.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 22:
&lt;/h2&gt;

&lt;p&gt;You have an S3 bucket that is encrypted with SSE-KMS. You have been tasked to replicate the objects to a target bucket in the same AWS region but with a different KMS Key. You have configured the S3 replication, the target bucket, and the target KMS key and it is still not working. What is missing to make the S3 replication work?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcpsost8fhnk2izbz9of.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcpsost8fhnk2izbz9of.png" alt=" " width="785" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; You need to configure permissions for both the source KMS key (kms:Decrypt) and the target KMS key (kms:Encrypt) so that S3 replication can access and use them properly. The other options are incorrect because replication is supported, no support ticket is needed, and the source and target keys do not have to be the same. Proper permissions are necessary for encryption and decryption during replication.&lt;/p&gt;
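The missing permissions can be sketched as two IAM policy statements attached to the S3 replication role. The key ARNs are placeholders, and this is a minimal sketch of just the KMS portion of the role's policy:

```python
def replication_kms_statements(source_key_arn, target_key_arn):
    """IAM policy statements the S3 replication role needs for SSE-KMS.

    Replication must decrypt objects with the source bucket's key and
    re-encrypt them with the target bucket's key, so both keys need
    the corresponding permissions.
    """
    return [
        {
            "Sid": "DecryptSourceObjects",
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": source_key_arn,
        },
        {
            "Sid": "EncryptTargetObjects",
            "Effect": "Allow",
            "Action": ["kms:Encrypt"],
            "Resource": target_key_arn,
        },
    ]
```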




&lt;h2&gt;
  
  
  Question 23:
&lt;/h2&gt;

&lt;p&gt;You have generated a public certificate using LetsEncrypt and uploaded it to ACM so you can attach it to an Application Load Balancer that forwards traffic to EC2 instances. As this certificate was generated outside of AWS, it does not support the automatic renewal feature. How would you be notified 30 days before this certificate expires so you can manually generate a new one?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde26snpb7heq05wz38wy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde26snpb7heq05wz38wy.png" alt=" " width="785" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; A daily scheduled EventBridge rule that checks the certificate's expiration date allows you to be notified 30 days before it expires. Linking ACM to a third-party provider like Let's Encrypt does not provide automated notifications from AWS. Monthly expiration checks or plain CloudWatch alarms won't reliably give you the warning exactly 30 days in advance. A scheduled, daily EventBridge check ensures proactive renewal alerts.&lt;/p&gt;
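The check such a daily EventBridge-triggered function would run can be sketched as pure date logic. The 30-day threshold mirrors the requirement; wiring it to ACM's describe_certificate output and an SNS notification is left out and assumed:

```python
from datetime import datetime, timezone

def expiry_check(not_after, today=None, threshold_days=30):
    """Return (days_left, alert) for a certificate's NotAfter date.

    A Lambda triggered by a daily EventBridge schedule could run this
    against the certificate's expiry date and publish a notification
    when alert is True.
    """
    today = today or datetime.now(timezone.utc)
    days_left = (not_after - today).days
    # Alert when the certificate expires within the threshold window.
    alert = not days_left > threshold_days
    return days_left, alert
```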




&lt;h2&gt;
  
  
  Question 24:
&lt;/h2&gt;

&lt;p&gt;You have created the main Edge-Optimized API Gateway in us-west-2 AWS region. This main Edge-Optimized API Gateway forwards traffic to the second level API Gateway in ap-southeast-1. You want to secure the main API Gateway by attaching an ACM certificate to it. Which AWS region are you going to create the ACM certificate in?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukhwsfj4379uvzq5zjqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukhwsfj4379uvzq5zjqa.png" alt=" " width="782" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; An Edge-Optimized API Gateway is fronted by a CloudFront distribution, and ACM certificates used with CloudFront must be created in the us-east-1 region, as AWS only supports CloudFront-related certificates there. "us-west-2" is incorrect because ACM certificates in this region cannot be used with CloudFront or an Edge-Optimized API Gateway. "ap-southeast-1" is incorrect for the same reason. "Both us-east-1 and us-west-2" is incorrect because only us-east-1 supports ACM certificates for CloudFront distributions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 25:
&lt;/h2&gt;

&lt;p&gt;You are managing an AWS Organization with multiple AWS accounts. Each account has a separate application with different resources. You want an easy way to manage Security Groups and WAF Rules across those accounts, as there was a security incident last week and you want to tighten up your resources. Which AWS service can help you do so?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nwt60wffonf3txrpr8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nwt60wffonf3txrpr8k.png" alt=" " width="782" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4) :&lt;/strong&gt; AWS Firewall Manager allows centralized management of security policies, such as Security Groups and WAF rules, across multiple AWS accounts in an organization. It simplifies enforcement and updates, especially after security incidents.&lt;br&gt;
Others are incorrect because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS GuardDuty is primarily for threat detection, not policy management.&lt;/li&gt;
&lt;li&gt;Amazon Shield provides DDoS protection but doesn't manage Security Groups or WAF rules.&lt;/li&gt;
&lt;li&gt;Amazon Inspector assesses security vulnerabilities but doesn't handle centralized rule management.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;To stay informed on the latest technical insights and tutorials, connect with me on &lt;a href="https://medium.com/@issackpaul95" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/minoltan/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://dev.to/minoltan"&gt;Dev.to&lt;/a&gt;. For professional inquiries or technical discussions, please contact me via &lt;a href="mailto:issackpaul95@gmail.com"&gt;email&lt;/a&gt;. I welcome the opportunity to engage with fellow professionals and address any questions you may have. All blogs in this series will be refined and updated regularly to reflect the latest AWS changes, exam updates, and real-world best practices.&lt;/p&gt;

</description>
      <category>cloudpractitioner</category>
      <category>aws</category>
      <category>awssecurity</category>
      <category>awsexam</category>
    </item>
    <item>
      <title>AWS Cloud Practitioner Questions | RDS, Aurora, &amp; ElastiCache</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Sun, 12 Apr 2026 08:11:16 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-rds-aurora-elasticache-2a0g</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-rds-aurora-elasticache-2a0g</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;Amazon RDS supports the following databases, EXCEPT:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvm3du0ql3y227htob4o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvm3du0ql3y227htob4o.png" alt=" " width="783" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; Amazon RDS does not support MongoDB. Instead, RDS supports other databases such as MySQL, MariaDB, and Microsoft SQL Server. This helps you understand which databases are compatible with Amazon RDS and clarifies that MongoDB is not included in this managed service.&lt;/p&gt;
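&lt;p&gt;A quick way to remember the distinction is a simple membership check; the engine names below are simplified identifiers for illustration only (the real RDS API uses more specific engine strings).&lt;/p&gt;

```python
# Managed engines offered by Amazon RDS (Aurora variants omitted for brevity;
# identifiers simplified for illustration).
RDS_ENGINES = {"mysql", "mariadb", "postgres", "oracle", "sqlserver"}


def rds_supports(engine: str) -> bool:
    """Return True if Amazon RDS offers the given database engine."""
    return engine.lower() in RDS_ENGINES
```

MongoDB is absent from the set: on AWS it is served by a separate product (Amazon DocumentDB), not RDS.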




&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;You're planning for a new solution that requires a MySQL database that must be available even in case of a disaster in one of the Availability Zones. What should you use?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmh3xmzpjs9s8ixwpy4m2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmh3xmzpjs9s8ixwpy4m2.png" alt=" " width="784" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer: (3)&lt;/strong&gt; Multi-AZ deployments in Amazon RDS automatically create a synchronous standby replica of your database in a different Availability Zone. This setup provides high availability and durability, ensuring that if one AZ experiences a failure or disaster, the database remains available in the other AZ without manual intervention. In contrast, Read Replicas are mainly used for scaling read operations rather than disaster recovery, as they are asynchronous and may not provide immediate failover support in case of an AZ failure. Enabling Multi-AZ is the recommended approach for disaster recovery within a single region to ensure continuous availability.&lt;/p&gt;
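&lt;p&gt;The failover behaviour can be sketched with a toy model (no real AWS calls): writes land on both copies synchronously, and an AZ failure simply promotes the standby.&lt;/p&gt;

```python
class MultiAZDatabase:
    """Toy model of RDS Multi-AZ: writes go to the primary and are copied
    synchronously to a standby in another AZ; failover promotes the standby."""

    def __init__(self):
        self.primary = {}
        self.standby = {}

    def write(self, key, value):
        # Synchronous replication: both copies are updated before returning.
        self.primary[key] = value
        self.standby[key] = value

    def fail_over(self):
        # On an AZ failure, RDS promotes the standby automatically.
        self.primary = self.standby

    def read(self, key):
        return self.primary[key]
```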




&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;We have an RDS database that struggles to keep up with the volume of requests from our website. Our million users mostly read news, and we don't post news very often. Which solution is NOT suited to this problem?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dkfzdmouvpv0ymwuxf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dkfzdmouvpv0ymwuxf5.png" alt=" " width="784" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; "RDS Multi-AZ" provides high availability and automatic failover in case of an Availability Zone failure, but it does not improve read performance, so it does not address this read-heavy workload. "Read Replicas" scale read operations by distributing traffic across copies of the database. "ElastiCache" improves read speed by caching frequently requested data. Therefore, Multi-AZ is the correct answer: it solves availability, not read scaling, while the other options directly address the heavy read load.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;You have set up read replicas on your RDS database, but users are complaining that upon updating their social media posts, they do not see their updated posts right away. What is a possible cause for this?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iesgwd28ppiquo40su4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iesgwd28ppiquo40su4.png" alt=" " width="784" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2) :&lt;/strong&gt; Read Replicas use asynchronous replication, which can cause delays, leading to eventual consistency, so users might not see their updates immediately. Multi-AZ provides high availability and automatic failover but doesn't improve read scalability. ElastiCache speeds up read access by caching data but does not handle database replication or failover. Therefore, for ensuring data consistency, Read Replicas' asynchronous nature makes them less immediate. The other options serve different purposes like high availability or caching.&lt;/p&gt;
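&lt;p&gt;A toy model makes the eventual-consistency effect visible: until the replica catches up, it serves the old (or missing) value, which is exactly what the users are seeing.&lt;/p&gt;

```python
class ReplicatedStore:
    """Toy model of asynchronous replication: the replica applies changes
    only when replicate() runs, so reads from it can be stale."""

    def __init__(self):
        self.primary = {}
        self.replica = {}

    def write(self, key, value):
        # The write is acknowledged before the replica catches up.
        self.primary[key] = value

    def replicate(self):
        # In RDS this happens continuously but with some lag.
        self.replica.update(self.primary)

    def read_from_replica(self, key):
        return self.replica.get(key)
```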




&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;Which RDS (NOT Aurora) feature, when used, does not require you to change the SQL connection string?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjck8ubm55mnwh0kuq1po.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjck8ubm55mnwh0kuq1po.png" alt=" " width="779" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; Multi-AZ maintains the same connection string because it automatically handles failover to the standby replica without requiring connection string changes. In contrast, Read Replicas have their own endpoints and DNS names, so applications need to be updated to connect to them directly. Multi-AZ provides high availability but not read scaling. Read Replicas support read scalability but require configuration changes in the application. Therefore, Multi-AZ does not require changes to the connection string.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 6:
&lt;/h2&gt;

&lt;p&gt;Your application is running on a fleet of EC2 instances managed by an Auto Scaling Group behind an Application Load Balancer. Users have to log back in constantly, and you don't want to enable Sticky Sessions on your ALB as you fear it will overload some EC2 instances. What should you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf9nck6i6qu4fq10974q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf9nck6i6qu4fq10974q.png" alt=" " width="780" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; Storing session data in ElastiCache allows multiple EC2 instances to access user sessions quickly and efficiently, supporting stateless application design. RDS could store session data but offers lower performance compared to ElastiCache, which is optimized for fast access. Using your own load balancer doesn't address session management and can lead to complexity. EBS volumes are not suitable for shared session storage across instances due to limitations and performance concerns. Therefore, ElastiCache is the best choice for managing user sessions without sticky sessions.&lt;/p&gt;
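&lt;p&gt;Conceptually, the session store is just a shared key-value cache with a TTL; the sketch below stands in for ElastiCache (it is not a real client) to show why any instance can serve any user.&lt;/p&gt;

```python
import time


class SessionCache:
    """Toy shared session store with TTL, standing in for ElastiCache:
    every EC2 instance reads the same cache, so no sticky sessions are needed."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._data = {}

    def put(self, session_id, user):
        # Store the user together with an absolute expiry time.
        self._data[session_id] = (user, time.monotonic() + self.ttl)

    def get(self, session_id):
        entry = self._data.get(session_id)
        if entry is None:
            return None
        user, expires_at = entry
        if time.monotonic() > expires_at:
            # Expired sessions are evicted lazily on access.
            del self._data[session_id]
            return None
        return user
```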




&lt;h2&gt;
  
  
  Question 7:
&lt;/h2&gt;

&lt;p&gt;An analytics application is currently performing its queries against your main production RDS database. These queries run at any time of the day and slow down the RDS database which impacts your users' experience. What should you do to improve the users' experience?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00lso99lqto3wqn68axj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00lso99lqto3wqn68axj.png" alt=" " width="783" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; Setting up a Read Replica allows analytics queries to run independently, so they won't slow down the main database. Multi-AZ is mainly for high availability and automatic failover, not for offloading read workloads. Running queries at night limits real-time performance and doesn't address ongoing query impacts during the day. Read Replicas improve performance by distributing read traffic, making the user experience better. The other options do not effectively handle the problem of heavy, ongoing query load.&lt;/p&gt;
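&lt;p&gt;In application code this usually comes down to routing by workload; the endpoints below are hypothetical placeholders for the real primary and Read Replica endpoints from the RDS console.&lt;/p&gt;

```python
# Hypothetical endpoints; real values come from the RDS console or API.
PRIMARY_ENDPOINT = "prod-db.example.internal"
REPLICA_ENDPOINT = "prod-db-replica.example.internal"


def pick_endpoint(query_kind: str) -> str:
    """Send analytics reads to the Read Replica; everything else to the primary."""
    if query_kind == "analytics":
        return REPLICA_ENDPOINT
    return PRIMARY_ENDPOINT
```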




&lt;h2&gt;
  
  
  Question 8:
&lt;/h2&gt;

&lt;p&gt;You would like to ensure you have a replica of your database available in another AWS Region if a disaster happens to your main AWS Region. Which database do you recommend to implement this easily?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c3d36ziaq6kv4rcny5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3c3d36ziaq6kv4rcny5k.png" alt=" " width="783" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; Aurora Global Database is purpose-built for cross-region disaster recovery, replicating data to secondary regions with low lag and fast promotion. RDS cross-region Read Replicas are possible but require manual setup and promotion during a disaster, so they are not the easiest option. RDS Multi-AZ provides high availability within a single region and does not replicate across regions. Aurora Read Replicas within a cluster are also regional and lack the built-in multi-region capability. Aurora Global Database is therefore the recommended option for easy multi-region disaster recovery.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 9:
&lt;/h2&gt;

&lt;p&gt;How can you enhance the security of your ElastiCache Redis Cluster by allowing users to access your ElastiCache Redis Cluster using their IAM Identities (e.g., Users, Roles)?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuzqvyn2q84kihsol5ut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuzqvyn2q84kihsol5ut.png" alt=" " width="784" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Using IAM Authentication allows users to securely access ElastiCache Redis with their IAM identities, enabling fine-grained access control and auditability. Redis Authentication relies on a password, which is less integrated with AWS identity management. Security Groups control network traffic but do not handle user authentication directly. IAM Authentication is specifically designed for integrating AWS user identities with ElastiCache for better security. The other options do not provide direct IAM-based user access control.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 10:
&lt;/h2&gt;

&lt;p&gt;Your company has a production Node.js application that is using RDS MySQL 5.6 as its database. A new application programmed in Java will perform some heavy analytics workload to create a dashboard on a regular hourly basis. What is the most cost-effective solution you can implement to minimize disruption for the main application?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip72ty2pxy9lop0bgw4h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fip72ty2pxy9lop0bgw4h.png" alt=" " width="784" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Creating a Read Replica in a different AZ allows the analytics workload to run without affecting the main database's performance. This minimizes disruption for the primary application while handling heavy analytics separately. Enabling Multi-AZ only provides high availability and automatic failover, not workload separation. Running analytics on the source database could slow down the main application and cause performance issues. Using a cross-AZ Read Replica is the most cost-effective and suitable solution for this scenario.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 11:
&lt;/h2&gt;

&lt;p&gt;You would like to create a disaster recovery strategy for your RDS PostgreSQL database so that in case of a regional outage the database can be quickly made available for both read and write workloads in another AWS Region. The DR database must be highly available. What do you recommend?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oli0nwhmvvsnza7rlxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oli0nwhmvvsnza7rlxw.png" alt=" " width="778" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Creating a read replica in a different region provides a backup that can be quickly promoted during a regional outage, ensuring high availability. Enabling Multi-AZ on the main database improves local availability but does not protect against regional failures. Creating a read replica in the same region with Multi-AZ doesn't provide cross-region disaster recovery. The "Enable Multi-Region" option does not exist in RDS; cross-region replication must be set up manually. The correct approach is to create a read replica in the target region for effective disaster recovery.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 12:
&lt;/h2&gt;

&lt;p&gt;You have migrated the MySQL database from on-premises to RDS. You have many applications and developers interacting with your database. Each developer has an IAM user in the company's AWS account. What is a suitable approach to give developers access to the MySQL RDS DB instance instead of creating a DB user for each one?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc5bem3zj2ru7ltzwdod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc5bem3zj2ru7ltzwdod.png" alt=" " width="778" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; Enabling IAM Database Authentication allows developers to access the RDS MySQL instance using their IAM credentials, simplifying user management. It eliminates the need to create individual database users and passwords for each developer. By default, IAM users do not have direct access to RDS databases without this feature enabled. Using Amazon Cognito is primarily for user authentication in mobile or web applications, not for direct database access. The correct choice streamlines access control while maintaining security via IAM.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 13:
&lt;/h2&gt;

&lt;p&gt;Which of the following statements is true regarding replication in both RDS Read Replicas and Multi-AZ?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5ux8wsia72ksnv51lcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5ux8wsia72ksnv51lcz.png" alt=" " width="779" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Read Replicas use asynchronous replication, which allows data to be copied to the replica with a slight delay, suitable for scaling and offloading read traffic. Multi-AZ deployments use synchronous replication, ensuring data is written to both the primary and standby instances simultaneously for high availability. The other options incorrectly state both use asynchronous or synchronous replication, which is not accurate. Synchronous replication in Multi-AZ provides data consistency during failover. Therefore, the correct answer accurately reflects the different replication methods used.&lt;/p&gt;
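&lt;p&gt;The contrast between the two replication modes can be sketched as follows (a toy model, not RDS internals): the synchronous write updates both copies before returning, while the asynchronous write returns after hitting the primary and merely queues the change for the replica.&lt;/p&gt;

```python
def write_multi_az(primary: dict, standby: dict, key, value):
    """Synchronous (Multi-AZ): the write is applied to BOTH copies before
    the call returns, so the standby is never behind."""
    primary[key] = value
    standby[key] = value


def write_read_replica(primary: dict, pending: list, key, value):
    """Asynchronous (Read Replica): the write returns after hitting the
    primary; the replica applies queued changes some time later."""
    primary[key] = value
    pending.append((key, value))
```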




&lt;h2&gt;
  
  
  Question 14:
&lt;/h2&gt;

&lt;p&gt;How do you encrypt an unencrypted RDS DB instance?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5526uxcnff3s4e4358c1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5526uxcnff3s4e4358c1.png" alt=" " width="779" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; The correct method involves creating a snapshot, copying it with encryption enabled, and restoring the instance from this encrypted snapshot, as encryption cannot be directly enabled on an existing unencrypted RDS instance. The first option, encrypting directly from the console without snapshotting, is not possible because RDS does not support on-the-fly encryption of running instances. The second option, stopping the database before snapshotting, is unnecessary; snapshots can be created while the database is running. Restoring from an encrypted snapshot applies encryption to the new instance, which is the correct approach. This process ensures data encryption without downtime or complex configurations.&lt;/p&gt;
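&lt;p&gt;The snapshot-copy-restore workflow can be modelled step by step; the functions below are illustrative stand-ins for the corresponding RDS actions, and the KMS key alias in the usage is hypothetical.&lt;/p&gt;

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Snapshot:
    source_db: str
    encrypted: bool


def take_snapshot(db_id: str) -> Snapshot:
    # Step 1: snapshots inherit the source instance's (lack of) encryption.
    return Snapshot(source_db=db_id, encrypted=False)


def copy_snapshot(snap: Snapshot, kms_key: str) -> Snapshot:
    # Step 2: copying the snapshot is where encryption can be turned on
    # (kms_key is an illustrative placeholder for the key used to encrypt).
    return replace(snap, encrypted=True)


def restore(snap: Snapshot) -> dict:
    # Step 3: the restored instance keeps the snapshot's encryption state.
    return {"db_id": snap.source_db + "-encrypted", "encrypted": snap.encrypted}
```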




&lt;h2&gt;
  
  
  Question 15:
&lt;/h2&gt;

&lt;p&gt;For your RDS database, you can have up to ............ Read Replicas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbpeddhppzvxx0ujs1tz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbpeddhppzvxx0ujs1tz.png" alt=" " width="787" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; The correct answer is 15, which is the maximum number of Read Replicas allowed for an RDS database, providing scalable read capacity. The choice of 5 is too low and limits scalability unnecessarily. The option of 7 is also below the maximum limit, so it does not represent the highest possible replicas. The limit is set to 15 for most database engines, allowing significant read scaling. Therefore, 15 is the correct maximum number allowed by AWS.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 16:
&lt;/h2&gt;

&lt;p&gt;Which RDS database technology does NOT support IAM Database Authentication?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkrsechnl4o9ts6gcocf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkrsechnl4o9ts6gcocf.png" alt=" " width="787" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Oracle does not support IAM Database Authentication, so it cannot leverage AWS IAM for database access. PostgreSQL and MySQL, on the other hand, do support IAM authentication, enabling secure, centralized access management through IAM roles. The other options, "PostgreSQL" and "MySQL," support IAM, making them incorrect choices for this question. Oracle's architecture and authentication methods differ, which is why it does not integrate with IAM-based authentication. Therefore, Oracle is the correct answer as it does not support IAM Database Authentication.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 17:
&lt;/h2&gt;

&lt;p&gt;You have an un-encrypted RDS DB instance and you want to create Read Replicas. Can you configure the RDS Read Replicas to be encrypted?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjvif5m1x4r2tit96yu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjvif5m1x4r2tit96yu7.png" alt=" " width="778" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; You cannot create encrypted Read Replicas from an un-encrypted RDS DB instance because encryption must be enabled at the source instance before replication. AWS does not allow converting or encrypting a Read Replica after it has been created from an unencrypted source. To have an encrypted Read Replica, you must first encrypt the source database through snapshot and restore procedures. This restriction ensures data at rest remains encrypted and secure. Therefore, the correct answer is "No."&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 18:
&lt;/h2&gt;

&lt;p&gt;An application running in production is using an Aurora Cluster as its database. Your development team would like to run a version of the application in a scaled-down application with the ability to perform some heavy workload on a need-basis. Most of the time, the application will be unused. Your CIO has tasked you with helping the team to achieve this while minimizing costs. What do you suggest?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3qcui1lzay8w2c46x48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3qcui1lzay8w2c46x48.png" alt=" " width="781" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; Aurora Serverless automatically scales capacity up or down based on workload, making it cost-effective for infrequent and variable usage, which matches the team's needs. Using a global database is more suited for multi-region replication and not cost-efficient for small, infrequent workloads. An RDS database or running Aurora on EC2 would require maintaining resources constantly, increasing costs when the app is unused. Shutting down EC2 instances only addresses compute, not the database cost, and is less flexible than Aurora Serverless. Therefore, Aurora Serverless best minimizes costs while handling variable workloads.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 19:
&lt;/h2&gt;

&lt;p&gt;How many Aurora Read Replicas can you have in a single Aurora DB Cluster?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3ppj8ghvbf52jjqm10v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3ppj8ghvbf52jjqm10v.png" alt=" " width="781" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; A single Aurora DB Cluster supports up to 15 Aurora Read Replicas in addition to the primary instance. Because the replicas share the cluster's underlying storage volume, replication lag is typically very low. The lower figures offered understate Aurora's read-scaling capacity, and Aurora does not allow more than 15 replicas per cluster. Therefore, 15 is the correct answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 20:
&lt;/h2&gt;

&lt;p&gt;Amazon Aurora supports both …………………….. databases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm6m4hc1j5kip36337t7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm6m4hc1j5kip36337t7.png" alt=" " width="781" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Aurora supports only MySQL and PostgreSQL engines, making it compatible with both. MariaDB is not supported by Aurora, so you can't use it directly. Oracle and MS SQL Server are proprietary databases with different architectures, so they are not compatible with Aurora. Aurora is designed to work specifically with MySQL and PostgreSQL for seamless integration. Therefore, "MySQL and PostgreSQL" is correct because only these two are supported by Aurora.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 21:
&lt;/h2&gt;

&lt;p&gt;You work as a Solutions Architect for a gaming company. One of the games mandates that players are ranked in real-time based on their score. Your boss asked you to design and then implement an effective and highly available solution to create a gaming leaderboard. What should you use?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdlfgng2dinktrw5xfeb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdlfgng2dinktrw5xfeb.png" alt=" " width="781" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; ElastiCache for Redis with Sorted Sets is ideal for real-time ranking because it allows fast, in-memory updates and retrievals of ordered data, making leaderboards highly responsive and available. RDS for MySQL can store data, but it's slower for real-time updates and querying, which is critical for gaming leaderboards. Amazon Aurora provides high availability but isn't optimized for the ultra-low latency and real-time ranking needed here. ElastiCache for Memcached offers fast caching but lacks built-in support for ordered data types like Sorted Sets. Therefore, Redis Sorted Sets are the best fit for creating a highly available, real-time gaming leaderboard.&lt;/p&gt;
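&lt;p&gt;A Redis Sorted Set keeps members ordered by score, which is exactly what a leaderboard needs. This toy in-memory version mimics the ZADD and ZREVRANGE commands without a real Redis server, just to show the shape of the solution.&lt;/p&gt;

```python
class Leaderboard:
    """Toy stand-in for a Redis Sorted Set (ZADD / ZREVRANGE) leaderboard."""

    def __init__(self):
        self.scores = {}

    def zadd(self, player: str, score: float):
        # Like Redis ZADD: insert the player or update their score.
        self.scores[player] = score

    def zrevrange(self, top_n: int):
        """Return the top-N players, highest score first (like ZREVRANGE)."""
        ranked = sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)
        return [player for player, _ in ranked[:top_n]]
```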




&lt;h2&gt;
  
  
  Question 22:
&lt;/h2&gt;

&lt;p&gt;You need full customization of an Oracle Database on AWS. You would like to benefit from using the AWS services. What do you recommend?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sbider7w09k5fmdie77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sbider7w09k5fmdie77.png" alt=" " width="782" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; RDS Custom for Oracle provides full customization options on AWS, allowing more control over the database environment, including access to the underlying OS and configurations. RDS for Oracle offers managed service with limited customization, suitable for standardized use cases but not full control. Deploying Oracle on EC2 gives complete customization but requires managing the infrastructure and maintenance yourself, which is less optimized than RDS Custom. RDS Custom strikes a balance by providing control while reducing administrative overhead. Therefore, RDS Custom for Oracle is the best choice for full customization with managed AWS services.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 23:
&lt;/h2&gt;

&lt;p&gt;You need to store long-term backups for your Aurora database for disaster recovery and audit purposes. What do you recommend?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvwbxk26pqd6x147iftd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvwbxk26pqd6x147iftd.png" alt=" " width="782" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Performing on-demand backups lets you manually create backups that can be stored for as long as needed for disaster recovery and audits. Automated Backups have a maximum retention period of 35 days, which is insufficient for long-term storage. Aurora Database Cloning creates copies of the database but does not serve as a long-term backup solution. On-demand backups give you control over retention beyond the automated retention period, so they are the best fit for long-term storage needs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 24:
&lt;/h2&gt;

&lt;p&gt;Your development team would like to perform a suite of read and write tests against your production Aurora database because they need access to production data as soon as possible. What do you advise?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30dr452if5fvdi8v3hc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30dr452if5fvdi8v3hc7.png" alt=" " width="779" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; Using Aurora Cloning creates a fast, separate copy of the database for testing without impacting production. Creating a Read Replica allows read-only access but isn't suitable for write testing or immediate data access. Testing directly against the production database risks affecting live users and data integrity. Making a DB Snapshot and restoring it is slower and unnecessary when cloning provides a quicker, safer option. Therefore, Aurora Cloning is the best choice for testing without affecting production performance or data.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 25:
&lt;/h2&gt;

&lt;p&gt;You have 100 EC2 instances connected to your RDS database, and you notice that during database maintenance all your applications take a long time to reconnect to RDS due to poor application logic. How do you improve this?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4i2nf6pmphcwss4u5xvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4i2nf6pmphcwss4u5xvj.png" alt=" " width="779" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; Using RDS Proxy helps manage database connections efficiently, reducing connection time during failovers or maintenance. Fixing all the applications is impractical and time-consuming. Disabling Multi-AZ removes high availability features, risking longer downtime during failover. Enabling Multi-AZ improves availability but doesn't address connection interruptions during maintenance. Therefore, RDS Proxy is best for maintaining persistent connections and minimizing disruption.&lt;/p&gt;




&lt;p&gt;To stay informed on the latest technical insights and tutorials, connect with me on &lt;a href="https://medium.com/@issackpaul95" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/minoltan/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://dev.to/minoltan"&gt;Dev.to&lt;/a&gt;. For professional inquiries or technical discussions, please contact me via &lt;a href="mailto:issackpaul95@gmail.com"&gt;email&lt;/a&gt;. I welcome the opportunity to engage with fellow professionals and address any questions you may have. All posts in this series will be refined and updated regularly to reflect the latest AWS changes, exam updates, and real-world best practices.&lt;/p&gt;

</description>
      <category>rds</category>
      <category>aurora</category>
      <category>elasticcache</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS Cloud Practitioner Questions | IAM Advanced</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Sun, 22 Mar 2026 06:01:39 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-iam-advanced-3lgg</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-iam-advanced-3lgg</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;You have strong regulatory requirements to only allow fully internally audited AWS services in production. You still want to allow your teams to experiment in a development environment while services are being audited. How can you best set this up?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq51f27nvexouxiypxept.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq51f27nvexouxiypxept.png" alt=" " width="784" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; By creating an AWS Organization with separate Organizational Units (OUs) for Prod and Dev, and applying a Service Control Policy (SCP) on the Prod OU, you effectively enforce compliance in your production environment while allowing flexibility for experimentation in development. This setup aligns with your regulatory requirements by ensuring only vetted services are accessible in production.&lt;/p&gt;
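&lt;p&gt;As an illustration, an SCP attached to the Prod OU might take an allow-list shape like the following. The Sid and the service list are placeholders; your audited services will differ:&lt;/p&gt;

```json
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyAuditedServices",
        "Effect": "Deny",
        "NotAction": [
            "ec2:*",
            "s3:*",
            "rds:*"
        ],
        "Resource": "*"
    }]
}
```

&lt;p&gt;The Deny-plus-NotAction pattern blocks every action outside the approved list, so newly launched (unaudited) AWS services are denied in Prod by default while the Dev OU stays unrestricted.&lt;/p&gt;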




&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;You are managing the AWS account for your company, and you want to give one of the developers access to read files from an S3 bucket. You have updated the bucket policy as follows, but he still can't access the files in the bucket. What is the problem?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;

    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;

        &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AllowsRead"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;

        &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;

        &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;

            &lt;/span&gt;&lt;span class="nl"&gt;"AWS"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::123456789012:user/Dave"&lt;/span&gt;&lt;span class="w"&gt;

        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;

        &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;

        &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::static-files-bucket-xxx"&lt;/span&gt;&lt;span class="w"&gt;

     &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vxdzywzbic76xb06l7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vxdzywzbic76xb06l7o.png" alt=" " width="786" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; The permission specified in the bucket policy only grants access to the bucket itself, not to the objects within it. By changing the resource to "arn:aws:s3:::static-files-bucket-xxx/*," you allow access to the individual files, which is necessary for object-level permissions.&lt;/p&gt;
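&lt;p&gt;With that object-level resource fix applied, the working policy looks like this:&lt;/p&gt;

```json
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowsRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Dave"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::static-files-bucket-xxx/*"
    }]
}
```

&lt;p&gt;The trailing /* matters because s3:GetObject acts on objects, so its Resource must match object ARNs, not the bucket ARN alone.&lt;/p&gt;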




&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;You have 5 AWS Accounts that you manage using AWS Organizations. You want to restrict access to certain AWS services in each account. How should you do that?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu52qlrbbhhkyap3lpj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu52qlrbbhhkyap3lpj5.png" alt=" " width="783" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; By selecting "Using AWS Organizations SCP," you correctly identified the most effective way to restrict access to specific AWS services across multiple accounts, as Service Control Policies provide a centralized method for managing permissions within your organization. This aligns with your goal of implementing governance and compliance measures across your AWS accounts effectively.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;Which of the following IAM condition keys can you use to allow API calls only to a specified AWS region?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9x4lmkaiy5zg38pyptwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9x4lmkaiy5zg38pyptwp.png" alt=" " width="783" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; The aws:RequestedRegion condition key allows or denies API calls based on the region specified in the request, aligning perfectly with the requirement of controlling access to a specified AWS region. This understanding helps you effectively manage permissions and enforce regional restrictions in your AWS environment.&lt;/p&gt;
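&lt;p&gt;A sketch of how the aws:RequestedRegion condition key is used in practice (the Sid and region are illustrative; note that global services such as IAM may need to be exempted from a deny like this):&lt;/p&gt;

```json
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideChosenRegion",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["eu-west-1"]
            }
        }
    }]
}
```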




&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;When configuring EventBridge with a Lambda function as a target, you should use ………………….., but when you want to configure a Kinesis Data Stream as a target, you should use …………………..&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd90xw25vcshsh8l5kpxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd90xw25vcshsh8l5kpxb.png" alt=" " width="783" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Using a resource-based policy for EventBridge allows you to define permissions directly on the Lambda function, while an identity-based policy is appropriate for Kinesis Data Streams, as it manages permissions based on the IAM role or user accessing the service. This distinction is key for correctly configuring permissions in AWS.&lt;/p&gt;




&lt;p&gt;To stay informed on the latest technical insights and tutorials, connect with me on &lt;a href="https://medium.com/@issackpaul95" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/minoltan/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://dev.to/minoltan"&gt;Dev.to&lt;/a&gt;. For professional inquiries or technical discussions, please contact me via &lt;a href="mailto:issackpaul95@gmail.com"&gt;email&lt;/a&gt;. I welcome the opportunity to engage with fellow professionals and address any questions you may have. All posts in this series will be refined and updated regularly to reflect the latest AWS changes, exam updates, and real-world best practices.&lt;/p&gt;

</description>
      <category>iam</category>
      <category>aws</category>
      <category>serverless</category>
      <category>cloudpractitioner</category>
    </item>
    <item>
      <title>AWS Cloud Practitioner Questions | Networking &amp; VPC</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Sat, 21 Feb 2026 10:35:24 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-networking-vpc-285g</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-networking-vpc-285g</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;What does this CIDR 10.0.4.0/28 correspond to?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi19y1jsxec5pa73yy13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi19y1jsxec5pa73yy13.png" alt=" " width="783" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; The "/28" prefix leaves 4 host bits, so the subnet contains 2^4 = 16 IP addresses, ranging from the starting address 10.0.4.0 to 10.0.4.15; only the last four bits vary within this subnet.&lt;/p&gt;
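&lt;p&gt;You can verify the arithmetic with Python's standard ipaddress module:&lt;/p&gt;

```python
import ipaddress

# /28 leaves 32 - 28 = 4 host bits, i.e. 2**4 = 16 addresses.
net = ipaddress.ip_network("10.0.4.0/28")
print(net.num_addresses)   # 16
print(net[0], net[-1])     # 10.0.4.0 10.0.4.15
```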




&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;You have a corporate network of size 10.0.0.0/8 and a satellite office of size 192.168.0.0/16. Which CIDR is acceptable for your AWS VPC if you plan on connecting your networks later on?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw34heuk4g5m1khnskrme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw34heuk4g5m1khnskrme.png" alt=" " width="783" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; 172.16.0.0/16 fits within the private IP address range and does not overlap with your existing networks, which is essential for proper routing and connectivity in your AWS VPC. This choice also adheres to the maximum VPC CIDR size in AWS (/16), ensuring effective network management.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to get the answer: A Step-by-Step Guide
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Identify the "Taken" Space&lt;/strong&gt;&lt;br&gt;
First, look at the private IP ranges already in use. According to RFC 1918, there are three main blocks reserved for private networks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;10.0.0.0/8: (Used by your Corporate Network)&lt;/li&gt;
&lt;li&gt;172.16.0.0/12: (Available)&lt;/li&gt;
&lt;li&gt;192.168.0.0/16: (Used by your Satellite Office)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Apply the Rule of Non-Overlap&lt;/strong&gt;&lt;br&gt;
If you choose a VPC range that sits inside the 10.x.x.x or 192.168.x.x space, your routers won't know where to send a packet.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Example: If your VPC is 10.0.1.0/24 and your Corporate network is 10.0.0.0/8, the Corporate network contains the VPC range. When a computer in the office tries to talk to the VPC, it might think that IP address is just down the hall in the office rather than across the VPN/Direct Connect to AWS.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;3. Select from the Remaining Private Space&lt;/strong&gt;&lt;br&gt;
Since the 10.x and 192.168.x blocks are occupied, the 172.16.0.0/12 block is the remaining candidate. A common choice within it is 172.16.0.0/16, which provides 65,536 IP addresses - plenty for most VPC needs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: A /12 is significantly larger than a /16. In networking, the smaller the prefix number, the larger the network, and a /12 contains sixteen /16 networks. Because AWS caps a VPC at a /16, the console won't accept 172.16.0.0/12; you must carve out a /16 (or smaller) slice of it.&lt;/p&gt;
&lt;/blockquote&gt;
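&lt;p&gt;The non-overlap rule is easy to check programmatically with Python's ipaddress module:&lt;/p&gt;

```python
import ipaddress

# The existing networks from the question, plus the candidate VPC CIDR.
corporate = ipaddress.ip_network("10.0.0.0/8")
satellite = ipaddress.ip_network("192.168.0.0/16")
candidate = ipaddress.ip_network("172.16.0.0/16")

print(candidate.overlaps(corporate))   # False
print(candidate.overlaps(satellite))   # False

# A bad choice: this range sits inside the corporate /8.
bad = ipaddress.ip_network("10.0.1.0/24")
print(bad.overlaps(corporate))         # True
```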




&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;You plan on creating a subnet and want it to have at least capacity for 28 EC2 instances. What's the minimum size you need to have for your subnet?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx56ewp96lcwm6xhhwovc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx56ewp96lcwm6xhhwovc.png" alt=" " width="783" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; The minimum size you need is a &lt;strong&gt;/26&lt;/strong&gt;. While a /27 provides 32 total addresses, once AWS takes its 5 reserved IPs, you are left with only 27 usable slots. Since you need 28, you must move up to the next binary step, which is a /26.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Calculation&lt;/strong&gt;&lt;br&gt;
If you need 28 instances, your total IP requirement is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;28 (for your EC2 instances)&lt;/li&gt;
&lt;li&gt;+ 5 (AWS Reserved IPs)&lt;/li&gt;
&lt;li&gt;= 33 Total IP addresses required.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, we look at CIDR notation (which works in powers of 2) to find the smallest block that fits at least 33 addresses:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczwsrd5at10h3o2hjzcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczwsrd5at10h3o2hjzcf.png" alt=" " width="540" height="159"&gt;&lt;/a&gt;&lt;/p&gt;
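&lt;p&gt;The same sizing logic can be expressed as a small Python helper (min_prefix is an illustrative name; /28 and /16 are the smallest and largest subnet sizes AWS permits):&lt;/p&gt;

```python
# Smallest AWS subnet prefix that leaves room for `needed` instances,
# remembering that AWS reserves 5 addresses per subnet.
def min_prefix(needed, reserved=5):
    prefix = 28                       # smallest subnet AWS allows
    while prefix >= 16:               # largest subnet AWS allows
        usable = 2 ** (32 - prefix) - reserved
        if usable >= needed:
            return prefix
        prefix -= 1                   # smaller prefix = bigger subnet
    raise ValueError("does not fit in a single subnet")

print(min_prefix(27))  # 27 -> a /27 (32 - 5 = 27 usable) just fits
print(min_prefix(28))  # 26 -> a /26 (64 - 5 = 59 usable) is required
```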




&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;Security Groups operate at the ................. level while NACLs operate at the ................. level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd5zmtgy9tqec2hq45an.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd5zmtgy9tqec2hq45an.png" alt=" " width="795" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; Security Groups operate at the instance level while NACLs operate at the subnet level.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;You have attached an Internet Gateway to your VPC, but your EC2 instances still don't have access to the internet. What is NOT a possible issue?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjxvv6fc006bl5vtayys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjxvv6fc006bl5vtayys.png" alt=" " width="799" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; Security groups in AWS are stateful: if an outgoing request is allowed, the corresponding inbound response is automatically allowed. A restrictive inbound rule therefore cannot block return traffic, so this option is not a possible cause of your EC2 instances' internet access issue.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 6:
&lt;/h2&gt;

&lt;p&gt;You would like to provide Internet access to your EC2 instances in private subnets with IPv4 while making sure this solution requires the least amount of administration and scales seamlessly. What should you use?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqvzzq0yoyf0nt6l9chx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqvzzq0yoyf0nt6l9chx.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; A NAT Gateway is the best option for providing internet access to your EC2 instances in private subnets while minimizing administrative overhead, as it is fully managed by AWS and automatically scales with your traffic demands. This choice aligns perfectly with your goal of efficient and hassle-free network management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the other answers are wrong:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Egress-Only Internet Gateway (EOIGW)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Flaw: Egress-Only IGWs are strictly for IPv6 traffic.&lt;/li&gt;
&lt;li&gt;Why it fails here: Your question specifically asks for IPv4 access. IPv4 and IPv6 use entirely different protocols for "hiding" private instances. An EOIGW cannot translate IPv4 addresses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. NAT Instances&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Flaw: These are DIY (Do-It-Yourself) virtual machines.&lt;/li&gt;
&lt;li&gt;Why it fails here:
&lt;ul&gt;
&lt;li&gt;High Administration: You are responsible for managing the EC2 instance, patching the OS, and configuring the NAT software (like iptables).&lt;/li&gt;
&lt;li&gt;Poor Scaling: If your traffic exceeds the instance's bandwidth, you have to manually upgrade the instance size (vertical scaling) or set up a complex fleet (horizontal scaling). It does not scale "seamlessly" like a NAT Gateway does.&lt;/li&gt;
&lt;li&gt;Single Point of Failure: Unless you set up a high-availability script, if that one instance crashes, your entire private subnet loses internet access.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Question 7:
&lt;/h2&gt;

&lt;p&gt;VPC Peering has been enabled between VPC A and VPC B, and the route tables have been updated for VPC A, but the EC2 instances still cannot communicate. What is the likely issue?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54ldycvi0acafq721mc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54ldycvi0acafq721mc7.png" alt=" " width="793" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; In VPC Peering, both VPCs need updated route tables to allow communication between them; neglecting VPC B's route table can block traffic. This understanding highlights the importance of proper configuration in networking setups on AWS.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 8:
&lt;/h2&gt;

&lt;p&gt;You have set up a Direct Connect connection between your corporate data center and your VPC A in your AWS account. You need to access VPC B in another AWS region from your corporate datacenter as well. What should you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgeh7baffcvridcgb8gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgeh7baffcvridcgb8gw.png" alt=" " width="793" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; A Direct Connect Gateway enables you to access multiple VPCs across different regions from your corporate data center over your existing connection, providing seamless connectivity. This choice effectively aligns with the objective of optimizing network connectivity in multi-region architectures.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 9:
&lt;/h2&gt;

&lt;p&gt;When using VPC Endpoints, what are the only two AWS services that have a Gateway Endpoint available?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxakt7en21td2agyvnp7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxakt7en21td2agyvnp7d.png" alt=" " width="791" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; Amazon S3 and DynamoDB are the only AWS services that support a Gateway Endpoint, which allows private connections from your VPC without using public IPs. This understanding is crucial for efficiently managing secure connections in your AWS architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 10:
&lt;/h2&gt;

&lt;p&gt;AWS reserves 5 IP addresses each time you create a new subnet in a VPC. When you create a subnet with CIDR 10.0.0.0/24, the following IP addresses are reserved, EXCEPT ....................&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj6uogqgxyzpll356ekx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj6uogqgxyzpll356ekx.png" alt=" " width="789" height="283"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; AWS reserves the first four IP addresses (10.0.0.0 to 10.0.0.3) in a subnet for specific functions, meaning 10.0.0.4 is the first usable address and not reserved. This understanding is key when managing IP addresses within your VPC's subnets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Reserved List for 10.0.0.0/24&lt;/strong&gt;&lt;br&gt;
For this specific subnet, the reserved addresses are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;10.0.0.0: Network address.&lt;/li&gt;
&lt;li&gt;10.0.0.1: Reserved by AWS for the VPC router.&lt;/li&gt;
&lt;li&gt;10.0.0.2: Reserved by AWS for mapping to Amazon Provided DNS.&lt;/li&gt;
&lt;li&gt;10.0.0.3: Reserved by AWS for future use.&lt;/li&gt;
&lt;li&gt;10.0.0.255: Network broadcast address (AWS does not support broadcast, but it reserves this address anyway).&lt;/li&gt;
&lt;/ol&gt;
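&lt;p&gt;Python's ipaddress module can enumerate those reserved addresses for any subnet, since they are always the first four plus the last:&lt;/p&gt;

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/24")
# The five addresses AWS reserves: first four plus the broadcast address.
reserved = [net[0], net[1], net[2], net[3], net[-1]]
print([str(ip) for ip in reserved])
# -> ['10.0.0.0', '10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.255']
print(net[4])  # 10.0.0.4, the first address usable by an instance
```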




&lt;h2&gt;
  
  
  Question 11:
&lt;/h2&gt;

&lt;p&gt;You have 3 VPCs A, B, and C. You want to establish a VPC Peering connection between all the 3 VPCs. What should you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5iprsukna7um9bwf7st.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5iprsukna7um9bwf7st.png" alt=" " width="785" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; You must create a peering connection between each pair of VPCs (A-B, A-C, and B-C), because VPC Peering does not support transitive relationships: each VPC must be directly peered with every other VPC to enable communication. This understanding is crucial for establishing effective connections among multiple VPCs in your AWS environment.&lt;/p&gt;
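&lt;p&gt;Because peering is not transitive, a full mesh of n VPCs needs n*(n-1)/2 connections; a quick Python check for three VPCs:&lt;/p&gt;

```python
from itertools import combinations

# Every pair of VPCs needs its own peering connection.
vpcs = ["A", "B", "C"]
pairs = list(combinations(vpcs, 2))
print(len(pairs))   # 3
print(pairs)        # [('A', 'B'), ('A', 'C'), ('B', 'C')]
```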




&lt;h2&gt;
  
  
  Question 12:
&lt;/h2&gt;

&lt;p&gt;How can you capture information about IP traffic inside your VPCs?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4f8nzl88vo5x8vvzsz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4f8nzl88vo5x8vvzsz1.png" alt=" " width="785" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; VPC Flow Logs capture metadata about the IP traffic going to and from network interfaces in your VPC. This data is essential for monitoring network activity, troubleshooting connectivity, and auditing connections, which is central to managing and securing your AWS network infrastructure.&lt;/p&gt;
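&lt;p&gt;A VPC Flow Log record in the default (version 2) format is a space-separated line. A minimal parsing sketch, with made-up sample values:&lt;/p&gt;

```python
# Field names of the default VPC Flow Log format (version 2).
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

# A hypothetical record, values invented for illustration only.
record = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 "
          "443 49152 6 10 840 1620000000 1620000060 ACCEPT OK")

parsed = dict(zip(FIELDS, record.split()))
print(parsed["srcaddr"], "to", parsed["dstaddr"], parsed["action"])
```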




&lt;h2&gt;
  
  
  Question 13:
&lt;/h2&gt;

&lt;p&gt;If you want a 500 Mbps Direct Connect connection between your corporate datacenter to AWS, you would choose a .................. connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi485joyjnep49nljzalm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi485joyjnep49nljzalm.png" alt=" " width="790" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Hosted Direct Connect connections, provisioned through an AWS Direct Connect Partner, support capacities below 1 Gbps such as 500 Mbps, while dedicated connections are only available at higher fixed speeds (1 Gbps and above). A hosted connection is therefore the appropriate choice for a 500 Mbps link to AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2e85vkbysl1acl2mdtz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2e85vkbysl1acl2mdtz.png" alt=" " width="559" height="185"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 14:
&lt;/h2&gt;

&lt;p&gt;When you set up an AWS Site-to-Site VPN connection between your corporate on-premises datacenter and VPCs in AWS Cloud, what are the two major components you want to configure for this connection?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finrscj44z1docmey1d7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finrscj44z1docmey1d7u.png" alt=" " width="784" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (4):&lt;/strong&gt; A Site-to-Site VPN connection requires two components: a Customer Gateway, which represents the on-premises VPN device, and a Virtual Private Gateway, the VPN concentrator attached to the VPC on the AWS side. Together they establish the encrypted tunnel between your datacenter and the AWS Cloud.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 15:
&lt;/h2&gt;

&lt;p&gt;Your company has several on-premises sites across the USA. These sites are currently linked using private connections, but your private connections provider has been recently quite unstable, making your IT architecture partially offline. You would like to create a backup connection that will use the public Internet to link your on-premises sites, that you can failover in case of issues with your provider. What do you recommend?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj7i2gtttss2ldvzk3gr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj7i2gtttss2ldvzk3gr.png" alt=" " width="784" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; AWS VPN CloudHub lets you connect multiple on-premises sites over the public Internet using a hub-and-spoke model of Site-to-Site VPN connections. It is well suited as a secure, low-cost failover path for when your private connections provider has issues.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 16:
&lt;/h2&gt;

&lt;p&gt;You need to set up a dedicated connection between your on-premises corporate datacenter and AWS Cloud. This connection must be private, consistent, and traffic must not travel through the Internet. Which AWS service should you use?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0w2gkgicf0ywpypjbr8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0w2gkgicf0ywpypjbr8f.png" alt=" " width="784" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; AWS Direct Connect provides a dedicated, private connection between your on-premises datacenter and AWS, delivering consistent performance because traffic never traverses the public Internet. This matches the requirement for a reliable, private network link.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrong Choices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. AWS Site-to-Site VPN&lt;/strong&gt;&lt;br&gt;
Think of this as the "Fast and Affordable" alternative to Direct Connect. It creates an encrypted tunnel between your on-premises data center and your AWS VPC using the Public Internet.&lt;br&gt;
&lt;strong&gt;2. AWS PrivateLink&lt;/strong&gt;&lt;br&gt;
PrivateLink is fundamentally different. It isn't a "network-to-network" connection; it is a "Service-to-Service" connection. It allows you to expose a specific service (like a database or a third-party API) to another VPC or on-premises network without ever using an Internet Gateway, NAT Gateway, or Peering.&lt;br&gt;
&lt;strong&gt;4. Amazon EventBridge&lt;/strong&gt;&lt;br&gt;
EventBridge is often a "distractor" answer when you are asked about establishing a network connection. It is a serverless event bus for routing application events between services: it operates at the application layer and provides no network connectivity at all, so it cannot serve as a dedicated or private network link.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 17:
&lt;/h2&gt;

&lt;p&gt;Using a Direct Connect connection, you can access both public and private AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi263bne3daevnio32vmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi263bne3daevnio32vmn.png" alt=" " width="785" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; You can indeed access both public resources, like AWS S3 buckets, and private resources, such as EC2 instances in a Virtual Private Cloud (VPC). This understanding reinforces your knowledge of how to optimize secure connectivity to AWS resources.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 18:
&lt;/h2&gt;

&lt;p&gt;You want to scale an AWS Site-to-Site VPN connection's throughput, established between your on-premises datacenter and AWS Cloud, beyond a single IPsec tunnel's maximum limit of 1.25 Gbps. What should you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq5wnem3hmy74sm7unkk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq5wnem3hmy74sm7unkk.png" alt=" " width="782" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; AWS Transit Gateway supports Equal-Cost Multi-Path (ECMP) routing, which lets you aggregate traffic across multiple Site-to-Site VPN tunnels and so scale beyond the 1.25 Gbps limit of a single IPsec tunnel.&lt;/p&gt;
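&lt;p&gt;As a rough back-of-the-envelope sketch (assuming each Site-to-Site VPN connection provides two tunnels and each tunnel tops out at about 1.25 Gbps), ECMP over a Transit Gateway aggregates throughput roughly as follows:&lt;/p&gt;

```python
TUNNEL_GBPS = 1.25  # approximate maximum throughput of one IPsec tunnel

def ecmp_aggregate(vpn_connections: int, tunnels_per_connection: int = 2) -> float:
    """Rough upper bound on aggregate throughput when Transit Gateway
    ECMP spreads flows across all active tunnels. Real throughput
    depends on flow distribution, since one flow stays on one tunnel."""
    return vpn_connections * tunnels_per_connection * TUNNEL_GBPS

print(ecmp_aggregate(1))  # 2.5
print(ecmp_aggregate(4))  # 10.0
```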




&lt;h2&gt;
  
  
  Question 19:
&lt;/h2&gt;

&lt;p&gt;You have a VPC in your AWS account that runs in dual-stack mode. You repeatedly try to launch an EC2 instance, but it fails. After further investigation, you find that you no longer have IPv4 addresses available. What should you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8722ld77zne3aakpdvyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8722ld77zne3aakpdvyq.png" alt=" " width="782" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; Adding an additional (secondary) IPv4 CIDR block to the VPC increases the number of available IPv4 addresses, allowing the EC2 instance to launch successfully. This directly addresses the address depletion while keeping your current network configuration intact.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 20:
&lt;/h2&gt;

&lt;p&gt;A web application backend is hosted on EC2 instances in private subnets, fronted by an Application Load Balancer in public subnets. There is a requirement to give some of the developers access to the backend EC2 instances without exposing them to the Internet. You have created a bastion host EC2 instance in a public subnet and configured the backend EC2 instances' Security Group to allow traffic from the bastion host. Which of the following is the best configuration for the bastion host's Security Group to make it secure?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0r22e8gbv97v87lw2bx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0r22e8gbv97v87lw2bx.png" alt=" " width="783" height="272"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Answer (2):&lt;/strong&gt; Restricting the bastion host's Security Group to allow inbound SSH (port 22) only from your company's trusted public IP range keeps access secure: developers can reach the backend EC2 instances through the bastion host without the backend ever being exposed to the Internet.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 21:
&lt;/h2&gt;

&lt;p&gt;A company has set up a Direct Connect connection between their corporate datacenter and AWS. There is a requirement to prepare a cost-effective, secure backup connection in case there are issues with this Direct Connect connection. What is the most cost-effective and secure solution you recommend?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jy69w3qt6gz3y83r0q0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jy69w3qt6gz3y83r0q0.png" alt=" " width="786" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; By selecting "Setup a Site-to-Site VPN connection as a backup," you chose a cost-effective solution that provides a secure alternative in case the primary Direct Connect connection fails. This approach ensures continuous connectivity while balancing security and cost, aligning well with the goal of maintaining reliable access to AWS resources.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 22:
&lt;/h2&gt;

&lt;p&gt;Which AWS service allows you to protect and control traffic in your VPC from layer 3 to layer 7?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjum6gmm3of241z7rg8of.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjum6gmm3of241z7rg8of.png" alt=" " width="792" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (1):&lt;/strong&gt; AWS Network Firewall is the service designed to protect and control traffic in your VPC from layer 3 up to layer 7, providing fine-grained filtering across the whole network stack and robust security for your cloud resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrkyh35ava9ejfs0ss0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrkyh35ava9ejfs0ss0v.png" alt=" " width="662" height="188"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 23:
&lt;/h2&gt;

&lt;p&gt;A web application is hosted on a fleet of EC2 instances managed by an Auto Scaling Group. You expose this application through an Application Load Balancer. Both the EC2 instances and the ALB are deployed in a VPC with the CIDR 192.168.0.0/18. How do you configure the EC2 instances' security group to ensure only the ALB can access them on port 80?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90q6y44o4xoaijwe5434.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90q6y44o4xoaijwe5434.png" alt=" " width="786" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer (3):&lt;/strong&gt; By choosing "Add an Inbound Rule with port 80 and ALB's Security Group as the source," you ensured that only the Application Load Balancer can communicate with your EC2 instances, significantly enhancing your security posture. This aligns with your learning objective of understanding VPC traffic management and the importance of using security groups for precise access control.&lt;/p&gt;
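&lt;p&gt;A quick sketch of why referencing the ALB's Security Group beats using the VPC CIDR as the source: the 192.168.0.0/18 range covers thousands of addresses, while an SG reference scopes the rule to the ALB's network interfaces alone (the SG ID below is hypothetical):&lt;/p&gt;

```python
import ipaddress

vpc = ipaddress.ip_network("192.168.0.0/18")
# Allowing the whole VPC CIDR as the source would admit every address
# in the VPC, not just the ALB:
print(vpc.num_addresses)  # 16384

# Referencing the ALB's Security Group instead scopes the rule to the
# ALB only. A sketch of the rule as data (SG ID is made up):
inbound_rule = {
    "protocol": "tcp",
    "port": 80,
    "source": "sg-alb-example",  # hypothetical ALB Security Group ID
}
print(inbound_rule)
```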




&lt;p&gt;To stay informed on the latest technical insights and tutorials, connect with me on &lt;a href="https://medium.com/@issackpaul95" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/minoltan/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://dev.to/minoltan"&gt;Dev.to&lt;/a&gt;. For professional inquiries or technical discussions, please contact me via &lt;a href="mailto:issackpaul95@gmail.com"&gt;email&lt;/a&gt;. I welcome the opportunity to engage with fellow professionals and address any questions you may have. All blogs in this series will be optimized, fine-tuned, developed, and updated in a timely manner to reflect the latest AWS changes, exam updates, and real-world best practices.&lt;/p&gt;

</description>
      <category>vpc</category>
      <category>aws</category>
      <category>networking</category>
      <category>loadbalancer</category>
    </item>
    <item>
      <title>AWS Cloud Practitioner Questions | High availability &amp; Scalability</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Wed, 11 Feb 2026 09:59:53 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-high-availability-scalability-4oi2</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-high-availability-scalability-4oi2</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;Scaling an EC2 instance from r4.large to r4.4xlarge is called .....................&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff545vpoar2wyt6dnfhkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff545vpoar2wyt6dnfhkm.png" alt=" " width="789" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; Scaling an EC2 instance from a smaller size (r4.large) to a larger one (r4.4xlarge) is an example of upgrading the resources of a single instance, which defines vertical scalability. This concept focuses on increasing the capacity of existing hardware rather than adding more instances.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;Running an application on an Auto Scaling Group that scales the number of EC2 instances in and out is called .....................&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoifsc1lgo3cokl9wurn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoifsc1lgo3cokl9wurn.png" alt=" " width="789" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Running an application on an Auto Scaling Group involves adding or removing instances to handle changes in demand, which perfectly exemplifies the concept of horizontally scaling by increasing capacity through multiple instances rather than upgrading a single instance's resources.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;Elastic Load Balancers provide a .......................&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdp850q37qzejlfknwkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdp850q37qzejlfknwkg.png" alt=" " width="788" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; Elastic Load Balancers provide a constant endpoint for your application, allowing you to manage changes in the underlying infrastructure without affecting how your users connect to your services. This ensures reliability and accessibility, aligning with best practices in application scalability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;You are running a website on 10 EC2 instances fronted by an Elastic Load Balancer. Your users are complaining about the fact that the website always asks them to re-authenticate when they are moving between website pages. You are puzzled because it's working just fine on your machine and in the Dev environment with 1 EC2 instance. What could be the reason?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7otbmofguqeo7rkd5vdh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7otbmofguqeo7rkd5vdh.png" alt=" " width="788" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; Without Sticky Sessions enabled on the Elastic Load Balancer, user requests may be routed to different EC2 instances, causing loss of session data and prompting re-authentication. Enabling Sticky Sessions ensures that users are consistently directed to the same instance, maintaining their session state as they navigate the website.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;You are using an Application Load Balancer to distribute traffic to your website hosted on EC2 instances. It turns out that your website only sees traffic coming from private IPv4 addresses which are in fact your Application Load Balancer's IP addresses. What should you do to get the IP address of clients connected to your website?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypa1w6hgto8c3u3aivpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypa1w6hgto8c3u3aivpw.png" alt=" " width="787" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; The X-Forwarded-For header carries the original client's IP address: the Application Load Balancer (ALB) adds it to each forwarded request, so your EC2 instances can read it to accurately track user traffic. This capability is essential for effective logging, analytics, and security measures on your site.&lt;/p&gt;
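&lt;p&gt;A minimal sketch of reading the client IP on the backend, assuming the usual comma-separated X-Forwarded-For format (the sample addresses are hypothetical):&lt;/p&gt;

```python
def client_ip(x_forwarded_for: str) -> str:
    """Proxies append their upstream address, so the left-most entry
    is the original client (assuming no spoofed upstream headers)."""
    return x_forwarded_for.split(",")[0].strip()

# Hypothetical header value after passing through the ALB and a proxy:
print(client_ip("203.0.113.7, 10.0.3.25"))  # 203.0.113.7
```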




&lt;h2&gt;
  
  
  Question 6:
&lt;/h2&gt;

&lt;p&gt;You hosted an application on a set of EC2 instances fronted by an Elastic Load Balancer. A week later, users begin complaining that sometimes the application just doesn't work. You investigate the issue and find that some EC2 instances crash from time to time. What should you do to protect users from connecting to the EC2 instances that are crashing?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnznpdr19tznp4gmdrypv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnznpdr19tznp4gmdrypv.png" alt=" " width="793" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Health Checks allow the Elastic Load Balancer to automatically monitor the health of your EC2 instances and stop routing traffic to any instance that is unhealthy or has crashed, ensuring a better experience for your users.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 7:
&lt;/h2&gt;

&lt;p&gt;You are working as a Solutions Architect for a company and you are required to design an architecture for a high-performance, low-latency application that will receive millions of requests per second. Which type of Elastic Load Balancer should you choose?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci3n842rlrcc5qpjtcfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci3n842rlrcc5qpjtcfs.png" alt=" " width="778" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; The Network Load Balancer is designed to handle millions of requests per second while delivering the highest performance and lowest latency of the Elastic Load Balancer types, making it ideal for this high-performance, low-latency application.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 8:
&lt;/h2&gt;

&lt;p&gt;Application Load Balancers support the following protocols, EXCEPT:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7frpkhwulh5706ruam9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7frpkhwulh5706ruam9v.png" alt=" " width="785" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; Application Load Balancers are specifically designed to support application-layer protocols such as HTTP, HTTPS, and WebSocket, but do not support transport-layer protocols like TCP. This distinction is crucial for understanding how different load balancers operate based on the protocols they manage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 9:
&lt;/h2&gt;

&lt;p&gt;Application Load Balancers can route traffic to different Target Groups based on the following, EXCEPT:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxkz79jxh9zvu9n5c234.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxkz79jxh9zvu9n5c234.png" alt=" " width="785" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Application Load Balancers do not route traffic based on geographic location; instead, they can route based on criteria like URL Path and Hostname. This distinction helps clarify how ALBs function in managing traffic efficiently.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 10:
&lt;/h2&gt;

&lt;p&gt;Registered targets in a Target Group for an Application Load Balancer can be one of the following, EXCEPT:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn2xjo94tkwmhxdj0yl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn2xjo94tkwmhxdj0yl5.png" alt=" " width="785" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; Registered targets in an Application Load Balancer's Target Group can only be EC2 instances, private IP addresses, or Lambda functions; another load balancer cannot be registered as a target. This distinction highlights the specific roles each service plays within the AWS ecosystem.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 11:
&lt;/h2&gt;

&lt;p&gt;For compliance purposes, you would like to expose a fixed static IP address to your end-users so that they can write firewall rules that will be stable and approved by regulators. What type of Elastic Load Balancer would you choose?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fel72yyzbq1bnrenrivpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fel72yyzbq1bnrenrivpt.png" alt=" " width="779" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; The Network Load Balancer allows you to attach an Elastic IP address in each Availability Zone, providing the stable, fixed static IPs your end-users need to write firewall rules that regulators can approve.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 12:
&lt;/h2&gt;

&lt;p&gt;You want to create a custom application-based cookie in your Application Load Balancer. Which of the following can you use as a cookie name?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8qg0892xbey4wgh698a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8qg0892xbey4wgh698a.png" alt=" " width="795" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; It is a valid cookie name you can define for your custom application-based cookie in an Application Load Balancer, while the other options are reserved names used by AWS (AWSALB, AWSALBAPP, and AWSALBTG). This distinction helps ensure you create custom cookies effectively for managing user sessions in your application.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 13:
&lt;/h2&gt;

&lt;p&gt;You have a Network Load Balancer that distributes traffic across a set of EC2 instances in us-east-1. You have 2 EC2 instances in us-east-1b AZ and 5 EC2 instances in us-east-1e AZ. You have noticed that the CPU utilization is higher in the EC2 instances in us-east-1b AZ. After more investigation, you noticed that the traffic is equally distributed across the two AZs. How would you solve this problem?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fla95nor3tjb6jqxcr3kk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fla95nor3tjb6jqxcr3kk.png" alt=" " width="783" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Enabling Cross-Zone Load Balancing makes the load balancer distribute traffic evenly across all registered EC2 instances in all Availability Zones, rather than evenly per AZ. With 2 instances in us-east-1b and 5 in us-east-1e, this evens out the per-instance load and resolves the higher CPU utilization in us-east-1b.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 14:
&lt;/h2&gt;

&lt;p&gt;Which feature in both Application Load Balancers and Network Load Balancers allows you to load multiple SSL certificates on one listener?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdje8mbjbnctenzz0k2qu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdje8mbjbnctenzz0k2qu.png" alt=" " width="783" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; It is the feature that allows multiple SSL certificates to be bound to a single listener in both Application Load Balancers and Network Load Balancers. This capability enables you to host multiple secure domains on the same IP address, making it efficient and cost-effective for managing SSL certificates.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 15:
&lt;/h2&gt;

&lt;p&gt;You have an Application Load Balancer that is configured to redirect traffic to 3 Target Groups based on the following hostnames: users.example.com, api.external.example.com, and checkout.example.com. You would like to configure HTTPS for each of these hostnames. How do you configure the ALB to make this work?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdndnnsx3kjt8txjcrqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdndnnsx3kjt8txjcrqt.png" alt=" " width="784" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; SNI allows you to assign multiple SSL certificates to different hostnames on the same Application Load Balancer listener, making it possible to securely configure HTTPS for all your specified domains efficiently. This aligns with your learning objective of understanding how to manage SSL certificates in a load-balanced environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 16:
&lt;/h2&gt;

&lt;p&gt;You have an application hosted on a set of EC2 instances managed by an Auto Scaling Group for which you have set both the desired and maximum capacity to 3. You have also created a CloudWatch Alarm that is configured to scale out your ASG when CPU Utilization reaches 60%. Your application suddenly receives huge traffic and is now running at 80% CPU Utilization. What will happen?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4wkuriahi8e1ju2r098.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4wkuriahi8e1ju2r098.png" alt=" " width="784" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; The maximum capacity of your Auto Scaling Group is set to 3, which means it cannot scale beyond this limit regardless of the increased CPU utilization. This reinforces your understanding of Auto Scaling Group configurations and their constraints.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 17:
&lt;/h2&gt;

&lt;p&gt;You have an Auto Scaling Group fronted by an Application Load Balancer. You have configured the ASG to use ALB health checks, and one EC2 instance has just been reported unhealthy. What will happen to the EC2 instance?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rzqxbeuddnojsa23gau.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rzqxbeuddnojsa23gau.png" alt=" " width="784" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; The Auto Scaling Group (ASG) uses Application Load Balancer (ALB) health checks to monitor instance health. When an instance is marked unhealthy by the ALB, the ASG terminates it and launches a new instance to maintain the desired capacity and reliability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 18:
&lt;/h2&gt;

&lt;p&gt;Your boss asked you to scale your Auto Scaling Group based on the number of requests per minute your application makes to your database. What should you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eiwvmnhq0ig9wi2ne3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eiwvmnhq0ig9wi2ne3p.png" alt=" " width="785" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Standard CloudWatch metrics do not capture the number of requests per minute your application makes to the database, so you need to publish a custom metric and scale on it. This approach allows you to monitor your application's actual needs and scale the Auto Scaling Group accordingly, aligning with your objective of understanding dynamic scaling based on application performance.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 19:
&lt;/h2&gt;

&lt;p&gt;An application is deployed with an Application Load Balancer and an Auto Scaling Group. Currently, you manually scale the ASG and you would like to define a Scaling Policy that will ensure the average number of connections to your EC2 instances is around 1000. Which Scaling Policy should you use?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08sgrz929a8m20s641do.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08sgrz929a8m20s641do.png" alt=" " width="785" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; It allows you to automatically adjust the number of EC2 instances in your Auto Scaling Group to maintain a specific metric, such as the average number of connections, close to your target of 1000. This approach effectively simplifies scaling based on real-time performance metrics, aligning directly with your objective of automating resource management.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 20:
&lt;/h2&gt;

&lt;p&gt;You have an ASG and a Network Load Balancer. The application on your ASG supports the HTTP protocol and is integrated with the Load Balancer health checks. You are currently using the TCP health checks. You would like to migrate to using HTTP health checks, what do you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F346n7hpoyey71b2xwxua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F346n7hpoyey71b2xwxua.png" alt=" " width="786" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; The Network Load Balancer (NLB) is capable of using HTTP health checks, which are more tailored for applications supporting the HTTP protocol. This ensures more accurate monitoring of application availability and performance.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 21:
&lt;/h2&gt;

&lt;p&gt;You have a website hosted in EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. Currently, the website is served over HTTP, and you have been tasked to configure it to use HTTPS. You have created a certificate in ACM and attached it to the Application Load Balancer. What can you do to force users to access the website using HTTPS instead of HTTP?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5lxvv5dblbivtqjucfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5lxvv5dblbivtqjucfv.png" alt=" " width="793" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; By configuring the Application Load Balancer to redirect HTTP to HTTPS, you ensure that all traffic to your website is securely encrypted, enhancing user privacy and site security. This action directly meets the learning objective of effectively managing web application traffic and implementing security best practices within AWS environments.&lt;/p&gt;




&lt;p&gt;To stay informed on the latest technical insights and tutorials, connect with me on &lt;a href="https://medium.com/@issackpaul95" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/minoltan/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://dev.to/minoltan"&gt;Dev.to&lt;/a&gt;. For professional inquiries or technical discussions, please contact me via &lt;a href="mailto:issackpaul95@gmail.com"&gt;email&lt;/a&gt;. I welcome the opportunity to engage with fellow professionals and address any questions you may have. All blogs in this series will be optimized, fine-tuned, developed, and updated in a timely manner to reflect the latest AWS changes, exam updates, and real-world best practices.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudpractitioner</category>
      <category>elb</category>
      <category>scalability</category>
    </item>
    <item>
      <title>AWS Use Cases | Enhanced Streak System for Game Portal with Leaderboards &amp; Rewards</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Mon, 01 Dec 2025 17:04:50 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-use-cases-enhanced-streak-system-for-game-portal-with-leaderboards-rewards-17p0</link>
      <guid>https://forem.com/minoltan/aws-use-cases-enhanced-streak-system-for-game-portal-with-leaderboards-rewards-17p0</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to Streaks
&lt;/h2&gt;

&lt;p&gt;A streak is a count of consecutive days (or actions) on which a user performs a specific activity without breaking the chain. Streaks are commonly used in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Habit-tracking apps (e.g., Duolingo, Headspace)&lt;/li&gt;
&lt;li&gt;Gaming (daily login rewards, consecutive wins)&lt;/li&gt;
&lt;li&gt;Fitness apps (workout consistency)&lt;/li&gt;
&lt;li&gt;E-learning platforms (daily learning goals)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How AWS Helps Implement Streaks
&lt;/h2&gt;

&lt;p&gt;AWS provides serverless and scalable solutions to track streaks efficiently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda → Runs streak logic (increment, reset, reward checks)&lt;/li&gt;
&lt;li&gt;DynamoDB → Stores user streak data (last activity, current streak count)&lt;/li&gt;
&lt;li&gt;API Gateway → Exposes APIs for frontend (web/mobile apps)&lt;/li&gt;
&lt;li&gt;Amazon Cognito (Optional) → Handles user authentication&lt;/li&gt;
&lt;li&gt;AWS CDK → Simplifies deployment as infrastructure-as-code&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Cases for Streaks &amp;amp; Implementation Steps
&lt;/h3&gt;

&lt;h3&gt;
  
  
  1. Daily Login Streaks (Gaming/Fitness Apps)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Reward users for logging in daily.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation Steps:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Set Up DynamoDB Table&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Table: UserStreak&lt;/li&gt;
&lt;li&gt;Partition Key: userId (String)&lt;/li&gt;
&lt;li&gt;Sort Key: streakType&lt;/li&gt;
&lt;li&gt;Attributes: currentStreak, lastLogin, longestStreak&lt;/li&gt;
&lt;/ul&gt;
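Step 1 above can be sketched as an AWS CLI command (a config fragment; the on-demand billing mode is an assumption). Only the key attributes are declared up front: DynamoDB is schemaless, so currentStreak, lastLogin, and longestStreak are simply written by the Lambda later.

```shell
# Sketch: create the UserStreak table via the AWS CLI.
# Only key attributes are declared; PAY_PER_REQUEST billing is an assumption.
aws dynamodb create-table \
  --table-name UserStreak \
  --attribute-definitions \
      AttributeName=userId,AttributeType=S \
      AttributeName=streakType,AttributeType=S \
  --key-schema \
      AttributeName=userId,KeyType=HASH \
      AttributeName=streakType,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST
```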

&lt;p&gt;&lt;strong&gt;2. Create streakTrack Lambda Function&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the user already logged in today → skip&lt;/li&gt;
&lt;li&gt;If logged in yesterday → increment streak&lt;/li&gt;
&lt;li&gt;If missed a day → reset streak
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { UpdateItemCommand, GetItemCommand } from "@aws-sdk/client-dynamodb";
import { marshall, unmarshall } from "@aws-sdk/util-dynamodb";
import { ddbClient } from "./client";

const TABLE_NAME = process.env.STREAK_TABLE_NAME;
const MAX_FREEZE_DAYS = 2;

export const handler = async (event) =&amp;gt; {
  try {
    const { userId } = JSON.parse(event.body);
    if (!userId) {
      return { statusCode: 400, body: JSON.stringify({ error: "userId is required" }) };
    }

    const today = new Date().toISOString().split("T")[0];
    const yesterday = new Date();
    yesterday.setDate(yesterday.getDate() - 1);
    const yesterdayStr = yesterday.toISOString().split("T")[0];

    // ✅ Get current streak and freeze days
    const { currentStreak, lastLogin, freezeDaysRemaining } = await getUserData(userId);

    // ✅ If already logged in today
    if (lastLogin === today) {
      return success({ message: "Already logged in today", currentStreak, freezeDaysRemaining });
    }

    let newStreak = 1;
    let newFreeze = freezeDaysRemaining;

    // ✅ Case 1: Consecutive login (yesterday)
    if (lastLogin === yesterdayStr) {
      newStreak = currentStreak + 1;
    } 
    // ✅ Case 2: Missed days but has freeze days → use one
    else if (freezeDaysRemaining &amp;gt; 0) {
      newStreak = currentStreak; // keep streak intact
      newFreeze = freezeDaysRemaining - 1; // use one freeze day
    }

    // ✅ Update DB
    await updateUserData(userId, today, newStreak, newFreeze);

    return success({
      message: freezeDaysRemaining &amp;gt; 0 &amp;amp;&amp;amp; lastLogin !== yesterdayStr ? 
        "Missed day covered by a freeze day" : "Streak updated",
      currentStreak: newStreak,
      freezeDaysRemaining: newFreeze
    });

  } catch (err) {
    console.error("Error:", err);
    return { statusCode: 500, body: JSON.stringify({ error: err.message }) };
  }
};

// 🔹 Get user streak &amp;amp; freeze data
async function getUserData(userId) {
  const { Item } = await ddbClient.send(new GetItemCommand({
    TableName: TABLE_NAME,
    Key: marshall({ userId, streakType: "daily" }), // using same PK as freeze
  }));

  if (!Item) return { currentStreak: 0, lastLogin: null, freezeDaysRemaining: 0 };

  const data = unmarshall(Item);
  return {
    currentStreak: data.currentStreak || 0,
    lastLogin: data.lastLogin || null,
    freezeDaysRemaining: data.freezeDaysRemaining || 0
  };
}

// 🔹 Update streak and freeze count
async function updateUserData(userId, today, newStreak, newFreeze) {
  await ddbClient.send(new UpdateItemCommand({
    TableName: TABLE_NAME,
    Key: marshall({ userId, streakType: "daily" }),
    UpdateExpression: "SET currentStreak = :cs, lastLogin = :dt, freezeDaysRemaining = :fd",
    ExpressionAttributeValues: marshall({
      ":cs": newStreak,
      ":dt": today,
      ":fd": newFreeze
    })
  }));
}

// 🔹 Helper success response
function success(body) {
  return {
    statusCode: 200,
    headers: { "Access-Control-Allow-Origin": "*" },
    body: JSON.stringify(body)
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Create streakFreeze Lambda Function&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { GetItemCommand, UpdateItemCommand } from "@aws-sdk/client-dynamodb";
import { marshall, unmarshall } from "@aws-sdk/util-dynamodb";
import { ddbClient } from "./client.js";

const STREAK_TABLE_NAME = process.env.STREAK_TABLE_NAME;

export const handler = async (event) =&amp;gt; {
    try {
        const { userId } = await validateAndParseInput(event.body);

        const { freezeDaysRemaining, itemExists } = await getCurrentFreezeDays(userId);

        if (freezeDaysRemaining &amp;gt;= 2) {
            return formatErrorResponse(400, "Maximum freeze days (2) already reached");
        }

        const updatedFreeze = await updateFreezeDays(userId, freezeDaysRemaining, itemExists);

        return {
            statusCode: 200,
            headers: { "Access-Control-Allow-Origin": "*" },
            body: JSON.stringify({
                status: "success",
                freezeDaysRemaining: updatedFreeze
            })
        };

    } catch (error) {
        console.error("handler: ", error);
        return formatErrorResponse(400, error.message);
    }
};

async function validateAndParseInput(body) {
    const payload = JSON.parse(body);
    const { userId } = payload;

    if (!userId) {
        throw new Error("Missing required field: userId");
    }

    return { userId };
}

async function getCurrentFreezeDays(userId) {
    const { Item } = await ddbClient.send(new GetItemCommand({
        TableName: STREAK_TABLE_NAME,
        Key: marshall({ userId, streakType: "daily" }),
        ProjectionExpression: "freezeDaysRemaining"
    }));

    return {
        freezeDaysRemaining: Item ? unmarshall(Item).freezeDaysRemaining || 0 : 0,
        itemExists: !!Item
    };
}

async function updateFreezeDays(userId, currentFreezeDays, itemExists) {
    const updateParams = {
        TableName: STREAK_TABLE_NAME,
        Key: marshall({ userId, streakType: "daily" }),
        UpdateExpression: "SET freezeDaysRemaining = :newVal",
        ExpressionAttributeValues: marshall({ ":newVal": currentFreezeDays + 1 }),
        ReturnValues: "ALL_NEW"
    };

    if (!itemExists) {
        // For new records, set additional default values
        updateParams.UpdateExpression = "SET freezeDaysRemaining = :newVal, currentStreak = :zero, longestStreak = :zero, lastActivity = :empty";
        updateParams.ExpressionAttributeValues = marshall({
            ":newVal": 1,
            ":zero": 0,
            ":empty": ""
        });
    }

    const { Attributes } = await ddbClient.send(new UpdateItemCommand(updateParams));
    return unmarshall(Attributes).freezeDaysRemaining;
}

function formatErrorResponse(statusCode, message) {
    return {
        statusCode,
        headers: { "Access-Control-Allow-Origin": "*" },
        body: message
    };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Set Up API Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;POST /streak/track → Triggers the streakTrack Lambda&lt;/li&gt;
&lt;li&gt;POST /streak/freeze → Triggers the streakFreeze Lambda&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Frontend Integration&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Call API when user logs in&lt;/li&gt;
&lt;li&gt;Display streak count&lt;/li&gt;
&lt;/ul&gt;
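As a sketch of the frontend integration, the helper below builds the request the app would send on login. The base URL, route, and payload shape are assumptions based on the API routes above.

```javascript
// Hypothetical frontend helper: builds the request sent when the user logs in.
// The base URL, route, and payload shape are assumptions based on the API above.
function buildTrackRequest(baseUrl, userId) {
  return {
    url: baseUrl + "/streak/track",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ userId }),
    },
  };
}

// Usage, e.g. with fetch:
//   const { url, options } = buildTrackRequest(API_BASE, user.id);
//   const res = await fetch(url, options);
```

The response's currentStreak field can then be rendered in the UI next to the user's profile.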




&lt;h2&gt;
  
  
  Example Explanation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Initial Conditions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;currentStreak = 3&lt;/li&gt;
&lt;li&gt;freezeDaysRemaining = 1&lt;/li&gt;
&lt;li&gt;lastLogin = 2025-07-28&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;✅ Case 1: User logs in on 2025–07–29 (yesterday was last login)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda receives event: { "userId": "1134" }&lt;/li&gt;
&lt;li&gt;It checks lastLogin === yesterday (2025-07-28) → ✅ yes.&lt;/li&gt;
&lt;li&gt;No freeze day is used.&lt;/li&gt;
&lt;li&gt;currentStreak = 4, freezeDaysRemaining = 1&lt;/li&gt;
&lt;li&gt;Response:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "message": "Streak updated",
  "currentStreak": 4,
  "freezeDaysRemaining": 1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;✅ Case 2: User skips 2025–07–29, logs in on 2025–07–30&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Missed one day (2025–07–29)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Lambda checks: lastLogin = 2025-07-28, today = 2025-07-30&lt;/li&gt;
&lt;li&gt;lastLogin !== yesterday, so normally streak would reset.&lt;/li&gt;
&lt;li&gt;But freezeDaysRemaining &amp;gt; 0 → ✅ use one freeze.&lt;/li&gt;
&lt;li&gt;currentStreak stays 3, freezeDaysRemaining = 0&lt;/li&gt;
&lt;li&gt;Response:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "message": "Missed day covered by a freeze day",
  "currentStreak": 3,
  "freezeDaysRemaining": 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;✅ Case 3: User skips 2025–07–31, logs in on 2025–08–01&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Missed two consecutive days and has no freeze left&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Lambda checks: lastLogin = 2025-07-28, today = 2025-08-01&lt;/li&gt;
&lt;li&gt;lastLogin !== yesterday, and freezeDaysRemaining = 0&lt;/li&gt;
&lt;li&gt;No freeze day available → streak resets to 1&lt;/li&gt;
&lt;li&gt;currentStreak = 1, freezeDaysRemaining = 0&lt;/li&gt;
&lt;li&gt;Response:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "message": "Streak updated",
  "currentStreak": 1,
  "freezeDaysRemaining": 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
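The three cases above reduce to a small pure function. This is a sketch of the same decision logic as the streakTrack Lambda, extracted so it can be reasoned about and unit-tested without DynamoDB; the names are illustrative.

```javascript
// Pure sketch of the streakTrack decision logic (illustrative names).
// state: { currentStreak, freezeDaysRemaining, lastLogin }
function nextStreak(state, today, yesterday) {
  // Already counted today: nothing changes.
  if (state.lastLogin === today) {
    return { streak: state.currentStreak, freeze: state.freezeDaysRemaining };
  }
  // Consecutive day: extend the streak.
  if (state.lastLogin === yesterday) {
    return { streak: state.currentStreak + 1, freeze: state.freezeDaysRemaining };
  }
  // Missed a day but a freeze is available: spend one, keep the streak.
  if (state.freezeDaysRemaining > 0) {
    return { streak: state.currentStreak, freeze: state.freezeDaysRemaining - 1 };
  }
  // Otherwise the chain breaks and today starts a new streak of 1.
  return { streak: 1, freeze: 0 };
}
```

Running the initial conditions through this function reproduces the responses shown in Cases 1 through 3.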



&lt;p&gt;&lt;strong&gt;✅ Case 4: User later earns a freeze day (via freeze API)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User calls /streak/freeze with { "userId": "1134", "action": "add" }&lt;/li&gt;
&lt;li&gt;Freeze Lambda increments freezeDaysRemaining but caps it at 2.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "status": "success",
  "freezeDaysRemaining": 1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;✅ Case 5: User tries to manually use a freeze&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Calls /streak/freeze with { "userId": "1134", "action": "use" }&lt;/li&gt;
&lt;li&gt;Lambda checks: freezeDaysRemaining &amp;gt; 0 → ✅ yes, decreases by 1.&lt;/li&gt;
&lt;li&gt;If already 0, returns error:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "error": "No freeze days remaining" }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
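Cases 4 and 5 amount to a counter that is capped at 2 on the way up and floored at 0 on the way down. A minimal sketch of that rule (the "use" path is the optional manual action; error messages mirror the ones above):

```javascript
// Capped freeze-day counter matching Cases 4 and 5 (sketch; cap of 2 as above).
const MAX_FREEZE_DAYS = 2;

function addFreezeDay(current) {
  if (current >= MAX_FREEZE_DAYS) {
    return { ok: false, error: "Maximum freeze days (2) already reached", freezeDaysRemaining: current };
  }
  return { ok: true, freezeDaysRemaining: current + 1 };
}

function useFreezeDay(current) {
  if (current === 0) {
    return { ok: false, error: "No freeze days remaining", freezeDaysRemaining: 0 };
  }
  return { ok: true, freezeDaysRemaining: current - 1 };
}
```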



&lt;p&gt;&lt;strong&gt;🔥 How This Works Together&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Streak Lambda&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto-consumes freeze only when needed (user missed a day).&lt;/li&gt;
&lt;li&gt;Never lets streak reset unnecessarily if freeze is available.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Freeze Lambda&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adds freeze days when rewarded.&lt;/li&gt;
&lt;li&gt;Allows manual usage (optional) if needed.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  2. Consecutive Wins Streak (Gaming Leaderboards)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; Track players’ winning streaks and reward top performers.&lt;br&gt;
&lt;strong&gt;Implementation Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. DynamoDB Table&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;UserStreak&lt;/li&gt;
&lt;li&gt;PK: userId&lt;/li&gt;
&lt;li&gt;Sort Key: streakType&lt;/li&gt;
&lt;li&gt;Attributes: currentWinStreak, maxWinStreak, lastWinDate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Lambda Function&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After a game ends, check if the player won &lt;/li&gt;
&lt;li&gt;Increment streak if last game was a win&lt;/li&gt;
&lt;li&gt;Reset if lost
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { GetItemCommand, UpdateItemCommand } from "@aws-sdk/client-dynamodb";
import { marshall, unmarshall } from "@aws-sdk/util-dynamodb";
import { ddbClient } from "./client.js";

const TABLE_NAME = process.env.STREAK_TABLE_NAME;

export const handler = async (event) =&amp;gt; {
  try {
    const { userId, won } = JSON.parse(event.body);

    if (!userId || won === undefined) {
      return formatResponse(400, { error: "userId and won (true/false) are required" });
    }

    const today = new Date().toISOString().split("T")[0];
    const yesterday = new Date();
    yesterday.setDate(yesterday.getDate() - 1);
    const yesterdayStr = yesterday.toISOString().split("T")[0];

    // Get current game streak data
    const { currentWinStreak, maxWinStreak, lastWinDate } = await getGameStreak(userId);

    let newWinStreak = currentWinStreak;
    let newMaxWinStreak = maxWinStreak;

    if (won) {
      // If last game was yesterday, continue streak, else reset to 1
      newWinStreak = lastWinDate === yesterdayStr ? currentWinStreak + 1 : 1;

      // Update max streak
      if (newWinStreak &amp;gt; maxWinStreak) {
        newMaxWinStreak = newWinStreak;
      }

      // Update DynamoDB
      await updateGameStreak(userId, today, newWinStreak, newMaxWinStreak);
    } else {
      // Player lost → reset current streak
      newWinStreak = 0;
      await updateGameStreak(userId, today, newWinStreak, maxWinStreak);
    }

    return formatResponse(200, {
      message: won ? "Game won streak updated" : "Game lost, streak reset",
      currentWinStreak: newWinStreak,
      maxWinStreak: newMaxWinStreak
    });

  } catch (err) {
    console.error("Error updating game streak:", err);
    return formatResponse(500, { error: err.message });
  }
};

// 🔹 Get current streak from DynamoDB
async function getGameStreak(userId) {
  const { Item } = await ddbClient.send(new GetItemCommand({
    TableName: TABLE_NAME,
    Key: marshall({ userId, streakType: "game" }),
    ProjectionExpression: "currentWinStreak, maxWinStreak, lastWinDate"
  }));

  if (!Item) {
    return { currentWinStreak: 0, maxWinStreak: 0, lastWinDate: null };
  }

  const data = unmarshall(Item);
  return {
    currentWinStreak: data.currentWinStreak || 0,
    maxWinStreak: data.maxWinStreak || 0,
    lastWinDate: data.lastWinDate || null
  };
}

// 🔹 Update streak in DynamoDB
async function updateGameStreak(userId, today, currentWinStreak, maxWinStreak) {
  await ddbClient.send(new UpdateItemCommand({
    TableName: TABLE_NAME,
    Key: marshall({ userId, streakType: "game" }),
    UpdateExpression: "SET currentWinStreak = :cws, maxWinStreak = :mws, lastWinDate = :ld",
    ExpressionAttributeValues: marshall({
      ":cws": currentWinStreak,
      ":mws": maxWinStreak,
      ":ld": today
    }),
    ReturnValues: "UPDATED_NEW"
  }));
}

// 🔹 Helper response formatter
function formatResponse(statusCode, body) {
  return {
    statusCode,
    headers: { "Access-Control-Allow-Origin": "*" },
    body: JSON.stringify(body)
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;✅ Example Flow&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;🟢 Case 1: User wins consecutive games&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;lastWinDate: 2025–07–30&lt;/li&gt;
&lt;li&gt;today: 2025–07–31&lt;/li&gt;
&lt;li&gt;Result: currentWinStreak = 3, maxWinStreak = 3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🔴 Case 2: User loses&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;won: false&lt;/li&gt;
&lt;li&gt;Result: currentWinStreak = 0, maxWinStreak stays as it was.&lt;/li&gt;
&lt;/ul&gt;
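Both cases follow one rule: a win extends the streak only if the previous win was yesterday, and a loss zeroes the current streak while preserving the record. A pure sketch of that logic, mirroring the Lambda above (here a loss leaves lastWinDate untouched, since no win occurred):

```javascript
// Pure sketch of the win-streak rule (illustrative names).
// state: { currentWinStreak, maxWinStreak, lastWinDate }
function nextWinStreak(state, today, yesterday, won) {
  if (!won) {
    // A loss resets the current streak but never touches the record.
    return { currentWinStreak: 0, maxWinStreak: state.maxWinStreak, lastWinDate: state.lastWinDate };
  }
  // A win extends the streak only when the previous win was yesterday.
  const current = state.lastWinDate === yesterday ? state.currentWinStreak + 1 : 1;
  const max = current > state.maxWinStreak ? current : state.maxWinStreak;
  return { currentWinStreak: current, maxWinStreak: max, lastWinDate: today };
}
```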




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Streaks are a powerful engagement tool, and AWS makes implementation easy:&lt;br&gt;
&lt;strong&gt;✅ Serverless &amp;amp; Scalable (Lambda + DynamoDB)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;✅ Real-Time Updates (API Gateway)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;✅ Reward Integration (Lambda + DynamoDB)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;✅ Cost-Effective (Pay-per-use pricing)&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Steps:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Start with a basic &lt;strong&gt;daily login streak&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Expand to &lt;strong&gt;game win streaks&lt;/strong&gt; and &lt;strong&gt;habit tracking&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Add &lt;strong&gt;rewards &amp;amp; leaderboards&lt;/strong&gt; for higher engagement&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Advanced Streak Features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Milestone Offers (Risk/Reward)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When users hit milestones (e.g., 7 days), give them a choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Option A:&lt;/strong&gt; Continue safely (streak grows normally)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Option B:&lt;/strong&gt; Gamble (“Break your streak now for 3x rewards!”)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Smart Streak Logic&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tracks timezone-aware daily activity&lt;/li&gt;
&lt;li&gt;Handles edge cases (midnight checks, server delays)&lt;/li&gt;
&lt;/ul&gt;
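For the timezone-aware point above, one approach is to derive the activity date in the user's timezone instead of the server's UTC clock. A sketch using the built-in Intl API (the "en-CA" locale is chosen only because it formats dates as YYYY-MM-DD):

```javascript
// Sketch: derive the "activity date" in the user's timezone, not server UTC.
// The en-CA locale is used only because it formats dates as YYYY-MM-DD.
function activityDate(date, timeZone) {
  return new Intl.DateTimeFormat("en-CA", {
    timeZone,
    year: "numeric",
    month: "2-digit",
    day: "2-digit",
  }).format(date);
}
```

A login at 20:00 UTC on 2025-07-28 already counts as 2025-07-29 for a user in Asia/Colombo (UTC+5:30), which changes whether the streak increments or resets.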

&lt;p&gt;&lt;strong&gt;3. Leaderboard Logic&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reward users who rank higher on the leaderboard&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;For CDK Implementation — &lt;a href="https://github.com/minoltan/aws-usecases/tree/main/streak-system" rel="noopener noreferrer"&gt;My Repository&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;To stay informed on the latest technical insights and tutorials, connect with me on &lt;a href="https://medium.com/@issackpaul95" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/minoltan/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://dev.to/minoltan"&gt;Dev.to&lt;/a&gt;. For professional inquiries or technical discussions, please contact me via &lt;a href="mailto:issackpaul95@gmail.com"&gt;email&lt;/a&gt;. I welcome the opportunity to engage with fellow professionals and address any questions you may have.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>serverless</category>
      <category>aws</category>
      <category>architecture</category>
    </item>
    <item>
      <title>AWS Use Cases | Spin Wheel | How big companies manage prize giveaways and prevent duplication at scale using AWS serverless</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Sun, 30 Nov 2025 16:41:30 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-use-cases-spin-wheel-how-big-companies-manage-prize-giveaways-and-prevent-duplication-at-2f2c</link>
      <guid>https://forem.com/minoltan/aws-use-cases-spin-wheel-how-big-companies-manage-prize-giveaways-and-prevent-duplication-at-2f2c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Imagine the thrill of a spin-to-win promotion on your favorite e-commerce site. The wheel spins, the music builds, and for a few seconds, you’re on the edge of your seat, hoping for that big prize. For businesses, these gamified promotions are a goldmine: they drive immense user engagement, collect valuable data, and can significantly boost sales. But behind the scenes, managing such a giveaway for millions of eager users isn’t about luck — it’s about a robust, scalable architecture. The wrong approach can turn a successful campaign into a financial disaster, leaving you with an empty prize chest and a legion of frustrated customers.&lt;/p&gt;

&lt;p&gt;This blog post will take you on a journey behind the curtain of a high-traffic spin-wheel promotion. We’ll explore the serious technical challenges that even the biggest companies face and, most importantly, show you how they build a bulletproof system using AWS Serverless to manage prize giveaways at scale and prevent a critical flaw: prize duplication.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem in Simple Terms
&lt;/h2&gt;

&lt;p&gt;Think of it like this: you have a single, highly-coveted prize — say, a brand new phone. You’ve announced that the first person to win on the spin wheel gets it.&lt;/p&gt;

&lt;p&gt;Now, imagine thousands of people are all spinning the wheel at the same exact time. In the chaos of this high traffic, two different users (let’s call them Alice and Bob) hit the “spin” button within a fraction of a second of each other.&lt;/p&gt;

&lt;p&gt;The problem, known as a race condition, occurs when your system’s prize check and prize deduction aren’t fast enough. The system might check for the phone, see that it’s available, and decide to award it to Alice. But before it can finalize that decision and remove the phone from the inventory, the system also checks for Bob, sees the same phone is available, and awards it to him as well.&lt;/p&gt;

&lt;p&gt;The result? The system mistakenly gives the same prize to both Alice and Bob. This leads to frustrated customers, lost revenue, and a serious blow to your brand’s credibility. It’s the digital equivalent of two people trying to grab the last item on a shelf at the exact same time, but in this case, both of them walk away with it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Our Solution in Simple Terms
&lt;/h2&gt;

&lt;p&gt;So, how do big companies fix this problem? They don’t rely on luck. They use a special kind of database operation that acts like a digital bodyguard, ensuring a prize can only ever be claimed once.&lt;/p&gt;

&lt;p&gt;Imagine our prize is the single, brand-new phone. When you spin the wheel and win, your request goes to a serverless program (an AWS Lambda function). This program doesn’t just check if the prize is there — it tries to claim it in a single, un-interruptible step.&lt;/p&gt;

&lt;p&gt;This is the key. The program says to the database: “Give me this phone, but only if it’s still available.”&lt;/p&gt;

&lt;p&gt;If the phone is available, the database immediately gives it to you and updates its records so no one else can see it. This all happens so fast and so securely that no other person’s request can get in the way.&lt;br&gt;
If, however, another person (like Bob from our example) tried to claim the phone at the same time, their request would be denied instantly because the database would see that the prize count is now zero.&lt;/p&gt;

&lt;p&gt;This special “all-or-nothing” command is what big companies use. It ensures that even with millions of spins, the system will never give out the same prize twice. You get your immediate result, and the company’s prize inventory stays perfectly accurate, keeping everyone happy.&lt;/p&gt;
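&lt;p&gt;The "all-or-nothing" claim described above is, in DynamoDB terms, a conditional update: the write only succeeds if the stock is still above zero. As a minimal sketch (table and attribute names are illustrative, and the object mirrors the input shape of the SDK's UpdateItemCommand), the claim request can be built like this:&lt;/p&gt;

```javascript
// Sketch: build the parameters for a conditional "claim one prize" update.
// The ConditionExpression makes the decrement atomic: if two requests race,
// only one passes the condition; the other is rejected, so no duplicate prize.
const buildClaimParams = (tableName, prizeId) => ({
  TableName: tableName,
  Key: { PK: { S: 'PRIZE' }, SK: { S: prizeId } },
  UpdateExpression: 'SET available_stock = available_stock - :one',
  ConditionExpression: 'available_stock > :zero',
  ExpressionAttributeValues: {
    ':one': { N: '1' },
    ':zero': { N: '0' },
  },
});

const params = buildClaimParams('SpinWheel', '12345678');
console.log(params.ConditionExpression); // the guard that prevents duplication
```

&lt;p&gt;If the condition fails, DynamoDB throws a ConditionalCheckFailedException, which the caller treats as "prize already claimed".&lt;/p&gt;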


&lt;h2&gt;
  
  
  The Grand Architecture: The Full Picture
&lt;/h2&gt;

&lt;p&gt;This spin wheel module is designed as an independent, reusable component that can be integrated into any existing system (e.g., e-commerce, gaming app) via API calls. It uses AWS serverless services for scalability and low maintenance. You can also leverage my CDK implementation to easily deploy your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe86u3sx25e08m3js8wvh.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe86u3sx25e08m3js8wvh.webp" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Key features:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Independence:&lt;/strong&gt; The module exposes a REST API endpoint via API Gateway. Your existing system calls it with a user_id (authenticated via API key, JWT, or Cognito; authorization is omitted here for simplicity).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Race Condition Handling:&lt;/strong&gt; Uses DynamoDB’s atomic conditional updates (optimistic locking) to prevent over-allocation of prizes or extra spins under high concurrency.&lt;/p&gt;
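&lt;p&gt;As a concrete sketch of the optimistic-locking idea (attribute names follow the token record used in this module, but the helper itself is illustrative): every item carries a version number, the update is conditioned on that version being unchanged since the read, and a successful update bumps it.&lt;/p&gt;

```javascript
// Sketch: a versioned (optimistic-locking) update. It only applies if nobody
// else updated the item since we read it (version unchanged), and the write
// itself increments the version so concurrent writers conflict visibly.
const buildVersionedUpdate = (tableName, key, readVersion) => ({
  TableName: tableName,
  Key: key,
  UpdateExpression: 'SET spins_remaining = spins_remaining - :one, version = version + :one',
  ConditionExpression: 'spins_remaining > :zero AND version = :current',
  ExpressionAttributeValues: {
    ':one': { N: '1' },
    ':zero': { N: '0' },
    ':current': { N: String(readVersion) },
  },
});

const upd = buildVersionedUpdate(
  'SpinWheel',
  { PK: { S: 'PRIZE_TOKEN' }, SK: { S: 'abc' } },
  3
);
console.log(upd.ConditionExpression);
```

&lt;p&gt;A losing writer gets a conditional-check failure and can either retry with the fresh version or report the spin as already consumed.&lt;/p&gt;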

&lt;p&gt;&lt;strong&gt;3. Prize Logic:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each prize has a configurable weight (relative probability).&lt;/li&gt;
&lt;li&gt;Overall prize winning probability (e.g., 0.3 for 30% chance of winning any prize) is configurable. Logic: First, generate a random number; if it’s less than overall_win_prob, select a prize based on weighted random; else, return “no prize”.&lt;/li&gt;
&lt;li&gt;Prize stock is decremented atomically only if won and available.&lt;/li&gt;
&lt;/ul&gt;
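&lt;p&gt;The two-step draw above (overall win probability first, then a weighted pick) can be sketched as a pure function. The random draws are passed in as parameters here so the logic is deterministic and testable; in the Lambda they come from Math.random():&lt;/p&gt;

```javascript
// Sketch of the prize logic: first decide win/no-win using the overall
// probability, then pick a prize by weighted random selection.
// draw1 and draw2 are numbers in [0, 1).
const pickPrize = (prizes, overallWinProb, draw1, draw2) => {
  if (draw1 >= overallWinProb) return null; // overall "no prize" branch
  const totalWeight = prizes.reduce((sum, p) => sum + p.weight, 0);
  if (totalWeight === 0) return null;
  const randWeight = draw2 * totalWeight;
  let cumulative = 0;
  for (const prize of prizes) {
    cumulative += prize.weight;
    if (cumulative > randWeight) return prize; // landed in this prize's band
  }
  return null; // fallback
};

const prizes = [{ name: 'Mobile Phone', weight: 10 }, { name: 'Bag', weight: 50 }];
console.log(pickPrize(prizes, 0.3, 0.9, 0.5)); // draw1 is not below 0.3: null
```

&lt;p&gt;With these weights, the Bag is five times more likely than the Mobile Phone whenever the overall draw succeeds.&lt;/p&gt;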

&lt;p&gt;&lt;strong&gt;4. Token Eligibility:&lt;/strong&gt; Configurable max_spins (default 2) per user. Tracks spins_remaining in DynamoDB and token expiry date.&lt;/p&gt;
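&lt;p&gt;The eligibility check boils down to two conditions: spins left and token not expired. A minimal sketch (field names follow the token record in this design; timestamps are ISO 8601 strings):&lt;/p&gt;

```javascript
// Sketch: a token is eligible if it still has spins remaining
// and its expiry is in the future relative to "now".
const isTokenEligible = (tokenItem, nowIso) => {
  if (!tokenItem) return false;                       // unknown token
  if (!(tokenItem.spins_remaining > 0)) return false; // no spins left
  // expiry must be strictly in the future
  return new Date(tokenItem.expire_at).getTime() > new Date(nowIso).getTime();
};

const token = { spins_remaining: 2, expire_at: '2025-08-26T12:00:00Z' };
console.log(isTokenEligible(token, '2025-08-25T12:00:00Z')); // true
console.log(isTokenEligible(token, '2025-08-27T12:00:00Z')); // false
```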


&lt;h2&gt;
  
  
  Step-by-step AWS setup
&lt;/h2&gt;
&lt;h2&gt;
  
  
  Step 1: Set Up DynamoDB Table
&lt;/h2&gt;

&lt;p&gt;Use on-demand capacity for all tables to handle variable traffic.&lt;br&gt;
For the single-table approach, we use SpinWheel as the table name, with PK as the partition key and SK as the sort key, for minimal maintenance.&lt;/p&gt;
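&lt;p&gt;For reference, the console steps below correspond to a CreateTable call with input roughly like the following (a sketch of the AWS SDK for JavaScript v3 input shape, not a full deployment script; it would be passed to a CreateTableCommand from @aws-sdk/client-dynamodb):&lt;/p&gt;

```javascript
// Sketch: CreateTable input for the single-table design with on-demand capacity.
const createTableInput = {
  TableName: 'SpinWheel',
  AttributeDefinitions: [
    { AttributeName: 'PK', AttributeType: 'S' },
    { AttributeName: 'SK', AttributeType: 'S' },
  ],
  KeySchema: [
    { AttributeName: 'PK', KeyType: 'HASH' },  // partition key
    { AttributeName: 'SK', KeyType: 'RANGE' }, // sort key
  ],
  BillingMode: 'PAY_PER_REQUEST', // on-demand capacity
};

console.log(createTableInput.TableName);
```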

&lt;p&gt;&lt;strong&gt;1. Create global table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp8cj5h9xpxg806omfpa.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp8cj5h9xpxg806omfpa.webp" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Create Config Record (for global configs):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Item:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "PK": "GLOBAL",
  "SK": "CONFIG",
  "overall_win_prob": 0.3,
  "max_spins": 2,
  "token_expire_at": "2025-08-24T12:00:00Z",
  "last_updated": "2025-08-26T12:00:00Z"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;3. Create Prize Record&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Items:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
   {
      "PK":"PRIZE",
      "SK":"12345678",
      "name":"Mobile Phone",
      "initial_stock": 100,
      "available_stock": 95,
      "weight": 10,
      "version": 1,
      "last_updated":"2025-08-26T12:00:00Z",
      "active":true
   },
   {
      "PK":"PRIZE",
      "SK":"18765432",
      "name":"Bag",
      "initial_stock": 1000,
      "available_stock": 905,
      "weight": 50,
      "version": 1,
      "last_updated":"2025-08-25T12:00:00Z",
      "active":true
   },
   {
      "PK":"PRIZE",
      "SK":"28765432",
      "name":"1 Million",
      "initial_stock": 10,
      "available_stock": 5,
      "weight": 1,
      "version": 1,
      "last_updated":"2025-08-24T12:00:00Z",
      "active":false
   }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Token Record (for token history)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Example Item:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: the initial record is updated when the user reads the token, before clicking the spin wheel.&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
{
  "PK": "PRIZE_TOKEN",
  "SK": "12345kd2k249dmm3sd", // token
  "spins_total": 2,
  "spins_remaining": 0,
  "version": 3,
  "expire_at": "2025-08-26T12:00:00Z",
  "prizes_won": [
    {
      "id": "12345678",
      "name": "Mobile",
      "spin_timestamp": "2025-08-26T12:00:00Z",
      "chanceAt": 1
    }
   ]
},
{
  "PK": "PRIZE_TOKEN",
  "SK": "45645kd2k249dmm3sd",
  "spins_total": 2,
  "spins_remaining": 2,
  "version": 1,
  "expire_at": "2025-08-26T12:00:00Z",
  "prizes_won": []
},
{
  "PK": "PRIZE_TOKEN",
  "SK": "12345df2k249dmm3sd",
  "spins_total": 2,
  "spins_remaining": 0,
  "version": 3,
  "expire_at": "2025-08-26T12:00:00Z",
  "prizes_won": []
}
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 2: Set Up Lambda Function
&lt;/h2&gt;

&lt;h3&gt;
  
  
  A. Set Up Dependencies and Environment
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use Node.js 20.x runtime.&lt;/li&gt;
&lt;li&gt;Dependencies: @aws-sdk/client-dynamodb and @aws-sdk/util-dynamodb (for DynamoDB operations and attribute marshalling).&lt;/li&gt;
&lt;li&gt;No extra installs needed beyond those (crypto is built-in).&lt;/li&gt;
&lt;li&gt;Env Vars: DYNAMO_TABLE_NAME (e.g., "SpinWheel"), matching the name referenced in the function code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta7dxnk1omwpu9k4gsoe.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta7dxnk1omwpu9k4gsoe.webp" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb6mj3rw6jvk94re4f5x.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb6mj3rw6jvk94re4f5x.webp" alt=" " width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9p5yap35bmnqgir0k1q.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9p5yap35bmnqgir0k1q.webp" alt=" " width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  B. Add Permissions for the Lambda
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Go to claimSpinWheelRole (the Lambda's IAM role)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48tnzwvx35w0nznafydr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48tnzwvx35w0nznafydr.webp" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edit the permission&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6d7beo46sv2o4jnsp9i.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6d7beo46sv2o4jnsp9i.webp" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the permissions to access DynamoDB&lt;/li&gt;
&lt;li&gt;Replace the account ID
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:*:&amp;lt;your-account-id&amp;gt;:table/SpinWheel"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj39gdpj5alqex1j0fjlm.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj39gdpj5alqex1j0fjlm.webp" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  C. Upload the zip file with node modules
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You can create the index.js file locally and install the needed dependencies with npm.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Initialize a Node.js Project: Create a new directory for your Lambda function and initialize a Node.js project.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
mkdir spin-wheel-lambda
cd spin-wheel-lambda
npm init -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Install dependencies: Install the AWS SDK DynamoDB client and the attribute marshalling utilities.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @aws-sdk/client-dynamodb @aws-sdk/util-dynamodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Verify package.json: After installation, your package.json should look like this:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "spin-wheel-lambda",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "type": "module",
  "scripts": {
    "test": "echo \"Error: no test specified\" &amp;amp;&amp;amp; exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.645.0",
    "@aws-sdk/util-dynamodb": "^3.645.0"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: The version (3.645.0) may vary; use the latest stable version available at the time of installation.&lt;br&gt;
Also note the "type": "module" entry, which enables ES module (import/export) syntax.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Create ddbClient.js
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {DynamoDBClient} from "@aws-sdk/client-dynamodb";

const REGION= "ap-southeast-1";
export const ddbClient = new DynamoDBClient({ region: REGION });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Create index.js (a placeholder for now; the full handler code is added in a later step)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { ddbClient } from "./ddbClient.js";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="6"&gt;
&lt;li&gt;Create a ZIP File
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zip -r spin-wheel-lambda.zip index.js ddbClient.js node_modules package.json package-lock.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91vu51f7pl26wsgogb1o.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91vu51f7pl26wsgogb1o.webp" alt=" " width="720" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;Upload the zip file to the created Lambda function&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh015j1oo4b2f1lxxslth.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh015j1oo4b2f1lxxslth.webp" alt=" " width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="8"&gt;
&lt;li&gt;Replace the Lambda function code with the full handler
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import { GetItemCommand, QueryCommand, TransactWriteItemsCommand } from "@aws-sdk/client-dynamodb";
import { marshall, unmarshall } from "@aws-sdk/util-dynamodb";
import { ddbClient } from "./ddbClient.js";

const ERROR_MESSAGES = {
    INVALID_TOKEN: 'Invalid token or no spins remaining',
    NO_PRIZE: 'No prizes available'
};
export const handler = async (event) =&amp;gt; {
    const token = event.queryStringParameters?.token;

    try {
        const configItem = await loadConfig();
        const prizes = await loadEligiblePrizes();
        if (prizes.length === 0) {throw new Error("No prizes available");}

        const tokenItem = await checkTokenEligibility(token);
        if (!tokenItem || tokenItem.spins_remaining &amp;lt;= 0) {throw new Error(ERROR_MESSAGES.INVALID_TOKEN);}

        const overallWinProb = configItem.overall_win_prob;
        const result = await generateOutcomeAndUpdate(token, tokenItem.spins_total, tokenItem.spins_remaining, tokenItem.version, prizes, overallWinProb);

        return {
            statusCode: 200,
            headers: { 'Access-Control-Allow-Origin': '*' },
            body: JSON.stringify({
                outcome: result.outcome,
                prize: result.prizeWon,
                spins_remaining: tokenItem.spins_remaining - 1
            }),
        };

    } catch (error) {
        console.error({ level: 'ERROR', message: 'Handler error', error });
        if (error.message === ERROR_MESSAGES.INVALID_TOKEN) {
            return {
                statusCode: 400,
                headers: { 'Access-Control-Allow-Origin': '*' },
                body: JSON.stringify({ message: error.message }),
            };
        }

        if (error.message === ERROR_MESSAGES.NO_PRIZE) {
            return {
                statusCode: 200,
                headers: { 'Access-Control-Allow-Origin': '*' },
                body: JSON.stringify({ outcome: 'no_prize', message: error.message }),
            };
        }
        return {
            statusCode: 500,
            headers: { 'Access-Control-Allow-Origin': '*' },
            body: JSON.stringify({ message: 'Internal error' }),
        };
    }
};

const loadConfig = async () =&amp;gt; {
    const data = await ddbClient.send(new GetItemCommand({
        TableName: process.env.DYNAMO_TABLE_NAME,
        Key: marshall({ PK: 'GLOBAL', SK: 'CONFIG' })
    }));
    if (!data.Item) throw new Error('Config not found');
    return unmarshall(data.Item);
};


const loadEligiblePrizes = async () =&amp;gt; {
    const data = await ddbClient.send(new QueryCommand({
        TableName: process.env.DYNAMO_TABLE_NAME,
        KeyConditionExpression: 'PK = :pk',
        FilterExpression: 'active = :true AND available_stock &amp;gt; :zero AND weight &amp;gt; :zero',
        ProjectionExpression: 'SK, #name, available_stock, weight, version',
        ExpressionAttributeNames: { '#name': 'name' },
        ExpressionAttributeValues: marshall({
            ':pk': 'PRIZE',
            ':true': true,
            ':zero': 0
        })
    }));
    return data.Items.map(item =&amp;gt; unmarshall(item));
};

const checkTokenEligibility = async (token) =&amp;gt; {
    const data = await ddbClient.send(new GetItemCommand({
        TableName: process.env.DYNAMO_TABLE_NAME,
        Key: marshall({ PK: 'PRIZE_TOKEN', SK: token })
    }));
    const item = data.Item ? unmarshall(data.Item) : null;
    if (!item || new Date(item.expire_at) &amp;lt;= new Date()) {
        return null;
    }
    return {
        spins_remaining: item.spins_remaining,
        spins_total: item.spins_total,
        version: item.version
    };
};


const generateOutcomeAndUpdate = async (token, spinsTotal, spinsRemaining, tokenVersion, prizes, overallWinProb) =&amp;gt; {
    const result = calculateSpinOutcome(prizes, overallWinProb);
    const now = new Date().toISOString();
    const chanceAt = spinsTotal - spinsRemaining + 1;
    const transactItems = createTransactionItems(token, tokenVersion, spinsRemaining, result.prize, now, chanceAt);

    await ddbClient.send(new TransactWriteItemsCommand({ TransactItems: transactItems }));

    return {
        status: 'success',
        outcome: result.outcome,
        prizeWon: result.prize ? { id: result.prize.SK ,name:result.prize.name, spin_timestamp: now, chanceAt } : null
    };
};

const calculateSpinOutcome = (prizes, overallWinProb) =&amp;gt; {
    if (Math.random() &amp;gt; overallWinProb) {
        return { outcome: 'no_prize', prize: null };
    }

    const totalWeight = prizes.reduce((sum, p) =&amp;gt; sum + p.weight, 0);
    if (totalWeight === 0) {
        return { outcome: 'no_prize', prize: null };
    }

    let cumulative = 0;
    const randWeight = Math.random() * totalWeight;
    for (const prize of prizes) {
        cumulative += prize.weight;
        if (randWeight &amp;lt; cumulative) {
            return { outcome: 'win', prize };
        }
    }
    return { outcome: 'no_prize', prize: null }; // Fallback
};

const createTransactionItems = (token, tokenVersion, spinsRemaining, prize, now, chanceAt) =&amp;gt; {
    const transactItems = [{
        Update: {
            TableName: process.env.DYNAMO_TABLE_NAME,
            Key: marshall({ PK: 'PRIZE_TOKEN', SK: token }),
            UpdateExpression: 'SET spins_remaining = spins_remaining - :one, version = version + :one',
            ConditionExpression: 'spins_remaining &amp;gt; :zero AND version = :currentVersion',
            ExpressionAttributeValues: marshall({
                ':one': 1,
                ':zero': 0,
                ':currentVersion': tokenVersion
            })
        }
    }];

    if (prize) {
        transactItems[0].Update.UpdateExpression += ', prizes_won = list_append(prizes_won, :prizeList)';
        transactItems[0].Update.ExpressionAttributeValues = {
            ...transactItems[0].Update.ExpressionAttributeValues,
            ...marshall({
                ':prizeList': [{
                    id: prize.SK,
                    name: prize.name,
                    spin_timestamp: now,
                    chanceAt: chanceAt
                }]
            })
        };
        transactItems.push({
            Update: {
                TableName: process.env.DYNAMO_TABLE_NAME,
                Key: marshall({ PK: 'PRIZE', SK: prize.SK }),
                UpdateExpression: 'SET available_stock = available_stock - :one, version = version + :one',
                ConditionExpression: 'available_stock &amp;gt; :zero AND active = :true AND version = :currentVersion',
                ExpressionAttributeValues: marshall({
                    ':one': 1,
                    ':zero': 0,
                    ':true': true,
                    ':currentVersion': prize.version
                })
            }
        });
    }
    return transactItems;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  API Integration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Create REST API
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtt6wwofty2xmak685yg.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtt6wwofty2xmak685yg.webp" alt=" " width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Give it a name&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9tvhn64f73x8qg0ta5o.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9tvhn64f73x8qg0ta5o.webp" alt=" " width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create root resource&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yk23jju8n25q9dbfbas.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yk23jju8n25q9dbfbas.webp" alt=" " width="720" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create POST method&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdfynendznocgufo5i9b.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdfynendznocgufo5i9b.webp" alt=" " width="720" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk5960w4ngmwmvrlemyv.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk5960w4ngmwmvrlemyv.webp" alt=" " width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploy API with stage&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjscj3sgp3ecq4a8tqog1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjscj3sgp3ecq4a8tqog1.webp" alt=" " width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;For the full Spin Wheel module in CDK:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/minoltan/aws-usecases/tree/main/spin-wheel" rel="noopener noreferrer"&gt;Visit My GitHub&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Running a spin-to-win promotion at scale is far more than just adding a flashy wheel to your website — it’s a complex engineering challenge. Without the right safeguards, race conditions, duplicate prizes, and user frustration can quickly turn what should be an exciting campaign into a business nightmare. By leveraging AWS Serverless components like DynamoDB, Lambda, API Gateway, and managing everything with CDK, we can design a system that is scalable, reliable, and cost-efficient. The key lies in atomic operations and transactional logic, ensuring that no matter how many users spin at the same time, your prize inventory stays accurate and your customers get a fair experience.&lt;/p&gt;




&lt;p&gt;To stay informed on the latest technical insights and tutorials, connect with me on &lt;a href="https://medium.com/@issackpaul95" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/minoltan/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://dev.to/minoltan"&gt;Dev.to&lt;/a&gt;. For professional inquiries or technical discussions, please contact me via &lt;a href="mailto:issackpaul95@gmail.com" rel="noopener noreferrer"&gt;email&lt;/a&gt;. I welcome the opportunity to engage with fellow professionals and address any questions you may have.&lt;/p&gt;

</description>
      <category>scalability</category>
      <category>aws</category>
      <category>spinwheel</category>
      <category>serverless</category>
    </item>
    <item>
      <title>AWS Cloud Practitioner Questions | EC2 SAA Level </title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Sun, 30 Nov 2025 08:24:10 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-ec2-saa-level-a1c</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-ec2-saa-level-a1c</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;You have launched an EC2 instance that will host a NodeJS application. After installing all the required software and configuring your application, you noted down the EC2 instance's public IPv4 address so you could access it. Then, you stopped and started your EC2 instance to complete the application configuration. After the restart, you can't access the EC2 instance, and you find that its public IPv4 address has changed. What should you do to assign a fixed public IPv4 address to your EC2 instance?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F360mf8ov7eb5ttakddt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F360mf8ov7eb5ttakddt2.png" alt=" " width="789" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Allocating an Elastic IP provides a static public IPv4 address that remains associated with your EC2 instance, even when it is stopped and restarted, ensuring uninterrupted access to your application. This approach is important for maintaining reliable connectivity without the changes that occur with regular public IP addresses.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;You have an application performing big data analysis hosted on a fleet of EC2 instances. You want to ensure your EC2 instances have the highest networking performance while communicating with each other. Which EC2 Placement Group should you choose?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foq0tx08lftzt9vj6jhb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foq0tx08lftzt9vj6jhb1.png" alt=" " width="786" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; It ensures that your EC2 instances are physically located close together, which enhances networking performance and reduces latency, making it ideal for high-performance applications like big data analysis. This setup allows your instances to communicate more efficiently, aligning perfectly with your application's needs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;You have a critical application hosted on a fleet of EC2 instances in which you want to achieve maximum availability when there's an AZ failure. Which EC2 Placement Group should you choose?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fterk5zp8fv872yw69v9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fterk5zp8fv872yw69v9t.png" alt=" " width="786" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; It effectively ensures that your EC2 instances are distributed across different physical hardware in multiple Availability Zones (AZs), maximizing availability during an AZ failure. This configuration provides redundancy and reduces the risk of downtime for your critical application.&lt;/p&gt;
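&lt;p&gt;A hedged CLI sketch (names and IDs are placeholders): a spread placement group only changes the &lt;code&gt;--strategy&lt;/code&gt; flag, and instances launched into it land on distinct underlying hardware.&lt;/p&gt;

```bash
# Create a spread placement group; each instance is placed on distinct hardware,
# and launches can span multiple AZs within the Region for maximum availability
aws ec2 create-placement-group \
    --group-name critical-app-spread \
    --strategy spread

# Repeat run-instances with subnets in different AZs to spread across zones
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --subnet-id subnet-0aaa1111bbb22222c \
    --placement GroupName=critical-app-spread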




&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;Elastic Network Interface (ENI) can be attached to EC2 instances in another AZ.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlmgmmms2s1xyl2n9gi3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlmgmmms2s1xyl2n9gi3.png" alt=" " width="783" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; Elastic Network Interfaces (ENIs) are restricted to a specific Availability Zone (AZ) and cannot be attached to EC2 instances in other AZs, ensuring that the networking structure remains stable and localized within that zone. This understanding is essential for correctly managing resources in AWS.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;The following are true regarding EC2 Hibernate, EXCEPT:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finqa03nimgd24lhu4tz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finqa03nimgd24lhu4tz4.png" alt=" " width="787" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; For EC2 Hibernate to function, the root volume must be an encrypted EBS volume. Knowing this requirement is crucial when deciding whether an instance is eligible for hibernation.&lt;/p&gt;
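&lt;p&gt;As a CLI sketch (AMI, instance ID, and volume size are placeholders), hibernation is enabled at launch together with an encrypted EBS root volume, and later triggered at stop time:&lt;/p&gt;

```bash
# Launch with hibernation enabled; the root EBS volume must be encrypted
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --hibernation-options Configured=true \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"Encrypted":true,"VolumeSize":30}}]'

# Stop with hibernation instead of a plain stop (RAM is saved to the root volume)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0 --hibernate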

</description>
      <category>ec2</category>
      <category>ec2placementgroups</category>
      <category>ec2hibernate</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS Cloud Practitioner Questions | EC2 Fundamentals</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Sat, 29 Nov 2025 10:32:18 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-ec2-fundamentals-5d28</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-ec2-fundamentals-5d28</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;Which EC2 Purchasing Option can provide you the biggest discount, but it is not suitable for critical jobs or databases?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2n0qnzfffqjab6mt5ll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2n0qnzfffqjab6mt5ll.png" alt=" " width="781" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; Spot Instances offer the largest discounts among the EC2 Purchasing Options, making them cost-effective for non-critical workloads. However, they come with the risk of interruption, which makes them unsuitable for critical jobs or databases that require consistent availability.&lt;/p&gt;
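&lt;p&gt;A minimal sketch (the AMI ID is a placeholder): a Spot Instance can be requested directly through &lt;code&gt;run-instances&lt;/code&gt;, and AWS may reclaim it with a two-minute interruption notice, which is exactly why it suits only interruption-tolerant workloads:&lt;/p&gt;

```bash
# Request a Spot Instance; it may be interrupted with a two-minute warning
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5.large \
    --instance-market-options 'MarketType=spot'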




&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;What should you use to control traffic in and out of EC2 instances?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8rdp5rksy7em5h1x3a1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8rdp5rksy7em5h1x3a1.png" alt=" " width="781" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; They are specifically designed to control inbound and outbound traffic at the EC2 instance level, allowing you to tailor access and enhance the security of your instances effectively. This understanding is crucial for managing network security within AWS.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;How long can you reserve an EC2 Reserved Instance?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pq1141nol44wxlxfs0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pq1141nol44wxlxfs0h.png" alt=" " width="785" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; EC2 Reserved Instances can only be reserved for fixed terms of 1 or 3 years; no other durations are available. Knowing these fixed options is essential for long-term capacity planning and for managing resource costs effectively.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;You would like to deploy a High-Performance Computing (HPC) application on EC2 instances. Which EC2 instance type should you choose?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2mbo1w9my2okcs28hqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2mbo1w9my2okcs28hqm.png" alt=" " width="782" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; These EC2 instances are specifically tailored for compute-intensive tasks, making them ideal for High-Performance Computing (HPC) applications that require powerful processors and high performance. This choice aligns perfectly with your need for efficient processing in demanding workloads.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;Which EC2 Purchasing Option should you use for an application you plan to run on a server continuously for 1 year?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxd3r151e8fro8vwj6udp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxd3r151e8fro8vwj6udp.png" alt=" " width="785" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; They are specifically designed for long-term workloads, allowing you to reserve capacity for 1 or 3 years, ensuring consistency and cost-effectiveness for applications that need to run continuously. This choice effectively aligns with your need for a stable server environment over an extended period.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 6:
&lt;/h2&gt;

&lt;p&gt;You are preparing to launch an application that will be hosted on a set of EC2 instances. The application requires some software to be installed and some OS packages to be updated during the first launch. What is the best way to achieve this when you launch the EC2 instances?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyc0mw28x8v1ojy1bb3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyc0mw28x8v1ojy1bb3d.png" alt=" " width="783" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; Using EC2 User Data allows you to automatically execute a bash script on instance launch, streamlining the process of installing required software and updating OS packages without manual intervention. This method is efficient, especially for managing multiple instances, aligning well with best practices for cloud deployment.&lt;/p&gt;
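&lt;p&gt;A minimal User Data sketch, assuming an Amazon Linux AMI (hence &lt;code&gt;yum&lt;/code&gt;) and using &lt;code&gt;httpd&lt;/code&gt; purely as an example package:&lt;/p&gt;

```bash
#!/bin/bash
# EC2 User Data runs as root on first boot
yum update -y                 # update OS packages
yum install -y httpd          # install required software (httpd as an example)
systemctl enable --now httpd  # start the service now and at every boot
```

&lt;p&gt;The script can be supplied at launch, e.g. &lt;code&gt;aws ec2 run-instances ... --user-data file://bootstrap.sh&lt;/code&gt;.&lt;/p&gt;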




&lt;h2&gt;
  
  
  Question 7:
&lt;/h2&gt;

&lt;p&gt;Which EC2 Instance Type should you choose for a critical application that uses an in-memory database?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1he5lhpkjlu7gzaalj0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1he5lhpkjlu7gzaalj0p.png" alt=" " width="783" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; These EC2 instances are designed specifically for applications that require significant memory capacity, such as in-memory databases, enabling faster data access and processing. This aligns perfectly with your need for high performance in handling large data sets directly in memory.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 8:
&lt;/h2&gt;

&lt;p&gt;You have an e-commerce application with an OLTP database hosted on-premises. The application has become so popular that its database receives thousands of requests per second. You want to migrate the database to an EC2 instance. Which EC2 Instance Type should you choose to handle this high-frequency OLTP database?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhdu3zht47pcjftrxju2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhdu3zht47pcjftrxju2.png" alt=" " width="783" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; These storage-optimized EC2 instances are designed to deliver very high, low-latency I/O against large data sets, making them well-suited for high-frequency OLTP databases like the one behind your e-commerce application. This choice directly addresses the need to serve thousands of transactional requests per second efficiently.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 9:
&lt;/h2&gt;

&lt;p&gt;Security Groups can be attached to only one EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj0kaneidoxdfnlenvev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj0kaneidoxdfnlenvev.png" alt=" " width="780" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Security Groups are designed to be flexible, allowing you to attach the same group to multiple EC2 instances within the same AWS Region or VPC, simplifying management and enhancing security consistency across instances. This understanding aligns with best practices in AWS architecture.&lt;/p&gt;
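&lt;p&gt;A brief CLI sketch (AMI and security group IDs are placeholders) showing the same security group being reused: every instance launched here references the one group, so a rule change applies to all of them at once.&lt;/p&gt;

```bash
# The same security group can be referenced by many instances in the same VPC
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --count 3 \
    --security-group-ids sg-0123456789abcdef0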




&lt;h2&gt;
  
  
  Question 10:
&lt;/h2&gt;

&lt;p&gt;You're planning to migrate on-premises applications to AWS. Your company has strict compliance requirements that require your applications to run on dedicated servers. You also need to use your own server-bound software license to reduce costs. Which EC2 Purchasing Option is suitable for you?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbt2kjbgbjjped96jpi5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbt2kjbgbjjped96jpi5.png" alt=" " width="791" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; They provide the required dedicated servers for compliance needs and allow you to use your existing server-bound software licenses, making them highly suitable for your situation. This choice emphasizes your understanding of how to align AWS services with strict compliance and licensing requirements.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 11:
&lt;/h2&gt;

&lt;p&gt;You would like to deploy a database technology on an EC2 instance, and the vendor license bills you based on the physical cores and underlying sockets of the server. Which EC2 Purchasing Option allows you to get visibility into them?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypamqdfkqltjb9kwkjif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypamqdfkqltjb9kwkjif.png" alt=" " width="785" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; They provide you with visibility into the physical cores and network socket configurations, aligning perfectly with the licensing model based on those specifications. This choice demonstrates your understanding of how specific EC2 purchasing options can meet vendor compliance requirements effectively.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 12:
&lt;/h2&gt;

&lt;p&gt;Spot Fleet is a set of Spot Instances and optionally ……………&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq9jkztqwj4dcnfaxdaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq9jkztqwj4dcnfaxdaj.png" alt=" " width="785" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; Spot Fleet can include both Spot Instances and On-Demand Instances, providing flexibility to automatically request the most cost-effective option available while maintaining resource availability. This understanding is key to utilizing EC2's pricing models effectively.&lt;/p&gt;
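&lt;p&gt;As a hedged sketch (the role ARN and AMI ID are placeholders), a Spot Fleet request config can mix a Spot target capacity with an On-Demand baseline:&lt;/p&gt;

```json
{
  "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
  "TargetCapacity": 10,
  "OnDemandTargetCapacity": 2,
  "AllocationStrategy": "lowestPrice",
  "LaunchSpecifications": [
    { "ImageId": "ami-0123456789abcdef0", "InstanceType": "c5.large" }
  ]
}
```

&lt;p&gt;It would be submitted with &lt;code&gt;aws ec2 request-spot-fleet --spot-fleet-request-config file://config.json&lt;/code&gt;.&lt;/p&gt;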

</description>
      <category>ec2</category>
      <category>aws</category>
      <category>cloudexam</category>
      <category>ec2basics</category>
    </item>
    <item>
      <title>AWS Cloud Practitioner Questions | IAM &amp; CLI</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Thu, 27 Nov 2025 17:04:58 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-iam-cli-1pm7</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-iam-cli-1pm7</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;What is a proper definition of an IAM Role?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9jld1vi3rd4m93mc2rh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9jld1vi3rd4m93mc2rh.png" alt=" " width="786" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; It accurately describes that IAM Roles are used to assign specific permissions for AWS services to perform actions on your behalf. This is essential for managing access and ensuring that services can interact securely with other AWS resources.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;Which of the following is an IAM Security Tool?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F815j07eec33jhsjk4u6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F815j07eec33jhsjk4u6v.png" alt=" " width="786" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; It provides a comprehensive overview of all your AWS Account's IAM Users and the status of their credentials, helping you monitor and manage access effectively. This tool is essential for maintaining security and ensuring that all users have the appropriate and up-to-date credentials.&lt;/p&gt;
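&lt;p&gt;A quick CLI sketch for the credentials report: the report is generated on request, then downloaded as a base64-encoded CSV.&lt;/p&gt;

```bash
# Generate the account-wide IAM credential report, then fetch and decode it
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode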




&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;Which answer is INCORRECT regarding IAM Users?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0ogrtiopzji3e80us3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0ogrtiopzji3e80us3e.png" alt=" " width="785" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; IAM Users actually use their own unique credentials, such as usernames and passwords or access keys, to access AWS services. This distinction is important for maintaining security and proper access management within your AWS environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;Which of the following is an IAM best practice?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwjg0hsdv1p33e8uzln6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwjg0hsdv1p33e8uzln6.png" alt=" " width="785" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; It's a best practice to limit the use of the root account for critical account management tasks only. By using IAM Users for everyday activities, you enhance security and better manage permissions within your AWS environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;What are IAM Policies?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6hxs7jlmevujbrym3ar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6hxs7jlmevujbrym3ar.png" alt=" " width="785" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; IAM Policies are JSON documents that define which actions the users, groups, or roles they are attached to can perform on which AWS resources. This understanding is crucial for effectively managing security and access within your AWS environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 6:
&lt;/h2&gt;

&lt;p&gt;Which principle should you apply regarding IAM Permissions?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr895myc6pgn7kyzvo0oa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr895myc6pgn7kyzvo0oa.png" alt=" " width="785" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; It emphasizes giving users only the permissions necessary to perform their tasks, thereby reducing security risks. This principle is crucial for maintaining a secure AWS environment, as it limits potential damage from compromised accounts or human error.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 7:
&lt;/h2&gt;

&lt;p&gt;What should you do to increase your root account security?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9xhcxpuhpz1qq8q39zt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9xhcxpuhpz1qq8q39zt.png" alt=" " width="785" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (4)&lt;/strong&gt; Enabling Multi-Factor Authentication (MFA) significantly enhances your root account security by requiring an additional verification step beyond just a password, making it much harder for unauthorized users to access your account even if your password is compromised. This aligns with the principle of implementing strong security measures to protect sensitive accounts and data in your AWS environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 8:
&lt;/h2&gt;

&lt;p&gt;IAM User Groups can contain IAM Users and other User Groups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faym52t1y3qzyfqiyuoif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faym52t1y3qzyfqiyuoif.png" alt=" " width="785" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; IAM User Groups can contain only IAM Users; nesting one group inside another is not supported. This flat structure keeps permission management straightforward: you attach policies to a group and place users in it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Question 9:
&lt;/h2&gt;

&lt;p&gt;An IAM policy consists of one or more statements. A statement in an IAM Policy consists of the following, EXCEPT:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4npyb6ls5hfxh2sjlot2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4npyb6ls5hfxh2sjlot2.png" alt=" " width="780" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; Although it is an important element of an IAM Policy, it belongs at the policy level rather than inside the individual statements. Understanding this distinction helps clarify how IAM Policies are structured and lets you write and analyze permissions in AWS accurately.&lt;/p&gt;
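&lt;p&gt;To illustrate the structure (bucket name and Sid are made-up examples): &lt;code&gt;Version&lt;/code&gt; sits at the policy level, while each statement carries elements such as &lt;code&gt;Sid&lt;/code&gt;, &lt;code&gt;Effect&lt;/code&gt;, &lt;code&gt;Action&lt;/code&gt;, and &lt;code&gt;Resource&lt;/code&gt; (plus &lt;code&gt;Principal&lt;/code&gt; or &lt;code&gt;Condition&lt;/code&gt; where applicable):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyS3",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}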




</description>
      <category>aws</category>
      <category>iam</category>
      <category>cli</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Cloud Practitioner Questions | Serverless Solutions Architecture Discussions</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Mon, 13 Oct 2025 11:04:41 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-serverless-solutions-architecture-discussions-54mf</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-serverless-solutions-architecture-discussions-54mf</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;A startup company plans to run its application on AWS. As a solutions architect, the company hired you to design and implement a fully Serverless REST API. Which technology stack do you recommend?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr550pzztrw6bu808sij3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr550pzztrw6bu808sij3.png" alt=" " width="790" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; It allows you to handle HTTP requests without managing servers, enabling automatic scaling and cost efficiency. This aligns perfectly with your goal of implementing a serverless architecture, making your API both flexible and easy to maintain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;The following AWS services have an out of the box caching feature, EXCEPT ……………..&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f1e9z0z7avvww2fngma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f1e9z0z7avvww2fngma.png" alt=" " width="795" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; AWS Lambda does not offer an out-of-the-box caching feature, which distinguishes it from other AWS services like API Gateway and DynamoDB that do provide caching capabilities. This understanding of the different functionalities available in AWS services can enhance your expertise in building serverless architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;You have a lot of static files stored in an S3 bucket that you want to distribute globally to your users. Which AWS service should you use?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzs5cbtaprbbzeuo1qa02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzs5cbtaprbbzeuo1qa02.png" alt=" " width="793" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; It is specifically designed as a content delivery network (CDN), which optimizes the distribution of static files globally, ensuring low latency and fast transfer speeds for your users. This capability perfectly aligns with your need to efficiently deliver static content stored in an S3 bucket.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;You have created a DynamoDB table in ap-northeast-1 and would like to make it available in eu-west-1, so you decided to create a DynamoDB Global Table. What needs to be enabled first before you create a DynamoDB Global Table?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74l49u4grijc8goap1lu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74l49u4grijc8goap1lu.png" alt=" " width="793" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; It is essential for enabling the replication of data changes across different AWS Regions when creating a Global Table. This functionality aligns with your objective to ensure that your DynamoDB table is consistently updated and available in multiple regions.&lt;/p&gt;
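&lt;p&gt;A short CLI sketch (table name is a placeholder): DynamoDB Streams must be enabled with the new-and-old-images view before the table can participate in a Global Table.&lt;/p&gt;

```bash
# Enable DynamoDB Streams on the source table in ap-northeast-1
aws dynamodb update-table \
    --table-name MyTable \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
    --region ap-northeast-1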

&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;You have configured a Lambda function to run each time an item is added to a DynamoDB table, using DynamoDB Streams. The function is meant to insert messages into an SQS queue for further long-running processing jobs. Each time the Lambda function is invoked, it can read from the DynamoDB Stream but it isn't able to insert the messages into the SQS queue. What do you think the problem is?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c06np04o7zrhl36smxt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c06np04o7zrhl36smxt.png" alt=" " width="793" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; For your Lambda function to successfully insert messages into an SQS queue, it must have the appropriate permissions assigned to its execution role. This highlights the importance of ensuring that IAM roles are properly configured to allow Lambda functions access to necessary AWS services.&lt;/p&gt;
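&lt;p&gt;A minimal sketch of the statement the execution role is missing; the account ID and queue name in the ARN are placeholders:&lt;/p&gt;

```python
import json

# Hypothetical IAM policy granting the Lambda execution role permission to
# send messages to the target SQS queue.
sqs_send_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:123456789012:jobs-queue",
        }
    ],
}
print(json.dumps(sqs_send_policy, indent=2))
```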

&lt;h2&gt;
  
  
  Question 6:
&lt;/h2&gt;

&lt;p&gt;You would like to create an architecture for a micro-services application whose sole purpose is to encode videos stored in an S3 bucket and store the encoded videos back into an S3 bucket. You would like to make this micro-services application reliable, with the ability to retry upon failures. Each video may take over 25 minutes to process. The services used in the architecture should be asynchronous and should be able to be stopped for a day and resumed the next day, picking up the videos that haven't been encoded yet. Which of the following AWS services would you recommend in this scenario?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxe4dbyzrxgu0a3ompbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpxe4dbyzrxgu0a3ompbr.png" alt=" " width="793" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; Amazon SQS allows you to queue video encoding tasks, retaining them until you're ready to process them, while EC2 instances can be started and stopped as needed, enabling flexible management of your processing workload. This setup ensures reliability and supports your requirement to pause and resume encoding tasks over time.&lt;/p&gt;
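&lt;p&gt;One practical detail when SQS fronts long jobs like these: the queue's visibility timeout must exceed the encoding time, or an in-flight message reappears and a second worker re-encodes the same video. A hedged sizing sketch (the safety factor of 2 is an assumption, not an AWS recommendation):&lt;/p&gt;

```python
# Size the SQS visibility timeout from the longest expected job.
MAX_ENCODE_MINUTES = 25
SQS_MAX_VISIBILITY_SECONDS = 12 * 60 * 60   # SQS caps this at 12 hours

def visibility_timeout_seconds(max_job_minutes, safety_factor=2):
    # Double the worst-case duration, but never exceed the SQS hard cap.
    return min(max_job_minutes * 60 * safety_factor, SQS_MAX_VISIBILITY_SECONDS)

print(visibility_timeout_seconds(MAX_ENCODE_MINUTES))   # 3000 seconds
```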

&lt;h2&gt;
  
  
  Question 7:
&lt;/h2&gt;

&lt;p&gt;You are running a photo-sharing website where your images are downloaded from all over the world. Every month you publish a master pack of beautiful mountain images that are over 15 GB in size. The content is currently hosted on an Elastic File System (EFS) file system and distributed by an Application Load Balancer and a set of EC2 instances. Each month, you are experiencing very high traffic which increases the load on your EC2 instances and increases network costs. What do you recommend to reduce EC2 load and network costs without refactoring your website?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fxfoe05hqzktjc0ypbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fxfoe05hqzktjc0ypbe.png" alt=" " width="793" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (4)&lt;/strong&gt; CloudFront acts as a Content Delivery Network (CDN): it caches the image pack at edge locations and serves it globally, reducing the load on your EC2 instances, lowering latency, and cutting network costs. This efficiently handles the monthly traffic spike without significant changes to your existing architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 8:
&lt;/h2&gt;

&lt;p&gt;An AWS service allows you to capture gigabytes of data per second in real time and deliver that data to multiple consuming applications, with a replay feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2lurkyg1l2mjnhfkiva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2lurkyg1l2mjnhfkiva.png" alt=" " width="796" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Amazon Kinesis Data Streams lets you capture and process large volumes of real-time data from multiple sources, and because records are retained in the stream, consumers can replay them. This makes it ideal for applications that require rapid, scalable data ingestion by multiple consuming applications.&lt;/p&gt;
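&lt;p&gt;For a feel of the producer side, the sketch below only assembles the parameters a Kinesis Data Streams PutRecord call would receive; the stream name and payload are made up.&lt;/p&gt;

```python
import json

# Sketch: parameters for Kinesis Data Streams' PutRecord API. Records that
# share a partition key go to the same shard, which preserves their order.
def build_put_record_params(stream_name, payload, partition_key):
    return {
        "StreamName": stream_name,
        "Data": json.dumps(payload).encode("utf-8"),
        "PartitionKey": partition_key,
    }

params = build_put_record_params("clickstream", {"page": "/home"}, "user-42")
# A real call would be: boto3.client("kinesis").put_record(**params)
```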

</description>
      <category>serverless</category>
      <category>beginners</category>
      <category>architecture</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS Cloud Practitioner Questions | Serverless Overview from a Solutions Architect Perspective</title>
      <dc:creator>Minoltan Issack</dc:creator>
      <pubDate>Tue, 07 Oct 2025 07:26:09 +0000</pubDate>
      <link>https://forem.com/minoltan/aws-cloud-practitioner-questions-serverless-overview-from-a-solutions-architect-perspective-2555</link>
      <guid>https://forem.com/minoltan/aws-cloud-practitioner-questions-serverless-overview-from-a-solutions-architect-perspective-2555</guid>
      <description>&lt;h2&gt;
  
  
  Question 1:
&lt;/h2&gt;

&lt;p&gt;You have created a Lambda function that typically will take around 1 hour to process some data. The code works fine when you run it locally on your machine, but when you invoke the Lambda function it fails with a "timeout" error after 3 seconds. What should you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c17dagv8y69lymy1vjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c17dagv8y69lymy1vjk.png" alt=" " width="786" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; Lambda's maximum execution time is limited to 15 minutes, which is insufficient for your 1-hour processing task. Using services like EC2 allows you to run your code without these time constraints, enabling you to complete your processing as needed.&lt;/p&gt;
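&lt;p&gt;The arithmetic behind the answer is simple enough to sketch: even at Lambda's maximum configurable timeout, a 1-hour job cannot fit.&lt;/p&gt;

```python
# Lambda's default timeout is 3 seconds; the configurable maximum is
# 15 minutes (900 seconds).
LAMBDA_MAX_TIMEOUT_SECONDS = 15 * 60

def fits_in_lambda(job_seconds):
    return LAMBDA_MAX_TIMEOUT_SECONDS >= job_seconds

print(fits_in_lambda(3600))   # a 1-hour job: False
```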

&lt;h2&gt;
  
  
  Question 2:
&lt;/h2&gt;

&lt;p&gt;Before you create a DynamoDB table, you need to provision the EC2 instance the DynamoDB table will be running on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fea8blkl226pt037dl99s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fea8blkl226pt037dl99s.png" alt=" " width="785" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; DynamoDB is a fully managed, serverless database service that doesn't require you to provision or manage any servers, allowing it to automatically handle capacity changes and maintain performance without your intervention. This clarity helps you understand the distinction between serverless and traditional server-based architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 3:
&lt;/h2&gt;

&lt;p&gt;You have provisioned a DynamoDB table with 10 RCUs and 10 WCUs. A month later you want to increase the RCU to handle more read traffic. What should you do?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42s8ry0ticy50hlcxnzi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42s8ry0ticy50hlcxnzi.png" alt=" " width="791" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; DynamoDB allows you to adjust read and write capacities independently; since your need is only an increase in read capacity, you don't need to change the write capacity. This understanding demonstrates your grasp of how provisioned throughput works in DynamoDB.&lt;/p&gt;
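&lt;p&gt;A sketch of what that adjustment looks like: UpdateTable takes both capacity values, so the current WCU is simply passed back unchanged. Only the parameter dict is built here, and the table name is hypothetical; a real call would hand it to boto3.&lt;/p&gt;

```python
# Sketch: raise read capacity while keeping write capacity at its current
# value (10 WCU, per the scenario).
def build_scale_reads_params(table_name, new_rcu, current_wcu):
    return {
        "TableName": table_name,
        "ProvisionedThroughput": {
            "ReadCapacityUnits": new_rcu,
            # Both values are required by UpdateTable; WCU stays as-is.
            "WriteCapacityUnits": current_wcu,
        },
    }

params = build_scale_reads_params("Products", 50, 10)
# A real call would be: boto3.client("dynamodb").update_table(**params)
```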

&lt;h2&gt;
  
  
  Question 4:
&lt;/h2&gt;

&lt;p&gt;You have an e-commerce website where you are using DynamoDB as your database. You are about to enter the Christmas sale, and you have a few items which are very popular and which you expect will be read often. Unfortunately, last year, due to the huge traffic, you ran into ProvisionedThroughputExceededException errors. What would you do to prevent this error from happening again?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5iigttwu2yxl8tral9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5iigttwu2yxl8tral9w.png" alt=" " width="791" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; DynamoDB Accelerator (DAX) caches frequently accessed items in memory, so repeated reads of the popular items no longer consume your provisioned throughput, preventing ProvisionedThroughputExceededException during high-traffic events like sales. This solution balances read performance with cost efficiency, ensuring a smoother customer experience.&lt;/p&gt;
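&lt;p&gt;The effect a cache has on hot items can be simulated locally. In this toy sketch a dict stands in for the DynamoDB table and another for the cache; only the first read of the popular item ever touches the "table", which is roughly what DAX does transparently for reads.&lt;/p&gt;

```python
# Toy cache-aside simulation of read caching for a hot item.
table = {"item-1": {"name": "Popular gift"}}   # stand-in for the DynamoDB table
cache = {}
table_reads = 0

def get_item(key):
    global table_reads
    if key in cache:        # cache hit: no table read, no RCU consumed
        return cache[key]
    table_reads += 1        # cache miss: read through to the table
    cache[key] = table[key]
    return cache[key]

for _ in range(1000):       # sale traffic hammering one hot item
    get_item("item-1")
print(table_reads)          # 1: only the first read reached the table
```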

&lt;h2&gt;
  
  
  Question 5:
&lt;/h2&gt;

&lt;p&gt;You have developed a mobile application that uses DynamoDB as its datastore. You want to automate sending welcome emails to new users after they sign up. What is the most efficient way to achieve this?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2appo3uwshdr8rrmahbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2appo3uwshdr8rrmahbl.png" alt=" " width="786" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; This approach allows your application to react instantly to new user sign-ups, sending welcome emails without any manual intervention. By triggering a Lambda function from DynamoDB Streams, you get a scalable, real-time solution that engages users as soon as they join.&lt;/p&gt;
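&lt;p&gt;A sketch of what such a stream-triggered function might look like. The event shape matches DynamoDB Streams records, but the email helper and attribute names are placeholders, not real APIs.&lt;/p&gt;

```python
# Hypothetical Lambda handler wired to the table's stream. It reacts only to
# INSERT events (new sign-ups); send_welcome_email is a stand-in for SES.
def send_welcome_email(address):
    print("welcome sent to " + address)

def handler(event, context):
    sent = 0
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            new_user = record["dynamodb"]["NewImage"]
            send_welcome_email(new_user["email"]["S"])
            sent += 1
    return sent

# Local dry run with a fake stream event:
fake_event = {"Records": [{"eventName": "INSERT",
                           "dynamodb": {"NewImage": {"email": {"S": "a@example.com"}}}}]}
print(handler(fake_event, None))   # 1
```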

&lt;h2&gt;
  
  
  Question 6:
&lt;/h2&gt;

&lt;p&gt;To create a serverless API, you should integrate Amazon API Gateway with ………………….&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngwaqr8udlyjh32kjkdb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngwaqr8udlyjh32kjkdb.png" alt=" " width="786" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; AWS Lambda runs your code in response to the HTTP requests that API Gateway forwards to it, with no server infrastructure to manage, which is what makes the API serverless. This aligns with the learning objective of understanding serverless architecture and the components that support it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 7:
&lt;/h2&gt;

&lt;p&gt;When you are using an Edge-Optimized API Gateway, your API Gateway lives in CloudFront Edge Locations across all AWS Regions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fglbdvifgoohenfor7t9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fglbdvifgoohenfor7t9x.png" alt=" " width="781" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Edge-Optimized API Gateway primarily serves geographically distributed clients by routing requests to the nearest CloudFront Edge Location, but it is still fundamentally hosted in a single AWS Region. This distinction is key for understanding how latency is reduced while maintaining a centralized API design.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 8:
&lt;/h2&gt;

&lt;p&gt;You are running an application in production that is leveraging DynamoDB as its datastore and is experiencing smooth, sustained usage. The application also needs to run in development mode, where it will experience an unpredictable volume of requests. What is the most cost-effective solution that you recommend?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpgawus6dkzeujv2rq2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpgawus6dkzeujv2rq2w.png" alt=" " width="789" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; It effectively balances cost and performance. In production, the predictable workload benefits from Provisioned Capacity with Auto Scaling, while development's unpredictable requests are handled flexibly and cost-effectively with On-Demand Capacity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 9:
&lt;/h2&gt;

&lt;p&gt;You have an application that is served globally using a CloudFront Distribution. You want to authenticate users at the CloudFront Edge Locations instead of having authentication requests go all the way to your origins. What should you use to satisfy this requirement?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fst95dprdyf9snfbw9sxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fst95dprdyf9snfbw9sxr.png" alt=" " width="788" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; Lambda@Edge allows you to run code directly at CloudFront Edge Locations, so you can authenticate users close to where they access your application, which improves performance and minimizes latency. This aligns with the goal of efficiently managing user authentication in a global context.&lt;/p&gt;
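&lt;p&gt;As a rough sketch, a viewer-request function at the edge can short-circuit unauthenticated requests before they ever reach the origin. The check below (presence of an Authorization header) is a placeholder for real token validation.&lt;/p&gt;

```python
# Hypothetical Lambda@Edge viewer-request handler: reject requests lacking
# an Authorization header; otherwise pass the request through to the origin.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    if "authorization" not in request["headers"]:
        return {"status": "401", "statusDescription": "Unauthorized"}
    return request

fake_event = {"Records": [{"cf": {"request": {"uri": "/photo.jpg", "headers": {}}}}]}
print(handler(fake_event, None)["status"])   # 401
```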

&lt;h2&gt;
  
  
  Question 10:
&lt;/h2&gt;

&lt;p&gt;The maximum size of an item in a DynamoDB table is ……………….&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yuew7ma7n4cu4epy423.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yuew7ma7n4cu4epy423.png" alt=" " width="792" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; The maximum size of a single item in an Amazon DynamoDB table is 400 KB. This knowledge is essential for understanding DynamoDB's storage limits and for designing your data structure effectively.&lt;/p&gt;
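&lt;p&gt;A rough pre-flight check is easy to sketch. Real DynamoDB item-size accounting has more nuance than attribute names plus values, so treat this as an approximation.&lt;/p&gt;

```python
# Approximate an item's size as UTF-8 attribute names plus stringified values,
# then compare against DynamoDB's 400 KB per-item limit.
DYNAMODB_ITEM_LIMIT_BYTES = 400 * 1024

def approx_item_size(item):
    return sum(len(k.encode("utf-8")) + len(str(v).encode("utf-8"))
               for k, v in item.items())

too_big = {"pk": "user-1", "bio": "x" * 500_000}
print(approx_item_size(too_big) > DYNAMODB_ITEM_LIMIT_BYTES)   # True
```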

&lt;h2&gt;
  
  
  Question 11:
&lt;/h2&gt;

&lt;p&gt;Which AWS service allows you to build Serverless workflows using AWS services (e.g., Lambda) and supports human approval?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vw1ajfq91lwpe0od6yo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vw1ajfq91lwpe0od6yo.png" alt=" " width="788" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; AWS Step Functions lets you orchestrate multiple AWS services, including Lambda, into serverless workflows, and it supports human-approval steps, making it ideal for applications that coordinate automated and manual processes. This aligns with understanding how to manage serverless workflows efficiently.&lt;/p&gt;
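&lt;p&gt;The human-approval part typically relies on the task-token callback pattern. The Amazon States Language sketch below pauses at one state until something (an approver's click, in practice) calls SendTaskSuccess with the token; the Lambda function name is hypothetical.&lt;/p&gt;

```python
import json

# Sketch of a Step Functions definition using .waitForTaskToken: execution
# pauses at WaitForApproval until SendTaskSuccess is called with the task
# token handed to the notifier Lambda.
definition = {
    "StartAt": "WaitForApproval",
    "States": {
        "WaitForApproval": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "notify-approver",          # hypothetical
                "Payload": {"token.$": "$$.Task.Token"},
            },
            "Next": "Approved",
        },
        "Approved": {"Type": "Succeed"},
    },
}
print(json.dumps(definition, indent=2))
```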

&lt;h2&gt;
  
  
  Question 12:
&lt;/h2&gt;

&lt;p&gt;A company has a serverless application on AWS which consists of Lambda, DynamoDB, and Step Functions. In the last month, there has been an increase in the number of requests against the application, which has increased DynamoDB costs, and requests have started to be throttled. Further investigation shows that the majority of requests are read requests against a few queries on the DynamoDB table. What do you recommend to prevent throttling and reduce costs efficiently?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0l6f0b61nyophcvcvzk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw0l6f0b61nyophcvcvzk.png" alt=" " width="785" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (4)&lt;/strong&gt; DAX significantly improves read performance for DynamoDB by providing in-memory caching, reducing latency and costs associated with frequent read requests, which is crucial in your scenario where throttling occurred. This approach aligns with the learning objective of optimizing serverless applications on AWS for efficiency and cost-effectiveness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 13:
&lt;/h2&gt;

&lt;p&gt;You are a DevOps engineer in a football company that has a website backed by a DynamoDB table. The table stores viewers' feedback on football matches. You have been tasked with working with the analytics team to generate reports on the viewers' feedback. The analytics team wants the data in DynamoDB JSON format, hosted in an S3 bucket, so they can start working on it and create the reports. What is the best and most cost-effective way to achieve this task?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnarq6x2gy6gv16exkkt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnarq6x2gy6gv16exkkt.png" alt=" " width="785" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (1)&lt;/strong&gt; DynamoDB's built-in Export to S3 feature writes the table's data to an S3 bucket in DynamoDB JSON format with minimal effort and cost, which is exactly what the analytics team needs to generate its reports. This approach demonstrates your understanding of optimizing workflows across AWS services.&lt;/p&gt;
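&lt;p&gt;Assuming point-in-time recovery is enabled on the table (a prerequisite for this API), the export is a single call. The sketch only builds the parameters; the table ARN and bucket name are placeholders.&lt;/p&gt;

```python
# Sketch: parameters for DynamoDB's ExportTableToPointInTime API, which
# writes the table's data to S3 in DynamoDB JSON format without consuming
# read capacity on the table.
export_params = {
    "TableArn": "arn:aws:dynamodb:us-east-1:123456789012:table/MatchFeedback",
    "S3Bucket": "analytics-feedback-exports",
    "ExportFormat": "DYNAMODB_JSON",
}
# A real call would be:
# boto3.client("dynamodb").export_table_to_point_in_time(**export_params)
```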

&lt;h2&gt;
  
  
  Question 14:
&lt;/h2&gt;

&lt;p&gt;A website is currently in the development process and it is going to be hosted on AWS. There is a requirement to store user sessions for users logged in to the website with an automatic expiry and deletion of expired user sessions. Which of the following AWS services are best suited for this use case?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mcs17doq9fwa4lf6i8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mcs17doq9fwa4lf6i8c.png" alt=" " width="792" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; DynamoDB's Time to Live (TTL) feature lets you manage user sessions with automatic expiration: expired sessions are deleted without any manual intervention. This aligns with the learning objective of using AWS services to automate data management processes effectively.&lt;/p&gt;
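&lt;p&gt;A session item only needs one extra numeric attribute for this to work. The sketch assumes 30-minute sessions and an attribute named &lt;code&gt;expires_at&lt;/code&gt;, both hypothetical choices; TTL must then be enabled on the table for that attribute.&lt;/p&gt;

```python
import time

# Build a session item carrying an epoch-seconds TTL attribute. DynamoDB
# deletes the item some time after expires_at passes; no cleanup job needed.
SESSION_LIFETIME_SECONDS = 30 * 60   # assumed session length

def build_session_item(session_id, now=None):
    now = int(now if now is not None else time.time())
    return {
        "session_id": {"S": session_id},
        "expires_at": {"N": str(now + SESSION_LIFETIME_SECONDS)},
    }

item = build_session_item("sess-123", now=1_700_000_000)
print(item["expires_at"]["N"])   # 1700001800
```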

&lt;h2&gt;
  
  
  Question 15:
&lt;/h2&gt;

&lt;p&gt;You have a mobile application and would like to give your users access to their own personal space in the S3 bucket. How do you achieve that?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xv2tkblnnu4sfcpbsdg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xv2tkblnnu4sfcpbsdg.png" alt=" " width="792" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (2)&lt;/strong&gt; Amazon Cognito lets you manage mobile user identities and grant each user scoped IAM permissions, giving them secure access to their own area of the S3 bucket. This directly aligns with the goal of providing personalized, secure storage for users of mobile applications.&lt;/p&gt;
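&lt;p&gt;The per-user scoping is typically done with an IAM policy variable on the Cognito authenticated role: the variable expands to each user's identity ID, confining every user to their own prefix. A sketch, with a made-up bucket name:&lt;/p&gt;

```python
import json

# Hypothetical policy for the Cognito authenticated role. The
# ${cognito-identity.amazonaws.com:sub} variable is substituted per user.
per_user_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::app-user-files/${cognito-identity.amazonaws.com:sub}/*",
    }],
}
print(json.dumps(per_user_policy, indent=2))
```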

&lt;h2&gt;
  
  
  Question 16:
&lt;/h2&gt;

&lt;p&gt;You are developing a new web and mobile application that will be hosted on AWS and currently, you are working on developing the login and signup page. The application backend is serverless and you are using Lambda, DynamoDB, and API Gateway. Which of the following is the best and easiest approach to configure the authentication for your backend?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foh6yj6v21q5332lcu9hx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foh6yj6v21q5332lcu9hx.png" alt=" " width="792" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; Amazon Cognito provides a simple, secure way to manage user authentication, including the sign-up and login flows, which is essential for a serverless application backend. This choice aligns with the learning objective of understanding efficient user management in cloud-based applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Question 17:
&lt;/h2&gt;

&lt;p&gt;You are running a mobile application where you want each registered user to upload/download images to/from their own folder in the S3 bucket. Also, you want to let your users sign up and sign in using their social media accounts (e.g., Facebook). Which AWS service should you choose?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pgnwq3qye1ihuqsbpir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pgnwq3qye1ihuqsbpir.png" alt=" " width="792" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correct Answer: (3)&lt;/strong&gt; Amazon Cognito manages user authentication and supports social sign-in, making it ideal for giving mobile users secure access to their own folders in S3. This aligns with the learning objective of implementing user management in cloud applications while keeping the experience scalable and user-friendly.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>beginners</category>
      <category>architecture</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
