<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: EightPLabs</title>
    <description>The latest articles on Forem by EightPLabs (@eightplabs).</description>
    <link>https://forem.com/eightplabs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F968970%2Fed229883-6bc8-445f-a086-375a907c3a7d.png</url>
      <title>Forem: EightPLabs</title>
      <link>https://forem.com/eightplabs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/eightplabs"/>
    <language>en</language>
    <item>
      <title>RDS on AWS - Step by step guide to creating RDS</title>
      <dc:creator>EightPLabs</dc:creator>
      <pubDate>Wed, 06 Mar 2024 11:05:03 +0000</pubDate>
      <link>https://forem.com/eightplabs/rds-on-aws-creating-rds-5g2a</link>
      <guid>https://forem.com/eightplabs/rds-on-aws-creating-rds-5g2a</guid>
      <description>&lt;h1&gt;
  
  
  RDS on AWS - Creating RDS
&lt;/h1&gt;

&lt;p&gt;We discussed the pros and cons of RDS, and whether or not to use it, &lt;a href="https://eightplabs.com/aws-rds-when-to-shake-hands-and-when-to-ignore"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you have decided to use it, this post is a step-by-step guide to creating and using RDS.&lt;/p&gt;

&lt;p&gt;First, you'll choose a database engine for your RDS instance from options such as MySQL, PostgreSQL, SQL Server, Oracle, and MariaDB. Then, select a predefined template suited to production or development/testing, or use the Free Tier setup. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure availability and durability by configuring settings for high availability and data redundancy to guarantee reliable database operation. &lt;/li&gt;
&lt;li&gt;Next, adjust configurations related to instance type, storage, networking, and security to meet your application's requirements. You can choose instance types based on workload characteristics, define storage options such as volume type, size, and auto-scaling settings to accommodate data requirements. &lt;/li&gt;
&lt;li&gt;Configure network access settings like internal access within AWS services, public access, VPC connectivity, and RDS Proxy setup. Determine connectivity options for EC2 compute resources to interact with the RDS instance, and choose whether to assign a public IP address for external access. &lt;/li&gt;
&lt;li&gt;Specify the Virtual Private Cloud (VPC) and associated subnet groups for deploying the RDS instance within a secure network environment. Additionally, create an RDS Proxy for enhanced database scalability, resilience, and security. &lt;/li&gt;
&lt;li&gt;Set up authentication methods and define user access permissions using password authentication, IAM roles, or Kerberos authentication. Enable monitoring features like Performance Insights and Enhanced Monitoring to track database performance metrics and diagnose issues. &lt;/li&gt;
&lt;li&gt;Configure automated backup settings such as retention periods, backup windows, and replication options for data protection, and enable encryption at rest and in transit to enhance data security and compliance with industry standards. &lt;/li&gt;
&lt;li&gt;Schedule automated maintenance tasks such as minor version upgrades and patching to ensure database health and security. Finally, understand the pricing model, estimate costs, and manage billing preferences to optimize spending and budget allocation for RDS usage. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By following these steps, you'll effectively manage your RDS instance on AWS.&lt;/p&gt;
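&lt;p&gt;For readers who script their infrastructure, the checklist above maps almost one-to-one onto the RDS API. Below is a minimal sketch using boto3; every value is an illustrative placeholder, not a recommendation:&lt;/p&gt;

```python
# Sketch: the console choices expressed as boto3 create_db_instance parameters.
# All values below are illustrative placeholders, not recommendations.

def build_rds_params():
    """Assemble a parameter dict mirroring the console walkthrough."""
    return {
        "DBInstanceIdentifier": "demo-postgres",  # Step 5: unique name
        "Engine": "postgres",                     # Step 3: engine choice
        "DBInstanceClass": "db.t3.micro",         # Step 6: burstable class
        "MasterUsername": "postgres",             # Step 5: master user
        "ManageMasterUserPassword": True,         # Secrets Manager option
        "AllocatedStorage": 20,                   # Step 7: 20 GiB minimum
        "StorageType": "gp2",                     # Step 7: storage type
        "MaxAllocatedStorage": 100,               # storage autoscaling cap
        "PubliclyAccessible": False,              # Step 8: public access = No
        "BackupRetentionPeriod": 1,               # Step 11: backups kept 1 day
        "StorageEncrypted": True,                 # Step 11: encryption at rest
    }

params = build_rds_params()
# To actually create the instance (requires AWS credentials):
# import boto3
# boto3.client("rds").create_db_instance(**params)
print(sorted(params))
```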

&lt;h2&gt;
  
  
  Hands-On: Creating RDS
&lt;/h2&gt;

&lt;p&gt;In this section, we will create an RDS database in the AWS console by following a few simple steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites:
&lt;/h3&gt;

&lt;p&gt;These are the prerequisites for creating RDS in the AWS Management Console:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the AWS Management Console to access RDS.&lt;/li&gt;
&lt;li&gt;Choose a region for deploying your RDS. For this tutorial, we are choosing the Mumbai region. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f_uoLPBd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-000%2520%281%29.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f_uoLPBd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-000%2520%281%29.png%3Fraw%3Dtrue" alt="alt_text" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Sign in to the AWS management console, navigate to the RDS dashboard, and click on &lt;strong&gt;&lt;em&gt;Create Database&lt;/em&gt;&lt;/strong&gt;.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7twZPhSA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-001%2520%281%29.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7twZPhSA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-001%2520%281%29.png%3Fraw%3Dtrue" alt="alt_text" width="800" height="99"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Choose a database creation method.
&lt;/h3&gt;

&lt;p&gt;There are two options available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standard Create:&lt;/strong&gt; This method provides full control over all configuration options, allowing customization of availability, security, backups, and maintenance settings according to specific requirements and preferences.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy Create:&lt;/strong&gt; This method offers recommended best-practice configurations for quick deployment, with pre-set settings optimized for common use cases, while still allowing flexibility to change certain options post-creation if needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WGHTXo7g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-002%2520%281%29.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WGHTXo7g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-002%2520%281%29.png%3Fraw%3Dtrue" alt="alt_text" width="764" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the method that best suits your use case. In this guide, let us choose the default &lt;strong&gt;&lt;em&gt;Standard Create&lt;/em&gt;&lt;/strong&gt; method.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Select an engine option
&lt;/h3&gt;

&lt;p&gt;The AWS console offers the following database engines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Aurora (MySQL):&lt;/strong&gt; A high-performance, MySQL-compatible database engine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MySQL:&lt;/strong&gt; An open-source RDBMS known for speed and reliability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MariaDB:&lt;/strong&gt; A MySQL-compatible fork with additional features.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL:&lt;/strong&gt; A powerful open-source object-relational database system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Oracle:&lt;/strong&gt; A commercial RDBMS known for scalability and security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microsoft SQL Server:&lt;/strong&gt; A commercial RDBMS by Microsoft for data warehousing and BI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IBM Db2:&lt;/strong&gt; IBM's RDBMS with advanced analytics and integration capabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gfQuH0Ph--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-003%2520%281%29.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gfQuH0Ph--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-003%2520%281%29.png%3Fraw%3Dtrue" alt="alt_text" width="509" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose an engine that suits your application. In this guide, let us select the default &lt;strong&gt;&lt;em&gt;PostgreSQL&lt;/em&gt;&lt;/strong&gt; database and its default engine version.&lt;/p&gt;
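&lt;p&gt;If you want to see which engine versions are on offer without clicking through the console, the &lt;code&gt;describe_db_engine_versions&lt;/code&gt; API returns them. A small sketch that picks the newest version from a hand-written sample response (a live call would need AWS credentials):&lt;/p&gt;

```python
# Sketch: picking the newest PostgreSQL engine version from a
# describe_db_engine_versions-style response. The response below is a
# hand-written sample; a live call would be:
#   boto3.client("rds").describe_db_engine_versions(Engine="postgres")

sample_response = {
    "DBEngineVersions": [
        {"EngineVersion": "15.4"},
        {"EngineVersion": "16.1"},
        {"EngineVersion": "14.10"},
    ]
}

def newest_version(response):
    """Return the highest version string by numeric major.minor order."""
    versions = [v["EngineVersion"] for v in response["DBEngineVersions"]]
    return max(versions, key=lambda s: tuple(int(p) for p in s.split(".")))

print(newest_version(sample_response))  # 16.1
```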

&lt;h3&gt;
  
  
  Step 4: Choose a template
&lt;/h3&gt;

&lt;p&gt;AWS offers three templates for creating RDS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Production&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the default setting and prioritizes high availability and consistent performance. It is ideal for deploying databases in a production environment with minimal configuration required.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dev/Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is configured for development purposes outside of production, allowing for experimentation and testing without impacting live systems. This template offers flexibility for developers to customize settings as needed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Free Tier&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Utilizes RDS Free Tier to develop new applications, test existing ones, or gain hands-on experience with Amazon RDS at no cost. This template is suitable for small-scale projects and learning purposes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uL3koT09--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-004.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uL3koT09--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-004.png%3Fraw%3Dtrue" alt="alt_text" width="675" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this guide, we will choose the Free tier template. Note that in this template, most of the options for optimizing RDS are restricted.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Creating DB instance identifier
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DB instance identifier:&lt;/strong&gt; This is a unique name for your DB instance within your AWS account and region. It must be 1 to 60 alphanumeric characters or hyphens, and the first character must be a letter. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4mUuqd07--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-005.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4mUuqd07--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-005.png%3Fraw%3Dtrue" alt="alt_text" width="673" height="130"&gt;&lt;/a&gt;&lt;/p&gt;
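&lt;p&gt;These identifier rules can be checked client-side before you submit the form or call the API. A minimal sketch, assuming the rules stated in the console hints (first character a letter, no trailing or consecutive hyphens):&lt;/p&gt;

```python
import re

# Sketch: validating a DB instance identifier client-side before calling the
# API. Rules per the console hints: 1-60 letters, digits, or hyphens; must
# start with a letter; no trailing hyphen; no two consecutive hyphens.

IDENTIFIER_RE = re.compile(r"^[a-zA-Z][a-zA-Z0-9-]{0,59}$")

def valid_identifier(name):
    if not IDENTIFIER_RE.match(name):
        return False
    if name.endswith("-") or "--" in name:
        return False
    return True

print(valid_identifier("my-postgres-db"))  # True
print(valid_identifier("1st-db"))          # False: starts with a digit
```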

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Master username:&lt;/strong&gt; This is the login ID for the master user of your DB instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RWMOaBsQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-006.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RWMOaBsQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-006.png%3Fraw%3Dtrue" alt="alt_text" width="665" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Manage master credentials in AWS Secrets Manager&lt;/strong&gt;: This option allows you to manage the master user credentials in AWS Secrets Manager. RDS can generate a password for you and manage it throughout its lifecycle.&lt;/li&gt;
&lt;li&gt;Create a master password for the master user of your DB instance and confirm it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TC5ZaSaI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-007.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TC5ZaSaI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-007.png%3Fraw%3Dtrue" alt="alt_text" width="668" height="161"&gt;&lt;/a&gt;&lt;/p&gt;
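&lt;p&gt;When RDS manages the master credentials in Secrets Manager, the stored secret is a JSON string with &lt;code&gt;username&lt;/code&gt; and &lt;code&gt;password&lt;/code&gt; keys. A sketch of parsing it (the sample string below stands in for a real &lt;code&gt;get_secret_value&lt;/code&gt; response):&lt;/p&gt;

```python
import json

# Sketch: when RDS manages the master credentials in AWS Secrets Manager,
# the secret value is a JSON string with "username" and "password" keys.
# A live lookup would be:
#   boto3.client("secretsmanager").get_secret_value(SecretId=arn)
# Here we parse a hand-written sample SecretString instead.

sample_secret_string = '{"username": "postgres", "password": "example-only"}'

def parse_master_secret(secret_string):
    secret = json.loads(secret_string)
    return secret["username"], secret["password"]

user, password = parse_master_secret(sample_secret_string)
print(user)
```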

&lt;h3&gt;
  
  
  Step 6: DB instance configuration
&lt;/h3&gt;

&lt;p&gt;AWS RDS allows you to select the type of virtual machine (instance) for your database. The available classes depend on the engine you selected. Common options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standard classes (includes m classes)&lt;/strong&gt;: Balanced compute and memory resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory-optimized classes (includes r and x classes):&lt;/strong&gt; Optimized for memory-intensive workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Burstable classes (includes t classes):&lt;/strong&gt; Suitable for workloads with occasional bursts of CPU activity, with a baseline performance level and the ability to burst above that baseline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the Free tier template, only the Burstable classes are available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8_gj_0H6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-008.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8_gj_0H6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-008.png%3Fraw%3Dtrue" alt="alt_text" width="673" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Select the storage type
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Storage Types
&lt;/h4&gt;

&lt;p&gt;There are four storage types available in the RDS storage configuration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;General Purpose SSD (gp2)&lt;/p&gt;

&lt;p&gt;This storage type offers a baseline performance determined by the volume size. It's suitable for a wide range of workloads where the performance requirements are not consistently high.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;General Purpose SSD (gp3)&lt;/p&gt;

&lt;p&gt;With this storage type, performance scales independently from storage. It allows you to adjust performance and size separately, offering flexibility and cost-effectiveness for varying workloads.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Provisioned IOPS SSD (io1)&lt;/p&gt;

&lt;p&gt;This storage type provides flexibility in provisioning I/O (input/output) operations per second (IOPS) to meet specific performance requirements. It's suitable for applications with demanding performance needs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Magnetic&lt;/p&gt;

&lt;p&gt;This storage type uses traditional magnetic hard drives and is limited to a maximum of 1,000 IOPS. It's generally not recommended for most modern workloads due to its relatively slower performance compared to SSD storage.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this guide let us choose &lt;strong&gt;&lt;em&gt;General Purpose SSD (gp2).&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--96nEmjB6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-009.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--96nEmjB6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-009.png%3Fraw%3Dtrue" alt="alt_text" width="674" height="96"&gt;&lt;/a&gt;&lt;/p&gt;
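&lt;p&gt;To make gp2's size-based baseline concrete: it provides roughly 3 IOPS per GiB, floored at 100 and capped at 16,000. A quick sketch:&lt;/p&gt;

```python
# Sketch: gp2 baseline performance scales with volume size (about 3 IOPS per
# GiB, floored at 100 and capped at 16,000), which is why the text above says
# gp2's baseline is "determined by the volume size".

def gp2_baseline_iops(size_gib):
    return min(max(3 * size_gib, 100), 16000)

print(gp2_baseline_iops(20))    # 100  (the 20 GiB free-tier minimum)
print(gp2_baseline_iops(1000))  # 3000
```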

&lt;h4&gt;
  
  
  Storage Configuration
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Allocated storage:&lt;/strong&gt; This specifies the initial amount of storage allocated for your database instance, measured in gigabytes (GiB). The minimum value is 20 GiB, and the maximum value is 6,144 GiB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dq8SyWHW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-010.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dq8SyWHW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-010.png%3Fraw%3Dtrue" alt="alt_text" width="672" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Storage Autoscaling
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storage autoscaling:&lt;/strong&gt; When enabled, this feature provides dynamic scaling support for your database's storage based on your application's needs. It allows the storage to increase automatically after the specified threshold is exceeded.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maximum storage threshold:&lt;/strong&gt; This specifies the maximum threshold for storage autoscaling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xRU-Dj6_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-011.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xRU-Dj6_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-011.png%3Fraw%3Dtrue" alt="alt_text" width="676" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 8: Setting up the connectivity
&lt;/h3&gt;

&lt;p&gt;In the RDS connectivity settings, there are two options for EC2 compute resources:&lt;/p&gt;

&lt;h4&gt;
  
  
  Internal Access
&lt;/h4&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Don’t connect to an EC2 compute resource&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Select this option if you don't want to set up a connection to an EC2 compute resource for this database. You can manually configure the connection later if needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Connect to an EC2 compute resource&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Choose this option to establish a connection to an EC2 compute resource for this database. This setting automatically adjusts connectivity settings to allow the compute resource to connect to the database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XWChtoQV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-012.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XWChtoQV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-012.png%3Fraw%3Dtrue" alt="alt_text" width="675" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the appropriate option for your application. &lt;/p&gt;

&lt;h4&gt;
  
  
  Virtual Private Cloud
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Select the Virtual private cloud (VPC)
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Select the VPC where you want to deploy your database instance. The VPC defines the virtual networking environment for the DB instance. The default VPC typically includes subnets across multiple availability zones for high availability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k776CjmI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-013.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k776CjmI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-013.png%3Fraw%3Dtrue" alt="alt_text" width="668" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  DB subnet group
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default:&lt;/strong&gt; Choose the DB subnet group that defines the subnets and IP ranges the DB instance can use within the selected VPC. This ensures proper network isolation and security for the database instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OdTLFUw4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-014.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OdTLFUw4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-014.png%3Fraw%3Dtrue" alt="alt_text" width="672" height="89"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Public access
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Yes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If selected, RDS assigns a public IP address to the database, allowing Amazon EC2 instances and resources outside of the VPC to connect. Resources inside the VPC can also connect. You must choose one or more VPC security groups to specify which resources can access the database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;No&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If selected, RDS doesn't assign a public IP address to the database; only resources inside the VPC can connect. You must still specify one or more VPC security groups to control access.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since we are creating this RDS instance for learning purposes, choose &lt;strong&gt;&lt;em&gt;No&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yCfehpKg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-015.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yCfehpKg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-015.png%3Fraw%3Dtrue" alt="alt_text" width="667" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  VPC security group (firewall)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Choose existing&lt;/p&gt;

&lt;p&gt;Select existing VPC security groups that define rules for allowing access to your database. Ensure that the security group rules allow the appropriate incoming traffic.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create new&lt;/p&gt;

&lt;p&gt;If needed, create a new VPC security group to control access to your database.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gGMN6601--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-016.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gGMN6601--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-016.png%3Fraw%3Dtrue" alt="alt_text" width="673" height="127"&gt;&lt;/a&gt;&lt;/p&gt;
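&lt;p&gt;Whichever option you pick, the security group needs an inbound rule for the database port (5432 for PostgreSQL). Here is a sketch of that rule as the structure EC2's &lt;code&gt;authorize_security_group_ingress&lt;/code&gt; expects; the group IDs are hypothetical placeholders:&lt;/p&gt;

```python
# Sketch: the "allow the appropriate incoming traffic" rule, expressed as the
# IpPermissions structure used by EC2's authorize_security_group_ingress.
# Group IDs here are hypothetical placeholders.

def postgres_ingress_rule(source_security_group_id):
    """Allow inbound PostgreSQL (TCP 5432) from another security group."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": source_security_group_id}],
    }

rule = postgres_ingress_rule("sg-0123example")
# A live call would be (requires AWS credentials):
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0456example", IpPermissions=[rule])
print(rule["FromPort"])
```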

&lt;h4&gt;
  
  
  Availability Zone
&lt;/h4&gt;

&lt;p&gt;Choose an availability zone where the database will be deployed.&lt;/p&gt;

&lt;h4&gt;
  
  
  RDS Proxy
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create an RDS Proxy&lt;/p&gt;

&lt;p&gt;This option allows you to create an RDS Proxy, which is a fully managed, highly available database proxy provided by RDS. RDS Proxy improves application scalability, resiliency, and security by efficiently managing database connections and pooling.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hTbC5uDB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-017.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hTbC5uDB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-017.png%3Fraw%3Dtrue" alt="alt_text" width="668" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Certificate authority&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using a server certificate (optional):&lt;/strong&gt; You have the option to use a server certificate for added security. This certificate validates that the connection is made to an Amazon database, providing an extra layer of security. The default certificate authority is "rds-ca-rsa2048-g1".&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ag-_kyH9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-018.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ag-_kyH9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-018.png%3Fraw%3Dtrue" alt="alt_text" width="669" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 9: Database Authentication
&lt;/h3&gt;

&lt;p&gt;There are three authentication options available for AWS RDS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Password authentication&lt;/p&gt;

&lt;p&gt;This option authenticates users using database passwords. Users provide their database username and password to access the database. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Password and IAM database authentication&lt;/p&gt;

&lt;p&gt;In this option, authentication occurs using the database password and user credentials through AWS IAM (Identity and Access Management) users and roles.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Password and Kerberos authentication&lt;/p&gt;

&lt;p&gt;This option allows users to authenticate with the DB instance using Kerberos Authentication, in addition to the database password. You can choose a directory in which authorized users can authenticate using Kerberos Authentication.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Password authentication&lt;/em&gt;&lt;/strong&gt; is selected by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cbM2Fijg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-019.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cbM2Fijg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-019.png%3Fraw%3Dtrue" alt="alt_text" width="677" height="249"&gt;&lt;/a&gt;&lt;/p&gt;
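&lt;p&gt;With password authentication, clients connect using the instance endpoint plus the master username and password. A sketch of a libpq-style connection string builder; the endpoint below is a hypothetical placeholder:&lt;/p&gt;

```python
# Sketch: with password authentication, clients connect using the endpoint
# shown on the instance page plus the master username and password. This
# builds a libpq-style connection string; the endpoint is a hypothetical
# placeholder.

def conninfo(host, dbname, user, password, port=5432):
    parts = {
        "host": host,
        "port": port,
        "dbname": dbname,
        "user": user,
        "password": password,
        "sslmode": "require",  # encrypt the connection in transit
    }
    return " ".join(f"{k}={v}" for k, v in parts.items())

dsn = conninfo("demo.abc123.ap-south-1.rds.amazonaws.com",
               "postgres", "postgres", "example-only")
print(dsn)
```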

&lt;h3&gt;
  
  
  Step 10: RDS Monitoring Setup
&lt;/h3&gt;

&lt;h5&gt;
  
  
  Turn on Performance Insights
&lt;/h5&gt;

&lt;p&gt;Performance Insights provides a comprehensive view of your database's performance. Enabling it allows you to monitor performance metrics, query executions, and SQL statements in real time.&lt;/p&gt;

&lt;h5&gt;
  
  
  The retention period for Performance Insights
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;This setting determines how long Performance Insights data is retained. In the free tier, Performance Insights data is retained for 7 days. After this period, the data is automatically purged. You can adjust this period based on your retention requirement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OKNT6llB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-020.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OKNT6llB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-020.png%3Fraw%3Dtrue" alt="alt_text" width="675" height="247"&gt;&lt;/a&gt;&lt;/p&gt;
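&lt;p&gt;Note that the RDS API accepts only specific retention values for Performance Insights: 7 days, multiples of 31 days (1 to 23 months), or 731 days. A small validator sketch based on that documented rule:&lt;/p&gt;

```python
# Sketch: the Performance Insights retention period accepts only certain
# values via the API: 7 days (free tier), multiples of 31 days (1-23 months),
# or 731 days (two years).

VALID_RETENTION_DAYS = {7, 731} | {month * 31 for month in range(1, 24)}

def valid_pi_retention(days):
    return days in VALID_RETENTION_DAYS

print(valid_pi_retention(7))   # True
print(valid_pi_retention(93))  # True (3 months)
print(valid_pi_retention(30))  # False
```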

&lt;h5&gt;
  
  
  Enhanced Monitoring
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Enabling Enhanced Monitoring is useful for monitoring how different processes or threads utilize the CPU and identifying insights about RDS performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4VUcu1BL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-021.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4VUcu1BL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-021.png%3Fraw%3Dtrue" alt="alt_text" width="676" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 11: Additional Configurations
&lt;/h3&gt;

&lt;h5&gt;
  
  
  Database Options
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Initial database name:&lt;/strong&gt; Specify the initial name for your database. If not specified, Amazon RDS will not create a database upon instance creation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DB parameter group:&lt;/strong&gt; Assigns a set of parameters to the database instance. The default parameter group for PostgreSQL version 16 is used.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Backup
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enable automated backups:&lt;/strong&gt; Creates point-in-time snapshots of your database for data protection and recovery purposes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup retention period:&lt;/strong&gt; Specifies the number of days (1-35) for which automatic backups are kept. In this case, backups are retained for 1 day.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup window:&lt;/strong&gt; Specifies the daily time range (in UTC) during which RDS takes automated backups. No specific window preference is set.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copy tags to snapshots:&lt;/strong&gt; Copies tags from the database instance to its snapshots for better organization and management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup replication:&lt;/strong&gt; Enables replication of backups in another AWS Region for disaster recovery purposes.&lt;/li&gt;
&lt;/ul&gt;
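&lt;p&gt;The retention and window settings above follow fixed rules: retention is 1-35 days, and the backup window is a UTC range in hh24:mi format that must be at least 30 minutes long. A minimal Python sketch of those checks, assuming the limits stay as described:&lt;/p&gt;

```python
# Sketch: validate RDS-style automated backup settings before applying
# them. The 1-35 day retention range and 30-minute minimum window are
# the values described above; treat them as assumptions if AWS changes limits.

def valid_retention(days):
    """Automated backups may be kept for 1 to 35 days."""
    return days in range(1, 36)

def window_minutes(window):
    """Parse a 'hh24:mi-hh24:mi' UTC window; return its length in minutes."""
    start, end = window.split("-")
    to_min = lambda t: int(t[:2]) * 60 + int(t[3:])
    # Modulo handles windows that wrap past midnight, e.g. 23:45-00:30.
    return (to_min(end) - to_min(start)) % (24 * 60)

def valid_window(window):
    """The backup window must be at least 30 minutes long."""
    return window_minutes(window) in range(30, 24 * 60)
```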

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iOZKy-Z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-022.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iOZKy-Z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-022.png%3Fraw%3Dtrue" alt="alt_text" width="674" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Encryption
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enable encryption:&lt;/strong&gt; Activates encryption for the database instance. The default AWS Key Management Service (KMS) key "aws/rds" is used for encryption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1iaqowby--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-023.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1iaqowby--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-023.png%3Fraw%3Dtrue" alt="alt_text" width="673" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Log exports
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Select the log types to publish to Amazon CloudWatch Logs: Specifies which types of logs (e.g., PostgreSQL log, upgrade log) should be published to CloudWatch Logs for monitoring and analysis.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rAhnnBKS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-024.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rAhnnBKS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-024.png%3Fraw%3Dtrue" alt="alt_text" width="672" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Maintenance
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Auto minor version upgrade:&lt;/strong&gt; Enables automatic upgrades to new minor versions of the database engine as they are released.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintenance window:&lt;/strong&gt; Specifies the period during which pending modifications or maintenance tasks are applied to the database instance. No specific window preference is set.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---7ANYagL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-025.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---7ANYagL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-025.png%3Fraw%3Dtrue" alt="alt_text" width="672" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Deletion protection
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enable deletion protection:&lt;/strong&gt; Prevents accidental deletion of the database instance. While this setting is enabled, the instance cannot be deleted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eEdZ1X----/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-026.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eEdZ1X----/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-026.png%3Fraw%3Dtrue" alt="alt_text" width="673" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 12: Billing estimates
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--91HY4HYP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-027.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--91HY4HYP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-027.png%3Fraw%3Dtrue" alt="alt_text" width="775" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After configuring your RDS instance, review the billing estimates to understand the expected costs associated with your database deployment.&lt;/li&gt;
&lt;li&gt;Take into account factors such as instance uptime, storage usage, data transfer, and any additional services enabled for monitoring or backup purposes.&lt;/li&gt;
&lt;li&gt;Adjust configurations as needed to optimize costs and align with budgetary requirements.&lt;/li&gt;
&lt;li&gt;Monitor your actual usage and costs regularly through the AWS Billing and Cost Management Console to track expenses and make informed decisions about resource management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 13: After selecting all the options that satisfy your application's requirements, click &lt;strong&gt;&lt;em&gt;Create Database&lt;/em&gt;&lt;/strong&gt;.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tLKMZM7e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-028.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tLKMZM7e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-028.png%3Fraw%3Dtrue" alt="alt_text" width="705" height="135"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 14: Deleting your resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When you no longer require your RDS instance or want to stop incurring charges, delete the resources associated with the database.&lt;/li&gt;
&lt;li&gt;Navigate to the RDS dashboard.&lt;/li&gt;
&lt;li&gt;Locate the RDS instance you wish to delete and select it.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;&lt;em&gt;Actions&lt;/em&gt;&lt;/strong&gt; and choose &lt;strong&gt;&lt;em&gt;Delete&lt;/em&gt;&lt;/strong&gt; to initiate the deletion process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WdVSEPg8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-029.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WdVSEPg8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-029.png%3Fraw%3Dtrue" alt="alt_text" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G33EZNZ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-030.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G33EZNZ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-030.png%3Fraw%3Dtrue" alt="alt_text" width="582" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm the deletion by following the on-screen prompts and acknowledging any potential data loss.&lt;/li&gt;
&lt;li&gt;If you enabled deletion protection, you must disable it before deleting the database: navigate to &lt;strong&gt;&lt;em&gt;Modify&lt;/em&gt;&lt;/strong&gt; and turn the protection off. &lt;/li&gt;
&lt;li&gt;Review your billing statements to ensure that the resources have been successfully deleted and are no longer accruing charges.&lt;/li&gt;
&lt;/ul&gt;
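&lt;p&gt;The deletion steps above can be sketched as a small guard of the kind the console applies. The field names loosely mirror the RDS API, but treat the exact response shape as an assumption:&lt;/p&gt;

```python
# Sketch of the console's delete guard: an instance with deletion
# protection enabled must be modified first. The dict keys are
# illustrative, not a guaranteed RDS API shape.

def can_delete(instance):
    """Return (allowed, reason) for a delete request on an RDS instance."""
    if instance.get("DeletionProtection"):
        return False, "disable deletion protection via Modify first"
    if instance.get("Status") == "deleting":
        return False, "deletion already in progress"
    return True, "ok"
```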

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NL6HUIsl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-031.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NL6HUIsl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/EightPLabs/blogs/blob/main/Intro-AWS/RDS/Images_RDS/image-031.png%3Fraw%3Dtrue" alt="alt_text" width="800" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rds</category>
    </item>
    <item>
      <title>RDS on AWS - Leave it or choose it !</title>
      <dc:creator>EightPLabs</dc:creator>
      <pubDate>Wed, 06 Mar 2024 10:14:30 +0000</pubDate>
      <link>https://forem.com/eightplabs/rds-on-aws-introduction-8n1</link>
      <guid>https://forem.com/eightplabs/rds-on-aws-introduction-8n1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Amazon RDS is a managed database service from Amazon Web Services (AWS).&lt;/li&gt;
&lt;li&gt;It is designed to simplify setup, operation, and scaling of relational databases.&lt;/li&gt;
&lt;li&gt;RDS supports various database engines like MySQL, PostgreSQL, SQL Server, Oracle, and MariaDB.&lt;/li&gt;
&lt;li&gt;It offers features like automated backups, monitoring, and high availability.&lt;/li&gt;
&lt;li&gt;Ideal for businesses looking for reliable, scalable, and cost-effective database solutions without the challenge of infrastructure management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Features of RDS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Availability&lt;/strong&gt;: RDS offers automated backups and user-initiated snapshots for quick database recovery and monitoring. Snapshots can be shared among multiple AWS accounts for expanded availability while ensuring data security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Users create credentials for database access, with the initial user assigned the admin role by default. Encryption using KMS enhances data security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: RDS allows for automatic scaling based on transaction volume, supporting both horizontal and vertical scaling to handle increased traffic or resource demands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: RDS offers SSD-backed storage options (General Purpose and Provisioned) impacting resource performance based on workload requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pricing&lt;/strong&gt;: Pay-as-you-go model with no minimum charge, allowing users to delete resources and avoid charges. Free-tier accounts have specific configurations and no bills upon resource deletion.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Backup and Redundancy:&lt;/strong&gt; RDS provides automated backup mechanisms and redundancy features to ensure data integrity and availability reducing the risk of data loss.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tools for Monitoring and Management:&lt;/strong&gt; RDS offers tools for monitoring database performance, managing configurations, and diagnosing issues enabling users to maintain optimal database operation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Software Updates and Other Security Features:&lt;/strong&gt; RDS automatically applies software updates and patches to the database engine, enhancing security and compliance without requiring user intervention.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Available Database Engines
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MySQL&lt;/strong&gt;: Amazon RDS provides managed MySQL database instances, offering scalable, reliable, and cost-effective business solutions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt;: Amazon RDS supports PostgreSQL, allowing users to easily deploy and manage PostgreSQL databases, ensuring high availability and compatibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Oracle&lt;/strong&gt;: Amazon RDS offers managed Oracle database instances, enabling businesses to run Oracle Database workloads in the cloud with features like automated backups and monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microsoft SQL Server:&lt;/strong&gt; Amazon RDS supports Microsoft SQL Server, providing managed instances for SQL Server databases with options for easy scaling and high availability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MariaDB:&lt;/strong&gt; Amazon RDS supports MariaDB, offering fully managed instances for MariaDB databases, allowing users to focus on application development while RDS handles database management tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Available Server Types
&lt;/h2&gt;

&lt;h5&gt;
  
  
  Standard
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Provides a balanced mix of computing, memory, and networking resources.&lt;/li&gt;
&lt;li&gt;Suitable for a wide range of workloads with moderate resource requirements.&lt;/li&gt;
&lt;li&gt;Offers predictable performance and cost-effective pricing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Memory Optimized
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Optimized for memory-intensive workloads that require high memory-to-CPU ratios.&lt;/li&gt;
&lt;li&gt;Ideal for applications such as in-memory databases, caching, and analytics.&lt;/li&gt;
&lt;li&gt;Provides ample memory capacity to handle large datasets and high-throughput workloads efficiently.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Burstable performance
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Offers baseline CPU performance with the ability to burst CPU usage when needed.&lt;/li&gt;
&lt;li&gt;Suitable for workloads with variable CPU demands or periodic spikes in activity.&lt;/li&gt;
&lt;li&gt;Provides cost savings for workloads that do not require consistently high CPU utilization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Optimized Read
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Designed for read-heavy database workloads that require high throughput and low latency for read operations.&lt;/li&gt;
&lt;li&gt;Utilizes replicas with optimized configurations to offload read traffic from the primary database instance.&lt;/li&gt;
&lt;li&gt;Improves overall database performance and scalability by distributing read requests across multiple replicas.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Should I go for it?
&lt;/h2&gt;

&lt;h5&gt;
  
  
  YES!
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;You want to move faster &lt;/li&gt;
&lt;li&gt;You lack the manpower or time to manage a DB&lt;/li&gt;
&lt;li&gt;Money is not a constraint&lt;/li&gt;
&lt;li&gt;You want NFRs like backup/restore, redundancy, and failover at the click of a button&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  No!
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The biggest challenge is cost: if you are short on money, RDS can burn through your reserves sooner than expected&lt;/li&gt;
&lt;li&gt;You are technical and can get your hands dirty when needed&lt;/li&gt;
&lt;li&gt;You are still in POC mode, and NFRs like backup/restore and failover are not yet crucial&lt;/li&gt;
&lt;li&gt;Once the DB is up, your interaction with it will be minimal (little usage and data, so little need for tuning)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Watch for the next blog, which gives detailed instructions on how to create an RDS DB in AWS.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rds</category>
    </item>
    <item>
      <title>AWS S3 -solution architect certification guide</title>
      <dc:creator>EightPLabs</dc:creator>
      <pubDate>Fri, 20 Jan 2023 11:32:14 +0000</pubDate>
      <link>https://forem.com/eightplabs/aws-s3-solution-architect-certification-guide-25lk</link>
      <guid>https://forem.com/eightplabs/aws-s3-solution-architect-certification-guide-25lk</guid>
      <description>&lt;h2&gt;
  
  
  What is S3?
&lt;/h2&gt;

&lt;p&gt;S3 is an on-demand cloud storage solution managed by AWS that can be used to store files, data, images, videos, etc. Note the word “on-demand”: there is no limit on size, and capacity is allocated as demand grows. Also, since AWS manages it, users need not worry about scalability, availability, durability, and other non-functional aspects. &lt;/p&gt;

&lt;h2&gt;
  
  
  What are the uses of S3?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Backup and archiving&lt;/li&gt;
&lt;li&gt;Storage of large files like video and images&lt;/li&gt;
&lt;li&gt;Content sharing, S3 buckets can be shared across apps&lt;/li&gt;
&lt;li&gt;Static website hosting&lt;/li&gt;
&lt;li&gt;Big data processing &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are just a few of the many use cases that can leverage S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of using S3
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost-effective&lt;/strong&gt;: Storage in S3 is very cheap. There are some nuances around networking cost, but it is still much cheaper than other available storage options. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-scaling&lt;/strong&gt;: The user only adds data; no capacity planning is needed. This is a big plus. In the traditional disk-based approach, as soon as data grows we need to scale the infrastructure, which involves capacity planning, upfront expenses, and much more complexity. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High availability&lt;/strong&gt;: S3 practically never goes down; data can be accessed from anywhere, anytime. Without S3, programmers must take care of this aspect themselves, which can be a big distraction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Durability&lt;/strong&gt;: Data in S3 is stored in multiple places, so corruption in one place does not affect the data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global access&lt;/strong&gt;: S3 buckets are global and can be accessed from anywhere in the world.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How S3 storage is different
&lt;/h2&gt;

&lt;p&gt;Storage comes in different types, e.g., block, file, and object storage -S3 falls into the category of object storage. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Block storage&lt;/strong&gt;: operates at raw storage device level and manages data as a set of numbered fixed-size blocks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File storage&lt;/strong&gt;: Manages data as named hierarchy of files and folders.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both of these are closely associated with the server and OS that is hosting the storage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Object Storage&lt;/strong&gt;: Contrary to these, S3 does not expose any underlying storage mechanism, it is managed as objects which can be accessed via API using standard HTTP verbs like GET, PUT, POST, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Core concepts of S3
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Buckets&lt;/strong&gt;: Buckets are the primary containers that hold data. There is no limit on the number of objects that can be stored in a bucket. By default, each account can have 100 buckets, which can be increased to a maximum of 1000. The number of buckets has no performance impact. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Objects&lt;/strong&gt;: Objects are the entities we store in S3, e.g. files, photos, videos, etc. AWS treats objects as streams of bytes, so there is no restriction on what can be stored in a file. An object can be as small as 0 bytes and as large as 5TB. Each object contains data as well as metadata. Metadata is a set of name-value pairs that provides additional information about the stored object.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Keys&lt;/strong&gt;: A key is the unique identifier for a stored object within a bucket. Keys are used to access objects in the bucket and to perform other operations. A key consists of the path and the file name; together with the bucket name it fully addresses an object. If versioning is enabled, the version ID is also needed to address a specific version. Suppose there is a file named “test.jpg” in bucket “MY_BUCKET” under path “test1/test2”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The URL to access it will be &lt;a href="https://MY_BUCKET.s3.us-west-1.amazonaws.com/test1/test2/test.jpg"&gt;https://MY_BUCKET.s3.us-west-1.amazonaws.com/test1/test2/test.jpg&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;The bucket name is MY_BUCKET and the key is test1/test2/test.jpg&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A common mistake: people take the “/” for folders, but to repeat, there are no folders in S3 -it is just a convenient way to group objects. &lt;/p&gt;


&lt;/li&gt;
&lt;/ul&gt;
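&lt;p&gt;The key and URL from the example above can be assembled in a couple of lines of Python -note how the “folders” are just prefixes joined into the key:&lt;/p&gt;

```python
# Sketch: building the virtual-hosted-style URL for bucket MY_BUCKET,
# key test1/test2/test.jpg in us-west-1 (the example used above).

def object_url(bucket, region, key):
    """Compose the public web address of an object from its parts."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# "test1/test2" is not a folder hierarchy, just a prefix inside the key.
key = "/".join(["test1", "test2", "test.jpg"])
url = object_url("MY_BUCKET", "us-west-1", key)
```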

&lt;h2&gt;
  
  
  Accessing Objects
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;S3 is storage on the web, so every object has a unique URL. From the above example, &lt;a href="https://MY_BUCKET.s3.us-west-1.amazonaws.com/test1/test2/test.jpg"&gt;https://MY_BUCKET.s3.us-west-1.amazonaws.com/test1/test2/test.jpg&lt;/a&gt; represents a file that can be accessed from the web, depending on permissions and policies.&lt;/li&gt;
&lt;li&gt;S3 supports a REST API as well as SDKs through which objects can be accessed&lt;/li&gt;
&lt;li&gt;An object's unique link can be signed and made available as a public URL for a fixed period of time&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Consistency
&lt;/h2&gt;

&lt;p&gt;S3 provides two types of consistency&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read-after-write consistency&lt;/strong&gt;: For creating new objects and deleting existing ones, S3 provides read-after-write consistency -this means that when a new object is created or an existing one is deleted, S3 guarantees that the latest state of the object is returned.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eventual consistency&lt;/strong&gt;: When objects are updated, the change may take some time to propagate across all of S3's servers. For a while, it is possible to read an older version of the object even after it has been updated. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Static hosting
&lt;/h2&gt;

&lt;p&gt;This is one of the prominent use cases, where static hosting can be enabled for a bucket and then the files inside the bucket can be used to host a static website. Typical steps would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a bucket and enable static website hosting for it&lt;/li&gt;
&lt;li&gt;Add index and error documents -the index document must be provided&lt;/li&gt;
&lt;li&gt;Create a Route 53 record to point to the bucket&lt;/li&gt;
&lt;li&gt;Create a CloudFront distribution so that the site is faster and more reliable &lt;/li&gt;
&lt;/ul&gt;
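&lt;p&gt;The hosting steps above boil down to a small configuration document. A sketch of its shape, following the form the S3 website-configuration API expects (the file names are placeholders you choose):&lt;/p&gt;

```python
import json

# Website configuration for a static-hosting bucket: one required index
# document and one error document. "index.html" / "error.html" are
# example names, not anything S3 mandates.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

# Serialized form, as it would be sent to the service.
payload = json.dumps(website_config)
```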

&lt;h2&gt;
  
  
  Versioning
&lt;/h2&gt;

&lt;p&gt;Versioning can be enabled on S3 -by default, it is disabled. Once enabled, it can be suspended but cannot be disabled. When versioning is enabled, a delete only adds a delete marker -older versions are retained by AWS unless explicitly removed. This is very handy if we want to restore an earlier version.&lt;/p&gt;

&lt;h2&gt;
  
  
  Access Control
&lt;/h2&gt;

&lt;p&gt;Access policies can be used to define who can access what. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource-based policies&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;ACL -each ACL identifies resources and the permissions granted. The resource here can be a bucket or just an object. ACLs can be used to grant read/write permissions to other AWS accounts.&lt;/li&gt;
&lt;li&gt;Bucket policy -these are applied at the bucket level, not the object level as in the case of ACLs. Users or accounts can be granted access to content in a particular bucket.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User policies&lt;/strong&gt;: AWS IAM can be used to create groups and users and then assign them permissions for specific buckets and specific operations on them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Signed URLs&lt;/strong&gt;: Presigned URLs can be created which will be public for a certain period of time and then will expire. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CORS&lt;/strong&gt;: By default, a client or website on a different domain cannot access another domain's resources. S3 allows enabling CORS for specific clients and domains so that its objects can be cross-referenced.&lt;/li&gt;
&lt;/ul&gt;
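&lt;p&gt;As a concrete illustration of a bucket policy, here is a minimal public-read policy of the kind described above; MY_BUCKET is a placeholder bucket name:&lt;/p&gt;

```python
import json

# A bucket policy granting anonymous read (s3:GetObject) on every key
# in the bucket. "MY_BUCKET" is a placeholder; the statement shape
# follows the standard IAM policy document format.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::MY_BUCKET/*",
        }
    ],
}

policy_json = json.dumps(policy)
```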

&lt;h2&gt;
  
  
  Encryption
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Securing data in transit&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;S3 supports SSL/TLS to encrypt data in transit when it's sent to or retrieved from S3&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Securing data at rest&lt;/strong&gt;: Server-side encryption with AWS managed keys (SSE-S3) is now applied by default to all data stored in S3 buckets. This change was introduced on January 5, 2023, and is not charged separately.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SSE-S3&lt;/strong&gt;: This is the default and simplest approach. S3 automatically encrypts data when it's stored, and decrypts it when it's retrieved. The encryption keys are managed by S3 and are rotated regularly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSE-KMS&lt;/strong&gt;: Same as SSE-S3, but the Key Management Service (KMS) is used to manage the encryption keys. KMS provides a full audit trail of how the key has been used and modified over a period of time. This has additional cost implications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSE-C&lt;/strong&gt;: SSE-C allows you to provide your own encryption keys, which S3 will use to encrypt and decrypt your data. This option is helpful if you have existing encryption infrastructure or if you have specific compliance or regulatory requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client-side encryption&lt;/strong&gt;: Data is encrypted by the client before upload, so S3 only ever stores already-encrypted data. &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
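&lt;p&gt;A client opts into a specific server-side encryption mode via request headers on upload. A sketch, assuming the standard “x-amz-server-side-encryption” header names (with SSE-S3 now the default, the header is only needed to choose SSE-KMS or a specific key):&lt;/p&gt;

```python
# Build the upload headers for a requested server-side encryption mode.
# Header names follow the S3 REST API; treat the exact set as an
# assumption and check the S3 docs for SSE-C, which needs extra headers.

def sse_headers(mode, kms_key_id=None):
    if mode == "SSE-S3":
        # S3-managed keys; AES256 is the value S3 expects.
        return {"x-amz-server-side-encryption": "AES256"}
    if mode == "SSE-KMS":
        headers = {"x-amz-server-side-encryption": "aws:kms"}
        if kms_key_id:
            # Optional: pin a specific KMS key instead of aws/s3.
            headers["x-amz-server-side-encryption-aws-kms-key-id"] = kms_key_id
        return headers
    raise ValueError("unsupported mode")
```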

&lt;h2&gt;
  
  
  Replication
&lt;/h2&gt;

&lt;p&gt;Data in S3 is always replicated across multiple Availability Zones, but all zones are part of the same Region. If for some reason an entire Region becomes unavailable -e.g., a natural disaster or political scenario -then we can lose data. Cross-Region replication is an S3 feature that replicates S3 objects to a different Region. This comes at the additional price of storage and network cost, so that cost should be weighed against the importance of the data. &lt;/p&gt;

&lt;h2&gt;
  
  
  Event Notification
&lt;/h2&gt;

&lt;p&gt;AWS S3 supports event notifications, which notify registered listeners when some action happens on an object. For example, when an object is added, updated, or deleted, events are generated that can invoke a Lambda function, send messages to SQS or SNS, integrate with EventBridge, and serve many more use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the different storage classes in S3?
&lt;/h2&gt;

&lt;p&gt;Depending on the access pattern of the data, we can store it in different storage classes and further optimize cost.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storage class for frequently accessed objects&lt;/strong&gt;: This storage class is most suitable for “hot” data -data on which an app or program operates frequently, that is critical, and that needs millisecond responses.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 standard&lt;/strong&gt;: This is the default; if no storage class is specified, this is used. Its availability is 99.99% and it is available in &amp;gt;=3 Availability zones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Redundancy&lt;/strong&gt;: This is intended for non-critical and reproducible data. But AWS &lt;strong&gt;&lt;em&gt;does not recommend&lt;/em&gt;&lt;/strong&gt; using it, S3 standard can have a better cost advantage.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage class for automatically optimizing data with changing or unknown access patterns&lt;/strong&gt;: This storage class can be used for data that is “hot” for a few days but whose relevance may decrease as it ages.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 intelligent tiering&lt;/strong&gt;: This moves data to the most cost-effective tier without impacting performance. For example, for a news app with videos and images, it makes sense to keep the latest data in S3 standard but move infrequently accessed data (as news grows older, the likelihood of its access also drops) to cheaper storage. Its availability is 99.9% and it is available in &amp;gt;=3 Availability zones.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage classes for infrequently accessed objects&lt;/strong&gt;: This is cheaper than the S3 standard but still provides millisecond access. This is most suited for data that is long-lived, is infrequently accessed, and still needs to be retrieved in milliseconds.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 standard-IA&lt;/strong&gt;: Data is stored in multiple availability zones and resilient to the loss of data in one availability zone. Its availability is 99.9% and is available in &amp;gt;=3 Availability zones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 one zone-IA&lt;/strong&gt;: This is even cheaper than S3 standard-IA, but data is stored in only a single Availability Zone. If that zone fails, the data is lost. This is suitable for data for which replication is enabled elsewhere or data that can easily be recreated. Its availability is 99.5%.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage classes for archiving objects&lt;/strong&gt;: For data that is not frequently accessed and will mostly be needed on an “on-demand” basis -for example, for an audit, or to analyze past data. The most important aspect is that the time to retrieve data is not critical. Data in Glacier is first restored to S3 standard or S3 standard-IA, and only then is it available for use.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 Glacier instant retrieval&lt;/strong&gt;: Although this data is archived, it can be accessed in milliseconds. Its availability is 99.9% and it is available in &amp;gt;=3 Availability zones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Glacier flexible retrieval&lt;/strong&gt;: Effective when data is not accessed for at least 90 days. Retrieval can take minutes to hours. Its availability is 99.99% (after restore) and is available in &amp;gt;=3 Availability zones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Glacier Deep Archive&lt;/strong&gt;: This is the cheapest storage and should be used if data is not going to be needed for at least 180 days. Data can be retrieved before that but it may be subject to an early deletion fee. It has a default retrieval time of 12 hours. Its availability is 99.99% (after restore) and is available in &amp;gt;=3 Availability zones.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
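&lt;p&gt;The classes above can be summarized as a rough decision function. The day thresholds below are illustrative only; a real choice should also weigh retrieval fees and minimum storage durations, which this sketch ignores:&lt;/p&gt;

```python
# Map an access pattern to an S3 storage class. Class names are the
# standard StorageClass values; the thresholds are illustrative, loosely
# echoing the 90-day / 180-day guidance for the Glacier tiers.

def pick_storage_class(days_between_reads, access_pattern_known=True):
    if not access_pattern_known:
        return "INTELLIGENT_TIERING"   # let S3 move data between tiers
    if days_between_reads in range(0, 30):
        return "STANDARD"              # hot data, frequent access
    if days_between_reads in range(30, 90):
        return "STANDARD_IA"           # long-lived, infrequent access
    if days_between_reads in range(90, 180):
        return "GLACIER"               # flexible retrieval archive
    return "DEEP_ARCHIVE"              # cheapest, rarely touched
```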

&lt;p&gt;All storage classes except RRS (which is anyway not recommended by AWS) have a durability of 99.999999999% (eleven 9's). RRS (Reduced Redundancy Storage) has a durability of 99.99%.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are lifecycle rules?
&lt;/h2&gt;

&lt;p&gt;Lifecycle rules can be used to transition objects from one storage class to another, delete or archive files, or restore files from Glacier. For example, if we are using S3 to store application logs, we may need recent logs for active analysis, but once they are older than 30 days we may want to archive them, and after a year we may no longer need them at all, so the logs are permanently deleted. All of these transitions can happen automatically by setting lifecycle rules.&lt;/p&gt;
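&lt;p&gt;The log example above maps directly onto a lifecycle rule. A sketch following the shape of the S3 lifecycle configuration API (the “logs/” prefix and day counts are illustrative):&lt;/p&gt;

```python
import json

# Lifecycle rule: move objects under logs/ to Glacier after 30 days
# and expire them after 365 days. Shape follows the S3 lifecycle
# configuration document; prefix and day counts are examples.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}

lifecycle_json = json.dumps(lifecycle)
```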

</description>
      <category>aws</category>
      <category>s3</category>
      <category>certification</category>
      <category>career</category>
    </item>
    <item>
      <title>Identity and Access Management In AWS</title>
      <dc:creator>EightPLabs</dc:creator>
      <pubDate>Wed, 07 Dec 2022 10:55:44 +0000</pubDate>
      <link>https://forem.com/eightplabs/identity-and-access-management-in-aws-46mb</link>
      <guid>https://forem.com/eightplabs/identity-and-access-management-in-aws-46mb</guid>
      <description>&lt;p&gt;This is one of the comprehensive and most pre-requisite topics on the path of learning AWS. If you refer to the documentation, it is whooping ~1200 pages! But do we need to know all? Well depends:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are you going to be an AWS administrator who is going to manage company accounts, organization policies, etc. -then yes!&lt;/li&gt;
&lt;li&gt;But for developers or solution architects who want to start building -only a subset of this is required.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The purpose of this series is to get started and get certified. We will start with the most basic concepts, which will be sufficient for getting started, and then in the next part we will move on to more advanced concepts, which will be helpful from the point of view of certification.&lt;/p&gt;

&lt;p&gt;What does IAM mean? IAM stands for Identity and Access Management :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identity -who is trying to access. A user via console/CLI, or a program?&lt;/li&gt;
&lt;li&gt;Access Management -well, managing which identities (user/program) can access what?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s not go into jargon -start analyzing. There are resources on Amazon, e.g., DynamoDB, S3, Athena, SQS, etc. All of these are shared and multiple users will be accessing them -but then how do we restrict access, isolate the usage of each user, and bill these users separately? These are the “resources” that need to be protected. Users or programs will have different levels of access and usage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read-only&lt;/li&gt;
&lt;li&gt;Read and write&lt;/li&gt;
&lt;li&gt;Throughput requirements will vary for many users&lt;/li&gt;
&lt;li&gt;Quotas will change for each user&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So we have resources that need to be accessed in a restricted manner and in different ways -and we have users and programs who will be accessing these resources. How do we manage which users/programs have what permissions on these resources? Through a Policy -simply put, a set of rules like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Policy1

&lt;ul&gt;
&lt;li&gt;Allow read-only access to S3&lt;/li&gt;
&lt;li&gt;Allow read-only access to DynamoDB&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Policy2

&lt;ul&gt;
&lt;li&gt;Allow read and write access to a particular bucket in S3&lt;/li&gt;
&lt;li&gt;Allow read and write access to all tables in Dynamo DB&lt;/li&gt;
&lt;li&gt;Allow deletion on S3&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;By default, everything is disallowed, so permissions typically need to be granted explicitly for what is allowed. And what are permissions? The policy attached to a user or program defines what permissions that user has on the resources. For example,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Policy1 above can be attached to regular users in an organization&lt;/li&gt;
&lt;li&gt;Policy2 can be attached to admin users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So now consider this: if there are 10k users in an organization -are we going to create 10k policies? Nope, here come groups. We group the users as per their role, like regular employees, sales, admin, etc., and then we create a policy for each group.&lt;/p&gt;

&lt;p&gt;In a nutshell, AWS IAM tries to answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
WHO → CAN ACCESS → WHAT?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4o3o1f2h1jilp3cc5hn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4o3o1f2h1jilp3cc5hn.jpg" alt="AWS IAM Components" width="781" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Resource, user, and group are pretty straightforward, the crux of IAM is policies, how we define them, and then how we associate them with users, applications, or groups. Let’s dive a bit deeper into policies:&lt;/p&gt;

&lt;p&gt;There are many more elaborate concepts, but I will not cover them here -they deserve a separate write-up. Let’s do a quick example to understand what we have learned so far.&lt;/p&gt;

&lt;p&gt;Sign up for AWS if you do not have an existing account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp58xs11zbxgq3i8e5qh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp58xs11zbxgq3i8e5qh.png" alt="AWS Sign UP Screen" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on "create a new AWS account" if a first-time user or log in with your account if you already have signed up. Creating a new account is self-explanatory, it will need personal details, and card details so no point in showing it here. Everyone gets a free tier for 1 year, which will be sufficient for most of our experimentations. Also, AWS has quick labs which can be used to play around without paying any money.&lt;/p&gt;

&lt;p&gt;Refer to the video at the end for more detailed steps and precise navigation.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the AWS console&lt;/li&gt;
&lt;li&gt;Select IAM from the services&lt;/li&gt;
&lt;li&gt;Go to the "users" tab on the left side navigation panel&lt;/li&gt;
&lt;li&gt;Click on Add users and provide user name

&lt;ul&gt;
&lt;li&gt;Notice the "Aws credential type" section, there are two ways to access AWS services "Access key - Programmatic access" and "Password - AWS Management Console access"&lt;/li&gt;
&lt;li&gt;select "Programmatic access" as we intend to use this key to programmatically access the resources. The second option allows user to login in to the console -which we do not want at this stage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;For getting started easily, we will attach policies directly. Click on the tab "Attach existing policies directly". These are policies provided by AWS for the most common permissions, like read-only access to certain resources, complete access to resources, etc. -we can select one of these. In the future, we can also edit and further extend it. Start typing "AmazonS3" in "Filter policies" and all predefined policies related to S3 will appear. Let's select "AmazonS3ReadOnlyAccess".&lt;/li&gt;
&lt;li&gt;Once a managed policy is attached, we are ready to create the user. Skip the tags for now.&lt;/li&gt;
&lt;li&gt;After the user is successfully created, the console navigates to a screen with the option to download credentials for programmatic access of AWS resources, within the boundary of the permissions provided by the policies. There are two parts

&lt;ul&gt;
&lt;li&gt;access key and secret key&lt;/li&gt;
&lt;li&gt;The access key is visible by default but the secret key is not. The keys should be downloaded and stored safely. Once we navigate away from the screen, there is no way to retrieve the secret key (the access key will always be visible).&lt;/li&gt;
&lt;li&gt;A new Access key and secret key can always be generated&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;But if we have thousands of users -we cannot keep attaching a policy to each and every user. Here comes the concept of groups. We create groups and add users to the group, then attach the policy to the group instead of to each individual user. Changing the policy at the group level will effectively change the policy for all of the users

&lt;ul&gt;
&lt;li&gt;We can create a new group, for example, "S3_read_only". Select "User groups" from the left panel on IAM services&lt;/li&gt;
&lt;li&gt;If you notice, there is an option to add existing users to this group as well as assign a policy at the group level. As mentioned earlier, once a policy is assigned at the group level, it is applicable to all the users in the group.&lt;/li&gt;
&lt;li&gt;Next time, when we add a user, instead of attaching a policy, we can simply add them to a group and that policy will be applicable to all the users in the group.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
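&lt;p&gt;The console steps above can be sketched programmatically with boto3 -the user and group names below are hypothetical, and actually running this requires credentials with IAM admin permissions:&lt;/p&gt;

```python
# Sketch of the console flow above with boto3: create a group, attach
# the AWS-managed S3 read-only policy, create a user, add the user to
# the group, and generate programmatic-access keys.
# User/group names are hypothetical examples.
S3_READ_ONLY_ARN = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"

def set_up_readonly_user(iam, user_name, group_name):
    """Create a group with S3 read-only access and add a new user to it."""
    iam.create_group(GroupName=group_name)
    iam.attach_group_policy(GroupName=group_name, PolicyArn=S3_READ_ONLY_ARN)
    iam.create_user(UserName=user_name)
    iam.add_user_to_group(GroupName=group_name, UserName=user_name)
    # The secret key is returned only once -store it safely.
    key = iam.create_access_key(UserName=user_name)["AccessKey"]
    return key["AccessKeyId"], key["SecretAccessKey"]

# import boto3
# iam = boto3.client("iam")
# key_id, secret = set_up_readonly_user(iam, "dev-user", "s3_read_only")
```

&lt;p&gt;Attaching the policy to the group rather than the user mirrors step 8: the next user only needs &lt;code&gt;add_user_to_group&lt;/code&gt;, not a fresh policy attachment.&lt;/p&gt;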

&lt;p&gt;Let's have a quick look at the policy. Search for the "IAM" service and select "User groups" from the left navigation. Select the created group, e.g., "s3_read_only_group", and go to the permissions tab. Click on the "+" on the policy as shown below&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rslu3obvtxaze7xzz55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rslu3obvtxaze7xzz55.png" alt="Policy" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will expand the policy as shown here&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytxpwfhmoiu78lt5lbkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytxpwfhmoiu78lt5lbkg.png" alt="Expanded Policy" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The policy is very easy to understand,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Line 5 indicates we are going to allow certain actions (lines 7 to 10) on the resources listed on line 12, for all the users who are part of this group&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lines 7 to 10 identify which actions are allowed -we can easily edit this to include or remove actions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Line 12 lists the resources; "*" indicates all the buckets in S3, but we can easily restrict this to a particular bucket, or to buckets with a certain name pattern&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
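&lt;p&gt;For reference, an S3 read-only policy document has roughly the following shape. This is a hedged reconstruction -the actual AWS-managed AmazonS3ReadOnlyAccess policy may list its actions differently:&lt;/p&gt;

```python
# Approximate shape of an S3 read-only policy document; the managed
# AmazonS3ReadOnlyAccess policy may enumerate actions differently.
import json

s3_read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",                  # allow the listed actions
        "Action": ["s3:Get*", "s3:List*"],  # read-only S3 operations
        "Resource": "*",                    # all buckets; can be narrowed
    }],
}

print(json.dumps(s3_read_only_policy, indent=2))
```

&lt;p&gt;To restrict the scope, replace &lt;code&gt;"*"&lt;/code&gt; in &lt;code&gt;Resource&lt;/code&gt; with a bucket ARN such as &lt;code&gt;arn:aws:s3:::my-bucket/*&lt;/code&gt; (a hypothetical bucket name).&lt;/p&gt;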

&lt;p&gt;I will conclude the blog post with this summary&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Resources -Services available on AWS which need to be secured&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Policies -A set of rules governing what operations are allowed on each resource by a particular identity (User or Program)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Users -or rather more correct word is Principal -actual users or programs which will be accessing the resources as per rules defined in the policies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Groups -A set of users grouped together based on some role, for example, all developers, all testers, sales, marketing, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some highlights&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;IAM is universal → it does not apply to a single Region. We will understand this more when we cover the global infrastructure of AWS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When a new user signs up for AWS, as we did during this tutorial, it is always the root user. Do not forget to set up MFA on the root account. Multi-factor authentication, single sign-on, enterprise integrations -these will be covered in the next iteration of IAM&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;New users are granted NO permissions when they are created, unless we attach an explicit policy&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will need to learn at least one more service, for example S3; then we can use these policies to access the resource and do some experimentation. I will be back soon with S3 and we will build our first application in the next session -so watch out.&lt;/p&gt;

&lt;p&gt;The following is a video which explains the navigation in the console in more detail! Thanks and bye for now.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/NliOrL1SMEE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Study plan for certification, AWS Solution Architect -Associate</title>
      <dc:creator>EightPLabs</dc:creator>
      <pubDate>Sun, 04 Dec 2022 05:47:59 +0000</pubDate>
      <link>https://forem.com/eightplabs/study-plan-for-certification-aws-solution-architect-associate-24o5</link>
      <guid>https://forem.com/eightplabs/study-plan-for-certification-aws-solution-architect-associate-24o5</guid>
      <description>&lt;p&gt;In my &lt;a href="https://eightplabs.com/are-certifications-money-wasted-or-invested"&gt;last&lt;/a&gt; blog post, I emphasized the importance of getting certified especially “deep certified” and not “dump certified”. As promised, the following is a quick plan which we will follow to get to the goal of learning AWS, and as a by-product, we will also get certified.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Iteration 1, timeline ~(8 articles and 4 weeks)&lt;/strong&gt;, The purpose of this iteration is to get our hands dirty rather than overload the mind with theoretical details. We will talk less about concepts and do more hands-on work. Don’t worry, we will revisit the concepts later, but this iteration will give you enough confidence to map your existing projects to cloud ones, and you can claim to be cloud-ready! A quick outline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand the basics of AWS global infrastructure.&lt;/li&gt;
&lt;li&gt;Understand the basics of AWS security philosophy -this is the most fundamental aspect when building for the cloud as we operate in a shared model.&lt;/li&gt;
&lt;li&gt;Understand how to use groups, roles, and policies to authenticate and authorize users.&lt;/li&gt;
&lt;li&gt;Apply IAM policies, this is the bare minimum to get started developing solutions.&lt;/li&gt;
&lt;li&gt;A basic CloudFormation script to create some users and resources via scripts.&lt;/li&gt;
&lt;li&gt;How to build APIs on Amazon -APIs are lifelines, so we must understand them.&lt;/li&gt;
&lt;li&gt;Understand some basic services like AWS Lambda, S3, DynamoDB, Athena, etc., which will get us going and at the same time will not be very heavy.&lt;/li&gt;
&lt;li&gt;Build a real application -this is going to be fun; I will take an existing on-premises application and try to convert it to a cloud-based one. I will share the code on GitHub.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Iteration 2, timeline ~(12 articles and 6 weeks)&lt;/strong&gt;, The purpose here will be to revise the last iteration and then build on some advanced concepts. By the end of this iteration, you will have a very good hold on designing a scalable solution in AWS by making the appropriate choice of technology among the many available.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Revision with some sample papers and tests&lt;/li&gt;
&lt;li&gt;Extend security concepts and learn about single sign-on, network security, app security, token-based authentication, etc.&lt;/li&gt;
&lt;li&gt;Learn more about building scalable architecture like on-demand scaling, managing a growing database, routing, content delivery, and many more.&lt;/li&gt;
&lt;li&gt;API management, horizontal scaling vs vertical scaling, event-driven architectures, serverless architectures, queue and messaging concepts -these all will help you build real-time scalable system design rather than just mugging up hypothetical system design courses&lt;/li&gt;
&lt;li&gt;How to make choices for different options like relational vs NoSQL or dynamo vs Athena -when to use what&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Iteration 3, timeline ~(4 articles and 2 weeks)&lt;/strong&gt;, This will be the last iteration on our learning path for the associate exam. We will cover some advanced theoretical concepts, no need to get hands dirty there -at least from a certification point of view. We will cover&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with revision papers and a summary of the last two iterations&lt;/li&gt;
&lt;li&gt;Data analytics solutions and concepts&lt;/li&gt;
&lt;li&gt;Geo redundancy, backup recovery, and other non-functional concepts&lt;/li&gt;
&lt;li&gt;A bit more internals of networking like Internet gateways, route tables, etc.&lt;/li&gt;
&lt;li&gt;Cost optimizations, AWS cost management tools&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Iteration 4, timeline ~(10 revision papers and 1 week)&lt;/strong&gt; -7 days and 10 papers, this will be the most comprehensive week from the perspective of exam preparations. We will concentrate more and more on practice.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ultimate goal of the AWS Solutions Architect exam is to crack the ratio in which its four domains are weighted.&lt;/p&gt;

&lt;p&gt;And we surely will, but instead of going waterfall, which is so old fashioned ;), we will hit the buzzword -agile! We will take a slice from all 4 domains in each of the iterations, build something, validate our previous knowledge, and then keep advancing. At the end of each iteration, we will be more and more cloud-ready and also a lot nearer to our goal of getting certified.&lt;/p&gt;

&lt;p&gt;I will publish these tutorials on YouTube as well; many are more comfortable with an interactive mode than with just text! This is the introductory video&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/XuMdKkESN0s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Watch out for the next blog and video covering IAM -no more plans and intros, let's get our hands dirty!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>certification</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Are certifications money wasted or invested?</title>
      <dc:creator>EightPLabs</dc:creator>
      <pubDate>Wed, 23 Nov 2022 03:19:56 +0000</pubDate>
      <link>https://forem.com/eightplabs/are-certifications-money-wasted-or-invested-1e4f</link>
      <guid>https://forem.com/eightplabs/are-certifications-money-wasted-or-invested-1e4f</guid>
      <description>&lt;p&gt;There have been a lot of debates around whether one should "invest" in certifications or just be away from "wasting" money there. I have been in the same boat and I do have at least 5 certifications (Java language, Java architect, AWSx2, Spring), so I feel I am qualified to share my view on this topic.&lt;/p&gt;

&lt;p&gt;For people in a hurry: YES, it is never "money wasted", it is always "money invested". One prominent argument against certification is that most people use dumps to get certified and certificates do not exhibit real knowledge -there is truth in that, but still, I would advocate getting one. Let me list the benefits of being certified&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resume selection&lt;/strong&gt; -this itself is a big advantage; many of us struggle to get that first call! In today's era of bots, which keywords are on the resume matters a lot! Bots will simply rank certified professionals higher -they cannot differentiate whether the certificate came from a dump or from deep knowledge. Even when a resume goes to a recruiter, they tend to look at buzzwords, the way we try to judge a doctor's knowledge by the number of diplomas on the board! Just human nature! The call itself is a big headstart, and it can be used iteratively to build on expectations. No exceptions -what follows is based on our knowledge, but at least we get going!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;One learns in the process:&lt;/strong&gt; Yes, if you just used dumps, you may not be able to justify the knowledge but it has given a headstart. Keep building based on interviews -understand the gap, and learn once onboarded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You are not lost in the ocean:&lt;/strong&gt; Have you tried looking at all the services offered by AWS and felt unsure of where to start and how to start? We literally get lost -certification provides a stepwise framework. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connect the dots:&lt;/strong&gt; This is a very important point. Certification provides a breadth, we know our options. We build on the minimal knowledge and then we can choose how deeper we want to go.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Helps with side gigs:&lt;/strong&gt; Many organizations, especially startups, cannot afford full-time employees. Having a certificate helps in landing that first gig; after that, it's all up to the person whether they have "dump" or "deep" knowledge. It can also help in teaching gigs; with the market becoming more competitive day by day, training and certification will definitely be a big area.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I rest my case -Let's keep it concise. &lt;/p&gt;

&lt;p&gt;I am a certified AWS professional solution architect. I am going to start a series of blogs targeting &lt;a href="https://aws.amazon.com/certification/certified-solutions-architect-associate/?ch=sec&amp;amp;sec=rmg&amp;amp;d=1"&gt;AWS solution architect -Associate&lt;/a&gt; and then will move on to &lt;a href="https://aws.amazon.com/certification/certified-solutions-architect-professional/?ch=sec&amp;amp;sec=rmg&amp;amp;d=1"&gt;AWS solution architect -Professional&lt;/a&gt;. No, this is not going to be #AWeek or #30Days; we will go deep, starting right from where to begin and what minimal knowledge is needed to be called a cloud professional. Here is a bit of an outline of how we will progress:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publish Plan&lt;/li&gt;
&lt;li&gt;Publish gradual reading material&lt;/li&gt;
&lt;li&gt;Weekly live sessions on zoom&lt;/li&gt;
&lt;li&gt;Career Guidance and resume consultation &lt;/li&gt;
&lt;li&gt;Practice papers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what are you waiting for? Do connect with me on &lt;a href="https://www.linkedin.com/in/eightplabs/"&gt;LinkedIn&lt;/a&gt;; this will be fun and it's all free -so even more fun!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certifications</category>
      <category>career</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
