<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Simranjeet Singh</title>
    <description>The latest articles on Forem by Simranjeet Singh (@singhs020).</description>
    <link>https://forem.com/singhs020</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F312775%2Fe7f8ec52-3b74-459b-b444-012d1ad06ca8.png</url>
      <title>Forem: Simranjeet Singh</title>
      <link>https://forem.com/singhs020</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/singhs020"/>
    <language>en</language>
    <item>
      <title>Understanding Attribute Types in DynamoDB</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Sat, 22 Jul 2023 06:14:01 +0000</pubDate>
      <link>https://forem.com/singhs020/understanding-attribute-types-in-dynamodb-48ok</link>
      <guid>https://forem.com/singhs020/understanding-attribute-types-in-dynamodb-48ok</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0GUcV-pz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AB7c7byxiWvuZQZ8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0GUcV-pz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AB7c7byxiWvuZQZ8d.png" alt="awsmag.com" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Amazon DynamoDB, attribute types define the nature and format of the data stored within attributes. DynamoDB provides a variety of attribute types to accommodate different data requirements and use cases. By selecting the appropriate attribute types, you can ensure efficient storage, retrieval, and querying of data. In this section, we’ll explore the attribute types supported by DynamoDB and discuss their characteristics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalar Types:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;String: Represents a sequence of Unicode characters. A string’s length is constrained only by DynamoDB’s 400 KB maximum item size. Commonly used for storing textual data such as names, descriptions, or identifiers.&lt;/li&gt;
&lt;li&gt;Number: Represents a numeric value, which can be either an integer or a floating-point number. Numbers can be positive, negative, or zero.&lt;/li&gt;
&lt;li&gt;Binary: Represents binary data, such as images, audio files, or serialized objects. Binary attributes store raw byte arrays.&lt;/li&gt;
&lt;/ul&gt;
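&lt;p&gt;As a minimal sketch, the scalar types above map to single-letter type descriptors in DynamoDB’s low-level JSON format: “S” for string, “N” for number (always serialized as a string), and “B” for binary (base64-encoded text). The attribute names below are illustrative:&lt;/p&gt;

```python
import base64

# Each low-level value is a one-key dict whose key is the type descriptor.
item = {
    "username":   {"S": "singhs020"},   # String
    "loginCount": {"N": "42"},          # Number: serialized as a string
    "avatar":     {"B": base64.b64encode(b"\x89PNG").decode()},  # Binary: base64 text
}

for name, value in item.items():
    type_descriptor = next(iter(value))  # "S", "N", or "B"
    print(name, "->", type_descriptor)
```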

&lt;p&gt;&lt;strong&gt;Set Types:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;String Set: Represents an unordered collection of unique string values. Suitable for scenarios where you need to store multiple, distinct string values.&lt;/li&gt;
&lt;li&gt;Number Set: Represents an unordered collection of unique numeric values. Useful for scenarios that require storing multiple, distinct numeric values.&lt;/li&gt;
&lt;li&gt;Binary Set: Represents an unordered collection of unique binary values. Ideal for scenarios involving multiple, distinct binary data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Document Types:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List: Represents an ordered collection of elements. Each element within the list can be of any DynamoDB data type, including other lists and maps. Lists are useful for representing ordered data structures.&lt;/li&gt;
&lt;li&gt;Map: Represents an unordered collection of key-value pairs. The keys within a map must be unique, and the values can be of any DynamoDB data type, including other maps and lists. Maps enable flexible and nested data structures.&lt;/li&gt;
&lt;/ul&gt;
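&lt;p&gt;In the same low-level format, the set and document types above nest as follows (a sketch with illustrative attribute names; “SS”/“NS” are sets of unique values, “L” is an ordered list, and “M” is a map):&lt;/p&gt;

```python
# Set types hold unique scalars; document types (L, M) nest arbitrarily deep.
item = {
    "tags":   {"SS": ["aws", "dynamodb", "serverless"]},  # String Set
    "scores": {"NS": ["10", "25"]},                       # Number Set
    "events": {"L": [                                     # List: ordered, mixed types
        {"S": "created"},
        {"M": {"name": {"S": "updated"}, "at": {"N": "1689984841"}}},
    ]},
    "address": {"M": {                                    # Map: nested key-value pairs
        "city":     {"S": "London"},
        "postcode": {"S": "SW1A"},
    }},
}

# Nested values are reached by walking the type descriptors.
print(item["events"]["L"][1]["M"]["name"]["S"])  # -> updated
```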

&lt;p&gt;Each attribute within an item can have a specific data type. DynamoDB is schema-less, allowing different items within the same table to have different attributes and attribute types. This flexibility enables you to adapt to changing data requirements without modifying the table structure.&lt;/p&gt;

&lt;p&gt;Choosing the appropriate attribute types is important for optimizing storage, query performance, and cost efficiency. By accurately representing the data type, you can leverage DynamoDB’s indexing and querying capabilities effectively.&lt;/p&gt;

&lt;p&gt;Additionally, DynamoDB supports a rich set of operations and functions for manipulating attribute values, such as comparisons, filtering, concatenation, and more. These operations provide flexibility in working with attribute values and enable powerful query capabilities.&lt;/p&gt;

&lt;h3&gt;Conclusion:&lt;/h3&gt;

&lt;p&gt;Understanding attribute types in DynamoDB is crucial for accurately representing and manipulating data within your tables. By selecting the appropriate attribute types for your data, you can optimize storage, query performance, and cost efficiency. DynamoDB’s support for scalar types, set types, and document types offers a wide range of options to handle different data scenarios.&lt;/p&gt;

&lt;p&gt;In the next article, we will explore how to model relationships between items in DynamoDB and discuss various strategies for handling one-to-one, one-to-many, and many-to-many relationships. Stay tuned for more insights and best practices on working with Amazon DynamoDB!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://awsmag.com/understanding-attribute-types-in-dynamodb/"&gt;&lt;em&gt;https://awsmag.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on July 22, 2023.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>DynamoDB Data Model: Tables, Items, and Attributes Explained</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Sun, 16 Jul 2023 13:11:31 +0000</pubDate>
      <link>https://forem.com/singhs020/dynamodb-data-model-tables-items-and-attributes-explained-16dk</link>
      <guid>https://forem.com/singhs020/dynamodb-data-model-tables-items-and-attributes-explained-16dk</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vTR8XPXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AhkDsShkMH6ymfIsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vTR8XPXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AhkDsShkMH6ymfIsq.png" alt="AWSMag.com" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When working with Amazon DynamoDB, it’s essential to understand the data model it employs. DynamoDB, a fully managed NoSQL database service offered by Amazon Web Services (AWS), uses a schema-less and flexible data model, allowing for rapid and scalable application development. In this section, we’ll provide an introduction to DynamoDB’s data model, exploring the key concepts and components that form its foundation.&lt;/p&gt;

&lt;p&gt;At the core of DynamoDB’s data model are three key elements: tables, items, and attributes. Let’s take a closer look at each of these components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tables&lt;/strong&gt; : In DynamoDB, data is organized into tables, which serve as containers for storing and managing related information. Tables consist of a collection of items and have a primary key that uniquely identifies each item within the table. Tables are schema-less, meaning that each item in a table can have a different set of attributes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Items&lt;/strong&gt; : Items represent individual records within a DynamoDB table. Each item is a collection of attributes, which can vary in number and type between different items in the same table. Items are analogous to rows in a traditional relational database but provide more flexibility since they do not require a fixed schema.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attributes&lt;/strong&gt; : Attributes are the key-value pairs that make up the data stored within DynamoDB. Each item can have one or more attributes, where the attribute name represents the key, and the attribute value represents the corresponding value. DynamoDB supports different attribute types, including numbers, strings, binary data, sets, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;DynamoDB Tables: The Foundation of Data Storage&lt;/h3&gt;

&lt;p&gt;In Amazon DynamoDB, tables serve as the foundation for storing and managing your data. They act as containers that organize and structure your information. Understanding DynamoDB tables is essential for effectively working with the service and building scalable and high-performing applications. In this section, we’ll explore DynamoDB tables in detail and discuss their role in data storage.&lt;/p&gt;

&lt;p&gt;Key Characteristics of DynamoDB Tables:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Schema-less Nature&lt;/strong&gt; : Unlike traditional relational databases, DynamoDB tables are schema-less. This means that each item within a table can have a different set of attributes, offering flexibility in data modelling. You can easily add, remove, or modify attributes for different items without altering the table’s structure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Primary Key&lt;/strong&gt; : Every DynamoDB table has a primary key that uniquely identifies each item within the table. The primary key can be of two types:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Partition Key (Hash Key):&lt;/strong&gt; A single attribute that determines the partition in which an item is stored. DynamoDB uses the partition key value to distribute data across multiple storage nodes for scalability and performance. Partition keys should have a high cardinality to evenly distribute the workload.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composite Primary Key (Partition Key + Sort Key)&lt;/strong&gt;: In addition to the partition key, a sort key (also known as the range key) allows for the efficient querying and sorting of items within a table. The combination of the partition key and sort key creates a unique identifier for each item.&lt;/li&gt;
&lt;/ul&gt;
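&lt;p&gt;As a sketch, a table with a composite primary key is declared by naming both key attributes in the key schema. The table and attribute names here are illustrative, and the boto3 call is shown only as a comment:&lt;/p&gt;

```python
# Request parameters for a table keyed by customerId (partition) + orderDate (sort).
create_table_params = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "customerId", "AttributeType": "S"},
        {"AttributeName": "orderDate",  "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "customerId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "orderDate",  "KeyType": "RANGE"},  # sort key
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity
}

# With boto3 this would be executed as:
# boto3.client("dynamodb").create_table(**create_table_params)
```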

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Data Distribution and Scalability&lt;/strong&gt; : DynamoDB automatically distributes data across multiple partitions based on the partition key to achieve high scalability and performance. As data grows, DynamoDB transparently manages the distribution of data across partitions, allowing your applications to handle increasing workloads without manual intervention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Replication&lt;/strong&gt; : DynamoDB automatically replicates data across multiple Availability Zones within a region to ensure high availability and durability. This replication strategy provides fault tolerance and protects against data loss in the event of a failure in one Availability Zone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Capacity&lt;/strong&gt; : DynamoDB offers flexible capacity management. You can specify the desired provisioned throughput capacity (measured in read and write capacity units) when creating a table. Provisioned capacity ensures that your application can handle the expected workload. Additionally, DynamoDB can automatically scale your table’s capacity up or down based on demand using auto scaling, providing cost optimization and elasticity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global Tables&lt;/strong&gt; : For globally distributed applications, DynamoDB offers Global Tables, which replicate tables across multiple AWS regions. Global Tables enable low-latency access to data from any region and provide redundancy and disaster recovery capabilities.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Designing DynamoDB Tables:&lt;/h3&gt;

&lt;p&gt;When designing DynamoDB tables, it’s crucial to consider your application’s access patterns, query requirements, and data relationships. Effective table design can significantly impact query performance and cost efficiency. Factors to consider include choosing appropriate attribute types, defining primary key structures, and leveraging secondary indexes to support various access patterns.&lt;/p&gt;

&lt;p&gt;DynamoDB provides flexible and scalable data storage, allowing you to focus on developing your applications without worrying about managing underlying infrastructure. By leveraging DynamoDB’s powerful table capabilities, you can build robust, high-performance, and scalable solutions to meet your data storage and retrieval needs.&lt;/p&gt;

&lt;h3&gt;Items: Individual Records within DynamoDB Tables&lt;/h3&gt;

&lt;p&gt;In Amazon DynamoDB, items represent individual records within a table. They are the fundamental units of data storage and retrieval. Understanding how items are structured and how they relate to tables is essential for effectively working with DynamoDB. In this section, we’ll explore items in detail and discuss their role in storing and retrieving data.&lt;/p&gt;

&lt;h3&gt;Key Characteristics of DynamoDB Items:&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Attribute-Value Pairs&lt;/strong&gt; : Each item in a DynamoDB table consists of one or more attribute-value pairs. Attributes represent the keys, and their corresponding values represent the data associated with those keys. Attributes can be of various data types, such as strings, numbers, binary data, sets, or documents, offering flexibility in data modeling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema-less Nature&lt;/strong&gt; : DynamoDB’s schema-less nature extends to items as well. Each item within a table can have a different set of attributes. This allows you to store and retrieve items with varying attributes, without the need for a fixed table schema. As your application evolves, you can add or remove attributes from items without impacting other items in the table.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Primary Key&lt;/strong&gt; : Every item in a DynamoDB table is uniquely identified by its primary key. The primary key can be either a partition key (also known as a hash key) or a composite key consisting of a partition key and a sort key (also known as a range key). The primary key ensures the uniqueness and efficient retrieval of items within the table.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accessing Items&lt;/strong&gt; : You can access items in DynamoDB using the primary key. Retrieving an item by its primary key is an efficient operation, providing fast and predictable access to data. DynamoDB supports both single-item retrieval and batch retrieval for multiple items.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Data Modeling:&lt;/strong&gt; DynamoDB’s flexible data model allows you to store heterogeneous items within the same table. Items can have different attributes, allowing you to represent diverse data structures. This flexibility enables you to adapt to evolving application requirements and easily incorporate new attributes or data types.&lt;/li&gt;
&lt;/ol&gt;
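&lt;p&gt;A single-item read addresses the item by its full primary key. The following is a minimal sketch (table and key names are illustrative; the boto3 call is shown as a comment):&lt;/p&gt;

```python
# Fetching one item requires every part of the primary key.
get_item_params = {
    "TableName": "Orders",
    "Key": {
        "customerId": {"S": "cust-123"},    # partition key value
        "orderDate":  {"S": "2023-07-16"},  # sort key value
    },
    "ConsistentRead": False,  # eventually consistent read (the default)
}

# boto3.client("dynamodb").get_item(**get_item_params)
```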

&lt;h3&gt;Designing DynamoDB Items:&lt;/h3&gt;

&lt;p&gt;When designing items in DynamoDB, it’s important to consider the access patterns and query requirements of your application. The attributes you choose and the way you structure your items can significantly impact the efficiency of data retrieval operations. It’s often beneficial to denormalize your data and include all necessary attributes within an item to minimize the need for additional queries.&lt;/p&gt;

&lt;p&gt;Additionally, you can leverage DynamoDB’s support for secondary indexes to enhance query flexibility. Secondary indexes allow you to define additional attributes as alternate keys for querying items based on different access patterns.&lt;/p&gt;

&lt;p&gt;By effectively structuring and modeling your items, you can optimize data retrieval, ensure scalability, and minimize the need for complex joins or multiple round-trip operations.&lt;/p&gt;

&lt;h3&gt;Attributes: Key-Value Pairs of Data&lt;/h3&gt;

&lt;p&gt;In Amazon DynamoDB, attributes are the fundamental components that make up the data stored within items. They are represented as key-value pairs, where the attribute name acts as the key, and the attribute value represents the corresponding data associated with that key. Attributes play a crucial role in the structure, organization, and retrieval of data in DynamoDB. Let’s explore attributes in more detail.&lt;/p&gt;

&lt;p&gt;Key Characteristics of DynamoDB Attributes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Key-Value Structure&lt;/strong&gt; : Attributes in DynamoDB follow a key-value structure, where the attribute name serves as the key and the attribute value holds the corresponding data. This structure allows for flexible and dynamic data modeling since the attributes associated with an item can vary in number and type.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attribute Names:&lt;/strong&gt; Attribute names are used to identify and access specific pieces of data within an item. Attribute names must be unique within an item and should follow the naming rules defined by DynamoDB, such as being case-sensitive and avoiding reserved words.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attribute Values&lt;/strong&gt; : Attribute values hold the actual data associated with the attribute name. DynamoDB supports various data types for attribute values, including strings, numbers, binary data, sets, documents, and more. The choice of data type depends on the nature of the data being stored and the desired operations to be performed on that data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Data Modeling&lt;/strong&gt; : DynamoDB’s flexible data model allows you to include different attributes within an item. Each item can have its own set of attributes, providing a schema-less approach. This flexibility allows you to add or remove attributes dynamically, adapting to changing requirements without altering the table’s structure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nested Attributes&lt;/strong&gt; : DynamoDB also supports nested data structures within items through the List and Map types. In expressions, you reference nested elements using document paths, with attribute names separated by dots (e.g., “address.city”). This enables you to store and retrieve hierarchical data within a single item, simplifying data representation and retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attribute Size Limitations&lt;/strong&gt; : DynamoDB imposes size limits on attributes. Names of key and index attributes are limited to 255 characters, and the combined size of an item’s attribute names and values cannot exceed 400 KB. It’s important to consider these limits and ensure that your data fits within the allowed boundaries.&lt;/li&gt;
&lt;/ol&gt;
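&lt;p&gt;A sketch of a document path in practice: the update below sets a nested map element by joining attribute names with a dot (table, key, and attribute names are illustrative; the boto3 call is shown as a comment):&lt;/p&gt;

```python
# "address.city" is a document path into a Map attribute named "address".
update_params = {
    "TableName": "Users",
    "Key": {"username": {"S": "singhs020"}},
    "UpdateExpression": "SET address.city = :c",
    "ExpressionAttributeValues": {":c": {"S": "Manchester"}},
}

# boto3.client("dynamodb").update_item(**update_params)
```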

&lt;p&gt;Attributes in DynamoDB provide a versatile and dynamic approach to storing and accessing data. By leveraging the key-value structure and flexible data modeling capabilities, you can design efficient data models and adapt to changing requirements easily.&lt;/p&gt;

&lt;h3&gt;Conclusion:&lt;/h3&gt;

&lt;p&gt;In this comprehensive guide, we’ve delved into the essential components of the DynamoDB data model, understanding how tables, items, and attributes work together to form a robust and flexible NoSQL database. We’ve explored the key concepts behind each element and how they influence data organization, retrieval, and scalability in DynamoDB.&lt;/p&gt;

&lt;p&gt;Tables serve as the foundation of data storage, and in an upcoming post we will learn how to design effective table structures by defining primary keys, choosing the right attributes, and considering the benefits of Local Secondary Indexes (LSIs) and Global Secondary Indexes (GSIs). With this knowledge, you can create well-optimized DynamoDB tables that cater to diverse access patterns and query requirements.&lt;/p&gt;

&lt;p&gt;Items represent individual records within tables, and we’ve discussed their uniqueness, size considerations, and the significance of primary keys in locating specific items efficiently. By mastering items, you can ensure quick and precise data retrieval, critical for delivering a responsive user experience in your applications.&lt;/p&gt;

&lt;p&gt;Attributes are the key-value pairs that define the data within items, and we’ve explored various attribute types, including scalar types, set types, and nested attributes. Understanding attribute types is fundamental to modeling your data effectively and utilizing DynamoDB’s full capabilities.&lt;/p&gt;

&lt;p&gt;As you embark on your journey with DynamoDB, we encourage you to dive deeper into its vast potential. DynamoDB is more than just a database; it’s a game-changer for modern application development. By harnessing its power, you can build scalable, high-performance applications that cater to the ever-changing demands of your users.&lt;/p&gt;

&lt;p&gt;For the latest insights, expert tips, and practical examples on working with DynamoDB and other AWS services, we invite you to join our growing community at AWSMAG. By signing up today, you’ll gain access to exclusive content, in-depth tutorials, and stay updated with the latest trends and advancements in the AWS ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://awsmag.com/dynamodb-data-model-tables-items-and-attributes-explained/"&gt;&lt;em&gt;https://awsmag.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on July 16, 2023.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Getting Started with Amazon DynamoDB: A Beginner’s Guide</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Sat, 01 Jul 2023 07:10:08 +0000</pubDate>
      <link>https://forem.com/singhs020/getting-started-with-amazon-dynamodb-a-beginners-guide-5d0b</link>
      <guid>https://forem.com/singhs020/getting-started-with-amazon-dynamodb-a-beginners-guide-5d0b</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bAR_1Axt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AW_7sZnTRXQwmyNtx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bAR_1Axt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AW_7sZnTRXQwmyNtx.png" alt="AWSMAG.com" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon DynamoDB is a fully managed NoSQL database service provided by Amazon Web Services (AWS). It is designed to provide fast and predictable performance with seamless scalability for applications that require low-latency, high-performance data storage. DynamoDB is built to handle massive workloads and is capable of scaling horizontally across multiple servers to accommodate growing data volumes and traffic.&lt;/p&gt;

&lt;p&gt;DynamoDB offers a flexible, key-value store model where data is organized into tables. Each table consists of multiple items, and each item is a collection of attributes. Unlike traditional relational databases, DynamoDB does not require a fixed schema, allowing for dynamic and agile data modeling. This flexibility makes it well-suited for applications with evolving data requirements.&lt;/p&gt;

&lt;p&gt;Key features of Amazon DynamoDB include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; : DynamoDB automatically scales its storage and throughput capacity to accommodate the workload demands. It can handle millions of requests per second and scales up or down seamlessly to meet changing traffic patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt; : DynamoDB provides low-latency, single-digit millisecond response times, making it ideal for applications that require real-time data access and fast query performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fully Managed&lt;/strong&gt; : AWS handles the operational aspects of DynamoDB, such as hardware provisioning, software patching, and infrastructure management. This allows developers to focus on building applications without worrying about the underlying infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Durability and Availability&lt;/strong&gt; : DynamoDB replicates data across multiple Availability Zones within a region to ensure high availability and durability. It provides built-in fault tolerance, automatic data backups, and multi-region replication capabilities for global deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich Querying&lt;/strong&gt; : DynamoDB supports fast and efficient querying using primary keys and secondary indexes. It offers various querying options, including key-value access, range queries, and filtering capabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built-in Security&lt;/strong&gt; : DynamoDB offers robust security features, including fine-grained access control using AWS Identity and Access Management (IAM), encryption at rest with AWS Key Management Service (KMS), and network isolation using Amazon Virtual Private Cloud (VPC).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB Streams&lt;/strong&gt; : This feature captures and provides a time-ordered sequence of item-level modifications in a DynamoDB table. Streams enable real-time data processing, replication to other services, and event-driven architectures.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Amazon DynamoDB is commonly used for a wide range of applications, such as e-commerce, gaming, mobile, ad tech, IoT, and more. Its flexible scalability, high performance, and serverless architecture make it a popular choice for developers who want to focus on building their applications while offloading the burden of managing databases at scale.&lt;/p&gt;

&lt;p&gt;Overall, DynamoDB provides a reliable and feature-rich NoSQL database solution in the AWS ecosystem, allowing developers to build scalable and responsive applications with ease.&lt;/p&gt;

&lt;h3&gt;DynamoDB’s Data Model&lt;/h3&gt;

&lt;p&gt;When working with Amazon DynamoDB, it’s essential to understand the data model it employs. DynamoDB, a fully managed NoSQL database service offered by Amazon Web Services (AWS), uses a schema-less and flexible data model, allowing for rapid and scalable application development. In this section, we’ll provide an introduction to DynamoDB’s data model, exploring the key concepts and components that form its foundation.&lt;/p&gt;

&lt;p&gt;At the core of DynamoDB’s data model are three key elements: tables, items, and attributes. Let’s take a closer look at each of these components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tables&lt;/strong&gt; : In DynamoDB, data is organized into tables, which serve as containers for storing and managing related information. Tables consist of a collection of items and have a primary key that uniquely identifies each item within the table. Tables are schema-less, meaning that each item in a table can have a different set of attributes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Items&lt;/strong&gt; : Items represent individual records within a DynamoDB table. Each item is a collection of attributes, which can vary in number and type between different items in the same table. Items are analogous to rows in a traditional relational database but provide more flexibility since they do not require a fixed schema.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attributes&lt;/strong&gt; : Attributes are the key-value pairs that make up the data stored within DynamoDB. Each item can have one or more attributes, where the attribute name represents the key, and the attribute value represents the corresponding value. DynamoDB supports different attribute types, including numbers, strings, binary data, sets, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To uniquely identify items within a table, DynamoDB uses a primary key. The primary key can be of two types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Partition Key&lt;/strong&gt; : Also known as the hash key, it is a single attribute that DynamoDB uses to distribute data across multiple partitions for scalability and performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composite Primary Key&lt;/strong&gt; : In addition to the partition key, a composite primary key includes a sort key (also known as the range key). The combination of the partition key and sort key allows for efficient querying and sorting of items within a table.&lt;/li&gt;
&lt;/ul&gt;
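&lt;p&gt;With a composite primary key, a query pins the partition key to one value and can range over the sort key. A minimal sketch with illustrative names (the boto3 call is shown only as a comment):&lt;/p&gt;

```python
# Query one customer's orders within a date range (sort key BETWEEN).
query_params = {
    "TableName": "Orders",
    "KeyConditionExpression": "customerId = :c AND orderDate BETWEEN :s AND :e",
    "ExpressionAttributeValues": {
        ":c": {"S": "cust-123"},
        ":s": {"S": "2023-07-01"},
        ":e": {"S": "2023-07-31"},
    },
}

# boto3.client("dynamodb").query(**query_params)
```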

&lt;p&gt;By leveraging primary keys and secondary indexes, DynamoDB provides powerful querying capabilities. Secondary indexes allow you to define additional attributes to support different access patterns, enhancing query flexibility and performance.&lt;/p&gt;

&lt;p&gt;Understanding DynamoDB’s data model is crucial for effectively designing and working with your database. It empowers you to make informed decisions about table structures, key designs, and access patterns to optimize performance and scalability for your applications.&lt;/p&gt;

&lt;p&gt;In the upcoming sections of this blog series, we will delve deeper into each of these components, explore advanced data modeling techniques, and provide best practices to help you make the most out of DynamoDB.&lt;/p&gt;

&lt;p&gt;Stay tuned for our next article, where we’ll discuss DynamoDB tables and their role in organizing data. Subscribe to our blog to receive updates and never miss a post!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://awsmag.com/getting-started-with-amazon-dynamodb-a-beginners-guide/"&gt;&lt;em&gt;https://awsmag.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on July 1, 2023.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>serverless</category>
      <category>aws</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>Amazon API Gateway HTTP Errors</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Mon, 24 Jan 2022 06:07:00 +0000</pubDate>
      <link>https://forem.com/singhs020/amazon-api-gateway-http-errors-32mb</link>
      <guid>https://forem.com/singhs020/amazon-api-gateway-http-errors-32mb</guid>
      <description>&lt;p&gt;&lt;a href="https://awsmag.com/introduction-to-api-gateway/"&gt;Amazon API Gateway&lt;/a&gt; is a fully managed service that helps developers to create and deploy scalable APIs on AWS. These APIs act as an entry point for the applications to connect and get access to data, perform business logic or access any other AWS service.&lt;/p&gt;

&lt;p&gt;Amazon API Gateway also returns HTTP errors, and in this blog post we will discuss some of them and what they mean when returned from Amazon API Gateway. The errors returned are usually in the 4xx or 5xx range, for example 400 or 500. As a rule of thumb, errors in the range 400–499 indicate a problem with the client request, such as breaking one of the rules defined by Amazon API Gateway.&lt;/p&gt;

&lt;p&gt;Errors in the range 500–599 indicate a problem on the server side: the backend is failing, or there is an issue with the network or the infrastructure that runs your service.&lt;/p&gt;

&lt;h3&gt;400 Error: Bad Request&lt;/h3&gt;

&lt;p&gt;The HTTP status 400: Bad Request is the broadest error, and depending on which AWS service API Gateway integrates with, it can mean many things. Common causes include invalid JSON, wrong data types, and missing required fields.&lt;/p&gt;

&lt;h3&gt;
  
  
  403 Error: Access Denied
&lt;/h3&gt;

&lt;p&gt;The HTTP Status 403: Forbidden means there is a permission issue. In AWS, this can be caused by a wrong IAM role configuration. If your service uses an auth mechanism such as Amazon Cognito or a custom authorizer, a failed authorization check will also return this error code.&lt;/p&gt;

&lt;h3&gt;
  
  
  404 Error: Not Found
&lt;/h3&gt;

&lt;p&gt;The HTTP Status 404: Not Found means the resource is not available or the URL does not exist. Check that the URL is correct and that the resource has actually been implemented and deployed.&lt;/p&gt;

&lt;h3&gt;
  
  
  409 Error: Conflict
&lt;/h3&gt;

&lt;p&gt;The HTTP Status 409: Conflict indicates that your request is trying to do something that conflicts with the current state of the target resource. It is most likely to occur in response to a PUT request.&lt;/p&gt;

&lt;h3&gt;
  
  
  429 Error: Too Many Requests
&lt;/h3&gt;

&lt;p&gt;There are two cases when you can receive 429 errors from API Gateway.&lt;/p&gt;

&lt;p&gt;The first is HTTP Status 429: “Too Many Requests”. This usually happens when the downstream resource is not able to handle the number of requests coming in.&lt;/p&gt;

&lt;p&gt;For example, if you have a Lambda function triggered via API Gateway with a reserved concurrency of, say, 20, then 21 requests arriving at the same time will probably give you this error.&lt;/p&gt;

&lt;p&gt;This can also happen if your API key does not allow more than a certain number of concurrent requests. If the number of requests exceeds that limit, API Gateway will return this error even if the downstream resource could handle the load.&lt;/p&gt;

&lt;h3&gt;
  
  
  429 Error: Limit Exceeded
&lt;/h3&gt;

&lt;p&gt;The second HTTP Status 429 is “Limit Exceeded Exception”, which means that you have exceeded the allowed number of requests. This happens when the request is metered using an API key in API Gateway: a usage plan is associated with the key, and the plan decides how many requests that particular caller is allowed in a given period, such as a month.&lt;/p&gt;
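&lt;p&gt;As an illustration, the quota and throttling limits mentioned above live in a usage plan attached to the API key. A rough Terraform sketch (the resource names and limit values here are hypothetical, not taken from any specific API) could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical usage plan: quota of 10,000 requests/month plus throttling
resource "aws_api_gateway_usage_plan" "example" {
  name = "example-usage-plan"

  api_stages {
    api_id = aws_api_gateway_rest_api.example.id
    stage  = aws_api_gateway_stage.example.stage_name
  }

  quota_settings {
    limit  = 10000
    period = "MONTH"
  }

  throttle_settings {
    burst_limit = 20
    rate_limit  = 10
  }
}

# Attach an API key to the plan so its requests are metered
resource "aws_api_gateway_usage_plan_key" "example" {
  key_id        = aws_api_gateway_api_key.example.id
  key_type      = "API_KEY"
  usage_plan_id = aws_api_gateway_usage_plan.example.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A caller whose key exceeds the quota in the plan receives this 429 response.&lt;/p&gt;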

&lt;h3&gt;
  
  
  500 Error: Internal Server Error
&lt;/h3&gt;

&lt;p&gt;HTTP Status 500: Internal Server Error is the most generic HTTP error you will see. If the downstream service is Lambda, this error can indicate a bug or an unhandled exception in the function code.&lt;/p&gt;

&lt;p&gt;This can also happen if the status code mapping in the API is wrong. If the error mapping is not configured properly, the default status code returned to the client is HTTP Status 500.&lt;/p&gt;

&lt;h3&gt;
  
  
  502 Error: Bad Gateway
&lt;/h3&gt;

&lt;p&gt;HTTP Status 502: Bad Gateway usually happens when the downstream service returns a response that API Gateway cannot parse or map. It can also occur when the downstream service is not ready and cannot return a response.&lt;/p&gt;

&lt;p&gt;Amazon API Gateway also has a hard integration timeout of 29 seconds. If the downstream service is not able to respond within this time frame, API Gateway returns HTTP Status 504: Gateway Timeout.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The codes mentioned above are some of the common errors you may encounter while working with Amazon API Gateway. If you would like to learn more about Amazon API Gateway, we have a collection that lists all the related articles. You can find it &lt;a href="https://awsmag.com/tag/amazon-api-gateway/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://awsmag.com/amazon-api-gateway-http-errors/"&gt;&lt;em&gt;https://awsmag.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on January 24, 2022.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>awsapigateway</category>
    </item>
    <item>
      <title>What is Amazon Cognito?</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Tue, 04 Jan 2022 12:43:15 +0000</pubDate>
      <link>https://forem.com/singhs020/what-is-amazon-cognito-5acc</link>
      <guid>https://forem.com/singhs020/what-is-amazon-cognito-5acc</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WxPMNTeM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AhxlfBJC-SE_I3MEr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WxPMNTeM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AhxlfBJC-SE_I3MEr.png" alt="Cognito" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most mobile and web apps require a user management solution to manage and authenticate users before giving them access to restricted areas of the app. Building a user management system from scratch is a big task and requires a deep understanding of handling PII (personally identifiable information) data. Amazon Cognito provides this solution to developers: using Amazon Cognito, you can manage and authenticate your users before giving them access to the restricted areas of your app.&lt;/p&gt;

&lt;p&gt;Amazon Cognito provides authentication, authorization, and user management as a service for your web and mobile apps. It allows users to create an account with a username and password, supports multi-factor authentication (MFA), and also lets them log in using third-party providers such as Facebook and Google.&lt;/p&gt;


&lt;h3&gt;
  
  
  Concepts in Amazon Cognito
&lt;/h3&gt;

&lt;p&gt;The two main concepts of Amazon Cognito are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;User Pools&lt;/strong&gt; are user directories in Amazon Cognito which provide sign-up and sign-in options for your users. Users can create their accounts and use the credentials to log in to your application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identity Pools&lt;/strong&gt; allows you to grant access to your users so that they can access other AWS services.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What are user pools in Cognito?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;User Pools&lt;/strong&gt; are user directories in Amazon Cognito which provide sign-up and sign-in options for your users. Users can create their accounts and use the credentials to log in to your application. A user pool also allows your users to federate through a third-party identity provider (IdP) like Facebook or Google. Whether your users create a password-based account or sign up through a third party, all of them become members of the user pool, and every member has a directory profile that you can access through an SDK.&lt;/p&gt;

&lt;p&gt;Following are the features of a User Pool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your users can sign up and sign in to your app.&lt;/li&gt;
&lt;li&gt;Amazon Cognito has a built-in, customizable web UI for signing in to your app.&lt;/li&gt;
&lt;li&gt;You can use social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple with your user pool.&lt;/li&gt;
&lt;li&gt;There is also an option for sign-in through SAML and OIDC identity providers.&lt;/li&gt;
&lt;li&gt;User pools are user directories with features that help you manage your users and their profiles easily.&lt;/li&gt;
&lt;li&gt;User pools have security features like multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.&lt;/li&gt;
&lt;li&gt;Customized workflows for user sign-up and sign-in are also available through AWS Lambda triggers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;User pools, along with identity pools, allow your application to federate using a third-party provider and save the information in your user directory. With identity pools, you can also grant users temporary access to AWS services like S3 or DynamoDB.&lt;/p&gt;
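&lt;p&gt;As a rough illustration, a basic user pool with an app client can be defined in Terraform. This is only a sketch with hypothetical names and settings, not a production-ready configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical minimal user pool with email verification and an app client
resource "aws_cognito_user_pool" "example" {
  name = "example-user-pool"

  auto_verified_attributes = ["email"]

  password_policy {
    minimum_length    = 8
    require_lowercase = true
    require_uppercase = true
    require_numbers   = true
    require_symbols   = true
  }
}

resource "aws_cognito_user_pool_client" "example" {
  name         = "example-app-client"
  user_pool_id = aws_cognito_user_pool.example.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;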

&lt;h3&gt;
  
  
  Integration With Cognito
&lt;/h3&gt;

&lt;p&gt;Many AWS services, like API Gateway, integrate directly with Amazon Cognito user pools to authenticate API requests to the gateway. AWS Amplify is another library that provides an auth setup using Amazon Cognito. It is easy to spin up an auth service using AWS Amplify, and many modern apps are using it.&lt;/p&gt;

&lt;p&gt;If you are interested in how to use &lt;a href="https://awsmag.com/how-to-use-cognito-user-pool-authorizer-with-amazon-api-gateway/"&gt;Amazon API Gateway with an Amazon Cognito user pool&lt;/a&gt;, you can read about it &lt;a href="https://awsmag.com/how-to-use-cognito-user-pool-authorizer-with-amazon-api-gateway/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Amazon Cognito is a cost-effective, secure, and highly scalable authentication service. If you are not worried about vendor lock-in, or most of your infrastructure is already deployed on AWS, you can give it a try.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://awsmag.com/what-is-amazon-cognito/"&gt;&lt;em&gt;https://awsmag.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on January 4, 2022.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cognito</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>How to create a VPC using Terraform?</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Sun, 01 Aug 2021 07:25:50 +0000</pubDate>
      <link>https://forem.com/singhs020/how-to-create-a-vpc-using-terraform-gm5</link>
      <guid>https://forem.com/singhs020/how-to-create-a-vpc-using-terraform-gm5</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hgTBXLVe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/931/0%2AZ-3eNQ0BWK2NKFVC.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hgTBXLVe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/931/0%2AZ-3eNQ0BWK2NKFVC.jpg" alt="VPC" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Running your applications brings other challenges too, and one of those is having a robust network set up to host all the parts in one place. We have to set up a VPC (Virtual Private Cloud), internet gateway, subnets, etc. to make sure our application works properly. The other aspect is managing the infrastructure once it is ready and deployed. This is where Terraform comes in handy. Terraform is an infrastructure-as-code tool that helps you define infrastructure in code so you can easily maintain it for future updates.&lt;/p&gt;

&lt;p&gt;If you are not aware of the networking fundamentals on AWS, read the article &lt;a href="https://awsmag.com/aws-networking-fundamentals/"&gt;AWS Networking Fundamentals&lt;/a&gt; before going deep with Terraform in this article.&lt;/p&gt;

&lt;p&gt;You also need some understanding of how Terraform works. If you don’t know Terraform, I suggest going through its documentation to get a basic idea before diving into the article.&lt;/p&gt;

&lt;p&gt;I will post the snippets here and describe them step by step. You can also find the complete module in the GitHub repo &lt;a href="https://github.com/awsmag/aws-vpc-terraform"&gt;&lt;strong&gt;aws-vpc-terraform&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Directory Structure
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OGgqMgLw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A62im3dzknGOqDjoz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OGgqMgLw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A62im3dzknGOqDjoz.png" alt="" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above directory structure of the module has the following key files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;main:&lt;/strong&gt; Contains the entire module and all the resources we will discuss shortly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;output:&lt;/strong&gt; Defines the output provided by the module. This module returns the vpcId of the VPC it creates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;provider:&lt;/strong&gt; Defines the provider required for the module to work properly. You can think of providers as the dependencies required by the module. This module needs the AWS provider from HashiCorp (the creators of Terraform). The AWS provider allows us to use the resources available in AWS to create our desired infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;variables:&lt;/strong&gt; Contains the input variables required by the module to complete its task. For the sake of this article, I have set default values for the variables, but they can easily be made required by removing the default value.&lt;/li&gt;
&lt;/ul&gt;
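&lt;p&gt;For reference, the variables and output consumed by &lt;strong&gt;main.tf&lt;/strong&gt; look roughly like this (the default CIDR values shown here are illustrative, not the module's actual defaults):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# variables.tf - inputs used by main.tf (defaults are examples)
variable "namespace" {
  type    = string
  default = "awsmag"
}

variable "cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "publicSubnetCIDR" {
  type    = list(string)
  default = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]
}

variable "privateSubnetCIDR" {
  type    = list(string)
  default = ["10.0.3.0/24", "10.0.4.0/24", "10.0.5.0/24"]
}

# output.tf - the value exported by the module
output "vpcId" {
  value = aws_vpc.vpc.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;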

&lt;p&gt;As mentioned above, the most important file is &lt;strong&gt;main.tf&lt;/strong&gt;, which contains all the code for the resources we are about to create. Let’s go through each resource statement in the file and understand it a bit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_availability_zones" "availableAZ" {}

# VPC 
resource "aws_vpc" "vpc" { 
  cidr_block = var.cidr 
  instance_tenancy = "default" 
  enable_dns_support = true 
  enable_dns_hostnames = true 
  assign_generated_ipv6_cidr_block = true 
  tags = { 
    Name = var.namespace 
    Namespace = var.namespace 
  } 
}

# Public Subnet 
resource "aws_subnet" "publicsubnet" { 
  count = 3 
  cidr_block = tolist(var.publicSubnetCIDR)[count.index] 
  vpc_id = aws_vpc.vpc.id 
  map_public_ip_on_launch = true 
  availability_zone = data.aws_availability_zones.availableAZ.names[count.index] 
  tags = { 
    Name = "${var.namespace}-publicsubnet-${count.index + 1}" 
    AZ = data.aws_availability_zones.availableAZ.names[count.index]
    Namespace = var.namespace 
  } 
  depends_on = [aws_vpc.vpc] 
}

# Private Subnet 
resource "aws_subnet" "privatesubnet" { 
  count = 3 
  cidr_block = tolist(var.privateSubnetCIDR)[count.index] 
  vpc_id = aws_vpc.vpc.id 
  availability_zone = data.aws_availability_zones.availableAZ.names[count.index]
  tags = { 
    Name = "${var.namespace}-privatesubnet-${count.index + 1}"
    AZ = data.aws_availability_zones.availableAZ.names[count.index]
    Namespace = var.namespace 
  }
  depends_on = [aws_vpc.vpc] 
}

# Internet Gateway 
resource "aws_internet_gateway" "internetgateway" { 
  vpc_id = aws_vpc.vpc.id 
  tags = { 
    Name = "${var.namespace}-InternetGateway" 
    Namespace = var.namespace 
  }
  depends_on = [aws_vpc.vpc]
}

# Elastic IP 
resource "aws_eip" "elasticIPs" { 
  count = 3 
  vpc = true 
  tags = { 
    Name = "elasticIP-${count.index + 1}"
    Namespace = var.namespace 
  } 
  depends_on = [aws_internet_gateway.internetgateway] 
} 

# NAT Gateway 
resource "aws_nat_gateway" "natgateway" { 
  count = 3 
  allocation_id = aws_eip.elasticIPs[count.index].id 
  subnet_id = aws_subnet.publicsubnet[count.index].id 
  tags = { 
    Name = "${var.namespace}-NATGateway-${count.index + 1}"
    AZ = data.aws_availability_zones.availableAZ.names[count.index]
    Namespace = var.namespace 
  } 
  depends_on = [aws_internet_gateway.internetgateway] 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;We now have all the major parts of the network, so it is time to create route tables. Route tables define which traffic can flow to which resource. We will create route tables for the public and private subnets.&lt;/li&gt;
&lt;li&gt;The public route table routes traffic directly through the Internet Gateway. We will also create association records to associate the newly created route table with the public subnets.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Route Table for Public Routes 
resource "aws_route_table" "publicroutetable" { 
  vpc_id = aws_vpc.vpc.id 
  route { 
    cidr_block = "0.0.0.0/0" 
    gateway_id = aws_internet_gateway.internetgateway.id 
  } 
  tags = { 
     Name = "${var.namespace}-publicroutetable" 
     Namespace = var.namespace 
  } 
  depends_on = [aws_internet_gateway.internetgateway] 
} 

# Route Table Association - Public Routes 
resource "aws_route_table_association" "routeTableAssociationPublicRoute" { 
  count = 3 
  route_table_id = aws_route_table.publicroutetable.id 
  subnet_id = aws_subnet.publicsubnet[count.index].id 
  depends_on = [aws_subnet.publicsubnet, aws_route_table.publicroutetable]
}

# Route Table for Private Routes 
resource "aws_route_table" "privateroutetable" { 
  count = 3 
  vpc_id = aws_vpc.vpc.id 
  route { 
    cidr_block = "0.0.0.0/0" 
    nat_gateway_id = aws_nat_gateway.natgateway[count.index].id 
  } 
  tags = { 
    Name = "${var.namespace}-privateroutetable-${count.index + 1}"
    AZ = data.aws_availability_zones.availableAZ.names[count.index]
    Namespace = var.namespace 
  } 
  depends_on = [aws_nat_gateway.natgateway] 
} 

# Route Table Association - Private Routes 
resource "aws_route_table_association" "routeTableAssociationPrivateRoute" { 
  count = 3 
  route_table_id = aws_route_table.privateroutetable[count.index].id
  subnet_id = aws_subnet.privatesubnet[count.index].id 
  depends_on = [aws_subnet.privatesubnet, aws_route_table.privateroutetable] 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our entire module is ready. To run it, first initialize Terraform, then review the plan, and finally apply it to create your VPC.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init 
terraform plan 
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s all about creating and managing your VPC using Terraform. In upcoming articles, I will write more about creating other services and deploying common setups using Terraform. Till then, happy coding.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://awsmag.com/how-to-create-a-vpc-using-terraform/"&gt;&lt;em&gt;https://awsmag.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on August 1, 2021.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraformmodules</category>
      <category>awsvpc</category>
      <category>terraform</category>
    </item>
    <item>
      <title>What is AWS SAM (Serverless Application Model)?</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Thu, 08 Jul 2021 11:03:00 +0000</pubDate>
      <link>https://forem.com/singhs020/what-is-aws-sam-serverless-application-model-184j</link>
      <guid>https://forem.com/singhs020/what-is-aws-sam-serverless-application-model-184j</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--unaQmLg_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AdyYwAMyk8YpQiPFG.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--unaQmLg_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AdyYwAMyk8YpQiPFG.png" alt="AWS SAM" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS SAM (Serverless Application Model) is an open-source framework to develop and deploy serverless applications on AWS. A serverless application on AWS is a combination of &lt;a href="https://awsmag.com/an-introduction-to-aws-lambda/"&gt;AWS Lambda&lt;/a&gt;, &lt;a href="https://awsmag.com/purpose-built-aws-database/"&gt;databases&lt;/a&gt;, &lt;a href="https://awsmag.com/introduction-to-api-gateway/"&gt;Amazon API Gateway&lt;/a&gt;, etc. If you would like to read more about &lt;a href="https://awsmag.com/what-is-serverless-computing/"&gt;serverless computing&lt;/a&gt; before diving deep into AWS SAM, you can do so &lt;a href="https://awsmag.com/what-is-serverless-computing/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a Serverless Application?
&lt;/h3&gt;

&lt;p&gt;Before we understand AWS SAM, let us first understand what a serverless application is.&lt;/p&gt;

&lt;p&gt;A serverless application is a combination of various serverless services provided by AWS, such as AWS Lambda and Amazon API Gateway. These services work together to form an application that serves your customers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of AWS SAM
&lt;/h3&gt;

&lt;p&gt;Some of the benefits of AWS SAM are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Testing &amp;amp; Debugging&lt;/strong&gt;: We discussed in the serverless computing post that it is hard to test and debug serverless applications. To address this, AWS SAM provides a Lambda-like local execution environment which you can use to run and debug functions locally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudFormation Support&lt;/strong&gt;: AWS SAM is built on CloudFormation and supports all CloudFormation resources in its config file. The AWS SAM team has also created some special constructs which you can use to create and deploy resources with less code. For example, &lt;strong&gt;AWS::Serverless::Function&lt;/strong&gt; creates an AWS Lambda function along with all the events configured for it. If you have configured an API Gateway event for your Lambda function, AWS SAM will deploy that API Gateway as well.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single Deployment Configuration:&lt;/strong&gt; Since AWS SAM can deploy resources using these constructs and also supports other CloudFormation resources, you only need a single deployment config to define and deploy your resources on AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration&lt;/strong&gt;: AWS SAM can also integrate with your CI/CD tool of choice to automate your deployment pipeline. On AWS, the AWS Cloud9 IDE supports AWS SAM, so you can author and test serverless applications there.&lt;/li&gt;
&lt;/ul&gt;
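&lt;p&gt;To give a feel for the construct mentioned above, here is a minimal SAM template using &lt;strong&gt;AWS::Serverless::Function&lt;/strong&gt;. The function name, handler, runtime, and paths are hypothetical placeholders, not from any real project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# template.yaml - hypothetical minimal SAM template
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: nodejs18.x
      CodeUri: ./src
      Events:
        HelloApi:
          Type: Api        # SAM deploys an API Gateway endpoint for this event
          Properties:
            Path: /hello
            Method: get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;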

&lt;p&gt;That was a quick introduction to AWS SAM. I will be adding more posts around getting started with AWS SAM and building applications using AWS SAM.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://awsmag.com/what-is-aws-sam-serverless-application-model/"&gt;&lt;em&gt;https://awsmag.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on July 8, 2021.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>awssam</category>
      <category>awslambda</category>
      <category>serverless</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS Networking Fundamentals</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Fri, 24 Jul 2020 03:51:38 +0000</pubDate>
      <link>https://forem.com/singhs020/aws-networking-fundamentals-55ao</link>
      <guid>https://forem.com/singhs020/aws-networking-fundamentals-55ao</guid>
      <description>&lt;p&gt;PS: The post was originally posted on my weekly AWS newsletter - &lt;a href="https://awsmag.com"&gt;AWSMAG&lt;/a&gt;. If you wish to receive more like these every week, join the newsletter.&lt;/p&gt;

&lt;p&gt;When I started using AWS as a developer, I was bombarded with lots of jargon: VPC, subnet, CIDR ranges, and other words I could not remember in the beginning. If this has happened to you as well, you are in the right place, my friend. Let's understand the fundamentals of AWS networking in this blog post.&lt;/p&gt;

&lt;p&gt;First of all, we will be using terms like Region and Availability Zone in this post, and if you are not familiar with them, you can read my other post about &lt;a href="https://awsmag.com/blog/aws-global-infrastructure"&gt;AWS Global Architecture&lt;/a&gt;. Once you have an idea what these are, we can move ahead with the rest of the jargon.&lt;/p&gt;

&lt;p&gt;So, what are we going to talk about here? Following is the list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC (Virtual Private Cloud)&lt;/li&gt;
&lt;li&gt;Subnet&lt;/li&gt;
&lt;li&gt;Security Groups&lt;/li&gt;
&lt;li&gt;Internet Gateway&lt;/li&gt;
&lt;li&gt;NAT Gateway&lt;/li&gt;
&lt;li&gt;CIDR Range&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following image will give you an idea of how all this fits together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6qeEKqZX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ne42hb8d5doz7d5hu59c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6qeEKqZX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ne42hb8d5doz7d5hu59c.jpg" alt="AWS Networking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is VPC (Virtual Private Cloud)?
&lt;/h2&gt;

&lt;p&gt;A VPC, or Virtual Private Cloud, is a logical isolation in the AWS cloud which you define for creating your own infrastructure. Consider it your own space. You can create your own network and configure it the way you like. All of your resources are deployed in it, and they are isolated from resources deployed in any other VPC. From a hierarchical point of view, you have a Region, and you create a VPC in it which holds all of your resources. Every Region comes with a default VPC, which is a good starting point if you don't want to get your hands dirty. My advice is to get your hands dirty and create a network which suits your requirements. The default VPC is already configured with the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A CIDR Range&lt;/li&gt;
&lt;li&gt;Subnets to access private and public network&lt;/li&gt;
&lt;li&gt;Internet and NAT Gateway&lt;/li&gt;
&lt;li&gt;Security Groups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we want to create our own VPC, we will need to understand and create all the things mentioned above, so let's try to understand them first.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a CIDR Range?
&lt;/h2&gt;

&lt;p&gt;A CIDR (Classless Inter-Domain Routing) range is a group of IP addresses which you can use in your network. As it is a private network, we use IP addresses from the &lt;strong&gt;RFC 1918&lt;/strong&gt; ranges. These addresses are not routed over the internet, so it is safe to use them internally even when we connect our network to the internet. One thing to keep in mind: if we are connecting to a different VPC, we should make sure the two are not using the same private address range, otherwise we will have issues connecting them. Let's look at a CIDR range.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--01lDl-ia--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2fstnmn5ce4onxux8t22.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--01lDl-ia--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2fstnmn5ce4onxux8t22.jpg" alt="CIDR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The part before the / identifies the network, and the number after the / tells you how many bits are fixed as the network prefix; the remaining bits define the hosts. A /16 leaves 16 host bits, which gives 2^16 = 65,536 addresses.&lt;/p&gt;

&lt;p&gt;Applications are usually deployed across different &lt;a href="https://awsmag.com/blog/aws-global-infrastructure"&gt;Availability Zones&lt;/a&gt; to maintain high availability and redundancy, which allows us to handle failover situations. To reduce the blast radius and make use of this offering by AWS, we divide the CIDR range into equal parts across the Availability Zones. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AZ-a will have 172.31.0.0/24 - a /24 gives 256 addresses (roughly 250 usable).&lt;/li&gt;
&lt;li&gt;AZ-b will have 172.31.1.0/24 - again roughly 250 usable addresses.&lt;/li&gt;
&lt;li&gt;AZ-c will have 172.31.2.0/24 - again roughly 250 usable addresses.&lt;/li&gt;
&lt;/ul&gt;
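&lt;p&gt;If you prefer not to work this split out by hand, Terraform's cidrsubnet function computes exactly this kind of division. An illustrative terraform console session (the inputs mirror the example above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform console
&gt; cidrsubnet("172.31.0.0/16", 8, 0)   # add 8 bits to the /16 prefix -&gt; /24
"172.31.0.0/24"
&gt; cidrsubnet("172.31.0.0/16", 8, 1)
"172.31.1.0/24"
&gt; cidrsubnet("172.31.0.0/16", 8, 2)
"172.31.2.0/24"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;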

&lt;p&gt;After this division you will still have addresses left for the rest of your infrastructure. Now that we have created and divided the addresses, how will resources talk to each other? That is where subnets come in.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Subnet?
&lt;/h2&gt;

&lt;p&gt;The division we mentioned above, i.e. splitting the IP addresses across multiple AZs, is actually done using subnets. A subnet is a logical group of IP addresses that is a subsection of the wider network we talked about earlier. Subnets come in two types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Private Subnet:&lt;/strong&gt; A private subnet is a group of addresses which you don't want anyone to access from outside the VPC. We usually use a private subnet for things like databases: we want to keep them out of reach of anything on the internet, while any application deployed inside our network that needs to process the data stored in the database can still access them. Now, when I first learned about this, I asked a question: what if I want to apply a patch to my database system which is released on the internet? If you have that question in your mind too, the answer is the &lt;strong&gt;NAT Gateway&lt;/strong&gt;, and we will talk about it in a moment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public Subnet:&lt;/strong&gt; Yes, you guessed it right: a public subnet is a logical group of addresses which we want to be accessible via the internet. Web servers are a good example. If you are hosting a web-based application that users should reach, we place it in the public subnet, and it can receive requests and also call out over the internet with the help of an &lt;strong&gt;Internet Gateway&lt;/strong&gt;. We will talk about that in a moment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each subnet is assigned a route table which defines what it can access. The route table holds rules that allow traffic to flow to VPC resources or to the internet via an Internet Gateway. You provide CIDR ranges here to define the flow of the network: a destination of 0.0.0.0/0 means all traffic not matched by a more specific route. We often use this to set up the Internet Gateway route.&lt;/p&gt;

&lt;p&gt;Subnets are a very important piece of your architecture. They determine what type of network access an instance has and which Availability Zone the instance lies in. They also support Network Access Control Lists (ACLs), a security feature that controls which IP addresses and ports are allowed to send traffic in and out of the subnet.&lt;/p&gt;

&lt;p&gt;OK, you made it this far. Do you need a cup of coffee before we move ahead? If yes, take a break. If not, let's tackle the three big terms we came across while discussing subnets. Let's start with the Internet Gateway.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Internet Gateway?
&lt;/h2&gt;

&lt;p&gt;An Internet Gateway is a component that provides internet accessibility to your VPC. Why do we need it? Because we want clients on the internet to be able to reach our resources. As mentioned above, this is the usual case for a web server: if you have a website hosted, you want people to access it over the internet. They can only do that when you allow it on your network. Attaching an Internet Gateway alone does not solve the problem; you also need to connect your public subnet to the gateway by adding an entry like 0.0.0.0/0 to its route table. This routes outbound traffic through the Internet Gateway, giving you access over the internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is NAT Gateway?
&lt;/h2&gt;

&lt;p&gt;A NAT Gateway is a managed service by AWS that lets you connect to the internet. Now you might ask: then what on earth is the Internet Gateway for? They both provide access to the internet, but a NAT Gateway is used with your private subnet, and its traffic technically goes via the Internet Gateway. An instance in the private subnet still needs to reach the internet to fetch patches or download dependencies, while nobody on the internet should be able to initiate a connection to our private instances. That's where the NAT Gateway helps us. Initially, we used to deploy NAT instances to achieve the same functionality.&lt;/p&gt;

&lt;p&gt;We have discussed the networking and how to get our infrastructure right: deploying instances across different AZs for better availability and redundancy. The most important piece in all of this is security. How do we maintain security across all of it? The answer is Security Groups.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Security Group?
&lt;/h2&gt;

&lt;p&gt;Security Groups are virtual firewalls around our instances. Do not confuse them with the Network ACLs we talked about for subnets: ACLs work at the subnet level and control traffic for all instances in that subnet, while security groups work at the instance level and control the flow of data per instance. For example, you can create one security group for your database service and another for your application service. The database service's group can allow access to the database in the private subnet on a particular port, while the application service's group does not allow this. Application services then have access to the database service but not to the database directly, so they have to go via the service to talk to the database. Another example is restricting SSH access to your instances to a single IP range, such as your enterprise range.&lt;/p&gt;
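&lt;p&gt;As a rough sketch of the SSH example above, here is a toy model in plain Node.js (the rule shape and names are illustrative, not the AWS API) of how a security group rule decision works:&lt;/p&gt;

```javascript
// A toy model of a security group rule: traffic is allowed when the
// protocol matches, the port is in range, and the source IP falls in
// the rule's CIDR (simplified here to /8, /16, or /24 prefixes).
const sshFromOffice = {
  protocol: "tcp",
  fromPort: 22,
  toPort: 22,
  sourceCidr: "203.0.113.0/24" // hypothetical enterprise IP range
};

function isAllowed(rule, { protocol, port, sourceIp }) {
  if (rule.protocol !== protocol) return false;
  if (port < rule.fromPort || port > rule.toPort) return false;
  const [base, bits] = rule.sourceCidr.split("/");
  const octets = Number(bits) / 8; // whole octets covered by the prefix
  const prefix = base.split(".").slice(0, octets).join(".");
  return sourceIp.split(".").slice(0, octets).join(".") === prefix;
}

console.log(isAllowed(sshFromOffice, { protocol: "tcp", port: 22, sourceIp: "203.0.113.7" }));  // true
console.log(isAllowed(sshFromOffice, { protocol: "tcp", port: 22, sourceIp: "198.51.100.9" })); // false
```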

&lt;p&gt;That's all the fundamental jargon you should know if you are working with AWS and developing applications deployed on it. Understanding these matters because the rules and constraints defined over the network also affect the decisions we make while developing applications. Otherwise we end up with the classic "it works on my machine, but blows up when I deploy".&lt;br&gt;
Don't be like that.&lt;/p&gt;

&lt;p&gt;Have a nice cup of coffee and digest all of this. In the future, I will write about some more advanced topics related to AWS networking.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>AWS Global Infrastructure</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Tue, 07 Jul 2020 04:43:39 +0000</pubDate>
      <link>https://forem.com/singhs020/aws-global-infrastructure-1jgl</link>
      <guid>https://forem.com/singhs020/aws-global-infrastructure-1jgl</guid>
      <description>&lt;p&gt;Ever wondered how AWS provides robust infrastructure and high availability to our apps running in the cloud? In this blog post, we will answer that question, and the answer is the &lt;strong&gt;AWS Global Infrastructure&lt;/strong&gt;. AWS maintains a global infrastructure that it uses to give you peace of mind that your app is secure and running with high availability. This is not the default, though: you have to configure your deployments for high availability. So let's look into the parts of the AWS Global Infrastructure and try to understand what it is.&lt;/p&gt;

&lt;p&gt;The AWS Global Infrastructure consists of &lt;strong&gt;Regions, Availability Zones, Local Zones, Points of Presence&lt;/strong&gt;, and more. Let's unpack the heavy terms we just came across. Have a look at the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--12d3yBF9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2cmws0q6z77rphr2bgt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--12d3yBF9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2cmws0q6z77rphr2bgt0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above image, the Region is the starting point; everything is wrapped inside it. So what is a &lt;strong&gt;Region&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Region:&lt;/strong&gt; A Region is an actual geographic location maintained by AWS to provide a global footprint. Every Region provides &lt;strong&gt;full redundancy and connectivity to the network&lt;/strong&gt;. All AWS Regions are connected by a global network, so one Region can talk to another. Every Region consists of &lt;strong&gt;multiple Availability Zones&lt;/strong&gt;, which are separate data centers that provide redundancy for your application. Let's look at the naming convention of Regions, using &lt;strong&gt;eu-west-1&lt;/strong&gt; and &lt;strong&gt;eu-west-2&lt;/strong&gt; as examples. Here &lt;strong&gt;"eu"&lt;/strong&gt; means it is on the European continent, &lt;strong&gt;"west"&lt;/strong&gt; means it is on the west side, and &lt;strong&gt;"1"&lt;/strong&gt; is the number of the AWS Region in that area; eu-west-2 is the second AWS Region on the west side of the European continent. Each Region is also associated with the city where it is located: &lt;strong&gt;eu-west-1&lt;/strong&gt; is in &lt;strong&gt;Ireland&lt;/strong&gt; and &lt;strong&gt;eu-west-2&lt;/strong&gt; is in &lt;strong&gt;London&lt;/strong&gt;. There are also special Regions with "GovCloud" in their name; these Regions are used by government customers and are not available to other AWS clients.&lt;/p&gt;
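&lt;p&gt;The naming convention above is easy to pick apart in code. A tiny sketch in plain Node.js (the function name is mine, just for illustration):&lt;/p&gt;

```javascript
// Split an AWS-style Region name such as "eu-west-2" into the three
// parts described above: area, direction, and region number.
function parseRegion(region) {
  const [area, direction, number] = region.split("-");
  return { area, direction, number: Number(number) };
}

console.log(parseRegion("eu-west-2")); // { area: 'eu', direction: 'west', number: 2 }
```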

&lt;p&gt;&lt;strong&gt;Availability Zones:&lt;/strong&gt; We mentioned Availability Zones when defining Regions. An Availability Zone is a fully isolated partition of the AWS infrastructure. What does this mean? It means every zone is a &lt;strong&gt;physically separate location with its own connections, power supply, and so on&lt;/strong&gt;. This isolation guarantees redundancy and high availability for your applications: if one Availability Zone goes down for some reason, the application hosted in another one can still serve your customers. You have to configure your deployments to use multiple Availability Zones, though some AWS services, like SQS, provide this by default. Every Availability Zone consists of multiple data centers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points of Presence:&lt;/strong&gt; These locations consist of &lt;strong&gt;Edge Locations and Regional Cache Servers&lt;/strong&gt;. Amazon's CDN service, &lt;strong&gt;Amazon CloudFront&lt;/strong&gt;, uses these locations to serve customers globally with low latency and high transfer speeds. Another AWS service, &lt;strong&gt;Global Accelerator&lt;/strong&gt;, also uses them to improve the performance of your application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local Zones:&lt;/strong&gt; Local Zones are a newer offering in AWS. They are &lt;strong&gt;extensions of the Regions placed in geographic proximity to users&lt;/strong&gt;. Every Local Zone has its own internet connection and supports AWS Direct Connect. Local Zones are not available in every Region, and they have to be enabled in order to be used. The naming convention of a Local Zone looks like &lt;strong&gt;"us-west-2-lax-1a"&lt;/strong&gt;, where &lt;strong&gt;us-west-2&lt;/strong&gt; is the name of the Region and &lt;strong&gt;lax-1a&lt;/strong&gt; indicates the location.&lt;/p&gt;

&lt;p&gt;That's a high-level picture of the AWS Global Infrastructure; we will talk more about these pieces when we start looking into other AWS services.&lt;/p&gt;

&lt;p&gt;PS: This post was originally published on my newsletter and blog, &lt;a href="https://awsmag.com"&gt;awsmag&lt;/a&gt;. If you would like to receive these posts and other AWS-related content, subscribe to my weekly newsletter at &lt;a href="https://awsmag.com"&gt;awsmag&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Getting Started With AWS SNS</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Fri, 20 Mar 2020 06:12:15 +0000</pubDate>
      <link>https://forem.com/singhs020/getting-started-with-aws-sns-57n</link>
      <guid>https://forem.com/singhs020/getting-started-with-aws-sns-57n</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Amazon SNS (Simple Notification Service) is a fully managed pub/sub messaging service that enables you to write distributed applications. Using SNS, you can manage and send notifications to all subscribed systems through endpoints like SQS queues and webhooks. It can also send messages to a Lambda function for further processing. SNS can maintain a large number of human subscribers too: people can get notifications via SMS and email.&lt;/p&gt;

&lt;p&gt;In this part, we will see how we can publish a message using SNS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y5TI14er--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ALtx-HCH6zJPAEJY1ZATyNQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y5TI14er--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ALtx-HCH6zJPAEJY1ZATyNQ.png" alt="" width="800" height="363"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The Flow in SNS.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Before we begin, let us first understand what the Publisher/Subscriber model is.&lt;/p&gt;
&lt;h3&gt;
  
  
  Publisher/Subscriber Model
&lt;/h3&gt;

&lt;p&gt;There are two components in this model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Publisher&lt;/strong&gt;: A service that broadcasts messages to its subscribers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subscriber&lt;/strong&gt;: Any service that wishes to receive the messages broadcast by the publisher.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a service wishes to subscribe to a publisher, it needs to notify the publisher that it wants to receive its broadcasts, along with where it wishes to receive them, i.e. the endpoint. This can be an HTTP endpoint, an SQS queue, or a Lambda function.&lt;/p&gt;

&lt;p&gt;In the above diagram, the publisher sends a message to an SNS topic, and all the subscribers receive the message, even though the mode or endpoint through which they have subscribed differs.&lt;/p&gt;
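&lt;p&gt;The model is easy to sketch in a few lines of plain Node.js. This in-memory version (the class and method names are mine, just for illustration) shows the essence of what SNS does: subscribers register an endpoint, and a publish fans the message out to all of them:&lt;/p&gt;

```javascript
// A minimal in-memory publisher/subscriber sketch: each subscriber
// registers a handler (its "endpoint"), and publish broadcasts the
// message to every registered handler.
class Topic {
  constructor() { this.subscribers = []; }
  subscribe(handler) { this.subscribers.push(handler); }
  publish(message) { this.subscribers.forEach(handler => handler(message)); }
}

const topic = new Topic();
const received = [];
topic.subscribe(msg => received.push(`email: ${msg}`));  // e.g. an email endpoint
topic.subscribe(msg => received.push(`queue: ${msg}`));  // e.g. an SQS endpoint
topic.publish("order-created");

console.log(received); // [ 'email: order-created', 'queue: order-created' ]
```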
&lt;h3&gt;
  
  
  Pre-requisites
&lt;/h3&gt;

&lt;p&gt;You will need a valid AWS account and credentials to access SNS. You will also need access to the AWS console to create an SNS topic and add some subscribers to it.&lt;/p&gt;
&lt;h3&gt;
  
  
  Setting up a SNS Topic
&lt;/h3&gt;

&lt;p&gt;To set up an SNS topic, first log in to AWS and navigate to SNS, then follow the instructions to create a topic. Once it is created, you will need the topic's ARN to use in the code. Make sure the credentials you are using have permission to publish messages to the topic. Add some subscribers and confirm them to see the full flow in action.&lt;/p&gt;
&lt;h3&gt;
  
  
  Publishing a message
&lt;/h3&gt;

&lt;p&gt;Let’s assume the following is the message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "foo": "bar"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have a message structure, we need to publish it to the desired SNS topic. We import the AWS SDK for Node.js and use it to publish the message. The SDK can pick up the credentials stored in your environment; it looks for the following environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_REGION=the_region_you_are_using
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Following is the code to publish the message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Getting Started with AWS SNS using node js. This part shows how to publish content to SNS */

// Load the AWS SDK for Node.js
const AWS = require("aws-sdk");

const sns = new AWS.SNS({apiVersion: "2010-03-31"});
const params = {
  "Message": JSON.stringify({"foo": "bar"}),
  "TopicArn": "ARN FOR TOPIC YOU WANT TO PUBLISH TO"
};

// By using Callback
sns.publish(params, (err, data) =&amp;gt; {
  if (err) {
    console.log("There was an Error: ", err);
  } else {
    console.log("Successfully published.", data);
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above is implemented using a callback. If you wish to use a promise instead, here is the implementation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Promise implementation
sns.publish(params).promise()
.then(data =&amp;gt; console.log("Successfully published.", data))
.catch(err =&amp;gt; console.log("There was an Error: ", err));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also find the code sample in my GitHub repo at the following link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/singhs020/examples/blob/master/src/SNS/index.js"&gt;singhs020/examples&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/sns/"&gt;AWS Simple Notification Service&lt;/a&gt; (SNS) is a highly scalable service that lets us implement the publish/subscribe model with ease. We can use it to send texts, emails, push notifications, or other automated messages to multiple channels at the same time. There are many other use cases, and SNS also offers advanced filtering logic, message templating, and mixed messages. Give it a try, and happy coding.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>node</category>
      <category>programming</category>
      <category>aws</category>
    </item>
    <item>
      <title>Getting Started With AWS SNS</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Fri, 20 Mar 2020 06:10:51 +0000</pubDate>
      <link>https://forem.com/singhs020/getting-started-with-aws-sns-44b0</link>
      <guid>https://forem.com/singhs020/getting-started-with-aws-sns-44b0</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon SNS (Simple Notification Service) is a fully managed pub/sub messaging service that enables you to write distributed applications. Using SNS, you can manage and send notifications to all subscribed systems through endpoints like SQS queues and webhooks. It can also send messages to a Lambda function for further processing. SNS can maintain a large number of human subscribers too: people can get notifications via SMS and email.&lt;br&gt;
In this part, we will see how we can publish a message using SNS.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L7vu6Zca--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wp3e0a8s1vfqe8gdz6r1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L7vu6Zca--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wp3e0a8s1vfqe8gdz6r1.png" alt="Flow of SNS"&gt;&lt;/a&gt;&lt;br&gt;
Before we begin, let us first understand what the Publisher/Subscriber model is.&lt;/p&gt;
&lt;h2&gt;
  
  
  Publisher/Subscriber Model
&lt;/h2&gt;

&lt;p&gt;There are two components in a system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publisher: A service that broadcasts messages to its subscribers.&lt;/li&gt;
&lt;li&gt;Subscriber: Any service that wishes to receive the messages broadcast by the publisher.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a service wishes to subscribe to a publisher, it needs to notify the publisher that it wants to receive its broadcasts, along with where it wishes to receive them, i.e. the endpoint. This can be an HTTP endpoint, an SQS queue, or a Lambda function.&lt;br&gt;
In the above diagram, the publisher sends a message to an SNS topic, and all the subscribers receive the message, even though the mode or endpoint through which they have subscribed differs.&lt;/p&gt;
&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;You will need a valid AWS account and credentials to access SNS. You will also need access to the AWS console to create an SNS topic and add some subscribers to it.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting up a SNS Topic
&lt;/h2&gt;

&lt;p&gt;To set up an SNS topic, first log in to AWS and navigate to SNS, then follow the instructions to create a topic. Once it is created, you will need the topic's ARN to use in the code. Make sure the credentials you are using have permission to publish messages to the topic. Add some subscribers and confirm them to see the full flow in action.&lt;/p&gt;
&lt;h2&gt;
  
  
  Publishing a message
&lt;/h2&gt;

&lt;p&gt;Let's assume the following is the message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "foo": "bar"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have a message structure, we need to publish it to the desired SNS topic. We import the AWS SDK for Node.js and use it to publish the message. The SDK can pick up the credentials stored in your environment; it looks for the following environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_REGION=the_region_you_are_using
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Following is the code to publish the message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Getting Started with AWS SNS using node js. This part shows how to publish content to SNS */


// Load the AWS SDK for Node.js
const AWS = require("aws-sdk");

const sns = new AWS.SNS({apiVersion: "2010-03-31"});
const params = {
  "Message": JSON.stringify({"foo": "bar"}),
  "TopicArn": "ARN FOR TOPIC YOU WANT TO PUBLISH TO"
};

// By using Callback
sns.publish(params, (err, data) =&amp;gt; {
  if (err) {
    console.log("There was an Error: ", err);
  } else {
    console.log("Successfully published.", data);
  }
});
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above is implemented using a callback. If you wish to use a promise instead, here is the implementation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Promise implementation
sns.publish(params).promise()
.then(data =&amp;gt; console.log("Successfully published.", data))
.catch(err =&amp;gt; console.log("There was an Error: ", err));
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can also find the code sample in my &lt;a href="https://github.com/singhs020/examples/blob/master/src/SNS/index.js"&gt;GitHub repo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS Simple Notification Service (SNS) is a highly scalable service that lets us implement the publish/subscribe model with ease. We can use it to send texts, emails, push notifications, or other automated messages to multiple channels at the same time. There are many other use cases, and SNS also offers advanced filtering logic, message templating, and mixed messages. Give it a try, and happy coding.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>javascript</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Getting Started with AWS SQS using Node.js - Part 2</title>
      <dc:creator>Simranjeet Singh</dc:creator>
      <pubDate>Fri, 13 Mar 2020 05:09:05 +0000</pubDate>
      <link>https://forem.com/singhs020/getting-started-with-aws-sqs-using-node-js-part-2-1o78</link>
      <guid>https://forem.com/singhs020/getting-started-with-aws-sqs-using-node-js-part-2-1o78</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the previous part, &lt;a href="https://dev.to/singhs020/getting-started-with-aws-sqs-using-node-js-part-1-4p8h"&gt;Getting Started with AWS SQS using Node.js - Part 1&lt;/a&gt;, we looked at how to send messages to SQS. You can also call this the producer of the message.&lt;br&gt;
In this part, we will see how we can connect to SQS and receive messages for further processing.&lt;/p&gt;
&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;You should have followed the previous part of the article and be able to produce messages to an SQS queue.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Application Flow
&lt;/h2&gt;

&lt;p&gt;In the previous part, we were building an e-commerce app where an order service produces messages to SQS for further processing. In this part, we will look at a fulfilment service that receives the messages and processes them further.&lt;/p&gt;
&lt;h2&gt;
  
  
  Receiving a message
&lt;/h2&gt;

&lt;p&gt;This is the message that was produced in the last part for the fulfilment service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "orderId": "this-is-an-order-id",
  "date": "2020-02-02",
  "shipBy": "2020-02-04",
  "foo": "bar"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Like last time, we import the AWS SDK for Node.js, this time to receive a message. The SDK can pick up the credentials stored in your environment; it looks for the following environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_REGION=the_region_you_are_using
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Following is the code to receive the message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Getting Started with AWS SQS using node js. This part shows how to consume message from the SQS */


// Load the AWS SDK for Node.js
const AWS = require("aws-sdk");

const sqs = new AWS.SQS({apiVersion: "2012-11-05"});

const qurl = "ADD YOUR SQS URL HERE";

const params = {
  "QueueUrl": qurl,
  "MaxNumberOfMessages": 1
};

sqs.receiveMessage(params, (err, data) =&amp;gt; {
  if (err) {
    console.log(err, err.stack);
  } else {
    if (!Array.isArray(data.Messages) || data.Messages.length === 0) { 
      console.log("There are no messages available for processing."); 
      return;
    }    

    const body = JSON.parse(data.Messages[0].Body);
    console.log(body);

    // process the body however you see fit.
    // once the processing of the body is complete, delete the message from the SQS to avoid reprocessing it.

    const delParams = {
      "QueueUrl": qurl,
      "ReceiptHandle": data.Messages[0].ReceiptHandle
    };

    sqs.deleteMessage(delParams, (err, data) =&amp;gt; {
      if (err) {
        console.log("There was an error", err);
      } else {
        console.log("Message processed Successfully");
      }
    });
  }
});
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Do not forget to delete the message after you are done with your task; this is important to avoid any reprocessing of the message. The above is implemented using a callback. If you wish to use a promise instead, here is the code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// The above can be implemented using a promise as well.
sqs.receiveMessage(params).promise()
.then(data =&amp;gt; {
  console.log(data);
  // process the messages here, then delete them with sqs.deleteMessage
})
.catch(err =&amp;gt; console.log("There was an error: ", err));
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can also find the code sample in my &lt;a href="https://github.com/singhs020/examples/blob/master/src/SQS/consumingMessage.js"&gt;github repo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS SQS is a powerful messaging service that leaves room for your own creativity in finding the right fit for it in your application. The most common way to consume messages is a polling mechanism that polls the queue and processes all the messages. This was a very basic integration of SQS into an application; there are more advanced use cases too, like dead-letter queues, FIFO queues, and Lambda's integration with SQS for processing messages.&lt;/p&gt;
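&lt;p&gt;The polling mechanism mentioned above can be sketched as a simple loop. In this sketch, receive and remove are stand-ins for sqs.receiveMessage and sqs.deleteMessage so that it runs without AWS credentials; the shape of the loop is the point:&lt;/p&gt;

```javascript
// One polling pass: fetch messages, process each one, and delete it
// only after successful processing (as stressed above).
async function pollOnce(receive, process, remove) {
  const messages = await receive();
  for (const msg of messages) {
    await process(msg);
    await remove(msg); // delete only after processing succeeds
  }
  return messages.length;
}

// Demo with an in-memory array standing in for the queue.
const queue = [{ Body: '{"orderId":"this-is-an-order-id"}' }];
const processed = [];
pollOnce(
  async () => queue.splice(0, 1),              // stand-in for receiveMessage
  async msg => processed.push(JSON.parse(msg.Body).orderId),
  async () => {}                               // stand-in for deleteMessage
).then(count => console.log(count, processed));
```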

</description>
      <category>aws</category>
      <category>node</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
