<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: techD</title>
    <description>The latest articles on Forem by techD (@techd).</description>
    <link>https://forem.com/techd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1021825%2Ffb1ebd4a-9d23-4afe-bf70-a1800aa0a4de.jpeg</url>
      <title>Forem: techD</title>
      <link>https://forem.com/techd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/techd"/>
    <language>en</language>
    <item>
      <title>AWS Power Hour Week 4: Design High-Performance Architecture</title>
      <dc:creator>techD</dc:creator>
      <pubDate>Tue, 25 Jul 2023 15:24:38 +0000</pubDate>
      <link>https://forem.com/techd/aws-power-hour-week-4-design-high-performance-architecture-17nl</link>
      <guid>https://forem.com/techd/aws-power-hour-week-4-design-high-performance-architecture-17nl</guid>
      <description>&lt;p&gt;For this week's 'HomeFun' assignment, we were asked to don our proverbial 'thinking caps' for a thought experiment on designing a High-Performance Architecture. The Scope is very broad, so the answers (by necessity) need to be broad as well, but staying within the purview of the Well-Architected Framework.  The Question is: &lt;/p&gt;

&lt;p&gt;As a new solutions architect, you are tasked with fixing the outages that occur when marketing runs sales campaigns that drive heavy demand on the company website. Consider improvements in performance in the following areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;storage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compute&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;network&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;data ingestion&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In order to determine how to optimize the architecture, we need to understand the workload requirements for the site under normal circumstances versus during the sales campaigns. We need to benchmark the current environment so that we can understand the performance and find our bottlenecks. If we don't have CloudWatch or another metric-gathering service enabled, then that should be our first step. If we configure Kinesis Data Firehose, we can stream metrics in near real time and analyze them to determine what our next steps should be. This enables a data-driven approach to designing a high-performance architecture.&lt;/p&gt;
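&lt;p&gt;As a sketch of that benchmarking step (the instance ID and time window below are placeholder values, and the AWS CLI is assumed to be configured), baseline CPU figures for one web server could be pulled from CloudWatch like this:&lt;/p&gt;

```shell
# Fetch average CPUUtilization for one web server over a campaign window.
# The instance ID and the dates are placeholder values.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2023-07-24T00:00:00Z \
  --end-time 2023-07-25T00:00:00Z \
  --period 300 \
  --statistics Average
```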

&lt;p&gt;Once we have initial benchmarks, we can work with the marketing team to determine their requirements for an optimal customer experience and identify the areas where improvements can be made. To make improvements, we need to identify the most important metrics to target, and script out the user journeys to better understand the performance requirements. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Compute: Consider the use of Auto Scaling groups of EC2 instances with a minimum, maximum, and desired capacity. Consider running them across multiple Availability Zones for resiliency as well as performance. Or, conversely, consider offloading some compute functions to a serverless architecture, such as AWS Lambda, with content delivery via CloudFront. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Storage: Consider storing video and other large media in S3 or another object store. File storage can utilize Amazon EFS, or Amazon FSx for enhanced throughput and resiliency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database: If a relational database is not needed, use a NoSQL database such as Amazon DynamoDB to store the data, with DynamoDB Accelerator (DAX) providing a fully managed in-memory cache for increased throughput (read latencies down to microseconds). Also, use only SSD-based storage to increase throughput. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Networking: Understanding how networking affects performance is critical to providing the optimal user experience; insufficient capacity will create a bottleneck and a poor user experience. If necessary, take advantage of AWS Global Accelerator, or the network-enhanced EC2 instances such as M5n and M5dn, which use fourth-generation Nitro cards and the Elastic Network Adapter to deliver up to 100 Gbps to a single instance. Leverage load balancing to distribute traffic more efficiently across resources. Determine optimal placement of data by determining where the majority of your users will access it from. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
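&lt;p&gt;The compute recommendation above could be sketched with the AWS CLI as follows (the group name, launch template, and subnet IDs are hypothetical, and a launch template is assumed to exist already):&lt;/p&gt;

```shell
# Create an Auto Scaling group spanning two AZ subnets,
# with minimum, maximum, and desired capacity set.
# web-asg, web-launch-template, and the subnet IDs are placeholders.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-launch-template,Version='$Latest' \
  --min-size 2 \
  --max-size 10 \
  --desired-capacity 4 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
```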

&lt;p&gt;Once you have determined your required architecture, configure a test environment and load-test everything. Try to make it fail so that you can see your constraints and work to mitigate them. Consider cost at every step--optimize based on your stakeholder requirements, but ensure that the cost is well documented and presented. You will need to weigh the trade-offs for an optimal performance experience. &lt;/p&gt;

&lt;p&gt;Get buy-in from your stakeholders and implement the design, ensuring that you leverage Infrastructure as Code to rapidly deploy and evolve the architecture. Configure the necessary CloudWatch alarms and/or Kinesis Data Firehose delivery streams to monitor the environment, and move swiftly to mitigate bottlenecks. &lt;/p&gt;
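&lt;p&gt;A minimal sketch of one such CloudWatch alarm (the Auto Scaling group name and SNS topic ARN below are placeholders):&lt;/p&gt;

```shell
# Alarm when average CPU across the fleet stays above 80% for 10 minutes.
# web-asg and the SNS topic ARN are placeholder values.
aws cloudwatch put-metric-alarm \
  --alarm-name web-asg-high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=web-asg \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```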

</description>
    </item>
    <item>
      <title>AWS Power Hour Week 3: Design Resilient Architecture</title>
      <dc:creator>techD</dc:creator>
      <pubDate>Wed, 19 Jul 2023 15:42:48 +0000</pubDate>
      <link>https://forem.com/techd/aws-power-hour-week-3-design-reslient-architecture-4258</link>
      <guid>https://forem.com/techd/aws-power-hour-week-3-design-reslient-architecture-4258</guid>
      <description>&lt;p&gt;For Week 3 of the AWS Power Hour: AWS Solutions Architect Associate, they covered Domain 2 of the exam, which is Designing Resilient Architectures. They did a quick overview (it's not possible to deep dive in 90 minutes) and then presented us with the architecture below for our 'HomeFun' project. We were assigned to review and make recommendations for a more robust, resilient architecture, using the &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html"&gt;AWS Well-Architected Framework&lt;/a&gt; as a guide. This framework is integral to the Solutions Architect (and indeed, any) role, as it provides 'Best Practices' to follow. As with any Architecture, Client RTO and RPO will dictate how much or how little resiliency we build in--the changes to this model architecture is designed for High-Availability and Fault-tolerance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cajXiNXN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/to6ad9kfkxqwcr64901h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cajXiNXN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/to6ad9kfkxqwcr64901h.png" alt="AWS Architecture Diagram" width="719" height="671"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At first glance, the diagram is definitely not resilient: there is no fault tolerance, and it exists in a single Availability Zone, exposed to the internet. But if you look deeper, it is &lt;strong&gt;trying&lt;/strong&gt; to be resilient by having two web servers running on Amazon EC2 instances to serve up content. However, if I put on my &lt;em&gt;network administrator&lt;/em&gt; hat, I see that the web servers are not load balanced (so no fault tolerance) and are connected to a database server, which leads me to believe this is some type of Content Management System (CMS). That same hat sees that the database server, a MySQL database running on an Amazon EC2 instance, is sitting exposed to the public internet--which is not only a failure of security best practices, it also makes this design not resilient. &lt;/p&gt;

&lt;p&gt;If we assume that this is some type of CMS, then a three-tier architecture immediately comes to mind. We can use Elastic Load Balancing and Auto Scaling groups to scale the application. We can split out the content of the CMS using Elastic File System, and finally, place our database into an Amazon RDS for MySQL or Amazon Aurora DB instance and configure at least one standby replica.&lt;/p&gt;

&lt;p&gt;How do we do this, you might ask? Let's take a high-level look at the steps involved. We are making the assumption that the EC2 instances were originally configured with a User Data script that installs and does the initial configuration of your CMS. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create the required environment--VPC and subnets (for a three-tier architecture, we need three in each AZ we will use, one each for web, app, and DB) with appropriate route tables and network ACLs. You will need an Internet Gateway (IGW), and a security group for each tier plus the load balancer, with appropriately designed inbound and outbound rules (for example, the database should only allow inbound connections from the web tier security group on port 3306, and the web tier should only allow access from the load balancer). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, create a launch template using one of the running EC2 instances as a source. This way, we will capture all current configuration, and we can &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html"&gt;modify this launch template&lt;/a&gt; later. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, we will need to create a DB subnet group in the RDS console and assign it to our VPC, selecting the database subnets in each Availability Zone. Then, we will create the database. If you use RDS, there is a Multi-AZ deployment model with automated failover, and you can set up automated backups for point-in-time recovery. Once it is created, migrate the data from the current MySQL database using the backup-and-restore method. For more details, review the &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html"&gt;documentation&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, we need to create an &lt;a href="https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html"&gt;EFS file system&lt;/a&gt;, which will be mounted on our app tier. We will need to use our EFS security group and, from the command line of our EC2 instance, migrate the existing files from the CMS content folder(s) to EFS. Reboot the EC2 instance and confirm you can still access the content. Next, update the launch template to add the EFS file system changes. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As our final step, we will create the &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html"&gt;ELB&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/get-started-with-ec2-auto-scaling.html"&gt;Auto Scaling group&lt;/a&gt; and integrate them together. We will set up our scaling policy based on the business rules of the company we are doing the work for. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
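&lt;p&gt;The security-group relationships from the first step can be sketched with the AWS CLI like this (all group IDs here are placeholders):&lt;/p&gt;

```shell
# Web tier: accept HTTP only from the load balancer's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-web0001 \
  --protocol tcp --port 80 \
  --source-group sg-elb0001

# Database tier: accept MySQL (3306) only from the web tier's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-db0001 \
  --protocol tcp --port 3306 \
  --source-group sg-web0001
```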

&lt;p&gt;Our architecture has evolved to provide resilience and greater security to the company. We have locked the database down so it can be accessed only by our web servers, and both tiers are designed to scale with demand, based on business rules. This environment could easily be duplicated across additional Availability Zones as the need arises. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dyuET5TO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vmiypsf6rvjaxpks0e3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dyuET5TO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vmiypsf6rvjaxpks0e3b.png" alt="Image description" width="800" height="855"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Securing AWS Accounts</title>
      <dc:creator>techD</dc:creator>
      <pubDate>Mon, 10 Jul 2023 12:08:03 +0000</pubDate>
      <link>https://forem.com/techd/securing-aws-accounts-50ba</link>
      <guid>https://forem.com/techd/securing-aws-accounts-50ba</guid>
      <description>&lt;p&gt;AWS Accounts are opened with a Root Account. By Default, this Root Account has full access to all aspects of an AWS Environment. As such, this account should &lt;strong&gt;never&lt;/strong&gt; be used for routine, daily access. This account should be secured by MFA and only known to a few individuals within your organization. Instead, identities (human and machine) should be created to securely access your AWS workloads. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;For human identities, the best practice is to rely on a centralized identity provider (identity federation) for all users who access AWS, using the SAML 2.0 protocol or OpenID Connect. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All users who require access to resources within your organization's AWS account should be provided with their own identity. This aids in monitoring and auditing of resources and, when combined with groups and policies, allows granular control of access to resources. All users should be set up to use MFA and have their MFA devices registered. Strong passwords should be required, and periodic audits and rotation of credentials should be mandatory. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Groups should be created for access to internal resources. All users should belong to one or more groups, to provide them with access to resources. Group memberships should be periodically reviewed and membership revoked for those who no longer require access, due to changes in role or exit from your organization. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Policies should be created that allow (or deny) access to resources within your AWS Account. These policies should then be assigned appropriately to the Groups containing the users requiring access to resources. Privileges should be granted using the Principle of Least Privilege--meaning that only the permissions required for an individual to do their job should be applied. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
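&lt;p&gt;A minimal sketch of the group-and-policy pattern described above, using the AWS CLI (the group name, policy, bucket, and user are all hypothetical):&lt;/p&gt;

```shell
# Create a group, attach a least-privilege policy, and add a user to it.
aws iam create-group --group-name reporting-readers

# The inline policy grants read-only access to a single (hypothetical) bucket.
aws iam put-group-policy \
  --group-name reporting-readers \
  --policy-name s3-reports-read-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ]
    }]
  }'

aws iam add-user-to-group --group-name reporting-readers --user-name alice
```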

&lt;p&gt;This barely scratches the surface of what is needed to secure user access to AWS Resources. The &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html"&gt;Security Pillar&lt;/a&gt; of the AWS Well-Architected Framework contains the Best Practices for securing AWS Workloads. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Amazon EC2 Basics</title>
      <dc:creator>techD</dc:creator>
      <pubDate>Tue, 04 Apr 2023 12:15:12 +0000</pubDate>
      <link>https://forem.com/techd/amazon-ec2-basics-10kc</link>
      <guid>https://forem.com/techd/amazon-ec2-basics-10kc</guid>
      <description>&lt;p&gt;Amazon Elastic Compute Cloud (EC2) is a popular cloud computing service offered by Amazon Web Services (AWS). It provides scalable computing resources in the cloud, allowing users to easily deploy and manage virtual machines (VMs) to meet their computing needs. In this article, we will explore the technical details of Amazon EC2, including its architecture, features, and management tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;Amazon EC2 is built on a virtualization infrastructure that enables users to launch and manage instances of virtual machines, each of which is a complete computing environment. Each instance runs on a hypervisor, which isolates the instance from other instances running on the same physical server. This isolation ensures that each instance is secure and provides a high level of performance.&lt;/p&gt;

&lt;p&gt;EC2 instances are available in a variety of configurations, including multiple operating systems, various types of processors, memory, and storage options. Instances can be launched in various regions around the world, allowing users to choose the best location for their application to optimize latency, availability, and compliance requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;p&gt;Amazon EC2 provides a wide range of features that make it easy for customers to deploy and manage their VMs. Some of the key features of Amazon EC2 include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Elasticity: EC2 enables users to quickly and easily scale their computing capacity up or down, based on their changing needs. This feature makes it easy to handle fluctuating workloads, such as seasonal spikes in traffic or unexpected increases in demand.&lt;/li&gt;
&lt;li&gt;Security: EC2 provides a highly secure computing environment, with features such as firewalls, security groups, and encryption. Users can also create custom security policies to ensure that their instances are protected from unauthorized access.&lt;/li&gt;
&lt;li&gt;Availability: EC2 offers a highly reliable service, with multiple availability zones (AZs) within each region. These AZs are physically separated from each other and are designed to provide independent power, cooling, and networking. This design ensures that applications running on EC2 instances are highly available and can continue to function even if one AZ experiences an outage.&lt;/li&gt;
&lt;li&gt;Customizability: EC2 offers a high degree of customization, with many options for instance types, storage, and networking. This feature allows users to tailor their computing environment to meet the specific needs of their application.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Management tools
&lt;/h3&gt;

&lt;p&gt;Amazon EC2 provides a variety of tools for managing VMs and other resources in the cloud. Some of the key management tools of Amazon EC2 include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Management Console - The AWS Management Console is a web-based interface that allows customers to manage their VMs and other resources in the cloud. The console provides a range of features, such as instance launch wizard, security group editor, and load balancer configuration.&lt;/li&gt;
&lt;li&gt;AWS CLI - The AWS Command Line Interface (CLI) is a tool that allows customers to manage their VMs and other resources from a command line. It provides commands to start, stop, and terminate instances, among many others, which can be used to automate the management of VMs.&lt;/li&gt;
&lt;li&gt;AWS SDKs - AWS Software Development Kits (SDKs) are programming libraries that allow developers to build applications that interact with Amazon EC2 and other AWS services. The SDKs provide a range of features, such as API authentication, resource management, and error handling.&lt;/li&gt;
&lt;/ul&gt;
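&lt;p&gt;As a brief illustration of that command-line workflow (the AMI ID, instance type, and instance ID below are placeholders):&lt;/p&gt;

```shell
# Launch an instance, then stop and terminate it from the command line.
# The AMI ID and instance ID are placeholder values.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --count 1

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```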

&lt;h3&gt;
  
  
  Benefits
&lt;/h3&gt;

&lt;p&gt;EC2 provides several benefits to businesses and developers, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost-effectiveness: EC2 is a cost-effective solution for businesses that need to scale their computing capacity up or down quickly. Because users only pay for the computing resources they use, they can avoid the upfront costs of purchasing and maintaining their own hardware.&lt;/li&gt;
&lt;li&gt;Agility: EC2 enables businesses to launch and manage instances quickly and easily, allowing them to respond quickly to changing market conditions and customer demands.&lt;/li&gt;
&lt;li&gt;Global reach: EC2 enables businesses to deploy their applications in multiple regions around the world, ensuring that they can deliver a fast and reliable service to customers no matter where they are located.&lt;/li&gt;
&lt;li&gt;Scalability: EC2 allows businesses to scale their computing capacity up or down quickly and easily, making it easy to handle growth or changes in demand.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Amazon EC2 is a highly scalable and customizable cloud computing service that provides businesses with a cost-effective and reliable solution for their computing needs. With its highly secure and highly available environment, EC2 is a popular choice for businesses of all sizes that need to quickly and easily scale their computing capacity up or down. Whether you are a developer looking to launch a new application or a business looking to scale your existing infrastructure, Amazon EC2 offers a flexible and highly customizable solution that can meet your needs.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Multi Region Access Points</title>
      <dc:creator>techD</dc:creator>
      <pubDate>Wed, 22 Mar 2023 00:53:13 +0000</pubDate>
      <link>https://forem.com/techd/aws-multi-region-access-points-4fo6</link>
      <guid>https://forem.com/techd/aws-multi-region-access-points-4fo6</guid>
      <description>&lt;p&gt;Amazon S3 Multi Region Access Points provide a global endpoint for routing Amazon S3 request traffic between AWS Regions. Each global endpoint routes Amazon S3 data request traffic from multiple sources, including traffic originating in Amazon Virtual Private Clouds (VPCs), from on-premises data centers over AWS PrivateLink, and from the public internet without building complex networking configurations with separate endpoints. Establishing an AWS PrivateLink connection to an S3 Multi-Region Access Point allows you to route S3 requests into AWS, or across multiple AWS Regions and accounts over a private connection using a simple network architecture and configuration without the need to configure a VPC peering connection.&lt;br&gt;
More Amazon documentation can be found here: &lt;a href="https://aws.amazon.com/s3/features/multi-region-access-points/"&gt;https://aws.amazon.com/s3/features/multi-region-access-points/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Multi Region Access Point Mini Project
&lt;/h2&gt;

&lt;p&gt;(thanks to Adrian Cantrill for this mini-project)&lt;/p&gt;

&lt;p&gt;We will create buckets in two different regions&lt;/p&gt;

&lt;h3&gt;
  
  
  Step One: Setup Buckets
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Open S3 Console under your profile&lt;/li&gt;
&lt;li&gt;Create two buckets, one in each of two different regions (each must have a globally unique name with no upper-case letters; appending random numbers is a good idea)&lt;/li&gt;
&lt;li&gt;Enable bucket versioning in each bucket&lt;/li&gt;
&lt;li&gt;In the left menu, select 'Multi-Region Access Points', then click 'Create Multi-Region Access Point'&lt;/li&gt;
&lt;li&gt;Give it a name that is unique within your AWS account&lt;/li&gt;
&lt;li&gt;Add the Buckets&lt;/li&gt;
&lt;li&gt;Click 'Create Multi-Region Access Point' at the bottom and wait for completion. &lt;em&gt;Note&lt;/em&gt;: This can take up to 24 hours to complete, but typically takes between 10–30 minutes&lt;/li&gt;
&lt;/ul&gt;
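&lt;p&gt;The same Multi-Region Access Point can also be created from the CLI; this is a sketch with placeholder bucket names and account ID. Note that Multi-Region Access Point control-plane requests are routed through the us-west-2 Region:&lt;/p&gt;

```shell
# Create a Multi-Region Access Point over two versioned buckets.
# The account ID and bucket names are placeholder values.
aws s3control create-multi-region-access-point \
  --account-id 111122223333 \
  --region us-west-2 \
  --details '{
    "Name": "my-mrap-demo",
    "Regions": [
      {"Bucket": "example-bucket-use1-12345"},
      {"Bucket": "example-bucket-euw1-12345"}
    ]
  }'
```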

&lt;h3&gt;
  
  
  Step Two: Setup Replication
&lt;/h3&gt;

&lt;p&gt;Next, we configure replication between the buckets&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note the ARN (Amazon Resource Name) and Alias and copy them for possible later use&lt;/li&gt;
&lt;li&gt;Click Replication and Failover Tab&lt;/li&gt;
&lt;li&gt;Click the Replication button and note there is no replication&lt;/li&gt;
&lt;li&gt;Click the Failover Button and note that the two buckets are in 'Active/Active' Failover&lt;/li&gt;
&lt;li&gt;Scroll down and click "Create Replication Rules"&lt;/li&gt;
&lt;li&gt;Since we are 'Active/Active', we will use the 'Replicate Objects Among All Specified Buckets' template&lt;/li&gt;
&lt;li&gt;Click to select both Buckets&lt;/li&gt;
&lt;li&gt;The Scope can be limited by filters (beyond the scope of this project—experiment on your own), or applied to all objects in the bucket. Click 'Apply to all objects in the bucket' for this project.&lt;/li&gt;
&lt;li&gt;Accept the default checkboxes for 'Additional Replication Options', and click 'Create Replication Rules'&lt;/li&gt;
&lt;li&gt;We will see that Replication is in place&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step Three: Testing Multi-Region Access
&lt;/h3&gt;

&lt;p&gt;To Test Multi-Region Access:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go back to the main console page in AWS&lt;/li&gt;
&lt;li&gt;Select another region, not one of the two you configured&lt;/li&gt;
&lt;li&gt;Click CloudShell to pull up a command-line interface (&lt;em&gt;Note&lt;/em&gt;: CloudShell is not available in every region; see here for a list of regions you can use: &lt;a href="https://docs.aws.amazon.com/general/latest/gr/cloudshell.html"&gt;https://docs.aws.amazon.com/general/latest/gr/cloudshell.html&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Create a 10 MB Test file using the command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dd if=/dev/urandom of=test1.file bs=1M count=10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Upload it to the ARN you created earlier using the command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp test1.file s3://{insertyourarnhere}/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check your buckets: you will see the file in one, and it will ultimately be replicated to the second bucket. (&lt;em&gt;Note&lt;/em&gt;: There is no set time for standard S3 replication to complete; it can take up to a couple of hours according to the documentation. You can enable S3 Replication Time Control, which is designed to replicate 99.99% of objects within 15 minutes, but there is a cost associated with it)&lt;/li&gt;
&lt;li&gt;Let's do another test—switch to another region that has CloudShell and create another file, naming it test2.file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; dd if=/dev/urandom of=test2.file bs=1M count=10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;And upload it to the ARN:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp test2.file s3://{insertyourarnhere}/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Open the two buckets in separate windows and see which region receives the object first&lt;/li&gt;
&lt;li&gt;For the third test, keep the two bucket regions open and pick a region that is roughly central to your two regions&lt;/li&gt;
&lt;li&gt;Create and upload a 3rd file (name it test3.file)&lt;/li&gt;
&lt;li&gt;See which site wins&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For our 4th and final test, we're going to try to get an object, via our Multi-Region Access Point that has been created in one bucket, but our get request is routed to another bucket that has not had the file replicated yet.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open two CloudShells, one in the region of each bucket&lt;/li&gt;
&lt;li&gt;In one region, create a new file as above and name it test4.file&lt;/li&gt;
&lt;li&gt;Enter the command to copy the file to the bucket, but do not execute it yet&lt;/li&gt;
&lt;li&gt;Go to the other CloudShell and enter this command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp s3://{insertyourarnhere}/test4.file .   # note the space and trailing period
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Go back to the first Cloudshell and run the command to copy the file to the bucket.&lt;/li&gt;
&lt;li&gt;Go to the other region and run the command you typed in. You should get a failure like so:   &lt;em&gt;fatal error: An error occurred (404) when calling the HeadObject operation: Key "test4.file" does not exist.&lt;/em&gt; That’s because the file hasn't replicated to that region yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This shows what can happen if you have replication enabled and an application calls a file in a region where it does not exist. If your application requires all objects available immediately, Multi-Region Access Points may not be the best solution; or at least the application should be able to handle 404 errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step Four: Clean up AWS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Head to the Multi-Region Access Points page in the S3 console and delete the Access point. You will need to wait for this to complete before you can delete your buckets&lt;/li&gt;
&lt;li&gt;Empty each S3 bucket and delete them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This concludes our mini-project on AWS Multi-Region Access.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Amazon S3 Basics</title>
      <dc:creator>techD</dc:creator>
      <pubDate>Tue, 21 Mar 2023 14:39:28 +0000</pubDate>
      <link>https://forem.com/techd/amazon-s3-basics-18ke</link>
      <guid>https://forem.com/techd/amazon-s3-basics-18ke</guid>
      <description>&lt;p&gt;Amazon S3 is their Simple Storage Service. It is an economical, scalable and resilient service used to store large amounts of data. The storage platform is global and exists in the AWS Public Zone. Resiliency is in the ability to be replicated across not only Availability Zones (one or more discrete data centers with physical security as well as redundant power, networking and connectivity), but regions (physical location around the world with a cluster of Data Centers) as well. It is accessible via the Command Line Interface, the AWS UI, using an API or even via HTTP/HTTPS.&lt;/p&gt;

&lt;p&gt;S3 stores data (called 'objects') in a container called a bucket. Conceptually speaking, an 'object' is like a file. Objects can be from 0 bytes to 5 terabytes in size, and you can store an unlimited number of them. You can use Amazon S3 as a data lake, to store large volumes of media files--audio, video, photos--or for a myriad of other purposes. Objects consist of a key (name) and a value (data), plus other information such as a version ID, metadata, access controls, or subresources.&lt;/p&gt;

&lt;p&gt;An S3 bucket is a container for objects. Buckets are created in a specific region and never leave their primary home region. This makes buckets stable, enables you to control data sovereignty, and contains any failure within a region. A bucket can hold anywhere from zero to an unlimited number of objects. Buckets have no structure; all objects are stored at the root level. The UI will present what appear to be 'folders', but these are really just part of the object key, known as a 'prefix' (e.g., /dave/elephant.jpg or /sam/description.txt). There is also no concept of a file type.&lt;/p&gt;

&lt;p&gt;Bucket names must be globally unique--meaning that you cannot take a generic bucket name like 'video' if it already exists in any region of any AWS account. Buckets are also where many permissions and options are set. A bucket name must be 3–63 characters long, consist only of lowercase letters, numbers, dots, and hyphens, and start and end with a lowercase letter or number. It cannot be formatted like an IP address (1.1.1.1). There is a soft limit of 100 buckets per AWS account--more than 100 requires a support request to AWS--and a hard limit of 1,000 per account. This means a large organization may have to consolidate data into a smaller number of buckets and use prefixes to organize it among users.&lt;/p&gt;
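&lt;p&gt;Those naming rules can be checked locally before attempting a bucket creation; this small shell function is a sketch of the constraints described above:&lt;/p&gt;

```shell
# Validate an S3 bucket name against the basic rules:
# 3-63 characters; lowercase letters, numbers, dots, and hyphens;
# must start and end with a letter or number; must not look like an IP.
is_valid_bucket_name() {
  local name="$1"
  # length and character-set check
  printf '%s' "$name" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$' || return 1
  # reject names formatted like an IP address
  printf '%s' "$name" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' && return 1
  return 0
}
```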

&lt;p&gt;Other capabilities of S3 include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Object Versioning, which is a feature that allows you to keep multiple versions of an object in the same bucket. This is useful for applications that require a rollback or recovery of data.&lt;/li&gt;
&lt;li&gt;MFA Delete, which is a feature that requires a Multi-Factor Authentication (MFA) device to permanently delete an object version or change a bucket's versioning state. This protects critical data against accidental or malicious deletion.&lt;/li&gt;
&lt;li&gt;Object Storage Classes, which let you store objects in different tiers that trade access speed and availability against cost:

&lt;ul&gt;
&lt;li&gt;Standard Storage Class, the default: low-latency, high-throughput storage for frequently accessed data.&lt;/li&gt;
&lt;li&gt;Standard Infrequent Access (Standard-IA) Storage Class, for objects that are accessed less frequently but require rapid access when needed.&lt;/li&gt;
&lt;li&gt;One Zone Infrequent Access (One Zone-IA) Storage Class, like Standard-IA but stored in a single Availability Zone, at lower cost and with lower resilience.&lt;/li&gt;
&lt;li&gt;S3 Intelligent-Tiering Storage Class, which automatically moves objects between access tiers based on observed access patterns.&lt;/li&gt;
&lt;li&gt;Glacier (Instant Retrieval) Storage Class, low-cost archival storage for rarely accessed data that still needs millisecond access.&lt;/li&gt;
&lt;li&gt;S3 Glacier Deep Archive Storage Class, the lowest-cost class, for long-term archives that can tolerate retrieval times of hours.&lt;/li&gt;
&lt;li&gt;S3 Glacier Flexible Retrieval Storage Class, low-cost archival storage with retrieval times ranging from minutes to hours.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Static Website Hosting, which is a feature that allows you to serve a static website (HTML, CSS, JavaScript, media) directly from a bucket.&lt;/li&gt;
&lt;li&gt;Lifecycle Management, which is a feature that allows you to automatically move objects between storage classes or delete them after a certain period of time. This is useful for controlling storage costs as data ages.&lt;/li&gt;
&lt;li&gt;Encryption of objects at rest using server-side encryption with Amazon S3-managed keys (SSE-S3), AWS Key Management Service keys (SSE-KMS), or customer-provided keys (SSE-C); client-side encryption is also supported.&lt;/li&gt;
&lt;li&gt;S3 Replication, which is a feature that allows you to automatically replicate objects to other buckets, including buckets in other AWS accounts.&lt;/li&gt;
&lt;li&gt;Cross-Region Replication, a form of S3 Replication that copies objects to a bucket in a different AWS Region (Same-Region Replication is also available).&lt;/li&gt;
&lt;li&gt;Pre-signed URLs, which are URLs that provide temporary access to objects in S3.&lt;/li&gt;
&lt;li&gt;Access Points, which are named network endpoints attached to a bucket, each with its own access policy, simplifying access management for shared datasets.

&lt;ul&gt;
&lt;li&gt;Multi-Region Access Points, which provide a single global endpoint that routes requests to buckets in multiple Regions.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
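&lt;p&gt;Several of these features compose naturally. As one hedged example, here is a lifecycle configuration expressed as the Python dictionary shape the AWS SDK's put_bucket_lifecycle_configuration call accepts; the rule ID and 'logs/' prefix are hypothetical:&lt;/p&gt;

```python
import json

# Hypothetical lifecycle configuration: objects under the logs/ prefix move to
# Standard-IA after 30 days, to Glacier after 365 days, and are deleted after
# 730 days. This is the document shape accepted by boto3's
# put_bucket_lifecycle_configuration.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 730},
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```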

&lt;p&gt;As you can see, Amazon S3 is a very powerful and flexible service. It is cost-effective, scalable, and resilient--a great fit for a Data Lake, for storing large volumes of media files, or for a myriad of other purposes. More details may be found at: &lt;a href="https://aws.amazon.com/s3/"&gt;https://aws.amazon.com/s3/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Networking Fundamentals: OSI vs TCP/IP (DOD) Model</title>
      <dc:creator>techD</dc:creator>
      <pubDate>Sun, 19 Mar 2023 15:50:05 +0000</pubDate>
      <link>https://forem.com/techd/networking-fundamentals-osi-vs-tcpip-dod-model-1nfg</link>
      <guid>https://forem.com/techd/networking-fundamentals-osi-vs-tcpip-dod-model-1nfg</guid>
      <description>&lt;p&gt;The OSI Model is a conceptual model, 'providing a common basis for the coordination of standards development for the purpose of systems interconnection'{&lt;a href="https://www.iso.org/obp/ui/#iso:std:iso-iec:7498:-1:ed-1:v2:en"&gt;ISO/IEC7498-1:1998&lt;/a&gt;}. Numerous models have been tried, but none were as successful at clarifying networking concepts as the OSI Model. This commonly accepted user-friendly framework is an important piece among professionals and non-professionals alike.   &lt;/p&gt;

&lt;p&gt;The model was first developed in the 1970s to support the diverse computer networks that were emerging and competing for application in the world. This period is known as the &lt;a href="https://tinyurl.com/53zw8jdn"&gt;Protocol Wars&lt;/a&gt;, which culminated in the Internet-OSI Standards War, ultimately "won" by the Internet Protocol Suite (TCP/IP). The OSI Model consists of 7 layers (mnemonically, 'All People Seem To Need Data Processing'): Application, Presentation, Session, Transport, Network, Data Link, and Physical. Each layer has a separate function, which, when combined, allows for the interconnection of systems and the transmission of information between them. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Physical Layer - This layer is responsible for the transmission and reception of unstructured raw data between a device (router, switch, Network Interface Card, etc.) and the physical transmission medium. Transmission is achieved by converting digital bits into electrical, radio, or optical signals, and the layer specifications define characteristics such as voltage levels, maximum transmission distances, or physical connectors.&lt;/li&gt;
&lt;li&gt;Data Link Layer - This layer is responsible for node-to-node transfer; it is the link between directly connected nodes. It detects and, where possible, corrects errors that occur in the Physical Layer. IEEE 802 divides the layer into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. &lt;/li&gt;
&lt;li&gt;Network Layer - This layer provides the functional and procedural means of transferring packets from one node to another across connected networks. A 'network' is a medium to which many nodes can be connected, and each node has an address. If a message is too large, the Network Layer splits it into multiple packets, commonly called fragments, which are reassembled in order at the destination node.&lt;/li&gt;
&lt;li&gt;Transport Layer - This layer is responsible for providing the means by which variable-length data sequences are transmitted from one node to another, while also maintaining Quality of Service. &lt;/li&gt;
&lt;li&gt;Session Layer - This layer handles the setup, control, and teardown of a connection between two or more computers, which is called a 'Session'. DNS and other name-resolution protocols are often placed at this layer as well. &lt;/li&gt;
&lt;li&gt;Presentation Layer - This layer establishes data formatting and data translation into the format specified by the application layer. &lt;/li&gt;
&lt;li&gt;Application Layer - This is the layer closest to the end user. The Application Layer interacts directly with software applications that implement a component of communication between the client and server. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As data is passed down from the Application Layer to the Data Link Layer, it is 'encapsulated' into what is called a Protocol Data Unit (PDU). Each layer adds a 'header' to the data handed down from the layer above (which already contains that layer's header and data) and passes the result down to the next layer, which adds its own header in turn. At Layer 2, the Data Link Layer, a trailer is also added before the frame is passed to Layer 1, the Physical Layer, which converts it into signals for the transmission medium. &lt;/p&gt;
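&lt;p&gt;The encapsulation flow can be modeled with a toy sketch--the header and trailer strings below are placeholders, not real protocol formats:&lt;/p&gt;

```python
# Toy sketch of OSI encapsulation: as the payload moves down the stack, each
# layer prefixes its own header, and the Data Link layer also appends a trailer.
LAYERS = ["Application", "Presentation", "Session", "Transport", "Network", "DataLink"]

def encapsulate(payload):
    pdu = payload
    for layer in LAYERS:
        pdu = f"[{layer}-hdr]{pdu}"
        if layer == "DataLink":
            pdu = f"{pdu}[DataLink-trl]"  # e.g. a frame check sequence lives here
    return pdu

def decapsulate(pdu):
    # The receiving stack strips headers (and the trailer) in reverse order.
    for layer in reversed(LAYERS):
        if layer == "DataLink":
            pdu = pdu[: -len("[DataLink-trl]")]
        pdu = pdu[len(f"[{layer}-hdr]"):]
    return pdu

frame = encapsulate("GET /index.html")
print(frame)               # Application header innermost, Data Link outermost
print(decapsulate(frame))  # round-trips back to the original payload
```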

&lt;p&gt;The design of the TCP/IP Model of the Internet does not concern itself with any strict hierarchy of encapsulation or layering. It does, however, recognize four broad layers of functionality, derived from the scope of operation of the protocols contained within each layer. The TCP/IP (or DoD) model condenses the 7 OSI layers into four: the top three layers (Application, Presentation, and Session) become a single Application Layer; the Transport Layer is next, followed by the Internet Layer (which corresponds directly to the Network Layer); finally, the Link Layer combines the Data Link and Physical Layers. &lt;/p&gt;
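&lt;p&gt;That condensation can be written as a simple lookup table:&lt;/p&gt;

```python
# The condensation described above, as a lookup from each OSI layer to the
# TCP/IP (DoD) layer that absorbs it.
OSI_TO_TCPIP = {
    "Application":  "Application",
    "Presentation": "Application",
    "Session":      "Application",
    "Transport":    "Transport",
    "Network":      "Internet",
    "Data Link":    "Link",
    "Physical":     "Link",
}

# Four TCP/IP layers cover all seven OSI layers.
print(sorted(set(OSI_TO_TCPIP.values())))  # ['Application', 'Internet', 'Link', 'Transport']
```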

&lt;p&gt;The Internet Protocol Suite has become the standard for networking. As a pragmatic approach to computer networking, with simplified, independent implementations of its protocols, it is the more practical methodology. The foundational protocols are TCP (Transmission Control Protocol), IP (Internet Protocol), and UDP (User Datagram Protocol). The technical standards are maintained by the Internet Engineering Task Force, and the Internet Protocol Suite actually pre-dates the OSI Model. As a result, the OSI Model has become more of a theoretical reference than a practical implementation. &lt;/p&gt;

</description>
      <category>networking</category>
      <category>fundamentals</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AWS IAM Users and Policy</title>
      <dc:creator>techD</dc:creator>
      <pubDate>Sun, 26 Feb 2023 10:58:38 +0000</pubDate>
      <link>https://forem.com/techd/aws-iam-users-and-policy-37pa</link>
      <guid>https://forem.com/techd/aws-iam-users-and-policy-37pa</guid>
      <description>&lt;p&gt;I have undertaken the journey to achieve the AWS Solutions Architect certification as the first step to learning about Cloud Computing. The ability to automate the rollout of production systems and have those systems scale up or out based on nothing more than utilization is a powerful attraction; as long as you can control the costs.&lt;/p&gt;

&lt;p&gt;In the first part of the journey, after learning about Elastic Computing and Storage, we started down the path of Identity and Access Management. IAM Users are used when you need long-term AWS access for humans, applications, or services. There is a limit of 5,000 IAM users per account, and each user can be a member of at most 10 groups. While this limits their use in large organizations (which can turn to alternatives such as IAM Roles or Identity Federation), they still have their place.&lt;/p&gt;

&lt;p&gt;IAM users are Principals (humans, applications, etc.) that authenticate either with a username and password or with an Access Key pair (an Access Key ID and a Secret Access Key). Humans can use either method, depending on whether they are accessing AWS via the Web Console or the Command Line Interface. A principal becomes authenticated once it proves its identity.&lt;/p&gt;
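&lt;p&gt;To illustrate what the Secret Access Key is actually used for, here is the key-derivation step of AWS Signature Version 4, which the SDKs and CLI perform when signing requests; the secret, date, region, and service values below are placeholders, not real credentials:&lt;/p&gt;

```python
import hashlib
import hmac

# AWS Signature Version 4 derives a per-day, per-region, per-service signing
# key from the Secret Access Key, then signs each request with HMAC-SHA256.
def sigv4_signing_key(secret_key, date, region, service):
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

key = sigv4_signing_key("example-secret-key", "20230226", "us-east-1", "iam")
print(key.hex())  # a 32-byte key; the secret itself is never sent to AWS
```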

&lt;p&gt;AWS Identities can have IAM Policies applied to them. These policies are sets of security statements that allow or deny access to AWS Resources. A policy is a JSON document whose statements consist of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sid (Statement ID, which is optional)&lt;/li&gt;
&lt;li&gt;Effect (Allow or Deny)&lt;/li&gt;
&lt;li&gt;Resource (one or more resources, identified in ARN--Amazon Resource Name--format; the wildcard character * is allowed)&lt;/li&gt;
&lt;li&gt;Principal (the IAM User/Role the statement applies to, used in resource-based policies)&lt;/li&gt;
&lt;li&gt;Action (in the Service:Action format, e.g. s3:GetObject; wildcards are allowed)&lt;/li&gt;
&lt;li&gt;Condition (optional conditions for when the policy is in effect)&lt;/li&gt;
&lt;/ul&gt;
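&lt;p&gt;Putting these fields together, a minimal read-only policy might look like the following; the bucket name is hypothetical, and Principal is omitted because identity-based policies take their principal from the user, group, or role they are attached to:&lt;/p&gt;

```python
import json

# A minimal identity-based policy built from the fields described above.
# The bucket name is a placeholder for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyS3",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",     # the bucket itself (for ListBucket)
                "arn:aws:s3:::example-bucket/*",   # the objects in it (for GetObject)
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```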

&lt;p&gt;Because IAM policy documents are plain JSON, applying policies to multiple users and groups can be automated. Armed with templates for various levels of permissions, a Cloud Administrator can apply policies to IAM Users rapidly and consistently.&lt;/p&gt;

&lt;p&gt;Human IAM Users should have MFA applied to them as a Best Practice, especially those with administrative roles. The type of MFA you use is up to your own policy: a physical device, an app on a mobile device, or even biometric information.&lt;/p&gt;

&lt;p&gt;This is just scratching the surface of IAM users and policies, the versatility is far-reaching.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Benefits of Cloud Architecture</title>
      <dc:creator>techD</dc:creator>
      <pubDate>Wed, 15 Feb 2023 12:42:17 +0000</pubDate>
      <link>https://forem.com/techd/benefits-of-cloud-architecture-1o3c</link>
      <guid>https://forem.com/techd/benefits-of-cloud-architecture-1o3c</guid>
      <description>&lt;p&gt;Cloud architecture refers to the design of a cloud computing environment including the hardware, software, and networking components that make up the system. Cloud provides multiple choices, from simple Software as a Service (SaaS) where the customer is responsible for data, identities, devices and information with shared responsibility for identity and directory infrastructure; to Infrastructure as a Service (IaaS) where the customer is responsible for everything but the physical hosts, network and datacenter.  Here are some key benefits of using cloud architecture, backed by research:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Cost savings: Studies show that organizations using cloud computing can significantly reduce their IT costs compared to those using on-premises infrastructure. How are these savings achieved? Cloud computing uses a "pay-as-you-go" model--you only pay for what you use. Moving to the cloud also helps organizations reduce their energy consumption and lower their carbon footprint by reducing the amount of "real estate" needed to run the business: you won't need a dedicated datacenter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: Cloud architecture enables organizations to easily scale up or down their computing resources as demand for resources increases and decreases, eliminating on-premise or hosted hardware that is idle for much of its uptime.  This helps organizations to quickly respond to changing business needs and reduce the risk of over or under provisioning resources. One study from IDC found that organizations using the cloud saw an average of 68% faster time to market and 74% faster time to value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reliability: Cloud providers offer high levels of uptime and reliability, thanks to their judicious use of redundancy in hardware, software, and other means to reduce outages. This allows a company to avoid costly downtime and lost productivity. The top three cloud providers (AWS, Microsoft Azure, and Google Cloud) all have SLAs of at least 99.9%, which translates to slightly less than 9 hours of downtime per year. The result is more customer engagement, which can mean increased profits for the company.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security: Cloud providers invest heavily in security measures to protect their customers' data and applications. This does not eliminate breaches; however, it takes the bulk of the mitigation away from the company and places it in the hands of the provider. Customer data security and integrity are vital to the survival of any cloud provider, and all of the providers not only take care to secure their physical datacenters and prevent intrusion, but also provide the means by which company IT teams can monitor their data for breaches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flexibility: Cloud architecture enables organizations to easily switch between different types of resources, providing the flexibility to choose the best resources for their specific needs. Application deployment can be automated with tools provided by the vendor, and those automation tools can be used repeatedly to deploy new applications or update existing ones seamlessly and securely.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
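&lt;p&gt;The downtime figure quoted above for a 99.9% SLA is easy to verify:&lt;/p&gt;

```python
# Worked check of the SLA arithmetic: hours of allowed downtime per year
# at a given availability level.
def downtime_hours_per_year(availability):
    return (1 - availability) * 365 * 24

print(round(downtime_hours_per_year(0.999), 2))   # 8.76 hours ("three nines")
print(round(downtime_hours_per_year(0.9999), 2))  # 0.88 hours ("four nines")
```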

&lt;p&gt;Overall, cloud architecture offers a range of benefits that can help organizations improve their efficiency, reduce costs, and better serve their customers. By leveraging the scalability, reliability, security, and flexibility of the cloud, organizations can focus on their core business objectives, rather than the underlying IT infrastructure.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>programming</category>
      <category>frontend</category>
    </item>
  </channel>
</rss>
