<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mohamed Zahra</title>
    <description>The latest articles on Forem by Mohamed Zahra (@mohamed_zahra_).</description>
    <link>https://forem.com/mohamed_zahra_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F662613%2Fe862e53e-0eaf-4fd0-99d4-45e80bd63aab.jpg</url>
      <title>Forem: Mohamed Zahra</title>
      <link>https://forem.com/mohamed_zahra_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mohamed_zahra_"/>
    <language>en</language>
    <item>
      <title>Best Practices Design Patterns: Optimizing Amazon S3 Performance</title>
      <dc:creator>Mohamed Zahra</dc:creator>
      <pubDate>Fri, 23 Jul 2021 16:22:23 +0000</pubDate>
      <link>https://forem.com/awsmenacommunity/best-practices-design-patterns-optimizing-amazon-s3-performance-1kf6</link>
      <guid>https://forem.com/awsmenacommunity/best-practices-design-patterns-optimizing-amazon-s3-performance-1kf6</guid>
      <description>&lt;p&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;br&gt;
When building applications that upload and retrieve storage from Amazon S3, follow the AWS best practices guidelines to optimize performance. AWS also offers more detailed&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Amazon S3 automatically scales to high request rates when uploading and retrieving data. Your application can achieve at least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket. You can increase your read or write performance by parallelizing requests. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second. Data lake applications on Amazon S3 scan many millions or billions of objects for queries that run over petabytes of data. These applications aggregate throughput across multiple instances to get multiple terabits per second. Data lake applications achieve single-instance transfer rates that maximize the network interface use. Some applications can achieve consistent small object latencies (and first-byte-out latencies for larger objects) of around 100–200 milliseconds. Other AWS services can also help accelerate performance for different application architectures. For example, if you want higher transfer rates over a single HTTP connection or single-digit millisecond latencies, use &lt;strong&gt;Amazon CloudFront&lt;/strong&gt; or &lt;strong&gt;Amazon ElastiCache&lt;/strong&gt;. The following topics describe best practice guidelines and design patterns for optimizing performance for applications that use Amazon S3. This guidance supersedes any previous guidance on optimizing performance for Amazon S3. If your workload uses server-side encryption with AWS Key Management Service (SSE-KMS), see AWS KMS Limits for information about the request rates supported for your use case. You no longer have to randomize prefix naming for performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance Guidelines for Amazon S3&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Measure Performance&lt;/strong&gt;&lt;br&gt;
    When optimizing performance, look at network throughput, CPU, and Dynamic Random Access Memory (DRAM) requirements. Depending on the mix of demands for these different resources, it might be worth evaluating different Amazon EC2 instance types. It's also helpful to look at DNS lookup time, latency, and data transfer speed using HTTP analysis tools when measuring performance.&lt;br&gt;
&lt;strong&gt;Scale Storage Connections Horizontally&lt;/strong&gt;&lt;br&gt;
   Amazon S3 is a very large distributed system, not a single network endpoint like a traditional storage server. You can achieve the best performance by issuing multiple concurrent requests to Amazon S3. Spread these requests over separate connections to maximize the accessible bandwidth from Amazon S3.&lt;br&gt;
&lt;strong&gt;Use Byte-Range Fetches&lt;/strong&gt;&lt;br&gt;
   Using the Range HTTP header in a GET Object request, you can fetch a byte-range from an object, transferring only the specified portion. You can use concurrent connections to Amazon S3 to fetch different byte ranges from within the same object. This helps you achieve higher aggregate throughput versus a single whole-object request. Fetching smaller ranges also allows your application to improve retry times when requests are interrupted.&lt;br&gt;
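As a sketch of how byte-range fetching can be organized (the object size and 8 MB chunk size below are illustrative assumptions, not values from this article), splitting an object into Range header values might look like:

```python
def byte_ranges(object_size, chunk_size):
    """Yield HTTP Range header values that together cover the whole object."""
    for start in range(0, object_size, chunk_size):
        end = min(start + chunk_size, object_size) - 1
        yield f"bytes={start}-{end}"

# Each range can then be fetched on its own connection, for example with
# boto3's s3.get_object(Bucket=bucket, Key=key, Range=range_header).
ranges = list(byte_ranges(25_000_000, 8 * 1024 * 1024))
```

Each generated range is independent, so the requests can run on concurrent connections and the pieces reassembled in order.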
&lt;strong&gt;Retry Requests for Latency-Sensitive Applications&lt;/strong&gt;&lt;br&gt;
   Aggressive timeouts and retries help drive consistent latency. Given the large scale of Amazon S3, if the first request is slow, a retried request is likely to take a different path and quickly succeed. The AWS SDKs have configurable timeout and retry values that you can tune to the tolerances of your specific application.&lt;br&gt;
&lt;strong&gt;Combine Amazon S3 (Storage) and Amazon EC2 (Compute) in the Same AWS Region&lt;/strong&gt;&lt;br&gt;
    Although S3 bucket names are globally unique, each bucket is stored in a Region that you select when you create the bucket. To optimize performance, we recommend that you access the bucket from Amazon EC2 instances in the same AWS Region when possible. This helps reduce network latency and data transfer costs.&lt;br&gt;
&lt;strong&gt;Use Amazon S3 Transfer Acceleration to Minimize Latency Caused by Distance&lt;/strong&gt;&lt;br&gt;
   Amazon S3 Transfer Acceleration manages fast, easy, and secure transfers of files over long geographic distances between the client and an S3 bucket. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. It's ideal for transferring gigabytes to terabytes of data regularly across continents. The Amazon S3 Transfer Acceleration Speed Comparison tool lets you compare upload speeds across Amazon S3 Regions, with and without Transfer Acceleration. The tool uses multipart uploads to transfer a file from your browser to the various Regions, and shows how much time it takes to upload the file to each one.&lt;br&gt;
&lt;strong&gt;Use the Latest Version of the AWS SDKs&lt;/strong&gt;&lt;br&gt;
   AWS SDKs provide a simpler API for taking advantage of Amazon S3 from within an application. The SDKs include logic to automatically retry requests on HTTP 503 errors, and AWS is investing in code that responds and adapts to slow connections. The latest versions of the AWS SDKs have improved performance optimization features. The Transfer Manager automates horizontally scaling connections to achieve thousands of requests per second, using byte-range requests where appropriate. It's important to use the latest version to obtain the latest performance optimization tools. You can also optimize performance when you are using HTTP REST API requests. When using the REST API, you should follow the same best practices that are part of the SDKs: allow for timeouts and retries on slow requests, and use multiple connections to fetch object data in parallel.&lt;br&gt;
&lt;strong&gt;Performance Design Patterns for Amazon S3&lt;/strong&gt;&lt;br&gt;
  When designing applications that upload and retrieve data from Amazon S3, use our best practices design patterns to achieve the best performance for your application.&lt;br&gt;
 &lt;strong&gt;1. Using Caching for Frequently Accessed Content&lt;/strong&gt;&lt;br&gt;
   If a workload sends repeated GET requests for a common set of objects, you can use a cache such as Amazon CloudFront, Amazon ElastiCache, or AWS Elemental MediaStore to optimize performance.&lt;br&gt;
   Amazon CloudFront is a fast content delivery network (CDN) that transparently caches data from Amazon S3 in a large set of geographically distributed points of presence (PoPs).&lt;br&gt;
   Amazon ElastiCache is a managed, in-memory cache. With ElastiCache, you can provision Amazon EC2 instances that cache objects in memory.&lt;br&gt;
   AWS Elemental MediaStore is a caching and content distribution system specifically built for video workflows and media delivery from Amazon S3.&lt;br&gt;
&lt;strong&gt;2. Timeouts and Retries for Latency-Sensitive Applications&lt;/strong&gt;&lt;br&gt;
• Amazon S3 automatically scales in response to sustained new request rates, dynamically optimizing performance.&lt;br&gt;
• While Amazon S3 is internally optimizing for a new request rate, you might temporarily receive HTTP 503 responses until the optimization completes.&lt;br&gt;
• After Amazon S3 internally optimizes performance for the new request rate, all requests are generally served without retries.&lt;br&gt;
• For latency-sensitive applications, Amazon S3 advises tracking and aggressively retrying slower operations.&lt;br&gt;
• When you retry a request, we recommend using a new connection to Amazon S3 and performing a fresh DNS lookup.&lt;br&gt;
• If additional retries are needed, the best practice is to back off.&lt;br&gt;
• If your application makes fixed-size requests to Amazon S3, you should expect more consistent response times for each of these requests.&lt;br&gt;
• In this case, a simple strategy is to identify the slowest 1 percent of requests and retry them.&lt;br&gt;
• Even a single retry is frequently effective at reducing latency.&lt;/p&gt;
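The retry-with-backoff guidance above can be sketched as a small helper; the attempt count and base delay are arbitrary illustrative choices, not AWS-recommended values, and `operation` stands in for any S3 call:

```python
import random
import time

def with_retries(operation, max_attempts=4, base_delay=0.1):
    """Run operation(); on failure, back off exponentially (with jitter) and retry."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with full jitter before the next attempt.
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

In practice you would combine this with a fresh connection and DNS lookup per attempt, as the list above recommends.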

&lt;p&gt;&lt;strong&gt;3. Horizontal Scaling and Request Parallelization for High Throughput&lt;/strong&gt;&lt;br&gt;
Amazon S3 is a very large distributed system.&lt;br&gt;
To help you take advantage of its scale, we encourage you to horizontally scale parallel requests to the Amazon S3 service endpoints. For high-throughput transfers, Amazon S3 advises using applications that use multiple connections to GET or PUT data in parallel. For some applications, you can achieve parallel connections by launching multiple requests concurrently in different application threads, or in different application instances. You can also use the AWS SDKs to issue GET and PUT requests directly rather than relying on the SDKs' transfer management. As a general rule, when you download large objects within a Region from Amazon S3 to Amazon EC2, we suggest making concurrent requests for byte ranges of an object at the granularity of 8–16 MB. Make one concurrent request for each 85–90 MB/s of desired network throughput. Measuring performance is important when you tune the number of requests to issue concurrently. Measure the network bandwidth being achieved and the use of other resources that your application uses in processing the data. If your application issues requests directly to Amazon S3 using the REST API, we recommend using a pool of HTTP connections and re-using each connection for a series of requests.&lt;br&gt;
For information about using the REST API, see the Amazon S3 REST API Introduction. Finally, it's worth paying attention to DNS and double-checking that requests are being spread over a wide pool of Amazon S3 IP addresses. DNS queries for Amazon S3 cycle through a large list of IP endpoints. Network utility tools such as the netstat command line tool can show the IP addresses being used for communication with Amazon S3, and we provide guidelines for DNS configurations to use.&lt;br&gt;
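The parallelization advice above can be sketched as follows; `fetch_range` is a hypothetical callback standing in for a real byte-range GET (for example a wrapper around boto3's `get_object` with a Range header), which keeps the sketch runnable without AWS credentials:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_download(fetch_range, object_size, chunk_size=8 * 1024 * 1024, workers=8):
    """Fetch an object as concurrent byte-range requests and reassemble it.

    fetch_range(start, end) must return the bytes for the inclusive range,
    e.g. a wrapper around an S3 GET Object request with a Range header.
    """
    starts = list(range(0, object_size, chunk_size))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(
            lambda start: fetch_range(start, min(start + chunk_size, object_size) - 1),
            starts,
        )
        return b"".join(parts)
```

The worker count plays the role of the "one concurrent request per 85–90 MB/s of desired throughput" rule of thumb and should be tuned by measurement.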
 &lt;strong&gt;4. Using Amazon S3 Transfer Acceleration to Accelerate Geographically Disparate Data Transfers&lt;/strong&gt;&lt;br&gt;
 Transfer Acceleration uses the globally distributed edge locations in CloudFront for data transport.&lt;br&gt;
The AWS edge network has points of presence in more than 50 locations. The edge network also helps to accelerate data transfers into and out of Amazon S3. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.&lt;br&gt;
In general, the farther away you are from an Amazon S3 Region, the higher the speed improvement you can expect from using Transfer Acceleration. Requests made through the separate Amazon S3 Transfer Acceleration endpoint travel over the AWS edge locations.&lt;br&gt;
The best way to test whether Transfer Acceleration helps client request performance is to use the Amazon S3 Transfer Acceleration Speed Comparison tool. You are charged only for transfers where Amazon S3 Transfer Acceleration can potentially improve your upload performance.&lt;/p&gt;
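As a sketch, a bucket with Transfer Acceleration enabled is reached through a distinct endpoint; the bucket name below is a placeholder, and boto3 users can alternatively enable the accelerate endpoint in the client config rather than building URLs by hand:

```python
def accelerate_endpoint(bucket):
    """Return the S3 Transfer Acceleration endpoint for a bucket.

    Acceleration must first be enabled on the bucket itself; with boto3 you
    can instead pass Config(s3={"use_accelerate_endpoint": True}) to the client.
    """
    return "https://{0}.s3-accelerate.amazonaws.com".format(bucket)

endpoint = accelerate_endpoint("example-bucket")  # placeholder bucket name
```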

</description>
    </item>
    <item>
      <title>Security best practices in IAM, summarized</title>
      <dc:creator>Mohamed Zahra</dc:creator>
      <pubDate>Tue, 20 Jul 2021 18:22:26 +0000</pubDate>
      <link>https://forem.com/mohamed_zahra_/security-best-practices-in-iam-summarized-2ma2</link>
      <guid>https://forem.com/mohamed_zahra_/security-best-practices-in-iam-summarized-2ma2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Lock away your AWS account root user access keys&lt;/strong&gt;&lt;br&gt;
Do not use your AWS account root user access key to make programmatic requests to Amazon Web Services (AWS). The access key for your AWS account root user gives full access to all your resources for all services, including your billing information. You cannot reduce the permissions associated with this key, so protect it like you would your credit card numbers or any other sensitive secret.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you don't already have an access key for your AWS account root user, don't create one. Instead, use your account email address and password to sign in to the AWS Management Console and create an IAM user for yourself that has administrative permissions.&lt;/li&gt;
&lt;li&gt;If you do have an access key for your AWS account root user, delete it. If you must keep it, rotate (change) the access key regularly. To delete or rotate your root user access keys, go to the My Security Credentials page in the AWS Management Console.&lt;/li&gt;
&lt;li&gt;Never share your AWS account root user password or access keys with anyone. The remaining sections of this document discuss various ways to avoid having to share your account's root user credentials with other users. They also explain how to avoid embedding them in an application.&lt;/li&gt;
&lt;li&gt;Use a strong password to help protect account-level access to the AWS Management Console. For information about managing your AWS account root user password, see Changing the AWS account root user password.&lt;/li&gt;
&lt;li&gt;Enable AWS multi-factor authentication (MFA) for your AWS account root user. For more information, see Using multi-factor authentication (MFA) in AWS.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Create individual IAM users&lt;/strong&gt;&lt;br&gt;
   Don't give your AWS account credentials to anyone else. Instead, create individual users for anyone who needs access to your account. Create an IAM user for yourself as well, give that user administrative permissions, and use it for all your work. Each IAM user has a unique set of security credentials, and you can grant different permissions to each of them. If necessary, you can change or revoke an IAM user's permissions anytime. AWS recommends that you create new users without permissions and require them to change their password immediately. After they sign in for the first time, you can add policies to the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use user groups to assign permissions to IAM users&lt;/strong&gt;&lt;br&gt;
   You can create user groups that relate to job functions (administrators, developers, accounting, etc.). Next, define the relevant permissions for each user group. All the users in an IAM user group inherit the permissions of the user group they belong to. That way, you can make changes for everyone in a user group in just one place.&lt;br&gt;
&lt;strong&gt;Grant least privilege&lt;/strong&gt;&lt;br&gt;
   When you create IAM policies, follow the standard security advice of granting least privilege, or granting only the permissions required to perform a task. This is more secure than starting with too many permissions and then trying to tighten them later. It's important to start with a minimum set of permissions and grant additional permissions as necessary.&lt;br&gt;
&lt;strong&gt;A. Understand access level groupings&lt;/strong&gt;&lt;br&gt;
   You can use access level groupings to understand the level of access that a policy grants. Policy actions are classified as List, Read, Write, Permissions management, or Tagging. For example, you can choose actions from the List and Read access levels to grant read-only access to your users.&lt;br&gt;
&lt;strong&gt;B. Validate your policies&lt;/strong&gt;&lt;br&gt;
   IAM Access Analyzer provides over 100 policy checks to validate your policies. It generates security warnings when a statement in your policy allows access that is considered overly permissive. You can use the actionable recommendations provided through the security warnings as you work toward granting least privilege.&lt;br&gt;
&lt;strong&gt;C. Generate a policy based on access activity&lt;/strong&gt;&lt;br&gt;
   You can create a policy based on the access activity for an IAM entity. IAM Access Analyzer reviews your CloudTrail logs and generates a policy template that contains the permissions that have been used by the entity in your specified time frame. You can use the template to create a managed policy with fine-grained permissions.&lt;br&gt;
&lt;strong&gt;D. Use last accessed information&lt;/strong&gt;&lt;br&gt;
   Last accessed information includes information about the actions that were last accessed for some services, such as Amazon EC2, IAM, Lambda, and Amazon S3. View this information on the Access Advisor tab on the IAM console details page for an IAM user, group, role, or policy. You can use this information to identify unnecessary permissions so that you can refine your IAM or Organizations policies to better adhere to the principle of least privilege.&lt;br&gt;
&lt;strong&gt;E. Review account events in AWS CloudTrail&lt;/strong&gt;&lt;br&gt;
   To further reduce permissions, you can view your account's events in AWS CloudTrail Event history. CloudTrail event logs include detailed event information that you can use to reduce the policy's permissions so that it includes only the actions and resources your IAM entities actually use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get started using permissions with AWS managed policies&lt;/strong&gt;&lt;br&gt;
  AWS managed policies can give your employees the permissions they need to get started. These policies are already available in your account and are maintained and updated by AWS. To get started quickly, you can use AWS managed policies to give your employees access to the services they need. AWS managed policies are designed to provide permissions for many common use cases. Full-access AWS managed policies define permissions for service administrators by granting full access to a service. Power-user policies such as AWSCodeCommitPowerUser and AWSKeyManagementServicePowerUser provide multiple levels of access to services without allowing permissions management permissions. Partial-access policies like AmazonMobileAnalyticsWriteOnlyAccess and AmazonEC2ReadOnlyAccess provide specific levels of permission for users, user groups, and roles.&lt;br&gt;
&lt;strong&gt;Validate your policies&lt;/strong&gt;&lt;br&gt;
  It is a best practice to validate the policies that you create. You can perform policy validation when you create and edit JSON policies. IAM identifies any JSON syntax errors, while IAM Access Analyzer provides over 100 policy checks and actionable recommendations to help you author secure and functional policies. We recommend that you review and validate all of your existing policies.&lt;br&gt;
&lt;strong&gt;Use customer managed policies instead of inline policies&lt;/strong&gt;&lt;br&gt;
  For custom policies, we recommend that you use managed policies instead of inline policies. A key advantage of managed policies is that you can view all of them in one place in the console. You can also view this information with a single AWS CLI or AWS API operation. Inline policies are policies that exist only on an IAM identity (user, user group, or role). Managed policies are separate IAM resources that you can attach to multiple identities.&lt;br&gt;
&lt;strong&gt;Use access levels to review IAM permissions&lt;/strong&gt;&lt;br&gt;
  AWS categorizes each service action into one of five access levels based on what each action does: List, Read, Write, Permissions management, or Tagging. You can use these access levels to determine which actions to include in your policies. Make sure that your policies grant the least privilege that is needed to perform only the necessary actions.&lt;br&gt;
&lt;strong&gt;Configure a strong password policy for your users&lt;/strong&gt;&lt;br&gt;
  If you allow users to change their own passwords, create a custom password policy that requires them to create strong passwords and rotate their passwords periodically. You can upgrade from the default password policy to define password requirements, such as minimum length and whether nonalphabetic characters are required. For more information, see Setting an account password policy for IAM users.&lt;br&gt;
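A sketch of what a custom password policy might specify; the numbers are illustrative choices, not AWS requirements, and with boto3 these are the keyword arguments accepted by the IAM UpdateAccountPasswordPolicy call:

```python
# Illustrative values only; with boto3 you would apply them with
#   iam_client.update_account_password_policy(**password_policy)
password_policy = {
    "MinimumPasswordLength": 14,
    "RequireSymbols": True,
    "RequireNumbers": True,
    "RequireUppercaseCharacters": True,
    "RequireLowercaseCharacters": True,
    "MaxPasswordAge": 90,           # days before rotation is required
    "PasswordReusePrevention": 24,  # previous passwords that cannot be reused
}
```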
&lt;strong&gt;Enable MFA&lt;/strong&gt;&lt;br&gt;
  For extra security, we recommend that you require multi-factor authentication (MFA) for all users in your account. With MFA, users have a device that generates a response to an authentication challenge. Both the user's credentials and the device-generated response are required to complete the sign-in process. If a user's password or access keys are compromised, your account resources are still secure because of the additional authentication requirement.&lt;br&gt;
      The response is generated in one of the following ways:&lt;br&gt;
 1. Virtual and hardware MFA devices generate a code that you view on the app or device and then enter on the sign-in screen.&lt;br&gt;
 2. U2F security keys generate a response when you tap the device. The user does not manually enter a code on the sign-in screen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use roles for applications that run on Amazon EC2 instances&lt;/strong&gt;&lt;br&gt;
  Applications that run on an Amazon EC2 instance need credentials to access other AWS services. When you launch an EC2 instance, you can specify a role for the instance as a launch parameter. Roles provide temporary credentials to the EC2 instance, and these credentials are automatically rotated for you. The role's permissions determine what the application is allowed to do.&lt;br&gt;
&lt;strong&gt;Use roles to delegate permissions&lt;/strong&gt;&lt;br&gt;
  Don't share security credentials between accounts to allow users from another AWS account to access resources in your AWS account. Instead, use IAM roles. You can define a role that specifies what permissions the IAM users in the other account are allowed. You can also designate which AWS accounts have the IAM users that are allowed to assume the role.&lt;/p&gt;
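Cross-account delegation with roles goes through STS AssumeRole; the helper below simply reshapes an AssumeRole response into boto3-style credential kwargs, using a fake response so the sketch runs without AWS access (the role ARN in the comment and all key values are placeholders):

```python
def credentials_from_assume_role(response):
    """Map an STS AssumeRole response onto boto3-style credential kwargs."""
    creds = response["Credentials"]
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }

# In practice the response comes from a real STS call, e.g.:
#   sts.assume_role(RoleArn="arn:aws:iam::123456789012:role/example-role",
#                   RoleSessionName="example-session")
fake_response = {
    "Credentials": {
        "AccessKeyId": "AKIA-PLACEHOLDER",
        "SecretAccessKey": "secret-placeholder",
        "SessionToken": "token-placeholder",
    }
}
kwargs = credentials_from_assume_role(fake_response)
```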

&lt;p&gt;&lt;strong&gt;Do not share access keys&lt;/strong&gt;&lt;br&gt;
  Do not embed access keys within unencrypted code or share these security credentials between users in your AWS account. For applications that need access to AWS, configure the program to retrieve temporary security credentials using an IAM role. To allow your users individual programmatic access, create a user with personal access keys.&lt;br&gt;
&lt;strong&gt;Rotate credentials regularly&lt;/strong&gt;&lt;br&gt;
  If a password or access key is compromised without your knowledge, rotating credentials regularly limits how long they can be used to access your resources. Change your own passwords and access keys regularly, and make sure that all IAM users in your account do as well. You can apply a custom password policy to your account to require all your IAM users to rotate their passwords.&lt;br&gt;
&lt;strong&gt;Remove unnecessary credentials&lt;/strong&gt;&lt;br&gt;
  You can find unused passwords or access keys using the console, using the CLI or API, or by downloading the credentials report. Passwords and access keys that have not been used recently might be good candidates for removal. If you created an IAM user for an application that does not use the console, then the IAM user doesn't need a password.&lt;br&gt;
&lt;strong&gt;Use policy conditions for extra security&lt;/strong&gt;&lt;br&gt;
  You can define the conditions under which your IAM policies allow access to a resource. For example, you can write conditions to specify a range of allowable IP addresses that a request must come from. You can also set conditions that require the use of SSL or MFA (multi-factor authentication).&lt;br&gt;
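As a sketch of policy conditions (the CIDR and bucket are placeholders), a statement restricted to one source IP range and to MFA-authenticated principals might look like this, built as a Python dict:

```python
# Hypothetical statement: allow GetObject only from one CIDR and only with MFA.
# aws:SourceIp and aws:MultiFactorAuthPresent are global IAM condition keys.
statement = {
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*",
    "Condition": {
        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
        "Bool": {"aws:MultiFactorAuthPresent": "true"},
    },
}
```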
&lt;strong&gt;Monitor activity in your AWS account&lt;/strong&gt;&lt;br&gt;
  Logging features are available in the following AWS services:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon CloudFront.&lt;/li&gt;
&lt;li&gt;AWS CloudTrail.&lt;/li&gt;
&lt;li&gt;Amazon CloudWatch.&lt;/li&gt;
&lt;li&gt;AWS Config.&lt;/li&gt;
&lt;li&gt;Amazon Simple Storage Service.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>cloudskills</category>
      <category>security</category>
    </item>
    <item>
      <title>Overview of Amazon EC2 Spot Instances | AWS Whitepaper Summary</title>
      <dc:creator>Mohamed Zahra</dc:creator>
      <pubDate>Thu, 08 Jul 2021 09:25:53 +0000</pubDate>
      <link>https://forem.com/awsmenacommunity/overview-of-amazon-ec2-spot-instances-3kph</link>
      <guid>https://forem.com/awsmenacommunity/overview-of-amazon-ec2-spot-instances-3kph</guid>
      <description>&lt;h6&gt;
  
  
  &lt;strong&gt;Abstract&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;This paper provides an overview of Amazon EC2 Spot Instances, as well as best practices for using them effectively.&lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;When to Use Spot Instances&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;Spot Instances are the fourth Amazon Elastic Compute Cloud (Amazon EC2) pricing model. With Spot Instances, you can use spare Amazon EC2 computing capacity at discounts of up to 90% compared to On-Demand pricing. Unlike Reserved Instances or Savings Plans, Spot Instances do not require a commitment in order to achieve cost savings. Because they can be terminated by EC2 if there is no available capacity in the capacity pool, they are best suited for flexible workloads.&lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;How to Launch Spot Instances&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;The recommended service for launching Spot Instances is Amazon EC2 Auto Scaling. If you require more flexibility, have built your own instance launch workflows, or want to control individual aspects of the instance launches or the scaling mechanisms, you can use EC2 Fleet in Instant mode. When you use AWS services for running your cloud workloads, you can also use them for launching Spot Instances. Examples include Amazon EMR, Amazon EKS, Amazon ECS, AWS Batch, and AWS Elastic Beanstalk.&lt;/p&gt;
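For a single instance, a Spot request can also be expressed directly on the RunInstances API; the AMI ID below is a placeholder and the instance type is an illustrative choice, while `InstanceMarketOptions` is the parameter boto3's `run_instances` accepts for Spot:

```python
# Placeholder AMI and illustrative instance type; with boto3:
#   ec2_client.run_instances(**spot_request)
spot_request = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "c5.large",
    "MinCount": 1,
    "MaxCount": 1,
    "InstanceMarketOptions": {
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
}
```

For fleets of Spot capacity, EC2 Auto Scaling or EC2 Fleet remain the recommended entry points, as the paragraph above notes.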

&lt;h6&gt;
  
  
  &lt;strong&gt;How Spot Instances Work&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;Amazon EC2 Spot Instances can be interrupted by Amazon EC2 when EC2 needs the capacity back. When EC2 interrupts your Spot Instance, it either terminates, stops, or hibernates the instance, depending on the interruption behavior that you choose. If Amazon EC2 interrupts the instance in its first hour of running time, you are not charged for that usage; if you stop or terminate the instance yourself, you pay for any partial hour used (as you do for On-Demand or Reserved Instances). The Spot price for each instance type in each Availability Zone is determined by long-term trends in supply and demand for EC2 spare capacity. You pay the Spot price that is in effect, billed to the nearest second. We recommend that you do not specify a maximum price, but rather let the maximum price default to the On-Demand price; a high maximum price does not increase your chances of launching a Spot Instance.&lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;Managing Spot Instance Interruptions&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;Amazon EC2 instance rebalance recommendations are a signal that notifies you when a Spot Instance is at elevated risk of interruption, and Spot Instance interruption notices can help you manage your application to be fault tolerant.&lt;br&gt;&lt;br&gt;
You can decide to rebalance your workload to new or existing Spot Instances that are not at an elevated risk of interruption. You can take advantage of EC2's Capacity Rebalancing feature in EC2 Auto Scaling groups.&lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;Spot Instance Limits&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;There is a limit on the number of running and requested Spot Instances per AWS account per region. There are six Spot Instance limits, listed in the following table. Each limit specifies the vCPU limit for one or more instance families. If you terminate your Spot Instances but do not cancel the requests, the requests count against your Spot Instance vCPU limit until Amazon EC2 detects the terminations and closes the requests.&lt;/p&gt;

&lt;p&gt;With vCPU limits, you can use your limit in terms of the number of vCPUs that are required to launch any combination of instance types that meet your changing application needs. With an All-Standard Spot Instance Requests limit of 256 vCPUs, you could request 32 m5.2xlarge Spot Instances (32 x 8 vCPUs) or 16 c5.4xlarge Spot Instances (16 x 16 vCPUs), or a combination of any sizes that totals 256 vCPUs.&lt;/p&gt;
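The vCPU arithmetic above can be checked directly; the per-instance vCPU counts in the comments are the standard sizes for those types:

```python
def max_spot_instances(vcpu_limit, vcpus_per_instance):
    """How many Spot Instances of one size fit under a vCPU limit."""
    return vcpu_limit // vcpus_per_instance

# All-Standard Spot Instance Requests limit of 256 vCPUs:
m5_2xlarge_count = max_spot_instances(256, 8)    # m5.2xlarge: 8 vCPUs
c5_4xlarge_count = max_spot_instances(256, 16)   # c5.4xlarge: 16 vCPUs
```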

&lt;h6&gt;
  
  
  &lt;strong&gt;Spot Instance Best Practices&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;Your instance type requirements, budget requirements, and application design will determine how to apply the following best practices for your application:&lt;br&gt;
   • Be flexible about instance types. A Spot Instance pool is a set of unused EC2 instances with the same instance type and Availability Zone. Be flexible about which instance types you request and in which Availability Zones you can deploy your workload. For example, don't request only c5.large if you would also be willing to use the large size from the c4, m5, and m4 families.&lt;br&gt;
   • Use the capacity-optimized allocation strategy. Allocation strategies in EC2 Auto Scaling groups help you provision your target capacity without manually searching for Spot Instance pools with spare capacity. We recommend the capacity-optimized strategy because it automatically provisions instances from the most-available Spot Instance pools. Because your Spot capacity is sourced from pools with optimal capacity, this decreases the possibility that your Spot Instances are interrupted.&lt;br&gt;
   • Use proactive capacity rebalancing. Capacity Rebalancing helps you maintain workload availability by proactively augmenting your Auto Scaling group with a new Spot Instance before a running Spot Instance receives the two-minute interruption notice. When Capacity Rebalancing is enabled, EC2 Auto Scaling attempts to proactively replace Spot Instances that have received a rebalance recommendation, giving you the opportunity to rebalance your workload onto new Spot Instances that are not at elevated risk of interruption.&lt;br&gt;
   • Use integrated AWS services to manage your Spot Instances. Other AWS services integrate with Spot to reduce overall compute costs without the need to manage the individual instances or fleets. Consider the following solutions for your applicable workloads: Amazon EMR, Amazon ECS, AWS Batch, Amazon EKS, Amazon SageMaker, AWS Elastic Beanstalk, and Amazon GameLift.&lt;br&gt;
   • Choose the modern and correct launch tool for Spot Instances. If you need to build your application with control over the launch of Spot Instances, use the right tool. For most workloads, use EC2 Auto Scaling because it supplies a more comprehensive feature set for a wide variety of workloads. If you need more control over individual requests and are looking for a "launch only" tool, try EC2 Fleet in instant mode.&lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;Spot Integration with Other AWS Services&lt;/strong&gt;
&lt;/h6&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;Amazon EMR Integration&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;You can run Amazon EMR clusters on Spot Instances and significantly reduce the cost of processing vast amounts of data for your analytics workloads. You can easily mix Spot Instances with On-Demand and Reserved Instances using the EMR Instance Fleets feature.&lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;EC2 Auto Scaling Integration&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;You can use Amazon EC2 Auto Scaling groups to launch and manage Spot Instances, maintain application availability, diversify instance type and purchase option (On-Demand/Spot) selection, and scale your Amazon EC2 capacity using dynamic, scheduled, and predictive scaling policies.  &lt;/p&gt;
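As a sketch of how instance-type flexibility, the capacity-optimized allocation strategy, and Capacity Rebalancing come together in the CreateAutoScalingGroup API: the field names below are from the EC2 Auto Scaling API, while the group and template names are placeholders.

```python
# Hedged sketch of CreateAutoScalingGroup parameters for a Spot-diversified
# group; "my-asg" and "my-template" are placeholder names.
asg_params = {
    "AutoScalingGroupName": "my-asg",
    "MinSize": 1,
    "MaxSize": 16,
    "CapacityRebalance": True,  # proactively replace at-risk Spot Instances
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-template",
                "Version": "$Latest",
            },
            # Diversify across similarly sized instance types.
            "Overrides": [
                {"InstanceType": t}
                for t in ("c5.large", "c4.large", "m5.large", "m4.large")
            ],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,  # all Spot above base
            "SpotAllocationStrategy": "capacity-optimized",
            # No SpotMaxPrice: defaults to the On-Demand price, as recommended.
        },
    },
}
# With boto3, this dict would be passed as
#   boto3.client("autoscaling").create_auto_scaling_group(**asg_params)
```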

&lt;h6&gt;
  
  
  &lt;strong&gt;Amazon EKS Integration&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;Amazon EKS is a managed Kubernetes service that lets you cost-optimize your Kubernetes-based workloads with Spot Instances. EKS managed node groups manage the entire Spot Instance lifecycle, replacing soon-to-be-interrupted Spot Instances with newly launched instances. This reduces the chance of impact on application performance or availability when a Spot Instance is interrupted.&lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;Amazon ECS Integration&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;You can run Amazon ECS clusters on Spot Instances to reduce the operational cost of running containerized applications. Amazon ECS supports automatic draining of Spot Instances that are soon-to-be interrupted.&lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;Amazon ECS with AWS Fargate Spot Integration&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;If your containerized tasks are interruptible and flexible, you can choose to run your ECS tasks with the AWS Fargate Spot capacity provider. Your tasks then run on AWS Fargate, a serverless container platform, and you benefit from the cost savings of Fargate Spot.&lt;/p&gt;
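Choosing Fargate Spot is done through a capacity provider strategy when running tasks or creating a service. A minimal sketch follows; `FARGATE` and `FARGATE_SPOT` are the real provider names, while the cluster and task names are placeholders.

```python
# Hedged sketch of an ECS RunTask request mixing Fargate Spot with
# regular Fargate; "demo-cluster" and "web-task" are placeholder names.
run_task_params = {
    "cluster": "demo-cluster",
    "taskDefinition": "web-task",
    "count": 4,
    "capacityProviderStrategy": [
        # base=1 keeps one task on regular Fargate for availability.
        {"capacityProvider": "FARGATE", "base": 1, "weight": 1},
        # Remaining tasks prefer the cheaper, interruptible Fargate Spot.
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
}
# With boto3: boto3.client("ecs").run_task(**run_task_params)
```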

&lt;h6&gt;
  
  
  &lt;strong&gt;AWS Batch Integration&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;AWS Batch plans, schedules, and executes your batch computing workloads on AWS. AWS Batch dynamically requests Spot Instances on your behalf, reducing the cost of running your batch jobs.&lt;/p&gt;
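Requesting Spot capacity through Batch is configured on the compute environment. A hedged sketch of the CreateComputeEnvironment shape follows; the field names are from the AWS Batch API, while the environment name and subnet ID are placeholders.

```python
# Hedged sketch of an AWS Batch Spot compute environment; the name and
# subnet ID are placeholders, not real resources.
compute_environment = {
    "computeEnvironmentName": "spot-batch-ce",
    "type": "MANAGED",
    "computeResources": {
        "type": "SPOT",
        "allocationStrategy": "SPOT_CAPACITY_OPTIMIZED",
        "minvCpus": 0,        # scale to zero when no jobs are queued
        "maxvCpus": 256,
        "instanceTypes": ["optimal"],  # let Batch pick suitable families
        "subnets": ["subnet-PLACEHOLDER"],
    },
}
# With boto3: boto3.client("batch").create_compute_environment(**compute_environment)
```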

&lt;h6&gt;
  
  
  &lt;strong&gt;Amazon SageMaker Integration&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;Amazon SageMaker makes it easy to train machine learning models using managed Spot Instances. Managed Spot training can reduce the cost of training models by up to 90% compared to On-Demand Instances. SageMaker manages the Spot interruptions on your behalf.&lt;/p&gt;
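Managed Spot training is switched on through a few SageMaker Python SDK Estimator parameters (`use_spot_instances`, `max_run`, `max_wait`, and `checkpoint_s3_uri` are real parameters; the bucket path is a placeholder). Sketched as keyword arguments so it stands alone without the SDK installed:

```python
# Hedged sketch: keyword arguments you would pass to a sagemaker Estimator
# to enable managed Spot training; the S3 path is a placeholder.
spot_training_kwargs = {
    "use_spot_instances": True,
    "max_run": 3600,   # cap on actual training seconds
    "max_wait": 7200,  # must be at least max_run; includes waiting for capacity
    "checkpoint_s3_uri": "s3://my-bucket/checkpoints/",  # resume after interruption
}

# max_wait bounds training time plus time spent waiting for Spot capacity.
assert spot_training_kwargs["max_wait"] >= spot_training_kwargs["max_run"]
```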

&lt;h6&gt;
  
  
  &lt;strong&gt;Amazon GameLift Integration&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;Amazon GameLift is a game server hosting solution that deploys, operates, and scales cloud servers for multiplayer games. Support for Spot Instances in Amazon GameLift gives you the opportunity to significantly lower your hosting costs. When creating fleets of hosting resources, you can choose between On-Demand Instances and Spot Instances. While Spot Instances might be interrupted with two minutes of notification, Amazon GameLift's FleetIQ minimizes the chance of interruptions.&lt;/p&gt;

&lt;h6&gt;
  
  
  &lt;strong&gt;AWS Elastic Beanstalk Integration&lt;/strong&gt;
&lt;/h6&gt;

&lt;p&gt;AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You simply upload your code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. You can use Spot Instances in your Elastic Beanstalk environments to cost-optimize the underlying infrastructure of your web application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Original Document&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-leveraging-ec2-spot-instances/introduction.html?did=wp_card&amp;amp;trk=wp_card"&gt;Overview of Amazon EC2 Spot Instances&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcompute</category>
      <category>architecture</category>
      <category>design</category>
    </item>
  </channel>
</rss>
