<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Pablo Salas</title>
    <description>The latest articles on Forem by Pablo Salas (@pablosalas).</description>
    <link>https://forem.com/pablosalas</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/pablosalas"/>
    <language>en</language>
    <item>
      <title>5 Best Practices for Securing Amazon SageMaker.</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Thu, 29 Jan 2026 09:21:06 +0000</pubDate>
      <link>https://forem.com/aws-builders/5-best-practices-for-securing-amazon-sagemaker-408h</link>
      <guid>https://forem.com/aws-builders/5-best-practices-for-securing-amazon-sagemaker-408h</guid>
      <description>&lt;p&gt;What is SageMaker?&lt;/p&gt;

&lt;p&gt;Amazon’s SageMaker is a comprehensive, managed machine learning (ML) offering that allows you to plug your models directly into an easily configurable host environment. This removes the need to build servers of your own or spend hours writing bespoke specifications.&lt;/p&gt;

&lt;p&gt;What this means for developers is gaining the freedom to focus on thinking and writing code, with SageMaker working to remove many of the tedious underlying details. Not only does this enhance their productivity, but it tends to be the kind of thing developers look for in new tools. That said, it’s always good to do your due diligence and ensure you’re taking the necessary security precautions when relying on any external solution, like AWS.&lt;/p&gt;

&lt;p&gt;1 - Network security&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq2tqqrp9m2ztcyvhwn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq2tqqrp9m2ztcyvhwn1.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;br&gt;
By default, SageMaker communicates with other AWS services over the public internet.&lt;br&gt;
To enhance security: Deploy SageMaker AI resources in a Virtual Private Cloud (VPC). Configuring VPC endpoints allows private, secure connections to services like S3, KMS, and Amazon Elastic Container Registry (ECR), avoiding exposure to the public internet. Always review your configurations to ensure SageMaker endpoints are not publicly accessible unless explicitly required.&lt;br&gt;
Whenever possible, make sure you are using a private link for VPC endpoints and disabling internet access. If you allow the VPCs to be accessed from the internet you will be exposing yourself to additional security risks.&lt;/p&gt;

&lt;p&gt;2 - Authentication and Authorization&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78rgwk0qzplm5htxhxgu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78rgwk0qzplm5htxhxgu.png" alt=" " width="800" height="425"&gt;&lt;/a&gt;&lt;br&gt;
There may be instances where you don’t want certain workloads to be able to access your SageMaker resources. AWS’s Identity and Access Management (IAM) solutions can make this process manageable. Think of IAM as the “master key” that’s in your sole possession, and you can hand out individual “door keys” to as many other individuals or third-party technology solutions as required. This is all part of a concept known as “least privilege”, which refers to the belief that an entity should only have access to the bare minimum amount of information it needs to complete a task. Least privilege is a common and effective way of reducing your attack surface and your likelihood of data leaks. &lt;br&gt;
Another great security option is multi-factor authentication, sometimes called 2FA. This is quickly being embraced by everyone from nontechnical laymen up to enterprises operating at a massive scale – and that is because it is one of the simplest and most effective ways you can ensure your data is protected in SageMaker.&lt;/p&gt;

&lt;p&gt;3 - Data Protection&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiau45fvvpxemwofie96p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiau45fvvpxemwofie96p.png" alt=" " width="800" height="382"&gt;&lt;/a&gt;&lt;br&gt;
Encrypt datasets in Amazon S3 using KMS keys. While AWS-managed keys are convenient, CMKs provide more control, allowing you to define permissions, key rotation policies, and access auditing. &lt;br&gt;
Additionally, restrict access to your S3 buckets using IAM policies. Employ scoped-down policies to limit access to only the users, groups, or roles that require it. Pair these policies with S3 bucket policies that enforce secure transport using aws:SecureTransport to require SSL/TLS for all communications.&lt;/p&gt;

&lt;p&gt;4 - Monitoring&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lgjicl95qkoni34fvlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1lgjicl95qkoni34fvlc.png" alt=" " width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Monitoring play a critical role in detecting and responding to security incidents in SageMaker AI. AWS CloudTrail logs all API activity, making auditing actions like creating training jobs or deploying models easier. Amazon CloudWatch provides detailed metrics and logs for notebook instances, training jobs, and endpoints for real-time monitoring. &lt;br&gt;
To enhance security: Use these tools to get visibility into your SageMaker environment and respond quickly to anomalies. For example, you can set up CloudWatch alarms to notify you if a training job runs longer than expected or endpoint latency exceeds a certain threshold. &lt;/p&gt;

&lt;p&gt;5 - Compliance Certifications&lt;br&gt;
Every industry has regulatory standards, and cloud solutions are no different. For this reason, cloud compliance is a must-have, and AWS’s SageMaker supports over 143 security and compliance standards. When using SageMaker, it’s important to ensure you’re meeting all the necessary compliance standards, not just to play by the rules, but also to ensure you’re doing everything possible to keep your environment (and your healthcare data) secure.&lt;br&gt;
By using AWS’s Compliance offering, you will be able to reduce the hassle of trying to check all the myriad accreditation criteria on your own. However, it’s worth noting that when using AWS Compliance, the responsibility of compliance is jointly shared between AWS and the customer, so be sure to understand the shared responsibility model and uphold the compliance responsibilities that fall on you.&lt;/p&gt;

&lt;p&gt;Conclusion.&lt;br&gt;
Protecting your SageMaker environment requires a multifaceted approach that encompasses best practices in cloud security, AI model management and continuous monitoring. By leveraging AWS's tools and configurations, you can create robust, scalable, and secure ML solutions that meet the demands of even the most sensitive environments. &lt;/p&gt;

</description>
      <category>awscommunitybuilders</category>
      <category>security</category>
    </item>
    <item>
      <title>5 Best Practices for Securing Amazon Bedrock Agents from Prompt Injections.</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Fri, 02 Jan 2026 00:33:57 +0000</pubDate>
      <link>https://forem.com/aws-builders/5-best-practices-for-securing-amazon-bedrock-agents-from-prompt-injections-30eg</link>
      <guid>https://forem.com/aws-builders/5-best-practices-for-securing-amazon-bedrock-agents-from-prompt-injections-30eg</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Prompt Injection Attacks Are the Biggest Security Risk Facing Amazon Bedrock Agents&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Bedrock is making Generative and Agentic AI easier to adopt than ever. But as organizations deploy autonomous Bedrock Agents across business workflows, prompt injection attacks have emerged as a critical new threat—one that targets how agents think, reason, and act, not traditional code vulnerabilities.&lt;/p&gt;

&lt;p&gt;Unlike classic cyberattacks, prompt injections manipulate agent behavior through cleverly crafted inputs, often hidden inside emails, documents, or retrieved knowledge. The result? Unauthorized actions, data leakage, and broken trust in AI automation.&lt;/p&gt;

&lt;p&gt;To safely scale Bedrock Agents, security must be agent-aware and multi-layered. Here are 5 essential best practices every organization should implement to defend against prompt injections:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Require Human Approval for High-Risk Actions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Enable user confirmation for mutating or sensitive actions within Bedrock Agent action groups. Even if an agent is manipulated, a human approval step creates a fail-safe that prompt injections can’t bypass.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tephjvu4jy5l1w0sqgn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tephjvu4jy5l1w0sqgn.jpg" alt=" " width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enforce Amazon Bedrock Guardrails on All Inputs and Outputs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use Bedrock Guardrails to moderate both incoming content and agent responses. Properly tag all untrusted data—including RAG outputs and third-party API responses—as user input so hidden instructions are filtered before reaching the model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00aecnllddh1t59uzx4j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00aecnllddh1t59uzx4j.jpg" alt=" " width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Apply Secure Prompt Engineering with Clear Data Boundaries&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Design system prompts that explicitly instruct agents to ignore embedded commands in processed content. Use techniques like nonces and strict context separation to prevent agents from confusing data with instructions during multi-step workflows.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verify Agent Plans Before Execution&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Adopt custom orchestration strategies (such as plan-verify-execute) to ensure agents only take actions that align with their original intent. This prevents injected instructions from hijacking workflows mid-execution.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Monitor Inputs, Outputs, and Actions Continuously&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Implement comprehensive logging and anomaly detection for agent behavior. Monitor what goes into the agent, what comes out, and what actions it takes—so suspicious patterns are detected before real damage occurs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Securing Amazon Bedrock Agents isn’t about limiting innovation—it’s about protecting trust.
Organizations that combine agent-specific safeguards with proven security practices will be best positioned to scale AI automation safely.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr66030e0p832ietr5dwh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr66030e0p832ietr5dwh.jpg" alt=" " width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cloudelligent, an AWS Advanced Consulting Partner, helps enterprises secure Amazon Bedrock Agent deployments through guardrails configuration, secure prompt engineering, and real-time monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ready to fortify your Bedrock Agents against prompt injections?&lt;br&gt;
Book a free security assessment and protect your AI investments before attackers exploit them.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you very much for your time.&lt;/p&gt;

</description>
      <category>awscommunitybuilders</category>
      <category>security</category>
    </item>
    <item>
      <title>AWS Security Tools for Your Environment.</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Fri, 28 Mar 2025 18:48:38 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-security-tools-for-your-environment-5c8l</link>
      <guid>https://forem.com/aws-builders/aws-security-tools-for-your-environment-5c8l</guid>
      <description>&lt;p&gt;Amazon Web Services enables organizations to build and scale applications quickly and securely. However, continuously adding new tools and services introduces new security challenges. According to reports, 70 percent of enterprise IT leaders are concerned about how secure they are in the cloud and 61 percent of small- to medium-sized businesses (SMBs) believe their cloud data is at risk.&lt;/p&gt;

&lt;p&gt;AWS provides security tools designed to improve both account security and application and service security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top 6 AWS Account Security Tools&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;1. AWS Identity and Access Management (IAM):&lt;/em&gt;&lt;br&gt;
AWS IAM is essential for controlling access to your AWS resources. It enables you to create users and roles with permissions to specific resources in your AWS environment. Always assigning least-privilege permissions to these users and roles minimizes the impact of a breach where an attacker has gained access. AWS IAM also has multi-factor authentication and supports single sign-on (SSO) access to further secure and centralize user access.&lt;/p&gt;

&lt;p&gt;Use the IAM policy simulator to test and troubleshoot the extent of permissions you assign to your users and roles, and make sure you're following the principle of least privilege when configuring your IAM permissions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2. Amazon GuardDuty:&lt;/em&gt;&lt;br&gt;
Amazon GuardDuty uses machine learning to look for malicious activity in your AWS environments. It combines your CloudTrail event logs, VPC Flow Logs, S3 event logs, and DNS logs to continuously monitor and analyze all activity. GuardDuty identifies issues such as privilege escalation, exposed credentials, and communication with malicious IP addresses and domains. It can also detect when your EC2 instances are serving malware or mining bitcoin.&lt;/p&gt;

&lt;p&gt;In addition, GuardDuty can detect anomalies in your access patterns such as API calls in new regions. Pricing is based on the amount of data analyzed, so costs will increase linearly as your AWS environments grow.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;3. Amazon Macie:&lt;/em&gt;&lt;br&gt;
Amazon Macie discovers and protects your sensitive data stored in AWS S3 buckets. It first identifies sensitive data in your buckets, such as personally-identifiable information or personal health information, through discovery jobs. You can schedule these jobs to monitor new data added to your buckets. After it finds sensitive data, Macie continuously evaluates your buckets and alerts you when a bucket is unencrypted, is publicly accessible, or is shared with AWS accounts outside of your organization.&lt;/p&gt;

&lt;p&gt;Macie’s pricing scales with the amount of data it processes and the number of S3 buckets it monitors.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;4. AWS Config:&lt;/em&gt;&lt;br&gt;
AWS Config records and continuously evaluates your AWS resource configuration. This includes keeping a historical record of all changes to your resources, which is useful for compliance with legal requirements and your organization’s policies. AWS Config evaluates new and existing resources against rules that validate certain configurations. For example, if all EC2 volumes must be encrypted, AWS Config can detect non-encrypted volumes and send a notification. In addition, it can also execute remediation actions such as encrypting the volume or deleting it.&lt;/p&gt;

&lt;p&gt;Config is configured per region, so it’s essential to enable AWS Config in all regions to ensure all resources are recorded, including in regions where you don’t expect to create resources.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;5. AWS CloudTrail:&lt;/em&gt;&lt;br&gt;
AWS CloudTrail tracks all activity in your AWS environment. It records all actions a user executes in the AWS console and all API calls as events. You can view and search these events to identify unexpected or unusual requests in your AWS environment.&lt;/p&gt;

&lt;p&gt;CloudTrail is enabled by default in all AWS accounts since August 2017. If you also use AWS Organizations to manage multiple accounts, you can enable CloudTrail within the organization on all existing accounts.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;6. Security Hub:&lt;/em&gt;&lt;br&gt;
AWS Security Hub combines information from all the above services in a central, unified view. It collects data from all security services from multiple AWS accounts and regions, making it easier to get a complete view of your AWS security posture. In addition, Security Hub supports collecting data from third-party security products. Security Hub is essential to providing your security team with all the information they may need.&lt;/p&gt;

&lt;p&gt;A key feature of Security Hub is its support for industry recognized security standards including the CIS AWS Foundations Benchmark and Payment Card Industry Data Security Standard (PCI DSS).&lt;/p&gt;

&lt;p&gt;Combine Security Hub with AWS Organizations for the simplest way to get a comprehensive security overview of all your AWS accounts.&lt;/p&gt;

&lt;p&gt;Now that we have addressed the top account security tools, let’s focus on the top four AWS application sSecurity tTools you should consider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top 4 AWS Application Security Tools&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;1. Amazon Inspector:&lt;/em&gt;&lt;br&gt;
Amazon Inspector is a security assessment service for applications deployed on EC2. These assessments include network access, common vulnerabilities and exposures (CVEs), Center for Internet Security (CIS) benchmarks, and common best practices such as disabling root login for SSH and validating system directory permissions on your EC2 instances.&lt;/p&gt;

&lt;p&gt;Based on data provided on an agent application you can install on your EC2 VMs, Inspector generates a report with a detailed list of security findings prioritized by severity. Run Inspector as part of a gated check in your deployment pipeline to assess your applications’ security before deploying to production.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2. AWS Shield:&lt;/em&gt;&lt;br&gt;
AWS Shield is a fully-managed distributed denial-of-service (DDoS) protection service. Shield is enabled by default as a free standard service with protection against common DDoS attacks against your AWS environment. &lt;/p&gt;

&lt;p&gt;Shield Advanced goes a step further by integrating with AWS WAF to prevent a wide variety of malicious traffic from reaching your websites and applications. It can cover multiple accounts under an organization to ensure that all of your organization's internet-facing endpoints are protected from attackers. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;3. AWS Web Application Firewall:&lt;/em&gt;&lt;br&gt;
AWS Web Application Firewall (WAF) monitors and protects applications and APIs built on services such as CloudFront, API Gateway, and AppSync. You can block access to your endpoints based on different criteria such as the source IP address, the request’s origin country, values in headers and bodies, and more (i.e, you can enable rate limiting, only allowing a certain number of requests per IP&lt;/p&gt;

&lt;p&gt;The AWS Marketplace also includes a set of managed rules you can associate with your WAF, along with 3rd party managed rules from leading security vendors. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;4. AWS Secrets Manager:&lt;/em&gt;&lt;br&gt;
AWS Secrets Manager is a managed service where you can store and retrieve sensitive information such as database credentials, certificates, and tokens. Use fine-grained permissions to specify exact actions an entity can perform on the secrets, such as creating, updating, deleting, or retrieving secrets. &lt;/p&gt;

&lt;p&gt;Secrets Manager also supports automatic rotation for AWS services such as Amazon Relational Database Service (RDS). Through Lambda functions, secrets for other services can be automatically rotated as well. Never store your sensitive information in source control management systems, such as Git. Always use a tool like Secrets Manager.&lt;/p&gt;

&lt;p&gt;Thank you for your time.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
    </item>
    <item>
      <title>How to prevent DNS Spoofing in AWS.</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Tue, 11 Mar 2025 09:04:00 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-to-prevent-dns-spoofing-in-aws-1n6c</link>
      <guid>https://forem.com/aws-builders/how-to-prevent-dns-spoofing-in-aws-1n6c</guid>
      <description>&lt;p&gt;&lt;strong&gt;DNS spoofing, or DNS cache poisoning&lt;/strong&gt;, is a type of phishing and cyber attack where false Domain Name System (DNS) information is introduced into a DNS resolver's cache. This causes DNS queries to return an incorrect response, which commonly redirects users from a legitimate website to a malicious website designed to steal sensitive information or install malware.  &lt;/p&gt;

&lt;p&gt;There are a number of reasons why DNS spoofing is possible, but the principle problem is DNS was built in the 1980s when the Internet was much smaller and security was not a primary concern. As such, there is no in-built way for DNS resolvers to verify the validity of the data they store, and incorrect DNS information can remain until the time to live (TTL) expires or is manually updated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the Domain Name System (DNS)?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Domain Name System (DNS) is a hierarchical and decentralized naming system for computers, services, and other resources that connect to the Internet. In short, it assigns and maps human-readable domains (such as mac.com) to their underlying IP addresses that machines use to communicate. DNS also defines the DNS protocol, which is a specification of data structures and data exchanges used in the DNS.&lt;/p&gt;

&lt;p&gt;In practice, the DNS delegates this responsibility to the authoritative nameservers of each domain, creating a distributed, fault-tolerant system that isn't centrally hosted. &lt;/p&gt;

&lt;p&gt;The Internet as you know it depends on the DNS functioning correctly. Every web page, email sent, and picture received relies on DNS to translate its human-friendly domain name to an IP address used by servers, routers, and other networked devices. &lt;/p&gt;

&lt;p&gt;DNS records are also used to configure email security settings. See our full guide on email security for more information.&lt;br&gt;
What Do DNS Resolvers Do?&lt;/p&gt;

&lt;p&gt;When you type in a domain, such as example.com, your web browser will use your operating systems stub resolver to translate the site's domain name into an IP address. If the stub resolver doesn't know the translation, it will relay the request for DNS data to more complicated recursive resolvers, which are often operated by Internet service providers (ISPs), governments, and organizations such as Google, OpenDNS, and Quad9.&lt;/p&gt;

&lt;p&gt;Once the recursive resolver has your request, it then sends its own DNS requests to multiple authoritative name servers until it can find a definitive answer. &lt;/p&gt;

&lt;p&gt;Domain name servers are the Internet's equivalent to a phone book, maintaining a directory of domain names and translating them to IP addresses, just as a regular phone book translates names to phone numbers.&lt;/p&gt;

&lt;p&gt;Some organizations even run their own, but most will outsource this function to a third-party like a registrar, Internet service provider or web hosting company.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Does DNS Caching Work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To improve performance, the stub resolver and recursive resolvers will cache (remember) the domain name to IP address translation so that next time you ask to go the website it doesn't need to query the nameservers for a certain amount of time known as the time to live (TTL). When the TTL expires, the process is repeated.&lt;/p&gt;

&lt;p&gt;In general, this is a good thing as it saves time and speeds up the Internet. However, when successful DNS attacks change DNS settings and provide a DNS resolver with an incorrect IP address the traffic can go to the wrong place until the TTL expires or the cached information is manually corrected.&lt;/p&gt;

&lt;p&gt;When a resolver receives false information, it is known as a DNS cache poisoning attack and the resolver is said to have a poisoned DNS cache. &lt;/p&gt;

&lt;p&gt;In this type of attack, the resolver may return an incorrect IP address diverting traffic from the real website to a fake website. &lt;br&gt;
How Do Attackers Poison DNS Caches?&lt;/p&gt;

&lt;p&gt;DNS poisoning or DNS spoofing attacks work by impersonating DNS nameservers, making a request to a DNS resolver, and then forging the reply when the DNS resolver queries a nameserver. &lt;/p&gt;

&lt;p&gt;This is possible because DNS uses UDP, an unencrypted protocol, which makes it easy to intercept traffic with spoofing and DNS servers do not validate the IP addresses that they are directing traffic to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Does DNS Poisoning Work?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Man-in-the-Middle (MTM) attacks&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With man-in-the-middle (MITM) duping, the attacker gets between the web browser you are using and the DNS server. They then use a tool to alter the information in the cache on your device, as well as the information on the DNS server. You then get redirected to a malicious site.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;DNS server hijack&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When hijacking a DNS server, the attacker makes adjustments to the server, causing it to direct users to a malicious site. The fake DNS information causes every user who enters that website’s address to get sent to the fraudulent site.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;DNS cache poisoning via spam&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When an attacker uses spam for DNS spoofing attacks, they put the code used for the cache poisoning inside an email. The email will often try to scare users into clicking on the link that ends up launching the DNS poisoning attack.&lt;br&gt;
What Are The Risks Of DNS Poisoning?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Data theft&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An attacker can have the user redirected to a phishing website that can collect the user’s private information. When the user enters it, it gets sent to the attacker, who can then use it or sell it to another criminal.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Malware infection&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A cyber criminal may have the user sent to a website that infects their computer with malware. This can be done through drive-by downloads, which automatically put the malware on the user’s system or through a malicious link on the site that installs malware, such as a Trojan virus or a botnet.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Halted security updates&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An attacker can spoof an internet security provider’s site. This way, when the computer attempts to visit the site to update its security, it will be sent to the wrong one. As a result, it does not get the security update it needs, leaving it exposed to attacks.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Censorship&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Censorship can be executed via manipulation of the DNS as well. For instance, in China, the government changes the DNS to make sure only approved websites can be viewed within China.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How To Prevent DNS Poisoning&lt;/strong&gt;&lt;br&gt;
For Website Owners And DNS Service Providers&lt;/p&gt;

&lt;p&gt;Website owners and DNS service providers have the responsibility of defending users from DNS attacks. There are several ways to protect your users.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;DNS spoofing detection tools&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These tools scan the DNS data being sent to make sure it is accurate before allowing it to go to the user.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Domain name system security extensions&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A Domain Name System Security Extension (DNSSEC) appends a label to a DNS that verifies that it is authentic.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;End-to-End encryption&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With end-to-end encryption, the data that gets sent out is encrypted, so cyber criminals cannot access the DNS data to copy it and redirect users to the wrong sites.&lt;br&gt;
For Endpoint Users&lt;/p&gt;

&lt;p&gt;Users can be an easy target for DNS spoofing. Here are ways to prevent becoming a victim.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Never click a link you do not recognize&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It is better to manually enter a Uniform Resource Locator (URL) into your web browser than click on a link that may look suspicious. Clicking the wrong link can lead to a DNS attack.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Regularly scan your computer for malware&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Spoofed websites can be used by attackers to deliver malware to your computer. Regularly scanning your computer for infections can get rid of malware you downloaded accidentally as a result of DNS poisoning.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Flush your DNS cache to solve poisoning&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Flushing your DNS cache gets rid of false information. All major operating systems come with cache-flushing functions. Flushing the DNS cache gives your device a fresh start, ensuring that any DNS information that gets processed will correlate with the correct site.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Use a Virtual Private Network (VPN)&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With a virtual private network (VPN), all data going to and from your computer is encrypted. You can connect to a private DNS server that only connects using encryption. Cyber criminals do not have the encryption code so they cannot decipher the DNS data that gets sent back and forth.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqwzao8nn3wb98b26f12.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqwzao8nn3wb98b26f12.jpg" alt=" " width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Firewall Manager.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for a variety of protections, including AWS WAF, AWS Shield Advanced, Amazon VPC security groups and network ACLs, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall. With Firewall Manager, you set up your protections just once and the service automatically applies them across your accounts and resources, even as you add new accounts and resources.&lt;/p&gt;

&lt;p&gt;Firewall Manager provides these benefits:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Helps to protect resources across accounts

Helps to protect all resources of a particular type, such as all Amazon CloudFront distributions

Helps to protect all resources with specific tags

Automatically adds protection to resources that are added to your account

Allows you to subscribe all member accounts in an AWS Organizations organization to AWS Shield Advanced, and automatically subscribes new in-scope accounts that join the organization

Allows you to apply security group rules to all member accounts or specific subsets of accounts in an AWS Organizations organization, and automatically applies the rules to new in-scope accounts that join the organization

Lets you use your own rules, or purchase managed rules from AWS Marketplace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Firewall Manager is particularly useful when you want to protect your entire organization rather than a small number of specific accounts and resources, or if you frequently add new resources that you want to protect. Firewall Manager also provides centralized monitoring of DDoS attacks across your organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nzlofr4v1j5ypsp0lxs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nzlofr4v1j5ypsp0lxs.jpg" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you very much.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
    </item>
    <item>
      <title>AWS GitHub &amp; S3 Backup</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Sun, 10 Mar 2024 17:05:51 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-github-s3-backup-2opg</link>
      <guid>https://forem.com/aws-builders/aws-github-s3-backup-2opg</guid>
      <description>&lt;p&gt;Introduction:&lt;br&gt;
Data backup and disaster recovery services are critical aspects of protecting a business’s most asset — its data.&lt;br&gt;
Losing this data can result in severe consequences, including financial loss, reputational damage, and operational disruptions. Therefore, it’s essential to understand the importance of data backup and disaster recovery planning and implement effective strategies to safeguard your business’s assets.&lt;/p&gt;

&lt;p&gt;AWS S3 BACKUPS STORAGE:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqthmgn48zro8t4xn2wbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqthmgn48zro8t4xn2wbj.png" alt=" " width="312" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Always consider both folders:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fska0joixaf3bq12h4yo0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fska0joixaf3bq12h4yo0.png" alt=" " width="291" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Backup procedures:&lt;br&gt;
Github:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mz0xuddpt51cgmy5w1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mz0xuddpt51cgmy5w1m.png" alt=" " width="540" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker file:&lt;/p&gt;

&lt;p&gt;FROM debian:12-slim&lt;/p&gt;

&lt;p&gt;ENV DEBIAN_FRONTEND=noninteractive&lt;/p&gt;

&lt;p&gt;RUN apt-get update &amp;amp;&amp;amp; \&lt;br&gt;
    apt-get install -y \&lt;br&gt;
    awscli&lt;/p&gt;

&lt;p&gt;COPY script.sh /usr/local/bin/script.sh&lt;br&gt;
RUN chmod +x /usr/local/bin/script.sh&lt;/p&gt;

&lt;p&gt;CMD ["/usr/local/bin/script.sh"]&lt;/p&gt;

&lt;p&gt;Script:&lt;/p&gt;

&lt;h1&gt;
  
  
  !/bin/bash
&lt;/h1&gt;

&lt;h1&gt;
  
  
  Directory paths
&lt;/h1&gt;

&lt;p&gt;SOURCE_DIR="/app/storage"&lt;br&gt;
DEST_DIR="/app/backup"&lt;br&gt;
TIMESTAMP=$(date "+%Y-%m-%d")&lt;/p&gt;

&lt;h1&gt;
  
  
  Ensure destination directory exists
&lt;/h1&gt;

&lt;p&gt;mkdir -p "$DEST_DIR"&lt;/p&gt;

&lt;h1&gt;
  
  
  Count total directories, excluding lost+found
&lt;/h1&gt;

&lt;p&gt;TOTAL_DIRS=$(find "$SOURCE_DIR" -mindepth 1 -maxdepth 1 -type d ! -name 'lost+found' | wc -l)&lt;/p&gt;

&lt;h1&gt;
  
  
  Counter for processed directories
&lt;/h1&gt;

&lt;p&gt;PROCESSED_DIRS=0&lt;/p&gt;

&lt;p&gt;RETENTION_PERIOD=13&lt;/p&gt;

&lt;h1&gt;
  
  
  List directories in the source directory
&lt;/h1&gt;

&lt;p&gt;for dir in "$SOURCE_DIR"/*; do&lt;br&gt;
    if [ -d "$dir" ]; then&lt;br&gt;
        # Get directory name&lt;br&gt;
        dir_name=$(basename "$dir")&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    # Skip the lost+found directory
    if [ "$dir_name" = "lost+found" ]; then
        continue
    fi

    echo "Processing directory: $dir"

    # Copy directory to destination
    cp -r "$dir" "$DEST_DIR/$dir_name"

    # Compress the copied directory
    tar -czf "$DEST_DIR/$dir_name.tar.gz" -C "$DEST_DIR" "$dir_name"

    # Upload to AWS S3
    aws s3 cp /app/backup/${dir_name}.tar.gz s3://${BUCKET}/files/${TIMESTAMP}/${NAMESPACE}/${dir_name}/${dir_name}.tar.gz

    # Clean up
    rm -rf "$DEST_DIR/$dir_name" "$DEST_DIR/$dir_name.tar.gz"

    # Increment processed directories counter
    PROCESSED_DIRS=$((PROCESSED_DIRS+1))

    # Calculate and display progress
    PROGRESS=$(( (PROCESSED_DIRS * 100) / TOTAL_DIRS))
    echo "Progress: $PROGRESS% ($PROCESSED_DIRS/$TOTAL_DIRS directories processed)"

    # Deleting folders older than retention period
    RETENTION_DATE=$(date -d "${TIMESTAMP} -${RETENTION_PERIOD} days" "+%Y-%m-%d")

    # List all date folders
    FOLDER_LIST=$(aws s3 ls s3://${BUCKET}/files/ | awk '$0 ~ /PRE/ {print $2}' | grep -E '^[0-9]{4}-[0-9]{2}-[0-9]{2}/' | sed 's/\/$//')

    # Loop through each folder and delete if older than retention period
    for folder in $FOLDER_LIST; do
        FOLDER_TIMESTAMP=$(date -d "${folder}" "+%s")
        RETENTION_TIMESTAMP=$(date -d "${RETENTION_DATE}" "+%s")
        if [ $FOLDER_TIMESTAMP -lt $RETENTION_TIMESTAMP ]; then
            aws s3 rm s3://${BUCKET}/files/${folder}/ --recursive --quiet
        fi
    done 
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;done&lt;/p&gt;

&lt;p&gt;The backup retention period is 14 days, you can restore back any day you want.&lt;/p&gt;

&lt;p&gt;YAML file:&lt;/p&gt;

&lt;p&gt;apiVersion: batch/v1&lt;br&gt;
kind: CronJob&lt;br&gt;
metadata:&lt;br&gt;
  name: backup-files&lt;br&gt;
  namespaces: backup&lt;br&gt;
spec:&lt;br&gt;
  schedule: "0 3 * * *"&lt;br&gt;
  jobTemplate: &lt;br&gt;
    spec:&lt;br&gt;
      template:&lt;br&gt;
        spec:&lt;br&gt;
          containers: &lt;br&gt;
          - name: backup-container&lt;br&gt;
            image: ghcr.io/backup/backup-files:latest&lt;br&gt;
            env:&lt;br&gt;
            - name: NAMESPACE &lt;br&gt;
              value: "backup"&lt;br&gt;
            - name: BUCKET &lt;br&gt;
              value: "nombre del bucket"&lt;br&gt;
            - name: AWS_ACCESS_KEY_ID&lt;br&gt;
              valueFrom:&lt;br&gt;
                secretKeyRef:&lt;br&gt;
                  name: aws-secret&lt;br&gt;
                  key: AWS_ACCESS_KEY_ID&lt;br&gt;
            - name: AWS_SECRET_ACCESS_KEY&lt;br&gt;
              valueFrom:&lt;br&gt;
                secretKeyRef:&lt;br&gt;
                  name: aws-secret&lt;br&gt;
                  key: AWS_SECRET_ACCESS_KEY&lt;br&gt;
            volumeMounts:&lt;br&gt;
            - name: my-pvc&lt;br&gt;
              mounthPath: /app/storage&lt;br&gt;
        restartPolicy: OnFailure&lt;br&gt;
        imagePullSecrets:&lt;br&gt;
          - name: registry-credentials-back&lt;br&gt;
        volumes:&lt;br&gt;
        - name: my-pvc&lt;br&gt;
          persistentVolumeClaim:&lt;br&gt;
            claimName: backend-upload-storage-pvc&lt;br&gt;
      backoffLimit: 4&lt;/p&gt;

&lt;p&gt;Postgres YAML file:&lt;/p&gt;

&lt;p&gt;apiVersion: batch/v1&lt;br&gt;
kind: CronJob&lt;br&gt;
metadata:&lt;br&gt;
  name: backup-files&lt;br&gt;
  namespaces: backup&lt;br&gt;
spec:&lt;br&gt;
  schedule: "0 3 * * *"&lt;br&gt;
  jobTemplate: &lt;br&gt;
    spec:&lt;br&gt;
      template:&lt;br&gt;
        spec:&lt;br&gt;
          containers: &lt;br&gt;
          - name: backup-container&lt;br&gt;
            image: ghcr.io/backup/backup-files:latest&lt;br&gt;
            env:&lt;br&gt;
            - name: NAMESPACE &lt;br&gt;
              value: "backup"&lt;br&gt;
            - name: BUCKET &lt;br&gt;
              valueFrom:&lt;br&gt;
                secretKeyRef:&lt;br&gt;
                  name: postgres-pguser-backup&lt;br&gt;
                  key: host&lt;br&gt;
            - name: PG_PORT&lt;br&gt;
              valueFrom:&lt;br&gt;
                secretKeyRef:&lt;br&gt;
                  name: postgres-pguser-backup&lt;br&gt;
                  key: host&lt;br&gt;
            - name: PG_USER&lt;br&gt;
              valueFrom:&lt;br&gt;
                secretKeyRef:&lt;br&gt;
                  name: postgres-pguser-backup&lt;br&gt;
                  key: user&lt;br&gt;
            - name: PG_PASS&lt;br&gt;
              valueFrom:&lt;br&gt;
                secretKeyRef:&lt;br&gt;
                  name: postgres-pguser-backup&lt;br&gt;
                  key: password&lt;br&gt;&lt;br&gt;
            - name: BUCKET&lt;br&gt;
              value: "nombre del bucket"&lt;br&gt;
            - name: AWS_ACCESS_KEY_ID&lt;br&gt;
              valueFrom:&lt;br&gt;
                secretKeyRef:&lt;br&gt;
                  name: aws-secret&lt;br&gt;
                  key: AWS_ACCESS_KEY_ID&lt;br&gt;
            - name: AWS_SECRET_ACCESS_KEY&lt;br&gt;
              valueFrom:&lt;br&gt;
                secretKeyRef:&lt;br&gt;
                  name: aws-secret&lt;br&gt;
                  key: AWS_SECRET_ACCESS_KEY&lt;br&gt;
        restartPolicy: OnFailure&lt;br&gt;
        imagePullSecrets:&lt;br&gt;
          - name: registry-credentials-back&lt;br&gt;
    backoffLimit: 4&lt;/p&gt;

&lt;p&gt;Thank you for your time.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>AWSecurity Best Practices</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Tue, 16 Jan 2024 12:55:50 +0000</pubDate>
      <link>https://forem.com/aws-builders/awsecurity-best-practices-3l0g</link>
      <guid>https://forem.com/aws-builders/awsecurity-best-practices-3l0g</guid>
      <description>&lt;p&gt;&lt;strong&gt;AWS Cloud Security Best Practices&lt;/strong&gt;&lt;br&gt;
One quick thing before we get started: As you can tell from their name, AWS loves acronyms. This can create some confusion on what various AWS services do, so here’s a quick breakdown:&lt;br&gt;
• S3 (Simple Storage Service) = Object Storage&lt;br&gt;
• EC2 (Elastic Compute Cloud) instance = Virtual machine/server&lt;br&gt;
• AMI (Amazon Machine Image) = A machine image that contains an operating system and sometimes additional software that’s run on an EC2 instance.&lt;br&gt;
• VPC (Virtual Private Cloud) = Virtual network that closely resembles the network of a traditional data center. All modern EC2 instances run inside a VPC.&lt;br&gt;
We discuss additional AWS services in more depth below, but we wanted to make sure you were familiar with the basics. Now then, onto best practices:&lt;br&gt;
&lt;strong&gt;1. Plan ahead&lt;/strong&gt;&lt;br&gt;
Ideally, you should start thinking through how you will secure your AWS environment before you begin adopting it. If that ship has already sailed, no worries—it just might require a bit more effort to implement some best practices.&lt;br&gt;
&lt;strong&gt;2. Embrace the cloud&lt;/strong&gt;&lt;br&gt;
When approaching a cloud environment for the first time, some security teams try to make the cloud mimic the on-premises environments they are used to protecting by doing things like prohibiting developers from making infrastructure changes. In almost all cases, the end result is the team gets relieved of responsibility for cloud security or engineers find ways to bypass the restrictions (see best practice No. 9 for why this is bad).&lt;/p&gt;

&lt;p&gt;Security teams need to recognize that potentially risky aspects of the cloud, such as the rapid rate of change and the ease of deployment, are also some of the biggest benefits to using cloud infrastructure. To be successful, security teams must endeavor to be seen as enablers of the cloud. They must find ways to keep cloud infrastructure secure without overly stifling those aspects that make the cloud beneficial to the organization. This starts by adopting an open mind and recognizing that successfully managing risk in a cloud environment will require new tactics and processes. &lt;br&gt;
&lt;strong&gt;3. Define a security baseline for your AWS environment&lt;/strong&gt;&lt;br&gt;
Your security and DevOps teams should work together to define what your AWS environment should look like from a security perspective. The baseline should clearly describe everything from how assets must be configured to an incident response plan. The teams should consider using resources like the AWS Well-Architected Framework and the CIS Benchmarks for security on AWS as starting points. They might also want to ask for assistance from an AWS Solutions Architect, who is a technical expert skilled in helping customers construct their AWS environment.&lt;/p&gt;

&lt;p&gt;Make sure your baseline is applied to your production environment as well as any test and pre-production environments. Reevaluate your baseline at least every six months to incorporate things like new threats and changes in your environment.&lt;br&gt;
&lt;strong&gt;4. Enforce your baseline&lt;/strong&gt;&lt;br&gt;
Once your security and DevOps teams have defined what your AWS security baseline looks like, you need to enforce it. Make it easy for developers to adhere to your baseline by providing them with infrastructure templates that have already been properly configured. You can do this using AWS CloudFormation or an infrastructure as code vendor like Terraform. &lt;br&gt;
You also need a monitoring solution in place to detect when something is out of compliance with the baseline (either because it was deployed with a misconfiguration or because a change was made after deployment). To do this, one option is to use AWS Security Hub, but several third-party vulnerability management solutions include built-in monitoring for cloud misconfigurations. &lt;br&gt;
There are two benefits to using a VM solution with built-in misconfiguration detection. First, it consolidates two types of risk monitoring (asset vulnerabilities and cloud infrastructure misconfigurations) into one tool. Second, with most of the vulnerability management solutions all the misconfiguration rules and detections are managed for you by the vendor, whereas with AWS Security Hub you need to set up and manage the rules yourself. Learn more about InsightVM for vulnerability assessment + cloud configuration.&lt;br&gt;
Another option for enforcing your security baseline is a Cloud Security Posture Management (CSPM) solution. A quality CSPM will have the ability to monitor accounts from multiple cloud providers for misconfigurations. This is a big deal, as it allows your organization to set one security baseline for all your cloud providers, then enforce it using a single tool. Beyond being able to monitor cloud accounts for misconfigurations, you should look for a CSPM with the ability to automatically fix misconfigurations as soon as they are detected. This will greatly reduce the burden on your security team and ensure that nothing slips through the cracks. &lt;br&gt;
Other capabilities to look for in a CSPM include the ability to flag issues in infrastructure as code before anything is deployed, IAM governance (see the next section for more on IAM), and compliance auditing. CSPMs tend to be a bit pricey, but for organizations that use multiple cloud providers or who have a large number of accounts with a single provider, a CSPM is the way to turn the chaos of managing all those accounts into order.&lt;br&gt;
&lt;strong&gt;5. Limit access&lt;/strong&gt;&lt;br&gt;
Few things are more important to creating a secure AWS environment than restricting access to just those users and systems that need it. This is accomplished using AWS Identity Access Management (IAM). IAM consists of the following components:&lt;br&gt;
• Users: These represent individual people or systems that need to interact with AWS. A user consists of a name and credentials.&lt;br&gt;
• Credentials: The ways that a user can access AWS. Credentials include console passwords, access keys, SSH keys, and server certificates.&lt;br&gt;
• Groups: A collection of users. With groups, you can manage permissions for all users in the group at once, rather than having to change the permissions for each user individually.&lt;br&gt;
• Roles: These are similar to users, but don’t have long-term credentials like a password or access key. A role can be assumed by a user or service. When a role is assumed, it provides temporary credentials for the session. Only users, roles, accounts, and services that you specify can assume a role. Roles let you do things like give a user access to multiple AWS accounts or give an application access to AWS services without having to store long-term credentials inside the app.&lt;br&gt;
• Policies: These are JSON docs that give permission to perform an action or actions in specific AWS services. In order to give a user, group, or role the ability to do something in AWS, you have to attach a policy. AWS provides several hundred predefined “AWS Managed Policies” to choose from, or you can build your own.&lt;br&gt;
Now that you have a basic understanding of the components that make up IAM, let’s talk about best practices. AWS has a list of IAM best practices that you should read through. Similar practices are mentioned in the CIS Benchmarks for AWS. All of these best practices are important, but in the interest of brevity, we’ll call out a few of the most vital (and commonly broken) guidelines:&lt;br&gt;
• Don’t use the root user: The root user is the user that is associated with the email address used to create an AWS account. The root user can do things even a full admin cannot. If a malicious actor gets their hands on root user credentials, massive damage can be done. Make sure you use a very complex password on your root user, enable MFA (ideally using hardware MFA), and lock away the MFA device in a safe. Yes, literally lock the MFA device away. You should also delete any access keys that have been created for the root user. Only use the root user in those very rare circumstances where it’s required.&lt;br&gt;
• Manage users through federated SSO: It’s a security best practice to use federated SSO to manage employee access to resources, and that includes AWS. You should take advantage of IAM’s Identity Provider functionality so that you can centrally manage individual access to AWS through your existing SSO solution.&lt;br&gt;
• Don’t attach policies to individual users: Instead, apply them to groups and roles. This makes it far easier to maintain visibility into who can access what and minimizes the chances that an individual passes under the radar with access to more than what they need.&lt;br&gt;
• Require a strong password: You should configure IAM to require a strong password. CIS recommends you set IAM to require a password at least 14 characters with at least one uppercase and lowercase character, one number, and one symbol. CIS also recommends that passwords expire at least every 90 days and that previous passwords cannot be reused.&lt;br&gt;
• Require MFA: Along with a strong password, you should ensure that all users have enabled MFA.&lt;br&gt;
• Delete unused credentials: IAM can generate a credentials report that shows when credentials for each user were last used. You should regularly go into this report and disable or delete credentials that haven’t been used in the past 90 days.&lt;br&gt;
• Regularly rotate access keys: In many cases, you can (and should) use IAM roles instead of access keys for programmatic access to AWS. In those situations where you have to use access keys, you should make sure they are rotated at least every 90 days. The IAM credentials report shows when access keys were last rotated. Use this report to ensure any overdue access keys are changed.&lt;br&gt;
&lt;strong&gt;6. Watch for vulnerabilities&lt;/strong&gt;&lt;br&gt;
A lot of people don’t realize that even in the cloud, unpatched vulnerabilities still present a threat. To detect vulnerabilities in EC2 instances, you can use AWS Inspector or a third-party vulnerability management solution. Using a vulnerability management solution allows you to better prioritize your work, improve your reporting capabilities, and facilitate communication with infrastructure owners and help everyone monitor progress towards reducing risk. In addition, security teams that are dealing with a hybrid or multi-cloud environment often prefer to use a third-party solution because it allows them to oversee vulnerability and risk management for all their environments in one place (more on that in best practice item No. 9).&lt;br&gt;
Although vulnerability management should be familiar to most cybersecurity professionals, there are a few unique aspects of VM in a cloud environment like AWS that you should be aware of. As we mentioned earlier, a cloud environment can quickly change. Assets appear and disappear minute-by-minute. In such a dynamic world, weekly or even daily scans aren’t enough to get an accurate understanding of vulnerabilities and your risk exposure. It’s important to have some way to make sure you have a complete picture of which EC2 instances exist, as well as a way to continuously monitor the instances throughout their lifetime. To ensure you have a complete picture of your EC2 instances, invest in a vulnerability management solution with dynamic asset discovery, which automatically detects new instances as they are deployed. A similar capability can be achieved with AWS Inspector by using CloudWatch Events, although setup is a little more manual. &lt;br&gt;
When vulnerabilities are detected in an EC2 instance, they can be addressed in several ways. One option is to use the Patch Manager in AWS Systems Manager. This approach is the most similar to how you traditionally manage vulnerabilities in an on-premises network. However, many cloud environments are designed to be immutable. In other words, assets like EC2 instances should not be changed once they’re deployed. Instead, when a change needs to be made, the existing asset is terminated and replaced with a new one that incorporates the change. &lt;br&gt;
So, in immutable environments, you don’t deploy patches, but rather deploy new instances that include the patches. One way to do this is to create and maintain a base AMI that gets regularly updated to run the most recent version of whatever operating systems you’re using. With this approach, when a vulnerability is detected, you can create a new baseline AMI that incorporates patches for the vulnerability. This will eliminate the vulnerability from any future EC2 instance you deploy, but you’ll need to make sure you also redeploy any currently running EC2 instances. &lt;br&gt;
Another option is to use an infrastructure automation tool like Chef or Puppet to update and redeploy AMIs. This approach makes sense if you are already using one of these tools to maintain your EC2 instances.&lt;br&gt;
&lt;strong&gt;7. Collect and protect logs&lt;/strong&gt;&lt;br&gt;
In AWS, most logs are captured using CloudTrail. This service automatically captures and stores AWS API activity as what AWS calls Management Events in your AWS account for no charge (although you will need to pay the cost of storage). CloudTrail captures tens of thousands of events, including critical security information like logins and configuration changes to AWS services. For a fee, you can also create “trails” in CloudTrail, which allows you to do things like capture additional activity and send your logs to S3 for long-term storage and/or export. Here are some best practices for setting up CloudTrail in your AWS account:&lt;br&gt;
• Create a trail for all regions: Although it costs money, you should create a trail in CloudTrail so you can send all your logs to an S3 bucket. This will allow you to store your logs indefinitely (CIS recommends keeping them for at least 365 days). When creating your trail, you should make sure the option Apply trail to all regions is enabled. This will allow your trail to show you activity from every AWS region. If you don’t enable this option, your trail will only collect logs for activity occurring in whatever AWS region you are using when you create the trail. It’s important to capture data from all regions so that you have visibility in case something suspicious happens in a region you don’t normally use. If you use multiple AWS accounts, you also might want to use one bucket to store logs for all your accounts.&lt;br&gt;
• Protect the S3 bucket holding your logs: Since your logs are a key part of detecting and remediating an incident, the S3 bucket where you store your logs is a prime target for an attacker. Therefore, you should make sure you do everything possible to protect it. Make sure the bucket isn’t publicly accessible and restrict access to only those users who absolutely need it. Log all access to the bucket and make sure this S3 log bucket is only accessible by users who can’t access the CloudTrail log bucket. You should also consider requiring MFA in order to delete your log buckets.&lt;br&gt;
• Encrypt log files with SSE-KMS: Although CloudTrail logs are encrypted by default, you can add an additional level of defense enabling server-side encryption with AWS KMS. With this option, a user will not only need permission to access the S3 bucket holding your log files, but they will also need access to a customer master key (CMK) to decrypt said files. It’s a great way to ensure only a select few can access your logs. When you create your CMK, make sure you also enable automatic key rotation.&lt;br&gt;
• Use log validation: CloudTrail can automatically create validation files that are used to detect if a CloudTrail log has been tampered with. Since manipulating log files is a great way for an attacker to cover their tracks, you should make sure log validation is enabled for your trail.&lt;br&gt;
Although most logs are collected in CloudTrail, there are a few other logs you should make sure you capture. VPC Flow Logs show data on the IP traffic going to and from the network interfaces in your virtual private cloud (VPC). They can help you identify intra-VPC port scanning, network traffic anomalies, and known malicious IP addresses. If you use AWS Route 53 as your DNS, you should also log DNS queries. You can use these logs to match against threat intelligence and identify known-bad or quickly spreading threats. Keep in mind that you will need to use AWS CloudWatch to view your DNS logs.&lt;br&gt;
&lt;strong&gt;8. Monitor, detect, and react&lt;/strong&gt;&lt;br&gt;
Now that you know how to use logs to obtain visibility into the activity in your AWS environment, the next question is how to leverage this visibility. One (very manual) option is to use AWS CloudWatch alarms. With this approach, you build alarms for various suspicious actions such as unauthorized API calls, VPC changes, etc. A list of recommended alarms is included in the CIS Benchmarks for AWS. The challenge with this approach is that each alarm must be manually built and maintained.&lt;br&gt;
Another option is to use AWS GuardDuty. GuardDuty uses CloudTrail, VPC Flow Logs, and DNS logs to detect and alert on suspicious behavior. The nice thing about GuardDuty is that it is powered by an AWS-managed list of findings (aka potential security issues), as well as machine learning. That means no manual setup or maintenance is needed to receive alerts about suspicious activity. However, detecting suspicious activity is just the first step in responding to an incident. &lt;br&gt;
Your security team will need to pull relevant log files and other data to verify that an incident has occurred, then determine the best way to respond and recover. If the team needs to search multiple different data sources to find this information, it can dramatically lengthen the time needed to conduct an investigation. This challenge is exacerbated in a hybrid or multi-cloud environment.&lt;br&gt;
Having all relevant data automatically centralized during an investigation is just one of the reasons why many security teams decide to use a modern SIEM and incident detection tool. A good SIEM solution will have a CloudTrail integration and let you store all logs from AWS alongside logs from on-prem networks and other cloud providers like Azure and Google Cloud Platform (GCP). This ability to centralize all data can be massively helpful in speeding up investigations, especially when you need to track a malicious actor who has moved across your environments. &lt;br&gt;
A good SIEM will also provide a host of other features to enhance your security team’s ability to detect, confirm, and respond to an attack. For example, the most advanced SIEMs use multiple techniques to detect suspicious behavior. Other features to look for include the ability to create custom alerts, deception technology (things like pre-built honeypots and honey users that will trigger alerts when accessed) and File Integrity Monitoring (FIM). &lt;br&gt;
All these capabilities provide additional layers of detection. You should also look for a SIEM that provides visualization capabilities like customizable dashboards and investigation timelines, which make your centralized data more usable. In addition, make sure any SIEM you’re considering has built-in automation, as this can dramatically reduce reaction times when an incident occurs. Finally, many teams like to use both AWS and third-party tools to secure their AWS environment, in which case it’s important to find a SIEM that includes a GuardDuty integration.&lt;br&gt;
&lt;strong&gt;9. Unify AWS with on-premises and other cloud security&lt;/strong&gt;&lt;br&gt;
One very common mistake is to approach AWS security in a silo, separate from efforts to secure existing IT infrastructure. This creates hole that can be exploited by a malicious actor. For example, we’ve seen situations where an organization’s on-premises and AWS security were designed to address different potential threats. The resulting gaps left both networks vulnerable.&lt;br&gt;
Having a single team responsible for securing all IT infrastructure ensures that no assumptions are made about what the “other” security team is or isn’t doing. Instead, there is one team that knows it is accountable for all aspects of your organization’s cybersecurity posture. Unifying your security efforts under one team can also be extremely important during an incident. The team has immediate access to far more data. It’s also much easier to maintain clarity around each team member’s area of responsibility.&lt;br&gt;
Not only is it important to unify responsibility for security under one team, but it’s important to unify all your security data in one set of tools. The vast majority of organizations are not just using AWS. At a minimum they have on-premises networks and employee endpoints to secure. In many cases, organizations also utilize multiple cloud providers. &lt;br&gt;
If you use different security solutions for each environment, it increases the likelihood of there being blind spots. In addition, the more tools your security team uses, the higher their workload, as they are forced to constantly bounce between tools in order to manually piece together a complete picture of the organization’s current cybersecurity posture.&lt;br&gt;
&lt;strong&gt;10. Automate&lt;/strong&gt;&lt;br&gt;
With so many best practices for securing AWS, it’s not reasonable to expect everyone to remember them all. Even if they did, mistakes happen. To ensure your AWS environment continuously adheres to your security baseline, you should turn to automation. &lt;br&gt;
For example, you can use a combination of CloudFormation and Lambda or a tool like Terraform, or one of the more advanced CSPMs to automate deployment of new AWS infrastructure and ensure that everything complies with the baseline you’ve established. You can also have these tools automatically flag or terminate infrastructure that is not in compliance.&lt;br&gt;
Another benefit of using automation is the capacity it frees up within your security team. The ongoing shortage of security professionals means teams are overtaxed. That issue is only exacerbated when an organization starts migrating to the cloud, which dramatically expands the infrastructure footprint the team has to secure. &lt;/p&gt;

&lt;p&gt;Keep learning all the time!!!&lt;br&gt;
Thanks for your time.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is a Whaling Attack?</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Tue, 02 Aug 2022 10:01:25 +0000</pubDate>
      <link>https://forem.com/aws-builders/what-is-a-whaling-attack-544l</link>
      <guid>https://forem.com/aws-builders/what-is-a-whaling-attack-544l</guid>
      <description>&lt;p&gt;A whaling attack, also known as whaling phishing or a whaling phishing attack, is a specific type of phishing attack that targets high-profile employees, such as the chief executive officer or chief financial officer, in order to steal sensitive information from a company. In many whaling phishing attacks, the attacker's goal is to manipulate the victim into authorizing high-value wire transfers to the attacker.&lt;br&gt;
The term whaling stems from the size of the attacks, and the whales are thought to be picked based on their authority within the company.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How whaling attacks work&lt;/strong&gt;&lt;br&gt;
The goal of a whaling attack is to trick an individual into disclosing personal or corporate information through social engineering, email spoofing and content spoofing efforts. For example, the attackers may send the victim an email that appears to be from a trusted source; some whaling campaigns include a customized malicious website that has been created especially for the attack.&lt;br&gt;
In addition, the sender's email address typically looks like it's from a believable source and may even contain corporate logos or links to a fraudulent website that has also been designed to look legitimate. Because a whale's level of trust and access within their organization tends to be high, it's worth the time and effort for the cybercriminal to put extra effort into making the endeavor seem believable.&lt;br&gt;
Whaling attacks often depend on social engineering techniques, as attackers will send hyperlinks or attachments to infect their victims with malware or to solicit sensitive information. By targeting high-value victims, especially chief executive officers (CEOs) and other corporate officers, attackers may also induce them to approve fraudulent wire transfers using business email compromise (BEC) techniques. In some cases, the attacker impersonates the CEO or other corporate officers to convince employees to carry out financial transfers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5 ways to protect against whaling phishing&lt;/strong&gt;&lt;br&gt;
Defending against whaling attacks involves a mix of employee security awareness, data detection policy and infrastructure. Some best practices for preventing whaling include the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Employee awareness. Preventing any type of cybersecurity threat requires every employee to take responsibility for protecting the company's assets. In the case of whaling phishing, all employees -- not just high-level executives -- must be trained about these attacks and how to identify them. Although high-level executives are the targets, lower-level employees could indirectly expose an executive to an attack through a security lapse. &lt;/li&gt;
&lt;li&gt; Multistep verification. All requests for wire transfers and access to confidential or sensitive data should pass through several levels of verification before being permitted. Check all emails and attachments from outside of the organization for malware, viruses and other issues to identify potentially malicious traffic.&lt;/li&gt;
&lt;li&gt; Data protection policies. Introduce data security policies to ensure emails and files are monitored for suspicious network activity. These policies should provide a layered defense against whale phishing and phishing in general to decrease the chances of a breach occurring at the last line of defense. One such policy might involve monitoring emails for indicators of phishing attacks and automatically blocking those emails from reaching potential victims.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Indicators of a potential phishing email include the following: &lt;br&gt;
o   The display or domain name differs slightly from the trusted address.&lt;br&gt;
o   The email body contains requests for money or information.&lt;br&gt;
o   The domain age does not match the domain age of the trusted correspondent.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Social media education. As an extension of employee awareness, make high-level executives aware of social media's potential role in enabling a whaling breach. Social media contains a wealth of information that cybercriminals can use to craft social engineering attacks like whale phishing. Executives can limit access to this information by setting privacy restrictions on their personal social media accounts. CEOs are often visible on social media in ways that telegraph behavioral data that criminals can mimic and exploit.&lt;/li&gt;
&lt;li&gt; Anti-phishing tools and organizations. Many vendors offer anti-phishing software and managed security services to help prevent whaling and other phishing attacks. Social engineering tactics remain prevalent, however, because they focus on exploiting human error, which exists with or without cybersecurity technology.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Examples of whaling attacks&lt;/strong&gt;&lt;br&gt;
One notable whaling attack occurred in 2016 when a high-ranking employee at Snapchat received an email from an attacker pretending to be the CEO. The employee was tricked into giving the attacker employee payroll information; ultimately, the Federal Bureau of Investigation (FBI) looked into the attack.&lt;br&gt;
Another whaling attack from 2016 involved a Seagate employee who unknowingly emailed the income tax data of several current and former company employees to an unauthorized third party. After reporting the phishing scam to the Internal Revenue Service (IRS) and the FBI, it was announced that thousands of people's personal data was exposed in that attack.&lt;br&gt;
A third notable example of whaling occurred in 2018 when the European cinema company Pathé was attacked and lost $21.5 million in the wake of the attack. The attackers, posing as high-ranking employees, emailed the CEO and chief financial officer (CFO) with a fraudulent request for a highly confidential financial transaction. Despite red flags, the CEO and CFO transferred roughly $800,000 to the attackers, which was only the beginning of the company's losses from the incident.&lt;br&gt;
HP has predicted that 2021 will likely see an increase in whaling attacks, along with other cybersecurity threats, such as ransomware, phishing emails and thread hijacking. The massive shift to remote work in response to the COVID-19 pandemic is, in part, responsible for exposing organizations to new vulnerabilities, HP said.&lt;br&gt;
Thank you very much for your time.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is a Phishing Attack?</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Sun, 13 Feb 2022 12:38:26 +0000</pubDate>
      <link>https://forem.com/aws-builders/what-is-a-phishing-attack-3khb</link>
      <guid>https://forem.com/aws-builders/what-is-a-phishing-attack-3khb</guid>
      <description>&lt;p&gt;Phishing is a type of cyberattack that uses email, SMS, phone, or social media to entice a victim to share personal information — such as passwords or account numbers — or to download a malicious file that will install viruses on their computer or phone.&lt;br&gt;
Features of a Phishing Email:&lt;br&gt;
Typical characteristics of phishing messages make them easy to recognize. Phishing emails usually have one or more of the following indicators:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Asks for Sensitive Information   &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses a Different Domain    &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contains Links that Don’t Match the Domain    &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Includes Unsolicited Attachments    &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is Not Personalized    &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses Poor Spelling and Grammar    &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tries to Panic the Recipient&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Phishing attack examples:&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
The following illustrates a common phishing scam attempt:    &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A spoofed email ostensibly from myuniversity.edu is mass-distributed to as many faculty members as possible.    &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The email claims that the user’s password is about to expire. Instructions are given to go to myuniversity.edu/renewal to renew their password within 24 hours.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58700qoidqpj4jnslwet.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58700qoidqpj4jnslwet.jpg" alt=" " width="745" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Several things can occur by clicking the link. For example:&lt;/em&gt;    &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The user is redirected to myuniversity.edurenewal.com, a bogus page appearing exactly like the real renewal page, where both new and existing passwords are requested. The attacker, monitoring the page, hijacks the original password to gain access to secured areas on the university network.   &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The user is sent to the actual password renewal page. However, while being redirected, a malicious script activates in the background to hijack the user’s session cookie. This results in a reflected XSS attack, giving the perpetrator privileged access to the university network.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Types of Phishing&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Spear phishing:&lt;/em&gt;&lt;br&gt;
Spear phishing is a phishing attempt that targets a specific individual or group of individuals. One adversary group, known as Helix Kitten, researches individuals in specific industries to learn about their interests and then structures spear phishing messages to appeal to those individuals. Victims may be targeted in an effort to reach a more valuable target; for example, a mid-level financial specialist may be targeted because her contact list contains email addresses for financial executives with greater access to sensitive information. Those higher-level executives may be targeted in the next phase of the attack.&lt;br&gt;
&lt;em&gt;Smishing:&lt;/em&gt;&lt;br&gt;
Smishing is a phishing campaign conducted through SMS messages instead of email. Smishing attacks are unlikely to result in a virus being downloaded directly. Instead, they usually lure the user into visiting a site that entices them to download malicious apps or content.&lt;br&gt;
&lt;em&gt;Vishing:&lt;/em&gt;&lt;br&gt;
Vishing is a phishing attack conducted by telephone. These attacks may use a fake Caller ID profile to impersonate a legitimate business, government agency or charitable organization. The purpose of the call is to steal personal information, such as bank account or credit card numbers.&lt;br&gt;
&lt;em&gt;Whaling:&lt;/em&gt;&lt;br&gt;
Whaling, also called business email compromise (BEC), is a type of spear-phishing that targets a high-profile victim, such as a CEO or CFO. Whaling attacks usually employ a sense of urgency to pressure the victim into wiring funds or sharing credentials on a malicious website.&lt;/p&gt;

&lt;p&gt;What happens if you open a phishing email?&lt;br&gt;
Simply reading a phishing message is normally not unsafe. The user must click a link or download a file to activate malicious activity. Be cautious about all communications you receive, and remember that although phishing may most commonly happen through email, it can also occur through cell phone, SMS and social media.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagjh4jmmngvo7qp44g41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagjh4jmmngvo7qp44g41.png" alt=" " width="624" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Phishing Prevention:&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Use anti-virus software: Anti-malware tools scan devices to prevent, detect and remove malware that enter the system through phishing.&lt;/p&gt;

&lt;p&gt;Use an anti-spam filter: Anti-spam filters use pre-defined blacklists created by expert security researchers to automatically move phishing emails to your junk folder, to protect against human error.&lt;/p&gt;

&lt;p&gt;Use an up-to-date browser and software: Regardless of your system or browser, make sure you are always using the latest version. Companies are constantly patching and updating their solutions to provide stronger defenses against phishing scams, as new and innovative attacks are launched each day.&lt;/p&gt;

&lt;p&gt;Never reply to spam: Responding to phishing emails lets cybercriminals know that your address is active. They will then put your address at the top of their priority lists and retarget you immediately.&lt;/p&gt;

</description>
      <category>security</category>
    </item>
    <item>
      <title>How to Prepare for Ransomware Attacks</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Sat, 09 Oct 2021 00:38:03 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-to-prepare-for-ransomware-attacks-81h</link>
      <guid>https://forem.com/aws-builders/how-to-prepare-for-ransomware-attacks-81h</guid>
      <description>&lt;p&gt;Ransomware attacks continue to increase, using techniques that are growing more and more sophisticated and targeted. Security and risk management leaders need to look beyond just the endpoints to help protect the organization from ransomware.&lt;/p&gt;

&lt;p&gt;What exactly is Ransomware:&lt;br&gt;
Ransomware is malware that employs encryption to hold a victim’s information at ransom. A user or organization’s critical data is encrypted so that they cannot access files, databases, or applications. A ransom is then demanded to provide access. Ransomware is often designed to spread across a network and target database and file servers, and can thus quickly paralyze an entire organization. &lt;/p&gt;

&lt;p&gt;How does ransomware work?&lt;br&gt;
Ransomware uses asymmetric encryption. This is cryptography that uses a pair of keys to encrypt and decrypt a file. The public-private pair of keys is uniquely generated by the attacker for the victim, with the private key to decrypt the files stored on the attacker’s server. The attacker makes the private key available to the victim only after the ransom is paid, though as seen in recent ransomware campaigns, that is not always the case. Without access to the private key, it is nearly impossible to decrypt the files that are being held for ransom.&lt;br&gt;
Many variations of ransomware exist. Often ransomware (and other malware) is distributed using email spam campaigns or through targeted attacks. Malware needs an attack vector to establish its presence on an endpoint. After presence is established, malware stays on the system until its task is accomplished.&lt;br&gt;
After a successful exploit, ransomware drops and executes a malicious binary on the infected system. This binary then searches and encrypts valuable files, such as Microsoft Word documents, images, databases, and so on. The ransomware may also exploit system and network vulnerabilities to spread to other systems and possibly across entire organizations.&lt;br&gt;
Once files are encrypted, ransomware prompts the user for a ransom to be paid within 24 to 48 hours to decrypt the files, or they will be lost forever. If a data backup is unavailable or those backups were themselves encrypted, the victim is faced with paying the ransom to recover personal files.&lt;/p&gt;

&lt;p&gt;Why is ransomware spreading?&lt;br&gt;
Ransomware attacks and their variants are rapidly evolving to counter preventive technologies for several reasons:&lt;br&gt;
• Easy availability of malware kits that can be used to create new malware samples on demand&lt;br&gt;
• Use of known good generic interpreters to create cross-platform ransomware (for example, Ransom32 uses Node.js with a JavaScript payload)&lt;br&gt;
• Use of new techniques, such as encrypting the complete disk instead of selected files&lt;br&gt;
Today’s thieves don’t even have to be tech savvy. Ransomware marketplaces have sprouted up online, offering malware strains for any would-be cybercrook and generating extra profit for the malware authors, who often ask for a cut in the ransom proceeds.&lt;/p&gt;

&lt;p&gt;Ransomware Defense Life Cycle&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqyg0qipf4jrw26xrm95n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqyg0qipf4jrw26xrm95n.png" alt="Alt Text" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How to defend against ransomware:&lt;br&gt;
To avoid ransomware and mitigate damage if you are attacked, follow these tips:&lt;br&gt;
• Back up your data. The best way to avoid the threat of being locked out of your critical files is to ensure that you always have backup copies of them, preferably in the cloud and on an external hard drive. This way, if you do get a ransomware infection, you can wipe your computer or device free and reinstall your files from backup. This protects your data and you won’t be tempted to reward the malware authors by paying a ransom. Backups won’t prevent ransomware, but it can mitigate the risks.&lt;br&gt;
• Secure your backups. Make sure your backup data is not accessible for modification or deletion from the systems where the data resides. Ransomware will look for data backups and encrypt or delete them so they cannot be recovered, so use backup systems that do not allow direct access to backup files.&lt;br&gt;
• Use security software and keep it up to date. Make sure all your computers and devices are protected with comprehensive security software and keep all your software up to date. Make sure you update your devices’ software early and often, as patches for flaws are typically included in each update.&lt;br&gt;
• Practice safe surfing. Be careful where you click. Don’t respond to emails and text messages from people you don’t know, and only download applications from trusted sources. This is important since malware authors often use social engineering to try to get you to install dangerous files.&lt;br&gt;
• Only use secure networks. Avoid using public Wi-Fi networks, since many of them are not secure, and cybercriminals can snoop on your internet usage. Instead, consider installing a VPN, which provides you with a secure connection to the internet no matter where you go.&lt;br&gt;
• Implement a security awareness program. Provide regular security awareness training for every member of your organization so they can avoid phishing and other social engineering attacks. Conduct regular drills and tests to be sure that training is being observed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2g9xry7dwskdok05rwn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2g9xry7dwskdok05rwn.png" alt="Alt Text" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Anatomy of a Ransomware Attack&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40fkxufh4k6efdgz76lj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40fkxufh4k6efdgz76lj.png" alt="Alt Text" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Final and more important:&lt;br&gt;
Why shouldn’t I just pay the ransom?&lt;br&gt;
When faced with the possibility of weeks or months of recovery, it might be tempting to give in to a ransom demand. But there are several reasons why this is a bad idea:&lt;br&gt;
• You may never get a decryption key. When you pay a ransomware demand, you’re supposed to get a decryption key in return. But when you conduct a ransomware transaction, you’re depending on the integrity of criminals. Many people and organizations have paid the ransom only to receive nothing in return—they’re then out tens or hundreds or thousands of dollars, and they still have to rebuild their systems from scratch.&lt;br&gt;
• You could get repeated ransom demands. Once you pay a ransom, the cybercriminals who deployed the ransomware know you’re at their mercy. They may give you a working key if you’re willing to pay a little (or a lot) more.&lt;br&gt;
• You may receive a decryption key that works—kind of. The creators of ransomware aren’t in the file recovery business; they’re in the moneymaking business. In other words, the decryptor you receive may be just good enough for the criminals to say they held up their end of the deal. Moreover, it’s not unheard of for the encryption process itself to corrupt some files beyond repair. If this happens, even a good decryption key will be unable to unlock your files—they’re gone forever.&lt;br&gt;
• You may be painting a target on your back. Once you pay a ransom, criminals know you’re a good investment. An organization that has a proven history of paying the ransom is a more attractive target than a new target that may or may not pay. What’s going to stop the same group of criminals from attacking again in a year or two, or logging onto a forum and announcing to other cybercriminals that you’re an easy mark?&lt;br&gt;
• Even if everything somehow ends up fine, you’re still funding criminal activity. Say you pay the ransom, receive a good decryptor key, and get everything back up and running. This is merely the best worst-case scenario (and not just because you’re out a lot of money). When you pay the ransom, you’re funding criminal activities. Putting aside the obvious moral implications, you’re reinforcing the idea that ransomware is a business model that works. (Think about it—if no one ever paid the ransom, do you think they’d keep putting out ransomware?) Bolstered by their success and their outsized payday, these criminals will continue wreaking havoc on unsuspecting businesses, and will continue putting time and money into developing newer and even more nefarious strains of ransomware—one of which may find its way onto your devices in the future.&lt;/p&gt;

&lt;p&gt;Thank you very much for your time.&lt;/p&gt;

</description>
      <category>security</category>
      <category>aws</category>
    </item>
    <item>
      <title>DevOps y el futuro de la automatización de infraestructura.</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Fri, 23 Jul 2021 16:25:10 +0000</pubDate>
      <link>https://forem.com/aws-builders/devops-y-el-futuro-de-la-automatizacion-de-infraestructura-21fp</link>
      <guid>https://forem.com/aws-builders/devops-y-el-futuro-de-la-automatizacion-de-infraestructura-21fp</guid>
      <description>&lt;p&gt;El termino GITOPS comienza a darse a conocer en el año 2017, como complemento a la creciente demanda de nuevas tecnologías como Docker, Kubernetes, Cloud, etc. &lt;br&gt;
Este nuevo concepto derivado de la palabra GIT (sistema de control de versiones) y OPS (operaciones) es un conjunto de buenas practicas y el concepto de que todo gira en torno de GIT.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy35ea4b3a1pqfsyfrzqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy35ea4b3a1pqfsyfrzqq.png" alt="Alt Text" width="800" height="480"&gt;&lt;/a&gt;&lt;br&gt;
GITOPS se considera una evolución de infraestructura como código, incorporando las buenas practicas de DevOps, chocando así con el modelo tradicional establecido de CI/CD. &lt;br&gt;
Este proceso lineal que se usa en CI/CD se divide en 2 cuando aplicamos las practicas GITOPS:&lt;br&gt;
-Por un lado, se compilara la imagen y por otro se actualizara un repositorio de git, donde se indicara que se ha subido de versión la imagen para un despliegue.&lt;br&gt;
-La herramienta usada en GITOPS se encargara de que el despliegue en el clúster coincida con este nuevo estado deseado registrado en GIT.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bekit8erwdwst9kohsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bekit8erwdwst9kohsf.png" alt="Alt Text" width="800" height="294"&gt;&lt;/a&gt;&lt;br&gt;
GITOPS Pipeline&lt;br&gt;
Como se puede ver en el diagrama, el verdadero cambio se centra en el despliegue. Aparecen dos nuevos elementos: el repositorio de configuración, donde se almacena todos los ficheros de despliegue de nuestra aplicación y un operador desplegado en el clúster de kubernetes que se encarga de monitorear el repositorio de configuración y cuando exista un cambio, desplegarlo en el clúster. Por tanto, cuando se quiera subir una nueva versión de la aplicación, se deberá realizar un Pull Request en el repositorio de configuración, y cuando éste se haya aprobado por las personas encargadas para este propósito se despliega automáticamente en el entorno correspondiente en el que se haya solicitado el despliegue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm23seo9l7464d25gywbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm23seo9l7464d25gywbj.png" alt="Alt Text" width="240" height="240"&gt;&lt;/a&gt;&lt;br&gt;
Seguridad.&lt;br&gt;
GitOps añade un beneficio considerable desde el punto de vista de seguridad. El propio clúster es el que se encarga de desplegar las aplicaciones. Además, con GitOps, cualquier cambio de estado del clúster esta registrado: se sabe quién realizo el cambio, quién lo aprobó y cuando se desplegó. Otro aspecto importante es cuando se necesita restaurar el clúster por algún desastre solo se debe sincronizar los repositorios de configuración&lt;/p&gt;

&lt;p&gt;Beneficios de GitOps&lt;br&gt;
-Mejor experiencia de desarrollador: Ayuda a los desarrolladores a utilizar una herramienta muy familiar como Git para gestionar Kubernetes con facilidad sin necesidad de conocer sus detalles internos. También aumenta la productividad de los desarrolladores recién incorporados.&lt;br&gt;
-Seguro: Con la ayuda de funcionalidades en Git, como la reversión, es fácil volver a una versión estable en caso de cualquier colapso, lo que reduce drásticamente el tiempo de recuperación.&lt;br&gt;
-Consistente: El flujo de trabajo de un extremo a otro de GitOps es muy consistente como infraestructura; un modelo proporciona aplicación, administración de Kubernetes, todo.&lt;br&gt;
-Implementación más rápida: Le ayuda a implementar aplicaciones más rápido que antes al integrar la automatización de implementación continua con un circuito de control de retroalimentación.&lt;br&gt;
-Entornos autodocumentados: Puede obtener un historial completo de cada cambio en el sistema y todos los detalles de lo que se implementó consultando la rama maestra. Ayuda a facilitar la colaboración con otros equipos o comparte suficiente conocimiento con un nuevo miembro.&lt;br&gt;
-Seguridad y cumplimiento: GitOps ayuda a las grandes organizaciones a mantenerse seguras y en cumplimiento. Puede bloquear los permisos de las personas que realmente tienen permiso para fusionarse en una rama.&lt;/p&gt;

&lt;p&gt;¿GitOps es un reemplazo de DevOps?&lt;br&gt;
DevOps es una practica que tiene en cuenta factores culturales, de procesos y de tecnología. En DevOps, la cultura es parte fundamental para asegurarse de que las personas se alineen y trabajen de forma fluida. &lt;br&gt;
Pero GitOps incluye aquellas mejores prácticas que unifican la implementación, la gestión y la supervisión de aplicaciones, y la administración de clústeres de containers.&lt;br&gt;
    GitOps por tanto, es un complemento de DevOps, con mayor orientación al desarrollo Cloud Native, Container First, o Serverless. &lt;/p&gt;

&lt;p&gt;Flujo de trabajo de ejemplo:&lt;br&gt;
1-Se escribe el código en el local.&lt;br&gt;
2-Se envía al repositorio de la aplicación y al mismo tiempo se ejecuta una pull request para revisar y aprobar el código.&lt;br&gt;
3-El responsable revisa y aprueba el código, el PR ejecuta la validación.&lt;br&gt;
4-Se dispara CI, valida el cambio del equipo y se completa correctamente.&lt;br&gt;
5-Se dispara CD, el pipeline de CD crea una PR al repositorio de GitOps con los cambios deseados en el estado del clúster.&lt;br&gt;
6-En cuestión de minutos, la aplicacion (operador) observa un cambio en el repositorio de GitOps e incorpora el cambio.&lt;br&gt;
7-Debido al cambio de imagen de Docker, el pod de aplicación requiere una actualización. Flux aplica el cambio al clúster.&lt;br&gt;
8-Se comprueba que la implementación se completó correctamente.&lt;/p&gt;

&lt;p&gt;Muchas gracias.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Create your own personal blog using AWS S3 and Jekyll.</title>
      <dc:creator>Pablo Salas</dc:creator>
      <pubDate>Mon, 05 Jul 2021 19:56:30 +0000</pubDate>
      <link>https://forem.com/aws-builders/create-your-own-personal-blog-using-aws-s3-and-jekyll-2o78</link>
      <guid>https://forem.com/aws-builders/create-your-own-personal-blog-using-aws-s3-and-jekyll-2o78</guid>
      <description>&lt;p&gt;First of all we must mention the benefits of AWS S3 from the AWS Documents: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Reliability: S3 guarantees 99.9% uptime and 99.999999999% durability. In other words, your site will be down for at most 45 minutes every month and you will almost never lose any data.

Scalability &amp;amp; Flexibility: S3 gives you unlimited storage. There is no cap on storage space and bandwidth. If your site suddenly receives a lot of traffic, S3 can scale to handle that increase in traffic without impacting user experience.

Pricing: You are charged based on your usage. There are no fixed costs that you need to pay every month. More details about the pricing is available here.

Developer friendly: AWS provides a comprehensive CLI which we can use to interact with all of their services including S3.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After that we can proceed with our blog:&lt;/p&gt;

&lt;p&gt;Basics Prerequisites you must have:&lt;br&gt;
Create an AWS Account.&lt;br&gt;
Setup the AWS CLI.&lt;/p&gt;

&lt;p&gt;Create your site using Jekyll&lt;br&gt;
What is Jekyll?&lt;br&gt;
Jekyll is a simple tool for transforming your content into static websites and blogs. You can simply create your content in a format such as Markdown and Jekyll can produce a static website which can served over the Internet.&lt;/p&gt;

&lt;p&gt;1- Install Jekyll&lt;br&gt;
   gem install jekyll&lt;/p&gt;

&lt;p&gt;If you don’t have Ruby installed on your machine, you might have to install that first.&lt;/p&gt;

&lt;p&gt;2- Create a new Jekyll site&lt;br&gt;
   mkdir my-site&lt;br&gt;
   cd my-site&lt;br&gt;
   jekyll new my-jekyll-site&lt;/p&gt;

&lt;p&gt;3- Generate your site&lt;br&gt;
   cd my-jekyll-site&lt;br&gt;
   jekyll serve&lt;/p&gt;

&lt;p&gt;If everything works correctly, you should see output similar to this:&lt;/p&gt;

&lt;p&gt;➜  my-jekyll-site jekyll serve&lt;br&gt;
Configuration file: /home/abhishek/Projects/learn-aws/tutorials/my-jekyll-site/_config.yml&lt;br&gt;
            Source: /home/abhishek/Projects/learn-aws/tutorials/my-jekyll-site&lt;br&gt;
       Destination: /home/abhishek/Projects/learn-aws/tutorials/my-jekyll-site/_site&lt;br&gt;
 Incremental build: disabled. Enable with --incremental&lt;br&gt;
      Generating...&lt;br&gt;
                    done in 0.193 seconds.&lt;br&gt;
 Auto-regeneration: enabled for '/home/abhishek/Projects/learn-aws/tutorials/my-jekyll-site'&lt;br&gt;
    Server address: &lt;a href="http://127.0.0.1:4000/" rel="noopener noreferrer"&gt;http://127.0.0.1:4000/&lt;/a&gt;&lt;br&gt;
  Server running... press ctrl-c to stop.&lt;/p&gt;

&lt;p&gt;If you open your browser and go to the address &lt;a href="http://127.0.0.1:4000/" rel="noopener noreferrer"&gt;http://127.0.0.1:4000/&lt;/a&gt; you should be able to see your site.&lt;br&gt;
 &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe21sq5in3lscq5jrv3x0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe21sq5in3lscq5jrv3x0.png" alt="Alt Text" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a bucket in Amazon S3&lt;br&gt;
1- Create S3 bucket&lt;br&gt;
   It is good practice to name your S3 bucket the same as your         site&lt;br&gt;
aws s3 mb s3://your-bucket-name&lt;/p&gt;

&lt;p&gt;2- Enable Static website&lt;br&gt;
   S3 buckets can be used for static websites. We need to enable the feature to be able to use it.&lt;br&gt;
aws s3 website s3://your-bucket-name/ --index-document index.html&lt;/p&gt;

&lt;p&gt;3- Set public access to the bucket&lt;br&gt;
   Any objects added to a S3 bucket are private by default. Since we want to use our S3 bucket as a static website we need to make all contents of this bucket public so users will be able to access the site.&lt;/p&gt;

&lt;p&gt;create a file called policy.json with the following content:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; {
"Version": "2021-07-05",
"Statement": [
    {
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;/p&gt;

&lt;p&gt;Don’t forget to replace your-bucket-name in the policy with the bucket you created previously.&lt;/p&gt;

&lt;p&gt;Upload to S3&lt;br&gt;
After we have created our S3 bucket, we will go ahead and upload all the required files to S3.&lt;/p&gt;

&lt;p&gt;The following command uploads all the static files generated by Jekyll under the _site folder to the S3 bucket you just created.&lt;br&gt;
aws s3 sync _site s3://your-bucket-name/ --delete&lt;/p&gt;

&lt;p&gt;Now, if you navigate to &lt;a href="http://your-bucket-name.s3-website-us-west-2.amazonaws.com/" rel="noopener noreferrer"&gt;http://your-bucket-name.s3-website-us-west-2.amazonaws.com/&lt;/a&gt;, you should be able to see our site similar to what we saw in Step 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ea4no2c9mcryoj9j5f7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ea4no2c9mcryoj9j5f7.png" alt="Alt Text" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you very much for your time and support.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
  </channel>
</rss>
