<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Daniel Muthoni </title>
    <description>The latest articles on Forem by Daniel Muthoni  (@danmuso).</description>
    <link>https://forem.com/danmuso</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F976194%2F3d3d5e63-b9f9-4ac1-8e28-5c7f635d58cc.png</url>
      <title>Forem: Daniel Muthoni </title>
      <link>https://forem.com/danmuso</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/danmuso"/>
    <language>en</language>
    <item>
      <title>Optimize AWS Costs: One Load Balancer, Multiple Apps</title>
      <dc:creator>Daniel Muthoni </dc:creator>
      <pubDate>Tue, 09 Dec 2025 12:40:54 +0000</pubDate>
      <link>https://forem.com/danmuso/optimize-aws-costs-one-load-balancer-multiple-apps-4neg</link>
      <guid>https://forem.com/danmuso/optimize-aws-costs-one-load-balancer-multiple-apps-4neg</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Running separate Application Load Balancers (ALBs) for each application costs $16-18/month per ALB. With 5 apps, that's $80-90/month just for load balancers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;Use a single ALB with path-based or host-based routing rules to serve multiple applications, reducing costs by up to 80%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create One Application Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to EC2 &amp;gt; Load Balancers in AWS Console&lt;br&gt;
Click Create Load Balancer &amp;gt; Application Load Balancer&lt;br&gt;
Configure:&lt;/p&gt;

&lt;p&gt;Name: shared-alb&lt;br&gt;
Scheme: Internet-facing&lt;br&gt;
IP address type: IPv4&lt;br&gt;
Select at least 2 availability zones&lt;/p&gt;

&lt;p&gt;Configure security group (allow HTTP/HTTPS)&lt;br&gt;
Create the ALB&lt;/p&gt;
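&lt;p&gt;The console steps above can also be scripted. A minimal AWS CLI sketch; the subnet and security-group IDs are hypothetical placeholders for your own:&lt;/p&gt;

```shell
# Create one internet-facing ALB spanning two availability zones.
# subnet-0aaa111, subnet-0bbb222, and sg-0ccc333 are placeholder IDs.
aws elbv2 create-load-balancer \
  --name shared-alb \
  --type application \
  --scheme internet-facing \
  --ip-address-type ipv4 \
  --subnets subnet-0aaa111 subnet-0bbb222 \
  --security-groups sg-0ccc333
```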

&lt;p&gt;&lt;strong&gt;Step 2: Create Target Groups for Each App&lt;/strong&gt;&lt;br&gt;
For each application, create a separate target group:&lt;/p&gt;

&lt;p&gt;Go to EC2 &amp;gt; Target Groups&lt;br&gt;
Click Create target group&lt;br&gt;
Configure each:&lt;/p&gt;

&lt;p&gt;Target type: Instances/IP/Lambda (based on your setup)&lt;br&gt;
Name: app1-targets, app2-targets, etc.&lt;br&gt;
Protocol: HTTP&lt;br&gt;
Port: Your app's port&lt;br&gt;
Health check path: /health or /&lt;/p&gt;

&lt;p&gt;Register targets (EC2 instances, containers, IPs)&lt;/p&gt;
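&lt;p&gt;Equivalently via the AWS CLI; the VPC ID, instance ID, and target-group ARN below are placeholders:&lt;/p&gt;

```shell
# Create a target group for app1 with a /health check on the app's port.
aws elbv2 create-target-group \
  --name app1-targets \
  --protocol HTTP \
  --port 8080 \
  --vpc-id vpc-0abc123 \
  --target-type instance \
  --health-check-path /health

# TG_ARN is the TargetGroupArn returned by the command above (placeholder).
aws elbv2 register-targets \
  --target-group-arn "$TG_ARN" \
  --targets Id=i-0def456
```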

&lt;p&gt;&lt;strong&gt;Step 3: Set Up Routing Rules&lt;/strong&gt;&lt;br&gt;
Option A: Path-Based Routing (Same domain, different paths)&lt;/p&gt;

&lt;p&gt;Go to your ALB &amp;gt; Listeners tab&lt;br&gt;
Select HTTP:80 or HTTPS:443 listener&lt;br&gt;
Click Manage rules &amp;gt; Add rules&lt;br&gt;
Create rules for each app:&lt;/p&gt;
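&lt;p&gt;Rules of this kind can also be added with the AWS CLI. In this sketch the listener and target-group ARNs are placeholders:&lt;/p&gt;

```shell
# Path-based rule: send /app1* traffic to app1's target group.
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 10 \
  --conditions Field=path-pattern,Values='/app1*' \
  --actions Type=forward,TargetGroupArn="$APP1_TG_ARN"

# Host-based variant: route by subdomain instead of path.
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 20 \
  --conditions Field=host-header,Values=app1.example.com \
  --actions Type=forward,TargetGroupArn="$APP1_TG_ARN"
```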

&lt;p&gt;Rule 1: /app1* → Forward to app1-targets&lt;br&gt;
Rule 2: /app2* → Forward to app2-targets&lt;br&gt;
Rule 3: /api/* → Forward to api-targets&lt;br&gt;
Default: Forward to landing-page-targets&lt;/p&gt;

&lt;p&gt;Option B: Host-Based Routing (Different subdomains)&lt;br&gt;
Create rules based on hostname:&lt;/p&gt;

&lt;p&gt;Rule 1: app1.example.com → Forward to app1-targets&lt;br&gt;
Rule 2: app2.example.com → Forward to app2-targets&lt;br&gt;
Rule 3: api.example.com → Forward to api-targets&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Configure DNS&lt;/strong&gt;&lt;br&gt;
Point all domains/subdomains to the single ALB in Route 53 or your DNS provider (in Route 53, an alias A record is preferred; a CNAME works for subdomains):&lt;/p&gt;

&lt;p&gt;app1.example.com → CNAME → shared-alb-xxxxx.region.elb.amazonaws.com&lt;br&gt;
app2.example.com → CNAME → shared-alb-xxxxx.region.elb.amazonaws.com&lt;br&gt;
api.example.com  → CNAME → shared-alb-xxxxx.region.elb.amazonaws.com&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Add SSL Certificates (Optional but Recommended)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Request certificates in AWS Certificate Manager for all domains&lt;br&gt;
Add certificates to ALB listener (HTTPS:443)&lt;/p&gt;
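&lt;p&gt;A sketch of the same via the AWS CLI; the listener and certificate ARNs are placeholders:&lt;/p&gt;

```shell
# Request one certificate covering all subdomains, validated via DNS.
aws acm request-certificate \
  --domain-name example.com \
  --subject-alternative-names app1.example.com app2.example.com api.example.com \
  --validation-method DNS

# After validation completes, attach the certificate to the HTTPS:443 listener.
aws elbv2 add-listener-certificates \
  --listener-arn "$LISTENER_ARN" \
  --certificates CertificateArn="$CERT_ARN"
```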

&lt;p&gt;&lt;strong&gt;Step 6: Configure Health Checks&lt;/strong&gt;&lt;br&gt;
For each target group:&lt;/p&gt;

&lt;p&gt;Go to Target Groups &amp;gt; Select group&lt;br&gt;
Edit Health check settings:&lt;/p&gt;

&lt;p&gt;Path: /health or your health endpoint&lt;br&gt;
Interval: 30 seconds&lt;br&gt;
Timeout: 5 seconds&lt;br&gt;
Healthy threshold: 2&lt;br&gt;
Unhealthy threshold: 3&lt;/p&gt;
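&lt;p&gt;The same health-check settings expressed as a single AWS CLI call (the target-group ARN is a placeholder):&lt;/p&gt;

```shell
aws elbv2 modify-target-group \
  --target-group-arn "$TG_ARN" \
  --health-check-path /health \
  --health-check-interval-seconds 30 \
  --health-check-timeout-seconds 5 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 3
```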

&lt;p&gt;&lt;strong&gt;Cost Savings Example&lt;/strong&gt;&lt;br&gt;
Before (5 separate ALBs):&lt;/p&gt;

&lt;p&gt;5 ALBs × $16/month = $80/month&lt;br&gt;
Total: $80/month&lt;/p&gt;

&lt;p&gt;After (1 shared ALB):&lt;/p&gt;

&lt;p&gt;1 ALB × $16/month = $16/month&lt;br&gt;
Total: $16/month&lt;/p&gt;
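&lt;p&gt;The arithmetic above as a quick shell check, assuming the $16/month base price (per-LCU usage charges excluded):&lt;/p&gt;

```shell
# Compare five dedicated ALBs against one shared ALB at $16/month base price.
ALB_MONTHLY=16
APPS=5
BEFORE=$((APPS * ALB_MONTHLY))   # 5 dedicated ALBs
AFTER=$ALB_MONTHLY               # 1 shared ALB
SAVINGS=$((BEFORE - AFTER))
echo "Savings: \$${SAVINGS}/month (\$$((SAVINGS * 12))/year)"
```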

&lt;p&gt;Savings: $64/month ($768/year)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use Priority Rules Wisely - Give more specific rules lower priority numbers, since rules with lower numbers are evaluated first&lt;br&gt;
Monitor Target Health - Set up CloudWatch alarms for unhealthy targets&lt;br&gt;
Enable Access Logs - Store in S3 for troubleshooting&lt;br&gt;
Use HTTPS - Terminate SSL at ALB for better security&lt;br&gt;
Set Appropriate Timeouts - Match your app's response times&lt;br&gt;
Tag Everything - Tag target groups and ALB for cost tracking&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations to Consider&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ALB has default service quotas (for example, 100 rules per listener and 1,000 targets per load balancer; both can be raised via Service Quotas)&lt;br&gt;
All apps share the same ALB capacity (but auto-scales)&lt;br&gt;
Request limits apply to the entire ALB, not per app&lt;br&gt;
One ALB failure affects all applications (use multiple ALBs across regions for critical apps)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When NOT to Use This Approach&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Regulatory requirements mandate isolation&lt;/li&gt;
&lt;li&gt;Apps have drastically different traffic patterns&lt;/li&gt;
&lt;li&gt;You need different network configurations per app&lt;/li&gt;
&lt;li&gt;Critical production apps requiring dedicated resources&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Using a single ALB with routing rules is a simple yet powerful way to reduce AWS costs without sacrificing functionality. For most small to medium workloads, this approach provides excellent cost optimization while maintaining flexibility and scalability.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>architecture</category>
      <category>aws</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>AWS DevSecOps: Deep Dive into Software Development, Security, and Operations Integration</title>
      <dc:creator>Daniel Muthoni </dc:creator>
      <pubDate>Sat, 22 Nov 2025 09:14:12 +0000</pubDate>
      <link>https://forem.com/danmuso/aws-devsecops-deep-dive-into-software-development-security-and-operations-integration-f22</link>
      <guid>https://forem.com/danmuso/aws-devsecops-deep-dive-into-software-development-security-and-operations-integration-f22</guid>
      <description>&lt;p&gt;I had the opportunity to participate in the AWS Community Day KE 2025 at KCA University, where I shared the following insights.&lt;/p&gt;

&lt;p&gt;The convergence of software development, security, and operations in AWS environments requires sophisticated orchestration of practices, tools, and cultural transformations. This comprehensive analysis explores the intricate relationships between these domains and their practical implementation at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Software Development in DevSecOps Context&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Development Lifecycle Security Integration&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Pre-Development Security Planning:&lt;/strong&gt; Security requirements gathering begins during the product planning phase, where threat modeling sessions identify potential attack vectors specific to the planned functionality. Development teams collaborate with security architects to establish security acceptance criteria alongside functional requirements. For example, a payment processing feature automatically inherits requirements for PCI DSS compliance, input validation, encryption standards, and audit logging. These requirements are captured in the same tracking systems used for feature development, ensuring visibility and accountability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secure Coding Practices at Scale:&lt;/strong&gt; Modern development practices embed security considerations into daily workflows. IDE integrations provide real-time feedback on security issues as developers write code, identifying problems like hardcoded secrets, SQL injection vulnerabilities, and insecure cryptographic implementations. Code review processes include security-focused checklists that reviewers use to validate security aspects of proposed changes. Pull request templates include security impact assessments that force developers to consider the security implications of their changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Branch Protection and Security Gates:&lt;/strong&gt; Repository management implements security-enforced branch protection rules that prevent code from advancing without passing security checks. Master branches require passing security scans, peer reviews with security focus, and automated security testing. Feature branches undergo continuous security scanning that provides immediate feedback to developers. Emergency hotfix procedures include expedited security review processes that maintain security standards while enabling rapid deployment of critical fixes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advanced Testing Strategies&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Shift-Left Security Testing:&lt;/strong&gt; Security testing begins at the earliest stages of development. Unit tests include security-focused test cases that validate input sanitization, authentication mechanisms, and authorization logic. Integration tests verify security boundaries between services, ensuring that service A cannot access service B’s data without proper credentials. Contract testing validates that API security requirements are maintained across service interactions, preventing regression in security controls during system evolution.&lt;/p&gt;
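&lt;p&gt;As a minimal illustration of such a security-focused unit check, a CI step might assert that a sanitizer strips injection metacharacters before a change can merge. The sanitize function here is a hypothetical stand-in for application code:&lt;/p&gt;

```shell
# Hypothetical allow-list sanitizer: keep only letters, digits, and spaces,
# dropping SQL/HTML metacharacters from untrusted input.
sanitize() {
  printf '%s' "$1" | tr -cd 'A-Za-z0-9 '
}

# Security-focused unit check: a classic injection payload must be defanged.
out=$(sanitize "Robert); DROP TABLE users;--")
echo "$out"   # prints: Robert DROP TABLE users
```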

&lt;p&gt;&lt;strong&gt;Comprehensive Security Test Automation:&lt;/strong&gt; Automated security testing encompasses multiple dimensions of application security. Static analysis tools examine source code for common vulnerability patterns, with custom rules that enforce organization-specific security requirements. Dynamic testing tools interact with running applications to identify runtime security issues, including authentication bypasses and injection vulnerabilities. Interactive application security testing combines static and dynamic approaches to provide comprehensive vulnerability coverage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Performance Testing:&lt;/strong&gt; Security controls undergo performance testing to ensure they don’t degrade system performance beyond acceptable thresholds. Authentication mechanisms are load tested to verify they can handle peak user loads. Encryption and decryption operations are benchmarked to ensure they meet performance requirements. Rate limiting and DDoS protection mechanisms are validated under simulated attack conditions to verify their effectiveness without impacting legitimate users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Architecture and Implementation&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Defense in Depth Strategy&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Application Layer Security:&lt;/strong&gt; Applications implement multiple layers of security controls that provide overlapping protection. Input validation occurs at multiple points: client-side validation for user experience, server-side validation for security, and database-level constraints for data integrity. Authentication mechanisms include primary authentication, secondary factor verification, and session management with appropriate timeout and rotation policies. Authorization implements both role-based and attribute-based access controls with fine-grained permission matrices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure Security Hardening:&lt;/strong&gt; Infrastructure security extends beyond basic configuration to include comprehensive hardening practices. Operating systems undergo security hardening that removes unnecessary services, applies security patches, and configures security settings according to industry benchmarks. Network security implements microsegmentation that limits lateral movement, with application-level firewalls that provide deep packet inspection. Database security includes transparent data encryption, network encryption, and access logging with automated anomaly detection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Protection Throughout Lifecycle:&lt;/strong&gt; Data protection mechanisms address data security from creation through disposal. Data classification systems automatically tag sensitive data with appropriate protection levels. Encryption key management implements hierarchical key structures with regular rotation and audit trails. Data loss prevention systems monitor data movement and prevent unauthorized data exfiltration. Data retention policies automatically archive or delete data according to business and regulatory requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Automation and Orchestration&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Automated Threat Response:&lt;/strong&gt; Security automation responds to threats with minimal human intervention while maintaining appropriate oversight. Intrusion detection systems automatically isolate suspected compromised systems while preserving forensic evidence. Malware detection triggers automated remediation that removes threats and rebuilds affected systems from known-good configurations. Account compromise detection automatically disables affected accounts, rotates credentials, and initiates investigation workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance Automation:&lt;/strong&gt; Automated compliance systems continuously validate adherence to regulatory requirements and organizational policies. Configuration drift detection identifies when systems deviate from approved baselines and automatically remediates common issues. Compliance reporting generates evidence packages for auditors that include configuration snapshots, access logs, and security test results. Policy violations trigger automated workflows that notify responsible parties and track remediation progress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Orchestration Workflows:&lt;/strong&gt; Security orchestration platforms coordinate complex security processes across multiple tools and teams. Incident response workflows automatically gather relevant information, notify appropriate personnel, and coordinate response activities. Vulnerability management processes automatically prioritize threats based on business impact, coordinate patching activities, and verify remediation effectiveness. Security assessment workflows schedule and coordinate penetration testing, vulnerability scanning, and compliance audits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operations Excellence in DevSecOps&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Infrastructure as Code Security&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Security-First Infrastructure Design:&lt;/strong&gt; Infrastructure as Code templates embed security principles from initial design. Network architectures implement security zones with appropriate access controls and monitoring. Compute resources include security agents and monitoring tools by default. Storage systems automatically configure encryption, access logging, and backup procedures. Load balancers and API gateways include DDoS protection, rate limiting, and security monitoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated Infrastructure Validation:&lt;/strong&gt; Infrastructure deployment includes comprehensive security validation before resources become operational. Configuration scanning validates security settings against organizational baselines and industry standards. Network connectivity testing ensures that security groups and network ACLs provide appropriate isolation. Compliance checking validates that deployed infrastructure meets regulatory requirements. Performance testing ensures that security controls don’t negatively impact system performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous Infrastructure Security:&lt;/strong&gt; Operational infrastructure undergoes continuous security monitoring and adjustment. Configuration drift detection identifies unauthorized changes and automatically restores approved configurations. Vulnerability scanning regularly assesses infrastructure components and coordinates patching activities. Capacity monitoring ensures that security controls scale appropriately with system load. Cost optimization balances security requirements with operational efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring and Observability&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Comprehensive Security Monitoring:&lt;/strong&gt; Security monitoring provides visibility into all aspects of system security. Application monitoring tracks authentication attempts, authorization decisions, and data access patterns. Infrastructure monitoring observes network traffic, system resource usage, and configuration changes. User behavior analytics identify unusual patterns that may indicate compromised accounts or insider threats. Threat intelligence integration provides context about emerging threats and attack patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-Time Alerting and Response:&lt;/strong&gt; Alerting systems provide timely notification of security events while minimizing false positives. Machine learning algorithms establish baseline behavior patterns and identify anomalies that warrant investigation. Alert correlation combines related events into coherent incident narratives. Escalation procedures ensure that critical security events receive appropriate attention and resources. Response time tracking measures the effectiveness of security operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Analytics and Intelligence:&lt;/strong&gt; Security analytics platforms process large volumes of security data to identify trends and patterns. Behavioral analytics establish normal patterns for users, systems, and applications. Threat hunting processes proactively search for indicators of compromise. Security metrics provide visibility into security posture trends and the effectiveness of security controls. Predictive analytics identify potential security issues before they become active threats.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational Resilience&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Business Continuity and Disaster Recovery:&lt;/strong&gt; Security considerations are integrated into business continuity planning to ensure that recovery processes don’t compromise security. Backup systems include security monitoring and access controls equivalent to production systems. Disaster recovery testing validates that security controls function correctly in recovery scenarios. Incident response plans address scenarios where security incidents trigger business continuity procedures. Recovery time objectives include requirements for security control restoration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change Management Integration:&lt;/strong&gt; Change management processes include security impact assessment and approval procedures. Emergency changes include expedited security review processes that maintain security standards while enabling rapid deployment. Change rollback procedures include security validation to ensure that rollbacks don’t introduce security vulnerabilities. Change communication includes security teams in planning and notification processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capacity and Performance Management:&lt;/strong&gt; Capacity planning includes security control resource requirements to ensure that security systems scale appropriately with business growth. Performance management includes security control impact assessment to ensure that security doesn’t compromise user experience. Resource optimization balances security requirements with cost considerations. Scalability testing includes security controls to validate that they perform effectively under increased load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-Domain Integration Strategies&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Cultural and Organizational Transformation&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Shared Responsibility Models:&lt;/strong&gt; Organizations implement shared responsibility frameworks that clearly define security responsibilities across development, security, and operations teams. Development teams own application security, including secure coding practices, security testing, and vulnerability remediation. Security teams provide security architecture guidance, threat intelligence, and specialized security services. Operations teams maintain infrastructure security, including patching, monitoring, and incident response. Clear interfaces between teams prevent security gaps while avoiding duplicate responsibilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Skills Development:&lt;/strong&gt; Comprehensive training programs ensure that all teams have appropriate security knowledge for their responsibilities. Developers receive secure coding training, threat modeling education, and security testing instruction. Operations personnel learn infrastructure security, incident response, and compliance management. Security professionals develop understanding of development and operations processes to provide effective guidance and support. Cross-training programs ensure that teams can collaborate effectively and provide backup coverage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Communication and Collaboration Frameworks:&lt;/strong&gt; Structured communication processes ensure effective information sharing between teams. Regular security briefings keep all teams informed of emerging threats and organizational security priorities. Incident post-mortems include representatives from all affected teams to ensure comprehensive learning. Security architecture reviews include development and operations input to ensure that security designs are practical and implementable. Feedback loops ensure that operational experience informs security design decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metrics and Continuous Improvement&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Integrated Performance Measurement:&lt;/strong&gt; Metrics frameworks measure DevSecOps effectiveness across all three domains. Development metrics include security defect rates, security test coverage, and security requirement completion. Security metrics encompass threat detection effectiveness, incident response times, and compliance adherence. Operations metrics track system uptime, security control performance, and infrastructure vulnerability rates. Combined metrics provide holistic views of DevSecOps maturity and effectiveness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous Process Optimization:&lt;/strong&gt;  Regular assessment and improvement processes ensure that DevSecOps practices evolve with organizational needs and threat landscapes. Process retrospectives identify inefficiencies and improvement opportunities. Benchmarking against industry standards provides context for organizational performance. Pilot programs test new tools and processes before organization-wide implementation. Feedback collection ensures that process changes address real operational challenges.&lt;/p&gt;

&lt;p&gt;This integrated approach to DevSecOps ensures that software development, security, and operations work together seamlessly to deliver secure, reliable, and efficient systems while maintaining the agility and speed that modern businesses require.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Resources and Cost Optimization Strategy in AWS</title>
      <dc:creator>Daniel Muthoni </dc:creator>
      <pubDate>Sat, 11 Jan 2025 04:45:53 +0000</pubDate>
      <link>https://forem.com/danmuso/resources-and-cost-optimization-strategy-in-aws-iob</link>
      <guid>https://forem.com/danmuso/resources-and-cost-optimization-strategy-in-aws-iob</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2a6n05hgszj7ntx1e5cc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2a6n05hgszj7ntx1e5cc.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
Amazon Web Services (AWS) offers a comprehensive suite of cloud computing services. While these services provide businesses with flexibility, scalability, and innovation opportunities, optimizing resource usage and costs is vital to avoid overspending and maximize value. Below is a detailed exploration of strategies to optimize AWS resources and reduce costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understand AWS Billing and Costs
&lt;/h2&gt;

&lt;p&gt;AWS operates on a pay-as-you-go pricing model. To start optimizing, it's critical to understand how billing works:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Pricing Models:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;On-Demand:&lt;/strong&gt; Pay for compute or database capacity by the hour or second without long-term commitments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reserved Instances (RI):&lt;/strong&gt; Commit to usage for 1 or 3 years for discounts of up to 75%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spot Instances:&lt;/strong&gt; Use spare AWS capacity for discounts of up to 90%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Savings Plans:&lt;/strong&gt; Commit to consistent usage for significant discounts on EC2, Lambda, and Fargate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Cost Explorer and Budgets:&lt;/strong&gt; Use AWS Cost Explorer to visualize and analyze spending patterns. Set up AWS Budgets for alerts when costs exceed predefined thresholds.&lt;/li&gt;
&lt;/ul&gt;
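&lt;p&gt;Spending can also be pulled programmatically. For example, last month's unblended cost via the Cost Explorer API (the dates are illustrative):&lt;/p&gt;

```shell
aws ce get-cost-and-usage \
  --time-period Start=2025-01-01,End=2025-02-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost
```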

&lt;h2&gt;
  
  
  Rightsize Resources
&lt;/h2&gt;

&lt;p&gt;Rightsizing involves matching resources to actual demand to avoid over-provisioning or under-utilization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Analyze Instance Usage:&lt;/strong&gt; Use the AWS Trusted Advisor to identify underutilized or idle EC2 instances.
Resize instances using Amazon EC2 Auto Scaling or switch to smaller instance types.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review EBS Volumes:&lt;/strong&gt; Delete unused Elastic Block Store (EBS) volumes. Switch to lower-cost volume types like general-purpose SSD (gp3) or Cold HDD (sc1) for infrequent access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimize RDS Instances:&lt;/strong&gt; Leverage RDS Storage Auto Scaling.
Use Aurora Serverless for variable workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Utilize Elasticity
&lt;/h2&gt;

&lt;p&gt;AWS enables businesses to scale resources dynamically based on demand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Auto Scaling Groups:&lt;/strong&gt; Automatically increase or decrease EC2 instances based on predefined policies. Use predictive scaling to anticipate future demand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless Architectures:&lt;/strong&gt; Adopt AWS Lambda to pay only for execution time. Use Amazon API Gateway and AWS Step Functions to reduce infrastructure management costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spot Instances:&lt;/strong&gt; Incorporate spot instances into workloads that can tolerate interruptions, such as batch processing or data analysis.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Leverage Cost-Effective Storage Solutions
&lt;/h2&gt;

&lt;p&gt;AWS provides a variety of storage solutions for different needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3 Storage Classes:&lt;/strong&gt; Use S3 Standard for frequently accessed data. Switch to S3 Intelligent Tiering for automatic cost optimization. Use S3 Glacier or S3 Glacier Deep Archive for archival data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EFS and FSx:&lt;/strong&gt; Use Amazon Elastic File System (EFS) Infrequent Access storage class for cost savings. Migrate legacy file systems to Amazon FSx for managed file storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lifecycle Policies:&lt;/strong&gt; 
Implement lifecycle policies to transition data between storage classes automatically.&lt;/li&gt;
&lt;/ul&gt;
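&lt;p&gt;A lifecycle policy of the kind described above might look like the following sketch; the rule ID, prefix, and day counts are illustrative:&lt;/p&gt;

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

&lt;p&gt;A policy like this is applied with &lt;code&gt;aws s3api put-bucket-lifecycle-configuration&lt;/code&gt;.&lt;/p&gt;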

&lt;h2&gt;
  
  
  Optimize Data Transfer Costs
&lt;/h2&gt;

&lt;p&gt;Data transfer costs can quickly escalate if not managed properly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use Content Delivery Networks (CDN):&lt;/strong&gt; Use Amazon CloudFront to cache content and reduce outbound data transfer costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leverage AWS Direct Connect:&lt;/strong&gt; For consistent, high-volume data transfer, use AWS Direct Connect to reduce bandwidth costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimize Cross-Region Data Transfer:&lt;/strong&gt; Architect systems to avoid unnecessary cross-region traffic by deploying resources closer to end-users.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Monitor and Optimize Networking
&lt;/h2&gt;

&lt;p&gt;Networking optimizations can significantly cut costs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Optimize Load Balancers:&lt;/strong&gt; Use Application Load Balancer (ALB) for HTTP/HTTPS traffic. Turn off unused Elastic Load Balancers (ELBs).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Private IPs:&lt;/strong&gt; Reduce costs by routing traffic through private IPs instead of public IPs when possible.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Leverage Discounts and Savings
&lt;/h2&gt;

&lt;p&gt;AWS offers several options to save money with upfront commitments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Savings Plans:&lt;/strong&gt; Choose Compute Savings Plans for flexibility across EC2, Fargate, and Lambda. Use EC2 Instance Savings Plans for specific EC2 instance families.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reserved Instances:&lt;/strong&gt; Commit to 1- or 3-year terms for predictable workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Discount Program (EDP):&lt;/strong&gt; Negotiate volume-based discounts for large-scale AWS usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implement Resource Tagging and Governance
&lt;/h2&gt;

&lt;p&gt;Tagging resources can improve cost visibility and management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tagging Strategy:&lt;/strong&gt; Use tags like Environment, Department, and Project to attribute costs accurately.
Enforce tagging compliance using AWS Config Rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Policies:&lt;/strong&gt; Use Service Quotas to limit the use of specific resources.
Regularly review and clean up unused or orphaned resources.&lt;/li&gt;
&lt;/ul&gt;
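&lt;p&gt;Tags like those above can be applied from the CLI as well; the instance ID and tag values here are illustrative:&lt;/p&gt;

```shell
aws ec2 create-tags \
  --resources i-0abc123def456 \
  --tags Key=Environment,Value=production Key=Department,Value=platform Key=Project,Value=checkout
```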

&lt;h2&gt;
  
  
  Automate Cost Optimization
&lt;/h2&gt;

&lt;p&gt;Automation reduces manual efforts in identifying and resolving inefficiencies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Trusted Advisor:&lt;/strong&gt; Regularly review cost optimization recommendations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Compute Optimizer:&lt;/strong&gt; Use Compute Optimizer to get insights on EC2, Lambda, and Auto Scaling performance and cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Third-Party Tools:&lt;/strong&gt; Consider tools like CloudHealth or Spot.io for advanced cost management.&lt;/li&gt;
&lt;/ul&gt;
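Beyond the managed tools, a lightweight first pass can be scripted. This illustrative sketch uses made-up CSV data; in practice you would export average CPU figures from CloudWatch or Compute Optimizer, then flag instances averaging below a utilization threshold:

```shell
#!/bin/sh
# Made-up sample data: instance-id,average-cpu-percent
printf '%s\n' "i-0aaa111,4" "i-0bbb222,63" "i-0ccc333,11" > /tmp/cpu_report.csv

THRESHOLD=20   # flag anything averaging below 20% CPU

cat /tmp/cpu_report.csv | while IFS=, read -r instance cpu; do
  if [ "$cpu" -lt "$THRESHOLD" ]; then
    echo "Candidate for downsizing: $instance (avg CPU ${cpu}%)"
  fi
done
```

With the sample data, two of the three instances are flagged; a real pipeline would feed these candidates into a review or ticketing process rather than acting on them automatically.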

&lt;h2&gt;
  
  
  Foster a Culture of Cost Awareness
&lt;/h2&gt;

&lt;p&gt;Cost optimization isn't a one-time activity; it requires ongoing vigilance and awareness:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Train Teams:&lt;/strong&gt; Educate teams about AWS pricing models and optimization techniques.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Allocation Reports:&lt;/strong&gt; Share cost reports with stakeholders to drive accountability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regular Reviews:&lt;/strong&gt; Schedule regular audits to identify and act on inefficiencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS offers unparalleled flexibility and scalability, but without a deliberate strategy, costs can spiral. By right-sizing resources, leveraging automation, adopting cost-effective storage and computing models, and fostering a culture of cost awareness, businesses can optimize their AWS usage and achieve significant savings. The key to successful cost optimization lies in continuous monitoring, periodic reviews, and adapting strategies as needs evolve.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Using PostgreSQL With Laravel</title>
      <dc:creator>Daniel Muthoni </dc:creator>
      <pubDate>Thu, 05 Oct 2023 11:42:41 +0000</pubDate>
      <link>https://forem.com/danmuso/using-postgresql-with-laravel-58ak</link>
      <guid>https://forem.com/danmuso/using-postgresql-with-laravel-58ak</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgy62qkrmovnfxr7yz6ev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgy62qkrmovnfxr7yz6ev.png" alt="Using PostgreSQL With Laravel" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, I'll illustrate how to establish a connection between Laravel applications and a PostgreSQL database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What advantages does PostgreSQL offer over the MySQL Database Engine?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MySQL is a relational database management system that stores data in tables of rows and columns; it powers many web applications, dynamic websites, and embedded systems. PostgreSQL is an object-relational database management system that offers a richer feature set: more flexible data types (such as arrays and JSONB), MVCC-based concurrency, strong standards compliance, and robust data integrity guarantees.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Installation&lt;/strong&gt;&lt;br&gt;
Download PostgreSQL for your platform from&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.postgresql.org/download/" rel="noopener noreferrer"&gt;https://www.postgresql.org/download/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then install it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. php.ini&lt;/strong&gt;&lt;br&gt;
MySQL is the default database driver, so you need to enable the PostgreSQL extensions for PHP on your machine by editing php.ini.&lt;/p&gt;

&lt;p&gt;Search for these two lines within your php.ini and remove the “;” in front of each:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;;extension=pdo_pgsql
;extension=pgsql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to (notice that “;” was removed)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;extension=pdo_pgsql
extension=pgsql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
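If you manage many machines, the same edit can be scripted. A minimal sed sketch (it operates on a throwaway copy here so nothing is modified blindly; point INI at your real php.ini, whose location varies by platform, and note that -i as written is GNU sed syntax):

```shell
#!/bin/sh
# Work on a throwaway copy; set INI to your real php.ini path when ready
INI=/tmp/php.ini.demo
printf '%s\n' ";extension=pdo_pgsql" ";extension=pgsql" > "$INI"

# Drop the leading ';' that comments out the two PostgreSQL extensions
# (GNU sed; on macOS/BSD use: sed -i '' ...)
sed -i 's/^;extension=pdo_pgsql/extension=pdo_pgsql/; s/^;extension=pgsql/extension=pgsql/' "$INI"

cat "$INI"
```

After the edit both lines are uncommented, which is exactly the manual change shown above.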



&lt;p&gt;&lt;strong&gt;3. .env&lt;/strong&gt;&lt;br&gt;
Update the following values in Laravel's .env file (the defaults are fine except DB_DATABASE and DB_PASSWORD, which you may need to change):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=database_name
DB_USERNAME=postgres
DB_PASSWORD=your_chosen_password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
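One gotcha: the database named in DB_DATABASE must already exist before Laravel can connect. A small sketch of how you might create it (database_name is the placeholder from the .env above; the SQL is written to a file for review rather than executed here):

```shell
#!/bin/sh
# Placeholder name matching the .env example above
SQL=/tmp/create_db.sql
printf '%s\n' \
  "-- Run as the postgres superuser, e.g.: psql -U postgres -f $SQL" \
  "CREATE DATABASE database_name;" > "$SQL"

cat "$SQL"
```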



&lt;p&gt;&lt;strong&gt;4. config/database.php&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to config/database.php and change the default connection line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'default' =&amp;gt; env('DB_CONNECTION', 'pgsql'),
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Migration&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Create your migration files as usual and run the migration.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Monitoring Amazon EC2 with CloudWatch and SNS Notifications.</title>
      <dc:creator>Daniel Muthoni </dc:creator>
      <pubDate>Sun, 27 Aug 2023 03:39:27 +0000</pubDate>
      <link>https://forem.com/danmuso/monitoring-amazon-ec2-with-cloudwatch-and-sns-notifications-24bi</link>
      <guid>https://forem.com/danmuso/monitoring-amazon-ec2-with-cloudwatch-and-sns-notifications-24bi</guid>
      <description>&lt;p&gt;Monitoring Amazon Elastic Compute Cloud (Amazon EC2) instances is crucial for maintaining the health, performance, and availability of your applications and infrastructure. Amazon CloudWatch, a monitoring and management service, provides a comprehensive suite of tools for monitoring various AWS resources, including EC2 instances. Additionally, you can use Amazon Simple Notification Service (SNS) to receive notifications based on CloudWatch alarms, enabling you to promptly respond to any issues or anomalies. This integration helps you ensure that your EC2 instances are operating efficiently and that any potential problems are addressed in a timely manner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-Requisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Account: You must have an active AWS account. If you don't have one, you can sign up for an AWS account on the AWS website by providing the necessary information.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Launch an EC2 Instance and install necessary packages&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We will launch an EC2 instance using the Amazon Linux 2023 AMI and the t3a.small instance type, leaving the default settings unchanged; feel free to adjust these to your needs. For this demonstration, SSH access is permitted from My IP only. In real-world scenarios, I strongly advise restricting access even further, for example to a bastion host or a specific administrative IP range.&lt;/p&gt;

&lt;p&gt;In the userdata section, you can include the following commands to install the AWS CloudWatch Agent and set up the configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash -xe

echo --- install packages ---
dnf update &amp;amp;&amp;amp; dnf install -y amazon-cloudwatch-agent-1.247358.0-1.amzn2023.x86_64 \
    gcc \
    ec2-instance-connect \
    aws-cfn-bootstrap.noarch \
    openssh-8.7p1-8.amzn2023.0.4.x86_64 \
    rsyslog-8.2204.0-3.amzn2023.0.2.x86_64

echo --- create cw agent config file ---
cat &amp;lt;&amp;lt; EOF &amp;gt; /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
{
  "agent": {
    "run_as_user": "root"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/secure",
            "log_group_name": "SSHunsuccessfulattempt",
            "log_stream_name": "{instance_id}",
            "retention_in_days": 3,
            "timestamp_format": "%b %d %H:%M:%S"
          }
        ]
      }
    }
  }
}
EOF

echo --- starting the cloudwatch agent ---
systemctl start amazon-cloudwatch-agent.service

echo --- modify sshd to log to file ---
systemctl stop sshd
sed -i 's|RestartSec=42s|RestartSec=42s\nStandardOutput=syslog\nStandardError=syslog\n|g' /lib/systemd/system/sshd.service
systemctl daemon-reload
systemctl start sshd

echo --- start syslog ---
systemctl start rsyslog

/opt/aws/bin/cfn-signal -e 0 --stack "ec2monitoring" --region "us-west-2" --resource MonitorCloudWatchLabInstance


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Instructions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To confirm that the Amazon CloudWatch agent is running, you can use the following command:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status amazon-cloudwatch-agent.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this command in your terminal will provide information about the current status of the CloudWatch agent service, including whether it's active and running, any recent logs, and more. This will help you verify that the CloudWatch agent has been successfully started and is operational on your EC2 instance. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;To view the configuration file of the Amazon CloudWatch agent, you can use the more command or any text viewer of your choice. The configuration file is typically located at /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json. Here's the command to view the configuration file using more:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;more /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this command in your terminal will display the contents of the CloudWatch agent's configuration file, allowing you to inspect the settings and configurations that you've defined for the agent's behavior.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To view log files, in the terminal session, enter the following.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /var/log
ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Open up a real-time display of the secure log file:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tail -f secure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;In the AWS Management Console search bar, enter cloudwatch, and click the CloudWatch result under Services:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Create an SNS topic, add an email subscription to it, and confirm the subscription from the confirmation email AWS sends you.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's a concise recap of the progress so far:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;EC2 Instance:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Configured and launched an EC2 instance with CloudWatch log agent running as a service.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;CloudWatch Logging:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Created a CloudWatch Log Group and Log Stream to capture logs from the EC2 instance's "secure" log file.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;SNS Notifications:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Established an SNS topic and subscription, ready to receive push notifications triggered by CloudWatch alarms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Navigate back to CloudWatch, and click Alarms &amp;gt; All alarms:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Create Alarm.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the search bar in the Metrics section, enter Incoming log events and press enter:&lt;br&gt;
 Select Account Metrics -&amp;gt; IncomingLogEvents&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Now click on Graphed metrics, and select the following:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1 Minute as Period
Sum as Statistic.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;In the Conditions section, enter and select:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Whenever IncomingLogEvents is...: Select Greater/Equal
Than: Enter 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;In the Notification section, click in the Send a notification to... box and select your ssh-fails topic:&lt;/em&gt;&lt;br&gt;
    The alarm is created and is now listed on the Alarms page. The valid states for an alarm are:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In alarm - The alarm was triggered
OK - The Alarm was not triggered, status is normal
Insufficient data - Not enough data exists to either set 
    an Alarm or set the status to OK.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
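For repeatable setups, the alarm above can also be created from the AWS CLI. A sketch mirroring the console settings (the SNS topic ARN is a placeholder, and the command is echoed rather than executed so you can review it first):

```shell
#!/bin/sh
# Placeholder ARN; use the ARN of your ssh-fails SNS topic
TOPIC_ARN="arn:aws:sns:us-west-2:111122223333:ssh-fails"

# Echoed rather than executed: remove the leading 'echo' to create for real
echo aws cloudwatch put-metric-alarm \
  --alarm-name ssh-fails-alarm \
  --namespace AWS/Logs \
  --metric-name IncomingLogEvents \
  --statistic Sum \
  --period 60 \
  --threshold 2 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions "$TOPIC_ARN"
```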

&lt;p&gt;&lt;em&gt;From the Log groups page of CloudWatch, notice the Metric Filters column has no value.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Although the Alarm is created, you have not set up a Metric Filter yet. You need to do that next so you can match against a specific pattern within the logs sent from your EC2 instance to CloudWatch.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Select the checkbox for the SSHfail Log Group, then click Actions &amp;gt; Create Metric Filter:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Use the following filter pattern to match invalid SSH user attempts in the secure log:&lt;/p&gt;

&lt;p&gt;[Mon, day, timestamp, ip, id, status = Invalid, ...]&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Enter the following:&lt;/em&gt; &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Create filter name:
Filter Name: InvalidSSHUsers
Metric details:
Metric name: Enter ssh-fails
Metric namespace: Enter ssh-fails
Metric value: Enter 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;At the bottom of the page, click Next, and then click Create metric filter:&lt;/em&gt;&lt;/p&gt;
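The same metric filter can be created from the CLI for repeatability. A sketch (the log group name, pattern, and namespace mirror the console values above; the command is echoed rather than executed so it can be reviewed first):

```shell
#!/bin/sh
# Values mirror the console walkthrough above; adjust to your setup
LOG_GROUP="SSHunsuccessfulattempt"

# Echoed rather than executed: remove the leading 'echo' to create for real
echo aws logs put-metric-filter \
  --log-group-name "$LOG_GROUP" \
  --filter-name InvalidSSHUsers \
  --filter-pattern "[Mon, day, timestamp, ip, id, status = Invalid, ...]" \
  --metric-transformations metricName=ssh-fails,metricNamespace=ssh-fails,metricValue=2
```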

&lt;p&gt;&lt;em&gt;Attempt to SSH into your running EC2 instance again as the invalid user "Daniel", and then attempt several failures in a row.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Once you see the failed notification on the EC2 Instance Connect page, you can refresh the browser page several times to rack up several failed logins.&lt;/p&gt;

&lt;p&gt;Aim for five or six attempts. Anything over two will raise an alarm.&lt;/p&gt;

&lt;p&gt;After a minute or two, your alarm should be raised. Once raised, if not violated again, it will settle and reset on its own. That is, the alarm should transition to an OK state within a few minutes of no violations:&lt;/p&gt;

&lt;p&gt;In conclusion, you've successfully orchestrated a monitoring and notification framework for your Amazon EC2 instance using AWS CloudWatch and Simple Notification Service (SNS). By configuring and launching an EC2 instance with the CloudWatch log agent, you've ensured the seamless collection and transmission of logs and metrics. The setup of a CloudWatch Log Group and Log Stream guarantees the efficient management and analysis of collected logs. Furthermore, the creation of an SNS topic, along with a subscription, positions you to receive prompt notifications via push notifications whenever CloudWatch alarms are triggered.&lt;/p&gt;

&lt;p&gt;This integrated solution empowers you to proactively monitor the health and performance of your EC2 instance, centralize log management, and stay informed about critical events. Through these steps, you've taken significant strides in fortifying your AWS environment's reliability, security, and operational efficiency. As you continue to refine and adapt this framework, you'll be better equipped to ensure the uninterrupted operation of your applications and services.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to associate AWS Elastic IP Address with AWS EC2 Linux Server.</title>
      <dc:creator>Daniel Muthoni </dc:creator>
      <pubDate>Tue, 24 Jan 2023 17:19:57 +0000</pubDate>
      <link>https://forem.com/danmuso/how-to-associate-aws-elastic-ip-address-with-aws-ec2-linux-server-17mm</link>
      <guid>https://forem.com/danmuso/how-to-associate-aws-elastic-ip-address-with-aws-ec2-linux-server-17mm</guid>
      <description>&lt;p&gt;This video provides a solution to this problem.&lt;/p&gt;

&lt;p&gt;The auto-assigned public IP address associated with my Amazon Elastic Compute Cloud (Amazon EC2) instance changes every time I stop and start the instance. How can I assign a static public IP address to my Windows or Linux EC2 instance that doesn't change when I stop/start the instance?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=jnjZu8THaU0" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
