<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: DevOps4Me Global</title>
    <description>The latest articles on Forem by DevOps4Me Global (@devops4mecode).</description>
    <link>https://forem.com/devops4mecode</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F831372%2F61560f33-0a2a-43cd-b0c5-350a391c5c7e.jpeg</url>
      <title>Forem: DevOps4Me Global</title>
      <link>https://forem.com/devops4mecode</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/devops4mecode"/>
    <language>en</language>
    <item>
      <title>Failing the DevOps Audit? Here's What Every Secure Pipeline Needs in 2025</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Fri, 25 Apr 2025 14:22:20 +0000</pubDate>
      <link>https://forem.com/devops4mecode/failing-the-devops-audit-heres-what-every-secure-pipeline-needs-in-2025-a60</link>
      <guid>https://forem.com/devops4mecode/failing-the-devops-audit-heres-what-every-secure-pipeline-needs-in-2025-a60</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Last month, I reviewed a FinTech's CI pipeline. Everything looked smooth… until the audit flagged 37 license violations on Open-Source Software (OSS), 2 hardcoded secrets, and no SBOM at all. Sound familiar?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's 2025, and DevOps is everywhere - but so are audit failures. You're exposed if your team is shipping code fast but hasn't prepared for SBOM governance, CI/CD security, and AI-generated code traceability. As someone helping teams across MedTech, FinTech, and Industrial Software, here's what I've seen inside real pipelines - and why they're failing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 5 Most Common Compliance Pitfalls in DevOps
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. No SBOM Strategy&lt;/strong&gt;&lt;br&gt;
Most teams still don't generate a proper Software Bill of Materials (SBOM), or worse, they rely on automated SCA tools but never review the output.&lt;br&gt;
Fix: Adopt SBOM tooling such as Black Duck or Syft with CycloneDX output, and automate SBOM export in your build.&lt;/p&gt;
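
&lt;p&gt;As a minimal sketch (assuming a GitHub Actions pipeline; the step and artifact names are illustrative, not from a specific project), SBOM generation with Syft in CycloneDX format can be automated like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical GitHub Actions job: generate an SBOM on every build
sbom:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    # Syft scans the repo and exports a CycloneDX SBOM
    - run: syft dir:. -o cyclonedx-json=sbom.cdx.json
    # Keep the SBOM with the build artifacts for auditors
    - uses: actions/upload-artifact@v4
      with:
        name: sbom
        path: sbom.cdx.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;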

&lt;p&gt;&lt;strong&gt;2. Secrets in Repos or CI Variables&lt;/strong&gt;&lt;br&gt;
I've reviewed pipelines with unencrypted secrets exposed in env variables or YAML files - a ticking time bomb.&lt;br&gt;
Fix: Use Azure Key Vault, GitHub Secrets, or HashiCorp Vault with strict access policies.&lt;/p&gt;
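
&lt;p&gt;For example (a sketch assuming GitHub Actions; the secret and script names are illustrative), a pipeline should reference secrets from the platform's secret store rather than embedding them in YAML:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Bad:    API_TOKEN: "sk-live-abc123"   (hardcoded, visible in the repo)
# Better: inject the value from GitHub Secrets at run time
deploy:
  runs-on: ubuntu-latest
  steps:
    - run: ./deploy.sh
      env:
        API_TOKEN: ${{ secrets.API_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;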

&lt;p&gt;&lt;strong&gt;3. No Policy Gates or Security Gating&lt;/strong&gt;&lt;br&gt;
CI/CD runs green even when critical vulnerabilities exist. Why? No security gate policies.&lt;br&gt;
Fix: Integrate gating with Coverity, BlackDuck, Trivy, or custom rules to break builds.&lt;/p&gt;
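
&lt;p&gt;With Trivy, for instance, a gate can be as simple as a non-zero exit code on serious findings (a sketch; the image name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Fails the build (exit code 1) if CRITICAL or HIGH vulnerabilities are found
trivy image --exit-code 1 --severity CRITICAL,HIGH my-registry/my-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;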

&lt;p&gt;&lt;strong&gt;4. Lack of AI Code Tracking&lt;/strong&gt;&lt;br&gt;
With the rise of AI code tools, teams can't distinguish AI-generated vs. developer-written code. This creates audit headaches under the EU CRA.&lt;br&gt;
Fix: Implement tagging policies or track source commits with specific AI markers + governance checklists.&lt;/p&gt;
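
&lt;p&gt;One lightweight way to do this (a hypothetical convention, not an established standard) is a commit-message trailer that CI can later search for when assembling audit evidence:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Tag AI-assisted commits with trailers (Git 2.32+)...
git commit -m "Add retry logic to payment client" \
  --trailer "AI-Assisted: true" --trailer "AI-Tool: Copilot"

# ...then list them when building the audit trail
git log --grep="AI-Assisted: true" --oneline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;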

&lt;p&gt;&lt;strong&gt;5. No Mapped Compliance Framework&lt;/strong&gt;&lt;br&gt;
Teams follow "best practices" but can't prove alignment with IEC 62443, CRA, or FDA pre-submission standards.&lt;br&gt;
Fix: Build a mapping checklist showing how each security control is met in your pipeline tooling.&lt;/p&gt;

&lt;h2&gt;What a Secure, Audit-Ready Pipeline Looks Like&lt;/h2&gt;

&lt;p&gt;Below is a high-level view of a compliant CI/CD pipeline. From SBOM generation to secrets management and security gates, each layer aligns with audit requirements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpapfqlx5sqqsk67lwt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpapfqlx5sqqsk67lwt5.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want to Fix Your Pipeline Before the Auditor Does?&lt;/strong&gt;&lt;br&gt;
I've helped DevOps and Cybersecurity teams quickly design audit-ready pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw89zv2qxe0k2s3yfmcdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw89zv2qxe0k2s3yfmcdv.png" alt="Image description" width="800" height="1131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can:&lt;br&gt;
&lt;a href="https://tinyurl.com/4zdf9pre" rel="noopener noreferrer"&gt;Download my Free CI/CD YAML&lt;/a&gt;&lt;br&gt;
&lt;a href="https://tinyurl.com/4pdn42p7" rel="noopener noreferrer"&gt;Audit Checklist&lt;/a&gt;&lt;br&gt;
&lt;a href="https://calendly.com/najibradzuan/nr-30-minute-meeting" rel="noopener noreferrer"&gt;Book a 30-minute free consult (limited slots)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📩 &lt;em&gt;Get the eBook DevOps Unlocked&lt;/em&gt; → &lt;a href="https://najibradzuan.gumroad.com/l/devopsunlocked" rel="noopener noreferrer"&gt;https://najibradzuan.gumroad.com/l/devopsunlocked&lt;/a&gt;&lt;br&gt;
🔗 Follow more DevSecOps and Cybersecurity posts at &lt;a href="https://dev.to/devops4mecode"&gt;Dev.to&lt;/a&gt; or &lt;a href="https://medium.com/@devops4me" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Najib Radzuan is a DevSecOps Security Architect working with teams globally on Regulatory Compliance, IEC 62443, EU CRA, and DevSecOps transformation.&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>regulatorycompliance</category>
      <category>devsecops</category>
      <category>eucra</category>
    </item>
    <item>
      <title>Leadership in DevOps: Applying the PATH-C Framework for High-Performing Teams</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Mon, 10 Feb 2025 06:01:47 +0000</pubDate>
      <link>https://forem.com/devops4mecode/leadership-in-devops-applying-the-path-c-framework-for-high-performing-teams-342o</link>
      <guid>https://forem.com/devops4mecode/leadership-in-devops-applying-the-path-c-framework-for-high-performing-teams-342o</guid>
      <description>&lt;p&gt;DevOps isn’t just about automation and pipelines—it’s about people. Effective leadership is the cornerstone of high-performing DevOps teams. But how can leaders drive focus, accountability, and collaboration without micromanaging or slowing down innovation?&lt;/p&gt;

&lt;p&gt;Enter PATH-C: A Leadership Framework for DevOps. 💡&lt;/p&gt;

&lt;p&gt;What is PATH-C?&lt;/p&gt;

&lt;p&gt;PATH-C is a structured approach to improving team productivity, decision-making, and collaboration. It consists of five key principles:&lt;/p&gt;

&lt;p&gt;✅ Prioritization – Focus on what matters, avoiding distractions.&lt;br&gt;
✅ Accountability – Ensure every team member owns their work.&lt;br&gt;
✅ Transparency – Keep processes, decisions, and progress visible.&lt;br&gt;
✅ Habits – Build sustainable routines that support efficiency.&lt;br&gt;
✅ Collaboration – Foster teamwork and knowledge sharing.&lt;/p&gt;

&lt;p&gt;🔥 Applying PATH-C in Leadership&lt;br&gt;
🚀 1. Prioritization:&lt;br&gt;
Leaders must help teams cut through the noise and focus on critical objectives. Use quarterly OKRs, sprint-level goals, and backlog refinement to keep priorities clear.&lt;/p&gt;

&lt;p&gt;🚀 2. Accountability:&lt;br&gt;
Ownership drives performance. Assign clear responsibilities, track commitments, and use sprint retrospectives to reinforce accountability.&lt;/p&gt;

&lt;p&gt;🚀 3. Transparency:&lt;br&gt;
Open dashboards, shared status updates, and clear documentation ensure that work isn’t hidden in silos. DevOps thrives when teams see the big picture.&lt;/p&gt;

&lt;p&gt;🚀 4. Habits:&lt;br&gt;
Good leaders establish habits that sustain productivity. Daily stand-ups, regular code reviews, and structured learning sessions keep teams aligned and growing.&lt;/p&gt;

&lt;p&gt;🚀 5. Collaboration:&lt;br&gt;
Cross-functional teamwork is essential. Dev, Ops, and Security must work together seamlessly. Promote pair programming, DevSecOps principles, and knowledge sharing across teams.&lt;/p&gt;

&lt;p&gt;🎯 Real-World Impact&lt;br&gt;
Companies that implement PATH-C experience:&lt;br&gt;
✅ Faster deployments and reduced bottlenecks 🚀&lt;br&gt;
✅ Higher team engagement and job satisfaction 😊&lt;br&gt;
✅ Better cross-team collaboration and reduced silos 🤝&lt;br&gt;
✅ Stronger security and reliability with built-in accountability 🔐&lt;/p&gt;

&lt;p&gt;📖 Want to Dive Deeper?&lt;br&gt;
Learn how PATH-C can transform your leadership style and DevOps team performance. Get "DevOps Unlocked: Mastering Productivity with the PATH-C Framework" today!&lt;/p&gt;

&lt;p&gt;🔗 Order now → &lt;a href="https://a.co/d/4fWAUoq" rel="noopener noreferrer"&gt;https://a.co/d/4fWAUoq&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s unlock productivity, leadership, and innovation in DevOps together! 💪🔥&lt;/p&gt;

&lt;h1&gt;
  
  
  #DevOps #Leadership #Productivity #PATHC #Agile #Engineering #Automation #Collaboration #ContinuousImprovement #DevSecOps #CI/CD
&lt;/h1&gt;

</description>
      <category>devops</category>
      <category>productivity</category>
      <category>leadership</category>
      <category>automation</category>
    </item>
    <item>
      <title>DevOps Unlocked: The Ultimate Guide to Mastering Productivity with the PATH-C Framework</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Wed, 05 Feb 2025 14:45:10 +0000</pubDate>
      <link>https://forem.com/devops4mecode/devops-unlocked-the-ultimate-guide-to-mastering-productivity-with-the-path-c-framework-n25</link>
      <guid>https://forem.com/devops4mecode/devops-unlocked-the-ultimate-guide-to-mastering-productivity-with-the-path-c-framework-n25</guid>
      <description>&lt;p&gt;🚀 Are you a DevOps professional struggling to keep up with the ever-increasing demands of automation, security, and efficiency? Do you feel overwhelmed by the complexities of modern software development and delivery?&lt;/p&gt;

&lt;p&gt;You’re not alone.&lt;/p&gt;

&lt;p&gt;In today’s fast-paced DevOps landscape, productivity isn’t just about getting things done — it’s about getting the right things done efficiently, securely, and at scale. I wrote “DevOps Unlocked: Mastering Productivity with the PATH-C Framework” — to help DevOps engineers, managers, and teams optimize workflows, eliminate bottlenecks, and build sustainable productivity habits that drive long-term success.&lt;/p&gt;

&lt;p&gt;💡 Get your copy now on Amazon! 👉 DevOps Unlocked: Mastering Productivity with the PATH-C Framework&lt;/p&gt;

&lt;p&gt;Why This Book?&lt;br&gt;
As a DevSecOps Security Architect, Agile Practitioner, and DevOps Leader, I’ve spent over a decade navigating the challenges of building scalable, efficient, and secure DevOps environments. Through years of experience, research, and collaboration with teams worldwide, I developed the PATH-C Framework — a structured yet flexible approach to mastering DevOps productivity.&lt;/p&gt;

&lt;p&gt;✅ PATH-C = Prioritization + Automation + Time-Blocking + Hyper-Focus + Continuous Learning&lt;/p&gt;

&lt;p&gt;Each pillar is designed to tackle a specific daily challenge that DevOps professionals face. Whether you’re a beginner or a seasoned DevOps engineer, this book provides actionable strategies, real-world examples, and step-by-step guidance to help you unlock your full potential.&lt;/p&gt;

&lt;p&gt;What You’ll Learn in This Book&lt;br&gt;
📌 Prioritization: Learn how to separate urgent from important tasks, reduce context switching, and manage DevOps backlogs effectively using the Eisenhower Matrix and agile prioritization frameworks.&lt;/p&gt;

&lt;p&gt;📌 Automation: Discover how to automate repetitive tasks, optimize CI/CD pipelines, and leverage Infrastructure as Code (IaC) to eliminate manual errors and scale effortlessly.&lt;/p&gt;

&lt;p&gt;📌 Time-Blocking: Master structured work sessions to avoid burnout, improve focus, and align daily work with long-term objectives — using proven scheduling techniques.&lt;/p&gt;

&lt;p&gt;📌 Hyper-Focus: Explore deep work strategies, including the Pomodoro Technique and digital tools that enhance concentration while minimizing distractions.&lt;/p&gt;

&lt;p&gt;📌 Continuous Learning: Stay ahead in the fast-evolving DevOps field by adopting a learning mindset, leveraging online courses, certifications, and real-world problem-solving.&lt;/p&gt;

&lt;p&gt;📌 Leadership &amp;amp; Collaboration: Learn how to implement PATH-C at a team level, foster accountability, and build a high-performance DevOps culture.&lt;/p&gt;

&lt;p&gt;📌 Gamification &amp;amp; Motivation in DevOps: Transform DevOps tasks into engaging challenges using leaderboards, reward systems, and skill-based learning paths.&lt;/p&gt;

&lt;p&gt;📌 Future of DevOps: Get ahead of industry trends, including AI-driven automation, GitOps, platform engineering, and cloud-native security.&lt;/p&gt;

&lt;p&gt;Who Should Read This?&lt;br&gt;
🔹 DevOps Engineers &amp;amp; SREs — Seeking productivity hacks to optimize workflows.&lt;br&gt;
🔹 Software Developers — Want to integrate DevOps best practices into their development process.&lt;br&gt;
🔹 IT Managers &amp;amp; Leaders — Looking for scalable frameworks to improve team performance.&lt;br&gt;
🔹 DevSecOps &amp;amp; Cloud Architects — Interested in securing and automating software delivery.&lt;br&gt;
🔹 Tech Enthusiasts &amp;amp; Career Changers — Exploring DevOps as a career path and looking for practical strategies to fast-track their success.&lt;/p&gt;

&lt;p&gt;Why DevOps Professionals Love This Book&lt;br&gt;
📖 “This book provides a structured approach to DevOps productivity that is both practical and insightful. The PATH-C Framework is a game-changer for DevOps teams looking to optimize their workflows and reduce inefficiencies.” — ⭐⭐⭐⭐⭐&lt;/p&gt;

&lt;p&gt;📖 “Finally, a book that doesn’t just talk about DevOps tools, but teaches how to manage time, focus on critical tasks, and build a sustainable career in DevOps.” — ⭐⭐⭐⭐⭐&lt;/p&gt;

&lt;p&gt;📖 “As someone managing a DevOps team, this book helped me redefine how we structure our work. PATH-C is now a core part of our team’s daily execution strategy.” — ⭐⭐⭐⭐⭐&lt;/p&gt;

&lt;p&gt;🔹 Ready to Unlock DevOps Productivity?&lt;br&gt;
📌 Whether you’re an experienced DevOps engineer or just starting, this book will give you the strategies, tools, and mindset needed to thrive in DevOps.&lt;/p&gt;

&lt;p&gt;📌 Stop wasting time on inefficiencies, and start working smarter with PATH-C!&lt;/p&gt;

&lt;p&gt;📖 📢 Get your copy today!&lt;br&gt;
👉 Paperback &amp;amp; eBook available now on Amazon: DevOps Unlocked: Mastering Productivity with the PATH-C Framework | &lt;a href="https://a.co/d/jk2dfk1" rel="noopener noreferrer"&gt;https://a.co/d/jk2dfk1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Join the DevOps Productivity Revolution!&lt;br&gt;
💬 Have questions or insights? Drop a comment below!&lt;br&gt;
🔁 Share this post with your network and help fellow DevOps professionals level up!&lt;br&gt;
🚀 Follow me for more DevOps insights, tips, and career advice!&lt;/p&gt;

&lt;h1&gt;
  
  
  #DevOps #Productivity #Automation #DevSecOps #CloudComputing #Agile #SRE #Kubernetes #CI/CD #DevOpsLeadership
&lt;/h1&gt;

</description>
      <category>productivity</category>
      <category>devops</category>
      <category>devsecops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Ethical Considerations for Generative AI in DevOps: Building Trust and Ensuring Fairness</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Sat, 17 Feb 2024 15:26:58 +0000</pubDate>
      <link>https://forem.com/devops4mecode/ethical-considerations-for-generative-ai-in-devops-building-trust-and-ensuring-fairness-1k5l</link>
      <guid>https://forem.com/devops4mecode/ethical-considerations-for-generative-ai-in-devops-building-trust-and-ensuring-fairness-1k5l</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hey, fellow DevOps enthusiasts! On February 15, 2024, I was selected to speak at the Malaysia Board of Technologists (MBOT) ThursWeb online talk event, where I delved deep into a topic that's close to my heart and crucial for our field: the ethical integration of generative AI in DevOps, under the theme "Revolutionizing DevOps with a Focus on Ethical Challenges and Ensuring Trust and Fairness." As we embark on this journey, we aim to understand and integrate trust and fairness in industry-changing technology. Imagine a world where AI not only automates but also enhances every aspect of our DevOps practices, from deployment to infrastructure management, with a keen eye on ethics. Sounds dreamy, right? &lt;/p&gt;

&lt;p&gt;*&lt;em&gt;The 4 AI Waves by *&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwtoe28a3fdim5tbp4f6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwtoe28a3fdim5tbp4f6.png" alt="Image description" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In his book, the author discusses the four waves of AI. These waves illustrate the evolution of AI applications, reflecting an increase in complexity and integration into daily operations, which is directly relevant to ethical considerations as AI becomes more autonomous and ingrained in DevOps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg72nsuva2if8wqpialwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg72nsuva2if8wqpialwp.png" alt="Image description" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Ethical Compass&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwclxlvixiit79jnrjes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwclxlvixiit79jnrjes.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To navigate these waters, we need to anchor ourselves to a robust ethical framework. This means actively working to eliminate bias in training data and model outputs, ensuring transparency in AI decision-making, and upholding privacy and security standards that protect sensitive data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mitigating Biases in DevOps&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgic80b53jh7xyz2oaog1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgic80b53jh7xyz2oaog1.png" alt="Image description" width="674" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The path towards responsible AI in DevOps is paved with transparency and fairness. It's about making AI's decision-making process as understandable as the code we write, ensuring that every stakeholder can trust the AI systems we deploy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyglq4jysm0zn0139pjxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyglq4jysm0zn0139pjxz.png" alt="Image description" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridging the Gap with Responsible AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvpi57kjzl9tq5h0t3n1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvpi57kjzl9tq5h0t3n1.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building an ethical AI culture in DevOps isn't just about protocols and guidelines; it's about fostering a community that values awareness, supports ethical practices, and embeds these principles into the very fabric of our workflows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd1f8lqye83uuq4qdac4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd1f8lqye83uuq4qdac4.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5i70nxeja39kswky3bt2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5i70nxeja39kswky3bt2.png" alt="Image description" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security and Privacy at Stake&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In DevOps, a data breach can originate from a leak anywhere in the continuous delivery pipeline, so we must prioritize protecting user data integrity at every stage of the CI/CD process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquvc0hha9309ldph2kkz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fquvc0hha9309ldph2kkz.png" alt="Image description" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating Clear Rules for AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Just as there are rules for playing games fairly, we set clear rules for AI that respect both ethical values and technical needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiv46pcpofm3snycngv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiv46pcpofm3snycngv8.png" alt="Image description" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top Ethical Bodies for AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As we look ahead, the goal is clear: to advance technology while upholding our shared ethical values. It's about creating a collaborative effort towards responsible AI that benefits society at large, ensuring that the advancements we make in DevOps through AI are transparent, fair, and beneficial for all.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkr97gkkqk6kajd2jkcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkr97gkkqk6kajd2jkcs.png" alt="Image description" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dpyi6g9uyyph88anlni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dpyi6g9uyyph88anlni.png" alt="Image description" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>genai</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Deploy AWS Resources in Different AWS Account and Multi-Region with Terraform Multi-Provider and Alias</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Wed, 05 Apr 2023 02:59:44 +0000</pubDate>
      <link>https://forem.com/devops4mecode/deploy-aws-resources-in-different-aws-account-and-multi-region-with-terraform-multi-provider-and-alias-ie9</link>
      <guid>https://forem.com/devops4mecode/deploy-aws-resources-in-different-aws-account-and-multi-region-with-terraform-multi-provider-and-alias-ie9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As a company with a multi-account AWS setup, we recently faced the challenge of applying our Terraform scripts across two AWS accounts, with some resources being created in one account and others in another. Fortunately, we discovered that Terraform provides a simple solution to this problem through the use of provider aliases.&lt;/p&gt;

&lt;p&gt;By creating aliases, we were able to have multiple AWS providers within the same terraform module. This functionality can be used in a variety of situations, such as creating resources in different regions of the same AWS account or in different regions of different AWS accounts.&lt;/p&gt;

&lt;p&gt;In this article, we will explain how to use provider aliases to create resources in single and multiple AWS accounts. By doing so, we hope to help others who may be facing similar challenges in their own multi-account AWS setups.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Terraform Provider Alias
&lt;/h2&gt;

&lt;p&gt;In some cases, it may be necessary to define multiple configurations for the same provider and choose which one to use on a per-resource or per-module basis. This is often required when working with cloud platforms that have multiple regions, but can also be used in other situations, such as targeting multiple Docker or Consul hosts.&lt;/p&gt;

&lt;p&gt;To create multiple configurations for a provider, simply include multiple provider blocks with the same provider name. For each additional configuration, use the "alias" meta-argument to provide a unique name segment. By doing so, you can easily select the appropriate configuration for each resource or module, making it easier to manage complex infrastructure setups. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The default provider configuration; resources that begin with `aws_` will use
# it as the default, and it can be referenced as `aws`.
provider "aws" {
  region = "ap-southeast-1"
}

# Additional provider configuration for the US East region; resources can
# reference this as `aws.useast1`.
provider "aws" {
  alias  = "useast1"
  region = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Terraform Provider Alias Use-Cases
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Multiple AWS Accounts&lt;/li&gt;
&lt;li&gt;Multiple Regions within the same AWS Account&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;For this article, I have two AWS accounts, and I configured both AWS account profiles as shown below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;do4m-main&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29d05s3gntlw7imcjjj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29d05s3gntlw7imcjjj1.png" alt="1st AWS MAIN Account Configure" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;do4m-dev&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnfdgee5od968y98j1ph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnfdgee5od968y98j1ph.png" alt="1st AWS DEV Account Configure" width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will use named profiles, as we are going to walk through the first use-case above.&lt;/p&gt;
&lt;h2&gt;
  
  
  Configuring Multiple AWS providers
&lt;/h2&gt;

&lt;p&gt;If you need to create resources in multiple AWS accounts using Terraform, you may run into an issue where you can't write two or more providers with the same name, such as two AWS providers. However, Terraform provides a solution to this problem by allowing you to use the "alias" argument.&lt;/p&gt;

&lt;p&gt;With the alias argument, you can set up multiple AWS providers and use them for creating resources in both AWS accounts. This approach enables you to differentiate between the providers and avoid conflicts that could arise from having two providers with the same name. By using this technique, you can effectively manage your infrastructure and ensure that your resources are deployed to the correct AWS accounts.&lt;/p&gt;

&lt;p&gt;I've set up my &lt;strong&gt;provider.tf&lt;/strong&gt; file as below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I used the &lt;strong&gt;do4m-main&lt;/strong&gt; profile and set its alias as &lt;strong&gt;awsmain&lt;/strong&gt;, for the main account I want to deploy to&lt;/li&gt;
&lt;li&gt;I used the &lt;strong&gt;do4m-dev&lt;/strong&gt; profile and set its alias as &lt;strong&gt;awsdev&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
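
&lt;p&gt;Since the screenshot below isn't copy-paste friendly, here is a minimal sketch of what such a &lt;strong&gt;provider.tf&lt;/strong&gt; could look like; the regions are assumptions inferred from the AMIs and screenshots later in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Assumes the AWS CLI profiles do4m-main and do4m-dev are configured locally
provider "aws" {
  alias   = "awsmain"
  profile = "do4m-main"
  region  = "ap-southeast-1" # Singapore (assumed)
}

provider "aws" {
  alias   = "awsdev"
  profile = "do4m-dev"
  region  = "us-east-1" # US East, N. Virginia (assumed)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;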

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xlwvcngqg8rb08sep30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xlwvcngqg8rb08sep30.png" alt="Provider file" width="548" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Defining two providers for two different AWS accounts is a great start, but Terraform still needs to know which resource to create in which AWS account. To achieve this, we need to know how to refer to these different providers when defining our resources in Terraform templates.&lt;/p&gt;

&lt;p&gt;To differentiate between providers, we need to use the "alias" argument that we defined earlier. By specifying the alias name in our resource block, we can instruct Terraform to create that resource using the provider with the corresponding alias.&lt;/p&gt;

&lt;p&gt;For example, if we have two "aws" providers with aliases "awsmain" and "awsdev" respectively, we can create EC2 instances in the MAIN and DEV accounts using the following resource blocks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "ec2_main" {
  provider        = aws.awsmain
  ami             = var.ami_apsoutheast
  instance_type   = "t2.micro"
  key_name        = "linux-sea-key"
  security_groups = ["ansible-sg"]
  tags = {
    Name    = "account-main",
    Project = "multiprovider",
    Region  = "ap-southeast-1"
  }
}

resource "aws_instance" "ec2_dev" {
  provider        = aws.awsdev
  ami             = var.ami_useast
  instance_type   = "t2.micro"
  key_name        = "linux-useast-key"
  security_groups = ["ansible-sg"]
  tags = {
    Name    = "account-dev",
    Project = "multiprovider",
    Region  = "us-east-1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I used a Linux AMI for this example. Since we referenced the AMI IDs through variables, we create another file called &lt;strong&gt;variables.tf&lt;/strong&gt; and fill in the AMI IDs for the Singapore and US East regions respectively:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "ami_apsoutheast" {
  type    = string
  default = "ami-0af2f764c580cc1f9"
}

variable "ami_useast" {
  type    = string
  default = "ami-00c39f71452c08778"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Let's Get Started: Terraform Apply
&lt;/h2&gt;

&lt;p&gt;Now that we have set up all the providers we want, we can run the Terraform commands below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9yyo9qm4h0o9mij87md.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9yyo9qm4h0o9mij87md.png" alt="terraform init output" width="800" height="466"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

➜  terraform-multi-providers terraform plan   

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.ec2_dev will be created
  + resource "aws_instance" "ec2_dev" {
      + ami                                  = "ami-0af2f764c580cc1f9"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = "linux-sea-key"
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = [
          + "ansible-sg",
        ]
      + source_dest_check                    = true
      + subnet_id                            = (known after apply)
      + tags                                 = {
          + "Name"    = "account-dev"
          + "Project" = "multiprovider"
          + "Region"  = "ap-southeast-1"
        }
      + tags_all                             = {
          + "Name"    = "account-dev"
          + "Project" = "multiprovider"
          + "Region"  = "ap-southeast-1"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id                 = (known after apply)
              + capacity_reservation_resource_group_arn = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + maintenance_options {
          + auto_recovery = (known after apply)
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
          + instance_metadata_tags      = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_card_index    = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + private_dns_name_options {
          + enable_resource_name_dns_a_record    = (known after apply)
          + enable_resource_name_dns_aaaa_record = (known after apply)
          + hostname_type                        = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

  # aws_instance.ec2_main will be created
  + resource "aws_instance" "ec2_main" {
      + ami                                  = "ami-0af2f764c580cc1f9"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = "linux-sea-key"
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = [
          + "ansible-sg",
        ]
      + source_dest_check                    = true
      + subnet_id                            = (known after apply)
      + tags                                 = {
          + "Name"    = "account-main"
          + "Project" = "multiprovider"
          + "Region"  = "ap-southeast-1"
        }
      + tags_all                             = {
          + "Name"    = "account-main"
          + "Project" = "multiprovider"
          + "Region"  = "ap-southeast-1"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id                 = (known after apply)
              + capacity_reservation_resource_group_arn = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + maintenance_options {
          + auto_recovery = (known after apply)
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
          + instance_metadata_tags      = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_card_index    = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + private_dns_name_options {
          + enable_resource_name_dns_a_record    = (known after apply)
          + enable_resource_name_dns_aaaa_record = (known after apply)
          + hostname_type                        = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the outcome of terraform apply, we have an EC2 instance deployed in each of the AWS accounts we set up earlier:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS-MAIN(Singapore):&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7alrj0divy1n9928avq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7alrj0divy1n9928avq.png" alt="MAIN account EC2 Instance" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS-DEV(US East-Virginia)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fig1ccyvpv4kb5r00erqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fig1ccyvpv4kb5r00erqg.png" alt="DEV account EC2 Instance" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Deploy EC2 Instances in Multiple Regions with the Same AWS Account
&lt;/h2&gt;

&lt;p&gt;Now we proceed with our 2nd use-case: we want to use a Terraform provider alias to deploy 2 EC2 instances into different AWS regions within my MAIN AWS account. For this use-case, I have the following scenario:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I want to deploy the 1st EC2 to Singapore region (ap-southeast-1)&lt;/li&gt;
&lt;li&gt;I want to deploy the 2nd EC2 to Sydney region (ap-southeast-2)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create a new folder called multi-region in the current code workspace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir multi-region
cd multi-region
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we create our &lt;strong&gt;provider.tf&lt;/strong&gt; with alias like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  alias  = "se1"
  region = "ap-southeast-1"
}

provider "aws" {
  alias  = "se2"
  region = "ap-southeast-2"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have 2 Terraform providers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The 1st provider alias is &lt;strong&gt;se1&lt;/strong&gt;, with the region set to &lt;strong&gt;Singapore (ap-southeast-1)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The 2nd provider alias is &lt;strong&gt;se2&lt;/strong&gt;, with the region set to &lt;strong&gt;Sydney (ap-southeast-2)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can re-use/clone the &lt;strong&gt;main.tf&lt;/strong&gt; file we created in the 1st use-case. Since we already know the AMI IDs we want to use, we can modify it like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "ec2_main" {
  provider        = aws.se1
  ami             = "ami-0af2f764c580cc1f9"
  instance_type   = "t2.micro"
  key_name        = "linux-sea-key"
  security_groups = ["ansible-sg"]
  tags = {
    Name    = "account-se1",
    Project = "multiprovider",
    Region  = "ap-southeast-1"
  }
}

resource "aws_instance" "ec2_dev" {
  provider        = aws.se2
  ami             = "ami-0d0175e9dbb94e0d2"
  instance_type   = "t2.micro"
  key_name        = "linux-sea-key"
  security_groups = ["ansible-sg"]
  tags = {
    Name    = "account-se2",
    Project = "multiprovider",
    Region  = "ap-southeast-2"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, we can run the Terraform commands below to apply the setup above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After all the Terraform commands have executed, we get an EC2 instance in each of the regions below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Singapore Region&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv5l9aszt3c5ciajq31v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhv5l9aszt3c5ciajq31v.png" alt="AWS-SE1" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sydney Region&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih1rzxe9lziigtcdye4a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih1rzxe9lziigtcdye4a.png" alt="AWS-SE2" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As we strive to scale our application and infrastructure, it's important to adopt an effective approach when working with Terraform. Relying on a single state file can lead to potential bottlenecks and even a point of failure, especially when managing a large infrastructure team. However, we can overcome these challenges by using multiple AWS providers in different combinations.&lt;/p&gt;

&lt;p&gt;This approach enables us to manage numerous Terraform state files and carry out multi-region deployment. The same principles can also apply when using different types of providers in the same Terraform module, such as AWS and GCP. By embracing this strategy, we can streamline our infrastructure management processes and optimize our workflow. This ensures that our application and infrastructure are fully scalable, flexible, and resilient, regardless of the size of our infrastructure team or the types of providers we use.&lt;/p&gt;
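
&lt;p&gt;As an illustration of mixing provider types, a minimal sketch combining AWS and GCP in the same module could look like this; the GCP project ID and regions here are hypothetical placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    google = {
      source = "hashicorp/google"
    }
  }
}

provider "aws" {
  region = "ap-southeast-1"
}

provider "google" {
  project = "my-gcp-project" # hypothetical project ID
  region  = "asia-southeast1"
}

# Resources in the same module can now target either cloud
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;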

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>automation</category>
    </item>
    <item>
      <title>Create an API with a private integration to an AWS ECS service with Terraform (IaC)</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Mon, 13 Feb 2023 15:30:28 +0000</pubDate>
      <link>https://forem.com/devops4mecode/create-an-api-with-a-private-integration-to-an-aws-ecs-service-with-terraform-iac-3aj4</link>
      <guid>https://forem.com/devops4mecode/create-an-api-with-a-private-integration-to-an-aws-ecs-service-with-terraform-iac-3aj4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;You may connect Amazon API Gateway routes to VPC-restricted resources using VPC links. A VPC link is an abstraction layer on top of other networking resources and functions like any other integration endpoint for an application programming interface (API). This makes it easier to set up secure connections.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use-case
&lt;/h2&gt;

&lt;p&gt;We want to set up a private AWS API Gateway in front of a backend service that uses an AWS serverless service, Fargate, deployed in an AWS ECS cluster. The architecture below is what we will create in this blog post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9talmb0jt1q4m3x0b68g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9talmb0jt1q4m3x0b68g.png" alt="Use-Case Architecture" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;br&gt;
We need to create our main Terraform file (&lt;strong&gt;main.tf&lt;/strong&gt;); proceed to execute the commands below in your preferred directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir automation &amp;amp;&amp;amp; cd automation
vi main.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the top, we query the available AWS Availability Zones (AZs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_availability_zones" "available_zones" {
  state = "available"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to create our versions.tf file as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }
}
provider "aws" {
  region = "ap-southeast-1"
  default_tags {
    tags = {
      Name = "do4m-demo"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, we need an &lt;strong&gt;output.tf&lt;/strong&gt; file for the values we want Terraform to return.&lt;br&gt;
Lastly, for our prerequisites step, we create a &lt;strong&gt;variables.tf&lt;/strong&gt; file holding the Fargate application count we need.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "app_count" {
  type    = number
  default = 1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
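
&lt;p&gt;The &lt;strong&gt;output.tf&lt;/strong&gt; mentioned above is not shown in full; a minimal sketch, assuming we want the load balancer DNS name and the API endpoint created later in this post, could be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# DNS name of the ALB fronting the ECS service
output "load_balancer_dns" {
  value = aws_lb.default.dns_name
}

# Public endpoint of the HTTP API Gateway
output "api_endpoint" {
  value = aws_apigatewayv2_api.api.api_endpoint
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;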



&lt;p&gt;&lt;strong&gt;VPC Setup&lt;/strong&gt;&lt;br&gt;
First, we use Terraform to create an Amazon VPC, including VPC subnets (private and public), an Internet Gateway, NAT Gateways for the private subnets, and route tables for subnet association.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#VPC Setting
resource "aws_vpc" "default" {
  cidr_block = "10.32.0.0/16"
}

resource "aws_subnet" "public" {
  count                   = 2
  cidr_block              = cidrsubnet(aws_vpc.default.cidr_block, 8, 2 + count.index)
  availability_zone       = data.aws_availability_zones.available_zones.names[count.index]
  vpc_id                  = aws_vpc.default.id
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  cidr_block        = cidrsubnet(aws_vpc.default.cidr_block, 8, count.index)
  availability_zone = data.aws_availability_zones.available_zones.names[count.index]
  vpc_id            = aws_vpc.default.id
}

resource "aws_internet_gateway" "gateway" {
  vpc_id = aws_vpc.default.id
}

resource "aws_route" "internet_access" {
  route_table_id         = aws_vpc.default.main_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.gateway.id
}

resource "aws_eip" "gateway" {
  count      = 2
  vpc        = true
  depends_on = [aws_internet_gateway.gateway]
}

resource "aws_nat_gateway" "gateway" {
  count         = 2
  subnet_id     = element(aws_subnet.public.*.id, count.index)
  allocation_id = element(aws_eip.gateway.*.id, count.index)
}

resource "aws_route_table" "private" {
  count  = 2
  vpc_id = aws_vpc.default.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = element(aws_nat_gateway.gateway.*.id, count.index)
  }
}

resource "aws_route_table_association" "private" {
  count          = 2
  subnet_id      = element(aws_subnet.private.*.id, count.index)
  route_table_id = element(aws_route_table.private.*.id, count.index)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Security Group&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "lb" {
  name   = "do4m-alb-sg"
  vpc_id = aws_vpc.default.id

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;AWS ALB Setting&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lb" "default" {
  name            = "do4m-lb"
  subnets         = aws_subnet.public.*.id
  security_groups = [aws_security_group.lb.id]
}

resource "aws_lb_target_group" "hello_world" {
  name        = "do4m-target-group"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = aws_vpc.default.id
  target_type = "ip"
}

resource "aws_lb_listener" "hello_world" {
  load_balancer_arn = aws_lb.default.id
  port              = "80"
  protocol          = "HTTP"

  default_action {
    target_group_arn = aws_lb_target_group.hello_world.id
    type             = "forward"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;AWS ECS and Fargate Setting&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_task_definition" "hello_world" {
  family                   = "hello-world-app"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 1024
  memory                   = 2048

  container_definitions = &amp;lt;&amp;lt;DEFINITION
[
  {
    "image": "registry.gitlab.com/architect-io/artifacts/nodejs-hello-world:latest",
    "cpu": 1024,
    "memory": 2048,
    "name": "hello-world-app",
    "networkMode": "awsvpc",
    "portMappings": [
      {
        "containerPort": 3000,
        "hostPort": 3000
      }
    ]
  }
]
DEFINITION
}

resource "aws_security_group" "hello_world_task" {
  name   = "do4m-task-sg"
  vpc_id = aws_vpc.default.id

  ingress {
    protocol        = "tcp"
    from_port       = 3000
    to_port         = 3000
    security_groups = [aws_security_group.lb.id]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_ecs_cluster" "main" {
  name = "do4m-cluster"
}

resource "aws_ecs_service" "hello_world" {
  name            = "hello-world-service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.hello_world.arn
  desired_count   = var.app_count
  launch_type     = "FARGATE"

  network_configuration {
    security_groups = [aws_security_group.hello_world_task.id]
    subnets         = aws_subnet.private.*.id
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.hello_world.id
    container_name   = "hello-world-app"
    container_port   = 3000
  }

  depends_on = [aws_lb_listener.hello_world]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;AWS API Gateway and VPC PrivateLink Setting&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#1: API Gateway
resource "aws_apigatewayv2_api" "api" {
  name          = "do4m-api-gateway"
  protocol_type = "HTTP"
}
#2: VPC Link
resource "aws_apigatewayv2_vpc_link" "vpc_link" {
  name               = "development-vpclink"
  security_group_ids = [aws_security_group.lb.id]
  subnet_ids         = aws_subnet.private.*.id
}
#3: API Integration
resource "aws_apigatewayv2_integration" "api_integration" {
  api_id             = aws_apigatewayv2_api.api.id
  integration_type   = "HTTP_PROXY"
  connection_id      = aws_apigatewayv2_vpc_link.vpc_link.id
  connection_type    = "VPC_LINK"
  description        = "VPC integration"
  integration_method = "ANY"
  integration_uri    = aws_lb_listener.hello_world.arn
  depends_on         = [aws_lb.default]
}
#4: APIGW Route
resource "aws_apigatewayv2_route" "default_route" {
  api_id    = aws_apigatewayv2_api.api.id
  route_key = "$default"
  target    = "integrations/${aws_apigatewayv2_integration.api_integration.id}"
}
#5: APIGW Stage
resource "aws_apigatewayv2_stage" "default_stage" {
  api_id      = aws_apigatewayv2_api.api.id
  name        = "$default"
  auto_deploy = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Execute&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we need to configure the AWS account before we can run Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub0ac2crf59gijenryoj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub0ac2crf59gijenryoj.png" alt="aws configure" width="800" height="101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we run the command below to initialize our Terraform modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl4ca2ksbryr58n1vk38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl4ca2ksbryr58n1vk38.png" alt="terraform init" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, we run the validation and formatting commands below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform validate &amp;amp;&amp;amp; terraform fmt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5d9zss53rd3y2bwxobi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5d9zss53rd3y2bwxobi.png" alt="terraform validate &amp;amp;&amp;amp; terraform fmt" width="800" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we run the command below to create the Terraform plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -no-color &amp;gt; tfplan.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3n88f4d23yiue8actxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3n88f4d23yiue8actxg.png" alt="terraform plan" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, once we have confirmed and validated our plan, we run terraform apply to create all the AWS resources/services we defined above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;VPC&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr0icfejnw88i8kxqzs4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxr0icfejnw88i8kxqzs4.png" alt="VPC" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Subnets&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszn4fpt39wjblxuukezn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszn4fpt39wjblxuukezn.png" alt="subnets" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Route Table&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5s9s6p223m6ca5b409db.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5s9s6p223m6ca5b409db.png" alt="Route Table" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Elastic Public IP&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5uanxsbyfznbush82hj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5uanxsbyfznbush82hj.png" alt="EIP" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Internet Gateway&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17m9lf4owhgeany3pslg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17m9lf4owhgeany3pslg.png" alt="IGW" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;NAT Gateway&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z3j3us6pzteflmnmtio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z3j3us6pzteflmnmtio.png" alt="NAT GW" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;ECS Cluster&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthqv4y0uh1kc0bdfm6ie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthqv4y0uh1kc0bdfm6ie.png" alt="ECS Cluster" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fargate Task Definition&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrz3ym9rhv8c47lbsgbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrz3ym9rhv8c47lbsgbj.png" alt="Fargate" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS API Gateway&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtdhruscdkt47dgodecx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtdhruscdkt47dgodecx.png" alt="API Gateway" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;VPC PrivateLink&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5of4ujrub902iay250f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5of4ujrub902iay250f.png" alt="VPC PrivateLink" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API Testing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once your API has been created, you should test it to ensure it is functioning properly; invoking the API from a web browser saves time and effort. To find the invoke URL, log in to the API Gateway console at &lt;a href="https://console.aws.amazon.com/apigateway" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/apigateway&lt;/a&gt;, choose your API, and copy its invoke URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dhgrxmt8pe5ksuwangt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dhgrxmt8pe5ksuwangt.png" alt="Invoke API" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a result, we are able to call our private service, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubsbdltqz3a9e4444cc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubsbdltqz3a9e4444cc4.png" alt="Result" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Clean Up
&lt;/h2&gt;

&lt;p&gt;To remove all the AWS resources we created in this post, we run terraform destroy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Source Code
&lt;/h2&gt;

&lt;p&gt;You can find the full source code here: &lt;a href="https://github.com/devops4mecode/ecs-fargate-vpclink-apigw" rel="noopener noreferrer"&gt;https://github.com/devops4mecode/ecs-fargate-vpclink-apigw&lt;/a&gt;&lt;/p&gt;

</description>
      <category>coding</category>
    </item>
    <item>
      <title>AWS Cost Optimization : Automatic Detect Your Unused EBS and Delete It</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Wed, 08 Feb 2023 03:31:29 +0000</pubDate>
      <link>https://forem.com/devops4mecode/aws-cost-optimization-automatic-detect-your-unused-ebs-and-delete-it-1e9h</link>
      <guid>https://forem.com/devops4mecode/aws-cost-optimization-automatic-detect-your-unused-ebs-and-delete-it-1e9h</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;What is cost optimization in AWS?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The primary objective of cost optimization in AWS is to run your system or workload at the lowest possible cost in the AWS environment. You should try to minimise expenditure while taking the needs of your account into consideration; however, you shouldn't do this at the expense of performance, security, or dependability.&lt;/p&gt;

&lt;p&gt;It is crucial to fully grasp the value of AWS, and to measure and efficiently manage your AWS consumption and expenses, as you transfer workloads to AWS and grow your use of different AWS services.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Cost Optimization Use-Case&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A lack of awareness of the EBS volume lifecycle results in extra spending on underused and forgotten resources. Elastic Block Store (EBS) volumes that are not attached to any EC2 instance can produce unexpected charges on an AWS account; some EBS volumes persist after an EC2 instance is shut down, and you keep paying for them even though they are unattached. Following the steps below allows you to cut cloud charges and avoid wasted resources by removing EBS volumes that were accidentally left unattached.&lt;/p&gt;
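To put a rough number on the waste: the monthly cost of unattached volumes is simply their total size times the per-GB-month rate. A minimal sketch, assuming an illustrative gp2 price of $0.10/GB-month (actual EBS pricing varies by region and volume type; the helper name monthly_waste is our own):

```python
# Illustrative price only -- check the AWS pricing page for your region.
GP2_PRICE_PER_GB_MONTH = 0.10

def monthly_waste(unattached_sizes_gb):
    """Approximate monthly cost (USD) of a list of unattached volume sizes."""
    return sum(unattached_sizes_gb) * GP2_PRICE_PER_GB_MONTH

# Two forgotten 20 GB volumes:
print(monthly_waste([20, 20]))  # 4.0 USD/month at the assumed rate
```

Small per-volume amounts like this add up quickly across many accounts and regions, which is why automating the cleanup is worthwhile.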

&lt;h2&gt;
  
  
  &lt;strong&gt;Steps&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EBS Volume&lt;/strong&gt;&lt;br&gt;
We purposely create two new EBS volumes that are not attached to any EC2 instance. We use &lt;a href="https://github.com/boto/boto3"&gt;Boto3&lt;/a&gt;, the Amazon Web Services (AWS) Software Development Kit (SDK) for Python, which is maintained and published by Amazon Web Services. Boto3 lets Python developers write software that uses services like Amazon S3 and Amazon EC2, as well as provision AWS services. You need to configure your local environment with your AWS Access Key ID and Secret Access Key by executing the command below:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws configure&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TIDk59y1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkfeinayjosvyw119ywo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TIDk59y1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkfeinayjosvyw119ywo.png" alt="AWS Account Configuration" width="474" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have configured your AWS account and installed the Boto3 SDK, you may execute the Python code below to create a new EBS volume for our use case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;### Author:Najib Radzuan
### CreatedDate:23 Dec 2022
### Purpose: Create New EBS volumes
### Requirements: Boto3(EC2)

import boto3
ec2 = boto3.client('ec2')
response = ec2.create_volume(
    AvailabilityZone='ap-southeast-1a',
    Size=20,
    VolumeType='gp2',
    TagSpecifications=[
        {
            'ResourceType': 'volume',
            'Tags': [
                {
                    'Key': 'Name',
                    'Value': 'Unused_Vol1'
                },
            ]
        },
    ],
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have created two new EBS volumes that are not attached to any EC2 instance, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LSL1S0Vf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bugnjmdvojfixwobt7q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LSL1S0Vf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bugnjmdvojfixwobt7q7.png" alt="New EBS Volume" width="880" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS IAM Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;&lt;br&gt;
We need to create a new AWS Simple Notification Service (SNS) topic to notify us whenever the AWS Lambda function detects unused EBS volumes and deletes them. Enter the following input for our AWS SNS topic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Type: Standard&lt;/li&gt;
&lt;li&gt;Name: Notify-Unused-EBS-Volume&lt;/li&gt;
&lt;li&gt;Display: Notify-Unused-EBS-Volume&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Leave the rest of the configuration as the default values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ssf-lTLR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghkmhnp38uqcifg24ic1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ssf-lTLR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghkmhnp38uqcifg24ic1.png" alt="AWS SNS Topic Configuration" width="852" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we create a subscription for the newly created SNS topic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Topic ARN: [Newly created AWS SNS Topic ARN]&lt;/li&gt;
&lt;li&gt;Protocol: Email&lt;/li&gt;
&lt;li&gt;Endpoint: [your subscriber email]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Leave the rest as the default values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U4Wl8H2i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ioc61qk7eahj36vle33p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U4Wl8H2i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ioc61qk7eahj36vle33p.png" alt="SNS Subscribers" width="846" height="637"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, the subscriber we set above needs to confirm the subscription via the email below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Wm62hkrv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/882yc673lbcpc11eo33c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wm62hkrv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/882yc673lbcpc11eo33c.png" alt="SNS Subscription Confirmation" width="880" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the subscriber clicks "Confirm subscription", they are redirected to the page below, which confirms that they are subscribed to the AWS SNS topic:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WxntXWmd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ruvsiprsluzxvk1i145f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WxntXWmd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ruvsiprsluzxvk1i145f.png" alt="SNS Confirmation" width="832" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we have a new subscriber to our AWS SNS topic:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M43ij9hD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1c09gf4xrm5qtqx98lr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M43ij9hD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1c09gf4xrm5qtqx98lr.png" alt="Configured AWS SNS Topic" width="880" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don't forget to copy the AWS SNS Topic ARN somewhere, since we are going to use it later in the AWS Lambda function code for the SNS notification.&lt;/p&gt;
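Since the SNS message is just a string, it can help to build the notification body in a small helper that is easy to unit test before wiring it into the function. This is a sketch only; build_alert is a hypothetical helper you could fold into the Lambda code, not part of it as written.

```python
def build_alert(unused_volume_ids):
    """Format the SNS message body listing unused EBS volume IDs."""
    if not unused_volume_ids:
        return "No unused EBS volumes detected."
    lines = ["The following unattached EBS volumes were detected and deleted:"]
    for vid in unused_volume_ids:
        lines.append("- " + vid)
    return "\n".join(lines)

print(build_alert(["vol-0abc", "vol-0def"]))
```

A formatted body like this reads better in the subscriber's inbox than the raw str(list) representation.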

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a new &lt;strong&gt;AWS Lambda Function&lt;/strong&gt; and enter the configuration below for the new function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Author from scratch&lt;/li&gt;
&lt;li&gt;Function name: do4m-unused-volume&lt;/li&gt;
&lt;li&gt;Runtime: Python 3.9&lt;/li&gt;
&lt;li&gt;Architecture: x86_64&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Leave the rest of the configuration as the default values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qn94nwB1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vn77w86o6fm0l9eqnfaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qn94nwB1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vn77w86o6fm0l9eqnfaf.png" alt="AWS Lambda Function Configuration" width="880" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy and paste the code below into your Lambda function. There are two parts to this code:&lt;/p&gt;

&lt;p&gt;1) Boto3&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We use the Boto3 EC2 module to find all unused EBS volumes in the region we set in the code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We collect the detected unused EBS volume IDs (volumes with an empty "Attachments" list) for use in the SNS notification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once we have all the unused EBS volume IDs, we delete every EBS volume marked with the "available" status.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2) AWS SNS&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We get from "Attachment" unused EBS IDs and we send out the SNS email notification to our subscriber(s) that we set in the previous step.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;### Author:Najib Radzuan
### CreatedDate:23 Dec 2022
### Purpose: Detect all unused EBS volumes under selected region
### Requirements: Boto3(EC2),SNS Arn

import boto3
ec2 = boto3.client('ec2')
sns_client = boto3.client('sns')
volumes = ec2.describe_volumes()
ec2 = boto3.resource('ec2',region_name='ap-southeast-1')

def lambda_handler(event, context):
    #Get All Unused Volume
    unused_volumes = []
    for vol in volumes['Volumes']:
        if len(vol['Attachments']) == 0:
            vol1 = ("-----Unused Volume ID = {}------".format(vol['VolumeId']))
            unused_volumes.append(vol1)

    #Delete Volume if Unused
    for vol in ec2.volumes.all():
        if  vol.state=='available':
                vid=vol.id
                v=ec2.Volume(vol.id)
                v.delete()
                print ('Deleted ' +vid)
        ## If we use tagging as ##
        # continue
        # for tag in vol.tags:
        #     if tag['Key'] == 'Name':
        #         value = tag['Value']
        #         if value != 'DND' and vol.state == 'available':
        #             vid = vol.id
        #             v = ec2.Volume(vol.id)
        #             v.delete()
        #             print('Deleted ' + vid)

    #email
    sns_client.publish(
        TopicArn='arn:aws:sns:ap-southeast-1:627315336549:Notify-Unused-EBS-Volume',
        Subject='Warning - Unused Volume List',
        Message=str(unused_volumes)
    )
    return "success"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
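The detection logic in the function above can be exercised locally against a canned response before deploying. A minimal sketch, where SAMPLE_RESPONSE only mimics the shape of the real describe_volumes output and find_unused is a hypothetical extraction of the filter step:

```python
# Hypothetical sample shaped like ec2.describe_volumes() output;
# only the fields the detection logic reads are included.
SAMPLE_RESPONSE = {
    "Volumes": [
        {"VolumeId": "vol-0aaa1111", "State": "available", "Attachments": []},
        {"VolumeId": "vol-0bbb2222", "State": "in-use",
         "Attachments": [{"InstanceId": "i-0ccc3333"}]},
    ]
}

def find_unused(describe_volumes_response):
    """Return the IDs of volumes that have no attachments."""
    return [
        vol["VolumeId"]
        for vol in describe_volumes_response["Volumes"]
        if len(vol["Attachments"]) == 0
    ]

print(find_unused(SAMPLE_RESPONSE))  # ['vol-0aaa1111']
```

Checking the filter this way avoids having to create and delete real volumes while iterating on the function.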



&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;&lt;br&gt;
In the final step, we set up an AWS EventBridge rule on a schedule to trigger the AWS Lambda function we created in the previous step. Go to AWS EventBridge-&amp;gt;Create New Rule and enter the input below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: do4m-unused-ebs-rule&lt;/li&gt;
&lt;li&gt;Description: do4m-unused-ebs-rule&lt;/li&gt;
&lt;li&gt;Event Bus: default&lt;/li&gt;
&lt;li&gt;Rule Type: Schedule&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Proceed with "Continue to create rule".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LgIy257R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/251uuoahj1xuldkdom74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LgIy257R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/251uuoahj1xuldkdom74.png" alt="EventBridge Rule Creation" width="820" height="690"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We set our EventBridge schedule pattern with the cron expression below. For this use case, I created a schedule pattern that triggers the Lambda function &lt;em&gt;&lt;strong&gt;every Saturday at 8 AM, every month of every year&lt;/strong&gt;&lt;/em&gt;. You may change this to suit your requirements; you can refer to the cron expression documentation &lt;a href="https://docs.aws.amazon.com/scheduler/latest/UserGuide/schedule-types.html?icmpid=docs_console_unmapped#cron-based"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JaaNoCuz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ridejovjdifo923cjgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JaaNoCuz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ridejovjdifo923cjgg.png" alt="Schedule Pattern" width="826" height="682"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we choose the target for the EventBridge rule: invoking AWS Lambda. We select the Lambda function we created in the previous step, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MZiCy-g3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/au18l0g03fhcwgpagop0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MZiCy-g3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/au18l0g03fhcwgpagop0.png" alt="Target" width="823" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, our scheduled AWS EventBridge rule is created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OXlHwsoQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbwv5ingl7zm1ot13m0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OXlHwsoQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbwv5ingl7zm1ot13m0r.png" alt="EventBridge Rule" width="880" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
As a sample, I've changed the schedule to trigger our AWS Lambda function every 5 minutes to get results faster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EYIJuiaw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xp6j2wjmwvfeuokzo477.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EYIJuiaw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xp6j2wjmwvfeuokzo477.png" alt="Change Rule" width="880" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Results
&lt;/h2&gt;

&lt;p&gt;We can verify that our EventBridge rule works by going to the CloudWatch log group created for the AWS Lambda function. As the CloudWatch log record below shows, it detected the unused EBS volumes and deleted them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wpubThbT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ue4gu7je72hvxo7lpkez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wpubThbT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ue4gu7je72hvxo7lpkez.png" alt="CloudWatch-EBS Volume Deleted" width="880" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The unused EBS volumes are deleted, leaving only the "in-use" EBS volumes in the AWS console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rBmRe7yX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j13vowf98krztpzdua0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rBmRe7yX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j13vowf98krztpzdua0c.png" alt="EBS Volume Listing" width="880" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also get an SNS notification that our AWS Lambda function detected the unused volumes and deleted them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IVlb2_b9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4rjfc6hz959itsxeqax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IVlb2_b9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4rjfc6hz959itsxeqax.png" alt="SNS Notification" width="880" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post we learned how to optimize AWS costs by automatically detecting unused EBS volumes with an AWS Lambda function triggered by an AWS EventBridge rule. Finally, we receive an AWS SNS notification email whenever the Lambda function finds unused EBS volumes.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploy Apps using GitHub Actions to AWS Beanstalk</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Wed, 17 Aug 2022 05:46:00 +0000</pubDate>
      <link>https://forem.com/devops4mecode/deploy-apps-using-github-actions-to-aws-beanstalk-12n</link>
      <guid>https://forem.com/devops4mecode/deploy-apps-using-github-actions-to-aws-beanstalk-12n</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post I will show you how to build a CI/CD pipeline from the ground up using GitHub Actions and AWS Elastic Beanstalk. I've broken the guide down into three sections for easier reading and comprehension:&lt;/p&gt;

&lt;p&gt;First, before getting too bogged down in jargon, we'll define a few key terms.&lt;/p&gt;

&lt;p&gt;Second, we'll implement continuous integration (CI) to ensure that all builds and tests are executed without human intervention.&lt;/p&gt;

&lt;p&gt;Third, we'll set up continuous delivery (CD) so that our code is automatically deployed to AWS.&lt;/p&gt;

&lt;p&gt;Yes, that was a lot to take in. Let's define each of these terms in detail so you know exactly where we're headed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Definition&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;What is GitHub Action?&lt;/strong&gt;&lt;br&gt;
I'm going to oversimplify the GitHub Actions concept here, because I can't think of a better way to explain it. Hence, I've made the mind map below for a better understanding of GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5tju467xdzowvbyi64j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5tju467xdzowvbyi64j.png" alt="What is GitHub Action?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more details on GitHub Actions, please see the &lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;GitHub Actions documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goals&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;How to automatically build and run unit tests on push or on PR to the main branch with GitHub Actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to automatically deploy to AWS on push or on PR to the main branch with GitHub Actions.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;PART 1: How to Automatically Run Builds and Tests - Continuous Integration (CI)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Part1:P1 Project Folder &amp;amp; Files Structure&lt;/strong&gt;&lt;br&gt;
I have created a demo Django project, which you can grab from this repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/devops4mecode/django-github-actions-aws.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7uoluad5425pufsmwtp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7uoluad5425pufsmwtp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you download the code, create a virtualenv and install the requirements via pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2zazn8jo2d7vm344xmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2zazn8jo2d7vm344xmg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part1:P2 Locally Build and Unit-Test&lt;/strong&gt;&lt;br&gt;
We build and test the Django app locally as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F257h8zh3inpnlygf63cv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F257h8zh3inpnlygf63cv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's set up GitHub Actions now that you have a local Django project ready to go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part1:P3 Create GitHub Repository for Django App&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need to create a new GitHub repository; once created, you will see the repository below. You may use a different name than mine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95kwcjswx7obv5tyreqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95kwcjswx7obv5tyreqc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we also don't have any GitHub Actions configured yet.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhp9jmzd6fvn7qxtzoce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhp9jmzd6fvn7qxtzoce.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also created a new DEV branch and updated all the code/scripts that we need for GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEPS:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Part1:S1 How to Configure GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our project is now ready to go: we have committed our polished update to GitHub and have a test case prepared for the view that we defined.&lt;/p&gt;

&lt;p&gt;Each time a pull request or push is made to master, we want GitHub to automatically initiate a build and run all of our tests. So far, no build or tests have been triggered by GitHub Actions when we push to the main branch, because we haven't defined a workflow yet.&lt;/p&gt;

&lt;p&gt;Create a new folder called .github/workflows in the root of your project; this is where all your workflow YAML files will live. To get our build and test routines up and running, we need to create our first workflow: add a new file ending in .yml and name it build-test.yml. Incorporate the following into the newly created YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build and Test

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout code
      uses: actions/checkout@v2
    - name: Set up Python Environment
      uses: actions/setup-python@v2
      with:
        python-version: '3.x'
    - name: Install Dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt

    - name: Run Tests
      run: |
        python manage.py test
  deploy:
    needs: [test]
    runs-on: ubuntu-latest

    steps:
    - name: Checkout GitHub source code
      uses: actions/checkout@v2

    - name: Generate deployment package
      run: zip -r deploy.zip . -x '*.git*'

    - name: Deploy to AWS Elastic Beanstalk
      uses: einaregilsson/beanstalk-deploy@v20
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: django-github-actions-aws
        environment_name: Djangogithubactionsaws-env
        version_label: "ver-${{ github.sha }}"
        use_existing_version_if_available: true
        region: "ap-southeast-1"
        deployment_package: deploy.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Part1:S2 Push/PR DEV branch code to GitHub Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that you've defined a workflow by adding the config file in the designated folder, you can commit and push your change to your remote repo.&lt;/p&gt;

&lt;p&gt;Let's create a Pull Request and merge it into the master branch.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kk7wph61vnhxm7m98nf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kk7wph61vnhxm7m98nf.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
While our PR is being checked, you can see our GitHub Actions workflow is already running:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbevnrjl14xludw59oh0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbevnrjl14xludw59oh0n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you navigate to the Actions tab of your remote repo, you should see a workflow named Build and Test (the name we gave it) listed there. For now, the deploy job fails because we haven't set up AWS Elastic Beanstalk yet. This output is expected; don't worry, we will set it up later.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku3xxo5ve5gpr34nqvay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku3xxo5ve5gpr34nqvay.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PART 2: How to Automatically Deploy Our Code to AWS - Continuous Deployment (CD)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part2:P1 AWS_ACCESS_KEY_ID &amp;amp; AWS_SECRET_ACCESS_KEY&lt;/strong&gt;&lt;br&gt;
Go to AWS Management Console -&amp;gt; IAM -&amp;gt; Users -&amp;gt; Access Key&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffm46afxm8wudgret4h5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffm46afxm8wudgret4h5i.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the Access Key and Secret Key somewhere safe (Notepad or any place you prefer), since we will reuse them later when we set up the secrets in the GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part2:P2 Elastic Beanstalk environment&lt;/strong&gt;&lt;br&gt;
Before running the workflow, we must have an operating environment ready to receive our Django app from GitHub Actions. Since our app is written in Python (Django), create a new environment as below.&lt;br&gt;
First, create a new Beanstalk application:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbns4xtaxyillmt5i5k3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbns4xtaxyillmt5i5k3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgwcsz7jx1g69ch1ndfu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgwcsz7jx1g69ch1ndfu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we have created our Beanstalk sample app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo94hlr46nptga8rvrvxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo94hlr46nptga8rvrvxp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEPS:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part2:S1 Configure GitHub Secret&lt;/strong&gt;&lt;br&gt;
Previously, we copied our AWS Access Key and Secret Key; now go to your GitHub repository's Settings, then to Secrets, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibmvr7xobvm0l1vydz35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibmvr7xobvm0l1vydz35.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose "Action" menu, then click "New repository secret" button;  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpez3myy1pd8o777clqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpez3myy1pd8o777clqs.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give each secret the exact name you defined in the workflow script above.&lt;/p&gt;

&lt;p&gt;AWS_ACCESS_KEY_ID:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfs2vzpwwub1bun5ga86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfs2vzpwwub1bun5ga86.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
AWS_SECRET_ACCESS_KEY:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg7o35dfhf3hw6unc5w0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg7o35dfhf3hw6unc5w0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part2:S2 Configure your Project for Elastic Beanstalk&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, Elastic Beanstalk looks for our code in a file called application.py. That file is not included in our repository, so our application cannot run without further configuration: we must instruct Elastic Beanstalk to use our project's wsgi.py file instead of the default. Here's what you need to do:&lt;/p&gt;

&lt;p&gt;In the root of your project's directory, create a new folder named .ebextensions. Create a configuration file in that directory; the name is up to you. Mine is called eb.config. Include the following in your configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: django_github_actions_aws.wsgi:application
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjxh1pgtc83qvgrr0ia8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjxh1pgtc83qvgrr0ia8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part2:S3 Update and finalize your Workflow File&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Please update all five parameters below with your own values:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2g7jyyw3t1vm6jhja6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2g7jyyw3t1vm6jhja6p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your AWS Access Key.&lt;/li&gt;
&lt;li&gt;Your AWS Secret Key.&lt;/li&gt;
&lt;li&gt;Beanstalk's Application Name.&lt;/li&gt;
&lt;li&gt;Beanstalk's Environment Name.&lt;/li&gt;
&lt;li&gt;The AWS Region you want to deploy to.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This step utilizes &lt;code&gt;einaregilsson/beanstalk-deploy@v20&lt;/code&gt;. Actions are reusable apps that handle frequently repeated activities, and einaregilsson/beanstalk-deploy@v20 is one of them. To emphasize the above, remember our deployment path: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;GitHub-&amp;gt;AmazonS3-&amp;gt;Elastic Beanstalk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We didn't set up Amazon S3 in this tutorial, nor did we upload to or pull from an S3 bucket in our workflow file; the einaregilsson/beanstalk-deploy@v20 action handles all of that for us. You can also build your own action to handle repetitive tasks and publish it on GitHub Marketplace.&lt;/p&gt;

&lt;p&gt;Now that you've updated your workflow file locally, you can then commit and push this change to your remote. Your jobs will run and your code will be deployed to the Elastic Beanstalk instance you created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk51hna3hkd4zdjamkhfe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk51hna3hkd4zdjamkhfe.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also check the deploy job's log for our AWS Beanstalk deployment below.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzst7p4bc78rl6kjccy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzst7p4bc78rl6kjccy9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, we have updated our sample app with the new code below.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ipnyl97q9zqhntiv7n4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ipnyl97q9zqhntiv7n4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt; &lt;br&gt;
Good lord, this one was a marathon, right? We started by defining CI/CD pipelines, Amazon Web Services (AWS), and GitHub Actions. We then learned how to set up GitHub Actions for automated code deployment to an AWS Elastic Beanstalk environment.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>githubactions</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>DevSecOps in AWS Codepipeline</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Mon, 25 Apr 2022 17:08:53 +0000</pubDate>
      <link>https://forem.com/devops4mecode/devsecops-in-aws-codepipeline-563a</link>
      <guid>https://forem.com/devops4mecode/devsecops-in-aws-codepipeline-563a</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ufu9ZEbJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e83l8s715mrjjdcmrdqo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ufu9ZEbJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e83l8s715mrjjdcmrdqo.jpg" alt="Image description" width="880" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Traditional security cannot keep pace with DevOps’s lightning-fast software development cycles. Organisations need to inject continuous security and automated testing throughout the software development process to improve security. Secure DevOps—that is, DevSecOps—is about making security central to development and operations. Building security into every stage of the software pipeline fills long-standing gaps between IT and Security. The DevSecOps approach helps spot software security issues faster and alleviates security bottlenecks. And it preserves the rapid development pace that DevOps makes possible. This post will show you how to implement DevSecOps CI/CD using AWS CodePipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before DevSecOps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ywb3otyR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iy8fm288ynd93el3iuky.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ywb3otyR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iy8fm288ynd93el3iuky.jpg" alt="Image description" width="880" height="787"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above is a normal Angular website pipeline, whose flow works as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The developer/programmer commits the newly updated code to the &lt;strong&gt;AWS CodeCommit&lt;/strong&gt; repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CodePipeline&lt;/strong&gt; is automatically triggered by the updated CodeCommit code in the &lt;strong&gt;SOURCE stage&lt;/strong&gt; below:
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YlQrud-G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ta5hq2nv3v33sfbftdbh.png" alt="Image description" width="709" height="391"&gt; &lt;/li&gt;
&lt;li&gt;Then it goes to the &lt;strong&gt;BUILD stage&lt;/strong&gt;, which starts the &lt;strong&gt;Unit-Test&lt;/strong&gt; action group using the "ng test" command from the "unit-test-buildspec.yml" file we set.&lt;/li&gt;
&lt;li&gt;Then it goes to the build process, which uses "ng build" via the buildspec.yml file we set.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fs7rsmsn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajojumfisceemgetydp8.png" alt="Image description" width="730" height="486"&gt;
&lt;/li&gt;
&lt;li&gt;We have a dynamic staging EC2 server via AWS CloudFormation; we only provision the staging server whenever we have newly updated code.&lt;/li&gt;
&lt;li&gt;Once the staging server is provisioned, we deploy to it. We also pass the new staging URL as a CodeDeploy parameter so that the manual approver/reviewer can evaluate our staging environment.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vQMRPJ2M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/txc8nubogjuk24s7fmor.png" alt="Image description" width="648" height="451"&gt;
&lt;/li&gt;
&lt;/ol&gt;
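
&lt;p&gt;The unit-test-buildspec.yml file mentioned in the BUILD stage above might look roughly like the sketch below. Treat it as an illustration: the Node.js runtime version and the headless-browser flags are assumptions, not the exact file from this pipeline.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 14
    commands:
      - npm ci
  build:
    commands:
      # Run the Angular unit tests once, without watch mode
      - ng test --watch=false --browsers=ChromeHeadless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;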

&lt;p&gt;Here is what our staging environment looks like:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2t6Y21QN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o5h1rcek0kwnqkqrvdra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2t6Y21QN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o5h1rcek0kwnqkqrvdra.png" alt="Image description" width="880" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once the approver has reviewed and approved our staging environment, we delete the staging CloudFormation stack.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YZppZxG7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j8oz548nys4zol7s35q7.png" alt="Image description" width="880" height="358"&gt;
&lt;/li&gt;
&lt;li&gt;Lastly, we deploy to the production environment using AWS CodeDeploy.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4iyhaW8c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/51t3nhygv1roq8lwixra.png" alt="Image description" width="689" height="301"&gt;
&lt;/li&gt;
&lt;/ol&gt;
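
&lt;p&gt;The CodeDeploy steps above are driven by an appspec.yml file in the repository. A minimal sketch is shown below; the source/destination paths and the hook script are assumptions for illustration, not the exact file used here.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.0
os: linux
files:
  # Copy the built Angular bundle to the web server's document root
  - source: /dist
    destination: /var/www/html
hooks:
  AfterInstall:
    # restart_server.sh is a placeholder hook script
    - location: scripts/restart_server.sh
      timeout: 300
      runas: root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;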

&lt;p&gt;&lt;strong&gt;From DevOps to DevSecOps&lt;/strong&gt;&lt;br&gt;
Now we can introduce DevSecOps tools using the "Shift-Left" approach, which adds security scanners in the BUILD, STAGING, and PRD stages, as shown below.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jcZhVhNf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m9md7vgejc0jmbjt98s4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jcZhVhNf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m9md7vgejc0jmbjt98s4.jpg" alt="Image description" width="880" height="988"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevSecOps Toolchain&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x6tpoTjQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u3xmy17twxp2o8jlwq0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x6tpoTjQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u3xmy17twxp2o8jlwq0b.png" alt="Image description" width="880" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j2UY36Zw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7n0cgz1p9ubwb9en1k5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j2UY36Zw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7n0cgz1p9ubwb9en1k5s.png" alt="Image description" width="880" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NdBFZ2bY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kl3rh3imo6xynelkwe65.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NdBFZ2bY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kl3rh3imo6xynelkwe65.png" alt="Image description" width="880" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to optimize your DevSecOps Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Everyone should understand failures quickly&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CI job output with key messages&lt;/li&gt;
&lt;li&gt;Use colored error messages with emojis ✅ &lt;/li&gt;
&lt;li&gt;Hint at and link to troubleshooting docs&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Always use exactly the same environment&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A CI/CD pipeline can’t be reliable if one pipeline run modifies the next pipeline’s environment.&lt;/li&gt;
&lt;li&gt;Each workflow should start from the same clean, isolated environment.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Document the pipeline design&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use Markdown docs in your repository to describe what your CI/CD pipeline does.&lt;/li&gt;
&lt;li&gt;You can also use a wiki page or Confluence so everyone on the team understands the CI/CD process.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Communication&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Human communication, collaboration, and teamwork are factors that do not rely on automation.&lt;/li&gt;
&lt;li&gt;In the absence of these three factors, success will be extremely hard to achieve.&lt;/li&gt;
&lt;li&gt;If you want your CI/CD pipeline workflow to succeed, it's crucial to optimize communication and transparency.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Shift-Left + DevSecOps&lt;/strong&gt;&lt;br&gt;
Everyone in the organization must make an effort to "Shift Left" toward a DevSecOps culture and methodology, and build a multidisciplinary security team.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agile + DevSecOps&lt;/strong&gt;&lt;br&gt;
DevSecOps must be fed by Agile software development; security user stories must be part of each sprint.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automation is the key&lt;/strong&gt;&lt;br&gt;
Security and test automation can reduce delivery time, improve quality and security, and eliminate human error.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>awscommunitybuilder</category>
      <category>devsecops</category>
      <category>awscodepipeline</category>
    </item>
    <item>
      <title>DevSecOps Transformation Bucket List</title>
      <dc:creator>DevOps4Me Global</dc:creator>
      <pubDate>Fri, 22 Apr 2022 05:25:08 +0000</pubDate>
      <link>https://forem.com/devops4mecode/devsecops-transformation-bucket-list-3o0f</link>
      <guid>https://forem.com/devops4mecode/devsecops-transformation-bucket-list-3o0f</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cshMMdjt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t36y6oyhk9y5qke61xje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cshMMdjt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t36y6oyhk9y5qke61xje.png" alt="Image description" width="493" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
DevOps’s lightning-fast software development cycles mean traditional security measures can’t keep up. To enhance security, organizations must include automated and continuous security testing in their software development processes. It’s all about making security a part of the DevOps process: DevSecOps. That means filling the gap between IT and security by incorporating security into every step of the software development lifecycle. The DevSecOps strategy alleviates security bottlenecks while maintaining the high development pace that DevOps allows. This post will share the DevSecOps framework, activities, and process, and the challenges you may face during your DevSecOps transformation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is DevSecOps?&lt;/strong&gt;&lt;br&gt;
 DevSecOps is the practice of incorporating security and compliance testing into the DevOps pipeline without jeopardizing the speed and agility of continuous delivery. Adopting DevSecOps makes software delivery more agile, responsive, and secure, and it requires closer collaboration between IT security and the product team (including development and operations). In the past, the security team worked alone, and security and compliance checks were always done at the end. With the faster delivery cycle of DevOps and more frequent releases, those end-of-cycle checks become a bottleneck and slow down the continuous delivery process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Move to DevSecOps?&lt;/strong&gt;&lt;br&gt;
The increased delivery velocity of a DevOps CI/CD pipeline, particularly with microservices-based applications, is prone to introducing vulnerabilities, making continuous security a major issue in organizations.&lt;br&gt;
Security teams must learn and understand the security implications of all these new technologies to identify issues. That learning curve becomes a significant bottleneck, so the answer is to automate security checks early in the continuous delivery process, i.e., to practise DevSecOps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevSecOps Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BRsfLrJb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1er6bqom92zujzgi5ht.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BRsfLrJb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1er6bqom92zujzgi5ht.jpeg" alt="Image description" width="880" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevSecOps Metrics&lt;/strong&gt;&lt;br&gt;
DevSecOps is a proven method for improving the quality and security of software, which can be measured using these metrics:&lt;br&gt;
• Mean time to production: the average time from when a new software feature is required to when it is up and running.&lt;br&gt;
• Average lead time: the time it takes to deliver and deploy a new requirement.&lt;br&gt;
• Deployment speed: how quickly a new application version can be deployed into production.&lt;br&gt;
• Deployment frequency: how often a new release can be deployed into production.&lt;br&gt;
• Production failure rate: the frequency with which software fails in production.&lt;br&gt;
• Mean time to recovery (MTTR): the time it takes for an application in production to recover from a failure.&lt;br&gt;
Furthermore, DevSecOps practice enables:&lt;br&gt;
• Fully automated risk characterization, monitoring, and mitigation throughout the application lifecycle; and&lt;br&gt;
• On-demand software updates and patching to address security vulnerabilities and code flaws.&lt;/p&gt;
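&lt;p&gt;Two of these metrics reduce to simple arithmetic over your pipeline’s event log. A toy sketch (the epoch timestamps and counts are illustrative, not real pipeline data):&lt;/p&gt;

```shell
#!/bin/sh
# Toy calculations for two DevSecOps metrics. Timestamps are
# illustrative Unix epoch seconds, not real pipeline data.

# MTTR: minutes between a failure and its recovery.
mttr_minutes() {
  # $1 = failure epoch, $2 = recovery epoch
  echo $(( ($2 - $1) / 60 ))
}

# Deployment frequency: deploys per day over a window.
deploys_per_day() {
  # $1 = number of deploys, $2 = window length in days
  echo $(( $1 / $2 ))
}

mttr_minutes 1650000000 1650001800   # 1800 s outage -> prints 30
deploys_per_day 42 7                 # prints 6
```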

&lt;p&gt;&lt;strong&gt;Four (4) DevSecOps Pillars&lt;/strong&gt;&lt;br&gt;
As the diagram below shows, DevSecOps is supported by four pillars: culture, process, technology, and governance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LyuYCjey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3d1ge3ox9n7f8m11mgx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LyuYCjey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3d1ge3ox9n7f8m11mgx.png" alt="Image description" width="596" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevSecOps Lifecycle&lt;/strong&gt;&lt;br&gt;
The DevSecOps software lifecycle has nine (9) phases: plan, develop, build, test, release, deliver, deploy, operate, and monitor. Security is implemented in each phase:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KpyCydTX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujhqxqzbd3jp7m383lnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KpyCydTX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujhqxqzbd3jp7m383lnj.png" alt="Image description" width="698" height="438"&gt;&lt;/a&gt;&lt;br&gt;
The software development lifecycle is not a monolithic linear process with DevSecOps. The Waterfall process’s “big bang” delivery style is replaced with smaller but more frequent deliveries, making it easier to change course as needed. DevSecOps implementation accelerates continuous integration and delivery, and each small delivery is completed through a fully automated or semi-automated process with minimal human intervention. The DevSecOps lifecycle is adaptable and includes numerous feedback loops to ensure continuous improvement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevSecOps Framework&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2vsSYNQK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ow5rjno1m7e57oraaogy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2vsSYNQK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ow5rjno1m7e57oraaogy.jpeg" alt="Image description" width="880" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above picture presents a framework for getting started with crucial DevSecOps domains and activities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevSecOps Challenges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 1: Not Much DevSecOps Talent in the Market&lt;/strong&gt;&lt;br&gt;
Some regions face a shortage of DevSecOps engineers, since DevSecOps is still a broad topic or approach for most organizations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution 1: Diversify your DevSecOps Team&lt;/strong&gt;&lt;br&gt;
Instead of looking only in your own region or continent, start looking in other regions or hiring offshore DevSecOps engineers to help in your transformation journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 2: End-of-cycle security testing causes delays and lets bugs reach production.&lt;/strong&gt;&lt;br&gt;
The conventional approach of shoehorning security into a nearly finished product is unsatisfactory: too many flaws find their way into production code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution 2: Integrate security analysis into the very beginning of the development process.&lt;/strong&gt;&lt;br&gt;
Security can instead be woven into the fabric of daily development and operations. To move security compliance into the earliest phases of development, developers should be able to run analysis right inside the IDE.&lt;/p&gt;
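&lt;p&gt;One lightweight way to push analysis this far left is a Git pre-commit hook that runs a static analysis scanner before every commit. This is a sketch under stated assumptions: the scanner command and its flags are placeholders (shown here as semgrep; substitute whatever your team uses), and the hook skips gracefully when the tool is not installed:&lt;/p&gt;

```shell
#!/bin/sh
# .git/hooks/pre-commit -- run a static scanner before each commit.
# SCANNER and its flags are placeholders; substitute your team's tool.
SCANNER="${SCANNER:-semgrep}"

run_precommit_scan() {
  if command -v "$SCANNER" >/dev/null 2>/dev/null; then
    "$SCANNER" scan .   # illustrative invocation; non-zero exit blocks the commit
  else
    echo "skipping scan: $SCANNER not installed"
  fi
}

run_precommit_scan
```

Because Git aborts the commit when the hook exits non-zero, a failed scan stops insecure code before it ever reaches the pipeline.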

&lt;p&gt;&lt;strong&gt;Issue 3: "Fixing" security late is a difficult task.&lt;/strong&gt;&lt;br&gt;
Telling developers and testers who aren't security professionals to "fix" security in the last phases of development, when it's unclear what to do, is a big ask.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution 3: Continually raise the level of security.&lt;/strong&gt;&lt;br&gt;
Separate teams shouldn't each have to implement their own security rules; safe coding standards and static analysis tests for common vulnerabilities should be shared across the organization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 4: There is no way to know the project's security risk until the last minute.&lt;/strong&gt;&lt;br&gt;
If a project doesn't run any form of vulnerability assessment, it has no recognised security issues, only unknown ones. When late-cycle testing uncovers security flaws, it often leads to rework and further delays, and despite these valiant last-minute attempts, security flaws still find their way into the hands of users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution 4: Check software quality and security constantly.&lt;/strong&gt;&lt;br&gt;
A compliance reporting or vulnerability management tool should provide dashboards that are simple to use and offer both a high-level and a detailed view of the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get Started&lt;/strong&gt;&lt;br&gt;
It’s a good idea to move to DevSecOps if your firm is already using DevOps. After all, DevSecOps is founded on the DevOps mindset, which makes the switch easier, and you can bring together people with diverse strategic backgrounds to improve the security procedures already in place. This post discussed only the non-technical side of DevSecOps transformation; next we will dive deeper into DevSecOps scanner tools and CI/CD implementation using AWS CodePipeline. I hope to see you back in the next post for the DevSecOps CI/CD pipeline and the tools involved.&lt;/p&gt;

</description>
      <category>devsecops</category>
      <category>devopstransformation</category>
      <category>aws</category>
      <category>awscommunitybuilder</category>
    </item>
  </channel>
</rss>
