<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Firefly</title>
    <description>The latest articles on Forem by Firefly (@gofirefly).</description>
    <link>https://forem.com/gofirefly</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5484%2F15760301-868d-4b0a-85aa-1b0d968f17b5.jpg</url>
      <title>Forem: Firefly</title>
      <link>https://forem.com/gofirefly</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gofirefly"/>
    <language>en</language>
    <item>
      <title>Why Your Cloud Assets Need A Time Machine</title>
      <dc:creator>Tamir Zheleznyak</dc:creator>
      <pubDate>Tue, 23 May 2023 17:21:47 +0000</pubDate>
      <link>https://forem.com/gofirefly/why-your-cloud-assets-need-a-time-machine-409o</link>
      <guid>https://forem.com/gofirefly/why-your-cloud-assets-need-a-time-machine-409o</guid>
      <description>&lt;p&gt;Tracking the configuration of cloud resources is essential in the DevOps world. But what happens when you need to look back at past configurations? This is where resource history tracking comes in. By tracking the history of your cloud resources, you can retrieve configurations from any point in time, even if you have since deleted the resource. Furthermore, you can use history tracking to investigate unexpected changes, such as when something goes wrong with your cloud resource.&lt;/p&gt;

&lt;h2&gt;Using Git for Your Cloud’s Asset History&lt;/h2&gt;

&lt;p&gt;Git is a great tool for investigating who changed files and when, and for reverting changes as needed. With the growing complexity of cloud operations, version control has become mission-critical to business continuity and is a best practice in DevOps and cloud operations today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JE_h2YaT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z67p99uhw5js5ymryhgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JE_h2YaT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z67p99uhw5js5ymryhgp.png" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Git is great when it comes to:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Retrieving an asset’s configuration from a particular point in time.&lt;/li&gt;
    &lt;li&gt;Reverting to a specific version of the asset using Git’s revert capabilities.&lt;/li&gt;
    &lt;li&gt;Looking at a commit’s author and message to understand the motivation for a change, provided the committer wrote good commit messages. (Here is a good blog post about commit messages: &lt;a href="https://cbea.ms/git-commit/"&gt;https://cbea.ms/git-commit/&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;Staying generic: all your environments can be managed in a single place. This is true even for tools where codification is still in its infancy, like Okta (this is a great post about &lt;a href="https://www.gofirefly.io/blog/okta-as-code"&gt;Okta as Code&lt;/a&gt;).&lt;/li&gt;
    &lt;li&gt;Comparing two points in time to see exactly what changed.&lt;/li&gt;
&lt;/ul&gt;
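&lt;p&gt;As a hedged sketch of those point-in-time and revert capabilities, here is how a past configuration could be pulled out of a repository of exported config files. The repository, file name, and values are all hypothetical, and git must be installed:&lt;/p&gt;

```python
# Minimal sketch: Git as a time machine for exported cloud configuration.
# Everything here (paths, file contents, messages) is illustrative.
import subprocess, tempfile, os

def run(args, cwd):
    """Run a git command in the repo and return its stdout."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
run(["git", "init", "-q"], repo)
run(["git", "config", "user.email", "ops@example.com"], repo)
run(["git", "config", "user.name", "ops"], repo)

cfg = os.path.join(repo, "bucket.tf")
with open(cfg, "w") as f:
    f.write('acl = "private"\n')
run(["git", "add", "."], repo)
run(["git", "commit", "-q", "-m", "initial bucket config"], repo)

with open(cfg, "w") as f:
    f.write('acl = "public-read"\n')
run(["git", "commit", "-q", "-am", "open bucket for static site"], repo)

# Travel back: read the file exactly as it was one commit ago.
old = run(["git", "show", "HEAD~1:bucket.tf"], repo)
print(old.strip())  # acl = "private"
```

&lt;p&gt;The commit log also answers the “who, when, and why” questions for every revision of the file.&lt;/p&gt;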

&lt;p&gt;But sometimes Git alone is not enough for managing resources in your cloud. Even if we try to avoid manually changing resources in the cloud, it still happens from time to time, and Git won’t know about those changes. To catch them, you must proactively and continuously sample the cloud for changes and track them so you don’t miss such misconfigurations. These “ClickOps” changes (manual edits made in the console) are usually exactly the changes that interest you when something goes wrong.&lt;/p&gt;
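&lt;p&gt;That sampling step can be sketched in a few lines. This is a hypothetical illustration, assuming both the Git-known state and the live cloud state can be flattened into key/value maps:&lt;/p&gt;

```python
# Hypothetical sketch: Git only knows about committed changes, so ClickOps
# edits must be caught by periodically sampling the live cloud state and
# diffing it against the last state Git knew about.
def detect_drift(git_state, live_state):
    """Return {field: (expected, actual)} for every field changed outside Git."""
    drift = {}
    for key, expected in git_state.items():
        actual = live_state.get(key)
        if actual != expected:
            drift[key] = (expected, actual)
    return drift

git_state = {"instance_type": "t3.micro", "monitoring": True}
live_state = {"instance_type": "t3.large", "monitoring": True}  # manual resize
print(detect_drift(git_state, live_state))
# {'instance_type': ('t3.micro', 't3.large')}
```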

&lt;p&gt;In addition, some cloud resources are not managed by Infrastructure as Code in a Git repository at all, so your repo’s timeline won’t cover changes to them. There are cases where you want to see the history of unmanaged resources too.&lt;/p&gt;

&lt;p&gt;Another area where Git breaks down is scope. Since Git tracks changes to files, the scope of a change can be too coarse for cloud resources: a single file may define multiple resources, and a single commit may touch multiple resources or even multiple files, making it harder to track changes to an individual resource. Regular Git operations also won’t help if you want to revert just one resource rather than everything in the commit.&lt;/p&gt;

&lt;p&gt;So while Git is a great tool, it’s not always enough for tracking changes across all of your cloud resources. That is why you sometimes need a timeline for your resources showing what really happened across your infrastructure.&lt;/p&gt;

&lt;h2&gt;Using Cloud Vendors’ Audit Tools for Cloud Asset History&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JS7CuoBj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3owqz7x5zdf4haxgt0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JS7CuoBj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3owqz7x5zdf4haxgt0s.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cloud vendors offer their own audit services, such as AWS CloudTrail, Azure Monitor, and Google Cloud’s Cloud Audit Logs. These can be leveraged to see what happened to your cloud resources, and they are great tools with a lot of benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Audit tools allow you to track changes to all resources, including those not managed by code. &lt;/li&gt;
&lt;li&gt;You won't miss any manual changes, ensuring that you have a complete picture of your resource's history.&lt;/li&gt;
&lt;li&gt;The timeline view provided by audit tools makes it easy to see what happened to a single resource over time.&lt;/li&gt;
&lt;li&gt;Event names help build a full story of what happened to your asset, showing who made changes and what happened to deleted assets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But sometimes your audit tool also won’t be enough because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Audit tools may not provide a full view of a resource's configuration at a given point in time, so you can't truly travel back and see its complete state.&lt;/li&gt;
&lt;li&gt;Each vendor has their own audit tool, which can make it difficult to work with multiple systems if your infrastructure is spread across multiple vendors.&lt;/li&gt;
&lt;li&gt;Knowing the past IaC states of your managed resources is useful when you want to know when a resource drifted and why. Vendor audit tools don’t take infrastructure as code into account, so the data there is not helpful enough. Having a resource’s IaC state alongside its actual state as part of the resource timeline is powerful.&lt;/li&gt;
&lt;li&gt;Because they audit the change and not the full configuration, you won’t be able to compare two points in time should you need to.&lt;/li&gt;
&lt;/ul&gt;
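&lt;p&gt;The core idea behind these audit tools, grouping change events into a per-resource timeline, can be sketched as follows. The event records here are simplified, hypothetical stand-ins for what CloudTrail or Cloud Audit Logs actually emit:&lt;/p&gt;

```python
# Hypothetical sketch: audit services emit one record per change event.
# Grouping and sorting those records by resource yields the per-resource
# timeline that vendor consoles display. Field names are simplified.
from collections import defaultdict

def build_timelines(events):
    """Map each resource id to its chronological list of (time, event, user)."""
    timelines = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        timelines[e["resource"]].append((e["time"], e["event"], e["user"]))
    return dict(timelines)

events = [
    {"time": "2023-05-01T10:00", "resource": "sg-123",
     "event": "AuthorizeSecurityGroupIngress", "user": "alice"},
    {"time": "2023-05-03T09:30", "resource": "sg-123",
     "event": "RevokeSecurityGroupIngress", "user": "bob"},
    {"time": "2023-05-02T12:00", "resource": "i-42",
     "event": "StopInstances", "user": "alice"},
]
for resource, history in build_timelines(events).items():
    print(resource, history)
```

&lt;p&gt;Note that each entry records only the change event, not the full configuration at that moment, which is exactly the comparison limitation described above.&lt;/p&gt;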

&lt;h2&gt;Using the Cloud’s Change History Feature&lt;/h2&gt;

&lt;p&gt;Some vendors have implemented their own asset history feature, such as Google Cloud's Change History Tab as part of their Asset Inventory Service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4LnfWjee--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/loylch86l42othg2iryv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4LnfWjee--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/loylch86l42othg2iryv.png" alt="Image description" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Its benefits are similar to the benefits of the audit tool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud change history tools allow you to monitor and track changes made to your resources, even those that are not managed through code. This ensures that no manual changes go unnoticed, providing a comprehensive overview of your resource's history.&lt;/li&gt;
&lt;li&gt;The timeline view provided by the vendor's change history tool makes it convenient to visualize the sequence of events and alterations that occurred to a specific resource. This chronological representation enhances the understanding of the resource's history.&lt;/li&gt;
&lt;li&gt;The compare feature enables you to compare a resource's configuration between two selected points in time. This comparison helps you understand the specific modifications that took place, facilitating analysis and troubleshooting.&lt;/li&gt;
&lt;li&gt;Living in the same console as your cloud infrastructure is a real benefit: you don’t need to go anywhere else, and everything is up to date because it’s all in one place.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But there are also some disadvantages to these vendors’ asset history features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vendor dependency: Each vendor has its own cloud history tool, and some vendors may not provide this feature at all. This can pose challenges if your infrastructure is spread across multiple vendors, requiring you to work with different systems to access the history of your resources.&lt;/li&gt;
&lt;li&gt;Limited insights into IaC states: These vendor tools typically do not consider infrastructure as code (IaC) and provide limited information in this regard. Having the ability to compare the IaC state of a resource with its actual state as part of the resource's timeline would be powerful for understanding drift and troubleshooting.&lt;/li&gt;
&lt;li&gt;Lack of deleted resource tracking: One notable disadvantage of relying solely on the vendor's asset history feature is the absence of a view for deleted resources. Most vendor consoles do not provide a mechanism to track and retain history for assets that have been deleted. Consequently, once a resource is deleted, its history is lost, making it challenging to trace back and understand the reasons behind its deletion. This limitation emphasizes the need for alternative approaches or tools if preserving the history of deleted resources is essential for your workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Let’s Compare What’s Important&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zv1PVcHf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hx6ksxvb5bd4bm8wvk6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zv1PVcHf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hx6ksxvb5bd4bm8wvk6x.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It appears that audit trails, vendors’ built-in asset history, and Git are all useful tools for reviewing the history of your resources, but each has its own limitations. It would be ideal if there were a tool that combined the advantages of all three methods mentioned here.&lt;/p&gt;


&lt;h2&gt;Using Firefly Asset History&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://app.gofirefly.io/"&gt;Firefly Asset History&lt;/a&gt; provides a convenient solution that combines the benefits of both worlds being multi and cross-cloud while also fusing the added value from both Git and audit tools, to provide a comprehensive view of your inventory. By leveraging the timeline capabilities, you can easily track when a resource was changed and the details of its configuration modifications.&lt;/p&gt;

&lt;p&gt;Here is an example of a timeline for a CloudWatch Log Subscription Filter whose destination ARN was changed to a different environment. We can see when the change occurred:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XAFbKKOd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bt2f0ccpxbdly93zt889.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XAFbKKOd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bt2f0ccpxbdly93zt889.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we want to revert the change to the resource, we can click “Codify Revision” to codify the previous configuration into the IaC language we use for our infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BJL1aYxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/11lyu9a4gywy26mcwyi6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BJL1aYxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/11lyu9a4gywy26mcwyi6.png" alt="Image description" width="800" height="671"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is no need to manually copy and paste the configuration to the IaC, as you can use Firefly to create a pull request to your Git repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4m0yy-s2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/np6irkrjed634wfrgetv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4m0yy-s2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/np6irkrjed634wfrgetv.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Firefly is a one-stop shop when it comes to asset history, covering all your cloud and SaaS vendors. With one click on the Asset History label, you can access every resource for which history is enabled. For example, in the screenshot below we see resources from AWS and from the SaaS provider Cloudflare, with full Asset History available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R22Qhp2T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cbcihpwjet7iylxt6c0k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R22Qhp2T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cbcihpwjet7iylxt6c0k.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even if an asset has been deleted, Firefly has you covered. With Asset History, you can view deleted assets and their previous configurations just like any other active asset in your inventory, so you don’t lose critical configuration history to an accidental deletion (or even an intentional deletion made without backing up the configuration).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i2zEbOtC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jh8qy5x8jj6yynq5n5ul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i2zEbOtC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jh8qy5x8jj6yynq5n5ul.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x_D4uiB---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2l4ey44qp0ybs4vwkwra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x_D4uiB---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2l4ey44qp0ybs4vwkwra.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Let’s Compare Again&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P0g0mJ5i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgvt8d1sgvdjg3k0znfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P0g0mJ5i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgvt8d1sgvdjg3k0znfw.png" alt="Image description" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;After exploring the advantages and disadvantages of popular tools such as Git, audit services like AWS CloudTrail, and Google Cloud’s Change History, it is clear that they are critical tools in our cloud ops toolkit. However, with the growing number of clouds, tools, and services, many DevOps teams are opting for consolidation and prefer to access information in a unified system and format. By providing comprehensive resource history and the ability to restore and revert past configurations, Firefly makes this possible cross-cloud, and even for SaaS application configurations.&lt;/p&gt;


&lt;p&gt;Firefly offers the benefits of all the tools mentioned in this article by combining the traceability of audit tools with the reversibility of Git. With Firefly, you can have complete confidence that you won’t miss any changes or events affecting your assets, and peace of mind that you can revert changes.&lt;/p&gt;

&lt;p&gt;So go ahead and &lt;a href="https://app.gofirefly.io/"&gt;give it a try&lt;/a&gt;, and see the difference it can make for your team.&lt;/p&gt;

</description>
      <category>version</category>
      <category>git</category>
      <category>scm</category>
      <category>iac</category>
    </item>
    <item>
      <title>Okta as Code - Identity Management in the Cloud Native Era</title>
      <dc:creator>Eran Bibi</dc:creator>
      <pubDate>Sun, 09 Apr 2023 07:39:21 +0000</pubDate>
      <link>https://forem.com/gofirefly/okta-as-code-identity-management-in-the-cloud-native-era-165e</link>
      <guid>https://forem.com/gofirefly/okta-as-code-identity-management-in-the-cloud-native-era-165e</guid>
      <description>&lt;p&gt;We have previously written posts on how to manage &lt;a href="https://www.gofirefly.io/blog/top-aws-unmanaged-resources-that-you-should-know-about"&gt;uncodified legacy apps&lt;/a&gt; on different platforms like AWS.  In this post, we’d like to take a look at the very popular Okta platform, which provides some of the largest companies in the world cloud-based identity and access management (IAM) for secure authentication and authorization for applications, devices, and users. &lt;/p&gt;

&lt;p&gt;‍Okta is likely the most popular choice for organizations that need to manage access to their digital resources across multiple applications and environments, as it provides a range of features that make it popular for authentication and authorization, such as:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Single sign-on (SSO)&lt;/li&gt;
    &lt;li&gt;Multi-factor authentication (MFA)&lt;/li&gt;
    &lt;li&gt;User management&lt;/li&gt;
    &lt;li&gt;Integration with a wide range of APIs and SDKs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post we’d like to dive into why managing Okta configuration as code, like any other cloud application’s, should be a best practice; the diversity of challenges that codifying your Okta will help you overcome; and how to get started migrating your manual Okta configuration to as-code configuration.&lt;/p&gt;

&lt;h2&gt;Manually Configuring Okta Gone Wrong&lt;/h2&gt;

&lt;p&gt;Okta is a forward-thinking cloud-based IAM platform, built with extensible APIs that enable it to align with IaC best practices. However, the teams that usually manage Okta implementations come largely from the IT domain rather than DevOps, and are either less familiar with this approach to configuring Okta or don’t have the domain expertise to apply it.&lt;/p&gt;

&lt;p&gt;Let’s get started with going through some of the challenges Okta-as-Code can help overcome, and good ways to get started with codifying your Okta operation.&lt;/p&gt;

&lt;h3&gt;Policy as Code for Okta - Misconfiguration Prevention&lt;/h3&gt;

&lt;p&gt;Like any other cloud assets, Okta too can be prone to human misconfigurations and errors––which is why the world of cloud and containers has endless amounts of tools to help identify and mitigate such errors before being deployed to production.  However, since Okta is oftentimes manually configured, there aren’t even the same safeguards for rollback, history tracking, revisions and more.&lt;/p&gt;

&lt;p&gt;‍By configuring Okta manually, and not as code––you lose the opportunity to apply CI/CD to your code, PR/MR workflows and gating, as well as the guardrails available in these processes for reviewing your code, ensuring its quality, and then deploying to production. There are countless horror stories on the internet of manual configurations gone wrong, with no ability to roll these back or recover the previous state when manually configured.  Even Okta, due to its high level of security, does not have any way to recover the accounts of admins who mistakenly deploy misconfigurations that cannot be reverted through git practices. &lt;/p&gt;

&lt;p&gt;Once you have defined policy best practices in your organization, it is important to apply them globally to your entire cloud and SaaS-based operation, whether it’s DevOps tools or IAM platforms. This is where IaC can help level up your Okta operation.&lt;/p&gt;

&lt;p&gt;‍Leveraging IaC for your Okta configuration will enable you to gain the known benefits git makes possible, including history tracking, change management, revision approval workflows, and CI/CD gating for your Okta configurations.  In addition to this, you can take this one step further and apply the relevant policies to your code in your CI/CD pipelines, fail/pass build based on configuration checks and much more, before deploying to production.&lt;/p&gt;

&lt;p&gt;‍One more critical inherent value made possible through IaC, is the ability to create a backup of your configuration in Git.  Aside from change management, this can also be a distributed, highly-available form of disaster recovery in the event of a ransomware attack or a hostile takeover of the admin account, which can reduce stress in a high-stress scenario.&lt;/p&gt;

&lt;h3&gt;ClickOps Doesn’t Scale - IaC to the Rescue&lt;/h3&gt;

&lt;p&gt;Once you choose to configure your Okta as code there are immediate benefits you derive in the form of automation, on top of the safety and guardrails these make possible.  As noted above Okta comes with out of the box SSO, MFA, user management and integration with third-party software.&lt;/p&gt;

&lt;p&gt;‍Let’s provide a quick overview of how this works.  &lt;/p&gt;

&lt;p&gt;‍For SSO, Okta enables users to authenticate themselves once and gain access to multiple applications without the need to enter separate credentials for each one. This reduces the risk of weak or reused passwords and simplifies the user experience.&lt;/p&gt;

&lt;p&gt;‍In addition, most large enterprises today require at the very least 2FA if not MFA for accessing company resources. Okta supports a range of MFA methods, including SMS, voice, and push notifications, as well as hardware tokens and biometric factors. This adds an extra layer of security to the authentication process, making it more difficult for attackers to gain access to sensitive resources.&lt;/p&gt;

&lt;p&gt;‍When it comes to user management, Okta allows administrators to manage user accounts and permissions across multiple applications and environments from a single console. This simplifies the process of provisioning and de-provisioning users, and helps ensure that access is granted and revoked in a timely manner.&lt;/p&gt;

&lt;p&gt;‍Finally, Okta provides a range of APIs and SDKs that make it easy to integrate with other applications and platforms. This allows organizations to leverage their existing technology investments while adding an additional layer of security to their authentication and authorization processes.&lt;/p&gt;

&lt;p&gt;As you can see, that is a very wide range of mission-critical capabilities that security-minded enterprises require in order to properly manage access and authorization at scale. Doing this manually for just tens of users sounds ludicrous, let alone for hundreds, thousands, or even tens of thousands of employees.&lt;/p&gt;

&lt;p&gt;By codifying your Okta config, you can also automate your processes around onboarding, revising, and removing users; changing their access to different applications; adding applications; and removing OAuth access, to name just a few of the activities administrators perform daily to keep organizational assets safe. Without automation this becomes a nearly impossible feat, and you also lose the ability to track changes and history over time, or any other critical information about managing critical company assets.&lt;/p&gt;


&lt;h3&gt;Okta Config as Code in Action&lt;/h3&gt;

&lt;p&gt;All of this is why the Okta Terraform provider has millions of all-time downloads, and tens of thousands more every month. In the example below we’ll demonstrate how you can quickly take your manual &lt;a href="https://info.gofirefly.io/okta-configuration" rel="noopener noreferrer"&gt;Okta config&lt;/a&gt; and convert it to Terraform code, so you have the power of Git coupled with the automation your cloud-based IAM requires.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://info.gofirefly.io/okta-configuration"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hxoqadrl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lazel9jwv65fsmcj34h4.png" alt="Image description" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Treat Your Okta Like All Your Cloud Assets&lt;/h2&gt;

&lt;p&gt;Overall, Okta's popularity for authentication and authorization is driven by how it changed cloud-based IAM for the better, with its ease of use, flexibility, integration features for the diversity of applications used today, and robust security features. By helping organizations improve their security posture and streamline their access management processes, it has become nearly the de facto tool for any organization that needs to manage access to its digital resources.&lt;/p&gt;

&lt;p&gt;‍With such scale and adoption, Okta can benefit from applying the same as-code best practices from other disciplines and cloud worlds, which is possible due to its open APIs that are quite powerful.  Many organizations have learned to leverage Terraform to codify their Okta configurations, and those who have not yet, should most certainly get started with doing so.  Okta as code will enable greater safety, guardrails, recovery, tracking and change management, alongside greater scale and automation.  With today’s tooling, Terraform providers and more you can quickly migrate your existing configurations to code and provide your organization with greater confidence and security of your IAM operation.&lt;/p&gt;

</description>
      <category>iac</category>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>saas</category>
    </item>
    <item>
      <title>No Chance Event! How We Built Our Event-Driven Architecture</title>
      <dc:creator>Gal Cohen</dc:creator>
      <pubDate>Tue, 03 Jan 2023 13:17:54 +0000</pubDate>
      <link>https://forem.com/gofirefly/no-chance-event-how-we-built-our-event-driven-architecture-476f</link>
      <guid>https://forem.com/gofirefly/no-chance-event-how-we-built-our-event-driven-architecture-476f</guid>
      <description>&lt;p&gt;The event-driven architecture pattern is a popular asynchronous architecture pattern which pushes information as events happen. It is a way of building systems that enable information flow between chained services in a real-time manner as events occur.&lt;/p&gt;

&lt;p&gt;Event-driven architecture, as the name implies, is a way to load-balance and filter data from a large number of events, from which insights or actions are then derived. A good example is the deluge of events received from every platform all the time, whether it’s social media or your average run-of-the-mill dev tool; often you’ll see these alerts batched together into “X people liked your post”. This post describes why and how this is done.&lt;/p&gt;

&lt;h2&gt;A Typical Event Load Story&lt;/h2&gt;

&lt;p&gt;Let’s take another, more technical example: IP filtering. A common scenario is when, for security purposes, a company’s CISO requests that you start monitoring and filtering traffic based on the reliability and safety of IP addresses. You usually start by creating filtering logic such as &lt;code&gt;IsDangerousIP&lt;/code&gt; on all incoming traffic. This quickly begins to fail: running such a heavy, resource-intensive check on every packet that comes through the router doesn’t scale, particularly when enormous amounts of traffic flow through all the time. It is simply impossible to run this logic on that mass of packets.&lt;/p&gt;

&lt;p&gt;What quickly becomes obvious though, is that many of the IP addresses are actually the same, oftentimes due to ongoing sessions between entities, and this is actually a huge waste of resources––running the same complicated heavy logic over and over again.&lt;/p&gt;

&lt;p&gt;This is when a load balancing solution needs to be implemented. &lt;/p&gt;

&lt;p&gt;We encountered a similar breakdown in our event filtering due to a huge amount of data streaming in, and also needed to find a solution to handle the load.  As our serverless architecture is built upon DynamoDB as our primary data store, we decided to research whether there could possibly be a cloud service that could do the heavy lifting for us, and learned about DynamoDB Streams.  In this post, we’ll describe how it works, and provide a real code example for you to do it yourselves. (You can try this at home!)&lt;br&gt;
‍&lt;/p&gt;
&lt;h2&gt;Event-Driven Architecture in Cloud Operations&lt;/h2&gt;

&lt;p&gt;In order for Firefly to be able to analyze user behavior and provide insights to clients regarding anomalies in their cloud systems such as drift &amp;amp; unmanaged resources, many similar events need to be parsed so that relevant insights can then be extracted based upon this data. Therefore, it was quickly understood that similar events would need to be aggregated into batch notifications to ensure our users aren’t overwhelmed by too much information. &lt;/p&gt;

&lt;p&gt;As in the IP filtering example, many of these alerts are not unique or important enough to consume individually, and we understood we would need to aggregate similar events together, for both cost effectiveness and resource utilization.&lt;/p&gt;

&lt;p&gt;We understood that if we were to build such a transactional microservices event-aggregation mechanism ourselves, these are the elements we would need to design and implement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deduplication of events in the same scope: duplicate events in the same time slot can throttle downstream services and ultimately become very costly&lt;/li&gt;
&lt;li&gt;Managing and holding state: the component that defines the notification order has multiple writers, so its state must be safe for concurrent writes&lt;/li&gt;
&lt;li&gt;Filtering: treating records as an event scope rather than as standalone repetitive events, which requires filtering/deduplicating events within the time slot&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With DynamoDB Streams this is built in, out of the box, and it’s just a matter of writing the Lambda functions that handle the logic. This is what it looks like in action.&lt;/p&gt;
&lt;h2&gt;
  
  
  Dynamo Streams for Event-Driven Lambda
&lt;/h2&gt;

&lt;p&gt;DynamoDB Streams are basically a time-ordered flow of information about item-level modifications in a DynamoDB table: whenever items are created, updated, or deleted in a table, DynamoDB Streams writes a stream record with the primary key attributes of the modified items. So if we go back to the example of incoming traffic filtering, DynamoDB Streams is an excellent candidate for the task.&lt;/p&gt;
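&lt;p&gt;To make that concrete, a stream record for a newly inserted IP item looks roughly like this (abridged; the address value is made up for illustration):&lt;/p&gt;

```json
{
  "eventName": "INSERT",
  "dynamodb": {
    "Keys":     { "IPAddress": { "S": "203.0.113.7" } },
    "NewImage": { "IPAddress": { "S": "203.0.113.7" } },
    "StreamViewType": "NEW_IMAGE"
  }
}
```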

&lt;p&gt;Below we’ll walk you through how to build a solution for IP filtering, based on specific logic, without bogging down your system resources.&lt;/p&gt;

&lt;p&gt;To get started, you need to be able to quickly aggregate and analyze the incoming packets’ IP addresses.  We do this by upserting each one as an item into a DynamoDB table, in order to collect all the unique IP addresses seen in packets within a specific period of time. That’s the easy part; this is basically using DynamoDB in its typical fashion.&lt;/p&gt;

&lt;p&gt;This is where it gets fun. Next, in order to get all of the distinct IPs seen in a fixed period of time, DynamoDB Streams lets us easily forward them to a designated Lambda function that runs our &lt;code&gt;IsDangerousIP&lt;/code&gt; logic and alerts on any anomalous IPs, as needed.  With DynamoDB Streams it is possible to quickly build a transactional microservices event-aggregation mechanism where every IP address is processed once, efficiently and durably.&lt;/p&gt;

&lt;p&gt;To implement the IP filtering solution described above, we’ll create a stream that is essentially a Change Data Capture of every &lt;code&gt;create&lt;/code&gt; action performed on the DynamoDB table’s items, although the same logic can be applied to any action, such as &lt;code&gt;delete&lt;/code&gt; or &lt;code&gt;update&lt;/code&gt;, as well.  This means that with each action taken, a record describing the data modification is written to the stream. The stream is highly configurable: filtering the event data, or adding information such as the "before" and "after" images of modified items, is also possible and can be useful when you want to implement a smart update based on different attributes.&lt;/p&gt;

&lt;p&gt;This stream enables us to build applications that consume these events and take action based on their contents.&lt;/p&gt;
&lt;h2&gt;
  
  
  Build the IaC with Terraform!
&lt;/h2&gt;

&lt;p&gt;When it comes to IP filtering, we first need to start by properly configuring the infrastructure. This code example creates the DynamoDB table that will hold all the events waiting to be checked in the next stream batch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_dynamodb_table" "accumulator_table" {
  name           = "EventsAccumulator"
  billing_mode   = "PROVISIONED"
  read_capacity  = 200
  write_capacity = 20
  hash_key       = "IPAddress"
  stream_enabled = true
  stream_view_type = "NEW_IMAGE"
  attribute {
    name = "IPAddress"
    type = "S"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will build the lambda function that is responsible for detecting every dangerous IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_function" "detector" {
  function_name = "dangerous-ips-detector"
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = "main"

  filename         = "lambda_function.zip"
  source_code_hash = filebase64sha256("lambda_function.zip")
  runtime = "go1.x"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;IMPORTANT:&lt;/em&gt; The most critical piece when configuring the Lambda event source mapping is to limit the function to extracting only INSERT events from the DynamoDB stream, and not other events that may come in, such as MODIFY events. This ensures the function retrieves only the distinct list of IPs whose events occurred in the fixed period of time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_event_source_mapping" "aws_event_streamer_trigger" {
  event_source_arn  = aws_dynamodb_table.accumulator_table.stream_arn
  function_name     = aws_lambda_function.detector.arn
  batch_size        = var.batch_size
  maximum_batching_window_in_seconds        = var.batching_window_in_seconds
  starting_position = "LATEST"
  filter_criteria {
    filter {
      pattern = jsonencode({
         "eventName": [ "INSERT" ]
      })
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let’s take a minute to look at some of the interesting arguments we can configure here: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;batch_size&lt;/code&gt; - Enables us to define the largest number of records the Lambda will retrieve from the event source at the time of invocation. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This protects your systems from throttling the lambda function at peak times.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;maximum_batching_window_in_seconds&lt;/code&gt; - This defines, in seconds, the maximum window of time to gather records before invoking the function. Records will continue streaming in until either the &lt;code&gt;maximum_batching_window_in_seconds&lt;/code&gt; expires or the &lt;code&gt;batch_size&lt;/code&gt; has been met.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This allows you to dynamically configure the period of time to accumulate IP addresses until handling them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;filter_criteria&lt;/code&gt; - This defines the criteria used to filter the events coming from the event source. If an event satisfies at least one filter, the Lambda event source mapping sends the event to the function or adds it to the next batch. This is a powerful tool that supports creating complex pattern rules and customized streams for every use case. Learn more about this cool functionality here: DynamoDB Streams Filter Criteria.&lt;/p&gt;
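&lt;p&gt;As an illustration, patterns can go beyond the event name; a hypothetical pattern like the one below would match only INSERT events whose new image’s IP address starts with a given prefix:&lt;/p&gt;

```json
{
  "eventName": ["INSERT"],
  "dynamodb": {
    "NewImage": {
      "IPAddress": { "S": [{ "prefix": "203.0." }] }
    }
  }
}
```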

&lt;h2&gt;
  
  
  Show me the Code
&lt;/h2&gt;

&lt;p&gt;For every incoming packet we upsert the IP address into the DynamoDB table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;key := map[string]*dynamodb.AttributeValue{
    "IPAddress": {
        S: aws.String(CURRENT_IP_ADDRESS),
    },
}
output, err := dynamoService.UpdateItem(&amp;amp;dynamodb.UpdateItemInput{
  Key:       key,
  TableName: aws.String("EventsAccumulator"),
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The detector Lambda consumes events from the DynamoDB stream. Every item is deleted after processing, so that the next time the same IP address is upserted it is written as a fresh INSERT event and retrieved in a later batch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func main() {
    lambda.Start(handler)
}
func handler(ctx context.Context, e events.DynamoDBEvent) {
    for output, record := range e.Records {
        IPAddress := record.Change.NewImage["IPAddress"].String()
        ISDangerous = IsDangerousIP(item)
        if ISDangerous != nil {
            alert(IPAddress)
        }
      key := map[string]*dynamodb.AttributeValue{
         "IPAddress": {
          S: aws.String(IPAddress),
       },
      }
     _, err := dynamoService.DeleteItem(&amp;amp;dynamodb.DeleteItemInput{
        Key:       key,
        TableName: aws.String(tableName),
       })
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DynamoDB Streams makes it extremely easy to build transactional microservices with serverless architecture, something that was formerly a complex undertaking with a lot of backend logic to fulfill.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Transactional Microservices &amp;amp; Reducing Alert Fatigue
&lt;/h2&gt;

&lt;p&gt;When building an event-driven platform that needs big data to provide precise information, yet must remain useful rather than overwhelming, event aggregation is a good practice: it reduces alert fatigue, avoids consuming too many resources, and keeps costs down.  By being given aggregated data, users can still benefit from near real-time information, without the downside of having to filter and sift through too much of it to understand what is actually happening in their system.&lt;/p&gt;

&lt;p&gt;With the growing scale of cloud operations today, our tools need to work for us, and not the other way around.  In the same vein, we understood that cloud services have evolved immensely and that many big data companies suffer from the same pains––so perhaps we didn’t have to reinvent the wheel.  By leveraging an existing cloud service, we were able to automate much of the heavy lifting and focus only on the business logic and customer experience.  We hope you found this example useful if you have a similar tech stack.&lt;/p&gt;

</description>
      <category>eventdriven</category>
      <category>serverless</category>
      <category>dynamodb</category>
      <category>datastreaming</category>
    </item>
    <item>
      <title>Terraform Modules for Advanced Users</title>
      <dc:creator>Sefi Genis</dc:creator>
      <pubDate>Wed, 21 Dec 2022 14:59:27 +0000</pubDate>
      <link>https://forem.com/gofirefly/terraform-modules-for-advanced-users-4n56</link>
      <guid>https://forem.com/gofirefly/terraform-modules-for-advanced-users-4n56</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SW5Iom6E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhu8ktr6l4rd8bhvu4uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SW5Iom6E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhu8ktr6l4rd8bhvu4uj.png" alt="Image description" width="880" height="634"&gt;&lt;/a&gt;Terraform (TF) Modules have become an increasingly popular and efficient way to configure a diversity of applications and infrastructure as code. They are a catalog of repeatable templates that help you deploy Terraform code quickly and consistently. The most popular tools, clouds, and platforms have a Terraform Module – some of which have been downloaded tens of millions of times. &lt;/p&gt;

&lt;p&gt;The great part about Terraform Modules is that they’re quite flexible and extensible, enabling you to write your own custom modules for proprietary applications as needed. There are some pretty great tutorials online that extensively cover the basics of getting started and writing your own Terraform Module; you can check out this post as a great reference for how to build your TF module.&lt;/p&gt;
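&lt;p&gt;For a sense of scale, a custom module is just a directory of Terraform files with inputs and outputs; a minimal skeleton (all names here are illustrative) looks like this:&lt;/p&gt;

```hcl
# modules/logging-bucket/main.tf -- a minimal custom module (illustrative)
variable "name" {
  type = string
}

resource "aws_s3_bucket" "this" {
  bucket = var.name
}

output "arn" {
  value = aws_s3_bucket.this.arn
}
```

&lt;p&gt;Callers then pull it in with a &lt;code&gt;module&lt;/code&gt; block whose &lt;code&gt;source&lt;/code&gt; points at that directory, e.g. &lt;code&gt;source = "./modules/logging-bucket"&lt;/code&gt;.&lt;/p&gt;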

&lt;p&gt;When leveraging public-facing modules––like any other resource taken off the public web, whether open source tools and utilities or even container images––there are some good practices to keep in mind when selecting your module of choice, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is well-maintained, with good security hygiene (e.g. quick patches for high-severity vulnerabilities)&lt;/li&gt;
&lt;li&gt;It maintains backwards compatibility across upgrades, so that new versions don’t break anything&lt;/li&gt;
&lt;li&gt;It has good release cycle management of major / minor versions&lt;/li&gt;
&lt;li&gt;It has a strong community, reflected in stars, forks, and even how quickly issues are resolved in the repo&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These can provide a good understanding of whether these modules are recommended for adoption in your stack, or whether they will cause you future heartache––so be sure to vet the modules you choose before you apply them to your code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advanced Use Cases with Terraform Modules
&lt;/h3&gt;

&lt;p&gt;When it comes to creating our own modules, it’s best to start first with a design of your systems and decide the level of coverage you want for your modules. While modules are great for composability, collaboration and automation––like all code, each requires its own share of maintenance. &lt;/p&gt;

&lt;p&gt;It is most certainly possible to create a module per component, whether it’s Lambda services, IAM users and roles, policies, or even backend and frontend components. However, the decision to create a module should always be driven by the value and ROI of controlling the code outweighing the long-term maintenance and overhead of supporting the module in perpetuity. If it’s easier to leverage a public resource and customize it to your needs, that may sometimes be sufficient, and it saves having to write the module from scratch. &lt;/p&gt;

&lt;p&gt;There are also plenty of modules that bundle components together, such as infrastructure/runtime, a database like RDS and IAM roles. Anton Babenko’s &lt;a href="https://github.com/terraform-aws-modules"&gt;AWS module library&lt;/a&gt; is just one example of a well-maintained library of resources you can leverage for nearly every AWS use case and component bundle.&lt;/p&gt;
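&lt;p&gt;Consuming one of these bundled modules is typically just a source, a version constraint, and a few inputs; for example (the module source is real, the input values are illustrative):&lt;/p&gt;

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "demo-vpc"
  cidr = "10.0.0.0/16"
}
```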

&lt;p&gt;There are also situations where modules are great provisioning tools, but where connecting multiple, diverse providers adds complexity that must be taken into consideration. Take the use case of MongoDB Atlas on AWS with VPC Peering: here you would need to ensure that your modules are connected from both sides, MongoDB to AWS and AWS to MongoDB, for them to work properly. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W3MrUmxx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oyfeiqdlcumh2a9bgfmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W3MrUmxx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oyfeiqdlcumh2a9bgfmv.png" alt="Image description" width="880" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The good part is that Terraform Modules have a great community and developer tools, so even the more advanced use cases are well-documented. That said, there is also the reverse scenario, where leveraging modules can actually be the best practice and prevent poor operations hygiene like infrastructure drift and other issues, but may sometimes be an afterthought.&lt;/p&gt;

&lt;h3&gt;
  
  
  Leveraging Modules for ClickOps Components
&lt;/h3&gt;

&lt;p&gt;All of the examples above are great when we start by creating our resources as code in the cloud. But what happens when we have manually created resources in the cloud via ClickOps? This is a very common practice in operations, particularly with resources created pre-infrastructure as code. What if later we want to leverage a module, particularly a bundled module, that is much more aligned with best practices? How can this be done without causing breakage?&lt;/p&gt;

&lt;p&gt;Let's take an example of creating an S3 bucket manually. In our scenario, this manually created resource will not have encryption, versioning, ACL (access lists), or anything else. Therefore, if we'd like to switch to using a TF Module that has these components, it won't match our current, manually created bucket's configuration. In such a scenario, will we now be stuck with our manually provisioned resource forever that has no encryption or versioning?&lt;/p&gt;

&lt;p&gt;This is exactly the type of situation where we can create drift, if our code and resources are not aligned, which can cause volatility in our clouds. So what’s a good way to overcome this?&lt;/p&gt;

&lt;p&gt;The first step is to actually convert our resource to code, and then import it into the Terraform state file. Ok - good! But this is still a bare-bones resource with none of the additional features such as versioning. The next thing you can do is import the module into your newly created Terraform code. This way you import all of the added configuration that you want to apply to your S3 bucket, without having to write it yourself.&lt;/p&gt;
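&lt;p&gt;For example, the bare-bones codified bucket might be nothing more than this (the resource name and &lt;code&gt;&amp;lt;BUCKET_NAME&amp;gt;&lt;/code&gt; placeholder are illustrative), pulled into state with &lt;code&gt;terraform import aws_s3_bucket.main_bucket &amp;lt;BUCKET_NAME&amp;gt;&lt;/code&gt;:&lt;/p&gt;

```hcl
# The minimal resource matching a manually created bucket:
# no encryption, versioning, or ACL configuration yet.
resource "aws_s3_bucket" "main_bucket" {
  bucket = "<BUCKET_NAME>"
}
```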

&lt;p&gt;The next time you run &lt;code&gt;terraform apply&lt;/code&gt; it will prompt you with the many changes you’re making to your bucket, and you will just need to confirm that these are OK. Once the changes apply, your bucket will be upgraded with the additional functionality you wanted to apply to your resource.&lt;/p&gt;

&lt;p&gt;That’s great for one-off resources created manually, but how do we do this at scale? What if we have hundreds of manually provisioned S3 buckets with no encryption, ACL or versioning? What now?&lt;/p&gt;

&lt;p&gt;EUREKA! This is where a tool like Firefly comes in. It enables you to select all of the resources you’d like to update, select a publicly available module or even upload a custom module, and apply the changes to all of your resources that require upgrading / changing / modification at once.&lt;br&gt;
&lt;br&gt;
See it in action:&lt;/p&gt;

&lt;p&gt;RUN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;aws&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;s3&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main_bucket&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="nx"&gt;BUCKET_NAME&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see in the code the diff between your existing resource and the imported module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-s3-bucket"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"github.com/terraform-aws-modules/terraform-aws-s3-bucket"&lt;/span&gt;

 &lt;span class="nx"&gt;object_lock_enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
 &lt;span class="nx"&gt;bucket&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;BUCKET_NAME&amp;gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;These&lt;/span&gt; &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;listed&lt;/span&gt; &lt;span class="nx"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="nx"&gt;but&lt;/span&gt; &lt;span class="nx"&gt;not&lt;/span&gt; &lt;span class="nx"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;dependencies&lt;/span&gt; 
&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;hence&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;be&lt;/span&gt; &lt;span class="nx"&gt;created&lt;/span&gt; &lt;span class="nx"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt; &lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;):&lt;/span&gt; 
&lt;span class="c1"&gt;# aws_s3_bucket_logging&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_acl&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_website_configuration&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_versioning&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_server_side_encryption_configuration&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_accelerate_configuration&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_request_payment_configuration&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_cors_configuration&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_lifecycle_configuration&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_object_lock_configuration&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_replication_configuration&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_policy&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_public_access_block&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_ownership_controls&lt;/span&gt;
&lt;span class="c1"&gt;# aws_s3_bucket_intelligent_tiering_configuration&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you click &lt;code&gt;apply&lt;/code&gt; (like in the example with the single resource), you can then update all of the selected resources with one single action. This method enables you to write, import, and govern your manually created resources at scale, with minimal pain.&lt;/p&gt;

&lt;h3&gt;
  
  
  TL;DR - Terraform Modules Made Easy
&lt;/h3&gt;

&lt;p&gt;So just to wrap it up, Terraform Modules have changed the game for codifying resources, bundling services, components, and enabling easy cloud infrastructure governance in the long run. The framework makes it truly simple to connect more common and proprietary resources through publicly available and maintained modules alongside custom-built modules (that take some research, but are eventually pretty easy to create, configure and maintain). Terraform Modules are a great way to ensure your clouds are aligned with best practices, and they can still be leveraged for resources that were not created as code.&lt;/p&gt;

&lt;p&gt;It is most certainly recommended to upgrade manually created resources to infrastructure as code, to gain all of the benefits derived from codified resources––security, policy and governance, automation, and extensibility. You can also leverage some great tools to make these transitions at scale, and not leave any resources unmanaged in your clouds.&lt;/p&gt;

</description>
      <category>iac</category>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Doing DevOps Like Skynet with ChatGPT</title>
      <dc:creator>Eran Bibi</dc:creator>
      <pubDate>Fri, 09 Dec 2022 12:04:02 +0000</pubDate>
      <link>https://forem.com/gofirefly/doing-devops-like-skynet-with-chatgpt-2n6e</link>
      <guid>https://forem.com/gofirefly/doing-devops-like-skynet-with-chatgpt-2n6e</guid>
      <description>&lt;p&gt;If you've been anywhere on the interwebs in the last week, you have likely heard about &lt;a href="https://openai.com/blog/chatgpt/" rel="noopener noreferrer"&gt;ChatGPT&lt;/a&gt;. &lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; announced the next generation of their natural language processing AI model, called ChatGPT–that you can chat with about basically any topic under the sun.&lt;/p&gt;

&lt;p&gt;People have been updating their avatars with AI generated avatars, based on ChatGPT-created prompts, and posting endless screenshots of their disbelief at the quality and breadth of ChatGPT's knowledge.&lt;/p&gt;

&lt;p&gt;Here are just a few examples of what ChatGPT is capable of:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1600623026730545153-370" src="https://platform.twitter.com/embed/Tweet.html?id=1600623026730545153"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;One area that really has the tech industry taken by storm is ChatGPT's capability of writing code. Some have even claimed that it will replace Google and Stack Overflow. We found this intriguing and wanted to see how it holds up with infrastructure configuration. So we took ChatGPT on a test drive.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1600290313276645376-794" src="https://platform.twitter.com/embed/Tweet.html?id=1600290313276645376"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;One of the greatest values folks are excited about with this next-generation AI is how it handles the repetitive and mundane tasks engineers hate dealing with. One such area is IaC templates, so our first attempt was to see how ChatGPT would deliver on creating an IaC template and modules.&lt;/p&gt;


&lt;p&gt;Let's take a look.&lt;/p&gt;

&lt;h3&gt;
  
  
  PROMPT #1: Create a Terraform template for a highly available EKS cluster
&lt;/h3&gt;

&lt;p&gt;And the results are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;
&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Create an EKS cluster&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_eks_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"eks_cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-eks-cluster"&lt;/span&gt;
  &lt;span class="nx"&gt;role_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_cluster_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Create an IAM role for the EKS cluster&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"eks_cluster_role"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eks-cluster-role"&lt;/span&gt;

  &lt;span class="nx"&gt;assume_role_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;POLICY&lt;/span&gt;&lt;span class="sh"&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
&lt;/span&gt;&lt;span class="no"&gt;POLICY
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Attach the AmazonEKSClusterPolicy policy to the EKS cluster role&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"eks_cluster_policy_attachment"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_cluster_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;policy_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Create an Amazon Elastic Block Store (EBS) volume for the Kubernetes etcd data&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ebs_volume"&lt;/span&gt; &lt;span class="s2"&gt;"eks_etcd_volume"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;availability_zone&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_control_plane&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;availability_zone&lt;/span&gt;
  &lt;span class="nx"&gt;size&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"gp2"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Create an Amazon Elastic Compute Cloud (EC2) instance for the Kubernetes control plane&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"eks_control_plane"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ami-0ac019f4fcb7cb7e6"&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.medium"&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_control_plane_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;iam_instance_profile&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_instance_profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_control_plane_instance_profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;key_name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-ssh-key"&lt;/span&gt;

  &lt;span class="nx"&gt;root_block_device&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;volume_type&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"gp2"&lt;/span&gt;
    &lt;span class="nx"&gt;volume_size&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
    &lt;span class="nx"&gt;delete_on_termination&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eks-control-plane"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Create an IAM instance profile for the Kubernetes control plane EC2 instance&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_instance_profile"&lt;/span&gt; &lt;span class="s2"&gt;"eks_control_plane_instance_profile"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eks-control-plane-instance-profile"&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eks_control_plane_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Create an IAM role for the Kubernetes control plane EC2 instance&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"eks_control_plane_role"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eks-control-plane-role"&lt;/span&gt;

  &lt;span class="nx"&gt;assume_role_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;POL&lt;/span&gt;&lt;span class="sh"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Very cool! Should I be looking for a new job? Probably not yet… but it’s incredible to see how accurately ChatGPT can write infrastructure code from such limited prompts.&lt;/p&gt;

&lt;p&gt;However, with all due respect, let’s see if it’s up to the challenge of an imperative IaC tool like Pulumi.&lt;/p&gt;


&lt;h3&gt;
  
  
  Prompt #2: Create a Golang Pulumi that will create an S3 bucket with SNS topic notification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/pulumi/pulumi-aws/sdk/v2/go/aws/s3"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/pulumi/pulumi-aws/sdk/v2/go/aws/sns"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/pulumi/pulumi/sdk/v2/go/pulumi"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;pulumi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;pulumi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c"&gt;// Create an S3 bucket.&lt;/span&gt;
        &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewBucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"my-bucket"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BucketArgs&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c"&gt;// Add other bucket configuration options here.&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c"&gt;// Create an SNS topic for notifications.&lt;/span&gt;
        &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;sns&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewTopic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"my-topic"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;sns&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TopicArgs&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c"&gt;// Add other topic configuration options here.&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c"&gt;// Add an SNS topic subscription for the bucket.&lt;/span&gt;
        &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewBucketNotification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"bucket-notification"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BucketNotificationArgs&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;Bucket&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;TopicConfigurations&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BucketNotificationTopicConfigurationArray&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;s3&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BucketNotificationTopicConfigurationArgs&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="n"&gt;TopicArn&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Arn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="c"&gt;// Add other topic configuration options here.&lt;/span&gt;
                &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;What you can see in this code is that it uses the Pulumi AWS package for Go to create an S3 bucket and an SNS topic. It then configures the bucket to publish notifications to the topic using the &lt;code&gt;s3.NewBucketNotification&lt;/code&gt; function - an important added step that wasn’t directly requested.  Impressive.&lt;/p&gt;

&lt;p&gt;Let’s dig even deeper.  It would be interesting to see how ChatGPT manages repetitive, multi-step configuration workflows, such as CI/CD pipelines - although this time, I will be more specific with my ask.&lt;/p&gt;


&lt;h3&gt;
  
  
  Prompt #3: Create a Jenkins pipeline that will clone the code, scan it using Trivy, run Infracost and plan and apply infrastructure using Terraform
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="o"&gt;{&lt;/span&gt;
stage&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Clone repository"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
git url: &lt;span class="s2"&gt;"https://github.com/&amp;lt;username&amp;gt;/&amp;lt;repository&amp;gt;.git"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
stage&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Scan code with trivy"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
sh &lt;span class="s2"&gt;"trivy --exit-code 1 --severity CRITICAL &amp;lt;repository&amp;gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
stage&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Run infracost"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
sh &lt;span class="s2"&gt;"infracost --no-color"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
stage&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Plan infrastructure with terraform"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
sh &lt;span class="s2"&gt;"terraform plan"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
stage&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Apply infrastructure with terraform"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
sh &lt;span class="s2"&gt;"terraform apply"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s try a GitHub Actions pipeline.&lt;/p&gt;


&lt;h3&gt;
  
  
  Prompt #4: Create a Github Action that applies terraform code and sends me a Slack notification once done
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;terraform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Apply Terraform&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Apply Terraform&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/terraform-github-actions@v0.12.x&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;tf_actions_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.12.29&lt;/span&gt;
        &lt;span class="na"&gt;tf_actions_subcommand&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apply&lt;/span&gt;
        &lt;span class="na"&gt;tf_actions_working_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Send Slack notification&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shivammathur/slack-notify-action@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;slack_webhook_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SLACK_WEBHOOK_URL }}&lt;/span&gt;
        &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform apply complete&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Aside from the fact that ChatGPT uses an outdated version of Terraform (0.12.29), which is easily fixable with a small edit, the pipeline is what I was expecting. It picked an outdated version probably because its knowledge of the world and events largely ends in 2021; I assume that one day ChatGPT will know that Terraform has already released v1.x.&lt;/p&gt;


&lt;h2&gt;
  
  
  So What's the Verdict?
&lt;/h2&gt;


&lt;p&gt;ChatGPT is pretty amazing.  While the haters will talk about its shortcomings - and you may hold whatever opinions you like about Elon Musk - others see this as a new (and possibly scary) future:&lt;/p&gt;

&lt;p&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fze7g1ggbaw4qxsu2y8s6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fze7g1ggbaw4qxsu2y8s6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I, for one, welcome the machine overlords to manage the machines that manage us and our time today.&lt;/p&gt;

&lt;p&gt;I, for one, welcome the machine overlords to manage the machines that manage us and our time today.&lt;/p&gt;

&lt;p&gt;Think this is cool? We took this one step further and actually created an AI IaC-generating tool - you guessed it, called &lt;a href="https://aiac.dev/" rel="noopener noreferrer"&gt;AIaC&lt;/a&gt;.  Check it out, and start leveraging the power of machines to configure your machines.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>chatgpt</category>
      <category>ai</category>
      <category>iac</category>
    </item>
    <item>
      <title>Go CDK! What this Means for the World of IaC</title>
      <dc:creator>Roy Tal</dc:creator>
      <pubDate>Wed, 28 Sep 2022 17:15:42 +0000</pubDate>
      <link>https://forem.com/gofirefly/go-cdk-what-this-mean-for-the-world-of-iac-4e21</link>
      <guid>https://forem.com/gofirefly/go-cdk-what-this-mean-for-the-world-of-iac-4e21</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; &lt;br&gt;
CDK has landed, and the AWS community is hyped - but what exactly is CDK, how does it work, and what does it mean for the world of Infrastructure-as-Code?&lt;/p&gt;

&lt;p&gt;CDK has landed and the AWS community is hyped - it was all the rage at the last &lt;a href="https://www.youtube.com/watch?v=7SluZSZntKA" rel="noopener noreferrer"&gt;AWS re:Invent&lt;/a&gt;, and that’s why I’d like to talk about what CDK introduces to an IaC world.  CDK is a game changer for all things infrastructure as code for AWS resources, and in this post, I’m going to dive into why that is, what it introduces, and how you can get started and even migrate existing resources.&lt;/p&gt;

&lt;p&gt;So, just so we’re on the same page, what exactly is AWS CDK?  CDK, AKA the AWS Cloud Development Kit, is the successor to AWS’s CloudFormation, which has been used to date for automating and codifying AWS cloud resources.  CloudFormation’s inherent limitations opened the door for the popular infrastructure-as-code tools we have all grown to love, such as Terraform and Pulumi, which, while not AWS native, often delivered better IaC results than CloudFormation (particularly with diverse stacks).&lt;/p&gt;

&lt;p&gt;CDK introduces a completely new and universal way to configure cloud resources that doesn’t require a proprietary DSL or config language - like HCL or YAML - but rather is built to be configured in your programming language of choice, such as TypeScript, Python, Java, C#, and Go. The move to CDK is a result of two main factors.&lt;/p&gt;

&lt;p&gt;First, AWS is looking to provide a much more native experience for its very own cloud, which actually wasn’t very native with CloudFormation and led people to prefer non-AWS tooling to configure their cloud resources.  The second reason is that CloudFormation wasn’t built for the complexity and scale of cloud operations today, and simply couldn’t keep up.  When you need to write really complex IaC with many dependencies chaining one resource to another, building that dependency graph in CloudFormation (or really any config language) gets very long, and many times super buggy.&lt;/p&gt;
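&lt;p&gt;To see why code handles dependency chains more gracefully, here is a tiny, self-contained Go sketch (the &lt;code&gt;Subnet&lt;/code&gt; and &lt;code&gt;Instance&lt;/code&gt; types are illustrative inventions, not a real CDK API): a dependency is just a value passed from one constructor to the next, and the compiler enforces the graph for you.&lt;/p&gt;

```go
package main

import "fmt"

// Illustrative stand-ins for cloud resources; not a real CDK API.
type Subnet struct{ ID string }
type Instance struct{ SubnetID string }

// NewSubnet "provisions" a subnet and returns a handle to it.
func NewSubnet(name string) Subnet { return Subnet{ID: "subnet-" + name} }

// NewInstance cannot even be called until a Subnet exists to pass in,
// so the dependency ordering is checked at compile time - no DependsOn
// boilerplate, no hand-maintained graph.
func NewInstance(sub Subnet) Instance { return Instance{SubnetID: sub.ID} }

func main() {
	sub := NewSubnet("a")
	inst := NewInstance(sub) // the chain is just ordinary data flow
	fmt.Println(inst.SubnetID)
}
```

&lt;p&gt;In real CDK code the constructors come from the AWS construct libraries, but the dependency-as-value principle is the same.&lt;/p&gt;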

&lt;p&gt;Another CloudFormation limitation appears when working with multi-region operations (a common, and correct, practice for high-availability purposes).  To use CloudFormation across regions you’ll need to employ another AWS duct-tape service called StackSets, which makes your IaC even more complex.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s All the Hype?
&lt;/h2&gt;

&lt;p&gt;CDK not only enables developers - who today often have the added fun of having to configure their own environments - to write their cloud config in their favorite programming language; on top of this, it’s thoroughly Amazon oriented.  It has been built to work seamlessly with AWS services and to be much more AWS native than its predecessor, CloudFormation, and is now slated to be better than third-party tools for configuring AWS services and resources.&lt;/p&gt;

&lt;p&gt;The first improvement this introduces over existing IaC tools is that when you can configure your resources in the same programming language as your application’s code, you can use the same language libraries, and you don’t need to create a separate YAML repo just for IaC code. Your configurations live in the same repo and code library as any other code implementation.  The second improvement is that you gain all of the benefits of using your programming language for your configurations, in the form of code reuse, shared libraries and classes, environment variables and much more.&lt;/p&gt;
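&lt;p&gt;To make the code-reuse point concrete, here is a minimal, cloud-agnostic Go sketch (the &lt;code&gt;BucketSpec&lt;/code&gt; type and &lt;code&gt;bucketsForEnvs&lt;/code&gt; helper are illustrative, not part of any CDK or Pulumi API): one function stamps out per-environment configurations that would otherwise be copy-pasted across YAML files.&lt;/p&gt;

```go
package main

import "fmt"

// BucketSpec captures the settings we would otherwise repeat in YAML.
type BucketSpec struct {
	Name      string
	Versioned bool
}

// bucketsForEnvs derives one spec per environment from a shared template,
// the kind of reuse a static config language cannot give you.
func bucketsForEnvs(app string, envs []string) []BucketSpec {
	specs := make([]BucketSpec, 0, len(envs))
	for _, env := range envs {
		specs = append(specs, BucketSpec{
			Name:      fmt.Sprintf("%s-%s-assets", app, env),
			Versioned: env == "prod", // only version the production bucket
		})
	}
	return specs
}

func main() {
	for _, s := range bucketsForEnvs("myapp", []string{"dev", "staging", "prod"}) {
		fmt.Printf("%s versioned=%v\n", s.Name, s.Versioned)
	}
}
```

&lt;p&gt;In a real stack, each spec would be fed to a resource constructor rather than printed.&lt;/p&gt;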
&lt;h2&gt;
  
  
  How Does AWS CDK Stack Up Against Other IaCs?
&lt;/h2&gt;

&lt;p&gt;We’ll start with Terraform; since Pulumi is built upon Terraform’s provider ecosystem, some of this will hold true for both.  When it comes to third-party IaC tools, CDK isn’t the hands-down winner it clearly is vs. CloudFormation, which has been in need of an upgrade for many years.  Like all things in development, the choice of third-party IaC tooling highly depends on the use case.&lt;/p&gt;

&lt;p&gt;So the most obvious reason CDK would trump third-party tools for AWS config is that, unlike CloudFormation, CDK has been built to be highly native to AWS.  And while both Terraform and Pulumi work very well for the AWS config use case, they are not built solely for AWS, and there are times when you’ll run into edge cases where they are simply not a perfect fit.&lt;/p&gt;

&lt;p&gt;Here’s an example of such a scenario.  I’ve had times when I’ve written security groups (and even log groups) that depend on an instance, and I’ve wanted to delete the security group but not the instance, which I still need.  Terraform has failed in such scenarios; when you don’t delete the instance, it doesn’t always quite know how to proceed.  This holds true for Pulumi as well, which leverages the same underlying Terraform providers.  This is of course only one example, but many similar ones exist in the DevOps world.&lt;/p&gt;

&lt;p&gt;Just to make sure you’re really convinced: an even more frustrating scenario, which I’ve encountered even more often than the security group one, involves scaling groups.  When modifying the user data of a launch configuration attached to a scaling group, you’ll encounter similar behavior that is equally difficult or impossible to resolve.&lt;/p&gt;

&lt;p&gt;Third-party tools, however, are a much better fit when you are working with a diverse tool set - for example multiple clouds, or platforms such as Datadog and Auth0 that have better native integrations with third-party IaC tools.  Also note that the IaC providers have all understood that the move is towards CDK, and they are building their own native CDK implementations - such as &lt;a href="https://www.hashicorp.com/blog/cdk-for-terraform-enabling-python-and-typescript-support" rel="noopener noreferrer"&gt;CDK for Terraform&lt;/a&gt; and &lt;a href="https://cdk8s.io/" rel="noopener noreferrer"&gt;CDK8S&lt;/a&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  How Does AWS CDK Actually Work?
&lt;/h2&gt;

&lt;p&gt;This is probably the part you have been waiting for.  So how does AWS CDK actually work in practice, and how do you migrate existing resources?&lt;/p&gt;

&lt;p&gt;If you are already working in Golang - and have a repo written in Golang - migrating your IaC into Go is pretty easy, and the outcome is IaC that is fully attached to your existing project. This, of course, depends on the number of resources being migrated and their complexity, but overall the rewrite itself isn’t too hard; it might just be time consuming.&lt;/p&gt;

&lt;p&gt;You will need to rewrite all of your resources, as there currently isn’t a robust enough tool out there to provide backwards compatibility or to reverse engineer YAML into Golang, for example.  However, since you do have the benefits of code reuse, variables and classes, this is oftentimes not a very heavy lift.&lt;/p&gt;

&lt;p&gt;Also, it is possible to migrate with a hybrid approach until you manage to convert all of your resources.&lt;/p&gt;

&lt;p&gt;Another benefit is that you can actually see the diffs between the config files, so you can track how much progress you have made and what remains to be converted, and get a pretty accurate schedule for the migration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Show Me the Code‍
&lt;/h2&gt;

&lt;p&gt;Below we will dive into a real example of migrating your CloudFormation to CDK in Golang.&lt;br&gt;
&lt;br&gt;
This is what the code will look like for migrating a CloudFormation YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type CdkBasicStackProps struct {
    awscdk.StackProps
}

func CdkBasicStack(scope constructs.Construct, id string, props *CdkBasicStackProps) awscdk.Stack {

    /********************************
    ** set up CDK stack properties **
    *********************************/

    var stackProps awscdk.StackProps
    if props != nil {
        stackProps = props.StackProps
    }
    stack := awscdk.NewStack(scope, &amp;amp;id, &amp;amp;stackProps)

    /********************************************************
    ** create new resources straight through the CDK stack **
    *********************************************************/

    queue := awssqs.NewQueue(stack, jsii.String("ExampleQueue"), &amp;amp;awssqs.QueueProps{
        VisibilityTimeout: awscdk.Duration_Seconds(jsii.Number(300)),
    })

    bucket := awss3.NewBucket(stack, jsii.String("ExampleBucket"), &amp;amp;awss3.BucketProps{
        Versioned: jsii.Bool(true),
    })

    /****************************************************
    ** migrate Cloudformation stack into the CDK stack **
    ****************************************************/

    cfn := cloudformationinclude.NewCfnInclude(stack, jsii.String("ExampleMigratedStack"), &amp;amp;cloudformationinclude.CfnIncludeProps{
        TemplateFile: jsii.String("cfnTemplates/cfnInstance.yaml"),
    })

    fmt.Println(queue.QueueName(), bucket.BucketArn(), cfn.Stack())
    return stack
}

func main() {
    app := awscdk.NewApp(nil)

    CdkBasicStack(app, "CdkBasicStack", &amp;amp;CdkBasicStackProps{
        awscdk.StackProps{
            Env: &amp;amp;awscdk.Environment{
                Account: jsii.String(os.Getenv("CDK_DEFAULT_ACCOUNT")),
                Region:  jsii.String(os.Getenv("CDK_DEFAULT_REGION")),
            },
        },
    })
    app.Synth(nil)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this is the original CloudFormation file being migrated into the stack above - the &lt;code&gt;cfnTemplates/cfnInstance.yaml&lt;/code&gt; template referenced by &lt;code&gt;CfnInclude&lt;/code&gt;:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: "2010-09-09"
Description: A sample template
Resources:
  CFNMigrationExampleInstance:
    Type: "AWS::EC2::Instance"
    Properties:
      ImageId: "ami-0bf84c42e04519c85"
      InstanceType: t2.micro
      KeyName: sample-key-pair
      BlockDeviceMappings:
        -
          DeviceName: /dev/sdm
          Ebs:
            VolumeType: io1
            Iops: 200
            DeleteOnTermination: false
            VolumeSize: 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the CDK stack is deployed, you will be able to find the CDK artifacts in the AWS console, along with the resources that the sample code above created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmhpl68d20syy2en1yqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmhpl68d20syy2en1yqs.png" alt="CDK-Firefly"&gt;&lt;/a&gt;&lt;br&gt;
‍&lt;/p&gt;

&lt;h2&gt;
  
  
  The Case for AWS CDK
&lt;/h2&gt;

&lt;p&gt;So I hope this was a useful overview of what CDK is, and how and when to use it in your stacks and environments, with a practical example of a CloudFormation migration.  We expect to see much more traction around CDK, and many more integrations and examples rolled out in the coming weeks and months.&lt;/p&gt;

&lt;p&gt;We may also start to see useful tooling to help convert existing files at scale - some already exist, but they aren’t quite ready for showtime. So this is definitely a great AWS area to watch; start planning your move to CDK when the time is right for you and the fit is right for your stack.&lt;/p&gt;

</description>
      <category>cdk</category>
      <category>aws</category>
      <category>configuration</category>
      <category>iac</category>
    </item>
    <item>
      <title>How to Govern Terraform States Using GitLab Enterprise?</title>
      <dc:creator>Sefi Genis</dc:creator>
      <pubDate>Thu, 04 Aug 2022 08:23:00 +0000</pubDate>
      <link>https://forem.com/gofirefly/how-to-govern-terraform-states-using-gitlab-enterprise-3d7</link>
      <guid>https://forem.com/gofirefly/how-to-govern-terraform-states-using-gitlab-enterprise-3d7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; With the mass adoption of Terraform as the de facto tool for developers to build and manage their cloud infrastructure at scale, most companies that rely heavily on Terraform for their infrastructure management choose to do so with an orchestration tool. In this blog, we’ll review how to govern Terraform states using GitLab Enterprise.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Terraform has become the nearly ubiquitous way to provision services in a cloud native era.  However, when we start to build our infrastructure using Terraform’s as-code approach, there are a few things we need to consider in order to be able to manage these operations at scale, for a diversity of decentralized services, and for distributed teams.  At &lt;a href="https://gofirefly.io"&gt;Firefly&lt;/a&gt;, we often encounter the challenges of &lt;a href="https://www.gofirefly.io/blog/codify-your-saas-apps"&gt;managing IaC at scale&lt;/a&gt;, as part of our effort to help organizations discover and manage their many cloud assets.&lt;br&gt;
‍&lt;/p&gt;
&lt;h2&gt;
  
  
  Terraform Management at Scale
&lt;/h2&gt;

&lt;p&gt;With the mass adoption of Terraform as the de facto tool for developers to build and manage their cloud infrastructure at scale, most companies that rely heavily on Terraform for their infrastructure management choose to do so with an orchestration tool.  These tools complement the suite HashiCorp provides, helping teams get a handle on the many modules, providers, frameworks and services being provisioned at the sheer scale of cloud operations, as well as offering a remote backend to maintain the state of your infrastructure.&lt;/p&gt;

&lt;p&gt;Companies that choose a fully managed orchestration tool will oftentimes select HashiCorp’s very own Terraform Cloud (a SaaS solution); however, like with all tools that gain widespread popularity, there are many non-HashiCorp alternatives riding the wave of the tool’s success as well. Terraform Cloud’s benefits are a fully remote backend, native integration with GitHub, state versioning, and advanced features for infrastructure stakeholders such as platform engineers, DevOps teams and cloud engineers.&lt;/p&gt;

&lt;p&gt;However, there is another option gaining popularity for large-scale Terraform operations: the GitOps approach, where teams deploy their infrastructure using GitHub Actions or other built-in CI pipelines.  The most popular tool for this use case is Atlantis, a solution that integrates automatically with each pull request (PR) and enforces the best practices for infrastructure deployments defined in company policies: code owners, code reviewers, unit tests using tools like Terratest, and so on.&lt;br&gt;
&lt;br&gt;
When you choose the GitOps method, a managed backend is not included, so those looking to maintain state for their IaC will still need one. Terraform currently supports out-of-the-box integration with AWS S3, GCS, HashiCorp Consul, Kubernetes, and HTTP backends.&lt;/p&gt;

&lt;p&gt;Many companies today, however, have chosen to work with GitLab on-prem, for many reasons, and for them all of the GitHub and GitHub Actions integrations become far less relevant.&lt;/p&gt;
&lt;h2&gt;
  
  
  Terraform States Using GitLab Enterprise
&lt;/h2&gt;

&lt;p&gt;Companies that choose GitLab as their primary source code management (SCM) platform will often also deploy their infrastructure using dedicated GitLab pipelines. This leaves us with the question: what about the Terraform state?&lt;/p&gt;

&lt;p&gt;Enter GitLab’s new feature for remote backends. As a GitLab shop, we knew this was just what we needed. However, when we tried to enable it, we found very little documentation to help us, and so we went down the rabbit hole of researching how to configure and set up remote backends against the specific requirements of the GitLab API. We’d like to share some of the excellent intel we uncovered.&lt;/p&gt;

&lt;p&gt;We’ll start with some of the challenges we immediately encountered. Configuring the GitLab backend proved quite complex, requiring an in-depth understanding of the GitLab configuration syntax and of the various S3 settings. Once we managed to configure our S3 bucket as the dedicated data store for our Terraform states, we found that GitLab encrypts them all with AES-256 inside S3. Once encrypted, a state is no longer directly accessible to Terraform; you have to use the GitLab API to download it in order to use it in your environment.&lt;/p&gt;

&lt;p&gt;This is where it gets tricky. We’ve chosen to deploy and orchestrate our code using GitLab. Great. Next we want to leverage its new state management capability, but we can’t actually manage our Terraform state if it is encrypted and inaccessible to Terraform.&lt;/p&gt;

&lt;p&gt;CI/CD adds a particular layer of complexity to this use case: GitLab increments the version number with each deployment to maintain a log and change history of deployed versions. All of this is fine, and important as an engineering best practice, but it introduces a few gaps when it comes to Terraform state management.&lt;/p&gt;

&lt;p&gt;If we take a look at the GitLab &lt;a href="https://docs.gitlab.com/ee/user/infrastructure/iac/terraform_state.html"&gt;API documentation&lt;/a&gt;, the way to download a state version is as follows: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl --header "Private-Token: {{ACCESS-TOKEN}}" "https://gitlab.example.com/api/v4/projects/{{PROJECT-ID}}/terraform/state/{{STATE-NAME}}/versions/{{VERSION}}"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This means that in order to be able to download the state you have to have a few critical pieces of information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your Access Token&lt;/li&gt;
&lt;li&gt;Your Project ID&lt;/li&gt;
&lt;li&gt;Your State Name&lt;/li&gt;
&lt;li&gt;Your Version Number
‍&lt;/li&gt;
&lt;/ul&gt;
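&lt;p&gt;To make those moving parts concrete, here is a small Python sketch that assembles the download request from the four pieces of information above. Only the URL shape comes from the GitLab docs; the function names and sample values are illustrative.&lt;/p&gt;

```python
# Hypothetical helpers: build the GitLab state-version download URL and
# the auth header from the four required pieces of information.
from urllib.parse import quote


def state_version_url(host, project_id, state_name, version):
    """Build the REST URL for downloading one Terraform state version."""
    return (
        f"https://{host}/api/v4/projects/{project_id}"
        f"/terraform/state/{quote(state_name)}/versions/{version}"
    )


def auth_headers(access_token):
    """The Private-Token header carries the access token."""
    return {"Private-Token": access_token}


print(state_version_url("gitlab.example.com", 42, "production", 3))
# https://gitlab.example.com/api/v4/projects/42/terraform/state/production/versions/3
```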

&lt;p&gt;Not only does one rarely know the specific name of their deployment, it’s even rarer to know the latest version number, which is constantly changing with continuous deployment (GitLab doesn’t expose it in the UI; only by hovering over the deployment or clicking on it will you see the number in the URL).&lt;/p&gt;

&lt;p&gt;In very large-scale operations, there are hundreds of environments running Terraform at any given time, with new ones constantly being deployed, not to mention the different kinds of environments (development, staging, production), each with multiple dev accounts. It’s a needle in a haystack.&lt;/p&gt;

&lt;p&gt;We felt like we had hit a wall. We knew there had to be a better way, so we went back to researching.&lt;/p&gt;
&lt;h2&gt;
  
  
  GitLab GraphQL API for Terraform State Management
&lt;/h2&gt;

&lt;p&gt;After digging deeper, we found a gold mine.  There IS another way.&lt;/p&gt;

&lt;p&gt;We found a hidden GraphQL API that reveals all of your GitLab environments built through GitLab pipelines, and it lets you extract, quite simply, all of the critical information you need to download and access the Terraform state.&lt;/p&gt;

&lt;p&gt;See it in action: below is the GraphQL request that queries and extracts the required data.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;POST&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;https://&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="err"&gt;GITLAB-HOST&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="err"&gt;/api/graphql&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"operationName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"getStates"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"variables"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"projectPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sefi/tf-demo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"first"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"after"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"last"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"before"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"query"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"query getStates($projectPath: ID!, $first: Int, $last: Int, $before: String, $after: String) {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; project(fullPath: $projectPath) {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; id&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; terraformStates(first: $first, last: $last, before: $before, after: $after) {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; count&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; nodes {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; ...State&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; pageInfo {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; ...PageInfo&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;fragment State on TerraformState {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; id&lt;/span&gt;&lt;span 
class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; name&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; lockedAt&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; updatedAt&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; deletedAt&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; lockedByUser {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; ...User&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; latestVersion {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; ...StateVersion&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;fragment User on User {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; id&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; avatarUrl&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; name&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; username&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; webUrl&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;fragment StateVersion on TerraformStateVersion {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span 
class="s2"&gt; id&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; downloadPath&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; serial&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; updatedAt&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; createdByUser {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; ...User&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; job {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; id&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; detailedStatus {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; id&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; detailsPath&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; group&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; icon&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; label&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; text&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; pipeline {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; id&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; path&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; 
__typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;fragment PageInfo on PageInfo {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; hasNextPage&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; hasPreviousPage&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; startCursor&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; endCursor&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; __typename&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This API returns the latest state version for all environments in the project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"project"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gid://gitlab/Project/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"terraformStates"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"count"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"nodes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gid://gitlab/Terraform::State/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"lockedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span 
class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"updatedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2022-08-02T19:55:26Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"deletedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"lockedByUser"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"latestVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gid://gitlab/Terraform::StateVersion/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"downloadPath"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/api/v4/projects//terraform/state//versions/0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"serial"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"updatedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2022-08-02T19:55:26Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span 
class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"createdByUser"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"job"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"__typename"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TerraformStateVersion"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"__typename"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TerraformState"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"pageInfo"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"hasNextPage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"hasPreviousPage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"startCursor"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span 
class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"endCursor"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"__typename"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"PageInfo"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"__typename"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TerraformStateConnection"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the response, we can download the latest version of the Terraform state via the previously mentioned GitLab API:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://{{GITLAB-HOST}}/api/v4/projects/{{PROJECT-ID}}/terraform/state/{{STATE-NAME}}/versions/{{VERSION}}&lt;/code&gt;&lt;/p&gt;
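&lt;p&gt;As a rough Python sketch, assuming the response shape shown above (the host and sample values here are placeholders), extracting each state’s latest download URL from the GraphQL response might look like this:&lt;/p&gt;

```python
# Illustrative sketch: parse the GraphQL response above and build full
# download URLs for each state's latest version.
def latest_state_downloads(graphql_response, host):
    """Map each Terraform state name to the full URL of its latest version."""
    states = graphql_response["data"]["project"]["terraformStates"]["nodes"]
    return {
        s["name"]: f"https://{host}" + s["latestVersion"]["downloadPath"]
        for s in states
        if s.get("latestVersion")  # skip states that have no version yet
    }


# Sample payload mirroring the response structure shown above:
sample = {
    "data": {
        "project": {
            "terraformStates": {
                "nodes": [
                    {
                        "name": "production",
                        "latestVersion": {
                            "downloadPath": "/api/v4/projects/42/terraform/state/production/versions/7"
                        },
                    }
                ]
            }
        }
    }
}

print(latest_state_downloads(sample, "gitlab.example.com"))
```

&lt;p&gt;In practice you would send the GraphQL query with your Private-Token header, feed the JSON response into a function like this, then fetch each download URL with the same header.&lt;/p&gt;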

&lt;p&gt;That’s it! It’s that easy. &lt;/p&gt;

&lt;p&gt;For anyone using GitLab on-prem or Enterprise with GitLab pipelines, there is really no need to add more tooling to the stack for Terraform orchestration and management. You can leverage the built-in GitLab support and Terraform’s integration with S3, and with the GraphQL API you can now retrieve the information required to download your state from storage via the GitLab API.&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>terraform</category>
      <category>statemanagement</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Codify your SaaS Apps: The Answer to the Unmanaged SaaS Jungle</title>
      <dc:creator>Eran Bibi</dc:creator>
      <pubDate>Mon, 23 May 2022 15:00:34 +0000</pubDate>
      <link>https://forem.com/gofirefly/codify-your-saas-apps-the-answer-to-the-unmanaged-saas-jungle-3lm5</link>
      <guid>https://forem.com/gofirefly/codify-your-saas-apps-the-answer-to-the-unmanaged-saas-jungle-3lm5</guid>
      <description>&lt;p&gt;Infrastructure drift, unmanaged resources, ghost assets, these are all well-known “silent killers” in our clouds.  Whether AWS, GCP, Kubernetes, Azure or anything else, when deploying our services to multiple clouds, we know that a unified inventory and management of our cloud resources is complicated, and there are many great tools out there that are looking to help solve this growing complexity.&lt;/p&gt;

&lt;p&gt;One thing that is often overlooked, though, is where our SaaS tooling comes into the mix. When we adopt SaaS tools, we tend to see them as just that, tools, and not what they actually are: additional siloed, unmanaged clouds with their own proprietary inventory of services, objects, and resources.&lt;/p&gt;

&lt;p&gt;A phenomenon we have often encountered when helping companies overcome drift is a common neglect of cloud infrastructure tooling such as Cloudflare, Okta, MongoDB Atlas, Datadog, Git, and many other popular SaaS platforms and tools that are part of our core operations. How can we make these SaaS clouds immutable, versioned, scalable, and monitored if these extensions aren't codified? Is state drift in Okta less troubling than drift in your IAM roles, for example? How can we guarantee proper monitoring if our Datadog dashboards make it possible for anyone to cause drift?&lt;/p&gt;

&lt;p&gt;‍These are just some of the questions that come to mind when we see this recurring anti-pattern in cloud operations today. But you may ask, why does this matter?&lt;/p&gt;

&lt;p&gt;While a growing understanding is dawning on DevOps engineers that it is much safer and less error-prone to codify cloud resources, with the inherent benefits of managing them like all other code (git history, peer reviews, PR automation, and policy enforcement), SaaS services have not yet undergone a similar evolution and epiphany. Where clickops for cloud configuration has mostly been abandoned in favor of IaC practices, SaaS tooling is still predominantly configured manually via the UI, with minimal codification. Not surprisingly, this leads to many of the same problems you’d find in your cloud operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting the Git in GitOps
&lt;/h2&gt;

&lt;p&gt;GitOps is so commonly associated with Kubernetes and cloud native systems that it is considered the best practice and modern way to manage complex Kubernetes operations, and yet the git part of GitOps is all but neglected when it comes to managing the tooling around these systems. I’ll explain.&lt;/p&gt;

&lt;p&gt;If we look at the most downloaded Terraform providers for SaaS applications that are not clouds, the data is extremely compelling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DataDog/datadog 32.4M+&lt;/li&gt;
&lt;li&gt;integrations/github 16.9M+&lt;/li&gt;
&lt;li&gt;cloudflare/cloudflare 16.8M+&lt;/li&gt;
&lt;li&gt;newrelic/newrelic 12.5M+&lt;/li&gt;
&lt;li&gt;hashicorp/consul 9.7M+&lt;/li&gt;
&lt;li&gt;PagerDuty/pagerduty 8.6M+&lt;/li&gt;
&lt;li&gt;grafana/grafana 5.4M+&lt;/li&gt;
&lt;li&gt;gitlabhq/gitlab 4.8M+&lt;/li&gt;
&lt;li&gt;mongodb/mongodbatlas 4.2M+&lt;/li&gt;
&lt;li&gt;okta/okta 4M+&lt;/li&gt;
&lt;li&gt;elastic/ec 3.2M+&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While there is an increasing trend towards codifying these resources through Terraform providers, the AWS Terraform provider and the next most popular cloud provider, Azure, stand at 750M+ and 127M+ downloads respectively, which places the most popular non-cloud provider at roughly 5% of that adoption (and ultimately codification).&lt;/p&gt;

&lt;p&gt;This is because not only would this codification need to be done for each SaaS tool individually, but once these tools have been configured through clickops, just translating that configuration to IaC is an extremely complex undertaking (particularly in large organizations with multiple dashboards, clouds, services, and other dependencies and resources).&lt;/p&gt;

&lt;p&gt;If we come back to thinking about how to convert our git operations to be GitOps native, we’d likely need to follow a post similar to this one that walks you through &lt;a href="https://www.mineiros.io/blog/how-to-manage-your-github-organization-with-terraform"&gt;managing your GitHub organization with Terraform&lt;/a&gt;. And this is just one tool of many in a huge stack of SaaS tooling that would need to undergo a similar transformation; here is another post that walks you through &lt;a href="https://medium.com/hackernoon/managing-datadog-with-terraform-89abe0eb62f5"&gt;a similar transition for Datadog&lt;/a&gt;. And the list goes on. Now imagine having tens of tools, in large organizations with multiple teams and clouds. The task is daunting just to think about. Until now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Codifying Your SaaS
&lt;/h2&gt;

&lt;p&gt;When thinking about the critical aspects of codifying your SaaS, there are a few angles that were important for &lt;a href="https://gofirefly.io"&gt;Firefly&lt;/a&gt; to focus on to make this transition truly valuable for DevOps teams. The first layer of value is a unified inventory of both cloud assets and SaaS assets in a single place. This alone enables DevOps teams to search, understand, and classify assets across all clouds, operational and tooling alike, something that wasn’t possible before from a single dashboard or tool.&lt;/p&gt;

&lt;p&gt;The next aspect is actually getting all of these tools and assets managed as code. If this has become the cloud standard, it’s not clear why it hasn’t happened for SaaS apps too. We’ve spoken about the benefits of managing everything as code; once assets are managed as code, with the relevant guidelines and internal engineering practices applied, they can be automated as part of CI/CD processes, with the relevant gating &amp;amp; guardrails applied there too.&lt;/p&gt;

&lt;p&gt;Doing so manually would require engineers to translate all of their manual configurations (which are not always found in a single place in the UI, but spread across the many layers of their application) into the relevant code, and usually many times over if there are multiple applications, dashboards, or tools. This is now possible at the click of a single button, for all SaaS tools, in one place.&lt;/p&gt;

&lt;p&gt;If we take a look at a typical Firefly dashboard, we can see that SaaS tools often have as little as 20% of their resources codified, vs. around 50% for cloud service providers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5zv6nDTl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://uploads-ssl.webflow.com/6166f5ce8dcfdbe35a9143b6/62862df5e24dbfc343d67223_Screen%2520Recording%25202022-05-15%2520at%252015.22.28%2520%282%29.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5zv6nDTl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://uploads-ssl.webflow.com/6166f5ce8dcfdbe35a9143b6/62862df5e24dbfc343d67223_Screen%2520Recording%25202022-05-15%2520at%252015.22.28%2520%282%29.gif" alt="Image description" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The companies that have flipped this number and codified these resources were able to enjoy the IaC advantages of faster deployment cycles and standby configuration templates for disaster recovery scenarios.&lt;/p&gt;

&lt;p&gt;We hope you found this useful - feel free to drop any questions you may have in the comments.&lt;/p&gt;


</description>
      <category>devops</category>
      <category>cloud</category>
      <category>saas</category>
    </item>
    <item>
      <title>SQL Triggers with AWS Lambda — Take Your DB to the Next Level</title>
      <dc:creator>Liav Yona</dc:creator>
      <pubDate>Mon, 25 Apr 2022 07:42:25 +0000</pubDate>
      <link>https://forem.com/gofirefly/sql-triggers-with-aws-lambda-take-your-db-to-the-next-level-5f7i</link>
      <guid>https://forem.com/gofirefly/sql-triggers-with-aws-lambda-take-your-db-to-the-next-level-5f7i</guid>
      <description>&lt;p&gt;Relational databases are still a powerful and effective tool to define relationships between structured data points. Among all the functionalities these databases expose, &lt;strong&gt;&lt;em&gt;Triggers&lt;/em&gt;&lt;/strong&gt; are a special stored procedure &lt;strong&gt;that runs before or after specific actions occur within the database&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Triggers (or hooks) play a major role in keeping your codebase simple by delegating event handling to the database itself instead of implementing it in code. However, maintaining those triggers is not trivial: they have proven hard to debug, their capabilities are limited, and they suffer from a lack of explainability and visibility (logging).&lt;/p&gt;

&lt;p&gt;In this blog, I will introduce AWS Lambda as a solution to these problems on AWS RDS Aurora with the PostgreSQL engine.&lt;/p&gt;

&lt;p&gt;[NOTE: This blog assumes you have a working knowledge of Python, AWS Lambdas, AWS SNS, Terraform and PostgreSQL.]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/l0MYzerrM20z3Cb4Y/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/l0MYzerrM20z3Cb4Y/giphy.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Granting Permissions to Invoke Lambda Function
&lt;/h3&gt;

&lt;p&gt;First of all, we start by giving our &lt;a href="https://aws.amazon.com/rds/postgresql/" rel="noopener noreferrer"&gt;RDS cluster&lt;/a&gt; permissions to invoke a Lambda function (using Terraform):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_iam_policy" "invoke_lambda_policy" {
  name        = "invoke_lambda_policy"
  path        = "/"
  description = "A policy which grants permission to invoke a Lambda function."

  policy      = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "lambda:InvokeFunction"
        Effect   = "Allow"
        Resource = "arn:aws:lambda:*:123456789123:function:*"
      },
    ]
  })
}
resource "aws_iam_role" "rds_lambda_role" {
  name               = "rds_lambda_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "rds.amazonaws.com"
        }
      },
    ]
  })
}
resource "aws_iam_role_policy_attachment" "rds_lambda_role_attach" {
  role       = aws_iam_role.rds_lambda_role.name
  policy_arn = aws_iam_policy.invoke_lambda_policy.arn
}
resource "aws_db_instance_role_association" "rds_lambda_role_attach" {
  db_instance_identifier = aws_db_instance.test.id
  feature_name           = "LAMBDA"
  role_arn               = aws_iam_role.rds_lambda_role.arn
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, our DB instance has permission to invoke any Lambda function in our AWS account. Next, we want to create triggers responding to operational actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/Ln9YwHWf9jwb3eKHMD/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/Ln9YwHWf9jwb3eKHMD/giphy.gif" alt="Image Permission Granted"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing DB Trigger in SQL
&lt;/h3&gt;

&lt;p&gt;In our database, we have a &lt;code&gt;users&lt;/code&gt; table to manage all of our customers.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

CREATE TABLE users (id integer, firstname varchar(100), lastname varchar(100), age integer);
INSERT INTO users (id, firstname, lastname, age) VALUES (1, 'albert', 'einstein', 40);
INSERT INTO users (id, firstname, lastname, age) VALUES (2, 'isaac', 'newton', 25);
INSERT INTO users (id, firstname, lastname, age) VALUES (3, 'marie', 'curie', 32);



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We now want to trigger a Lambda function that will publish &lt;a href="https://aws.amazon.com/sns/?whats-new-cards.sort-by=item.additionalFields.postDateTime&amp;amp;whats-new-cards.sort-order=desc" rel="noopener noreferrer"&gt;an SNS message&lt;/a&gt;. Afterward, we can add an SMS subscriber to get a push notification for each new user.&lt;/p&gt;
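&lt;p&gt;As a sketch, that topic and its SMS subscriber could be provisioned with Terraform like this (the topic name and phone number below are placeholders, not values from the original setup):&lt;/p&gt;

```hcl
# Hypothetical names: adjust the topic name and phone number for your setup.
resource "aws_sns_topic" "new_user_notifications" {
  name = "new-user-notifications"
}

resource "aws_sns_topic_subscription" "sms_subscriber" {
  topic_arn = aws_sns_topic.new_user_notifications.arn
  protocol  = "sms"
  endpoint  = "+15555550100" # placeholder phone number
}
```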

&lt;p&gt;In the following SQL code snippet, we use the &lt;code&gt;aws_lambda&lt;/code&gt; extension and its &lt;code&gt;invoke&lt;/code&gt; function after inserting a new row into our &lt;code&gt;users&lt;/code&gt; table. The event contains the first and last name of the new user and the creation timestamp. We build the event in JSON format, using the keyword &lt;code&gt;NEW&lt;/code&gt; as a &lt;a href="https://www.postgresql.org/docs/9.2/plpgsql-trigger.html" rel="noopener noreferrer"&gt;reference&lt;/a&gt; to the new entity.&lt;/p&gt;
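&lt;p&gt;For clarity, the payload the trigger sends is a flat JSON object. A Python sketch of an equivalent event (with hypothetical values) looks like this:&lt;/p&gt;

```python
import json
from datetime import datetime

# Hypothetical example of the event the trigger builds for the Lambda.
# The timestamp format mirrors the SQL TO_CHAR(..., 'YYYY-MM-DD"T"HH24:MI:SS') call.
event = {
    "firstname": "albert",
    "lastname": "einstein",
    "created_at": datetime(2022, 4, 25, 7, 42, 25).strftime("%Y-%m-%dT%H:%M:%S"),
}
payload = json.dumps(event)
print(payload)
# → {"firstname": "albert", "lastname": "einstein", "created_at": "2022-04-25T07:42:25"}
```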

&lt;p&gt;[NOTE: PostgreSQL triggers execute within the same transaction as the statement that fired them. As a result, if the trigger fails, the triggering statement fails as well.]&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

CREATE EXTENSION IF NOT EXISTS aws_lambda CASCADE;
CREATE OR REPLACE FUNCTION respond_with_lambda()
  RETURNS TRIGGER
  LANGUAGE PLPGSQL
  AS
$$
BEGIN
    IF cardinality(TG_ARGV)!=2 THEN
      RAISE EXCEPTION 'Expected 2 parameters to respond_with_lambda function but got %', cardinality(TG_ARGV);
   ELSEIF TG_ARGV[0]='' THEN
      RAISE EXCEPTION 'Lambda name is empty';
   ELSEIF TG_ARGV[1]='' THEN
      RAISE EXCEPTION 'Lambda region is empty';
   ELSE
       PERFORM * FROM aws_lambda.invoke(aws_commons.create_lambda_function_arn(TG_ARGV[0], TG_ARGV[1]),
                               CONCAT('{"firstname": "', NEW.firstname,
                                    '", "lastname": "', NEW.lastname,
                                    '", "created_at": "', TO_CHAR(NOW()::timestamp, 'YYYY-MM-DD"T"HH24:MI:SS'), 
                                   '"}')::json,
                                    'Event');
        RETURN NEW;
    END IF;
END
$$;
DROP TRIGGER IF EXISTS new_user_trigger ON users;
CREATE TRIGGER new_user_trigger
  AFTER INSERT ON users
  FOR EACH ROW
  EXECUTE PROCEDURE respond_with_lambda('on-new-user', 'us-east-1');


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Adding the on-user-inserted SQL trigger
&lt;/h3&gt;

&lt;p&gt;Now, we need to implement a Lambda function that receives such events and publishes new-user notifications to an SNS topic.&lt;br&gt;
[NOTE: Be mindful that the &lt;code&gt;aws_lambda.invoke&lt;/code&gt; method receives the &lt;code&gt;Event&lt;/code&gt; parameter for asynchronous execution. You can use the &lt;code&gt;RequestResponse&lt;/code&gt; parameter instead for synchronous invocation.]&lt;/p&gt;
&lt;h3&gt;
  
  
  Notification Producer Lambda
&lt;/h3&gt;

&lt;p&gt;We create a Python Lambda function named &lt;code&gt;on_new_user&lt;/code&gt; in the same region and VPC as our DB instance, in order to maximize performance and maintain security. In our case, the Lambda publishes a new-user message to an SNS topic (which can have SMS subscribers), but since we are inside a code execution unit, our options for responding to such an event are endless (we chose an SMS message in this blog, but you can respond in any way you wish).&lt;/p&gt;

&lt;p&gt;So we have the following code which handles the new user event:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Built-ins
from os import getenv
from json import dumps
from typing import Any, Dict
from http import HTTPStatus
# Third party
from aws_lambda_powertools.utilities.typing import LambdaContext
from aws_lambda_powertools.utilities.parser import event_parser, BaseModel
import boto3

class UserInsertionEvent(BaseModel):
    firstname: str
    lastname: str
    created_at: str

@event_parser(model=UserInsertionEvent)
def handler(event: UserInsertionEvent, context: LambdaContext) -&amp;gt; Dict[str, Any]:
    client = boto3.client("sns")
    response = client.publish(
        TopicArn=getenv("NOTIFICATION_TOPIC_ARN"),
        Message=f"A new user was added at {event.created_at}: {event.firstname} {event.lastname}"
    )
    return {
        "lambda_request_id": context.aws_request_id,
        "lambda_arn": context.invoked_function_arn,
        "status_code": HTTPStatus.OK.value,
        "event": event.json(),
        "response": response
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Lambda handler for &lt;code&gt;on_user_inserted&lt;/code&gt; event&lt;/p&gt;

&lt;p&gt;After our Lambda is deployed with all its dependencies and configured with the accurate environment variables, let’s test our new user creation flow.&lt;/p&gt;
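&lt;p&gt;As a rough Terraform sketch of that deployment (the resource names, runtime, package path, role, and network IDs below are assumptions for illustration; the topic ARN should point at your actual SNS topic):&lt;/p&gt;

```hcl
# Hypothetical sketch: names, runtime, and package path are placeholders.
resource "aws_lambda_function" "on_new_user" {
  function_name = "on-new-user"
  role          = aws_iam_role.lambda_exec_role.arn # assumed execution role
  runtime       = "python3.9"
  handler       = "main.handler"
  filename      = "on_new_user.zip" # packaged together with its dependencies

  environment {
    variables = {
      NOTIFICATION_TOPIC_ARN = aws_sns_topic.new_user_notifications.arn # assumed topic
    }
  }

  # VPC config so the Lambda runs alongside the DB instance (subnet/SG IDs assumed)
  vpc_config {
    subnet_ids         = var.db_subnet_ids
    security_group_ids = var.db_security_group_ids
  }
}
```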

&lt;h3&gt;
  
  
  Piece the Components Together
&lt;/h3&gt;

&lt;p&gt;To exercise the flow, we need to insert a new user. For the sake of simplicity, we will do it straight from the DB console instead of putting an application in front of our database.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

INSERT into users (id, firstname, lastname, age) values (4, 'stephen', 'hawking', 55);


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After a few seconds, I received the following SMS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5z4l3mbwnuj4664185l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5z4l3mbwnuj4664185l.png" alt="Image SMS Success"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/ZdUnQS4AXEl1AERdil/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/ZdUnQS4AXEl1AERdil/giphy.gif" alt="Image We Did It!"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We created the following architecture:&lt;/p&gt;

&lt;p&gt;On each &lt;code&gt;INSERT&lt;/code&gt; into the &lt;code&gt;users&lt;/code&gt; table, a trigger invokes a Lambda function that publishes a notification to our SNS topic and, from there, straight to our SMS subscriber.&lt;br&gt;
With such a solution, we manage our DB trigger easily, effectively, and, I might say, beautifully. With a Lambda function, we can add logging to increase explainability and visibility. In addition, we can respond to any event in the programming language of our choice, with all of its advanced and extensive capabilities.&lt;/p&gt;

&lt;p&gt;In this blog, we’ve seen an example of an SMS notification for every new user, but such an architecture could also help you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get notified of stock price changes.&lt;/li&gt;
&lt;li&gt;Create a new tenant environment asynchronously.&lt;/li&gt;
&lt;li&gt;Send metrics to a remote endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To conclude, once our Lambda function receives the modified entity from our database, our options for responding to the event are unlimited.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>aws</category>
      <category>serverless</category>
      <category>database</category>
    </item>
    <item>
      <title>From Ghost Assets to Infrastructure Drift - Don't Get Spooked</title>
      <dc:creator>Naor Paz</dc:creator>
      <pubDate>Wed, 06 Apr 2022 11:52:20 +0000</pubDate>
      <link>https://forem.com/gofirefly/from-ghost-assets-to-infrastructure-drift-dont-get-spooked-4l4k</link>
      <guid>https://forem.com/gofirefly/from-ghost-assets-to-infrastructure-drift-dont-get-spooked-4l4k</guid>
      <description>&lt;p&gt;We’ve often spoken about state and infrastructure drift at Firefly that is many times the byproduct of large-scale cloud operations that predate infrastructure as code, and were once upon a time managed manually. Just a quick recap though, for those who aren’t familiar, drift is what happens when you have assets that exist in both your code and cloud. These resources, however, have drifted from their initial intended or desired state––and have now altered from the code representation of their intended state. If you want a good reference on this - you can start here with this great talk from &lt;a href="https://www.youtube.com/watch?v=qvGv3Ix7Lzo"&gt;DevOpsDays Tel Aviv&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this post, though, I’m going to talk a little bit about ghost assets, which we’ve found to be a recurring challenge in large-scale cloud operations.  Unlike drifted assets, ghost assets are those assets that no longer exist in your cloud at all, but do still remain in your code, and this can result in all kinds of problematic, and sometimes unpredictable and insecure behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  So why do Ghost Assets even exist in the first place?
&lt;/h2&gt;

&lt;p&gt;Ghost assets are the byproduct of a codified asset, i.e. one that appears in your infrastructure as code (IaC) configuration files, being manually deleted on the cloud - whether via the UI or an API.  This can happen for two primary reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The resource isn’t actually needed anymore and someone decided to remove it&lt;/li&gt;
&lt;li&gt;Human or machine error&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both reasons carry implications of their own, as we’ll see below.&lt;/p&gt;

&lt;p&gt;Let’s take a closer look at what this looks like under the hood.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Impact of Ghost Assets?
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Ghost Assets and Unnecessary Cloud Costs
&lt;/h4&gt;

&lt;p&gt;When a cloud resource is no longer needed, sometimes it is manually deleted from the cloud, but not from the code.  Because IaC is declarative and treats infrastructure as immutable, anytime you redeploy the IaC code, the manually deleted asset will return.  This can have a few implications, from cost to stability.  &lt;/p&gt;

&lt;p&gt;Cost is obvious: you don’t want to pay for resources you no longer need (certainly when cloud operations are already quite costly). But sometimes an alternate service may have replaced the old one––and this can cause conflicts, data loss, or even security issues.  It is bad practice to have your IaC and cloud out of sync. &lt;/p&gt;

&lt;p&gt;Let’s drill down into the security issues that may arise from ghost assets.  Imagine you have an EC2 instance on AWS, or a container you used to run on ECS or EKS, that is no longer in use, and you decided to delete it from your cloud. Now someone has redeployed it because it was still in the code (despite being manually removed from the cloud), and it is there again.  This is problematic because those workloads are no longer maintained: they might use old libraries with known vulnerabilities, and they might even be exposed to the internet, making it easier for attackers to exploit them. They are invisible assets running old, unmaintained code that might still talk to other managed resources such as databases, and this is a huge risk to your production deployment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Ghost Assets and Security
&lt;/h4&gt;

&lt;p&gt;The first scenario, while it can be frustrating to hunt down and resolve, is surprisingly less critical than the second scenario.&lt;/p&gt;

&lt;p&gt;Reason #2 is the scenario you’d really want to avoid.  If an asset is accidentally deleted––whether by humans or machines––Houston, you have a problem. This can lead to production breakage and downtime, long-term data loss (if a database or backup service was removed accidentally), and many more issues.  &lt;/p&gt;

&lt;p&gt;Imagine for example, that you have a production deployment with different workloads, a web server, and a database. If you accidentally deleted a workload, then you definitely can suffer from downtime.  What will likely happen is that when users try to reach that service, it will not be available to them, and that can cause immediate loss of business. It is more likely to happen with workloads and resources that are not often used by customers, but are critical by nature when needed. Let me guess…you probably have something in mind right now.&lt;/p&gt;

&lt;p&gt;At times we’ve had the horror of finding haunted house config files that are just riddled with ghosts.  These types of config files that are all but obsolete in the cloud, create clutter and redundancy, and many times are a source of needless cloud costs.  &lt;/p&gt;

&lt;h3&gt;
  
  
  How to Manage and Prevent Ghost Assets
&lt;/h3&gt;

&lt;p&gt;So obviously, if an important and still-required resource was manually deleted from the cloud, restoring it is a matter of how up-to-date and “restorable” your backups are.  (There is no shortage of horror stories where the backups were never actually tested for restoration––and to the shock and dismay of everyone, the backups didn’t work in real time.) A good piece of evergreen advice is to always test your backups!&lt;/p&gt;

&lt;p&gt;While we can’t undo what’s already been done, we can help you improve your practices and avoid such a recurrence in the future.&lt;/p&gt;

&lt;p&gt;One way to avoid losing critical resources that are manually deleted from the cloud, is by preventing this kind of action entirely from the UI or via API, and enforcing policies that require any resources to be added or deleted solely through your IaC.  This practice is often called GitOps or Policy as Code (and you can learn more about &lt;a href="https://dzone.com/articles/building-with-open-policy-agent-opa-for-better-pol"&gt;Policy as Code&lt;/a&gt;, in this post).&lt;/p&gt;

&lt;p&gt;This means that any resource that requires deletion should have its resource block removed entirely from the code; then, once you run &lt;code&gt;terraform apply&lt;/code&gt; or &lt;code&gt;pulumi up&lt;/code&gt;, the resource will not be created again.  If you would like to search for such missing resources, you can leverage &lt;code&gt;terraform plan&lt;/code&gt;, which will flag any resources that appear in your code but do not appear in your cloud.  &lt;/p&gt;
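&lt;p&gt;A minimal command-line sketch of that workflow, assuming a Terraform-managed stack:&lt;/p&gt;

```shell
# After removing the unwanted resource block from your .tf files:
terraform plan -detailed-exitcode   # exit code 0 = in sync, 2 = pending changes
terraform apply                     # the deleted resource will not be recreated
```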

&lt;h2&gt;
  
  
  Ghost Busters
&lt;/h2&gt;

&lt;p&gt;When you have very large cloud fleets, and you manage hundreds and thousands of config files that may have many ghost assets in them, this can be a heavier lift than just a ghost asset or two in a handful of files.  There are certainly many upsides to the efficiency, flexibility and scale the cloud provides, but its upkeep and maintenance in the long-term has created a lot of operational overhead.  &lt;/p&gt;

&lt;p&gt;The many abstraction layers and tools we use today have caused our clouds to be riddled with drifted assets, ghost assets, and even unmanaged resources that no one even knows exist and are bleeding costs.  The cloud has become a wild west with disparate ways of management, and this has become an operational nightmare for many cloud engineers to handle at scale.  &lt;/p&gt;

&lt;p&gt;This is what we set out to solve at Firefly to enable this kind of detection at scale - and quick fixes for remediation and codification.  Eventually with everything shifting left and being managed as code, your cloud can’t be left behind.  The ability to move and deploy rapidly is dependent on how quickly and efficiently you can manage and automate your infrastructure and resources––so codify all the things and discard the manual toil, and you’ll see your engineering velocity improve as well as your infrastructure safety and robustness.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>security</category>
    </item>
  </channel>
</rss>
