<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Andra Somesan (she/her)</title>
    <description>The latest articles on Forem by Andra Somesan (she/her) (@andrasomesan).</description>
    <link>https://forem.com/andrasomesan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F631621%2Fed0f80f0-f78a-4114-bd23-8dce0e2dda5e.jpg</url>
      <title>Forem: Andra Somesan (she/her)</title>
      <link>https://forem.com/andrasomesan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/andrasomesan"/>
    <language>en</language>
    <item>
      <title>What is AWS Community Builders and what's in it for you</title>
      <dc:creator>Andra Somesan (she/her)</dc:creator>
      <pubDate>Fri, 14 Jan 2022 07:00:52 +0000</pubDate>
      <link>https://forem.com/aws-builders/what-is-aws-community-builders-and-whats-in-it-for-you-1e6a</link>
      <guid>https://forem.com/aws-builders/what-is-aws-community-builders-and-whats-in-it-for-you-1e6a</guid>
      <description>&lt;p&gt;A few days ago the application for the AWS Community Builders program has opened and I happily shared the news on Twitter (@ AndraSomesan) and &lt;a href="https://www.linkedin.com/in/andra-somesan-0003ab69/"&gt;LinkedIn&lt;/a&gt;. Since then, I keep getting DMs like "What exactly is AWS Community Builders program?" and "Could you help me with my application?" and I am delighted to help. &lt;br&gt;
A few of my colleagues have already written some great articles on this topic, but I would like to highlight what the program is about and what you can get from it ✨.&lt;/p&gt;

&lt;p&gt;❗The application is open until January 23rd EOD Pacific Time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Community Builders Program
&lt;/h2&gt;

&lt;p&gt;You can read everything about it on the &lt;a href="https://aws.amazon.com/developer/community/community-builders/"&gt;main page&lt;/a&gt;, but basically it is a great way to connect people from all around the globe (94 countries now) who are actively involved in the AWS community, all in one place. Of course, it would be impossible to manage a program like this if everyone in the AWS space got in, and this is why only a small percentage of the applicants receive the great news. In the last round of applications - October 2021 - only 10% of the applications were approved. &lt;br&gt;
To stand out you have to be really involved: write articles, give Meetup presentations, organize Meetups, have a YouTube channel or a podcast, be very active on Twitter. You don't have to do all of them, you just have to stand out among the applicants. And it's not only about that: the questions also cover your motivation for joining, your unique perspective, and your future plans.&lt;/p&gt;

&lt;h6&gt;
  
  
  Eager to find out more?
&lt;/h6&gt;

&lt;p&gt;To learn more about who should apply and how, you can read this &lt;a href="https://dev.to/pawelpiwosz/aws-community-builders-program-1m75"&gt;article&lt;/a&gt; by &lt;a class="mentioned-user" href="https://dev.to/pawelpiwosz"&gt;@pawelpiwosz&lt;/a&gt; &lt;br&gt;
For more in-depth information about the program and some of its benefits, you can read this &lt;a href="https://dev.to/aws-builders/aws-community-builders-program-6kf"&gt;article&lt;/a&gt; by &lt;a class="mentioned-user" href="https://dev.to/andrewbrown"&gt;@andrewbrown&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What's in it for you
&lt;/h2&gt;

&lt;p&gt;It is all about the community ❤️ and it comes with some extra benefits 🎁.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Community Builders is a great community - I've met some of my colleagues for the first time &lt;del&gt;this&lt;/del&gt; last year at re:Invent, and it was amazing to start feeling comfortable with people I had actually never met before. Being in the Slack workspace, you get a lot of help and support, but also a chance to do some networking and make new friends, or just find out about things you had never heard of before 😄.&lt;/li&gt;
&lt;li&gt;Free stuff - Everybody loves free stuff, especially when it is branded with the cloud you love or with the great community logo. You get a Welcome Kit, AWS credits, an exam voucher, and some extra things that usually vary.&lt;/li&gt;
&lt;li&gt;Exclusiveness - There are exclusive events with people from AWS and sessions with the AWS Heroes. This is more amazing than I can describe. The chance to do some networking with the Heroes and to attend a presentation from Jeff Barr himself is just beyond awesome 🦄.&lt;/li&gt;
&lt;/ul&gt;

&lt;h6&gt;
  
  
  Eager to find out more?
&lt;/h6&gt;

&lt;p&gt;Then this &lt;a href="https://dev.to/aws-builders/10-benefits-to-joining-aws-community-builders-4cle"&gt;article&lt;/a&gt; on 10 benefits of the program by &lt;a class="mentioned-user" href="https://dev.to/vattybear"&gt;@vattybear&lt;/a&gt; is the right place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope this was a quick read that covered the basics of what the AWS Community Builders program is and what benefits it can bring you if you join. Keep being great and keep trying! If you don't get in right away, it doesn't mean you won't get in at all! 🎓&lt;/p&gt;

</description>
      <category>awscommunitybuilders</category>
      <category>aws</category>
      <category>community</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to prepare for AWS re:Invent 2021</title>
      <dc:creator>Andra Somesan (she/her)</dc:creator>
      <pubDate>Tue, 19 Oct 2021 06:08:31 +0000</pubDate>
      <link>https://forem.com/aws-builders/prepare-for-aws-reinvent-2021-8p</link>
      <guid>https://forem.com/aws-builders/prepare-for-aws-reinvent-2021-8p</guid>
      <description>&lt;p&gt;Photo: screenshot from re:Invent &lt;a href="https://reinvent.awsevents.com/" rel="noopener noreferrer"&gt;page&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the past few days I've been searching for ways to prepare for the in-person AWS re:Invent 2021 conference, and at first I felt a bit overwhelmed. I started to search and read more about it, and in this post I would like to share with you what has helped me so far in making a plan for the event.&lt;br&gt;&lt;/p&gt;

&lt;p&gt;This will be my first time ever attending the conference (hopefully in person), and as we get closer to the event and to the &lt;a href="https://reinvent.awsevents.com/faqs/" rel="noopener noreferrer"&gt;reserved seating&lt;/a&gt; opening up (today!), I had to try to make up my mind quickly. &lt;br&gt;
I will start by sharing a few links I found useful, and at the end I will talk a bit about tips and tricks.&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful links to prepare for AWS re:Invent 2021 &lt;br&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS re:Invent&lt;/strong&gt; &lt;a href="https://reinvent.awsevents.com/faqs/" rel="noopener noreferrer"&gt;FAQs&lt;/a&gt; page - here we can find all we need about topics such as health and safety, registration, accessibility, and so on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rf8knn745duu4mwogml.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rf8knn745duu4mwogml.jpg" alt="AWS FAQ page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AWS How to re:Invent&lt;/strong&gt; &lt;a href="https://reinvent.awsevents.com/how-to-reinvent/" rel="noopener noreferrer"&gt;page&lt;/a&gt; - this is an AMAZING page! Here Annie Hancock and Kelley Schultz from AWS share everything we need to know about the conference and how to approach and plan it in order to make the most of it. The first episode you'll see is the latest one; scroll down and start with episode 1. It will take you about 45 minutes to watch all 3 episodes published so far (October 18, 2021). &lt;br&gt;
The highlights for me here were: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Getting an overview of the session types and which ones have reserved seating and which ones don't.&lt;/li&gt;
&lt;li&gt;Set a goal for what you want to achieve at this conference, something like: see a certain talk, get a certification, learn more about one topic or improve on another. Arrange your calendar with this goal in mind, but always listen to your body and have a Plan B, because this week is going to be a LOT to handle. The final goal is to enjoy it nevertheless; don't forget that 😄 
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Breakout Content&lt;/strong&gt; &lt;a href="https://reinvent.awsevents.com/learn/breakout-content/" rel="noopener noreferrer"&gt;page&lt;/a&gt; - here we can find the session names explained, more details about what a breakout session or a chalk talk is. &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Attendee Guides&lt;/strong&gt; &lt;a href="https://reinvent.awsevents.com/how-to-reinvent/attendee-guides/" rel="noopener noreferrer"&gt;page&lt;/a&gt; - here I will try to follow the guides most relevant to me from the AWS Hero Guides part. It's nice that we can choose a domain, like Serverless, and have a look at what &lt;a href="https://twitter.com/emrahsamdan" rel="noopener noreferrer"&gt;Emrah Samdan&lt;/a&gt; will pick and choose for this event. Every guide ends with the same few tips for the conference and explains how to reserve seats.&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS re:Invent 2021&lt;/strong&gt; &lt;a href="https://reinvent.awsevents.com/agenda/?trk=direct" rel="noopener noreferrer"&gt;Agenda&lt;/a&gt; - as the name implies, here we can find a high-level agenda of the event.&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tips &amp;amp; tricks
&lt;/h2&gt;

&lt;p&gt;There is so much to do in so little time, so my approach was to first see which kinds of sessions will be available online after the event; this way I can go if I really want to see a session live, or skip it if I really can't make it and watch it online later on. I feel this takes off a bit of the "I have to see them all" stress that crossed my mind 😅 .&lt;/p&gt;

&lt;p&gt;Breakout sessions, leadership sessions, and keynotes will be available online later on.&lt;/p&gt;

&lt;p&gt;About the reserved seating: if you don't have time to watch the session on &lt;strong&gt;How to re:Invent&lt;/strong&gt;, here are my takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reserve seats first for chalk talks and smaller sessions because they usually fill up faster than others.&lt;/li&gt;
&lt;li&gt;Leadership sessions have reserved seating but the keynotes don't; you have to be there early to get your spot. There are also specially prepared places where you can watch the keynotes from outside the room.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some tips I have received over &lt;a href="https://www.linkedin.com/in/andra-glavan-0003ab69/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and also found in this nice &lt;a href="https://dev.to/hiro/tips-and-tricks-for-your-first-tech-conference-aws-re-invent-2lam"&gt;article&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wear comfy shoes&lt;/li&gt;
&lt;li&gt;Pack a bit of water and some snacks&lt;/li&gt;
&lt;li&gt;Have a backpack (keep it lightweight if possible)
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let me know if this was useful to you and please share other tips and tricks if you have some 🌟 .&lt;/p&gt;

</description>
      <category>howto</category>
      <category>reinvent2021</category>
      <category>guide</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to configure AWS news RSS feed for Microsoft Teams</title>
      <dc:creator>Andra Somesan (she/her)</dc:creator>
      <pubDate>Wed, 18 Aug 2021 14:42:07 +0000</pubDate>
      <link>https://forem.com/andrasomesan/how-to-configure-aws-news-rss-feed-for-microsoft-teams-nee</link>
      <guid>https://forem.com/andrasomesan/how-to-configure-aws-news-rss-feed-for-microsoft-teams-nee</guid>
      <description>&lt;p&gt;This is my second time configuring an RSS news feed, and to be honest, the first time was one month ago.&lt;br&gt;
What made me write this article was the fact that I didn't remember exactly how to do it and finding all the information in one place was not that easy.&lt;br&gt;
Working in IT is challenging and one of the causes is staying up to date with the latest releases and news. By being up to date with the announcements will make any developer's life easier and will also bring business value to the company. For example, if AWS is launching a new instance type that will reduce the costs and increase the performance, we will want to know that.&lt;br&gt;
This is why I find it extremely important while working with AWS to have a channel dedicated to the latest news. &lt;br&gt;
Let's see how we can configure this in Microsoft Teams and stay until the end for an extra tip.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Create a new channel
&lt;/h2&gt;

&lt;p&gt;To start we first have to go to our team group in Microsoft Teams, click on the 3 dots on the right, then click &lt;em&gt;Add Channel&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gDHqiwN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g4b7taalm561ufdlz0so.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gDHqiwN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g4b7taalm561ufdlz0so.jpg" alt="step0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Configure the new channel
&lt;/h2&gt;

&lt;p&gt;In the new window we can add a name for the new channel and optionally a description.&lt;br&gt;
We can also choose for it to be either &lt;em&gt;Public&lt;/em&gt; or &lt;em&gt;Private&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jINPnvy1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4wvehgd3r8aykhyyx2su.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jINPnvy1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4wvehgd3r8aykhyyx2su.jpg" alt="step1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I like to start with it as private, do all the configuration, and then add people or make it &lt;em&gt;Public&lt;/em&gt; if it makes sense for the team. So, in the next step we can skip adding new members for now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z1I4cYWh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vckdso0dp5pil87wc6t1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z1I4cYWh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vckdso0dp5pil87wc6t1.jpg" alt="step1.1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congrats! Our new private channel is up and running 🎉.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5snQMjkk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gxbdxsd2asu92h4pwzss.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5snQMjkk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gxbdxsd2asu92h4pwzss.jpg" alt="step1.2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Add connectors to the new channel
&lt;/h2&gt;

&lt;p&gt;Connectors in Microsoft Teams help us by delivering content and updates from services into the channel. To add one, let's click on the 3 dots in the upper right corner of our newly created channel and then select &lt;em&gt;Connectors&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7mwVg6QI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wynm5wr2ozmgfn5akx8r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7mwVg6QI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wynm5wr2ozmgfn5akx8r.jpg" alt="step3"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Search for &lt;em&gt;RSS&lt;/em&gt; if you don't see it on the first page, and click to add it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0-FdtqDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eyu0wq16uy4rh3k9lvbk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0-FdtqDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eyu0wq16uy4rh3k9lvbk.jpg" alt="step3.1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is what we will see next, and we have to click the &lt;em&gt;Add&lt;/em&gt; button again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MNKzhkul--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8q9gtxxr43a3hrlxm6qt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MNKzhkul--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8q9gtxxr43a3hrlxm6qt.jpg" alt="step3.2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Configure the RSS connector
&lt;/h2&gt;

&lt;p&gt;Now that the RSS connector is added, the only thing left to do is to configure it. Let's click again on the 3 dots of our new channel and then click on &lt;em&gt;Connectors&lt;/em&gt; as seen in step &lt;strong&gt;3&lt;/strong&gt;. The option to configure the added RSS will be available now. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y1ly5JVs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ljgaxm92enc5kij5pzbq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y1ly5JVs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ljgaxm92enc5kij5pzbq.jpg" alt="step4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next up we need to give a name to our new RSS connection, add the link address for the RSS feed (see &lt;a href="https://aws.amazon.com/about-aws/whats-new/recent/feed/"&gt;this&lt;/a&gt; for the AWS News RSS feed), and select the frequency at which you would like to be notified of any changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B8rfI69U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/id6iv69avozt7z1hq5pp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B8rfI69U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/id6iv69avozt7z1hq5pp.jpg" alt="step4.1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's all. This is a pretty easy thing to do, but it is nice to find all these steps in one place, especially if you are switching from Slack to Microsoft Teams and the latter is a bit of a mystery for you at the beginning 😃&lt;/p&gt;

&lt;p&gt;Now, as I promised in the beginning, I will briefly show how to delete an RSS news feed, in case you accidentally added the wrong one to the channel (you know why I know this 🙈). &lt;/p&gt;

&lt;p&gt;Go again to the 3 dots of the channel and click on &lt;em&gt;Connectors&lt;/em&gt; like we did in step &lt;strong&gt;3&lt;/strong&gt;. There will be two or more configured RSS feeds; click &lt;em&gt;x configured&lt;/em&gt; to expand them, like in the picture below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CMannmV7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0os6lalkbofg27bytr9h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CMannmV7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0os6lalkbofg27bytr9h.jpg" alt="extra1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And we'll see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6y7g9Krk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/203a08552vsvrh8gdbz2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6y7g9Krk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/203a08552vsvrh8gdbz2.jpg" alt="extra2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;em&gt;Remove&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oyFrL2IT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7vq0l9mmfbduqr2oev1k.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oyFrL2IT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7vq0l9mmfbduqr2oev1k.jpg" alt="extra3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Remove&lt;/em&gt; needs to be clicked again.&lt;br&gt;
Now our AWS news channel is nice and clean, with no other feeds in it. Having a separate channel for every RSS news feed is a good idea.&lt;/p&gt;

&lt;p&gt;I hope this will be useful and please let me know if you have any tips and tricks for doing this.&lt;/p&gt;

&lt;p&gt;Cheers,&lt;br&gt;
Andra 👻&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rss</category>
      <category>configure</category>
      <category>news</category>
    </item>
    <item>
      <title>Automated database update and restore with AWS Lambda functions for AWS DocumentDB</title>
      <dc:creator>Andra Somesan (she/her)</dc:creator>
      <pubDate>Sat, 14 Aug 2021 07:37:01 +0000</pubDate>
      <link>https://forem.com/andrasomesan/automated-database-update-and-restore-with-aws-lambda-functions-for-aws-documentdb-3706</link>
      <guid>https://forem.com/andrasomesan/automated-database-update-and-restore-with-aws-lambda-functions-for-aws-documentdb-3706</guid>
      <description>&lt;p&gt;Disclaimer: This article was first published &lt;a href="https://www.sentiatechblog.com/automated-database-update-and-restore-with-aws-lambda-functions-for-aws-2"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my previous article I talked about my latest challenge, namely the automation of an update and restore process for a solution that was already migrated to the cloud. We discussed there the advantages of cloud adoption and focused on Aurora DB. In this article we will discuss how we reused the same solution and what modifications we made to make it suitable for DocumentDB, because another great thing about the cloud is that we can easily reuse and adapt solutions to meet different requirements.&lt;/p&gt;

&lt;p&gt;Amazon DocumentDB has MongoDB compatibility and “is a database service that is purpose-built for JSON data management at scale, fully managed and integrated with AWS, and enterprise-ready with high durability.” as AWS states in the official documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario (recap)
&lt;/h2&gt;

&lt;p&gt;Our customer has 2 different AWS accounts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Acceptance&lt;/li&gt;
&lt;li&gt;Production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All changes made in the production account should be synchronised (every two weeks) to the acceptance account for regression testing.&lt;br&gt;
The old way of doing this with the on-prem infrastructure was to manually run some commands and update the databases.&lt;br&gt;
The process should be done in the maintenance window, which implies working at night/early in the morning when nobody is using the databases.&lt;br&gt;
The big advantage of having the solution in the cloud is that it can easily be automated using services that bring low or no cost at all, and that the solution needs no human interaction.&lt;/p&gt;
&lt;h2&gt;
  
  
  Solution overview (recap)
&lt;/h2&gt;

&lt;p&gt;The solution we agreed on was to split the process into 2 main parts, using 2 AWS Lambda functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the first one, &lt;em&gt;db-update-latest-snapshot-id&lt;/em&gt;, will mainly focus on preparing the environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;copy the shared DocumentDB snapshot (because we can’t restore from it directly, see &lt;a href="https://docs.aws.amazon.com/documentdb/latest/developerguide/backup_restore-share_cluster_snapshots.html"&gt;this&lt;/a&gt; article)&lt;/li&gt;
&lt;li&gt;check the latest copied snapshot for DocumentDB&lt;/li&gt;
&lt;li&gt;update the latest snapshot id parameters in AWS SSM Parameter Store&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the second one, &lt;em&gt;db-restore-from-snapshot&lt;/em&gt;, will mainly focus on triggering the update and restore process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;will copy the latest snapshot ID value to the current snapshot ID and then trigger the main pipeline to deploy the changes and restore the databases from the latest snapshots.
I took advantage of the already existing AWS Lambda functions that share the database snapshots from one account to the other. The sharing part is not in the scope of this article, but it is illustrated for a better view of the solution.
Having two Lambda functions, each with a small, well-defined task, is not only an AWS best practice, but also gives the flexibility to copy/share more frequently without an actual restore being done.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
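&lt;p&gt;The flow of the second function can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: the function and parameter names are taken from the article, but the wiring is an assumption (in the real Lambda the two clients would be boto3's "ssm" and "codepipeline" clients):&lt;/p&gt;

```python
# Hypothetical sketch of the db-restore-from-snapshot flow: copy the
# latest snapshot ID into the current parameter, then start the main
# pipeline so the restore gets deployed. The clients are passed in so
# the logic is testable; in the Lambda they would be boto3 clients,
# e.g. boto3.client("ssm") and boto3.client("codepipeline").

def promote_latest_snapshot(ssm, codepipeline,
                            latest_param="documentdb_latest_snapshot_id",
                            current_param="documentdb_current_snapshot_id",
                            pipeline_param="main_cicd_name"):
    """Copy latest -> current snapshot ID and trigger the main pipeline."""
    latest = ssm.get_parameter(Name=latest_param)["Parameter"]["Value"]
    ssm.put_parameter(Name=current_param, Value=latest,
                      Type="String", Overwrite=True)
    pipeline = ssm.get_parameter(Name=pipeline_param)["Parameter"]["Value"]
    codepipeline.start_pipeline_execution(name=pipeline)
    return latest
```

&lt;p&gt;Keeping the copy/update step separate from the restore trigger is what allows the snapshots to be shared more often than the restores actually happen.&lt;/p&gt;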

&lt;p&gt;In AWS SSM Parameter Store, I have created 2 more parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;documentdb_current_snapshot_id&lt;/li&gt;
&lt;li&gt;documentdb_latest_snapshot_id&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and I have reused one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;main_cicd_name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The current value is the one the databases are restored from, and the latest value is the value of the most recent snapshot that exists in the account for DocumentDB or Aurora.&lt;br&gt;
We need 2 different parameters, one for the current and one for the latest snapshot, to be able to restore the databases from a specific snapshot ID outside of this process, and to not trigger a restore directly whenever the pipeline runs but restore at a pre-defined time/date.&lt;br&gt;
There are 2 pipelines in the account, so the name of the main pipeline, the one deploying the infrastructure, was needed.&lt;/p&gt;

&lt;p&gt;The entire solution was written using IaC in AWS CDK with Python.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FCHmEIFS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ifnd0mheu4c43drrt26o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FCHmEIFS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ifnd0mheu4c43drrt26o.png" alt="Solution Architecture" width="825" height="491"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How does it work
&lt;/h2&gt;

&lt;p&gt;Let’s have a look at the code; you can see that a lot of it was reused. We use environment variables and a custom function, &lt;em&gt;get_optional&lt;/em&gt;, that gets an environment variable's value. There is an existing process of restoring the database from a specific snapshot ARN taken from the environment variables, and this is why we check the current_docdb_backup_snapshot value: if it exists, the parameter is updated with this value. For &lt;em&gt;ssm_docdb_latest_snapshot_name&lt;/em&gt; a dummy value was used, because you can’t create an SSM parameter with an empty value.&lt;/p&gt;
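&lt;p&gt;The &lt;em&gt;get_optional&lt;/em&gt; helper itself is not shown in the snippet; a minimal sketch of what such a helper might look like (an assumption about its behaviour, not the project's actual implementation):&lt;/p&gt;

```python
# Minimal sketch of the get_optional helper referenced in the article:
# read an environment variable, falling back to a default when unset.
# This is an assumed implementation, not the actual project code.
import os

def get_optional(name, default=None):
    """Return the environment variable's value, or the default if unset."""
    return os.environ.get(name, default)

# Example usage with a fabricated ARN value, for illustration only:
os.environ["DOCDB_BACKUP_SNAPSHOT_ID_ARN"] = (
    "arn:aws:rds:eu-west-1:123456789012:cluster-snapshot:example-snapshot"
)
print(get_optional("DOCDB_BACKUP_SNAPSHOT_ID_ARN"))
print(get_optional("SOME_UNSET_VARIABLE", None))
```

&lt;p&gt;Returning &lt;em&gt;None&lt;/em&gt; for unset variables is what makes the &lt;em&gt;if current_docdb_backup_snapshot:&lt;/em&gt; check in the snippet below work.&lt;/p&gt;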

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;current_docdb_backup_snapshot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;'DOCDB_BACKUP_SNAPSHOT_ID_ARN'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;ssm_docdb_current_snapshot_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;'SSM_DOCDB_CURRENT_SNAPSHOT_NAME'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;ssm_docdb_latest_snapshot_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;'SSM_DOCDB_LATEST_SNAPSHOT_NAME'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;resources&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;current_docdb_backup_snapshot&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;current_arn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StringParameter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"stack_name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;DocDbCurrentDSnapshotArn'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;parameter_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ssm_docdb_current_snapshot_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;string_value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;current_docdb_backup_snapshot&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;current_arn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parameter_arn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;main_cicd_ssm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StringParameter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;from_string_parameter_name&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"stack_name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;MainPipelineName'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;string_parameter_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;main_cicd_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;main_cicd_name_value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;main_cicd_ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;string_value&lt;/span&gt;
&lt;span class="n"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;main_cicd_ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parameter_arn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'arn:aws:codepipeline:*:*:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;main_cicd_name_value&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All of the code above was reused from the previous task, which targeted AWS Aurora. The changes and adaptations were mostly needed in the way the functions work, as you can see below.&lt;br&gt;
Using a cronjob, the update function (&lt;em&gt;db-update-latest-snapshot-id&lt;/em&gt;) runs every 2 weeks at 2 AM.&lt;/p&gt;

&lt;p&gt;This function has the following tasks/functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;copy_docdb_snapshot&lt;/em&gt; - looks up the latest shared and the latest manual DocumentDB snapshots. It then checks whether the latest shared snapshot has already been copied: if not, it copies it; if it has, it returns a friendly message. This copy step was not needed for Aurora and had to be investigated and added to the &lt;em&gt;db-update-latest-snapshot-id&lt;/em&gt; Lambda function. It also introduces a new required check for the database: the existing search for the shared snapshot is still needed, but because a copied shared snapshot becomes a manual snapshot, a second search has to be in place to determine whether the latest shared snapshot has already been copied or still needs to be. See the code below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt; &lt;span class="c1"&gt;# Check if the latest shared snapshot is already copied and if not
&lt;/span&gt;    &lt;span class="c1"&gt;# take the latest shared snapshots and copy it
&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Copying the latest shared snapshot for DocDB'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;latest_shared_snapshot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;"copy-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;docdb_shared_snapshots_source&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'DBClusterSnapshotIdentifier'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="s"&gt;')[-1]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;snapshot&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;docdb_manual_snapshots&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;latest_shared_snapshot&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;snapshot&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'DBClusterSnapshotIdentifier'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;':'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
            &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Latest snapshot already copied'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;snapshot&lt;/span&gt;
    &lt;span class="n"&gt;docdb_copied_snapshot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;docdb_copied_snapshot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docdb_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;copy_db_cluster_snapshot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;SourceDBClusterSnapshotIdentifier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;docdb_shared_snapshots_source&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'DBClusterSnapshotIdentifier'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;TargetDBClusterSnapshotIdentifier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;latest_shared_snapshot&lt;/span&gt;
            &lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;KmsKeyId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;kms_key&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'ERROR copying snapshot: '&lt;/span&gt;
            &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;docdb_shared_snapshots_source&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="s"&gt;'DBClusterSnapshotIdentifier'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'ERROR: '&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;docdb_copied_snapshot&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;update_ssm&lt;/em&gt; - updates the SSM Parameter Store parameter &lt;em&gt;documentdb_latest_snapshot_id&lt;/em&gt; with the value of the latest snapshot.&lt;/li&gt;
&lt;/ul&gt;
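
&lt;p&gt;The body of &lt;em&gt;update_ssm&lt;/em&gt; is not shown here; a minimal sketch of what it could look like with boto3 (the signature and parameter names are assumptions, and the client is injectable so the logic can be exercised without AWS access) is:&lt;/p&gt;

```python
# Hypothetical sketch of update_ssm -- the article does not show its body.
# The SSM client is injectable so the logic can be tested without AWS access.
def update_ssm(parameter_name, snapshot_id, ssm_client=None):
    """Store the id of the latest snapshot in the given SSM parameter."""
    if ssm_client is None:
        import boto3  # only reach for the real client when none is injected
        ssm_client = boto3.client('ssm')
    ssm_client.put_parameter(
        Name=parameter_name,
        Value=snapshot_id,
        Type='String',
        Overwrite=True,  # the parameter already exists, so overwrite it
    )
    return parameter_name, snapshot_id
```

&lt;p&gt;With &lt;code&gt;Overwrite=True&lt;/code&gt;, the parameter created at deploy time is simply replaced on every run.&lt;/p&gt;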

&lt;p&gt;And that is it for the update process.&lt;br&gt;
The restore is even simpler. Using another cronjob, the restore function (&lt;em&gt;db-restore-from-snapshot&lt;/em&gt;) runs every 2 weeks, 2 hours after the update process.&lt;br&gt;
The tasks of this function are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;update_current&lt;/em&gt; - gets the value of the latest parameter and copies it to the current one. The value of &lt;em&gt;documentdb_current_snapshot_id&lt;/em&gt; is updated with the value of &lt;em&gt;documentdb_latest_snapshot_id&lt;/em&gt;, and the pipeline checks the value of the current parameter. As mentioned above, we split the process into latest and current parameters so that the database can also be restored from a snapshot in cases other than the ones described in this article.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;trigger the deployment pipeline using the &lt;em&gt;main_cicd_name&lt;/em&gt; from the SSM Parameter Store&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
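
&lt;p&gt;The body of &lt;em&gt;update_current&lt;/em&gt; is likewise not shown; under the same assumptions (hypothetical names, injectable client), it could look roughly like this:&lt;/p&gt;

```python
# Hypothetical sketch of update_current: copy the value of the "latest"
# SSM parameter into the "current" one, which the pipeline reads.
def update_current(latest_name, current_name, ssm_client=None):
    if ssm_client is None:
        import boto3  # only reach for the real client when none is injected
        ssm_client = boto3.client('ssm')
    latest_value = ssm_client.get_parameter(Name=latest_name)['Parameter']['Value']
    ssm_client.put_parameter(
        Name=current_name,
        Value=latest_value,
        Type='String',
        Overwrite=True,  # replace whatever snapshot id was current before
    )
    return latest_value
```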

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Trigger a new pipeline deployment
&lt;/span&gt;    &lt;span class="n"&gt;cicd_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_pipeline_execution&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cicd_name&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Small bumps
&lt;/h2&gt;

&lt;p&gt;Because we were able to reuse a lot of the code already written, the only small bump was the copy process: a new search method and a new comparison were needed to keep the copy process clean.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Automated processes in the cloud are very flexible and easily reusable. To adapt the existing automated update and restore solution for Aurora to DocumentDB, approximately 80% of the code was reused.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>automation</category>
      <category>cdk</category>
    </item>
    <item>
      <title>Automated database update and restore with AWS Lambda functions for AWS Aurora</title>
      <dc:creator>Andra Somesan (she/her)</dc:creator>
      <pubDate>Sat, 14 Aug 2021 07:14:10 +0000</pubDate>
      <link>https://forem.com/andrasomesan/automated-database-update-and-restore-with-aws-lambda-functions-for-aws-aurora-22do</link>
      <guid>https://forem.com/andrasomesan/automated-database-update-and-restore-with-aws-lambda-functions-for-aws-aurora-22do</guid>
      <description>&lt;p&gt;Disclaimer: This article was first published &lt;a href="https://www.sentiatechblog.com/automated-database-update-and-restore-with-aws-lambda-functions-for-aws"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The roadmap to cloud adoption can be difficult to set and implement, but once it is completed it offers a lot of flexibility, a lot of space for continuous improvement and a lot of room for creativity to build new solutions.&lt;br&gt;
Having automated processes helps the company to focus on what is important for the business and lets the developers experiment more and efficiently optimize their work.&lt;br&gt;
One of the latest things I’ve been working on involves a task that needed to be continued after the client’s solution had already moved to the cloud. What I liked about it is that it not only takes advantage of being in the cloud, it also involves serverless technology, which I consider the next level of cloud solutions. The task involved both AWS Aurora and AWS DocumentDB databases; I will write a separate article about the second one, since there are some differences to be presented and discussed.&lt;/p&gt;
&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;Our customer has 2 different AWS accounts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Acceptance&lt;/li&gt;
&lt;li&gt;Production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All changes made in the production account should be synchronised (every two weeks) to the acceptance account for regression testing.&lt;br&gt;
The old way of doing this on the on-prem infrastructure was to manually run some commands and update the databases.&lt;br&gt;
The process has to run in the maintenance window, which means working at night or in the early morning, when nobody is using the databases.&lt;/p&gt;
&lt;h2&gt;
  
  
  Solution overview
&lt;/h2&gt;

&lt;p&gt;The solution we agreed on was to split the process into 2 main parts, implemented as 2 AWS Lambda functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;first one, &lt;em&gt;db-update-latest-snapshot-id&lt;/em&gt;, will mainly focus on preparing the environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;check the latest shared snapshot for Aurora&lt;/li&gt;
&lt;li&gt;update the latest snapshot id parameter in AWS SSM Parameter Store&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the second one, &lt;em&gt;db-restore-from-snapshot&lt;/em&gt;, will mainly focus on triggering the update and restore process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;copies the latest snapshot ID value to the current snapshot ID and then triggers the main pipeline to deploy the changes and restore the database from the latest snapshot.
I took advantage of the already existing AWS Lambda functions that share the database snapshots from one account to the other. The sharing part is not in the scope of this article, but it is illustrated for a better view of the solution.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In AWS SSM Parameter Store, I have created 3 parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ssm_rds_current_snapshot_name&lt;/li&gt;
&lt;li&gt;ssm_rds_latest_snapshot_name&lt;/li&gt;
&lt;li&gt;main_cicd_name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The current value is the one the database is restored from, and the latest value is the most recent snapshot that exists in the account.&lt;br&gt;
We need 2 different parameters, one for the current and one for the latest snapshot, so that we can restore the database from a specific snapshot ID outside of this process, and so that a pipeline run does not directly trigger a restore but restores at a pre-defined time/date.&lt;br&gt;
There are 2 pipelines in the account, so the name of the main pipeline, the one that deploys the infrastructure, was also needed.&lt;/p&gt;

&lt;p&gt;The entire solution was written using IaC in AWS CDK with Python.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D0AC9P_o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wh5opoypoiytqrsr2y67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D0AC9P_o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wh5opoypoiytqrsr2y67.png" alt="Solution architecture"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How does it work
&lt;/h2&gt;

&lt;p&gt;First, let's look at how the parameters are defined in code. We use environment variables and a custom function, &lt;em&gt;get_optional&lt;/em&gt;, that reads an environment variable's value. There is an existing process for restoring the database from a specific snapshot ARN supplied through the environment variables, which is why we check the &lt;em&gt;current_rds_backup_snapshot&lt;/em&gt; parameter: if it exists, the parameter is updated with this value. For &lt;em&gt;ssm_rds_latest_snapshot_name&lt;/em&gt;, a dummy value is used because you can’t create an SSM parameter with an empty value.&lt;br&gt;
&lt;/p&gt;
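
&lt;p&gt;The &lt;em&gt;get_optional&lt;/em&gt; helper itself is not shown in the article; presumably it just reads an environment variable and falls back to a default, roughly:&lt;/p&gt;

```python
import os

# Presumed implementation of the get_optional helper used below:
# read an environment variable, returning a default when it is unset.
def get_optional(name, default):
    return os.environ.get(name, default)
```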

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;current_rds_backup_snapshot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;'DATABASE_BACKUP_SNAPSHOT_ID_ARN'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;ssm_rds_current_snapshot_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;'SSM_RDS_CURRENT_SNAPSHOT_NAME'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;ssm_rds_latest_snapshot_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;'SSM_RDS_LATEST_SNAPSHOT_NAME'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;main_cicd_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_optional&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;'SSM_MAIN_CICD_NAME'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;resources&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;current_rds_backup_snapshot&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;current_arn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StringParameter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"stack_name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;RdsCurrentSnapshotArn'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;parameter_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ssm_rds_current_snapshot_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;string_value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;current_rds_backup_snapshot&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;current_arn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parameter_arn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;rds_ssm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StringParameter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"stack_name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;RdsLatestSnapshotArn'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;parameter_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ssm_rds_latest_snapshot_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;string_value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'dummyvalue'&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rds_ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parameter_arn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;main_cicd_ssm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StringParameter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;from_string_parameter_name&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"stack_name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;MainPipelineName'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;string_parameter_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;main_cicd_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;main_cicd_name_value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;main_cicd_ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;string_value&lt;/span&gt;
&lt;span class="n"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;main_cicd_ssm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parameter_arn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="s"&gt;'arn:aws:codepipeline:*:*:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;main_cicd_name_value&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using a cronjob, the update function (&lt;em&gt;db-update-latest-snapshot-id&lt;/em&gt;) runs every 2 weeks at 2 AM.&lt;br&gt;
This function has the following tasks/functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;get_latest_rds_snapshots&lt;/em&gt; - looks up the shared snapshots for the Aurora DB and selects the most recent one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;update_ssm&lt;/em&gt; - updates the SSM Parameter Store parameter &lt;em&gt;aurora_latest_snapshot_id&lt;/em&gt; with the value of the latest snapshot.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that is it for the update process.&lt;br&gt;
The restore is even simpler. Using another cronjob, the restore function (&lt;em&gt;db-restore-from-snapshot&lt;/em&gt;) runs every 2 weeks, 2 hours after the update process.&lt;br&gt;
The tasks of this function are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;update_current&lt;/em&gt; - gets the value of the latest parameter and copies it to the current one. The value of &lt;em&gt;ssm_rds_current_snapshot_name&lt;/em&gt; is updated with the value of &lt;em&gt;ssm_rds_latest_snapshot_name&lt;/em&gt;, and the pipeline checks the value of the current parameter. As mentioned above, we split the process into latest and current parameters so that the database can also be restored from a snapshot in cases other than the ones described in this article.&lt;/li&gt;
&lt;li&gt;trigger the deployment pipeline using the &lt;em&gt;main_cicd_name&lt;/em&gt; from the SSM Parameter Store
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Trigger a new pipeline deployment
&lt;/span&gt;    &lt;span class="n"&gt;cicd_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_pipeline_execution&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cicd_name&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Small bumps
&lt;/h2&gt;

&lt;p&gt;Finding a solution was challenging in each phase of the implementation. At the beginning, we tried to find a solution that would build on top of the existing one and improve it. While implementing the solution with Lambda functions, the problem was identifying the exact snapshot we needed, which of course was just a matter of knowing what to extract and reading the documentation. For example, checking the available shared snapshots looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;rds_shared_snapshots&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rds_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;describe_db_cluster_snapshots&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;SnapshotType&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'shared'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;IncludeShared&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Check available RDS shared snapshots from the source account
&lt;/span&gt;    &lt;span class="n"&gt;rds_snapshots_source&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;snap&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;rds_shared_snapshots&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'DBClusterSnapshots'&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
        &lt;span class="n"&gt;source_account_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;snap&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'DBClusterSnapshotIdentifier'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;':'&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;snap&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'Status'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;'available'&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; \
           &lt;span class="n"&gt;source_account_id&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;source_account&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; \
           &lt;span class="n"&gt;snap&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'EngineVersion'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;rds_engine_version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;rds_snapshots_source&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;snap&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, from all the snapshots, we had to select the ones that have a specific source account ID, are available, and match the specific Aurora engine version (MySQL or PostgreSQL).&lt;/p&gt;
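
&lt;p&gt;From the filtered list, the most recent snapshot can then be picked by its creation time; a possible sketch (the helper name is hypothetical, while &lt;em&gt;SnapshotCreateTime&lt;/em&gt; is a field boto3 returns for every cluster snapshot):&lt;/p&gt;

```python
# Sketch (not shown in the article): after filtering the shared snapshots
# as above, select the most recent one using the SnapshotCreateTime field
# that describe_db_cluster_snapshots returns for each snapshot.
def pick_latest(snapshots):
    return max(snapshots, key=lambda snap: snap['SnapshotCreateTime'])
```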

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Even with the small bumps we had, this solution is pretty easy to implement in the AWS cloud. It not only saves time and money, but also protects the environment against the human errors that can easily happen late at night, when database operations are performed. With the manual task out of the way, the developers can focus on other important tasks, and on bringing even more automation to the solution. Stay tuned for the DocumentDB scenario to see what the differences are and how you can make it work.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>automation</category>
    </item>
    <item>
      <title>Terraform S3 Cross Region Replication: from an unencrypted bucket to an encrypted bucket</title>
      <dc:creator>Andra Somesan (she/her)</dc:creator>
      <pubDate>Sat, 24 Jul 2021 09:25:22 +0000</pubDate>
      <link>https://forem.com/andrasomesan/terraform-s3-cross-region-replication-from-an-unencrypted-bucket-to-an-encrypted-bucket-5ceh</link>
      <guid>https://forem.com/andrasomesan/terraform-s3-cross-region-replication-from-an-unencrypted-bucket-to-an-encrypted-bucket-5ceh</guid>
<description>&lt;p&gt;I wrote this article on the 14th of December last year and thought I'd share it here as well. I hope it helps :) &lt;br&gt;
Disclaimer: This article was first published &lt;a href="https://www.sentiatechblog.com/terraform-s3-cross-region-replication-from-an-unencrypted-bucket-to-an"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve been working with Terraform for a few months now, and one of the scenarios I’ve encountered that got me into trouble was this:&lt;br&gt;
a new client wants to migrate several buckets from the existing account, in the Ohio region, to the new account, in the Frankfurt region. This is, of course, no problem for AWS, and this type of migration can be found in a lot of scenarios already explained on the internet. What was new was that some of the buckets were not encrypted at the source, while at the destination everything must be encrypted to comply with security standards.&lt;/p&gt;
&lt;h1&gt;
  
  
  What is the issue?
&lt;/h1&gt;

&lt;p&gt;For the Cross Region Replication (CRR) to work, we need to do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable Versioning for both buckets&lt;/li&gt;
&lt;li&gt;At Source: Create an IAM role to handle the replication&lt;/li&gt;
&lt;li&gt;Setup the Replication for the source bucket&lt;/li&gt;
&lt;li&gt;At Destination: Accept the replication&lt;/li&gt;
&lt;/ol&gt;
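&lt;p&gt;As a rough sketch, the steps above could look like this in Terraform. This is illustrative only, not the article's actual code: the resource names and provider aliases are made up, and it uses the inline versioning and replication blocks of the AWS provider as they existed at the time.&lt;/p&gt;

```hcl
# Sketch only: names, provider aliases and the assume-role policy are
# illustrative, not the article's actual code.
resource "aws_iam_role" "replication" {
  provider = aws.source
  name     = "s3-crr-role"

  # Step 2: a role that S3 can assume to handle the replication
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "s3.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_s3_bucket" "source" {
  provider = aws.source
  bucket   = "source-test-replication"

  # Step 1: versioning must be enabled on both buckets
  versioning {
    enabled = true
  }

  # Step 3: replication setup on the source bucket
  replication_configuration {
    role = aws_iam_role.replication.arn

    rules {
      id     = "crr"
      status = "Enabled"

      destination {
        bucket = "arn:aws:s3:::destination-test-replication"
      }
    }
  }
}
```

&lt;p&gt;Step 4, accepting the replication at the destination, is what the rest of this article is about.&lt;/p&gt;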

&lt;p&gt;If both buckets have encryption enabled, things go smoothly. The same goes if both are unencrypted.&lt;br&gt;
But if the Source bucket is unencrypted and the Destination bucket uses AWS KMS customer master keys (CMKs) to encrypt the Amazon S3 objects, things get a bit more interesting.&lt;/p&gt;
&lt;h1&gt;
  
  
  What is the solution?
&lt;/h1&gt;

&lt;p&gt;One of the best pieces of advice I have received while working with infrastructure-as-code software in AWS was this: if you are going to deploy something new and run into trouble with it, one good way to solve it is to go into the AWS console and try to create what you need manually. This makes things clearer and helps you understand better what is needed and how it has to be modified to make it work.&lt;br&gt;
This was the process I followed, and after a few hours of trials and a support ticket with AWS, it was solved with the feedback that this scenario is ‘tricky’.&lt;br&gt;
The two things that must be done to make CRR work from an unencrypted Source bucket to an encrypted Destination bucket, after the replication role is created, are:&lt;/p&gt;

&lt;p&gt;1. In the Source account, get the role ARN and use it to create a new policy. This policy needs to be added to the KMS key in the Destination account.&lt;br&gt;
2. Modify the role to add a new policy to it, so it is able to use the KMS key in the Destination account. For this, the KMS key ARN is needed and the policy will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "VisualEditor0",
           "Effect": "Allow",
           "Action": [
               "kms:Decrypt",
               "kms:Encrypt",
               "kms:GenerateDataKey*",
               "kms:ReEncrypt*",
               "kms:DescribeKey"
               ],
           "Resource": "arn:aws:kms:[aws-region]:[account-id]:key/1234abcd-12ab-34cd-56ef-1234567890ab"
       }
   ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  How do I put it in code?
&lt;/h1&gt;

&lt;p&gt;Let’s say that the bucket to be replicated is called: &lt;strong&gt;source-test-replication&lt;/strong&gt;, and it is in the Source account, in the Ohio region. The versioning is enabled, and the default encryption is disabled. The bucket in the Destination account is &lt;strong&gt;destination-test-replication&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The Terraform code for normal replication, which creates a KMS key for the new bucket, includes these KMS resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_kms_key" "replication_s3_kms_key" {
 description = "s3 encryption key"
}

resource "aws_kms_alias" "replication_s3_kms_alias" {
 name          = "alias/replication-s3-key"
 target_key_id = aws_kms_key.replication_s3_kms_key.key_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this scenario to work, the code needs to be modified and the following needs to be added:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "kms_policy" {
  statement {
   sid = "Enable IAM User Permissions"
   effect = "Allow"
   principals {
     type = "AWS"
     identifiers = [
       "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
       ]
   }
   actions = [
     "kms:*"
   ]
   resources = [
     "*"
   ]
 }
 statement {
   sid = "Allow use of the key"
   effect = "Allow"
   principals {
     type = "AWS"
     identifiers = [
        "arn:aws:iam::&amp;lt;Replication role ARN&amp;gt;"
       ]
   }
   actions = [
     "kms:Encrypt",
     "kms:Decrypt",
     "kms:ReEncrypt*",
     "kms:GenerateDataKey*",
     "kms:DescribeKey"
   ]
   resources = [
     "arn:aws:s3:::destination-test-replication"
   ]
 }
}

resource "aws_kms_key" "replication_s3_kms_key" {
 description = "s3 encryption key"
 policy = data.aws_iam_policy_document.kms_policy.json
}

resource "aws_kms_alias" "replication_s3_kms_alias" {
 name          = "alias/replication-s3-key"
 target_key_id = aws_kms_key.replication_s3_kms_key.key_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both statements are needed; if you get an error like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: MalformedPolicyDocumentException: The new key policy will not allow you to update the key policy in the future.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;it means that the first statement is missing.&lt;/p&gt;

&lt;p&gt;This is all that needs to be done in code, but don’t forget about the second requirement: the policy in the Source account to add to the replication role. For this, we need to create the new policy, choose a name for it, and attach it to the replication role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:Encrypt",
                "kms:GenerateDataKey*",
                "kms:ReEncrypt*",
                "kms:DescribeKey"
                ],
            "Resource": "arn:aws:kms:eu-central-1:&amp;lt;Destination Account ID&amp;gt;:key/523b2035-e947-4c71-8690-db6b43589c34"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
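&lt;p&gt;If the replication role is also managed in Terraform, attaching this second policy could be sketched like this. The names here are hypothetical: it assumes the role exists as aws_iam_role.replication and that the Destination key ARN is passed in as a variable.&lt;/p&gt;

```hcl
# Sketch only: resource and variable names are illustrative.
# Run in the Source account, where the replication role lives.
data "aws_iam_policy_document" "replication_kms_use" {
  statement {
    effect = "Allow"
    actions = [
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:DescribeKey",
    ]
    # ARN of the KMS key in the Destination account
    resources = [var.destination_kms_key_arn]
  }
}

resource "aws_iam_role_policy" "replication_kms_use" {
  name   = "replication-kms-use"
  role   = aws_iam_role.replication.id
  policy = data.aws_iam_policy_document.replication_kms_use.json
}
```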



&lt;p&gt;To wrap it up, for the replication to work in this scenario, the KMS key in the Destination account needs to have a policy to allow the replication IAM role to use it, and the replication role needs to have a policy to use the KMS key in the destination account.&lt;/p&gt;

&lt;h1&gt;
  
  
  Looking forward
&lt;/h1&gt;

&lt;p&gt;This year at re:Invent, a lot of great things were announced for S3, and I am looking forward to seeing which ones will facilitate automated deployments and which ones will be tricky to play with.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>terraform</category>
      <category>wecoded</category>
    </item>
    <item>
      <title>How to use AWS Lambda to create CI/CD dependencies with CodePipeline</title>
      <dc:creator>Andra Somesan (she/her)</dc:creator>
      <pubDate>Fri, 23 Jul 2021 06:44:50 +0000</pubDate>
      <link>https://forem.com/andrasomesan/how-to-use-aws-lambda-to-create-ci-cd-dependencies-with-codepipeline-j5h</link>
      <guid>https://forem.com/andrasomesan/how-to-use-aws-lambda-to-create-ci-cd-dependencies-with-codepipeline-j5h</guid>
      <description>&lt;p&gt;Disclaimer: This article was first published &lt;a href="https://www.sentiatechblog.com/use-aws-lambda-to-create-ci-cd-dependencies-with-codepipeline"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By trying to fully automate the solutions we work on, we make everybody's life much easier. Using the proper tools and services to do so, including AWS CDK and AWS CodePipeline, really simplifies the process. When we started offering Kubernetes solutions to our customers at my old company, one CodePipeline pipeline was not enough to do the job, because a single pipeline with multiple purposes adds complexity to the solution and creates unwanted dependencies.&lt;/p&gt;

&lt;p&gt;Separating the pipelines by purpose removes complexity from the pipeline design, creates separate processes, decouples the components, makes debugging easier, and improves the change process by letting you change only the part you need.&lt;/p&gt;

&lt;p&gt;In our case, we wanted to separate the infrastructure deployments from the application (pod) deployments for K8s, because we wanted all the infrastructure to be up to date with the latest changes before we deploy the application. And if the only modification made is at the application level, there is no need to update the infrastructure as well.&lt;/p&gt;

&lt;p&gt;Although having different pipelines for different purposes brings many benefits, it also comes with some challenges. The main challenge is managing all the pipelines, because some of them take more time to run than others. And usually, when this happens, you might need them to run in a specific order, so the result of one pipeline can be used as input for the next one, and so on. To achieve this, some pipeline dependencies need to be created. Doing this in a professional and cost-effective manner, following the recommended best practices from AWS, leads us to AWS Lambda functions.&lt;/p&gt;

&lt;p&gt;Back to our use case: we have the main pipeline, the one responsible for deploying all the underlying infrastructure, which needs more time to complete than the K8s pipeline used for the pod deployments. The first one is responsible for updating the underlying infrastructure, including the K8s infrastructure, and it should successfully finish all of its stages before the second one starts running. This way, the K8s pipeline will have the latest updates on the infrastructure and can run safely.&lt;/p&gt;

&lt;p&gt;The plan is to create a new stage in the K8s pipeline and add an action that triggers a Lambda function. This first Lambda function is the main Lambda, and it is responsible for checking whether the main pipeline is running; if it isn’t, it lets the process continue.&lt;/p&gt;
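&lt;p&gt;As an illustration (not the article's actual code), the main Lambda's check could look like the sketch below. In a real Lambda the client would be boto3.client("codepipeline") and the verdict would be reported back to CodePipeline via put_job_success_result or put_job_failure_result; the client is injected here so the logic is easy to test.&lt;/p&gt;

```python
# Sketch of the main Lambda's check; function and variable names are
# illustrative, not taken from the article.

def is_pipeline_running(codepipeline, pipeline_name):
    """Return True if any stage of the named pipeline is mid-execution.

    `codepipeline` is expected to expose get_pipeline_state(name=...),
    like the boto3 CodePipeline client does.
    """
    state = codepipeline.get_pipeline_state(name=pipeline_name)
    return any(
        stage.get("latestExecution", {}).get("status") == "InProgress"
        for stage in state.get("stageStates", [])
    )
```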

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UwoGr5F0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qbhu8is33up1h4zvsete.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UwoGr5F0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qbhu8is33up1h4zvsete.jpg" alt="Lambda Function"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To control the ScheduledExpression parameter, a cron expression is used to schedule an execution 5 minutes after the current time.&lt;/p&gt;
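&lt;p&gt;A small, hypothetical helper for building such an expression might look like this; it formats a one-shot CloudWatch Events cron() expression (minute hour day-of-month month day-of-week year) for five minutes after a given UTC time.&lt;/p&gt;

```python
# Illustrative helper, not the article's code: builds a one-shot
# CloudWatch Events cron() expression 5 minutes ahead of `now_utc`.
from datetime import datetime, timedelta

def one_shot_cron(now_utc, minutes_ahead=5):
    t = now_utc + timedelta(minutes=minutes_ahead)
    # cron(minute hour day-of-month month day-of-week year);
    # "?" is used for day-of-week when day-of-month is set.
    return f"cron({t.minute} {t.hour} {t.day} {t.month} ? {t.year})"
```

&lt;p&gt;The resulting string would be passed as the ScheduledExpression when the rule is created.&lt;/p&gt;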

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GIY47ian--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wuh1575jmwpd6qcy28a8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GIY47ian--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wuh1575jmwpd6qcy28a8.jpg" alt="Lambda parameters"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the main pipeline is running, the main Lambda will create a CloudWatch Event that has another Lambda function as its target; let’s call it the K8s Lambda. The CloudWatch Event has a rule triggered by the scheduled expression we talked about above, with the K8s Lambda function as its target. The scope of this function is to first delete the event and then trigger the K8s pipeline again. In order to delete the event, it must first delete the event's targets. Deleting the event ensures that we can always trigger the K8s Lambda 5 minutes after the event was created.&lt;/p&gt;
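&lt;p&gt;The K8s Lambda's cleanup-and-retry step could be sketched as below. Again, these names are hypothetical: in a real Lambda, `events` and `codepipeline` would be boto3.client("events") and boto3.client("codepipeline"); they are injected here for testability.&lt;/p&gt;

```python
# Sketch of the K8s Lambda's cleanup step; names are illustrative.

def cleanup_and_retrigger(events, codepipeline, rule_name, target_ids, pipeline_name):
    # The rule's targets must be removed before the rule itself
    # can be deleted.
    events.remove_targets(Rule=rule_name, Ids=target_ids)
    events.delete_rule(Name=rule_name)
    # With the one-shot event gone, start the K8s pipeline again.
    return codepipeline.start_pipeline_execution(name=pipeline_name)
```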

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxUnpBuE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9609bgbyjivebl06oyv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxUnpBuE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9609bgbyjivebl06oyv.jpg" alt="Lambda triggers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These checks are repeated until the main pipeline has finished running (the first scenario becomes valid), at which point the main Lambda allows the stage to be validated and CodePipeline to move on to the next stage in the K8s pipeline.&lt;/p&gt;

&lt;p&gt;The decision tree looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t5vxN-z5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7p9gpd0gb5ezllk6mye9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t5vxN-z5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7p9gpd0gb5ezllk6mye9.jpg" alt="Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having separate pipelines for different purposes simplifies any complex solution and brings automation and flexibility to it. You can now achieve decoupled infrastructure, a simplified debugging process, and more control over changes, running only what you need, when you need it. As with any change, this comes with some challenges. But when you have the right tools and some creativity, these challenges can be transformed into great solutions for the future.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>codepipeline</category>
      <category>cdk</category>
    </item>
  </channel>
</rss>
