<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mark Sta Ana</title>
    <description>The latest articles on Forem by Mark Sta Ana (@booyaa).</description>
    <link>https://forem.com/booyaa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F14244%2Fa5aa4d77-23d8-4149-b42e-d4a5ae0965fb.png</url>
      <title>Forem: Mark Sta Ana</title>
      <link>https://forem.com/booyaa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/booyaa"/>
    <language>en</language>
    <item>
      <title>Adding missing functionality to Terraform</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Sun, 27 Oct 2019 13:37:00 +0000</pubDate>
      <link>https://forem.com/booyaa/adding-missing-functionality-to-terraform-1390</link>
      <guid>https://forem.com/booyaa/adding-missing-functionality-to-terraform-1390</guid>
      <description>&lt;p&gt;Photo by Hello I'm Nik 🇬🇧 on &lt;a href="https://unsplash.com/photos/qaKG2zozPYE"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I needed to codify the creation of PostgreSQL read replicas, so I did a bit of research around ways I could do this quickly without diving into the Terraform &lt;a href="https://www.terraform.io/docs/providers/azurerm/"&gt;provider&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The quickest way to do this was to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;use the &lt;a href="https://www.terraform.io/docs/provisioners/local-exec.html"&gt;&lt;code&gt;local-exec&lt;/code&gt;&lt;/a&gt; provisioner to invoke the Azure CLI commands (details to follow)&lt;/li&gt;
&lt;li&gt;wrap the code in a module (to allow for reuse and sharing with the community)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Azure &lt;a href="https://docs.microsoft.com/en-us/azure/postgresql/howto-read-replicas-cli"&gt;docs&lt;/a&gt; list the following steps to create a read replica using the Azure CLI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable replication support on the primary server&lt;/li&gt;
&lt;li&gt;Restart the primary server (for the changes to take effect)&lt;/li&gt;
&lt;li&gt;Create the replica using the primary server as the source&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Caveat emptor: as the Terraform docs &lt;a href="https://www.terraform.io/docs/provisioners/local-exec.html"&gt;mention&lt;/a&gt;, provisioners are a last resort. A major downside of using this method to add missing functionality is that there's no state tracking, i.e. if you make a change to the resource, Terraform won't know about it.&lt;/p&gt;

&lt;p&gt;Here's the essence of the code (I've omitted certain details for brevity; the full code is on &lt;a href="https://github.com/booyaa/terraform-azurerm-postgresql-read-replica"&gt;GitHub&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"null_resource"&lt;/span&gt; &lt;span class="s2"&gt;"postgresql-read-replica"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;triggers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;resource_group_name&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;resource_group_name&lt;/span&gt;
    &lt;span class="nx"&gt;postgresql_primary_server_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;postgresql_primary_server_name&lt;/span&gt;
    &lt;span class="nx"&gt;postgresql_replica_server_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;postgresql_replica_server_name&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://www.terraform.io/docs/providers/null/resource.html"&gt;&lt;code&gt;null_resource&lt;/code&gt;&lt;/a&gt; is used as a container for the &lt;code&gt;local-exec&lt;/code&gt; calls. The &lt;code&gt;triggers&lt;/code&gt; block causes the resource to be replaced, i.e. destroyed and recreated, whenever the resource group or the PostgreSQL primary or replica server name changes.&lt;/p&gt;

&lt;p&gt;You can already see that &lt;a href="https://www.terraform.io/docs/modules/index.html"&gt;modules&lt;/a&gt; are just ordinary bits of Terraform code.&lt;br&gt;
&lt;/p&gt;
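&lt;p&gt;Being ordinary Terraform, the module only needs matching input variables for the three values referenced in the &lt;code&gt;triggers&lt;/code&gt; block. Here's a minimal sketch (the names come from the code above; the types and descriptions are my additions, not necessarily the module's actual declarations):&lt;br&gt;
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# variables.tf (sketch)
variable "resource_group_name" {
  description = "Resource group containing the primary server"
  type        = string
}

variable "postgresql_primary_server_name" {
  description = "Name of the primary PostgreSQL server"
  type        = string
}

variable "postgresql_replica_server_name" {
  description = "Name to give the read replica"
  type        = string
}
&lt;/code&gt;&lt;/pre&gt;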

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;  &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"local-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;ENABLE_REPLICATION&lt;/span&gt;&lt;span class="sh"&gt;
az postgres server configuration set \
...
&lt;/span&gt;&lt;span class="no"&gt;ENABLE_REPLICATION
&lt;/span&gt;  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"local-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;RESTART_SERVER&lt;/span&gt;&lt;span class="sh"&gt;
az postgres server restart \
...
&lt;/span&gt;&lt;span class="no"&gt;RESTART_SERVER
&lt;/span&gt;  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"local-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;CREATE_REPLICA&lt;/span&gt;&lt;span class="sh"&gt;
az postgres server replica create \
...
&lt;/span&gt;&lt;span class="no"&gt;CREATE_REPLICA
&lt;/span&gt;  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These three provisioner blocks perform the steps required to create a read replica using the Azure CLI. To avoid having to escape quotes, we're using &lt;a href="https://en.wikipedia.org/wiki/Here_document"&gt;here doc&lt;/a&gt; notation.&lt;br&gt;
&lt;/p&gt;
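&lt;p&gt;For illustration, here's roughly what the first here doc looks like filled in. The flags follow the Azure read-replica docs linked above; treat the exact invocation as an approximation rather than the module's actual code:&lt;br&gt;
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;  provisioner "local-exec" {
    command = &amp;lt;&amp;lt;ENABLE_REPLICATION
az postgres server configuration set \
  --resource-group ${var.resource_group_name} \
  --server-name ${var.postgresql_primary_server_name} \
  --name azure.replication_support \
  --value REPLICA
ENABLE_REPLICATION
  }
&lt;/code&gt;&lt;/pre&gt;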

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;  &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"local-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;when&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"destroy"&lt;/span&gt;
    &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;DESTROY_REPLICA&lt;/span&gt;&lt;span class="sh"&gt;
az postgres server delete \
  --name ${var.postgresql_replica_server_name} \
  --resource-group ${var.resource_group_name} \
  --yes
&lt;/span&gt;&lt;span class="no"&gt;DESTROY_REPLICA
&lt;/span&gt;  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we handle what to do when the replica is destroyed.&lt;/p&gt;

&lt;p&gt;I've uploaded the module to the Terraform &lt;a href="https://registry.terraform.io/modules/booyaa/postgresql-read-replica/azurerm/0.2.0"&gt;registry&lt;/a&gt;, which means it can be referenced just like any other resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="nx"&gt;demo&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;replica&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"booyaa/terraform-azurerm-postgresql-read-replica"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;demo&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;postgresql_primary_server_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_postgresql_server&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;demo&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;postgresql_replica_server_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${azurerm_postgresql_server.demo.name}-replica"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just like &lt;a href="https://www.terraform.io/docs/configuration/data-sources.html"&gt;data sources&lt;/a&gt;, modules can be referenced by other resources, so we can now apply a firewall rule to the read replica:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_postgresql_firewall_rule"&lt;/span&gt; &lt;span class="s2"&gt;"demo-replica"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"office"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;demo&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;server_name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;demo&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;replica&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;replica_name&lt;/span&gt;
  &lt;span class="nx"&gt;start_ip_address&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"8.8.8.8"&lt;/span&gt;
  &lt;span class="nx"&gt;end_ip_address&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"8.8.8.8"&lt;/span&gt;

  &lt;span class="nx"&gt;depends_on&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;demo&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;replica&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
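&lt;p&gt;The &lt;code&gt;module.demo-replica.replica_name&lt;/code&gt; reference implies the module declares an output along these lines (a sketch of how it might be wired; the real definition lives in the module's repo):&lt;br&gt;
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# outputs.tf (sketch)
output "replica_name" {
  # Reading the name back through the null_resource's triggers means
  # anything consuming this output waits for the replica to be created
  value = null_resource.postgresql-read-replica.triggers.postgresql_replica_server_name
}
&lt;/code&gt;&lt;/pre&gt;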



</description>
      <category>terraform</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Learning about eBPF on macOS</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Sun, 13 Oct 2019 12:21:00 +0000</pubDate>
      <link>https://forem.com/booyaa/learning-about-ebpf-on-macos-1h42</link>
      <guid>https://forem.com/booyaa/learning-about-ebpf-on-macos-1h42</guid>
      <description>&lt;p&gt;Photo by Sarah Lee on &lt;a href="https://unsplash.com/photos/QURU8IY-RaI"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a short post about a new GitHub repo that might be useful to some: &lt;a href="https://github.com/booyaa/vagrant-bcctools"&gt;vagrant-bcctools&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It's a simple Vagrant box using the latest (at the time of writing) version of Ubuntu (bionic) with the bcc tools &lt;a href="https://packages.ubuntu.com/bionic/all/bpfcc-tools/filelist"&gt;package&lt;/a&gt; installed.&lt;/p&gt;

&lt;p&gt;I needed a way to play around with &lt;a href="https://www.iovisor.org/technology/ebpf"&gt;eBPF&lt;/a&gt; on macOS locally. So before embarking on a fool's errand, I did some research. For details about my findings, see the repo's &lt;a href="https://github.com/booyaa/vagrant-bcctools"&gt;&lt;code&gt;README.md&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I saw there was a Docker image, but it doesn't work because I think it expects the underlying Docker host to be Linux-based (it volume-mounts &lt;code&gt;/lib/modules&lt;/code&gt;, &lt;code&gt;/usr/src&lt;/code&gt; and &lt;code&gt;/etc/localtime&lt;/code&gt;). &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Vagrantfile&lt;/code&gt; provided by &lt;a href="https://www.iovisor.org/"&gt;IO Visor&lt;/a&gt; uses such an old version of Ubuntu that modern versions of Vagrant seem to choke on it.&lt;/p&gt;

&lt;p&gt;No doubt someone will point me to something that only requires &lt;a href="https://github.com/machyve/xhyve"&gt;xhyve&lt;/a&gt; (at me on Twitter or dev.to if you do know).&lt;/p&gt;

</description>
      <category>linux</category>
      <category>ebpf</category>
    </item>
    <item>
      <title>Thanks HacktoberFest!</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Mon, 07 Oct 2019 07:55:11 +0000</pubDate>
      <link>https://forem.com/booyaa/thanks-hacktoberfest-33di</link>
      <guid>https://forem.com/booyaa/thanks-hacktoberfest-33di</guid>
      <description>&lt;p&gt;Just a small diversion before the article begins in earnest...&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I'm available for hire! If you want to get in touch, contact details are available in this Twitter &lt;a href="https://twitter.com/booyaa/status/1179297971906715648" rel="noopener noreferrer"&gt;thread&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Alternatively, you can drop me a message here or check out my &lt;a href="https://dev.to/booyaa"&gt;profile&lt;/a&gt; for LinkedIn details.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;/jobAd&amp;gt;&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Photo by Kerstin Wrba on &lt;a href="https://unsplash.com/photos/zeInZepl_Hw" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've participated in a couple of HacktoberFests (for the uninitiated, this is a month-long challenge to help make open source projects better; check out the tag below for more info).&lt;/p&gt;


&lt;div class="ltag__tag ltag__tag__id__4074"&gt;
    &lt;div class="ltag__tag__content"&gt;
      &lt;h2&gt;#&lt;a href="https://dev.to/t/hacktoberfest" class="ltag__tag__link"&gt;hacktoberfest&lt;/a&gt; Follow
&lt;/h2&gt;
      &lt;div class="ltag__tag__summary"&gt;
        Happy hacking! 🎃
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Last year was a special HacktoberFest because I earned my t-shirt by doing my day job! That's right, I was working on Open Source projects for public sector clients and getting credit for HacktoberFest. 💪&lt;/p&gt;

&lt;p&gt;This year I intended to carry on as usual, but for some reason (unknown to me) a couple of PRs from an old side project caught my eye. The project was a crate (&lt;a href="https://dev.to/t/rust"&gt;Rust&lt;/a&gt; parlance for a package) called &lt;code&gt;wifiscanner&lt;/code&gt;.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/booyaa" rel="noopener noreferrer"&gt;
        booyaa
      &lt;/a&gt; / &lt;a href="https://github.com/booyaa/wifiscanner" rel="noopener noreferrer"&gt;
        wifiscanner
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A crate to list WiFi hotspots in your area
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;A quick look and I could see there was no reason not to merge them: they were passing their tests in Travis (CI/CD).&lt;/p&gt;

&lt;p&gt;This got me thinking about improvements that would make the project more friendly to contributors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;include a contribution guide.&lt;/li&gt;
&lt;li&gt;introduce GitHub Actions since it's my current hammer (see exhibits &lt;a href="https://dev.to/booyaa/icymi-github-actions-v2-is-a-breaking-change-52nc"&gt;a&lt;/a&gt;, &lt;a href="https://dev.to/booyaa/github-actions-rust-edition-46e5"&gt;b&lt;/a&gt;, &lt;a href="https://dev.to/booyaa/github-template-for-rust-projects-1ggl"&gt;c&lt;/a&gt; and &lt;a href="https://dev.to/booyaa/github-template-for-terraform-projects-3dbo"&gt;d&lt;/a&gt;) to eventually replace Travis&lt;/li&gt;
&lt;li&gt;add a &lt;code&gt;checks&lt;/code&gt; target to the Makefile to format, lint and run tests before contributors submit a PR&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whilst I could have done all of these tasks myself, I did wonder: would other people like to do them? So on a whim, I created issues and labelled them as &lt;a href="https://github.com/topics/hacktoberfest" rel="noopener noreferrer"&gt;HacktoberFest&lt;/a&gt;. Within a couple of hours, people had claimed the issues, and a few days later fixes were submitted.&lt;/p&gt;

&lt;p&gt;I've been thinking about the contributing factors to my success. I think the quality of the issues played a significant part:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;there was a brief explanation of the task&lt;/li&gt;
&lt;li&gt;acceptance criteria (with one or more checklist items)&lt;/li&gt;
&lt;li&gt;the task could be worked on in isolation&lt;/li&gt;
&lt;li&gt;tasks were relatively short (could be completed in a few minutes)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With all this collaboration, I felt energized to work on my side project again. Heck, I even tweeted about it.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1179113691679207424-540" src="https://platform.twitter.com/embed/Tweet.html?id=1179113691679207424"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1179113691679207424-540');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1179113691679207424&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1180553738446135296-623" src="https://platform.twitter.com/embed/Tweet.html?id=1180553738446135296"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-1180553738446135296-623');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=1180553738446135296&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;p&gt;Whilst this is great (I have missed working on side projects), there's a little voice in my head telling me that the reason for my new-found love for this project is that I'm procrastinating because I'm currently job hunting! 😂 &lt;/p&gt;

&lt;p&gt;So if you're a remote-friendly company that's hiring for an SRE / DevOps role drop me a message (LinkedIn details are available in my profile)! I'd love to work magic on your systems!&lt;/p&gt;

</description>
      <category>hacktoberfest</category>
      <category>sideprojects</category>
      <category>procrastination</category>
      <category>career</category>
    </item>
    <item>
      <title>GitHub template for Rust projects</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Sat, 05 Oct 2019 21:38:52 +0000</pubDate>
      <link>https://forem.com/booyaa/github-template-for-rust-projects-1ggl</link>
      <guid>https://forem.com/booyaa/github-template-for-rust-projects-1ggl</guid>
      <description>&lt;p&gt;Photo by Marian Kroell on &lt;a href="https://unsplash.com/photos/Y5gqglvDL9Y"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've just finished creating a GitHub template that you can use to set up a new Rust project that uses GitHub Actions as a &lt;a href="https://www.thoughtworks.com/continuous-integration"&gt;CI&lt;/a&gt; pipeline.&lt;/p&gt;

&lt;p&gt;What this means is that, from day one, your project will always run the following cargo commands against any code changes (PRs, pushes, etc.): &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;check&lt;/code&gt; - Check a local package and all of its dependencies for errors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test&lt;/code&gt; - Execute all unit and integration tests and build examples of a local package&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fmt&lt;/code&gt; - This utility formats all bin and lib files of the current crate using rustfmt&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;clippy&lt;/code&gt; - Checks a package to catch common mistakes and improve your Rust code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's how to use it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;go to my template: &lt;a href="https://github.com/booyaa/gh-actions-template-rust"&gt;https://github.com/booyaa/gh-actions-template-rust&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;click on the "Use this template" button&lt;/li&gt;
&lt;li&gt;fill out the project details using the new project wizard&lt;/li&gt;
&lt;li&gt;clone your new project&lt;/li&gt;
&lt;li&gt;cd to your project&lt;/li&gt;
&lt;li&gt;initialise the project using &lt;code&gt;cargo init --vcs none&lt;/code&gt; or copy files from your existing Rust project&lt;/li&gt;
&lt;li&gt;commit and push your changes up to GitHub&lt;/li&gt;
&lt;li&gt;sit back and relax as GitHub Actions will now perform checks against any code changes submitted to your new Rust project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're wondering why I don't include any Rust scaffolding, it's because &lt;code&gt;cargo&lt;/code&gt; already does a great job of providing this. I didn't want to have to update the template whenever the new-project process owned by &lt;code&gt;cargo&lt;/code&gt; changes.&lt;/p&gt;

&lt;p&gt;As an added bonus you also get Rust's dual license of MIT and Apache v2!&lt;/p&gt;

&lt;p&gt;I'd love to know what you think of this template. :)&lt;/p&gt;

</description>
      <category>github</category>
      <category>rust</category>
      <category>devops</category>
      <category>template</category>
    </item>
    <item>
      <title>GitHub template for Terraform projects</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Mon, 30 Sep 2019 18:59:46 +0000</pubDate>
      <link>https://forem.com/booyaa/github-template-for-terraform-projects-3dbo</link>
      <guid>https://forem.com/booyaa/github-template-for-terraform-projects-3dbo</guid>
      <description>&lt;p&gt;Photo by Shane McLendon on &lt;a href="https://unsplash.com/photos/9jPJrfLTBi0"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hot on the heels of my last template for &lt;a href="https://dev.to/booyaa/github-template-for-rust-projects-cm8-temp-slug-9083640"&gt;Rust&lt;/a&gt;, I've created a new one for &lt;a href="https://www.terraform.io"&gt;Terraform&lt;/a&gt;. As before this gives you a new Terraform ready project that uses GitHub Actions for Terraform as a &lt;a href="https://www.thoughtworks.com/continuous-integration"&gt;CI&lt;/a&gt; pipeline.&lt;/p&gt;

&lt;p&gt;What this means is that, from day one, your project will always run the following Terraform commands against any code changes (PRs, pushes, etc.): &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;fmt&lt;/code&gt; - Rewrites all Terraform configuration files to a canonical format.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;init&lt;/code&gt; - Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;validate&lt;/code&gt; - Validate the configuration files in a directory, referring only to the configuration and not accessing any remote services such as remote state, provider APIs, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's how to use it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;go to my template: &lt;a href="https://github.com/booyaa/gh-actions-template-terraform"&gt;https://github.com/booyaa/gh-actions-template-terraform&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;click on the "Use this template" button&lt;/li&gt;
&lt;li&gt;fill out the project details using the new project wizard&lt;/li&gt;
&lt;li&gt;clone your new project&lt;/li&gt;
&lt;li&gt;cd to your project&lt;/li&gt;
&lt;li&gt;initialise the project, i.e. create a &lt;code&gt;main.tf&lt;/code&gt; and add resources&lt;/li&gt;
&lt;li&gt;commit and push your changes up to GitHub&lt;/li&gt;
&lt;li&gt;sit back and relax as GitHub Actions will now perform checks against any code changes submitted to your new Terraform project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're wondering why I don't include a &lt;code&gt;main.tf&lt;/code&gt;, it's because Terraform projects rarely share similar infrastructure. Also, I didn't see the point of including an empty Terraform file for the sake of having one.&lt;/p&gt;

&lt;p&gt;You may have noticed that &lt;code&gt;plan&lt;/code&gt; and &lt;code&gt;apply&lt;/code&gt; aren't included; these usually require secrets to be set (API keys, cloud vendor access keys), and again these are too specific to a given project.&lt;/p&gt;

&lt;p&gt;Update: it turns out &lt;code&gt;init&lt;/code&gt; will be unhappy if there are any remote backends. I will write a new blog post to share my findings on wiring up this workflow to an existing remote backend on Azure Storage.&lt;/p&gt;

&lt;p&gt;I'd love to know what you think of this template. :)&lt;/p&gt;

</description>
      <category>github</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>GitHub Actions: Rust edition</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Sat, 28 Sep 2019 13:43:27 +0000</pubDate>
      <link>https://forem.com/booyaa/github-actions-rust-edition-46e5</link>
      <guid>https://forem.com/booyaa/github-actions-rust-edition-46e5</guid>
      <description>&lt;p&gt;Photo by Zsolt Palatinus on &lt;a href="https://unsplash.com/photos/pEK3AbP8wa4"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm on a bit of a GitHub Actions deep dive this weekend. I tried GitHub's starter workflow for &lt;a href="https://github.com/actions/starter-workflows/blob/master/ci/rust.yml"&gt;Rust&lt;/a&gt; but I was disappointed to discover the macOS virtual environment doesn't have the &lt;a href="https://www.rust-lang.org/"&gt;Rust&lt;/a&gt; toolchain.&lt;/p&gt;

&lt;p&gt;Luckily the folks at &lt;a href="https://github.com/actions-rs"&gt;action-rs&lt;/a&gt; have you covered. They've developed a bunch of actions for Rust's toolchain. All your favourites are there including Clippy!&lt;/p&gt;

&lt;p&gt;Why do I need Rust on macOS? I like to test my code across as many platforms as possible. You can use a job &lt;a href="https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idstrategy"&gt;strategy&lt;/a&gt; in your workflow to achieve this.&lt;/p&gt;

&lt;p&gt;There's a handy &lt;a href="https://github.com/actions-rs/meta/blob/master/recipes/quickstart.md"&gt;quickstart&lt;/a&gt; guide on the action-rs "meta" GitHub repo.&lt;/p&gt;

&lt;p&gt;p.s. Can you tell I'm super excited about GitHub Actions? Doing a lot of squeeing at the moment despite coming across the odd quirk.&lt;br&gt;
p.p.s. There's a rather excellent blog post about the action-rs project by one of the main authors of action-rs: &lt;a href="https://svartalf.info/posts/2019-09-16-github-actions-for-rust/"&gt;svartalf.info/posts/2019-09-16-github-actions-for-rust&lt;/a&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>rust</category>
      <category>devops</category>
    </item>
    <item>
      <title>ICYMI GitHub Actions v2 is a breaking change</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Sat, 28 Sep 2019 10:45:18 +0000</pubDate>
      <link>https://forem.com/booyaa/icymi-github-actions-v2-is-a-breaking-change-52nc</link>
      <guid>https://forem.com/booyaa/icymi-github-actions-v2-is-a-breaking-change-52nc</guid>
      <description>&lt;p&gt;Photo by Slim Emcee (UG) the poet Truth_From_Africa_Photography on &lt;a href="https://unsplash.com/photos/zGERaFHaLF0"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the 30th of September, the old HCL format will stop working. There's a handy migration tool you can download to migrate your existing workflows to the new YAML format. &lt;/p&gt;

&lt;p&gt;It's ludicrously easy to use, and there are step-by-step instructions on the following &lt;a href="https://help.github.com/en/articles/migrating-github-actions-from-hcl-syntax-to-yaml-syntax"&gt;GitHub help page&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>github</category>
      <category>yaml</category>
      <category>hcl</category>
      <category>top7</category>
    </item>
    <item>
      <title>AWS DevOps Pro Certification Blog Post Series: Exam Time!</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Tue, 09 Jul 2019 13:37:00 +0000</pubDate>
      <link>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-exam-time-58a</link>
      <guid>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-exam-time-58a</guid>
      <description>&lt;p&gt;Photo by Ian Kim on &lt;a href="https://unsplash.com/photos/gKs6zNil_Ro"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveat emptor
&lt;/h2&gt;

&lt;p&gt;Using AWS costs money; some of these services may not be part of the AWS &lt;a href="https://aws.amazon.com/free/"&gt;Free Tier&lt;/a&gt;. You can keep costs down by tearing down anything you've created whilst learning, but it's still possible to run up a hefty bill, so pay attention to the instances you set up!&lt;/p&gt;

&lt;p&gt;I'm very lucky to be able to use my employer's AWS account. You should ask your place of work if a similar arrangement can be made as part of your study.&lt;/p&gt;

&lt;h2&gt;
  
  
  Velocius quam asparagi conquantur
&lt;/h2&gt;

&lt;p&gt;The format of the blog posts is liable to change as I try to refine my mental model of each domain, so be sure to revisit the blog posts on a regular basis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exam Time!
&lt;/h2&gt;

&lt;p&gt;At the end of June, I sat the AWS DevOps Professional exam and sadly, readers, I did not pass. I hadn't really expected to pass the first time, but I scored 69% (you need 75% to pass)!&lt;/p&gt;

&lt;p&gt;After completing the exam you're given an immediate PASS or FAIL result. A few days later you get the actual score and areas that need improving.&lt;/p&gt;

&lt;h3&gt;
  
  
  Domains where I have demonstrated adequate knowledge
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Configuration management and Infrastructure as Code&lt;/li&gt;
&lt;li&gt;Monitoring and logging&lt;/li&gt;
&lt;li&gt;Policies and Standards Automation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Domains I need to brush up on
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;SDLC automation&lt;/li&gt;
&lt;li&gt;Incident and Events response&lt;/li&gt;
&lt;li&gt;High Availability, Fault tolerance and Disaster recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Things I wish I knew beforehand
&lt;/h2&gt;

&lt;p&gt;The main and most obvious thing I should've done was to focus my study on best practices for building infrastructure and utilising AWS services.&lt;/p&gt;

&lt;p&gt;Most of the questions were of a similar format to the &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-devops-pro/AWS%20Certified%20DevOps%20Engineer%20-%20Professional_Sample%20Questions.pdf"&gt;sample exam&lt;/a&gt; and mock exam. They wanted you to pick the best answer to meet the requirements of a customer. &lt;/p&gt;

&lt;p&gt;There were so many services that I hadn't had hands-on experience with, which meant I spent a lot of time just trying to gain a basic understanding of what each service did and its use cases. It turns out I needed a deeper understanding of the domains I scored poorly against.&lt;/p&gt;

&lt;p&gt;I think the only domain I was surprised at scoring poorly against was SDLC automation, as I do a lot of CI/CD on a daily basis, although this tends to be around non-AWS services: Circle CI, Travis CI and Azure DevOps.&lt;/p&gt;

&lt;p&gt;So, how can you find out what the best practices are? There are probably two key documents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The AWS &lt;a href="https://aws.amazon.com/whitepapers/"&gt;whitepapers&lt;/a&gt;, which often contain solutions that combine various AWS products and services&lt;/li&gt;
&lt;li&gt;The service's user or developer guide&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My study before retaking the exam will focus on this. I only spent the week before the exam skimming through the whitepapers; this time I plan to make a lot more notes and try to memorise the solution diagrams provided.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tips for sitting the exam
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Check your gear
&lt;/h3&gt;

&lt;p&gt;The exam centres will require you to place all your belongings in a locker.&lt;/p&gt;

&lt;p&gt;Some centres will allow you to bring cups of water into the exam; take two if you can carry them.&lt;/p&gt;

&lt;p&gt;You'll also be given paper and pen, or something similar to make notes. Check the pen works before you start the exam!&lt;/p&gt;

&lt;p&gt;Empty your bladder before you sit the exam; some centres require you to get the attention of the invigilator, which can cost you precious minutes of exam time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Timekeeping
&lt;/h3&gt;

&lt;p&gt;Start getting used to scanning questions in 2-minute intervals. This is the maximum you should spend reading and answering a question.&lt;/p&gt;

&lt;p&gt;In a perfect scenario where you answered every question fully and skipped none, it would be easy to know when the next 2-minute interval starts. In reality, you'll skip some questions and answer others in under 2 minutes, so, whilst this might be obvious to most, here's how I kept myself on track: as I started each question I noted the current time on the exam countdown timer and worked out when I needed to answer by. If the timer was at 1 hour and 7 minutes, I had to answer on or before 1 hour and 5 minutes.&lt;/p&gt;
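
&lt;p&gt;The bookkeeping above amounts to a one-liner; here's a sketch of the deadline calculation (the function name and the 2-minute default are mine, purely illustrative):&lt;/p&gt;

```python
def answer_deadline(timer_minutes_remaining, budget_minutes=2):
    """Given the exam countdown timer, return the timer value by which
    the current question should be answered."""
    return timer_minutes_remaining - budget_minutes

# Timer shows 1 hour 7 minutes: answer on or before 1 hour 5 minutes.
print(answer_deadline(67))  # 65
```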

&lt;h3&gt;
  
  
  Skip and don't dither
&lt;/h3&gt;

&lt;p&gt;Unless you're feeling particularly confident about your capabilities, you'll probably be in a blind panic (like I was). Give yourself up to 30 seconds (less if you can manage it) to scan the question; if you don't even know where to begin, skip it. That time is better spent on questions where you have at least a vague notion of the answer.&lt;/p&gt;

&lt;p&gt;Skipping questions is okay and you will find there's still time to revisit these questions once you've gone through all the questions.&lt;/p&gt;

&lt;p&gt;Don't panic if you find you're skipping lots of questions either; it's probably just nerves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternative study aids (new)
&lt;/h2&gt;

&lt;p&gt;Whilst studying for this exam, I've used the following to help absorb the study material:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My AWS DevOps Pro &lt;a href="https://tiny.cards/decks/MYHnT1YG/aws-devops-pro-2019"&gt;Tinycards&lt;/a&gt; deck (Tinycards is a flashcard app by Duolingo). It's a bit rough and ready, but it's helped me retain some of the facts and figures that pop up.&lt;/li&gt;
&lt;li&gt;I've also been listening to &lt;a href="https://www.lastweekinaws.com/"&gt;Last week in AWS&lt;/a&gt; which is a podcast by Corey Quinn that covers the ever-changing AWS landscape.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Until next time
&lt;/h2&gt;

&lt;p&gt;I will, of course, let you know when I sit and pass the next exam (see what I did there). I'm aiming for mid-July so expect to hear from me soon!&lt;/p&gt;

&lt;p&gt;Unsplash path (what terms I used to get to the cover image): failure&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To go to the next part of the series, click on the grey dot below which is next to the current marker (the black dot).&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>study</category>
      <category>certification</category>
    </item>
    <item>
      <title>AWS DevOps Pro Certification Blog Post Series: Study Gaps</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Sat, 22 Jun 2019 13:37:00 +0000</pubDate>
      <link>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-study-gaps-51nd</link>
      <guid>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-study-gaps-51nd</guid>
      <description>&lt;p&gt;Photo by Suad Kamardeen on &lt;a href="https://unsplash.com/photos/MYKAZlzW6Nw"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveat emptor
&lt;/h2&gt;

&lt;p&gt;Using AWS costs money, and some of these services may not be part of the AWS &lt;a href="https://aws.amazon.com/free/"&gt;Free Tier&lt;/a&gt;. You can keep costs down by tearing down anything you've created whilst learning, but it's still possible to run up a hefty bill, so pay attention to the instances you set up!&lt;/p&gt;

&lt;p&gt;I'm very lucky to be able to use my employer's AWS account. You should ask your place of work if a similar arrangement can be made as part of your study.&lt;/p&gt;

&lt;h2&gt;
  
  
  Velocius quam asparagi conquantur
&lt;/h2&gt;

&lt;p&gt;The format of the blog posts is liable to change as I try to refine my mental model of each domain, so be sure to revisit the blog posts on a regular basis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Study Gaps
&lt;/h2&gt;

&lt;p&gt;This section will change a lot as I find new gaps whilst sitting mock exams.&lt;/p&gt;

&lt;p&gt;I've had a go at the &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-devops-pro/AWS%20Certified%20DevOps%20Engineer%20-%20Professional_Sample%20Questions.pdf"&gt;sample exam&lt;/a&gt; under exam conditions (which, before AWS made the exams adaptive, would leave you with about 2 minutes per question). Here are some areas where I need to fill in gaps:&lt;/p&gt;

&lt;h3&gt;
  
  
  General
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Knowing which services are able to use &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html"&gt;Resource Based Policies&lt;/a&gt;:

&lt;ul&gt;
&lt;li&gt;Lambda (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-configuration-management-and-infrastructure-as-code-intro"&gt;Configuration Management and Infrastructure as Code&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;ECR (via ECS - &lt;a href="https://dev.to/2019/aws-devops-pro-certification-configuration-management-and-infrastructure-as-code-intro"&gt;Configuration Management and Infrastructure as Code&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;CloudWatch Logs (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-monitoring-and-logging"&gt;Monitoring and Logging&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;AWS Secrets Manager (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-policy-standards-automation/"&gt;Policies and Standards Automation&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  SDLC Automation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Need to read the &lt;a href="https://d1.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf"&gt;blue/green&lt;/a&gt; whitepaper (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-sdlc-intro/"&gt;SDLC automation&lt;/a&gt;). Pssst, if you have the time you should read all the &lt;a href="https://aws.amazon.com/whitepapers/"&gt;DevOps&lt;/a&gt;-related whitepapers!&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Blue/Green Techniques using CloudFormation or manual provisioning (i.e. through the AWS Console)
&lt;/h4&gt;

&lt;p&gt;This is based on the &lt;a href="https://d1.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf"&gt;Blue/Green&lt;/a&gt; whitepaper.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update DNS Routing with Amazon Route 53 

&lt;ul&gt;
&lt;li&gt;Setup&lt;/li&gt;
&lt;li&gt;Route 53 DNS&lt;/li&gt;
&lt;li&gt;Blue/Green Environments 

&lt;ul&gt;
&lt;li&gt;Elastic Load Balancer (ELB)&lt;/li&gt;
&lt;li&gt;Auto Scaling group behind the ELB&lt;/li&gt;
&lt;li&gt;Both environments point to the same database instance (Amazon RDS Multi-AZ)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Sub patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Classic DNS pattern&lt;/strong&gt; - Flip alias (live) record from blue to green&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Classic DNS-weighted distribution&lt;/strong&gt; - Use weighted records to split traffic across environments&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Swap the Auto Scaling Group Behind Elastic Load Balancer

&lt;ul&gt;
&lt;li&gt;Setup&lt;/li&gt;
&lt;li&gt;Route 53 DNS&lt;/li&gt;
&lt;li&gt;ELB pointing to&lt;/li&gt;
&lt;li&gt;Blue and Green Auto Scaling Groups&lt;/li&gt;
&lt;li&gt;Both ASGs point to the same database  instance (Amazon RDS Multi-AZ)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Update Auto Scaling Group Launch Configurations 

&lt;ul&gt;
&lt;li&gt;Setup&lt;/li&gt;
&lt;li&gt;Route 53 DNS&lt;/li&gt;
&lt;li&gt;ELB pointing to&lt;/li&gt;
&lt;li&gt;Auto Scaling Group containing

&lt;ul&gt;
&lt;li&gt;Blue Launch Config (LC)&lt;/li&gt;
&lt;li&gt;Green Launch Config (LC)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;LCs point to Amazon DynamoDB, Amazon RDS Multi-AZ or Amazon ElastiCache&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
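
&lt;p&gt;The &lt;strong&gt;Classic DNS-weighted distribution&lt;/strong&gt; sub-pattern boils down to proportional traffic allocation. A minimal sketch of the idea (environment names and weights are hypothetical; in reality Route 53 weighted records do the splitting, not application code):&lt;/p&gt;

```python
def split_traffic(total_requests, weights):
    """Distribute requests across environments in proportion to their
    Route 53 record weights, using largest-weight-first rounding."""
    total_weight = sum(weights.values())
    shares = {env: total_requests * w // total_weight for env, w in weights.items()}
    # Hand out any remainder left over from integer division.
    remainder = total_requests - sum(shares.values())
    for env in sorted(weights, key=weights.get, reverse=True):
        if remainder == 0:
            break
        shares[env] += 1
        remainder -= 1
    return shares

# Canary-style cutover: 90% of traffic to blue, 10% to the new green stack.
print(split_traffic(1000, {"blue": 90, "green": 10}))  # {'blue': 900, 'green': 100}
```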

&lt;p&gt;There are patterns for OpsWorks and Elastic Beanstalk; I'll add them if I have time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration Management and Infrastructure as Code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Lambda

&lt;ul&gt;
&lt;li&gt;Deploying new versions&lt;/li&gt;
&lt;li&gt;What triggers are available&lt;/li&gt;
&lt;li&gt;API Gateway&lt;/li&gt;
&lt;li&gt;AWS IoT&lt;/li&gt;
&lt;li&gt;Application Load Balancer&lt;/li&gt;
&lt;li&gt;CloudWatch Events (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-monitoring-and-logging"&gt;Monitoring and Logging&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;CloudWatch Logs (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-monitoring-and-logging"&gt;Monitoring and Logging&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;CodeCommit (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-sdlc-intro/"&gt;SDLC automation&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Cognito Sync Trigger&lt;/li&gt;
&lt;li&gt;DynamoDB (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-high-availability-fault-tolerance-disaster-recover/"&gt;High Availability, Fault Tolerance, and Disaster Recovery&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kinesis (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-incident-and-event-response/"&gt;Incident and Event Response&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Also doesn't hurt to know the following services are supported: S3, SNS and SQS&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Monitoring and Logging
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CloudWatch events for the &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EventTypes.html"&gt;services&lt;/a&gt; covered in the exam

&lt;ul&gt;
&lt;li&gt;SDLC Automation

&lt;ul&gt;
&lt;li&gt;CodeCommit&lt;/li&gt;
&lt;li&gt;CodeBuild&lt;/li&gt;
&lt;li&gt;CodeDeploy&lt;/li&gt;
&lt;li&gt;CodePipeline&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Configuration Management and Infrastructure as Code

&lt;ul&gt;
&lt;li&gt;AWS Config&lt;/li&gt;
&lt;li&gt;AWS OpsWorks&lt;/li&gt;
&lt;li&gt;AWS (Lambda) Step Functions&lt;/li&gt;
&lt;li&gt;AWS ECS&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Monitoring and Logging

&lt;ul&gt;
&lt;li&gt;CloudWatch (scheduled events)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Policies and Standards Automation

&lt;ul&gt;
&lt;li&gt;Amazon Macie&lt;/li&gt;
&lt;li&gt;AWS Systems Manager

&lt;ul&gt;
&lt;li&gt;Configuration Compliance&lt;/li&gt;
&lt;li&gt;Maintenance Windows&lt;/li&gt;
&lt;li&gt;Parameter Store&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Trusted Advisor&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Incident and Event Response

&lt;ul&gt;
&lt;li&gt;Amazon GuardDuty&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Fault Tolerance, High Availability and Disaster Recovery

&lt;ul&gt;
&lt;li&gt;Amazon EC2 Auto Scaling&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;CloudWatch Event Rule &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html"&gt;Targets&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;SDLC Automation&lt;/li&gt;
&lt;li&gt;CodeBuild&lt;/li&gt;
&lt;li&gt;CodePipeline&lt;/li&gt;
&lt;li&gt;Configuration Management and Infrastructure as Code&lt;/li&gt;
&lt;li&gt;Lambda (and Step) function&lt;/li&gt;
&lt;li&gt;Incident and Event Response&lt;/li&gt;
&lt;li&gt;Kinesis

&lt;ul&gt;
&lt;li&gt;Data Streams&lt;/li&gt;
&lt;li&gt;Data Firehose&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Amazon Inspector&lt;/li&gt;
&lt;li&gt;Policies and Standards Automation&lt;/li&gt;
&lt;li&gt;Systems Manager

&lt;ul&gt;
&lt;li&gt;Run Command&lt;/li&gt;
&lt;li&gt;Automation&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Nice to knows: SNS and SQS&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Fault Tolerance, High Availability and Disaster Recovery
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;RDS

&lt;ul&gt;
&lt;li&gt;Snapshots and their use in a DR situation. (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-high-availability-fault-tolerance-disaster-recover/"&gt;High Availability, Fault Tolerance, and Disaster Recovery&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Understanding Recovery Time Objective (RTO) and Recovery Point Objective (RPO) with DR in mind. (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-high-availability-fault-tolerance-disaster-recover/"&gt;High Availability, Fault Tolerance, and Disaster Recovery&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Policies and Standards Automation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS Systems Manager - EC2 patch groups and Patch Manager's baselines (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-policy-standards-automation/"&gt;Policies and Standards Automation&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;AWS Service Catalogue - how to offer products that provide different tiers (web, web + db) or stacks (.NET or Ruby) (&lt;a href="https://dev.to/2019/aws-devops-pro-certification-policy-standards-automation/"&gt;Policies and Standards Automation&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unsplash path (what terms I used to get to the cover image): gap&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To go to the next part of the series, click on the grey dot below which is next to the current marker (the black dot).&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>study</category>
      <category>certification</category>
    </item>
    <item>
      <title>AWS DevOps Pro Certification Blog Post Series: Databases</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Fri, 14 Jun 2019 13:37:00 +0000</pubDate>
      <link>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-databases-244f</link>
      <guid>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-databases-244f</guid>
      <description>&lt;p&gt;Photo by Jan Antonin Kolar on &lt;a href="https://unsplash.com/photos/lRoX0shwjUQ"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveat emptor
&lt;/h2&gt;

&lt;p&gt;Using AWS costs money, and some of these services may not be part of the AWS &lt;a href="https://aws.amazon.com/free/"&gt;Free Tier&lt;/a&gt;. You can keep costs down by tearing down anything you've created whilst learning, but it's still possible to run up a hefty bill, so pay attention to the instances you set up!&lt;/p&gt;

&lt;p&gt;I'm very lucky to be able to use my employer's AWS account. You should ask your place of work if a similar arrangement can be made as part of your study.&lt;/p&gt;

&lt;h2&gt;
  
  
  Velocius quam asparagi conquantur
&lt;/h2&gt;

&lt;p&gt;The format of the blog posts is liable to change as I try to refine my mental model of each domain, so be sure to revisit the blog posts on a regular basis.&lt;/p&gt;

&lt;h2&gt;
  
  
  What?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon RDS&lt;/strong&gt; (Relational Database Service) is a managed service for relational database engines. AWS supports the following engines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MySQL is an open source database engine&lt;/li&gt;
&lt;li&gt;MariaDB is a fork of MySQL, created when MySQL was acquired by Oracle (through its &lt;a href="https://en.wikipedia.org/wiki/MySQL"&gt;acquisition&lt;/a&gt; of Sun Microsystems)&lt;/li&gt;
&lt;li&gt;PostgreSQL is an open source database engine&lt;/li&gt;
&lt;li&gt;Oracle is a commercial engine by Oracle&lt;/li&gt;
&lt;li&gt;SQL Server is a commercial engine by Microsoft&lt;/li&gt;
&lt;li&gt;Amazon Aurora is a MySQL / Postgres compatible relational database engine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt; is a proprietary NoSQL database service offered by Amazon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;Managed database services, like all managed services, remove key operational concerns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;provisioning/scaling/termination of servers that host the database engines&lt;/li&gt;
&lt;li&gt;maintenance of servers (patching)&lt;/li&gt;
&lt;li&gt;backups and restores&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These concerns often affect the ability to provide a database server that is fault-tolerant, highly available and has a contingency for disaster recovery.&lt;/p&gt;

&lt;p&gt;N.B. there's a caveat around storage scaling: it isn't applicable to SQL Server instances. The details can be found in the &lt;a href="https://aws.amazon.com/rds/sqlserver/faqs/"&gt;SQL Server FAQs: Why can’t I scale my storage?&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To understand the difference between Amazon RDS and Amazon DynamoDB, I've provided the following examples:&lt;/p&gt;

&lt;p&gt;Relational database engines (sometimes referred to as RDBMS) are tabular in nature: you can think of one visually as a spreadsheet, with fields as the column headers and each row being a single record of data. The term "relational" comes from the ability to link tables through a foreign key. An example might be a list of &lt;code&gt;developer&lt;/code&gt;s and their favourite &lt;code&gt;food&lt;/code&gt;s: the link would be a column in the &lt;code&gt;developers&lt;/code&gt; table called &lt;code&gt;food_id&lt;/code&gt;, which references the column called &lt;code&gt;id&lt;/code&gt; in the &lt;code&gt;food&lt;/code&gt; table.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;developer&lt;/code&gt;(s) table&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;id&lt;/th&gt;
&lt;th&gt;name&lt;/th&gt;
&lt;th&gt;food_id&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;alice&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;bob&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;carol&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;code&gt;food&lt;/code&gt;(s) table&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;id&lt;/th&gt;
&lt;th&gt;name&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;banana&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;nuts&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you were to delete nuts (id: &lt;code&gt;2&lt;/code&gt;) from the &lt;code&gt;food&lt;/code&gt; table you would trigger an error warning there were dependencies in the &lt;code&gt;developer&lt;/code&gt; table.&lt;/p&gt;

&lt;p&gt;The key takeaway from this example is to remember Relational Databases store records tabularly in rows.&lt;/p&gt;

&lt;p&gt;NoSQL database engines are a bit of a catch-all: in essence, if you don't store your data tabularly, you're probably a NoSQL database engine. Amazon DynamoDB is a key/value pair and document store. A key/value store lets you store data like a hash (associative array/dictionary): you provide a &lt;code&gt;key&lt;/code&gt; and the value is returned. You may have used one without knowing, as they're often referred to as cache servers, e.g. Redis. Document stores allow you to store data in a structured way; common formats are XML and JSON. These are the engines most people associate with NoSQL, e.g. CouchDB and MongoDB.&lt;/p&gt;
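
&lt;p&gt;To make the two models concrete, here's the developer/food example sketched in plain Python: the first half mimics the foreign-key join and delete check, the last lines mimic a key/value lookup (illustrative only; this is not how RDS or DynamoDB are actually queried):&lt;/p&gt;

```python
# "Relational" tables: foods keyed by id, developers linked via food_id.
foods = {1: "banana", 2: "nuts"}
developers = [
    {"id": 1, "name": "alice", "food_id": 1},
    {"id": 2, "name": "bob",   "food_id": 1},
    {"id": 3, "name": "carol", "food_id": 2},
]

def favourite_food(dev_name):
    """Join developers to foods through the food_id foreign key."""
    for dev in developers:
        if dev["name"] == dev_name:
            return foods[dev["food_id"]]
    return None

def can_delete_food(food_id):
    """A relational engine refuses the delete while rows still reference it."""
    return not any(dev["food_id"] == food_id for dev in developers)

print(favourite_food("carol"))  # nuts
print(can_delete_food(2))       # False: carol still references nuts

# Key/value store: a single lookup, no joins -- the DynamoDB-ish view.
kv = {"developer#3": {"name": "carol", "favourite_food": "nuts"}}
print(kv["developer#3"]["favourite_food"])  # nuts
```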

&lt;h2&gt;
  
  
  When?
&lt;/h2&gt;

&lt;p&gt;Amazon Aurora provides a compatible engine that is 3x faster than PostgreSQL and 5x faster than MySQL. To judge its cost-effectiveness, compare it against the other RDS engines run as multi-AZ deployments on memory-optimised instances.&lt;/p&gt;

&lt;p&gt;Things that set it apart from the other RDS offerings in terms of this domain are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Aurora can fail over to 1 of (up to) 15 read replicas with low impact to the primary instance.

&lt;ul&gt;
&lt;li&gt;You can use MySQL replicas instead of Aurora native, but you're limited to 5 replicas and there's a high impact to the primary instance.&lt;/li&gt;
&lt;li&gt;The order which replicas are promoted to primary can be customised.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Data is stored in 10GB chunks with 6 copies replicated across three availability zones.

&lt;ul&gt;
&lt;li&gt;Aurora will continue to handle:&lt;/li&gt;
&lt;li&gt;write capability with the loss of 2 copies of data&lt;/li&gt;
&lt;li&gt;read capability with the loss of 3 copies of data&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;The data blocks and disks are scanned for errors and repaired automatically&lt;/li&gt;
&lt;/ul&gt;
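
&lt;p&gt;Those durability figures fall out of Aurora's quorum model over the six copies: writes need 4 of 6, reads need 3 of 6. A sketch of the arithmetic (the helper functions are mine, for illustration):&lt;/p&gt;

```python
TOTAL_COPIES = 6  # 2 copies in each of 3 Availability Zones

def can_write(copies_lost):
    # Writes need a 4-of-6 quorum, so up to 2 copies may be lost.
    return TOTAL_COPIES - copies_lost >= 4

def can_read(copies_lost):
    # Reads need a 3-of-6 quorum, so up to 3 copies may be lost.
    return TOTAL_COPIES - copies_lost >= 3

print(can_write(2), can_read(2))  # True True
print(can_write(3), can_read(3))  # False True
```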

&lt;p&gt;Amazon RDS can be part of your disaster recovery strategy by keeping replicas of your production Oracle or SQL Server database servers.&lt;/p&gt;

&lt;p&gt;Amazon DynamoDB requires a lot more consideration to make use of the features that make it relevant to this domain. Choosing the wrong partition key scheme can leave your database starved of I/O.&lt;/p&gt;

&lt;p&gt;Things to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You pay for the I/O capacity you provision (rather than instance size). Units are measured in kilobytes and vary depending on the type of I/O operation:

&lt;ul&gt;
&lt;li&gt;Read Capacity Units (RCUs) are measured in 4KB blocks, so an 8KB block of data would consume 2 RCUs&lt;/li&gt;
&lt;li&gt;Write Capacity Units (WCUs) are measured in 1KB blocks, so a 5KB block of data would consume 5 WCUs&lt;/li&gt;
&lt;li&gt;Table data is stored in 10GB partitions, each able to handle 3K RCUs and 1K WCUs&lt;/li&gt;
&lt;li&gt;As a table expands into more 10GB partitions, the RCUs and WCUs are distributed across them, e.g. with 2 partitions each partition gets a max of 1.5K RCUs and 0.5K WCUs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Terminology

&lt;ul&gt;
&lt;li&gt;Table (highest-level unit in DynamoDB)&lt;/li&gt;
&lt;li&gt;Item (a record)&lt;/li&gt;
&lt;li&gt;Attributes (columns or fields of a record)&lt;/li&gt;
&lt;li&gt;Primary Keys can consist of&lt;/li&gt;
&lt;li&gt;just a Partition Key (which is often referred to as a Primary Index and is used to query the table)&lt;/li&gt;
&lt;li&gt;the partition key plus a sort key (known as a composite primary key)&lt;/li&gt;
&lt;li&gt;Secondary Indexes (allows you to query on a different attribute)&lt;/li&gt;
&lt;li&gt;Local - requires the same partition key, but the sort key can be different&lt;/li&gt;
&lt;li&gt;Global - can have a different attribute for the partition and sort keys&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;For write-heavy use cases, it's recommended you suffix the partition key with a randomly generated number from a predetermined range, e.g. if the partition key is a composite attribute based on invoice number (&lt;code&gt;1234&lt;/code&gt;), you would append the randomly generated number (&lt;code&gt;1&lt;/code&gt;), giving the composite key &lt;code&gt;1234-1&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
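
&lt;p&gt;The capacity-unit arithmetic above is worth internalising. A quick sketch (assumes strongly consistent reads and rounds item sizes up to the unit size; the function names are mine):&lt;/p&gt;

```python
import math

def rcus(item_size_kb):
    """Read Capacity Units: one strongly consistent read of up to 4KB."""
    return math.ceil(item_size_kb / 4)

def wcus(item_size_kb):
    """Write Capacity Units: one write of up to 1KB."""
    return math.ceil(item_size_kb / 1)

def per_partition(total_units, partitions):
    """Provisioned throughput is divided across a table's partitions."""
    return total_units / partitions

print(rcus(8))                 # 2 RCUs for an 8KB read
print(wcus(5))                 # 5 WCUs for a 5KB write
print(per_partition(3000, 2))  # 1500.0 RCUs per partition
```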

&lt;p&gt;Further reading:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://serverlessarchitecture.com/2016/03/22/aws-dynamodb-cheat-sheet/"&gt;AWS DynamoDB Cheat Sheet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://read.korzh.cloud/aws-dynamodb-partitions-and-key-design-56688bee8502"&gt;AWS DynamoDB Partitions and Key Design&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/"&gt;AWS DynamoDB: Choosing the right partition key&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.dynamodbguide.com/key-concepts/"&gt;Key concepts of DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How?
&lt;/h2&gt;

&lt;p&gt;Amazon Aurora requires you to choose your engine compatibility (i.e. MySQL or PostgreSQL), after which you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create DB Cluster Parameter Group (parameters to be applied to all instance in a DB cluster)&lt;/li&gt;
&lt;li&gt;Create DB Cluster (the group of instances associated with a DB cluster)&lt;/li&gt;
&lt;li&gt;Create the Database Instance (adds a new instance to a DB cluster)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AWS CLI features a CLI &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-services-dynamodb.html"&gt;walkthrough&lt;/a&gt; of how to provision a table, store an item, and perform a query. It should be noted that, as a rule of thumb, you would probably use the AWS SDK to store and retrieve data from DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  API and CLI features and verbs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Amazon RDS
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;DB (Cluster) Parameter Group&lt;/li&gt;
&lt;li&gt;DB Cluster&lt;/li&gt;
&lt;li&gt;DB Instance&lt;/li&gt;
&lt;li&gt;DB (Cluster) Snapshot&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Verbs (CRUD)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;create/copy&lt;/li&gt;
&lt;li&gt;describe&lt;/li&gt;
&lt;li&gt;modify&lt;/li&gt;
&lt;li&gt;delete&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Outliers
&lt;/h4&gt;

&lt;p&gt;Not my best work; I'll see if I can optimise this list.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add-option-to-option-group&lt;/li&gt;
&lt;li&gt;add-role-to-db-cluster&lt;/li&gt;
&lt;li&gt;add-role-to-db-instance&lt;/li&gt;
&lt;li&gt;add-source-identifier-to-subscription&lt;/li&gt;
&lt;li&gt;add-tags-to-resource&lt;/li&gt;
&lt;li&gt;apply-pending-maintenance-action&lt;/li&gt;
&lt;li&gt;authorize-db-security-group-ingress&lt;/li&gt;
&lt;li&gt;backtrack-db-cluster&lt;/li&gt;
&lt;li&gt;copy-option-group&lt;/li&gt;
&lt;li&gt;create-db-cluster-endpoint&lt;/li&gt;
&lt;li&gt;create-db-instance-read-replica&lt;/li&gt;
&lt;li&gt;create-db-security-group&lt;/li&gt;
&lt;li&gt;create-db-subnet-group&lt;/li&gt;
&lt;li&gt;create-event-subscription&lt;/li&gt;
&lt;li&gt;create-global-cluster&lt;/li&gt;
&lt;li&gt;create-option-group&lt;/li&gt;
&lt;li&gt;delete-db-cluster-endpoint&lt;/li&gt;
&lt;li&gt;delete-db-instance-automated-backup&lt;/li&gt;
&lt;li&gt;delete-db-parameter-group&lt;/li&gt;
&lt;li&gt;delete-db-security-group&lt;/li&gt;
&lt;li&gt;delete-db-snapshot&lt;/li&gt;
&lt;li&gt;delete-db-subnet-group&lt;/li&gt;
&lt;li&gt;delete-event-subscription&lt;/li&gt;
&lt;li&gt;delete-global-cluster&lt;/li&gt;
&lt;li&gt;delete-option-group&lt;/li&gt;
&lt;li&gt;describe-account-attributes&lt;/li&gt;
&lt;li&gt;describe-certificates&lt;/li&gt;
&lt;li&gt;describe-db-cluster-backtracks&lt;/li&gt;
&lt;li&gt;describe-db-cluster-endpoints&lt;/li&gt;
&lt;li&gt;describe-db-cluster-parameters&lt;/li&gt;
&lt;li&gt;describe-db-cluster-snapshot-attributes&lt;/li&gt;
&lt;li&gt;describe-db-engine-versions&lt;/li&gt;
&lt;li&gt;describe-db-instance-automated-backups&lt;/li&gt;
&lt;li&gt;describe-db-log-files&lt;/li&gt;
&lt;li&gt;describe-db-parameter-groups&lt;/li&gt;
&lt;li&gt;describe-db-parameters&lt;/li&gt;
&lt;li&gt;describe-db-security-groups&lt;/li&gt;
&lt;li&gt;describe-db-snapshot-attributes&lt;/li&gt;
&lt;li&gt;describe-db-subnet-groups&lt;/li&gt;
&lt;li&gt;describe-engine-default-cluster-parameters&lt;/li&gt;
&lt;li&gt;describe-engine-default-parameters&lt;/li&gt;
&lt;li&gt;describe-event-categories&lt;/li&gt;
&lt;li&gt;describe-event-subscriptions&lt;/li&gt;
&lt;li&gt;describe-events&lt;/li&gt;
&lt;li&gt;describe-global-clusters&lt;/li&gt;
&lt;li&gt;describe-option-group-options&lt;/li&gt;
&lt;li&gt;describe-option-groups&lt;/li&gt;
&lt;li&gt;describe-orderable-db-instance-options&lt;/li&gt;
&lt;li&gt;describe-pending-maintenance-actions&lt;/li&gt;
&lt;li&gt;describe-reserved-db-instances&lt;/li&gt;
&lt;li&gt;describe-reserved-db-instances-offerings&lt;/li&gt;
&lt;li&gt;describe-source-regions&lt;/li&gt;
&lt;li&gt;describe-valid-db-instance-modifications&lt;/li&gt;
&lt;li&gt;download-db-log-file-portion&lt;/li&gt;
&lt;li&gt;failover-db-cluster&lt;/li&gt;
&lt;li&gt;generate-db-auth-token&lt;/li&gt;
&lt;li&gt;list-tags-for-resource&lt;/li&gt;
&lt;li&gt;modify-current-db-cluster-capacity&lt;/li&gt;
&lt;li&gt;modify-db-cluster-endpoint&lt;/li&gt;
&lt;li&gt;modify-db-cluster-snapshot-attribute&lt;/li&gt;
&lt;li&gt;modify-db-snapshot-attribute&lt;/li&gt;
&lt;li&gt;modify-db-subnet-group&lt;/li&gt;
&lt;li&gt;modify-event-subscription&lt;/li&gt;
&lt;li&gt;modify-global-cluster&lt;/li&gt;
&lt;li&gt;promote-read-replica&lt;/li&gt;
&lt;li&gt;promote-read-replica-db-cluster&lt;/li&gt;
&lt;li&gt;purchase-reserved-db-instances-offering&lt;/li&gt;
&lt;li&gt;reboot-db-instance&lt;/li&gt;
&lt;li&gt;remove-from-global-cluster&lt;/li&gt;
&lt;li&gt;remove-option-from-option-group&lt;/li&gt;
&lt;li&gt;remove-role-from-db-cluster&lt;/li&gt;
&lt;li&gt;remove-role-from-db-instance&lt;/li&gt;
&lt;li&gt;remove-source-identifier-from-subscription&lt;/li&gt;
&lt;li&gt;remove-tags-from-resource&lt;/li&gt;
&lt;li&gt;reset-db-cluster-parameter-group&lt;/li&gt;
&lt;li&gt;reset-db-parameter-group&lt;/li&gt;
&lt;li&gt;restore-db-cluster-from-s3&lt;/li&gt;
&lt;li&gt;restore-db-cluster-from-snapshot&lt;/li&gt;
&lt;li&gt;restore-db-cluster-to-point-in-time&lt;/li&gt;
&lt;li&gt;restore-db-instance-from-db-snapshot&lt;/li&gt;
&lt;li&gt;restore-db-instance-from-s3&lt;/li&gt;
&lt;li&gt;restore-db-instance-to-point-in-time&lt;/li&gt;
&lt;li&gt;revoke-db-security-group-ingress&lt;/li&gt;
&lt;li&gt;start-activity-stream&lt;/li&gt;
&lt;li&gt;start-db-cluster&lt;/li&gt;
&lt;li&gt;start-db-instance&lt;/li&gt;
&lt;li&gt;stop-activity-stream&lt;/li&gt;
&lt;li&gt;stop-db-cluster&lt;/li&gt;
&lt;li&gt;stop-db-instance&lt;/li&gt;
&lt;li&gt;wait&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amazon DynamoDB
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Item&lt;/li&gt;
&lt;li&gt;Backup&lt;/li&gt;
&lt;li&gt;(Global) Table&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Verbs (CRUD)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;create (global table, table and backup)&lt;/li&gt;
&lt;li&gt;describe/list (global table, table and backup), get-item, batch-get-item&lt;/li&gt;
&lt;li&gt;update (global table, table and backup), put-item, batch-write-item&lt;/li&gt;
&lt;li&gt;delete&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Outliers
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;describe-continuous-backups&lt;/li&gt;
&lt;li&gt;describe-endpoints&lt;/li&gt;
&lt;li&gt;describe-global-table-settings&lt;/li&gt;
&lt;li&gt;describe-limits&lt;/li&gt;
&lt;li&gt;describe-time-to-live&lt;/li&gt;
&lt;li&gt;list-tags-of-resource&lt;/li&gt;
&lt;li&gt;query&lt;/li&gt;
&lt;li&gt;restore-table-from-backup&lt;/li&gt;
&lt;li&gt;restore-table-to-point-in-time&lt;/li&gt;
&lt;li&gt;scan&lt;/li&gt;
&lt;li&gt;tag-resource&lt;/li&gt;
&lt;li&gt;transact-get-items&lt;/li&gt;
&lt;li&gt;transact-write-items&lt;/li&gt;
&lt;li&gt;untag-resource&lt;/li&gt;
&lt;li&gt;update-continuous-backups&lt;/li&gt;
&lt;li&gt;update-global-table-settings&lt;/li&gt;
&lt;li&gt;update-time-to-live&lt;/li&gt;
&lt;li&gt;wait&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unsplash path (what terms I used to get to the cover image): database&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To go to the next part of the series, click on the grey dot below which is next to the current marker (the black dot).&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>rds</category>
      <category>dynamodb</category>
    </item>
    <item>
      <title>AWS DevOps Pro Certification Blog Post Series: Amazon Single Signon, CloudFront, Autoscaling and Route53</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Wed, 12 Jun 2019 13:37:00 +0000</pubDate>
      <link>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-amazon-single-signon-cloudfront-autoscaling-and-route53-68c</link>
      <guid>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-amazon-single-signon-cloudfront-autoscaling-and-route53-68c</guid>
      <description>&lt;p&gt;Photo by Todd Quackenbush on &lt;a href="https://unsplash.com/photos/IClZBVw5W5A"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveat emptor
&lt;/h2&gt;

&lt;p&gt;Using AWS costs money; some of these services may not be part of the AWS &lt;a href="https://aws.amazon.com/free/"&gt;Free Tier&lt;/a&gt;. You can keep costs down by tearing down anything you've created whilst learning, but it's still possible to run up a hefty bill, so pay attention to the instances you set up!&lt;/p&gt;

&lt;p&gt;I'm very lucky to be able to use my employer's AWS account. You should ask your place of work if a similar arrangement can be made as part of your study.&lt;/p&gt;

&lt;h2&gt;
  
  
  Velocius quam asparagi conquantur
&lt;/h2&gt;

&lt;p&gt;The format of the blog posts is liable to change as I try to refine my mental model of each domain, so be sure to revisit the blog posts on a regular basis.&lt;/p&gt;

&lt;h2&gt;
  
  
  What?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon Single Sign-On&lt;/strong&gt; is a managed single sign-on (SSO) service that you can use to simplify access to applications and 3rd party services. If SSO is not a term you're familiar with: if you've ever signed up for a service using your Google, Facebook or Twitter account (instead of an email address and password specific to that site), then you've used SSO.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon CloudFront&lt;/strong&gt; is a managed Content Delivery Network (CDN) service; you may have heard of CloudFront's competitors such as CloudFlare, Akamai and Fastly. CDNs speed up your website's performance by strategically placing mirrors of popular content (static files, APIs or streaming audio/video) at locations nearer to the users accessing your website. These mirrors are referred to as Edge locations: content popular in the region (not specific to a client) is cached there. In more densely populated areas there are also Regional Caches, which hold content for longer than Edge locations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Route53&lt;/strong&gt; is a managed &lt;a href="https://en.wikipedia.org/wiki/Dns"&gt;Domain Name Service&lt;/a&gt; (DNS). At its most basic level, DNS allows you to connect to servers using friendly domain names, i.e. dev.to, rather than IP addresses like 151.101.123.4, 151.101.12.34 or 151.101.1.234. It's designed to work with other Amazon Web Services: you can point DNS records directly at Elastic Load Balancers, S3 buckets and EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Autoscaling&lt;/strong&gt;, as we saw in the Domain intro, comes in two varieties: &lt;a href="https://aws.amazon.com/autoscaling/"&gt;AWS Auto Scaling&lt;/a&gt; and &lt;a href="https://aws.amazon.com/ec2/autoscaling/"&gt;Amazon EC2 Auto Scaling&lt;/a&gt;. The general rule of thumb: if you just want to autoscale EC2 instances, use the EC2 Auto Scaling service; AWS Auto Scaling is the better fit when you want to scale multiple resource types (not just EC2), e.g. DynamoDB tables and indexes, or ECS tasks.&lt;/p&gt;

&lt;p&gt;An important thing to note is that to use AWS Auto Scaling, your resources must be created using CloudFormation or Elastic Beanstalk.&lt;/p&gt;

&lt;p&gt;For the rest of this post, we'll only be referring to Amazon EC2 Auto Scaling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon Single Sign-On&lt;/strong&gt;, or generically any single sign-on (SSO) service, beats the administrative overhead of keeping separate logins for each application or service. You reduce the impact on day-to-day operations should disaster strike (think of the number of helpdesk tickets raised for DR systems that rarely get used). You'll also earn the undying love of your users: fewer logins to track means they're less likely to keep a scrap of paper lying around their desk with the various logins and passwords written down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon CloudFront&lt;/strong&gt; distributes your content geographically rather than storing it in a single location or S3 bucket. Careful design (falling back gracefully should the backend be unavailable) helps keep your website highly available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Route 53&lt;/strong&gt; provides the following &lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html"&gt;routing policies&lt;/a&gt; whose attributes are suitable for this domain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Failover routing - used for active-passive failover, a good use case for automated disaster recovery.&lt;/li&gt;
&lt;li&gt;Geolocation routing - used to route traffic based on the location of users, a good use case for high availability&lt;/li&gt;
&lt;li&gt;Geoproximity routing - similar to Geolocation routing, but also allows you to route to a secondary location. This also makes a good use case for fault tolerance.&lt;/li&gt;
&lt;li&gt;Latency-based routing - used to route users to the resources with the best (least) latency&lt;/li&gt;
&lt;li&gt;Multivalue answer routing - this is similar to round robin, in that you can randomly pick a route from up to eight healthy resources&lt;/li&gt;
&lt;li&gt;Weighted routing - routes traffic to different resources using a percentage split (useful for A/B testing or load balancing). Weights are between 0 and 255.&lt;/li&gt;
&lt;/ul&gt;
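&lt;p&gt;To make the weighted split concrete, here's a minimal Python sketch of how traffic divides in proportion to weights in the 0-255 range (hypothetical hostnames and weights; this models the behaviour, not the Route 53 API):&lt;/p&gt;

```python
import random

# Hypothetical endpoints with weights in Route 53's 0-255 range.
endpoints = {"blue.example.com": 192, "green.example.com": 64}

def pick_endpoint(rng=random):
    """Pick an endpoint with probability proportional to its weight."""
    names = list(endpoints)
    weights = [endpoints[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Over many requests, roughly 75% land on blue (192 of 256 total weight).
counts = {name: 0 for name in endpoints}
for _ in range(10000):
    counts[pick_endpoint()] += 1
```

&lt;p&gt;Setting a record's weight to 0 takes it out of rotation entirely, which is handy for draining traffic away from a resource before maintenance.&lt;/p&gt;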

&lt;p&gt;&lt;strong&gt;Amazon EC2 Auto Scaling&lt;/strong&gt; allows you to launch or terminate EC2 instances by defining conditions for scaling out (increasing) or scaling in (decreasing) the number of instances. The condition might be metrics like CPU or memory utilisation, or health checks. Combined with an elastic load balancer, this provides a system that can be highly available and fault tolerant.&lt;/p&gt;

&lt;p&gt;Terminology to be aware of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html"&gt;Auto Scaling Group&lt;/a&gt; (ASG) - this is a group of EC2 instances associated with one or more scale in/out conditions.

&lt;ul&gt;
&lt;li&gt;Minimum size - the floor the ASG never goes below&lt;/li&gt;
&lt;li&gt;Maximum size - the ceiling the ASG never goes above&lt;/li&gt;
&lt;li&gt;Desired capacity - the size the ASG will always try to maintain (unless a condition requires further scaling in or out)&lt;/li&gt;
&lt;li&gt;Scaling capacity sits between the desired capacity and the maximum size&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
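&lt;p&gt;The relationship between the three sizes can be sketched as a simple clamp (a toy model in Python, not the AWS API):&lt;/p&gt;

```python
def clamp_capacity(requested, minimum, maximum):
    """Keep an ASG's instance count inside its configured bounds."""
    return max(minimum, min(requested, maximum))

# A scale-out event asking for 12 instances in a 2-10 group is capped at 10,
# and a scale-in event asking for 1 is floored at 2.
print(clamp_capacity(12, 2, 10))  # 10
print(clamp_capacity(1, 2, 10))   # 2
```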

&lt;h2&gt;
  
  
  When?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon Single Sign-On&lt;/strong&gt; should ideally be implemented as soon as possible, but it can still be retrofitted into an existing environment. Doing it sooner rather than later could spare you re-organising the team responsible for user and access management if headcount reduces because of the efficiency savings that come with implementing SSO.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon CloudFront&lt;/strong&gt; should be implemented once you have some metrics (via AWS X-Ray or something similar) indicating you have customers in regions that are experiencing poor response times because of their distance from the region where your load balancers, EC2 instances or S3 buckets are hosted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Route 53&lt;/strong&gt;'s routing policies provide a lot of desirable features that are relevant for this domain. Combined with the fact that Amazon also offers an SLA of 100% availability, and the ability to create and modify DNS records programmatically, this makes the use of Route 53 a bit of a no-brainer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon EC2 Auto Scaling&lt;/strong&gt; has the concept of &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html"&gt;lifecycle hooks&lt;/a&gt;. These allow you to perform custom actions by pausing the instances as the ASG launches (&lt;code&gt;EC2_INSTANCE_LAUNCHING&lt;/code&gt;) or terminates  (&lt;code&gt;EC2_INSTANCE_TERMINATING&lt;/code&gt;) them. Whilst the instance is paused, it is in a wait state until you complete the action by issuing the &lt;code&gt;complete-lifecycle-action&lt;/code&gt; action in the CLI/API or the timeout period ends (one hour by default). You can extend the timeout period by either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;setting a longer heartbeat timeout period with the &lt;code&gt;put-lifecycle-hook&lt;/code&gt; action (CLI/API) when you create the lifecycle hook&lt;/li&gt;
&lt;li&gt;restarting the timeout period with the &lt;code&gt;record-lifecycle-action-heartbeat&lt;/code&gt; action (CLI/API)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The maximum time you can place an instance in a wait state is 48 hours or 100 times the heartbeat timeout (whichever is smaller).&lt;/p&gt;
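&lt;p&gt;That ceiling works out as follows (a quick Python check of the figures above):&lt;/p&gt;

```python
def max_wait_seconds(heartbeat_timeout):
    """Maximum wait state duration: 48 hours or 100 heartbeat timeouts,
    whichever is smaller."""
    return min(48 * 3600, 100 * heartbeat_timeout)

# With the default one-hour heartbeat, the 48-hour cap is the smaller bound:
print(max_wait_seconds(3600))  # 172800 (48 hours)
# With a short 60-second heartbeat, 100 heartbeats wins instead:
print(max_wait_seconds(60))    # 6000
```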

&lt;h2&gt;
  
  
  How?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon Single Sign-On&lt;/strong&gt; requires an AWS Organization to exist; you can then enable single sign-on via the AWS Console. The specifics for setting up the service with AWS Accounts or Cloud Applications (3rd party services) can be found in the guide. There is an option to link to your existing Microsoft Active Directory; if you don't need this, the service will use its own directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon CloudFront&lt;/strong&gt;: to set it up, you define a distribution that determines the content origins (S3 bucket or HTTP server), access, security (TLS/SSL/HTTPS), session/object tracking, geo restrictions and logging. Provisioning CloudFront can take a while as the content is distributed to edge locations.&lt;/p&gt;

&lt;p&gt;I've found the following article in the AWS blog very helpful in terms of an application that I was already familiar with, but also knew the difficulty in optimising for response time: &lt;a href="https://aws.amazon.com/blogs/networking-and-content-delivery/how-to-accelerate-your-wordpress-site-with-amazon-cloudfront/"&gt;How to accelerate your WordPress site with Amazon CloudFront&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon EC2 Auto Scaling&lt;/strong&gt; terminology:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto Scaling group - a collection of EC2 instances that will be scaled in or out depending on the conditions you define

&lt;ul&gt;
&lt;li&gt;Minimum size&lt;/li&gt;
&lt;li&gt;Desired capacity&lt;/li&gt;
&lt;li&gt;Maximum capacity&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Autoscaling lifecycle

&lt;ul&gt;
&lt;li&gt;starts when an ASG launches an instance&lt;/li&gt;
&lt;li&gt;ends when you terminate an instance, or when the ASG takes an instance out of service and terminates it&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html"&gt;Cooldowns&lt;/a&gt; prevents the ASG from launching or terminating more instances before the previous scaling activity event has taken effect. The default period is 300 seconds (5 minutes)&lt;/li&gt;
&lt;/ul&gt;
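&lt;p&gt;The cooldown rule can be sketched as a simple gate (an illustration of the behaviour described above, not the actual ASG implementation):&lt;/p&gt;

```python
DEFAULT_COOLDOWN = 300  # seconds - the ASG default (5 minutes)

def scaling_allowed(now, last_activity, cooldown=DEFAULT_COOLDOWN):
    """Return True once the cooldown since the last scaling activity
    has fully elapsed."""
    remaining = max(0, cooldown - (now - last_activity))
    return remaining == 0

print(scaling_allowed(now=100, last_activity=0))  # False, 200s still to go
print(scaling_allowed(now=300, last_activity=0))  # True, cooldown elapsed
```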

&lt;h2&gt;
  
  
  API and CLI features and verbs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Amazon Single Sign-On
&lt;/h3&gt;

&lt;p&gt;This service has no API/CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon CloudFront
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;(Streaming) Distributions (this is probably the most important one to be aware of) with or without tags&lt;/li&gt;
&lt;li&gt;Field Level Encryption (Config/Profile)&lt;/li&gt;
&lt;li&gt;Invalidation (cache)&lt;/li&gt;
&lt;li&gt;(CloudFront) Origin Access Identity&lt;/li&gt;
&lt;li&gt;Public key&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Verbs (CRUD)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;create (distribution/streaming-with-tags)&lt;/li&gt;
&lt;li&gt;get/list&lt;/li&gt;
&lt;li&gt;update (except invalidation)&lt;/li&gt;
&lt;li&gt;delete (except invalidation)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Outliers
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;get-field-level-encryption-profile-config&lt;/li&gt;
&lt;li&gt;get-distribution-config&lt;/li&gt;
&lt;li&gt;get-public-key-config&lt;/li&gt;
&lt;li&gt;get-cloud-front-origin-access-identity-config&lt;/li&gt;
&lt;li&gt;get-streaming-distribution-config&lt;/li&gt;
&lt;li&gt;list-distributions-by-web-acl-id&lt;/li&gt;
&lt;li&gt;list-tags-for-resource&lt;/li&gt;
&lt;li&gt;sign&lt;/li&gt;
&lt;li&gt;tag-resource&lt;/li&gt;
&lt;li&gt;untag-resource&lt;/li&gt;
&lt;li&gt;wait&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amazon Route 53
&lt;/h3&gt;

&lt;p&gt;I've opted for the main API/CLI for &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/route53/index.html"&gt;Route 53&lt;/a&gt; instead of &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/servicediscovery/index.html"&gt;Service Discovery&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/route53domains/index.html"&gt;Domain Registration&lt;/a&gt;, as I've been using it more on a day-to-day basis.&lt;/p&gt;

&lt;h4&gt;
  
  
  Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Health Check&lt;/li&gt;
&lt;li&gt;Hosted Zone&lt;/li&gt;
&lt;li&gt;Reusable Delegation Set&lt;/li&gt;
&lt;li&gt;Traffic Policy (Instance/Version)&lt;/li&gt;
&lt;li&gt;Create Query Logging Config&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Verbs (CRUD)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;create&lt;/li&gt;
&lt;li&gt;get/list&lt;/li&gt;
&lt;li&gt;update&lt;/li&gt;
&lt;li&gt;delete&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Outliers
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;CreateVPCAssociationAuthorization&lt;/li&gt;
&lt;li&gt;AssociateVPCWithHostedZone&lt;/li&gt;
&lt;li&gt;DeleteVPCAssociationAuthorization&lt;/li&gt;
&lt;li&gt;ChangeResourceRecordSets&lt;/li&gt;
&lt;li&gt;ChangeTagsForResource&lt;/li&gt;
&lt;li&gt;DisassociateVPCFromHostedZone&lt;/li&gt;
&lt;li&gt;GetAccountLimit&lt;/li&gt;
&lt;li&gt;GetChange&lt;/li&gt;
&lt;li&gt;GetCheckerIpRanges&lt;/li&gt;
&lt;li&gt;GetGeoLocation&lt;/li&gt;
&lt;li&gt;ListGeoLocations&lt;/li&gt;
&lt;li&gt;TestDNSAnswer&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  EC2 Auto Scaling
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Features
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/APIReference/index.html"&gt;API&lt;/a&gt; has a lot of features, but the API actions I've focussed on have been around the Lifecycle Hooks.&lt;/p&gt;

&lt;h4&gt;
  
  
  Verbs (CRUD)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;describe (types)&lt;/li&gt;
&lt;li&gt;put&lt;/li&gt;
&lt;li&gt;delete&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Outliers
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;CompleteLifecycleAction&lt;/li&gt;
&lt;li&gt;RecordLifecycleActionHeartbeat&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unsplash path (what terms I used to get to the cover image): random, miscellany, junkyard, collection&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To go to the next part of the series, click on the grey dot below which is next to the current marker (the black dot).&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>route53</category>
      <category>sso</category>
      <category>cloudfront</category>
    </item>
    <item>
      <title>AWS DevOps Pro Certification Blog Post Series: High Availability, Fault Tolerance and Disaster Recovery</title>
      <dc:creator>Mark Sta Ana</dc:creator>
      <pubDate>Sat, 08 Jun 2019 10:32:02 +0000</pubDate>
      <link>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-high-availability-fault-tolerance-and-disaster-recovery-2ejj</link>
      <guid>https://forem.com/booyaa/aws-devops-pro-certification-blog-post-series-high-availability-fault-tolerance-and-disaster-recovery-2ejj</guid>
      <description>&lt;p&gt;Photo by Emiel Molenaar on &lt;a href="https://unsplash.com/photos/JOrUKpuMOeU"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What does the exam guide say?
&lt;/h2&gt;

&lt;p&gt;To pass this domain, you'll need to know the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Determine appropriate use of multi-AZ versus multi-region architectures&lt;/li&gt;
&lt;li&gt;Determine how to implement high availability, scalability, and fault tolerance&lt;/li&gt;
&lt;li&gt;Determine the right services based on business needs (e.g., RTO/RPO, cost)&lt;/li&gt;
&lt;li&gt;Determine how to design and automate disaster recovery strategies&lt;/li&gt;
&lt;li&gt;Evaluate a deployment for points of failure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This domain is &lt;strong&gt;16%&lt;/strong&gt; of the overall mark for the exam.&lt;/p&gt;

&lt;h2&gt;
  
  
  What whitepapers are relevant?
&lt;/h2&gt;

&lt;p&gt;According to the &lt;a href="https://aws.amazon.com/whitepapers"&gt;AWS Whitepapers&lt;/a&gt; page we should look at the following documents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://d1.awsstatic.com/whitepapers/Storage/Backup_and_Recovery_Approaches_Using_AWS.pdf"&gt;Backup and Recovery Approaches Using AWS (June 2016)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://d1.awsstatic.com/whitepapers/aws-building-fault-tolerant-applications.pdf"&gt;Building Fault-Tolerant Applications on AWS (October 2011)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What services and products are covered in this domain?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/single-sign-on/"&gt;AWS Single Sign-On&lt;/a&gt; is Amazon's managed SSO service allow your users to sign in to AWS and other connected services using your existing Microsoft Active Directory (AD).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/cloudfront/"&gt;Amazon CloudFront&lt;/a&gt; is a managed Content Delivery Network (CDN) service.&lt;/li&gt;
&lt;li&gt;Autoscaling resources - Amazon has two offerings: &lt;a href="https://aws.amazon.com/autoscaling/"&gt;AWS Auto Scaling&lt;/a&gt; and &lt;a href="https://aws.amazon.com/ec2/autoscaling/"&gt;Amazon EC2 Auto Scaling&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/route53/"&gt;Amzon Route 53&lt;/a&gt; is a managed Domain Name Service (DNS).&lt;/li&gt;
&lt;li&gt;Databases

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/rds/"&gt;Amazon RDS&lt;/a&gt; is a managed relational database service with a large choice of engines: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database and SQL Server.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/rds/aurora/"&gt;Amazon Aurora&lt;/a&gt; is part of the RDS offering but is unique in that it provides compatibility with MySQL and PostgreSQL engines whilst outperforming them considerably (5x for MySQL and 3x for PostgreSQL).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/dynamodb/"&gt;Amazon DynamoDB&lt;/a&gt; is a managed NoSQL (non-relational) database service that can be used for storing key-value pairs or document based records.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What about other types of documentation?
&lt;/h2&gt;

&lt;p&gt;If you have the time, by all means, read the User Guides, but they are usually a couple of hundred pages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Single-Sign On&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/index.html"&gt;Amazon CloudFront&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/autoscaling/plans/userguide/"&gt;Amazon Autoscaling&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/index.html"&gt;Amazon EC2 Autoscaling&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/index.html"&gt;Amazon Route53&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Databases

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/index.html"&gt;Amazon RDS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/index.html"&gt;Amazon Aurora&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/"&gt;Amazon DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Alternatively, get familiar with the services using the FAQs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/single-sign-on/faqs/"&gt;Amazon Single-Sign On&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudfront/faqs/?nc=sn&amp;amp;loc=6"&gt;Amazon CloudFront&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/autoscaling/faqs/"&gt;Amazon Autoscaling&lt;/a&gt; and &lt;a href="https://aws.amazon.com/ec2/autoscaling/faqs/"&gt;Amazon EC2 Autoscaling&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/route53/faqs/"&gt;Amazon Route53&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Databases

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/rds/faqs/"&gt;Amazon RDS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/rds/aurora/faqs/"&gt;Amazon Aurora&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/dynamodb/faqs/"&gt;Amazon DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You're also expected to know the APIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cloudfront/latest/APIReference/Welcome.html"&gt;Amazon CloudFront&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/autoscaling/plans/APIReference/"&gt;Amazon Autoscaling&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/APIReference/index.html"&gt;Amazon EC2 Autoscaling&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/Route53/latest/APIReference/index.html"&gt;Amazon Route53&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Databases

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/index.html"&gt;Amazon RDS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Amazon Aurora uses the same API as &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/index.html"&gt;RDS&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/"&gt;Amazon DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before you panic, you'll start to spot a pattern with the API verbs.&lt;/p&gt;

&lt;p&gt;And the CLI commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/cloudfront/index.html"&gt;Amazon CloudFront&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/autoscaling-plans/index.html"&gt;Amazon Autoscaling&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/autoscaling/index.html"&gt;Amazon EC2 Autoscaling&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Amazon Route53 has three subcommands: &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/route53/index.html"&gt;DNS and Healthchecking&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/servicediscovery/index.html"&gt;Service Discovery&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/route53domains/index.html"&gt;Domain Registration&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Databases

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/rds/index.html"&gt;Amazon RDS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Amazon Aurora uses the same CLI as &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/rds/index.html"&gt;RDS&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Amazon DynamoDB has two sub commands: &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/dynamodb/index.html"&gt;dynamodb&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/dynamodbstreams/index.html"&gt;dynamodbstreams&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As with the API, there are patterns to the commands.&lt;/p&gt;

&lt;h2&gt;
  
  
  High Availability, Fault Tolerance and Disaster Recovery, oh my!
&lt;/h2&gt;

&lt;p&gt;Let's get the basics out of the way and discuss the core concepts around this domain.&lt;/p&gt;

&lt;p&gt;I'm going to use an excellent example provided by Patrick Benson in his blog post: &lt;a href="http://www.pbenson.net/2014/02/the-difference-between-fault-tolerance-high-availability-disaster-recovery/"&gt;The Difference Between Fault Tolerance, High Availability, &amp;amp; Disaster Recovery&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An airplane has multiple engines and can operate with the loss of one or more of them. The design of the airplane makes it resilient to falling out of the sky because of engine failure. This design is &lt;strong&gt;fault tolerant&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In terms of infrastructure, this is likely to be a managed service like RDS, where under the hood the database engine has multiple disks and CPUs to cope with catastrophic failure.&lt;/p&gt;

&lt;p&gt;A spare tire in a car, on the other hand, isn't fault tolerant: you have to stop and change the tire. But having the spare in the first place still makes the car &lt;strong&gt;highly available&lt;/strong&gt;. In terms of infrastructure, this is any type of technology like an Auto Scaling group.&lt;/p&gt;

&lt;p&gt;It's very common for a solution to be both fault tolerant (resilient) and highly available (scalable).&lt;/p&gt;

&lt;p&gt;Finally, ejector seats in fighter aircraft are a &lt;strong&gt;disaster recovery&lt;/strong&gt; (DR) measure. The goal is to preserve the pilot, or in our case the service, after all other measures (fault tolerance and HA) have failed.&lt;/p&gt;

&lt;p&gt;Often, in terms of infrastructure, this might be standby infrastructure or a database replica in a different AWS region, with Route 53 pointing to the standby. Whilst it's still common for DR strategies to be manual, for this domain we'll be expected to provide an automated solution.&lt;/p&gt;
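&lt;p&gt;The failover half of that setup can be sketched as a toy resolver (hypothetical region hostnames; the real thing is Route 53 failover routing driven by health checks):&lt;/p&gt;

```python
def resolve(primary, secondary, primary_healthy):
    """Serve the primary endpoint while its health check passes,
    otherwise fail over to the standby region."""
    return primary if primary_healthy else secondary

# Hypothetical hostnames for the active and standby regions.
print(resolve("eu-west-1.example.com", "us-east-1.example.com", True))
print(resolve("eu-west-1.example.com", "us-east-1.example.com", False))
```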

&lt;p&gt;Unsplash path (what terms I used to get to the cover image): airplane&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To go to the next part of the series, click on the grey dot below which is next to the current marker (the black dot).&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>dr</category>
      <category>ha</category>
    </item>
  </channel>
</rss>
