<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Łukasz Budnik</title>
    <description>The latest articles on Forem by Łukasz Budnik (@lukaszbudnik).</description>
    <link>https://forem.com/lukaszbudnik</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F550282%2Fa9a31563-3b37-4570-97f7-7181a109fcaa.jpeg</url>
      <title>Forem: Łukasz Budnik</title>
      <link>https://forem.com/lukaszbudnik</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/lukaszbudnik"/>
    <language>en</language>
    <item>
      <title>AI-Driven Development: My Database Experiment</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Wed, 30 Apr 2025 23:15:01 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/ai-driven-development-my-database-experiment-14e4</link>
      <guid>https://forem.com/lukaszbudnik/ai-driven-development-my-database-experiment-14e4</guid>
      <description>&lt;p&gt;There's plenty of debate about AI's role in coding. As an engineer, I've been working with Large Language Models and GenAI since their early days. I decided to skip the debate and put AI to a practical test: build a DynamoDB-like database service from scratch with AI assistance. Because who doesn't love a good database challenge?&lt;/p&gt;

&lt;p&gt;The idea was to build a service with the following characteristics:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB-like API:&lt;/strong&gt; &lt;code&gt;PutItem&lt;/code&gt;, &lt;code&gt;UpdateItem&lt;/code&gt;, &lt;code&gt;DeleteItem&lt;/code&gt;, &lt;code&gt;GetItem&lt;/code&gt;, &lt;code&gt;Query&lt;/code&gt;, &lt;code&gt;TransactWriteItems&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RocksDB Storage:&lt;/strong&gt; Utilizing RocksDB for fast and reliable data storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gRPC Interface:&lt;/strong&gt; Providing a high-performance gRPC API for client interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transport Security:&lt;/strong&gt; Support for TLS encryption and mutual TLS (mTLS) authentication for secure client-server communication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient Data Serialization:&lt;/strong&gt; Leveraging gRPC's Protocol Buffers for efficient data serialization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containerization &amp;amp; Orchestration:&lt;/strong&gt; Multi-arch (&lt;code&gt;linux/amd64&lt;/code&gt; and &lt;code&gt;linux/arm64&lt;/code&gt;) Docker container support for easy deployment, with quick-start Kubernetes manifests provided.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability with OpenTelemetry:&lt;/strong&gt; Built-in OpenTelemetry integration for monitoring RocksDB metrics and operational insights.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Agile Development Simulation:&lt;/strong&gt; Instead of providing a complete design upfront, start small and iteratively add (generate) new features. (This was partly a necessity as I worked on it primarily on weekends).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;AI-Driven Development:&lt;/strong&gt; Let AI drive the development process as much as possible.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;The Results&lt;/h2&gt;

&lt;p&gt;The development approach was pragmatic and interactive. The workflow was as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Begin with interactive AI sessions to outline implementation strategies using Amazon Q chat.&lt;/li&gt;
&lt;li&gt; Review generated code and iterate until it met my standards.&lt;/li&gt;
&lt;li&gt; Use the generated code as a foundation.&lt;/li&gt;
&lt;li&gt; Use Amazon Q inline code generation for refinements, extensions, and new features.&lt;/li&gt;
&lt;li&gt; Review selected concepts using Google Gemini chat.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The numbers surprised even me: approximately 80% of the application code was AI-generated, with test coverage hitting an impressive 90% (if not more).&lt;/p&gt;

&lt;p&gt;Amazon Q demonstrated impressive knowledge across the chosen technology stack: build tools and testing frameworks (Gradle, JUnit5, Mockito), communication protocols including transport security (gRPC, Protocol Buffers, TLS 1.3), RocksDB database, and deployment technologies (Docker, Kubernetes).&lt;/p&gt;

&lt;p&gt;I want to call out three key things from this experiment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Development Workflow.&lt;/strong&gt; To state the obvious: the first generated code isn't always the best. By providing feedback and requesting improvements, the quality of the generated code improved significantly.&lt;br&gt;
For example, the default JUnit tests for the gRPC service weren't using mocks but instead used the underlying implementation. When I asked for mocks, I got JUnit tests with interface mocks. Once I had tests with mocks, the inline code generation followed the existing code and started generating code with mocks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Highlight.&lt;/strong&gt; The &lt;code&gt;TransactWriteItems&lt;/code&gt; implementation particularly showcased AI's capabilities, handling everything from the Protocol Buffer definitions and the gRPC and RocksDB implementations to the &lt;code&gt;@FunctionalInterface&lt;/code&gt; Java lambda for submitting RocksDB transactions.&lt;br&gt;
Initially, the inline code generation had trouble producing the correct Java builder chains for this complex operation (&lt;code&gt;TransactWriteItems&lt;/code&gt; is a composition of a list of &lt;code&gt;PutItem&lt;/code&gt;, &lt;code&gt;UpdateItem&lt;/code&gt;, and &lt;code&gt;DeleteItem&lt;/code&gt; operations). To get the correct Java builder chains, I used the Amazon Q chat interface and provided the proto definition as input. Once I had working code, I completed the remaining implementation using inline code generation. The unit test for &lt;code&gt;TransactWriteItems&lt;/code&gt; was almost entirely implemented by inline code generation, except for mocking the Java lambda expression for submitting the RocksDB transaction, which seemed to be a challenge for the inline tool. Again, I solved that with the chat feature, where I explained the problem and pasted the &lt;code&gt;@FunctionalInterface&lt;/code&gt; definition for additional context. The rest of the unit test was completed by inline code generation.&lt;br&gt;
As a result, about 95% of the &lt;code&gt;TransactWriteItems&lt;/code&gt; feature (the ProtoBuf definition, the gRPC service implementation, the RocksDB implementation, and all the unit tests) was AI-generated. Quite impressive, I have to say.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lowlight.&lt;/strong&gt; Not everything was perfect. There's room for improvement in handling cross-cutting concerns, like input validation.&lt;br&gt;
I think that if I had provided a complete design upfront, the generated validation code would have been elegantly built in rather than bolted on.&lt;br&gt;
The input validation code is where I spent most of my time and had to write the most code by hand.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
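
&lt;p&gt;The &lt;code&gt;@FunctionalInterface&lt;/code&gt; pattern mentioned in the highlight can be sketched as follows. This is a minimal, dependency-free illustration, not the project's actual code: the names &lt;code&gt;TransactionSubmitter&lt;/code&gt; and &lt;code&gt;transactWrite&lt;/code&gt; are my assumptions, and the real project submits RocksDB transactions rather than plain callbacks.&lt;/p&gt;

```java
// Sketch only: TransactionSubmitter and transactWrite are hypothetical names.
// The real project works with RocksDB transactions (and would declare
// RocksDBException); a plain Runnable keeps this sketch dependency-free.
@FunctionalInterface
interface TransactionSubmitter {
    void submit(Runnable work);
}

public class TransactDemo {
    static int applied = 0;

    // A TransactWriteItems-style operation delegates the commit step to the
    // injected submitter; a unit test can stand in a lambda "mock" here.
    static void transactWrite(TransactionSubmitter submitter) {
        submitter.submit(() -> applied++);
    }

    public static void main(String[] args) {
        // The lambda plays the role of the mocked submitter.
        transactWrite(work -> work.run());
        System.out.println(applied); // prints 1
    }
}
```

&lt;p&gt;Injecting the submitter this way is what makes the commit step mockable in tests, which is exactly where the inline tool needed help.&lt;/p&gt;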

&lt;h2&gt;Key takeaways&lt;/h2&gt;

&lt;p&gt;Here are my key takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; AI is great at implementing well-defined patterns. The more context, the better and more accurate the generated code.&lt;/li&gt;
&lt;li&gt; AI excels at generating boilerplate code.&lt;/li&gt;
&lt;li&gt; AI is extremely useful for writing unit tests. The inline code generator very strictly follows the Arrange-Act-Assert pattern and very accurately predicts what and how you want to test your code.&lt;/li&gt;
&lt;li&gt; Interactive AI sessions are great for starting from scratch or in situations where there's not (yet) enough context in the existing code.&lt;/li&gt;
&lt;li&gt; Interactive AI sessions produce better results than quick inline generations.&lt;/li&gt;
&lt;li&gt; Complex architectural decisions benefit from human oversight and experience.&lt;/li&gt;
&lt;li&gt; Some aspects (cross-cutting ones) still require significant human input.&lt;/li&gt;
&lt;/ol&gt;
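
&lt;p&gt;For readers unfamiliar with the pattern from point 3, this is the Arrange-Act-Assert shape the generated tests follow. It's a generic, dependency-free illustration, not code from the project:&lt;/p&gt;

```java
public class ArrangeActAssertDemo {
    // A trivial unit under test, standing in for real service code.
    static int add(int a, int b) { return a + b; }

    public static void main(String[] args) {
        // Arrange: set up inputs and the expected outcome.
        int left = 2, right = 3, expected = 5;

        // Act: invoke the unit under test.
        int actual = add(left, right);

        // Assert: verify the result.
        if (actual != expected) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
        System.out.println("ok"); // prints ok
    }
}
```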

&lt;p&gt;The experiment demonstrated that AI-assisted development has matured to the point where it can significantly accelerate software development.&lt;br&gt;
While it's not a complete replacement for human expertise, and there are some areas of the project I'm not happy with (which I left as they were generated, for other engineers to review and assess), it can handle much of the heavy lifting in software development.&lt;/p&gt;

&lt;p&gt;I think it's a great tool for teams looking to create minimum viable products to validate ideas quickly, as well as teams looking to build more complex production-grade systems.&lt;/p&gt;

&lt;h2&gt;Want to join the experiment?&lt;/h2&gt;

&lt;p&gt;This project is open source and welcomes contributions - with one condition: pull requests should primarily consist of AI-generated code :)&lt;/p&gt;

&lt;p&gt;For more, see: &lt;a href="https://github.com/lukaszbudnik/roxdb" rel="noopener noreferrer"&gt;https://github.com/lukaszbudnik/roxdb&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>dynamodb</category>
      <category>rocksdb</category>
    </item>
    <item>
      <title>Cloud native C</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Thu, 19 Aug 2021 18:37:53 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/cloud-native-c-48m</link>
      <guid>https://forem.com/lukaszbudnik/cloud-native-c-48m</guid>
<description>&lt;h2&gt;Low-level cloud SDKs&lt;/h2&gt;

&lt;p&gt;When you think about writing a cloud native app, you probably think about higher-level programming languages like Java, JavaScript, .NET, Python, or Ruby.&lt;/p&gt;

&lt;p&gt;But what if you have an old C app? And you are preparing to either rehost it or, even better, replatform it?&lt;/p&gt;

&lt;p&gt;Well, none of the main cloud providers offer C SDKs. AWS goes as low as C++. Which is nice. &lt;/p&gt;

&lt;p&gt;But! All main cloud providers actually do offer Go SDKs. And with Go and its cgo tool we are already at home.&lt;/p&gt;

&lt;h2&gt;c + go = cgo&lt;/h2&gt;

&lt;p&gt;I have prepared a really simple C application which calls Go functions to upload and download an object to/from an AWS S3 bucket.&lt;/p&gt;

&lt;p&gt;The code is uploaded to my GitHub repo:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/lukaszbudnik"&gt;
        lukaszbudnik
      &lt;/a&gt; / &lt;a href="https://github.com/lukaszbudnik/cloud-native-c"&gt;
        cloud-native-c
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      cloud-native-c shows how to use AWS Go SDK from C using cgo
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;The &lt;code&gt;Makefile&lt;/code&gt; is really simple, but I'm going to quickly review it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="nv"&gt;OS&lt;/span&gt;&lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nf"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;shell&lt;/span&gt; &lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;&lt;span class="nf"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;ifeq&lt;/span&gt; &lt;span class="nv"&gt;($(OS), Linux)&lt;/span&gt;
    &lt;span class="nv"&gt;build_cmd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; gcc &lt;span class="nt"&gt;-pthread&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; app app.c storage.a
&lt;span class="k"&gt;endif&lt;/span&gt;
&lt;span class="k"&gt;ifeq&lt;/span&gt; &lt;span class="nv"&gt;($(OS), Darwin)&lt;/span&gt;
    &lt;span class="nv"&gt;build_cmd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; gcc storage.a app.c &lt;span class="nt"&gt;-framework&lt;/span&gt; CoreFoundation &lt;span class="nt"&gt;-framework&lt;/span&gt; Security &lt;span class="nt"&gt;-o&lt;/span&gt; app
&lt;span class="k"&gt;endif&lt;/span&gt;

&lt;span class="nl"&gt;all&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;app&lt;/span&gt;

&lt;span class="nl"&gt;app&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;storage app.c&lt;/span&gt;
    &lt;span class="nv"&gt;$(build_cmd)&lt;/span&gt;

&lt;span class="nl"&gt;storage&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;storage.go&lt;/span&gt;
    go build &lt;span class="nt"&gt;-buildmode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;c-archive storage.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;storage&lt;/code&gt; rule builds &lt;code&gt;storage.go&lt;/code&gt; as a C archive library. It generates two artifacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;storage.h&lt;/code&gt; - which contains definitions needed by the C program to call Go functions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;storage.a&lt;/code&gt; - compiled C library&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Other than that, &lt;code&gt;storage.go&lt;/code&gt; is just a normal Go program (it has to have an empty &lt;code&gt;main&lt;/code&gt; function). It also uses &lt;code&gt;go.mod&lt;/code&gt; to manage dependencies (AWS SDK for Go v2 libraries). &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;app&lt;/code&gt; rule compiles and builds the &lt;code&gt;app.c&lt;/code&gt; application. As there are small differences between &lt;code&gt;gcc&lt;/code&gt; on Mac and Linux, I'm passing different parameters to it. For macOS it links the CoreFoundation and Security frameworks, and for Linux it links the pthread library. Both macOS and Linux binaries are built automatically on GitHub using actions running on &lt;code&gt;macos-10.15&lt;/code&gt; and &lt;code&gt;ubuntu-20.04&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Edit: Support for Windows 2019 with MinGW was also added. Windows binaries are also built automatically on GitHub using the &lt;code&gt;windows-2019&lt;/code&gt; runner. See the repo for more details.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Source code - C&lt;/h2&gt;

&lt;p&gt;What does the application look like, then?&lt;/p&gt;

&lt;p&gt;First let's take a look at the C code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="cp"&gt;#include &amp;lt;stdio.h&amp;gt;
#include &amp;lt;string.h&amp;gt;
#include "response.h"
#include "storage.h"
&lt;/span&gt;
&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;argc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;char&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;argc&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;fprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"This app requires exactly 2 parameters first is the path to local file and second is the S3 bucket.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;fprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Example:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;%s app.c my-bucket-name&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;WRONG_PARAMETERS_ERROR&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;PutObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;res&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;fprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"There was an error uploading the file.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;res&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kt"&gt;char&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;GetObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="nb"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;FILE_DOWNLOAD_ERROR&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Content of the uploaded file is:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;%s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="n"&gt;Free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;SUCCESS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing special. I included two header files. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;storage.h&lt;/code&gt; - generated by cgo, contains C definitions of the &lt;code&gt;PutObject()&lt;/code&gt;, &lt;code&gt;GetObject()&lt;/code&gt; and &lt;code&gt;Free()&lt;/code&gt; functions.  These functions take and return C data types.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;response.h&lt;/code&gt; - contains definitions of success and error codes. This header file is also included in the Go application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see, the above source code is super simple, plain old C.&lt;/p&gt;
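
&lt;p&gt;The article never shows &lt;code&gt;response.h&lt;/code&gt; itself, so here is a plausible sketch. The constant names are taken from the code in this post, but the numeric values are my assumption; check the repo for the actual file:&lt;/p&gt;

```c
/* Hypothetical sketch of response.h: the names appear in the article,
   the values are assumptions. Shared by app.c and storage.go. */
#ifndef RESPONSE_H
#define RESPONSE_H

#define SUCCESS                0
#define WRONG_PARAMETERS_ERROR 1
#define CONFIGURATION_ERROR    2
#define FILE_UPLOAD_ERROR      3
#define FILE_DOWNLOAD_ERROR    4

#endif /* RESPONSE_H */
```

&lt;p&gt;Keeping the codes in one plain C header is what lets both sides agree on return values: C checks them directly, and Go sees them through cgo as &lt;code&gt;C.SUCCESS&lt;/code&gt;, &lt;code&gt;C.CONFIGURATION_ERROR&lt;/code&gt;, and so on.&lt;/p&gt;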

&lt;h2&gt;Source code - Go&lt;/h2&gt;

&lt;p&gt;Now let's take a look at the Go code.&lt;/p&gt;

&lt;p&gt;The Go code is almost plain Go, apart from these four lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// #include &amp;lt;stdio.h&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;// #include &amp;lt;stdlib.h&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;// #include "response.h"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"C"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above directives are responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;exposing types/functions defined in stdio and stdlib under the &lt;code&gt;C&lt;/code&gt; pseudo-package. You can reference them as &lt;code&gt;C.char&lt;/code&gt; (the char data type) or &lt;code&gt;C.free()&lt;/code&gt; (the function to release allocated memory), etc.&lt;/li&gt;
&lt;li&gt;exposing types/functions defined in the custom response.h header. For example: &lt;code&gt;C.SUCCESS&lt;/code&gt; or &lt;code&gt;C.FILE_UPLOAD_ERROR&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;exposing special cgo helper functions. For example, to convert a &lt;code&gt;*C.char&lt;/code&gt; to a Go &lt;code&gt;string&lt;/code&gt; you can use the &lt;code&gt;C.GoString()&lt;/code&gt; function, and to create a C string (&lt;code&gt;*C.char&lt;/code&gt;) you can use &lt;code&gt;C.CString()&lt;/code&gt;, etc. Check the &lt;a href="https://pkg.go.dev/cmd/cgo"&gt;cgo&lt;/a&gt; documentation for more. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Two points to note.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It's much easier to convert types in Go than in C, so my &lt;code&gt;PutObject()&lt;/code&gt;, &lt;code&gt;GetObject()&lt;/code&gt;, and &lt;code&gt;Free()&lt;/code&gt; functions take and return C types. Thanks to this, I can call them from C as if they were native C functions.&lt;/li&gt;
&lt;li&gt;Importantly, all the memory allocated by &lt;code&gt;C.*&lt;/code&gt; functions needs to be freed using the &lt;code&gt;C.free()&lt;/code&gt; function. That is why I also exported a &lt;code&gt;Free()&lt;/code&gt; function, so that objects returned from Go to the C app can be sent back to Go to be freed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Go app looks like this. The &lt;code&gt;//export Function&lt;/code&gt; comments are required; otherwise cgo won't include the functions in the &lt;code&gt;storage.h&lt;/code&gt; header file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;//export PutObject&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;PutObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filenameC&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bucketC&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;char&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;filename&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GoString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filenameC&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GoString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bucketC&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LoadDefaultConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TODO&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stderr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"configuration error: %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CONFIGURATION_ERROR&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c"&gt;//export GetObject&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;GetObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;keyC&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bucketC&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;char&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;char&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GoString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;keyC&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GoString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bucketC&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c"&gt;//export Free&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;Free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ptr&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;char&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;C&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;unsafe&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Pointer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ptr&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Building and running&lt;/h2&gt;

&lt;p&gt;To build and run the application, just call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;make all
go build &lt;span class="nt"&gt;-buildmode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;c-archive storage.go
gcc storage.a app.c &lt;span class="nt"&gt;-framework&lt;/span&gt; CoreFoundation &lt;span class="nt"&gt;-framework&lt;/span&gt; Security &lt;span class="nt"&gt;-o&lt;/span&gt; app
&lt;span class="nv"&gt;$ &lt;/span&gt;./app app.c my-bucket-name
File uploaded correctly, version &lt;span class="nb"&gt;id&lt;/span&gt;: Uzp7G2ehns7AtQdwpPrQoTQriXKbeldA
File version &lt;span class="nb"&gt;id&lt;/span&gt;: Uzp7G2ehns7AtQdwpPrQoTQriXKbeldA
Content of the uploaded file is:
&lt;span class="c"&gt;#include &amp;lt;stdio.h&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;#include &amp;lt;string.h&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;#include "response.h"&lt;/span&gt;
&lt;span class="c"&gt;#include "storage.h"&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;...&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You are now well equipped to start coding cloud native apps in C!&lt;/p&gt;

</description>
      <category>go</category>
      <category>c</category>
      <category>cgo</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Building cloud native apps: Databases best practices</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Wed, 09 Jun 2021 11:39:22 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/building-cloud-native-apps-databases-best-practices-17md</link>
      <guid>https://forem.com/lukaszbudnik/building-cloud-native-apps-databases-best-practices-17md</guid>
      <description>&lt;p&gt;Databases are the heart of every system. They also tend to be the source of many problems.&lt;/p&gt;

&lt;p&gt;In this post I would like to summarise best practices for implementing databases and caching for cloud native apps.&lt;/p&gt;

&lt;h2&gt;Database types&lt;/h2&gt;

&lt;p&gt;Many developers and architects start with a relational database (RDB) as the default data store. The RDB market is the most mature one, there are countless database frameworks available, and many developers know SQL really well.&lt;/p&gt;

&lt;p&gt;You can use an RDB as a key-value store, a full-text search engine, a JSON document store, or a time-series store.&lt;/p&gt;

&lt;p&gt;All of the above is true. But does one size really fit all?&lt;/p&gt;

&lt;p&gt;As a refresher, here are some database types you should keep in mind when designing and implementing your cloud native app. They are highly specialised databases designed to do their job really well.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Relational&lt;/li&gt;
&lt;li&gt;Key Value&lt;/li&gt;
&lt;li&gt;Document-oriented&lt;/li&gt;
&lt;li&gt;In-memory&lt;/li&gt;
&lt;li&gt;Graph&lt;/li&gt;
&lt;li&gt;Time-series&lt;/li&gt;
&lt;li&gt;Ledger&lt;/li&gt;
&lt;li&gt;Full-text search&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find an excellent summary of those in the AWS database paper &lt;a href="https://pages.awscloud.com/rs/112-TZM-766/images/Enter_the_Purpose-Built-Database-Era.pdf"&gt;Enter the Purpose-Built Database Era&lt;/a&gt;. The paper describes each technology, with examples, pros, and cons. I highly recommend reading it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Database versioning
&lt;/h2&gt;

&lt;p&gt;If you run a cloud native app, perhaps even a multi-tenant one, you are surely using the infrastructure-as-code paradigm. I touched on this already in my previous article in this series: &lt;a href="https://dev.to/lukaszbudnik/building-cloud-native-apps-codebase-3g51"&gt;Building cloud native apps: Codebase&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Database changes and database versioning should be no different. You must not apply changes to your databases manually. Further, every database change should be versioned; this is especially important if you have, for example, 6 scrum teams working on the same product.&lt;/p&gt;

&lt;p&gt;While schema changes happen for all database types (including NoSQL ones), in the case of an RDB you must make explicit changes to both the database schema and the application code.&lt;/p&gt;

&lt;p&gt;For relational database migrations and versioning, I'm using &lt;a href="https://github.com/lukaszbudnik/migrator"&gt;migrator&lt;/a&gt;. It's a super lightweight and super fast database migration tool written in Go. It runs as a docker container and exposes a simple yet powerful GraphQL API. All migrations applied by migrator are grouped into versions for traceability, auditing, and compliance purposes.&lt;/p&gt;

&lt;p&gt;migrator supports PostgreSQL, MySQL (and all its flavours), and MS SQL. It supports multi-tenancy out of the box. It can read migrations from local disk, AWS S3, or Azure Blob Containers. It also outperforms other database migration frameworks, like the leading Java tool FlywayDB, and can sync already-executed migrations from legacy frameworks.&lt;/p&gt;
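&lt;p&gt;As a taste of that GraphQL API, here is a minimal sketch of building a query that lists applied versions. The endpoint path and field names are assumptions based on this article; check migrator's README for the authoritative schema:&lt;/p&gt;

```python
import json

# Sketch: build a GraphQL request body for migrator's /v2 API.
# The endpoint path and the field names are assumptions, not a verified
# copy of migrator's schema; consult the project's README for the exact API.
MIGRATOR_URL = "http://localhost:8080/v2/service"  # assumed local deployment

def build_versions_query() -> dict:
    """Request body listing migration versions applied so far."""
    query = """
    query Versions {
      versions {
        id
        name
        created
      }
    }
    """
    return {"query": query}

payload = build_versions_query()
body = json.dumps(payload)  # POST this to MIGRATOR_URL as application/json
```

&lt;p&gt;Any HTTP client (curl, requests, etc.) can POST this body; migrator groups the returned migrations into versions, which is what makes auditing straightforward.&lt;/p&gt;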

&lt;h2&gt;
  
  
  Caching
&lt;/h2&gt;

&lt;p&gt;I cannot stress enough how important caching is. Caching should be implemented at multiple levels, just like CPU caches are organised into a hierarchy of L1 and L2 (some even use L3 and L4) caches. You should investigate implementing a multi-level caching strategy in your distributed applications. For reference, this is how I do it in my SaaS product:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Hibernate L1 cache - session-level cache, enabled by default;&lt;/li&gt;
&lt;li&gt;Hibernate L2 cache - session factory-level cache, an application-wide cache for data entities; we use Infinispan as the L2 cache implementation, other options include JBoss Cache;&lt;/li&gt;
&lt;li&gt;Application cache - application-wide cache for business objects, implemented using Guava Cache;&lt;/li&gt;
&lt;li&gt;Distributed cache - implemented using a cluster of Redis nodes;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;For completeness: a well-architected system should also include other caches like web cache (HTML/JS/CSS), client-side cache (&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Storage_API"&gt;Web Storage API&lt;/a&gt;/&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API"&gt;IndexedDB API&lt;/a&gt;/&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Cache"&gt;Cache API&lt;/a&gt;), build artifacts, docker images, etc.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Caching can drastically improve the performance of your application. Depending on the scale of your operations, it can save hundreds of millions of database operations a day!&lt;/p&gt;

&lt;p&gt;An important thing to remember is that different types of data may require different expiration times. It's best to write down all the requirements beforehand. You don't want stale data in your caches; on the other hand, expiration times that are too short will lead to performance degradation.&lt;/p&gt;
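&lt;p&gt;The per-data-type expiration idea can be sketched in a few lines. This is an illustrative toy, not a replacement for Guava Cache or Redis:&lt;/p&gt;

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiration.

    A sketch of the 'different data, different expiration time' idea:
    each entry carries its own TTL, so static reference data can live
    much longer than fast-changing business objects.
    """

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # evict the stale entry
            return None
        return value

cache = TTLCache()
cache.put("country-codes", ["PL", "US"], ttl_seconds=3600)  # static data: long TTL
cache.put("session:42", {"user": "anna"}, ttl_seconds=0)    # expires immediately
```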

&lt;h2&gt;
  
  
  Connection pooling
&lt;/h2&gt;

&lt;p&gt;Connection pooling is a must for session-based databases such as RDBs. Connection pooling prepares, opens, and maintains ready-to-use database connections. When the application needs a database connection, it fetches one from the pool and returns it when it no longer needs it. Connection pooling helps to better manage database resources, greatly reducing database connection fluctuations.&lt;/p&gt;

&lt;p&gt;If you are looking for Java examples: I used &lt;a href="https://github.com/swaldman/c3p0"&gt;C3P0&lt;/a&gt; in the past and then switched to &lt;a href="https://github.com/brettwooldridge/HikariCP"&gt;Hikari&lt;/a&gt;.&lt;/p&gt;
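&lt;p&gt;To show the borrow/return mechanics that C3P0 and Hikari implement (far more robustly), here is a toy pool sketch; it uses in-memory SQLite purely to stay self-contained:&lt;/p&gt;

```python
import queue
import sqlite3

class ConnectionPool:
    """Tiny illustrative connection pool (a sketch, not production code).

    Real projects should use a mature pool such as HikariCP (Java) or
    SQLAlchemy's pooling (Python); this only shows the borrow/return idea.
    """

    def __init__(self, size: int):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            # In-memory SQLite keeps the example self-contained.
            self._pool.put(sqlite3.connect(":memory:", check_same_thread=False))

    def borrow(self):
        return self._pool.get()  # blocks until a connection is free

    def give_back(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(size=2)
conn = pool.borrow()
result = conn.execute("SELECT 1 + 1").fetchone()[0]
pool.give_back(conn)  # the connection stays open, ready for the next caller
```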

&lt;p&gt;For those running serverless applications (for example running thousands of functions) and using relational databases as backends, the fluctuations of database connections can cause a lot of unnecessary processing on the DB side. If you are running on AWS, I highly recommend using &lt;a href="https://aws.amazon.com/rds/proxy/"&gt;AWS RDS Proxy&lt;/a&gt; in front of your AWS RDS database. Have your Lambdas talk to the proxy, and the proxy will offload all the work required to pool and maintain database connections for your serverless application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring
&lt;/h2&gt;

&lt;p&gt;I assume you do the basic stuff already: CPU, memory, disk space, read IOPS, write IOPS.&lt;/p&gt;

&lt;p&gt;If you see high CPU usage or free memory dropping to zero, that's a sign that something is wrong. But these metrics won't tell you what.&lt;/p&gt;

&lt;p&gt;You need more in-depth monitoring.&lt;/p&gt;

&lt;p&gt;We used to have pretty detailed monitoring of our data access layer at the application level. However, it was producing inaccurate data because of the Hibernate L1 and L2 caches. At the application level we were recording a DB operation event where in fact that operation was reading data from the L2 cache.&lt;/p&gt;

&lt;p&gt;So what we learnt is that the most accurate data regarding the database comes from the database itself.&lt;/p&gt;

&lt;p&gt;We switched to &lt;a href="https://aws.amazon.com/rds/performance-insights/"&gt;AWS RDS Performance Insights&lt;/a&gt; together with PostgreSQL's native &lt;a href="https://www.postgresql.org/docs/current/pgstatstatements.html"&gt;pg_stat_statements&lt;/a&gt; extension.&lt;/p&gt;

&lt;p&gt;I also went one step further and asked my DevOps team to redesign the VPC so that every type of component got its own subnet. Why? With these fine-grained subnets we could generate network traffic stats from the &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html"&gt;VPC Flow Logs&lt;/a&gt; in a nice visual way. We had a deployment diagram on top of which a simple Python script was adding text labels with information about the network traffic exchanged by all system components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keeping static data next to the application
&lt;/h2&gt;

&lt;p&gt;I would like to finish this post with a true story. I'm running thousands of machines on a weekly basis. All those machines needed to fetch some metadata from a database. There was different metadata for different jobs. The metadata changed only when a new version was released (it was in fact static). Fetching metadata from the database for every job resulted in hundreds of millions of unnecessary database calls. This is what we did:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We started with the obvious one: we implemented an application cache for metadata. It helped a lot but still didn't solve the whole problem, because we still launched a few thousand machines and the application cache of a newly launched machine was always empty;&lt;/li&gt;
&lt;li&gt;The second idea was to offload those operations to our Redis cluster. It helped too; our database was happy, but Redis wasn't that happy;&lt;/li&gt;
&lt;li&gt;The third idea was to package the metadata along with the application code. It was the best of all the options: we didn't have to make any remote calls to either the database or Redis; instead we read the metadata (from Java resources) into the in-memory cache;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The takeaway here is that not everything has to be put in a database. Very often the simplest solutions are the best ones, and I'm a huge fan of the KISS principle.&lt;/p&gt;

</description>
      <category>database</category>
      <category>cache</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Why I choose Keycloak over AWS Cognito</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Wed, 02 Jun 2021 07:54:52 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/why-i-choose-keycloak-over-aws-cognito-55d4</link>
      <guid>https://forem.com/lukaszbudnik/why-i-choose-keycloak-over-aws-cognito-55d4</guid>
      <description>&lt;p&gt;I'm responsible for delivering a secure scalable multi-tenant product that is deployed on AWS. I love AWS and my preference is to use AWS managed services everywhere I can.&lt;/p&gt;

&lt;p&gt;AWS has a Cognito service which is a fully managed service that provides authentication, authorization, and user management.&lt;/p&gt;

&lt;p&gt;However, for a multi-tenant SaaS product, I would go for Keycloak. And in this short article, I will tell you why. I will compare AWS Cognito with Keycloak and show you why Keycloak is still a better choice for me.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-tenant setup
&lt;/h2&gt;

&lt;p&gt;A user pool in AWS Cognito is not multi-tenant. In order to create a multi-tenant system, you have to have a dedicated user pool per tenant. If you are a SaaS product, your tenant provisioning logic has to be extended to automate the following actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create user pool: &lt;a href="https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPool.html"&gt;CreateUserPool&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create user pool client: &lt;a href="https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPoolClient.html"&gt;CreateUserPoolClient&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create user pool domain: &lt;a href="https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPoolDomain.html"&gt;CreateUserPoolDomain&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Optional, to set up a more user-friendly domain that will host the sign-up and sign-in web pages, you need Route53 API calls as well&lt;/li&gt;
&lt;/ol&gt;
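&lt;p&gt;To give a feel for the flow, here is a sketch of the per-tenant request parameters you would pass to boto3's &lt;code&gt;cognito-idp&lt;/code&gt; client for steps 1-3. The tenant name, callback URL, and domain prefix are made up for illustration:&lt;/p&gt;

```python
# Sketch of the per-tenant provisioning calls listed above, expressed as
# boto3-style request parameters. Tenant name, callback URL, and domain
# prefix are illustrative; in real code these dicts are passed to boto3's
# cognito-idp client (create_user_pool, create_user_pool_client,
# create_user_pool_domain).

def tenant_provisioning_requests(tenant: str) -> dict:
    return {
        "CreateUserPool": {
            "PoolName": f"{tenant}-pool",
        },
        "CreateUserPoolClient": {
            # UserPoolId is only known after CreateUserPool returns
            "ClientName": f"{tenant}-client",
            "CallbackURLs": [f"https://{tenant}.example.com/callback"],  # assumed
        },
        "CreateUserPoolDomain": {
            "Domain": f"{tenant}-login",  # Cognito domain prefix, must be unique
        },
    }

requests_for_acme = tenant_provisioning_requests("acme")
```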

&lt;p&gt;That's a lot. &lt;/p&gt;

&lt;p&gt;Now compare it with how easy it is to add a new Keycloak tenant and client in my video. Adding a new tenant and a new client are two really simple calls in Keycloak.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/lJlKNmAsaiE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;
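&lt;p&gt;Those two calls boil down to two POSTs against Keycloak's admin REST API: one creates the realm (the tenant), one creates a client inside it. The host, redirect URIs, and admin token handling below are illustrative:&lt;/p&gt;

```python
import json

# Sketch of the two Keycloak admin REST calls: create a realm (the tenant)
# and create a client inside it. Endpoint paths follow the Keycloak admin
# REST API; the host and redirect URIs are assumptions, and obtaining the
# admin bearer token is omitted for brevity.

KEYCLOAK = "https://keycloak.example.com"  # assumed host

def create_tenant_requests(tenant: str) -> list:
    realm = {"realm": tenant, "enabled": True}
    client = {
        "clientId": f"{tenant}-app",
        "enabled": True,
        "redirectUris": [f"https://{tenant}.example.com/*"],  # assumed
    }
    return [
        ("POST", f"{KEYCLOAK}/admin/realms", json.dumps(realm)),
        ("POST", f"{KEYCLOAK}/admin/realms/{tenant}/clients", json.dumps(client)),
    ]

calls = create_tenant_requests("acme")  # two calls, and the tenant is ready
```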

&lt;h2&gt;
  
  
  MFA setup
&lt;/h2&gt;

&lt;p&gt;OK, you can set up SMS MFA very easily in AWS Cognito. When a user registers, the verification of the phone (via SMS) is provided out of the box: the web pages and verification logic are provided by AWS Cognito.&lt;/p&gt;

&lt;p&gt;But when you actually want to use software tokens MFA (TOTP) then you have to do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add a special OAuth scope "aws.cognito.signin.user.admin" to the app client&lt;/li&gt;
&lt;li&gt;Pass user's JWT access token to &lt;a href="https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_AssociateSoftwareToken.html"&gt;AssociateSoftwareToken&lt;/a&gt; API call&lt;/li&gt;
&lt;li&gt;Generate QR code&lt;/li&gt;
&lt;li&gt;Once user sets up MFA on the phone, pass the user code to &lt;a href="https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_VerifySoftwareToken.html"&gt;VerifySoftwareToken&lt;/a&gt; API call&lt;/li&gt;
&lt;/ol&gt;
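&lt;p&gt;Step 3 is entirely on you: &lt;code&gt;AssociateSoftwareToken&lt;/code&gt; returns a secret, and you have to render it as an &lt;code&gt;otpauth://&lt;/code&gt; URI and QR code yourself. A sketch (the secret, user, and issuer below are made-up examples):&lt;/p&gt;

```python
import urllib.parse

# Sketch of step 3: turn the SecretCode returned by AssociateSoftwareToken
# into the standard otpauth:// URI that authenticator apps understand; a QR
# library then renders this string as an image. Secret/user/issuer are
# made-up examples.

def totp_uri(secret: str, user: str, issuer: str) -> str:
    label = urllib.parse.quote(f"{issuer}:{user}")
    query = urllib.parse.urlencode({"secret": secret, "issuer": issuer})
    return f"otpauth://totp/{label}?{query}"

uri = totp_uri("JBSWY3DPEHPK3PXP", "anna@example.com", "MyApp")
```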

&lt;p&gt;I was surprised that SMS verification is supported out of the box, but for software tokens you have to write your own integration.&lt;/p&gt;

&lt;p&gt;That's a shame really when you compare this with Keycloak MFA. You just set it up as a part of a single user journey with all the web pages, QR code generation, and verification logic provided by Keycloak.&lt;/p&gt;

&lt;p&gt;Again, for reference see my video where I set up and test software tokens MFA in Keycloak:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/XUvaMgTdwy0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Cognito limitations
&lt;/h2&gt;

&lt;p&gt;AWS Cognito isn't too flexible when it comes to some of its settings. For example, these are settings that you cannot change after you create a user pool: sign-in options and user attributes.&lt;/p&gt;

&lt;p&gt;AWS Cognito also has a limit of 1000 user pools per AWS account. That's actually very little for multi-tenant applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Cognito strong points
&lt;/h2&gt;

&lt;p&gt;Before I wrap up, I would like to say that AWS Cognito is still a great choice for many apps. Its strong points are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;hands-free fully managed service&lt;/li&gt;
&lt;li&gt;support for many events which can trigger custom Lambda functions&lt;/li&gt;
&lt;li&gt;great support for mobile apps (Android and iOS SDKs available)&lt;/li&gt;
&lt;li&gt;integration with AWS STS which can exchange Cognito tokens for temporary AWS credentials &lt;/li&gt;
&lt;li&gt;integration with AWS API Gateway using Cognito User Pool Authorizer&lt;/li&gt;
&lt;li&gt;great choice for single-tenant applications&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>cognito</category>
      <category>keycloak</category>
      <category>identitymanagment</category>
      <category>usermanagement</category>
    </item>
    <item>
      <title>Migrating to DynamoDB using Parquet and Glue</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Tue, 04 May 2021 14:16:47 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/migrating-to-dynamodb-using-parquet-and-glue-21h9</link>
      <guid>https://forem.com/lukaszbudnik/migrating-to-dynamodb-using-parquet-and-glue-21h9</guid>
      <description>&lt;p&gt;I am preparing to migrate 1B records from AWS RDS PostgreSQL to AWS DynamoDB. Before moving 1B records I want to build some POCs using a smaller data set (101M records) to find the most optimal way of getting those records into AWS DynamoDB.&lt;/p&gt;

&lt;p&gt;My first choice was obvious: AWS Data Migration Service. AWS DMS supports AWS RDS PostgreSQL as a source and AWS DynamoDB as a target. This can be set up very easily.&lt;/p&gt;

&lt;p&gt;However, it took me a few iterations, trying out different instance types and different migration task parameters, to finally get to a stage where I was fully using 20000 WCU when writing data to DynamoDB. &lt;br&gt;
I also noticed that AWS DMS was using &lt;code&gt;PutItem&lt;/code&gt; operation rather than more efficient &lt;code&gt;BatchWriteItem&lt;/code&gt;.&lt;/p&gt;
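&lt;p&gt;The difference matters because &lt;code&gt;BatchWriteItem&lt;/code&gt; accepts up to 25 put requests per call, so it needs far fewer round trips than one &lt;code&gt;PutItem&lt;/code&gt; per record. A sketch of shaping records into batch payloads (the table name is illustrative):&lt;/p&gt;

```python
# Sketch: group records into BatchWriteItem-shaped payloads. DynamoDB
# accepts at most 25 put requests per BatchWriteItem call; the table name
# below is illustrative. In real code each payload would be passed to
# boto3's batch_write_item (handling UnprocessedItems on the response).

BATCH_LIMIT = 25  # DynamoDB's per-call limit for BatchWriteItem

def to_batch_requests(table: str, items: list) -> list:
    """Group items into BatchWriteItem payloads of at most 25 puts each."""
    batches = []
    for i in range(0, len(items), BATCH_LIMIT):
        chunk = items[i:i + BATCH_LIMIT]
        batches.append({
            "RequestItems": {
                table: [{"PutRequest": {"Item": item}} for item in chunk]
            }
        })
    return batches

# 60 records -> 3 calls (25 + 25 + 10) instead of 60 PutItem calls
batches = to_batch_requests("records", [{"pk": {"S": str(n)}} for n in range(60)])
```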

&lt;p&gt;The fastest of my migration tasks was:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0qnx6hp8n5yre0ju7f4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0qnx6hp8n5yre0ju7f4.png" alt="DMS RDS to DynamoDB"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I watched a few videos from the online AWS re:Invent about Glue and how super efficient it was in transforming and moving data around. I also knew that Glue had added support for writing to DynamoDB.&lt;/p&gt;

&lt;p&gt;I wanted to try it out and see if it could be of any use for my migration project.&lt;/p&gt;

&lt;h1&gt;
  
  
  Export tables to AWS S3 in Parquet format
&lt;/h1&gt;

&lt;p&gt;I used AWS Data Migration Service to export data to AWS S3 in Parquet format. Exporting a table with 101M records was quite fast and took 54m:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8nc98zlifhpgs5vt60c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy8nc98zlifhpgs5vt60c.png" alt="DMS export to S3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Glue &amp;amp; Glue Studio
&lt;/h1&gt;

&lt;p&gt;I hadn't used Glue before. In one of the AWS re:Invent videos I saw that AWS recently added Glue Studio, where you can build jobs using a visual editor and, for more complex jobs, add custom code.&lt;/p&gt;

&lt;p&gt;I built a simple job in Glue Studio, but noticed that DynamoDB is not yet supported (in Studio). Doing a quick Google search I found a sample code right from the Glue documentation: &lt;a href="https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-dynamodb" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-dynamodb&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I pasted that code into my "Transformation - Custom code" block. I only had to change variable names, as Glue Studio generates code using camelCase while the documentation uses snake_case. Anyway, it was just one operation: a call to &lt;code&gt;glue_context.write_dynamic_frame_from_options()&lt;/code&gt; (I hope that in the near future it will be supported out of the box in Glue Studio).&lt;/p&gt;
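&lt;p&gt;For reference, the custom-code block boils down to something like the sketch below. The &lt;code&gt;write_dynamic_frame_from_options()&lt;/code&gt; call itself only runs inside a Glue job (the &lt;code&gt;awsglue&lt;/code&gt; module is not available locally), so it is shown as a comment and only the connection options are built; the table name is illustrative:&lt;/p&gt;

```python
# Sketch of the Glue custom-code block. Only the connection options are
# built here; the actual write call (commented below) runs inside a Glue
# job, where glue_context and dynamic_frame are provided. The table name
# is illustrative.

def dynamodb_sink_options(table_name: str, write_percent: float) -> dict:
    return {
        "dynamodb.output.tableName": table_name,
        # 1.0 means Glue may consume 100% of the table's write capacity
        "dynamodb.throughput.write.percent": str(write_percent),
    }

options = dynamodb_sink_options("my-table", 1.0)

# Inside the Glue job, per the pattern in the Glue documentation:
# glue_context.write_dynamic_frame_from_options(
#     frame=dynamic_frame,
#     connection_type="dynamodb",
#     connection_options=options)
```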

&lt;p&gt;And that was it. I ran the job and it took 3h 7m to complete:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ma9z55evd9qmphn8j8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ma9z55evd9qmphn8j8z.png" alt="Glue Job"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In DynamoDB I saw all my 101M records:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk74nd3icvo8nyv3zil62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk74nd3icvo8nyv3zil62.png" alt="DynamoDB Table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Environment setup and summary
&lt;/h1&gt;

&lt;p&gt;The environment looked like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;VPC with VPC Endpoints to AWS S3 and AWS DynamoDB&lt;/li&gt;
&lt;li&gt;AWS RDS PostgreSQL running in private subnet, db.m5.large&lt;/li&gt;
&lt;li&gt;AWS DMS instance running in private subnet, dms.c5.2xlarge&lt;/li&gt;
&lt;li&gt;DynamoDB table provisioned with 20000 WCU&lt;/li&gt;
&lt;li&gt;AWS DMS migration task with &lt;code&gt;ParallelLoadThreads&lt;/code&gt; set to 130 and &lt;code&gt;ParallelLoadBufferSize&lt;/code&gt; set to 1000 was running at 74% CPU and was generating 20000 WCU&lt;/li&gt;
&lt;li&gt;AWS Glue job had 10 workers, &lt;code&gt;dynamodb.throughput.write.percent&lt;/code&gt; set to 1.0 was maxing 20000 WCU &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The results were the following:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;DMS&lt;br&gt;RDS to DynamoDB&lt;/th&gt;
&lt;th&gt;DMS&lt;br&gt;RDS to S3&lt;/th&gt;
&lt;th&gt;Glue&lt;br&gt;S3 to DynamoDB&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;RDS CPU&lt;/td&gt;
&lt;td&gt;4%&lt;/td&gt;
&lt;td&gt;11%&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RDS Reads IOPS&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;350&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RDS Connections&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DMS CPU&lt;/td&gt;
&lt;td&gt;74%&lt;/td&gt;
&lt;td&gt;11%&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DynamoDB WCU&lt;/td&gt;
&lt;td&gt;20000&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;20000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time&lt;/td&gt;
&lt;td&gt;2h 51m&lt;/td&gt;
&lt;td&gt;54m&lt;/td&gt;
&lt;td&gt;3h 7m&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Direct migration from AWS RDS to AWS DynamoDB using AWS DMS took 2h 51m.&lt;/p&gt;

&lt;p&gt;Staged migration with RDS -&amp;gt; S3 Parquet -&amp;gt; DynamoDB took: 54m + 3h 7m = 4h 1m.&lt;/p&gt;

&lt;p&gt;In the end, plain DMS was faster than the DMS + S3 + Glue trio.&lt;/p&gt;

&lt;p&gt;As you can see in the table above, in both approaches the pressure on AWS RDS PostgreSQL instance was minimal.&lt;/p&gt;

&lt;p&gt;There are still some optimisations I want to try, like speeding up the import to DynamoDB by setting WCU even higher. That will require some AWS DMS task parameters tuning and probably increasing the DMS instance size as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  DMS
&lt;/h2&gt;

&lt;p&gt;Pros of using DMS are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it's fast, even with &lt;code&gt;PutItem&lt;/code&gt; operations&lt;/li&gt;
&lt;li&gt;one-stop shop for your migration tasks&lt;/li&gt;
&lt;li&gt;can run additional validation logic for your migration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons of using DMS are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;need to rightsize the instance and tune task parameters to max DynamoDB writes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  S3 Parquet and Glue
&lt;/h2&gt;

&lt;p&gt;Pros of using S3 Parquet and Glue:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;serverless solution - no need to manage the infrastructure&lt;/li&gt;
&lt;li&gt;didn't have to do any job tuning, setting &lt;code&gt;dynamodb.throughput.write.percent&lt;/code&gt; to 1.0 was all I had to do to max DynamoDB writes&lt;/li&gt;
&lt;li&gt;you get some bonuses, think - data lake!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons of using S3 Parquet and Glue:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;slower than native DMS migration&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Bonus #1
&lt;/h1&gt;

&lt;p&gt;Since I already had the data in AWS S3 in Parquet format, I wanted to play around with AWS Athena. I had used Athena before and already knew it was super fast, but I never had a chance to play around with a 20-column-wide, 101M-record data set. The experience was mind-blowing. Complex queries ran in just a few seconds. The same queries running on the original AWS RDS PostgreSQL instance were taking 20 minutes to complete!&lt;/p&gt;

&lt;h1&gt;
  
  
  Bonus #2
&lt;/h1&gt;

&lt;p&gt;Since I already had the data in an AWS Athena database, I wanted to go one step further and easily visualise it using AWS QuickSight. It took me a few minutes to set up some pie charts and heat maps and publish my dashboard. All by dragging and dropping columns in the QuickSight UI.&lt;/p&gt;

</description>
      <category>database</category>
      <category>aws</category>
      <category>dynamodb</category>
      <category>glue</category>
    </item>
    <item>
      <title>Let the bots do the releases for you while you sleep</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Thu, 08 Apr 2021 14:45:05 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/let-the-bots-do-the-releases-for-you-23gd</link>
      <guid>https://forem.com/lukaszbudnik/let-the-bots-do-the-releases-for-you-23gd</guid>
      <description>&lt;p&gt;When you have a high test coverage or... a high confidence in your code (in my case the former) you can setup fully automatic releases on GitHub. All driven by bots.&lt;/p&gt;

&lt;p&gt;In this post I will show you how I set it up using my own project.&lt;/p&gt;

&lt;p&gt;migrator is a DB migration/evolution tool written in Go. It's super fast and lightweight. It is released as a docker image on docker hub: &lt;a href="https://hub.docker.com/r/lukasz/migrator"&gt;lukasz/migrator&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;migrator has a code coverage of 94% and I decided to release a new docker image every time there is a new dependency available: either a new base docker image or a new version of Go dependencies.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/lukaszbudnik"&gt;
        lukaszbudnik
      &lt;/a&gt; / &lt;a href="https://github.com/lukaszbudnik/migrator"&gt;
        migrator
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Super fast and lightweight DB migration &amp;amp; evolution tool written in Go
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
migrator &lt;a rel="noopener noreferrer" href="https://github.com/lukaszbudnik/migrator/workflows/Build%20and%20Test/badge.svg"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DAr7qhL2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/lukaszbudnik/migrator/workflows/Build%2520and%2520Test/badge.svg" alt="Build and Test"&gt;&lt;/a&gt; &lt;a rel="noopener noreferrer" href="https://github.com/lukaszbudnik/migrator/workflows/Docker%20Image%20CI/badge.svg"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RFfgPYKJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/lukaszbudnik/migrator/workflows/Docker%2520Image%2520CI/badge.svg" alt="Docker"&gt;&lt;/a&gt; &lt;a href="https://goreportcard.com/report/github.com/lukaszbudnik/migrator" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/e82c5bf941f418eff5daf7472aef7ca77f6044cd631cbec4675fc9cc3b897056/68747470733a2f2f676f7265706f7274636172642e636f6d2f62616467652f6769746875622e636f6d2f6c756b61737a6275646e696b2f6d69677261746f72" alt="Go Report Card"&gt;&lt;/a&gt; &lt;a href="https://codecov.io/gh/lukaszbudnik/migrator" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/0203f28d0161c9a1cb5b46094f87eefd30edb65b6cfc10439c4e22d240a7bb57/68747470733a2f2f636f6465636f762e696f2f67682f6c756b61737a6275646e696b2f6d69677261746f722f6272616e63682f6d61737465722f67726170682f62616467652e737667" alt="codecov"&gt;&lt;/a&gt;
&lt;/h1&gt;
&lt;p&gt;Super fast and lightweight DB migration tool written in go. migrator consumes 6MB of memory and outperforms other DB migration/evolution frameworks by a few orders of magnitude.&lt;/p&gt;
&lt;p&gt;migrator manages and versions all the DB changes for you and completely eliminates manual and error-prone administrative tasks. migrator versions can be used for auditing and compliance purposes. migrator not only supports single schemas, but also comes with a multi-schema support (ideal for multi-schema multi-tenant SaaS products).&lt;/p&gt;
&lt;p&gt;migrator runs as a HTTP REST service and can be easily integrated into your continuous integration and continuous delivery pipeline.&lt;/p&gt;
&lt;p&gt;The official docker image is available on docker hub at &lt;a href="https://hub.docker.com/r/lukasz/migrator" rel="nofollow"&gt;lukasz/migrator&lt;/a&gt;. It is ultra lightweight and has a size of 15MB. Ideal for micro-services deployments!&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/lukaszbudnik/migrator"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h1&gt;
  
  
  dependabot
&lt;/h1&gt;

&lt;p&gt;First, you need to enable dependabot for your project. It's super easy. Just follow the instructions detailed here: &lt;a href="https://docs.github.com/en/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies"&gt;Managing vulnerabilities in your project's dependencies&lt;/a&gt; and &lt;a href="https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically"&gt;Keeping your dependencies updated automatically&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;dependabot can create a pull request for you whenever there is a known security vulnerability in your 3rd party dependency and there is a fix version available. You can go even further and tell dependabot to create a pull request whenever there is a new version available. I like to live on the edge and I went ahead and enabled both options.&lt;/p&gt;

&lt;p&gt;For reference here is the link to my dependabot config file: &lt;a href="https://github.com/lukaszbudnik/migrator/blob/master/.github/dependabot.yml"&gt;https://github.com/lukaszbudnik/migrator/blob/master/.github/dependabot.yml&lt;/a&gt;.&lt;/p&gt;
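&lt;p&gt;For illustration, a minimal &lt;code&gt;dependabot.yml&lt;/code&gt; with staggered check times might look like this sketch (the ecosystems and times are illustrative, not a copy of my actual file):&lt;/p&gt;

```yaml
version: 2
updates:
  # Go module dependencies
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "daily"
      time: "04:00"
  # Base image referenced in the Dockerfile
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "daily"
      time: "05:00"
```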

&lt;blockquote&gt;
&lt;p&gt;Protip: Every time a pull request is merged to the target branch, dependabot rebases all its open pull requests (followed by re-running all checks). I schedule every update check at a different time so that the number of rebases is kept to a minimum.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  merge-pull-requests action
&lt;/h1&gt;

&lt;p&gt;The missing bit was automatically merging to the target branch when all tests pass. There is an auto-merge feature available in GitHub, but dependabot doesn't support it (yet).&lt;/p&gt;

&lt;p&gt;I decided to use GitHub Action that was available on the marketplace: &lt;a href="https://github.com/marketplace/actions/merge-pull-requests"&gt;GitHub Actions - Merge pull requests&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I used the default configuration file provided by the authors. However, the default config was quite permissive and I needed to tweak it a little bit. I wanted to auto-merge pull requests with the &lt;code&gt;dependencies&lt;/code&gt; label (all pull requests created by dependabot have it) and, to make sure other people would not abuse the auto-merge functionality, to auto-merge only pull requests created by the &lt;code&gt;dependabot[bot]&lt;/code&gt; user.&lt;/p&gt;

&lt;p&gt;For reference here is the link to my configuration file: &lt;a href="https://github.com/lukaszbudnik/migrator/blob/master/.github/workflows/automerge.yml"&gt;https://github.com/lukaszbudnik/migrator/blob/master/.github/workflows/automerge.yml&lt;/a&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Protip: Make sure you have "Automatically delete head branches" enabled in your repository settings. GitHub will then delete merged branches automatically, keeping things nice and tidy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  project-bot app
&lt;/h1&gt;

&lt;p&gt;This is an optional step. My CI/CD pipeline instructs Docker Hub to build a new &lt;code&gt;latest&lt;/code&gt; tag on every merge to the main branch. However, I also wanted to roll up all the small changes into official releases. I use GitHub projects (rather than milestones) to organise my work. Every time I release a new version I create a new project, which acts as a placeholder for all new work. To help me organise release notes better, I wanted dependabot pull requests to be added automatically to the open GitHub project.&lt;/p&gt;

&lt;p&gt;I decided to use a GitHub App available here: &lt;a href="https://github.com/apps/project-bot"&gt;GitHub Apps - project-bot&lt;/a&gt;. To include new pull requests in your project, just add a new card with the correct markup in one of the columns. &lt;br&gt;
For reference, here is my project that uses the project-bot integration: &lt;a href="https://github.com/lukaszbudnik/migrator/projects/9"&gt;https://github.com/lukaszbudnik/migrator/projects/9&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Protip: If you create your projects using the "Automated kanban" template, GitHub will automatically move merged pull requests to the Done column, so even less configuration is required.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;
  
  
  bots in action
&lt;/h1&gt;

&lt;p&gt;Below is a link to a sample pull request which was:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;created by &lt;code&gt;dependabot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;added to v2021.0.1 project by &lt;code&gt;project-bot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;merged by &lt;code&gt;merge-pull-requests&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;built and published by &lt;code&gt;dockerhub&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And it all happened at 4 AM CEST while I was in a well-deserved deep sleep phase :)&lt;/p&gt;


&lt;div class="ltag_github-liquid-tag"&gt;
  &lt;h1&gt;
    &lt;a href="https://github.com/lukaszbudnik/migrator/pull/178"&gt;
      &lt;img class="github-logo" alt="GitHub logo" src="https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg"&gt;
      &lt;span class="issue-title"&gt;
        Bump github.com/aws/aws-sdk-go from 1.38.14 to 1.38.15
      &lt;/span&gt;
      &lt;span class="issue-number"&gt;#178&lt;/span&gt;
    &lt;/a&gt;
  &lt;/h1&gt;
  &lt;div class="github-thread"&gt;
    &lt;div class="timeline-comment-header"&gt;
      &lt;a href="https://github.com/apps/dependabot"&gt;
        &lt;img class="github-liquid-tag-img" src="https://res.cloudinary.com/practicaldev/image/fetch/s--7izdkoHd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://avatars.githubusercontent.com/in/29110%3Fv%3D4" alt="dependabot[bot] avatar"&gt;
      &lt;/a&gt;
      &lt;div class="timeline-comment-header-text"&gt;
        &lt;strong&gt;
          &lt;a href="https://github.com/apps/dependabot"&gt;dependabot[bot]&lt;/a&gt;
        &lt;/strong&gt; posted on &lt;a href="https://github.com/lukaszbudnik/migrator/pull/178"&gt;&lt;time&gt;Apr 08, 2021&lt;/time&gt;&lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag-github-body"&gt;
      &lt;p&gt;Bumps &lt;a href="https://github.com/aws/aws-sdk-go"&gt;github.com/aws/aws-sdk-go&lt;/a&gt; from 1.38.14 to 1.38.15.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Release notes&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Sourced from &lt;a href="https://github.com/aws/aws-sdk-go/releases"&gt;github.com/aws/aws-sdk-go's releases&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;h1&gt;
&lt;span class="octicon octicon-link"&gt;&lt;/span&gt;Release v1.38.15 (2021-04-07)&lt;/h1&gt;
&lt;h3&gt;
&lt;span class="octicon octicon-link"&gt;&lt;/span&gt;Service Client Updates&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;service/accessanalyzer&lt;/code&gt;: Updates service API, documentation, and paginators&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;service/elasticache&lt;/code&gt;: Updates service API and documentation
&lt;ul&gt;
&lt;li&gt;This release adds tagging support for all AWS ElastiCache resources except Global Replication Groups.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;service/ivs&lt;/code&gt;: Updates service API, documentation, and paginators&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;service/mgn&lt;/code&gt;: Updates service API, documentation, paginators, and examples&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;service/storagegateway&lt;/code&gt;: Updates service API, documentation, and paginators
&lt;ul&gt;
&lt;li&gt;File Gateway APIs now support FSx for Windows as a cloud storage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;


&lt;p&gt;&lt;strong&gt;Changelog&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Sourced from &lt;a href="https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md"&gt;github.com/aws/aws-sdk-go's changelog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;h1&gt;
&lt;span class="octicon octicon-link"&gt;&lt;/span&gt;Release v1.38.15 (2021-04-07)&lt;/h1&gt;
&lt;h3&gt;
&lt;span class="octicon octicon-link"&gt;&lt;/span&gt;Service Client Updates&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;service/accessanalyzer&lt;/code&gt;: Updates service API, documentation, and paginators&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;service/elasticache&lt;/code&gt;: Updates service API and documentation
&lt;ul&gt;
&lt;li&gt;This release adds tagging support for all AWS ElastiCache resources except Global Replication Groups.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;service/ivs&lt;/code&gt;: Updates service API, documentation, and paginators&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;service/mgn&lt;/code&gt;: Updates service API, documentation, paginators, and examples&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;service/storagegateway&lt;/code&gt;: Updates service API, documentation, and paginators
&lt;ul&gt;
&lt;li&gt;File Gateway APIs now support FSx for Windows as a cloud storage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;


&lt;p&gt;&lt;strong&gt;Commits&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/aws/aws-sdk-go/commit/d9428afe4490b19f335aa8576ac204e4ae58c6b7"&gt;&lt;code&gt;d9428af&lt;/code&gt;&lt;/a&gt; Release v1.38.15 (2021-04-07) (&lt;a href="https://github-redirect.dependabot.com/aws/aws-sdk-go/issues/3853" rel="nofollow"&gt;#3853&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;See full diff in &lt;a href="https://github.com/aws/aws-sdk-go/compare/v1.38.14...v1.38.15"&gt;compare view&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;br&gt;
&lt;p&gt;&lt;a href="https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores"&gt;&lt;img src="https://camo.githubusercontent.com/dd8a8398f27db2a09b36aca2dd11d6d8e3a3ee2bc027fb0e02a31411631b4576/68747470733a2f2f646570656e6461626f742d6261646765732e6769746875626170702e636f6d2f6261646765732f636f6d7061746962696c6974795f73636f72653f646570656e64656e63792d6e616d653d6769746875622e636f6d2f6177732f6177732d73646b2d676f267061636b6167652d6d616e616765723d676f5f6d6f64756c65732670726576696f75732d76657273696f6e3d312e33382e3134266e65772d76657273696f6e3d312e33382e3135" alt="Dependabot compatibility score"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting &lt;code&gt;@dependabot rebase&lt;/code&gt;.&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Dependabot commands and options&lt;/strong&gt;&lt;/p&gt;
&lt;br&gt;
&lt;p&gt;You can trigger Dependabot actions by commenting on this PR:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;@dependabot rebase&lt;/code&gt; will rebase this PR&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@dependabot recreate&lt;/code&gt; will recreate this PR, overwriting any edits that have been made to it&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@dependabot merge&lt;/code&gt; will merge this PR after your CI passes on it&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@dependabot squash and merge&lt;/code&gt; will squash and merge this PR after your CI passes on it&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@dependabot cancel merge&lt;/code&gt; will cancel a previously requested merge and block automerging&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@dependabot reopen&lt;/code&gt; will reopen this PR if it is closed&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@dependabot close&lt;/code&gt; will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@dependabot ignore this major version&lt;/code&gt; will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@dependabot ignore this minor version&lt;/code&gt; will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@dependabot ignore this dependency&lt;/code&gt; will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)&lt;/li&gt;
&lt;/ul&gt;


    &lt;/div&gt;
    &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/lukaszbudnik/migrator/pull/178"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>bots</category>
      <category>github</category>
      <category>dependabot</category>
      <category>continuousdeployment</category>
    </item>
    <item>
      <title>Building cloud native apps: Backing services</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Wed, 07 Apr 2021 14:11:19 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/building-cloud-native-apps-backing-services-1ndh</link>
      <guid>https://forem.com/lukaszbudnik/building-cloud-native-apps-backing-services-1ndh</guid>
      <description>&lt;h2&gt;
  
  
  Backing services as attached resources
&lt;/h2&gt;

&lt;p&gt;The Twelve-Factor App has a great summary of backing services: &lt;a href="https://12factor.net/backing-services"&gt;https://12factor.net/backing-services&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I love it and have very little to add. That's the essence of building modern software. &lt;/p&gt;

&lt;h2&gt;
  
  
  Off-the-shelf PaaS/SaaS services
&lt;/h2&gt;

&lt;p&gt;I want to make one important point. Wherever possible, integrate with existing PaaS/SaaS managed services rather than trying to build those services yourself. (&lt;em&gt;Does this ring a bell, &lt;a href="https://dev.to/lukaszbudnik/building-cloud-native-apps-identity-and-access-management-1e5m"&gt;Building cloud native apps: Identity and Access Management&lt;/a&gt;?&lt;/em&gt;)&lt;/p&gt;

&lt;p&gt;The Twelve-Factor App was created by the Heroku team, and I will use Heroku as an example. I remember when I built my first big app on Heroku back in 2012. I used a number of different Heroku addons (now called the &lt;a href="https://elements.heroku.com/"&gt;Heroku Elements Marketplace&lt;/a&gt;) and it was a truly mind-blowing experience. I just attached managed services to my app and used them (endpoints and/or API keys exposed via env variables). Heroku was the avant-garde of PaaS.&lt;/p&gt;

&lt;p&gt;The beauty of the Heroku platform is that you simply can't do it wrong. You don't have access to the underlying infrastructure, so there is no temptation to host your own Elasticsearch cluster and build your own log management solution. I know: homegrown solutions may appear cheaper, but in terms of quality, can you really match a service delivered by a highly specialized company? &lt;/p&gt;

&lt;p&gt;What many people tend to forget is that your team's time is worth more than those services cost. Your team should be focusing on business value, not figuring out how to best implement HA for a database cluster or troubleshooting performance issues with your log management solution.&lt;/p&gt;

&lt;p&gt;Just look at the &lt;a href="https://elements.heroku.com/"&gt;Heroku Elements Marketplace&lt;/a&gt; or check out the AWS, Azure, and GCP lists of managed services. The big cloud players offer everything you need to develop your application: Compute, Serverless, Networking, Databases, Caches, Queues, Storage, Monitoring, Log Management, Security/Governance &amp;amp; Compliance/Threat Detection &amp;amp; Prevention, Encryption, Machine Learning, Artificial Intelligence, Internet of Things, Mobile, Game Tech, Media Services, VR &amp;amp; AR. AWS even offers exotic services like &lt;a href="https://aws.amazon.com/ground-station/"&gt;AWS Ground Station&lt;/a&gt; to control your satellites.&lt;/p&gt;

&lt;p&gt;To wrap up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;treat backing services as attached resources; &lt;/li&gt;
&lt;li&gt;don't build them yourself, use specialized managed services.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>cloudnative</category>
      <category>services</category>
      <category>managed</category>
      <category>heroku</category>
    </item>
    <item>
      <title>Building cloud native apps: Config and Toggles</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Tue, 23 Mar 2021 15:31:45 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/building-cloud-native-apps-config-and-toggles-2384</link>
      <guid>https://forem.com/lukaszbudnik/building-cloud-native-apps-config-and-toggles-2384</guid>
      <description>&lt;h2&gt;
  
  
  Config for cloud native apps
&lt;/h2&gt;

&lt;p&gt;In the original 12-Factor App manifest Config is listed as a third factor: &lt;a href="https://12factor.net/config"&gt;https://12factor.net/config&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To state the obvious: don't hardcode any configuration settings in your application. Always supply them via configuration files or env variables.&lt;/p&gt;

&lt;p&gt;Reduce configuration settings to the bare minimum. Favour convention over configuration and store only the settings that are absolutely necessary in your config.&lt;/p&gt;

&lt;p&gt;Be aware that configuration settings can be:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;static - settings which don't really change; a classic example is a DB endpoint, where changing it requires the application to be restarted to pick up the new value;&lt;/li&gt;
&lt;li&gt;dynamic - settings which can be changed (for example turned on and off) while the application is running.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Different configuration management strategies apply depending on the configuration type. &lt;/p&gt;

&lt;p&gt;Let's review them all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration files vs. env variables
&lt;/h2&gt;

&lt;p&gt;Injecting env variables is usually much simpler than injecting configuration files. With env variables you can make fine-grained changes; with files this is more difficult, as you would have to store and inject the whole (sometimes large) file. On the other hand, if your application requires 30+ configuration settings, then managing 30+ env variables is definitely going to be a challenge. Use configuration files in that case.&lt;/p&gt;

&lt;p&gt;There is also a middle ground. Treat the configuration file like a template and inject the key configuration settings at runtime. Look at a sample &lt;a href="https://github.com/lukaszbudnik/migrator"&gt;lukaszbudnik/migrator&lt;/a&gt; configuration file:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;If your tool/framework supports env variable substitution, then commit your config file to the repo, package it along with the application, and inject the actual values at runtime using env variables. This way you get the best of both worlds.&lt;/p&gt;
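&lt;p&gt;If your framework doesn't support substitution natively, it's only a few lines to do yourself. A minimal Python sketch (the template contents and variable names are made up for illustration):&lt;/p&gt;

```python
import os
from string import Template

# A hypothetical config template committed to the repo; the real values
# are injected at runtime from env variables.
config_template = """\
baseLocation: migrations
driver: postgres
dataSource: user=${DB_USER} password=${DB_PASSWORD} host=${DB_HOST}
"""

# In a real deployment these come from the environment, not from code.
os.environ.update({"DB_USER": "app", "DB_PASSWORD": "s3cret", "DB_HOST": "db.internal"})

# substitute() raises KeyError on missing variables - fail fast at startup
config = Template(config_template).substitute(os.environ)
print(config)
```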

&lt;h2&gt;
  
  
  Metadata services
&lt;/h2&gt;

&lt;p&gt;If you're deploying your app to the cloud and use services like virtual machines, then you can leverage metadata endpoints. You can query those endpoints at runtime to get a lot of useful information about the machine and its cloud environment. All big players support it: &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html"&gt;AWS EC2&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/linux/instance-metadata-service"&gt;Azure Virtual Machines&lt;/a&gt;, &lt;a href="https://cloud.google.com/compute/docs/storing-retrieving-metadata"&gt;GCP Compute&lt;/a&gt;.&lt;/p&gt;
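&lt;p&gt;For example, on an AWS EC2 instance you can query the metadata service with plain HTTP (IMDSv2 shown; the availability-zone path is just one of many available paths):&lt;/p&gt;

```shell
# Obtain an IMDSv2 session token, then use it to read instance metadata
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/availability-zone
```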

&lt;h2&gt;
  
  
  Internal DNS service
&lt;/h2&gt;

&lt;p&gt;When you deploy your cloud native apps to AWS, Azure, GCP, Kubernetes you can leverage internal DNS services.&lt;/p&gt;

&lt;p&gt;Internal DNS names are the same for all deployments: staging, pentest, preproduction, production. They don't change. Wherever your app is deployed, the invoices service will always have the "invoices" DNS name.&lt;/p&gt;

&lt;h2&gt;
  
  
  API &amp;amp; Secret Keys
&lt;/h2&gt;

&lt;p&gt;When running on AWS, Azure, or GCP, do not use API and Secret Keys. IAM roles are first-class citizens, even in container services. Your servers, containers, and functions can assume roles, meaning you don't have to inject API and Secret Keys into them. This greatly simplifies configuration management and the configuration lifecycle (one obvious benefit: no need to rotate keys on a regular basis).&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration as a Service
&lt;/h2&gt;

&lt;p&gt;Where to store configuration settings? In a dedicated Configuration as a Service solution. Configuration as a Service is a secure, highly-available, durable, encrypted storage for your configuration and secrets. Every major cloud provider offers such service. Depending on your cloud provider see: &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html"&gt;AWS Systems Manager Parameter Store&lt;/a&gt;, &lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/azure-app-configuration/"&gt;Azure App Configuration&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/azure/key-vault/"&gt;Azure Key Vault&lt;/a&gt;, or &lt;br&gt;
&lt;a href="https://cloud.google.com/secret-manager/docs"&gt;GCP Secret Manager&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The other benefit worth mentioning is that the above services come with versioning out of the box. Thanks to this you have a complete history of all changes for auditing, governance and compliance, and/or troubleshooting purposes.&lt;/p&gt;
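&lt;p&gt;As an illustration, storing an encrypted setting and reading it back with AWS Systems Manager Parameter Store looks like this (the parameter name and value are examples):&lt;/p&gt;

```shell
# Store an encrypted setting (SecureString is encrypted with KMS)
aws ssm put-parameter --name "/my-app/prod/db-password" \
  --value "s3cret" --type SecureString

# Read it back, decrypted, as plain text
aws ssm get-parameter --name "/my-app/prod/db-password" \
  --with-decryption --query "Parameter.Value" --output text

# Every previous version is kept for auditing
aws ssm get-parameter-history --name "/my-app/prod/db-password"
```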

&lt;p&gt;If you host your Kubernetes cluster in a cloud, then I highly recommend using the &lt;a href="https://github.com/external-secrets/kubernetes-external-secrets"&gt;external-secrets/kubernetes-external-secrets&lt;/a&gt; project to sync your Kubernetes secrets with external secrets management systems (completely transparently). kubernetes-external-secrets supports the following backends: AWS Systems Manager Parameter Store, Hashicorp Vault, Azure Key Vault, Google Secret Manager, and Alibaba Cloud KMS Secret Manager.&lt;/p&gt;
&lt;h2&gt;
  
  
  Feature toggles
&lt;/h2&gt;

&lt;p&gt;Now that we have covered static configuration, let's talk about dynamic configuration, or, as it's commonly called, feature toggles.&lt;/p&gt;

&lt;p&gt;These are the settings that can be turned on and off dynamically at runtime.&lt;/p&gt;

&lt;p&gt;They are very useful when you release new functionality in Alpha, Beta, and GA stages: initially to a small group of customers (or to your design partners first), then to a wider group, and finally making it generally available to all customers. &lt;br&gt;
You can also use feature toggles when you release new functionality using canary releases. &lt;br&gt;
Finally, feature toggles are used when you give your customers the option to opt out of some features.&lt;/p&gt;

&lt;p&gt;The feature toggles framework that we built stores all the information in the DB (in contrast to the Configuration as a Service approach we use for all other configuration settings). Since we store feature toggles in the DB, we support the following 3 levels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DB - we use feature toggles in stored procedures;&lt;/li&gt;
&lt;li&gt;back-end - we load feature toggles from DB and wrap them with a Java service; this can be any language you use: Java, Go, JavaScript, Ruby, Python;&lt;/li&gt;
&lt;li&gt;front-end - we have a REST service which exposes feature toggles to JavaScript app; JavaScript app uses it to implement different behavior and/or render different components in UI.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Tenant config vs. global system config
&lt;/h2&gt;

&lt;p&gt;Assume all configs and feature toggles can be changed on a per-customer/tenant basis.&lt;/p&gt;

&lt;p&gt;Implement a hierarchy of configs: check if a tenant-specific config exists; if yes, return it; if not, fall back to the global system one.&lt;/p&gt;
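&lt;p&gt;The lookup can be sketched in a few lines. This is a minimal illustration: the storage is a plain dict and all setting and tenant names are made up; in practice this would be a DB query or a Configuration as a Service call.&lt;/p&gt;

```python
GLOBAL = None  # sentinel tenant for system-wide settings

settings = {
    (GLOBAL, "invoice.footer"): "Default footer",
    ("acme", "invoice.footer"): "ACME custom footer",
}

def get_setting(key, tenant=None):
    """Return the tenant-specific value if it exists, else the global one."""
    if (tenant, key) in settings:
        return settings[(tenant, key)]
    return settings.get((GLOBAL, key))

print(get_setting("invoice.footer", tenant="acme"))   # tenant override
print(get_setting("invoice.footer", tenant="other"))  # falls back to global
```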
&lt;h2&gt;
  
  
  Kubernetes
&lt;/h2&gt;

&lt;p&gt;Since we are talking about cloud-native apps, let me finish with some Kubernetes examples. &lt;/p&gt;
&lt;h3&gt;
  
  
  Kubernetes ConfigMap
&lt;/h3&gt;

&lt;p&gt;A Kubernetes ConfigMap is used to store configuration in the form of key-value pairs and/or files. ConfigMaps are stored in clear text and should not be used to store any sensitive information. For more, see the official documentation: &lt;a href="https://kubernetes.io/docs/concepts/configuration/configmap/"&gt;Kubernetes ConfigMap&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can create a configmap from a file like this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
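&lt;p&gt;A minimal sketch of that command (the file name and configmap name are examples):&lt;/p&gt;

```shell
# Create a configmap named app-config from a local file
kubectl create configmap app-config --from-file=config.yaml

# Inspect what was stored
kubectl describe configmap app-config
```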


&lt;p&gt;Later, we can inject that configmap as a volume into the pod and then mount it into the container:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
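&lt;p&gt;A minimal pod manifest doing that mount could look like this (names, image, and mount path are examples):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: app:latest
      volumeMounts:
        - name: config        # mount the volume defined below
          mountPath: /etc/app # config.yaml appears under this directory
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: app-config      # the configmap created earlier
```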


&lt;h3&gt;
  
  
  Kubernetes Secrets
&lt;/h3&gt;

&lt;p&gt;A Kubernetes Secret is used to store sensitive information. Just like a ConfigMap, it can be created in the form of key-value pairs and/or files. There are built-in secret types too. For more, see the official documentation: &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/"&gt;Kubernetes Secrets&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can create a secret using a YAML file or from a literal; I will use a literal here:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
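&lt;p&gt;For example (the secret name, key, and value are placeholders):&lt;/p&gt;

```shell
# Create a generic secret from a literal key-value pair
kubectl create secret generic db-credentials --from-literal=password=s3cret
```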


&lt;p&gt;Later, we can reference this secret when defining an env variable in a container:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
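&lt;p&gt;A container snippet referencing such a secret could look like this (names are examples):&lt;/p&gt;

```yaml
# Fragment of a container spec: expose one secret key as an env variable
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials # the secret created earlier
        key: password        # which key within the secret to use
```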


&lt;h3&gt;
  
  
  Using external-secrets/kubernetes-external-secrets
&lt;/h3&gt;

&lt;p&gt;The below gist contains a step-by-step example showing how to inject AWS Systems Manager Parameter Store secrets into Kubernetes secrets using &lt;a href="https://github.com/external-secrets/kubernetes-external-secrets"&gt;https://github.com/external-secrets/kubernetes-external-secrets&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I post it at the bottom as it's a detailed (long) example.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
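&lt;p&gt;For orientation, an ExternalSecret resource syncing a single Parameter Store entry looks roughly like this (a sketch following that project's README; the parameter path and names are examples):&lt;/p&gt;

```yaml
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  backendType: systemManager        # AWS Systems Manager Parameter Store
  data:
    - key: /my-app/prod/db-password # SSM parameter path
      name: password                # key in the resulting Kubernetes secret
```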


</description>
      <category>cloudnative</category>
      <category>configuration</category>
      <category>toggles</category>
    </item>
    <item>
      <title>Building cloud native apps: Identity and Access Management</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Tue, 16 Feb 2021 15:02:55 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/building-cloud-native-apps-identity-and-access-management-1e5m</link>
      <guid>https://forem.com/lukaszbudnik/building-cloud-native-apps-identity-and-access-management-1e5m</guid>
      <description>&lt;h2&gt;
  
  
  Identity and Access Management for cloud native apps
&lt;/h2&gt;

&lt;p&gt;Every app needs identity and access management. "No problem", I hear you say. You've done it a thousand times: a users table with a login and a password hash.&lt;/p&gt;

&lt;p&gt;But, is it really that simple?&lt;/p&gt;

&lt;p&gt;Here are a few pretty standard questions you will hear from your customers and their info sec teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how do I enforce a particular password length?&lt;/li&gt;
&lt;li&gt;how do I enforce lowercase, uppercase, digits, special characters in the password?&lt;/li&gt;
&lt;li&gt;how do I enforce password change every X days?&lt;/li&gt;
&lt;li&gt;great, I can enforce password change every X days, but you don't have password history which means I can reset the password to the old one; how do I enforce password history of X?&lt;/li&gt;
&lt;li&gt;my user forgot his password, where's the reset password functionality?&lt;/li&gt;
&lt;li&gt;MFA is a standard for us; oh... you don't support MFA?&lt;/li&gt;
&lt;li&gt;since you don't support MFA we have to use our SSO to login into your app; oh... you don't support SSO?&lt;/li&gt;
&lt;li&gt;great, you support SSO using SAML, but SAML is kinda old-school... do you support OIDC?&lt;/li&gt;
&lt;li&gt;a minor one, hope it is not too much of a hassle: how do I add my company logo and a legal statement to the login page?&lt;/li&gt;
&lt;li&gt;hello again, we updated the risk factor for your application, and we now must use a WebAuthn passwordless security key device to authenticate... &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By now your simple users table design has gotten quite a bit more complicated.&lt;/p&gt;

&lt;p&gt;And now imagine you are developing a multi-tenant cloud native app and all customers come with their own security requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  off-the-shelf and open-source solutions
&lt;/h2&gt;

&lt;p&gt;Instead of throwing yourself into development (and spending days and months reinventing the wheel), pause for a moment. Why not use an off-the-shelf solution? Or, even better, an open-source one?&lt;/p&gt;

&lt;p&gt;Some time ago I set myself on a mission: promote using off-the-shelf Identity and Access Management solutions.&lt;/p&gt;

&lt;p&gt;Many architects fear integrating other solutions into their systems. I don't understand this. Write down your requirements, do the research, write down the results, review them, pick the right solution for you, and then start building your app. Should you not be happy with the solution, you can always implement your own... but before you do, please scroll up and take another look at just a few of the questions you will get from your customers. &lt;/p&gt;

&lt;p&gt;If you choose an open-source solution and it lacks a specific feature, by reading my previous post in this series &lt;a href="https://dev.to/lukaszbudnik/building-cloud-native-apps-dependencies-1f33"&gt;Building cloud native apps: Dependencies&lt;/a&gt;, you already know what to do: implement it and contribute back! &lt;/p&gt;

&lt;p&gt;Trust me, integrating with an off-the-shelf solution (either proprietary or open-source) will save you a lot of time and money compared to building an IAM solution yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keycloak
&lt;/h2&gt;

&lt;p&gt;Keycloak is an open-source Identity and Access Management solution. It was initially developed by the JBoss community and is curated by Red Hat now. It's also the foundation of the Red Hat Single Sign-On product. &lt;/p&gt;

&lt;p&gt;I love Keycloak. It is very easy to get started with, comes with tens of features out of the box, fits nicely into multi-tenant architectures, offers a few deployment options to choose from, and has nice getting-started tutorials for beginners as well as very detailed documentation for more advanced users.&lt;/p&gt;

&lt;p&gt;To encourage people to use Keycloak I decided, instead of writing a series of posts about it, to record videos where I show how to (within a few minutes) set up &amp;amp; test various security requirements. Check them out below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying Keycloak cluster to Kubernetes
&lt;/h3&gt;

&lt;p&gt;If you want to try out all the Keycloak features yourself (and not only watch the videos - which is still fine if you're doing the research I was talking about earlier), then in this video you will learn how to deploy a Keycloak cluster to Kubernetes. &lt;/p&gt;

&lt;p&gt;Source code is available on GitHub: &lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/lukaszbudnik" rel="noopener noreferrer"&gt;
        lukaszbudnik
      &lt;/a&gt; / &lt;a href="https://github.com/lukaszbudnik/keycloak-kubernetes" rel="noopener noreferrer"&gt;
        keycloak-kubernetes
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Keycloak cluster deployed to Kubernetes
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/g8LVIr8KKSA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #1: Custom password policies and MFA
&lt;/h3&gt;

&lt;p&gt;We have a customer who wants to set up the following password policies:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;at least 1 uppercase character&lt;/li&gt;
&lt;li&gt;at least 1 lowercase character&lt;/li&gt;
&lt;li&gt;at least 1 digit&lt;/li&gt;
&lt;li&gt;at least 2 special characters (in the video I use 2 for testing purposes)&lt;/li&gt;
&lt;li&gt;password length of 8&lt;/li&gt;
&lt;li&gt;password history of 10&lt;/li&gt;
&lt;li&gt;password not a username&lt;/li&gt;
&lt;li&gt;expire password after 90 days&lt;/li&gt;
&lt;li&gt;use a custom hashing algorithm, &lt;a href="https://en.wikipedia.org/wiki/PBKDF2" rel="noopener noreferrer"&gt;PBKDF2-SHA256&lt;/a&gt; (a key-stretching hash that makes passwords less vulnerable to brute-force attacks)&lt;/li&gt;
&lt;li&gt;enforce MFA&lt;/li&gt;
&lt;/ol&gt;
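&lt;p&gt;For reference, most of the above can also be expressed as a single Keycloak password policy string, for example via the admin CLI. This is a sketch: the realm name is an example, and the policy provider names follow Keycloak's built-in password policy list, so verify them against your Keycloak version.&lt;/p&gt;

```shell
# Apply the password policies to a realm with Keycloak's admin CLI
./kcadm.sh update realms/myrealm -s 'passwordPolicy="upperCase(1) and lowerCase(1) and digits(1) and specialChars(2) and length(8) and passwordHistory(10) and notUsername and forceExpiredPasswordChange(90) and hashAlgorithm(pbkdf2-sha256)"'
```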

&lt;p&gt;In this video I show how to test the above settings using a sample Keycloak app. I also show how to verify the JWT tokens generated by Keycloak using &lt;a href="https://jwt.io" rel="noopener noreferrer"&gt;JWT.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/XUvaMgTdwy0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #2: Single Sign-On using SAML
&lt;/h3&gt;

&lt;p&gt;We have a customer who wants to set up Single Sign-On using SAML. In this case the customer has full control over identity management and can enforce additional authentication factors, like allowing authentication from the corporate network only, etc.&lt;/p&gt;

&lt;p&gt;In this video I also show how to use custom SAML assertions to import user attributes into Keycloak. I'm using &lt;a href="https://jumpcloud.com" rel="noopener noreferrer"&gt;JumpCloud&lt;/a&gt; as an SSO provider. It's free for small teams.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/K7mjE58hl4I"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #3: Social Identity Providers
&lt;/h3&gt;

&lt;p&gt;We have a customer who wants to set up Single Sign-On using one of the Social Identity Providers. These are very convenient to use, battle-tested, and backed by the biggest companies. Further, the customer doesn't have to introduce yet another solution into their technology stack and can simply use what their teams are already using. &lt;/p&gt;

&lt;p&gt;As we are all developers here, the following video shows how to set up GitHub as a Social Identity Provider.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/P8lpE9nV_Sw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #4: User Federation using LDAP
&lt;/h3&gt;

&lt;p&gt;We have a customer who is an IT dinosaur. They are not SSO-ready, but they have LDAP in place. Keycloak supports User Federation and can sync with LDAP directories. Once the users are in Keycloak, your app can talk to Keycloak and take full advantage of JSON Web Tokens or OIDC without any changes to your app.&lt;/p&gt;

&lt;p&gt;In Keycloak you can set up User Federation in just a few clicks. See how to sync with LDAP, including custom attribute mapping, in less than 7 minutes! If you need a free LDAP service check out &lt;a href="https://jumpcloud.com" rel="noopener noreferrer"&gt;JumpCloud&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/_yTIQAIm-QE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #5: Customizing multi-tenant login pages
&lt;/h3&gt;

&lt;p&gt;In the below video I show how to customize login pages: adding a customer logo together with a legal banner. I do it in a way that works for multi-tenant deployments, without having to create a dedicated login page for every customer.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/AM0XV-QTT6Y"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #6: Custom Authentication Flows
&lt;/h3&gt;

&lt;p&gt;Keycloak is highly customizable. You can configure password policies, MFA, and SSO using SAML or OIDC, customize UI themes and authentication flows, and even write Java/JavaScript code to implement custom logic in Keycloak. &lt;/p&gt;

&lt;p&gt;In the below video I show how to customize authentication flows and deploy a custom authenticator written in Java. The authenticator uses the user's IP address to either force or skip the MFA step (MFA is skipped if the authentication request comes from a trusted network).&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/lukaszbudnik" rel="noopener noreferrer"&gt;
        lukaszbudnik
      &lt;/a&gt; / &lt;a href="https://github.com/lukaszbudnik/keycloak-ip-authenticator" rel="noopener noreferrer"&gt;
        keycloak-ip-authenticator
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Simple Custom Java Keycloak Authenticator
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/u36QK9oyrtM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #7: WebAuthn passwordless authentication
&lt;/h3&gt;

&lt;p&gt;Keycloak supports WebAuthn passwordless authentication out of the box. It can be anything that your browser/system supports. In the below video I show how to set up macOS Touch ID passwordless authentication in Keycloak.&lt;/p&gt;

&lt;p&gt;Apart from setting up WebAuthn passwordless authentication I also show how to customize authentication flows.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/PPoaPsfYGwQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #8: Authentication for distributed apps
&lt;/h3&gt;

&lt;p&gt;This is an advanced tutorial. Still, everything is fully automated, and you can try it out on your local machine.&lt;/p&gt;

&lt;p&gt;Source code and all the steps are available on GitHub.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/lukaszbudnik" rel="noopener noreferrer"&gt;
        lukaszbudnik
      &lt;/a&gt; / &lt;a href="https://github.com/lukaszbudnik/keycloak-kubernetes" rel="noopener noreferrer"&gt;
        keycloak-kubernetes
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Keycloak cluster deployed to Kubernetes
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;To complement other scenarios, Keycloak is now used as a true Identity and Access Management solution: it contains information about users (identity management) and their roles (access management).&lt;/p&gt;

&lt;p&gt;The demo comprises:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React front-end application authenticating with Keycloak using the official Keycloak JavaScript adapter &lt;a href="https://github.com/lukaszbudnik/hotel-spa" rel="noopener noreferrer"&gt;lukaszbudnik/hotel-spa&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;haproxy acting as an authentication &amp;amp; authorization gateway implemented by &lt;a href="https://github.com/lukaszbudnik/haproxy-auth-gateway" rel="noopener noreferrer"&gt;lukaszbudnik/haproxy-auth-gateway&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;mock backend microservices implemented by &lt;a href="https://github.com/lukaszbudnik/yosoy" rel="noopener noreferrer"&gt;lukaszbudnik/yosoy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Keycloak as Identity and Access Management&lt;/li&gt;
&lt;li&gt;ready-to-import Keycloak realm with predefined client, roles, and test users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/J42sR1t7Vt0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #9: Multi-tenant JavaScript Clients
&lt;/h3&gt;

&lt;p&gt;An extension of the above video where I show how to use the Keycloak JavaScript adapter in a multi-tenant fashion.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/lJlKNmAsaiE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #10: Deploying Keycloak to AWS EKS
&lt;/h3&gt;

&lt;p&gt;The first video from this post shows how to deploy a Keycloak cluster to a local Kubernetes cluster. &lt;/p&gt;

&lt;p&gt;See how easy it is to set up a Keycloak cluster on AWS EKS.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/BuNZ7bjbzOQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #11: Transparent authentication for AWS API Gateway
&lt;/h3&gt;

&lt;p&gt;AWS API Gateway has a concept of JWT Authorizers that can be attached to its resources. The authorizer can be backed by any identity provider, as long as it issues standards-compliant JWTs.&lt;/p&gt;

&lt;p&gt;In the below video I show how to set up Keycloak as an AWS API Gateway JWT Authorizer. &lt;/p&gt;

&lt;p&gt;AWS CDK code for deploying a sample application is available on my GitHub account.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/lukaszbudnik" rel="noopener noreferrer"&gt;
        lukaszbudnik
      &lt;/a&gt; / &lt;a href="https://github.com/lukaszbudnik/aws-cdk-items-app" rel="noopener noreferrer"&gt;
        aws-cdk-items-app
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      API Gateway HTTP API with JWT Authorizer and Lambda and DynamoDB integration.
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/r2bN9usRmXE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario #12: Transparent authentication for existing apps
&lt;/h3&gt;

&lt;p&gt;Let's finish with a scenario where we use Keycloak to add authentication and authorization to existing apps. &lt;/p&gt;

&lt;p&gt;Again, this is an advanced tutorial. Still, everything is fully automated, and you can try it out on your local machine.&lt;/p&gt;

&lt;p&gt;I use my open-source project migrator as a sample cloud-native app to secure.&lt;/p&gt;

&lt;p&gt;Source code is available on GitHub: &lt;a href="https://github.com/lukaszbudnik/migrator/tree/master/tutorials/oauth2-proxy-oidc-haproxy" rel="noopener noreferrer"&gt;https://github.com/lukaszbudnik/migrator/tree/master/tutorials/oauth2-proxy-oidc-haproxy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It also uses haproxy acting as an authentication &amp;amp; authorization gateway implemented by &lt;a href="https://github.com/lukaszbudnik/haproxy-auth-gateway" rel="noopener noreferrer"&gt;lukaszbudnik/haproxy-auth-gateway&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Just like in the previous video, Keycloak is used as a true Identity and Access Management solution: it contains information about users (identity management) and their roles (access management).&lt;/p&gt;

&lt;p&gt;In this scenario haproxy is deployed in front of migrator. haproxy verifies the JWT access token and implements access control based on the user's roles to allow or deny access to the underlying migrator resources.&lt;/p&gt;
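
&lt;p&gt;The role check the gateway performs can be sketched in a few lines. This is a Python sketch with hypothetical paths and role names (the real enforcement happens inside haproxy); it assumes the access token carries Keycloak's standard &lt;code&gt;realm_access.roles&lt;/code&gt; claim.&lt;/p&gt;

```python
# Role-based access control over decoded JWT claims - a sketch.
# Paths and role names below are hypothetical examples.
REQUIRED_ROLES = {
    "/v1/config": "admin",
    "/v1/migrations": "editor",
}

def is_allowed(claims: dict, path: str) -> bool:
    """Allow the request only if the token carries the role the path requires."""
    required = REQUIRED_ROLES.get(path)
    if required is None:
        return False  # deny unknown paths by default
    # Keycloak puts realm-level roles under the realm_access.roles claim.
    roles = claims.get("realm_access", {}).get("roles", [])
    return required in roles

claims = {"realm_access": {"roles": ["editor"]}}
print(is_allowed(claims, "/v1/migrations"))  # True
print(is_allowed(claims, "/v1/config"))      # False
```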

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/SJf5baf77sY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>identitymanagement</category>
      <category>accessmanagement</category>
      <category>keycloak</category>
    </item>
    <item>
      <title>Building cloud native apps: Dependencies</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Thu, 21 Jan 2021 15:11:29 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/building-cloud-native-apps-dependencies-1f33</link>
      <guid>https://forem.com/lukaszbudnik/building-cloud-native-apps-dependencies-1f33</guid>
      <description>&lt;h2&gt;
  
  
  Dependencies management for cloud native apps
&lt;/h2&gt;

&lt;p&gt;Dependency management is a very important aspect of every application's lifecycle management. It has its own chapter in the twelve-factor app manifest: &lt;a href="https://www.12factor.net/dependencies"&gt;https://www.12factor.net/dependencies&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is a very good summary of how to approach dependency management in your project (at all levels including OS dependencies). I would like to throw in my two cents and focus more on the software development part.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keeping dependencies up to date
&lt;/h2&gt;

&lt;p&gt;Make sure that the build tool you use can help you keep up with your dependencies (either out of the box or via plugins). A project comprising several services can have dozens of external dependencies. Staying up to date is not a super complex task, but (let's face it) it's a rather dull one, so it should be automated. Dependencies should be upgraded automatically every week (at minimum), followed by a full suite of unit and integration tests. &lt;/p&gt;

&lt;p&gt;Minor version upgrades shouldn't break the API and, very importantly, they contain bug fixes and security updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scanning dependencies for security vulnerabilities
&lt;/h2&gt;

&lt;p&gt;Speaking of security updates: you should scan your dependencies on a regular basis for security vulnerabilities. You can use projects like &lt;a href="https://retirejs.github.io/retire.js/"&gt;Retire.js&lt;/a&gt; or &lt;a href="https://owasp.org/www-project-dependency-check/"&gt;OWASP Dependency-Check&lt;/a&gt;. There are also fully featured multi-platform, multi-language solutions like &lt;a href="https://dependencytrack.org/"&gt;Dependency-Track&lt;/a&gt; and many others.&lt;/p&gt;

&lt;p&gt;If you are using GitHub to store your code, you get some security features out of the box. &lt;br&gt;
You can set up GitHub code scanning to find security vulnerabilities and errors in your code: &lt;a href="https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning"&gt;https://docs.github.com/en/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning&lt;/a&gt;.&lt;br&gt;
GitHub Dependabot can scan your project and let you know if there are security vulnerabilities in your dependencies. Dependabot can automatically create a pull request with a bumped version once a fix is available. It can also be configured to create a new pull request every time a newer version is available (not only when there's a security vulnerability). More about Dependabot can be found here: &lt;a href="https://docs.github.com/en/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/configuring-dependabot-security-updates"&gt;https://docs.github.com/en/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/configuring-dependabot-security-updates&lt;/a&gt;.&lt;/p&gt;
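
&lt;p&gt;A minimal &lt;code&gt;.github/dependabot.yml&lt;/code&gt; enabling weekly version updates might look like this (the package ecosystem and directory below are assumptions about your project layout - adjust them to match your build tool):&lt;/p&gt;

```yaml
# .github/dependabot.yml - weekly version updates, not just security fixes
version: 2
updates:
  - package-ecosystem: "npm"   # assumes a Node.js project in the repo root
    directory: "/"
    schedule:
      interval: "weekly"
```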

&lt;p&gt;There are many more security-related features in GitHub. For a complete list see &lt;a href="https://github.com/features/security"&gt;https://github.com/features/security&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Checking dependencies' licenses for compliance
&lt;/h2&gt;

&lt;p&gt;A very important step: check licenses for all your dependencies, both on the back-end (maven, gradle, npm, go, gems, ...) and the front-end (npm, grunt, yarn, ...). Prepare a list of approved licenses and fail any pull request that adds dependencies with licenses outside that list. You can then review the new licenses and, if you accept them, add them to the approved list. If not, reject the pull request and pick another library.&lt;/p&gt;
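
&lt;p&gt;A toy version of that approved-list gate could look like this. The license names and dependency list are hypothetical; a real project would feed this from its build tool's license report (e.g. a maven or npm license plugin).&lt;/p&gt;

```python
# Fail the build when a dependency carries a license outside the approved list.
APPROVED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def check_licenses(dependencies: dict) -> list:
    """Return the names of dependencies whose license is not approved."""
    return sorted(
        name for name, license_id in dependencies.items()
        if license_id not in APPROVED_LICENSES
    )

# Hypothetical license report: dependency name -> SPDX license identifier.
deps = {"left-pad": "MIT", "some-lib": "GPL-3.0"}
violations = check_licenses(deps)
if violations:
    print(f"License check failed for: {', '.join(violations)}")
```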

&lt;h2&gt;
  
  
  Share the love
&lt;/h2&gt;

&lt;p&gt;There are a lot of open-source foundations, government institutions, startups, tech giants, and even banks that open-source their work. Every project I've worked on used open-source technologies, and I'm pretty sure yours do too. &lt;br&gt;
If you found a bug, implemented an enhancement, or maybe even added a brand new feature - please contribute back. Every contribution counts and every contribution makes a difference!&lt;/p&gt;

&lt;p&gt;If you're on another open-source level, you may even consider sharing your own project. That's how the open-source community is changing the world around us. Share the love!&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>dependencies</category>
      <category>ssdlc</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Building cloud native apps: Architecture</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Wed, 13 Jan 2021 15:06:57 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/building-cloud-native-apps-architecture-lan</link>
      <guid>https://forem.com/lukaszbudnik/building-cloud-native-apps-architecture-lan</guid>
      <description>&lt;h2&gt;
  
  
  Cloud native architecture
&lt;/h2&gt;

&lt;p&gt;The era of monolithic applications with one big database is long gone. These applications are terrible in every aspect of application lifecycle management, especially production operations and continuous maintenance &amp;amp; improvement.&lt;/p&gt;

&lt;p&gt;If you are about to start a new cloud native project, it has to be microservices. There are a number of benefits of adopting microservices architecture. The most important ones are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;from the software development point of view - you will decompose your application into smaller services which follow the single responsibility principle (reduced complexity &amp;amp; clear design); they are faster to develop, as different teams can work on different services simultaneously and release them to production more frequently (and independently of others);&lt;/li&gt;
&lt;li&gt;from the production operations point of view - your application will be fine-grained and thus have better availability, resiliency, and performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You've probably seen videos from KubeCon, AWS, Azure, and Google conferences where presenters talk about hundreds of services or show dependency graphs at 10% scale so that they can fit on a slide. &lt;/p&gt;

&lt;p&gt;For most of us this is going from microservices to nanoservices. &lt;/p&gt;

&lt;p&gt;I do have to admit that there are cases where nanoservices are the only way of making your application work. A single function becomes a service of its own (Function as a Service). Such a function is then scaled independently of other functions/services. These are the applications that process billions of events an hour. They are a totally different universe, and only a few lucky ones get the chance to work with them.&lt;/p&gt;

&lt;p&gt;So, unless you absolutely have to go the nanoservices route, I recommend against it. You risk losing all the benefits of microservices. You will spend more time managing your services (some people call it "loving your microservices"), orchestrating releases, figuring out the dependencies between them, and trying to keep track of the costs.&lt;/p&gt;

&lt;p&gt;When you start architecting your application you can begin with just a few services. Break your application into functionally independent and loosely coupled services. At the design stage, think about the potential candidates for splitting your application further. If you make these decisions (or at least think about them) at the design stage, you will be well prepared to refactor your application in the future.&lt;/p&gt;

&lt;p&gt;OK. Let's look at two examples of architecting microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Microservices from ground up
&lt;/h2&gt;

&lt;p&gt;We are developing an HR cloud native system. How do we architect it as a microservices cloud native app?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the front-end application is a single page application; the application is served using CDN (note: some CDNs also support serving dynamic content or even handling PUT/POST/DELETE requests which is super useful and helps reduce latency even further);&lt;/li&gt;
&lt;li&gt;all user requests go through the API gateway; &lt;/li&gt;
&lt;li&gt;API gateway connects to authentication service to validate requests and passes through only the valid ones;&lt;/li&gt;
&lt;li&gt;API gateway routes traffic to one of the functional microservices where the actual data processing and data storing happens;&lt;/li&gt;
&lt;li&gt;functional services can be: personal information management, recruitment, time-tracking, benefits, career development, etc. (you get the idea); each functional service can be scaled independently (personal information management and time-tracking are probably the most heavily used); &lt;/li&gt;
&lt;li&gt;each functional service has its own database; the database is private to its service;&lt;/li&gt;
&lt;li&gt;services can talk to each other only via API (sync) or event bus (async);&lt;/li&gt;
&lt;li&gt;long running processes are run by background services; these services are broken down into functional areas too; in the case of our HR system this could be reporting, payroll integrations, etc.; &lt;/li&gt;
&lt;li&gt;you must have an audit service (seriously for every cloud native app you must have it); all actions performed in the system should be recorded; there are regulations like EU's GDPR or California's CCPA and there are customers dying to ingest your audit events for their own auditing and governance &amp;amp; compliance purposes;&lt;/li&gt;
&lt;li&gt;audit service should be externally accessible to your customers so that it can be integrated with customers' SIEM solutions;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Microservices by breaking monolith
&lt;/h2&gt;

&lt;p&gt;A slightly trickier scenario, but also quite popular. We have an existing application. To make things worse, it's a monolith. Can we turn it into microservices? Yes, we can.&lt;/p&gt;

&lt;p&gt;Say we have an application which supports URI paths like &lt;code&gt;/employees&lt;/code&gt;, &lt;code&gt;/companies&lt;/code&gt;, and &lt;code&gt;/invoices&lt;/code&gt;. They take you to different parts of your application where you can list, view, create, update, and delete resources. &lt;/p&gt;

&lt;p&gt;Splitting the code is easy. A high-level action plan looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;create 3 new services by creating 3 exact copies of the existing codebase; &lt;/li&gt;
&lt;li&gt;at the application load balancer level create 3 routing rules (&lt;code&gt;/employees&lt;/code&gt;, &lt;code&gt;/companies&lt;/code&gt;, and &lt;code&gt;/invoices&lt;/code&gt;) and point them to the newly created services;&lt;/li&gt;
&lt;li&gt;release it;&lt;/li&gt;
&lt;li&gt;remove the logic that does not belong to given service; from employees service remove code related to companies and invoices, perform similar exercise for companies and invoices services;&lt;/li&gt;
&lt;li&gt;release it;&lt;/li&gt;
&lt;li&gt;any common code left should be extracted to a shared library and referenced as a dependency;&lt;/li&gt;
&lt;li&gt;release it. &lt;/li&gt;
&lt;/ol&gt;
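
&lt;p&gt;Step 2 above could look like this as an haproxy configuration fragment (backend and server names are hypothetical; your load balancer's syntax may differ):&lt;/p&gt;

```
# haproxy.cfg fragment - route each URI prefix to its own service
frontend app
    bind *:80
    use_backend employees_svc if { path_beg /employees }
    use_backend companies_svc if { path_beg /companies }
    use_backend invoices_svc  if { path_beg /invoices }

backend employees_svc
    server employees1 employees:8080
# companies_svc and invoices_svc backends are defined analogously
```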

&lt;p&gt;Great! We now have 3 services. However, they all talk to the same SQL database. Breaking up a monolithic database is not as easy, but it's still not rocket science. A high-level action plan looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;create a new database config for every service; at first they all point to the same physical database;&lt;/li&gt;
&lt;li&gt;release it; &lt;/li&gt;
&lt;li&gt;list all foreign keys that cross service boundaries; a classic example is the user entity, which usually ends up referenced by many other entities; hopefully there shouldn't be too many of them;&lt;/li&gt;
&lt;li&gt;review the database access logic to catch all the places where you need to sync data in more than one service; this is an important step, because from now on you will be responsible for managing the integrity of your data; in practice there should only be scalar foreign keys (which will now become ordinary int/bigint/UUID columns); also, the foreign keys (which are primary keys in their original tables) never really change, so there is no need to update them when updating the original entity; &lt;/li&gt;
&lt;li&gt;release it;&lt;/li&gt;
&lt;li&gt;drop the FK constraints; once you complete this step you will have logically split the database schema;&lt;/li&gt;
&lt;li&gt;release it;&lt;/li&gt;
&lt;li&gt;split the data access layer into separate libraries; group data access logic by the service functionality; refactor common code to a shared library;&lt;/li&gt;
&lt;li&gt;release it;&lt;/li&gt;
&lt;li&gt;next move is to physically split the database; you can do this by: using fork functionality, restoring from snapshot, restoring from point-in-time, or first creating read replicas and then promoting them - choose the approach that is the best for you; remember that based on your split strategy and the size of the database this can take some time;&lt;/li&gt;
&lt;li&gt;update database configs to point to new databases;&lt;/li&gt;
&lt;li&gt;release it;&lt;/li&gt;
&lt;li&gt;the last action (similar to what we did with the code) is to remove the database components that don't belong to a given service (e.g. remove all invoices and companies components from the employees database and vice-versa).&lt;/li&gt;
&lt;/ol&gt;
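
&lt;p&gt;Dropping a cross-service FK constraint while keeping the column as a plain scalar reference is a one-liner in SQL (table, column, and constraint names below are hypothetical):&lt;/p&gt;

```sql
-- Once data-sync logic is in place, drop the cross-service foreign key.
-- The employee_id column stays behind as an ordinary bigint/UUID reference.
ALTER TABLE invoices DROP CONSTRAINT fk_invoices_employee_id;
```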

&lt;p&gt;Release often and in small steps to reduce the risk and gain confidence in your plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying services
&lt;/h2&gt;

&lt;p&gt;Now that we know how to build microservices we need to actually deploy them.&lt;/p&gt;

&lt;p&gt;It should come as no surprise that for microservices I would recommend &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; and serverless for nanoservices. &lt;/p&gt;

&lt;p&gt;With Kubernetes you can easily port existing applications. I would say that every application written in the past 15 years can be containerized. Every major cloud provider, apart from having their own managed Kubernetes service, can help you move your existing applications to the cloud: &lt;a href="https://aws.amazon.com/app2container/"&gt;AWS App2Container&lt;/a&gt;, &lt;a href="https://azure.microsoft.com/en-us/resources/containerize-your-apps-with-docker-and-kubernetes/"&gt;Azure Whitepaper: Containerize your apps with Docker and Kubernetes&lt;/a&gt;, &lt;a href="https://cloud.google.com/migrate/anthos"&gt;Google Migrate for Anthos&lt;/a&gt;. &lt;br&gt;
In short: if you didn't hardcode configuration inside your code and didn't do anything silly, you are good to be containerized.&lt;/p&gt;

&lt;p&gt;With serverless, again, every major cloud provider has its own serverless technology stack and tons of resources: &lt;a href="https://aws.amazon.com/serverless/"&gt;AWS Serverless&lt;/a&gt;, &lt;a href="https://azure.microsoft.com/en-us/overview/serverless-computing/"&gt;Azure Serverless Computing&lt;/a&gt;, and &lt;a href="https://cloud.google.com/serverless"&gt;Google Serverless Computing&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>cloudnative</category>
      <category>nanoservices</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Building cloud native apps: Codebase</title>
      <dc:creator>Łukasz Budnik</dc:creator>
      <pubDate>Thu, 31 Dec 2020 03:24:04 +0000</pubDate>
      <link>https://forem.com/lukaszbudnik/building-cloud-native-apps-codebase-3g51</link>
      <guid>https://forem.com/lukaszbudnik/building-cloud-native-apps-codebase-3g51</guid>
      <description>&lt;h2&gt;
  
  
  Everything starts with the code
&lt;/h2&gt;

&lt;p&gt;Codebase is the first factor mentioned in the twelve-factor app manifest: &lt;a href="https://www.12factor.net/codebase"&gt;https://www.12factor.net/codebase&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is the very first factor everyone should consider when building a new app. Why? Because everything starts with the code.&lt;/p&gt;

&lt;p&gt;When I started my professional IT career I was working with CVS (Concurrent Versioning System), then I was working with Subversion, did one project in Mercurial, but fell in love with Git.&lt;/p&gt;

&lt;h2&gt;
  
  
  Git and GitOps
&lt;/h2&gt;

&lt;p&gt;Git is the undisputed king of version control systems. Git makes the dev world go round - it is the foundation of GitOps and what we can now call &lt;code&gt;everything-as-code&lt;/code&gt; paradigm (with the most notable examples like &lt;code&gt;infrastructure-as-code&lt;/code&gt;, &lt;code&gt;build-as-code&lt;/code&gt;, &lt;code&gt;pipeline-as-code&lt;/code&gt;, &lt;code&gt;documentation-as-code&lt;/code&gt;, and &lt;code&gt;security-as-code&lt;/code&gt;). If you're starting a new project, it must be simply Git.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monorepo vs. Multirepo
&lt;/h2&gt;

&lt;p&gt;The twelve-factor app manifest also touches on an important matter which I would like to stress as well. &lt;/p&gt;

&lt;p&gt;Every app should have its own codebase. If you're building your cloud native app based on microservices, then every service should have its own code repository (the multirepo approach). Developers have a tendency to take shortcuts: if multiple apps are stored in a single repo (the monorepo approach), there is a risk of referencing other apps/components directly and breaking the clean design and component separation. Finally, having one codebase per app simplifies a lot: code management, release management, and everything we can label as CI/CD pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hosting
&lt;/h2&gt;

&lt;p&gt;Where to host your codebase? My preference is &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt;. The list of available features is mind-blowing: storing your code; organising your documentation, pull requests, discussions, teams, organisations, and projects; running CI/CD workflows; and last but not least, running security scans for your code and 3rd party dependencies. It's where I host all my projects, and with the (not so) recent announcement of unlimited private repos, GitHub made life so much easier. &lt;a href="https://bitbucket.org/"&gt;BitBucket&lt;/a&gt; is another alternative. Or, if you're going with AWS, you could consider &lt;a href="https://aws.amazon.com/codecommit/"&gt;AWS CodeCommit&lt;/a&gt; and leverage the power of the AWS platform (IAM and other AWS Code* services).&lt;/p&gt;

&lt;p&gt;Just don't host a Git server yourself. There are examples of companies that did it, and it didn't go as planned.&lt;/p&gt;

</description>
      <category>git</category>
      <category>codebase</category>
      <category>saas</category>
      <category>cloudnative</category>
    </item>
  </channel>
</rss>
