<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alex Hernández</title>
    <description>The latest articles on Forem by Alex Hernández (@stratdes).</description>
    <link>https://forem.com/stratdes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F777177%2Fd96d8792-7049-464c-8612-101a57b93321.png</url>
      <title>Forem: Alex Hernández</title>
      <link>https://forem.com/stratdes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/stratdes"/>
    <language>en</language>
    <item>
      <title>How To Install Docker Using Ansible</title>
      <dc:creator>Alex Hernández</dc:creator>
      <pubDate>Sun, 09 Jan 2022 09:27:13 +0000</pubDate>
      <link>https://forem.com/stratdes/how-to-install-docker-using-ansible-2ihj</link>
      <guid>https://forem.com/stratdes/how-to-install-docker-using-ansible-2ihj</guid>
      <description>&lt;p&gt;With the advent of Docker and containerization in general, tools like Ansible, Puppet, or Chef have been losing weight as most of the configuration of the system occurs inside a container.&lt;/p&gt;

&lt;p&gt;Moreover, as cloud computing platforms like Google Cloud, AWS, or Azure provide managed Kubernetes clusters, the need to configure machines yourself shrinks every day.&lt;/p&gt;

&lt;p&gt;But what happens if you cannot afford a cloud service and just want to buy a VPS or a dedicated machine and install Docker and Docker Compose to run a couple of containers?&lt;/p&gt;

&lt;p&gt;Should you do it by hand?&lt;/p&gt;

&lt;p&gt;Not at all. Ansible to the rescue.&lt;/p&gt;

&lt;p&gt;In this article, I will explain how to install and configure Ansible, and how to use it to install Docker.&lt;/p&gt;

&lt;h2&gt;How to install Ansible&lt;/h2&gt;

&lt;p&gt;Installing Ansible just means installing some CLI tools, and it’s very easy regardless of the platform you are using. I will show you how to install Ansible on macOS and Ubuntu.&lt;/p&gt;

&lt;p&gt;For Mac users: you can install Ansible using Homebrew, just by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For Ubuntu users: you can install Ansible by running the following commands:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can find more information in the official documentation.&lt;/p&gt;
&lt;h2&gt;How to configure Ansible&lt;/h2&gt;

&lt;p&gt;Configuring Ansible is quite simple.&lt;/p&gt;

&lt;p&gt;First of all, you need to create a directory called playbooks. This is where you will store YAML files with the steps needed to configure your remote host (the VPS where you want to install Docker and Docker Compose using Ansible).&lt;br&gt;
Next, you need to create a file called inventory (it can actually be called anything), with the following content:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IP_OF_THE_VPS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That’s all. Pretty simple. The only thing to consider is that you need to be able to SSH into this machine using an SSH key. So if ssh user@IP_OF_THE_VPS already works for you, you are ready to execute Ansible playbooks.&lt;/p&gt;
&lt;h2&gt;A playbook to install Docker and Docker Compose&lt;/h2&gt;

&lt;p&gt;This is the whole playbook YAML content, and I will explain it step by step:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
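&lt;p&gt;A minimal sketch of such a playbook (playbooks/main.yaml), following the explanation below; the Ubuntu codename and the Docker Compose version and download URL are assumptions you should check against the official documentation:&lt;/p&gt;

```yaml
- hosts: all
  remote_user: ubuntu
  become: true
  vars:
    compose_version: v2.2.3   # illustrative; pick the release you need
  tasks:
    - name: install dependencies
      apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg-agent
          - software-properties-common
        state: present
        update_cache: true

    - name: add GPG key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: add docker repository to apt
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu focal stable   # adjust codename
        state: present

    - name: install docker
      apt:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
        state: present
        update_cache: true

    - name: check docker is active
      service:
        name: docker
        state: started
        enabled: true

    - name: Ensure group docker exists
      ansible.builtin.group:
        name: docker
        state: present

    - name: adding ubuntu to docker group
      user:
        name: ubuntu
        groups: docker
        append: true

    - name: install docker-compose
      get_url:
        url: https://github.com/docker/compose/releases/download/{{ compose_version }}/docker-compose-linux-x86_64
        dest: /usr/local/bin/docker-compose
        mode: "0755"

    - name: set docker-compose ownership
      file:
        path: /usr/local/bin/docker-compose
        owner: ubuntu
```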



&lt;p&gt;First things first: the hosts key, whose value is all, means the playbook will be executed against all the hosts in the inventory. As we only have one, we can just set all and things will go just fine.&lt;/p&gt;

&lt;p&gt;Then, we have the remote_user key: this is the user we use to SSH into the machine, let’s say ubuntu, but it could be any user with SSH access and the proper permissions.&lt;/p&gt;

&lt;p&gt;become: this means we are going to execute the different commands using sudo. This is needed to install packages, change permissions, groups, etc. If you open Docker’s official documentation, you will find that all of the commands are run with sudo.&lt;/p&gt;

&lt;p&gt;Next, you find an array of tasks, which contains the different processes we are going to run on the remote host.&lt;br&gt;
Each task has a name, an action (like apt, service, or ansible.builtin.group), and optionally a loop. Actions usually take parameters, like name or state in the apt one.&lt;/p&gt;

&lt;p&gt;The first task, called &lt;em&gt;install dependencies&lt;/em&gt;, installs the following packages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;apt-transport-https&lt;/li&gt;
&lt;li&gt;ca-certificates&lt;/li&gt;
&lt;li&gt;curl&lt;/li&gt;
&lt;li&gt;gnupg-agent&lt;/li&gt;
&lt;li&gt;software-properties-common&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check in the documentation of Docker that these dependencies are required to install Docker.&lt;/p&gt;

&lt;p&gt;If you look at the task, you will see that state has the value present. This means Ansible will ensure these packages are present on the machine, installing them only if needed (this is how Ansible stays idempotent).&lt;/p&gt;

&lt;p&gt;The next task, &lt;em&gt;add GPG key&lt;/em&gt;, adds an APT key to the system. If you are familiar with Ubuntu, you’ll already know this is needed before installing from certain repositories.&lt;/p&gt;

&lt;p&gt;And just below we have the task &lt;em&gt;add docker repository to apt&lt;/em&gt;, which does what it says: it adds Docker’s repository to the machine.&lt;/p&gt;

&lt;p&gt;Time to install the Docker packages in the next task. More precisely, we are going to install the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker-ce&lt;/li&gt;
&lt;li&gt;docker-ce-cli&lt;/li&gt;
&lt;li&gt;containerd.io&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then, with the task &lt;em&gt;check docker is active&lt;/em&gt;, we ensure the service is running after installation. And we make sure the docker group is in place with the task &lt;em&gt;Ensure group docker exists&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;At this point, we should already have Docker installed on our machine. But we would only be able to run commands using sudo, which is not desirable. So we run the next task, adding ubuntu to docker group, which adds the user ubuntu (our remote user) to the docker group.&lt;/p&gt;

&lt;p&gt;Now Docker is installed and we can execute commands without sudo. But we don’t have Docker Compose yet, which we also need. The next couple of tasks install it.&lt;/p&gt;

&lt;p&gt;The first one downloads the binary and installs it under &lt;em&gt;/usr/local/bin/docker-compose&lt;/em&gt;, setting the needed permissions.&lt;/p&gt;

&lt;p&gt;The last one just makes the user ubuntu the owner of the binary.&lt;/p&gt;

&lt;p&gt;Now that we understand the playbook, how do we execute it?&lt;/p&gt;
&lt;h2&gt;How to execute a playbook file using Ansible&lt;/h2&gt;

&lt;p&gt;Ansible comes with a CLI tool to execute playbooks, which is ansible-playbook.&lt;/p&gt;

&lt;p&gt;Run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i inventory playbooks/main.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s all. Ansible should be able to connect and install everything needed. The output will inform you of which actions have run.&lt;/p&gt;

&lt;p&gt;You can check that Ansible is actually idempotent by running the command again. Nothing should change.&lt;/p&gt;

&lt;h2&gt;Ansible is still with us&lt;/h2&gt;

&lt;p&gt;Despite the advent of Docker and the like, there are still tasks you need to run on your machines, and you don’t want to do them by hand.&lt;/p&gt;

&lt;p&gt;Ansible is still with us and can help you provision machines in a repeatable, versioned way.&lt;/p&gt;

&lt;p&gt;This article was originally published &lt;a href="https://alexhernandez.info/blog/how-to-install-docker-using-ansible/"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>engineering</category>
      <category>infrastructure</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Docker Template For PHP Explained</title>
      <dc:creator>Alex Hernández</dc:creator>
      <pubDate>Fri, 31 Dec 2021 08:13:54 +0000</pubDate>
      <link>https://forem.com/stratdes/docker-template-for-php-explained-3989</link>
      <guid>https://forem.com/stratdes/docker-template-for-php-explained-3989</guid>
      <description>&lt;p&gt;Several years ago, when Docker was just emerging and very few people were using it, and most of us were using Vagrant as a local development environment, I started to work in a company where everything had been built in Docker.&lt;/p&gt;

&lt;p&gt;I had never used it before. I was very used to Vagrant, and I had little desire to learn something new that I didn’t think would give me anything better. I was wrong, as we all know today.&lt;/p&gt;

&lt;p&gt;But anyway, there I was, and I needed to learn. So I opened the first project I was going to work on and started the usual archeology to understand how Docker was going to help me run the project.&lt;/p&gt;

&lt;p&gt;After two hours without a clue, I asked one of my colleagues. With time, I got used to this new container technology I had never used before, but it probably took longer than it should have, because the project was not friendly at all.&lt;/p&gt;

&lt;p&gt;Over the years, I’ve learned how to set up Docker so that it’s easy to use without needing to know every detail (which is worth learning anyway). Today I want to give you an easy-to-use template for Docker with PHP, explained so you can understand how it works in just 10 minutes.&lt;/p&gt;

&lt;p&gt;Let’s go!&lt;/p&gt;

&lt;h2&gt;The file structure&lt;/h2&gt;

&lt;p&gt;First things first: we need to understand how files are distributed in the workspace folder and, more importantly, where we should put our project files.&lt;/p&gt;

&lt;p&gt;There are three important folders: app, bin, and docker.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;app: this is where your project files go. If you are using Symfony, Laravel, or a similar framework, the contents of its root folder should live inside the app folder.&lt;/li&gt;
&lt;li&gt;bin: contains some handy scripts to run docker commands much faster.&lt;/li&gt;
&lt;li&gt;docker: this is where the docker setup lives. More on this later.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the app folder, you will find two more: bin and public. These folders mimic how frameworks usually lay out console and server entry points, and you can adjust this with each framework’s tools. I have included two fake files, index.php and console.php, to test each option.&lt;/p&gt;

&lt;h2&gt;Configuring Docker&lt;/h2&gt;

&lt;p&gt;Docker provides a CLI tool to build and run container images separately, one at a time. You know, docker build -t tag . or docker run tag.&lt;/p&gt;

&lt;p&gt;If you want to run more than one image, and we want to run php-fpm and nginx at the same time, the best option (in a local environment) is docker-compose. With docker-compose you just declare what you need and then run it. This plan is set up in a file called docker-compose.yaml. This is our docker-compose.yaml file (docker/docker-compose.yaml):&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
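&lt;p&gt;A minimal sketch of what docker/docker-compose.yaml can look like; the image tags and paths here are illustrative, not necessarily the exact ones in the template:&lt;/p&gt;

```yaml
version: "3.8"              # docker-compose API version

services:
  console:
    image: php:8.1-cli      # illustrative tag
    volumes:
      - ../app:/app         # share the app folder
    working_dir: /app

  php-fpm:
    image: php:8.1-fpm      # illustrative tag
    volumes:
      - ../app:/app
    working_dir: /app

  nginx:
    image: nginx:stable
    ports:
      - "80:80"             # host port 80 maps to container port 80
    volumes:
      - ../app:/app         # yes, nginx needs the app folder too
      - ./host.conf:/etc/nginx/conf.d/default.conf
    working_dir: /app
```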


&lt;p&gt;Let’s analyze the file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;version: the version of the docker-compose API you want to use. You can find more details here.&lt;/li&gt;
&lt;li&gt;services: here you define the services you need.&lt;/li&gt;
&lt;li&gt;console: the service for php-cli, used to run the console: the docker image you want to use, the folders you want to share with the container (more below), and the working dir (the default folder).&lt;/li&gt;
&lt;li&gt;php-fpm: the service for php-fpm: the docker image you want to use, the folders you want to share with the container, and the working dir.&lt;/li&gt;
&lt;li&gt;nginx: the service for nginx: the docker image, the shared folders, and the working dir. You also define which ports you want to map between the host and the container (port 80 on the host points to port 80 in the container).&lt;/li&gt;
&lt;li&gt;shared folders: we share the app folder in all the services (yes, nginx needs it too). And we share the host.conf file to serve our app/public/index.php file. You can see the host file below:&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
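&lt;p&gt;A sketch of what host.conf can look like; apart from the fastcgi_pass line, the directives are a generic example, not necessarily the template’s exact file:&lt;/p&gt;

```nginx
server {
    listen 80;
    server_name localhost;
    root /app/public;
    index index.php;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ \.php$ {
        # run PHP through the php-fpm service from docker-compose
        fastcgi_pass php-fpm:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```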


&lt;p&gt;As the intention of this post is not to explain how nginx works, I will just point out the fastcgi_pass php-fpm:9000 part: it uses the php-fpm service to run PHP from nginx.&lt;/p&gt;

&lt;p&gt;You may be wondering how networking works in terms of the two services we have defined: php-fpm and nginx.&lt;/p&gt;

&lt;p&gt;The answer is actually very easy: if you don’t define networking, docker creates a default network, and all the services are inside it and can see each other. Which happens to be exactly what we need.&lt;/p&gt;

&lt;p&gt;So now we have defined which services we want and how they are built (we use docker images from Docker Hub). Now, how can we launch these services to run the console?&lt;/p&gt;

&lt;h2&gt;Running the console&lt;/h2&gt;

&lt;p&gt;To run the console, we would have to type something like docker-compose run whatever params etc. But typing this every time we want to run the console is the opposite of productivity, so I have created a simple bash file:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
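&lt;p&gt;A sketch of what this bash file can look like; the exact paths and line layout may differ from the original script:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Resolve the folder where the script is saved, so it can be
# run from anywhere, not only from the project root.
DIR="$(cd "$(dirname "$0")"; pwd)"

# Run the console service, forwarding all params to console.php.
docker-compose -f "$DIR/../docker/docker-compose.yaml" run --rm console php bin/console.php "$@"
```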


&lt;p&gt;Lines 1 and 3 only get the folder where the script is saved. This is handy when you want to run the script from somewhere other than the project root.&lt;/p&gt;

&lt;p&gt;Line 6 is the line that actually runs docker-compose.&lt;br&gt;
We provide the path to the docker-compose.yaml file, then run, then the service we want to run (in our case, console), and then the actual command, which is php console.php params.&lt;/p&gt;

&lt;p&gt;So we can just run ./bin/console.sh params. And if you do, docker will pull the php-cli image from Docker Hub and then execute app/bin/console.php, outputting This is the console, which is what we built as a fake console.&lt;/p&gt;
&lt;h2&gt;Serving the application&lt;/h2&gt;

&lt;p&gt;What if we want to serve nginx and php-fpm? We have another handy bash file:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
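&lt;p&gt;A sketch of what bin/up.sh can look like; paths are assumed from the file structure above:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Resolve the folder where the script is saved.
DIR="$(cd "$(dirname "$0")"; pwd)"

# Pull the images first, then start php-fpm and nginx detached (-d).
docker-compose -f "$DIR/../docker/docker-compose.yaml" pull
docker-compose -f "$DIR/../docker/docker-compose.yaml" up -d php-fpm nginx
```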


&lt;p&gt;Lines 5 and 6 do the trick.&lt;/p&gt;

&lt;p&gt;First of all, we pull the images. Then, again, we provide the path to the docker-compose.yaml file, then up -d (the -d flag detaches the process), then the services we want to run: php-fpm and nginx.&lt;/p&gt;

&lt;p&gt;And we can just run bin/up.sh to do the trick.&lt;/p&gt;

&lt;p&gt;Images are pulled and run, and we can go to localhost and see the Hello World! which, again, is what we have faked.&lt;br&gt;
If you want to stop the services, run bin/stop.sh. You can see the stop script here; I don’t think I need to explain it, as it is quite straightforward:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
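&lt;p&gt;For reference, a sketch of a stop script in the same style:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Resolve the folder where the script is saved.
DIR="$(cd "$(dirname "$0")"; pwd)"

# Stop the running services.
docker-compose -f "$DIR/../docker/docker-compose.yaml" stop
```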


&lt;h2&gt;Seeing the logs&lt;/h2&gt;

&lt;p&gt;If you want to see the logs, you can just run docker logs -f container, where -f tails the log. To find the container id, run docker ps, which outputs the list of containers: the id is the first column.&lt;/p&gt;

&lt;p&gt;But how does logging work in this setup?&lt;/p&gt;

&lt;p&gt;Well, basically everything you output to stderr and stdout goes to the logs. That’s it.&lt;/p&gt;

&lt;h2&gt;Where to find the template&lt;/h2&gt;

&lt;p&gt;I have pushed my template to a public repo on GitHub. Click here to fork/download.&lt;/p&gt;

&lt;h2&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;Infrastructure can be tough. It is very important to build easy-to-use frameworks and tooling so developers can be productive from minute zero.&lt;/p&gt;

&lt;p&gt;With this template, for instance, any developer can run the project just by running two commands (console and server) without even knowing how things work behind the scenes.&lt;/p&gt;

&lt;p&gt;Then, with enough time and confidence, learning the details is easier.&lt;/p&gt;




&lt;p&gt;This article was originally published on &lt;a href="https://medium.com/codex/docker-template-for-php-explained-d674018e7cef"&gt;medium&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>php</category>
    </item>
    <item>
      <title>What Is The CQRS Pattern?</title>
      <dc:creator>Alex Hernández</dc:creator>
      <pubDate>Sun, 26 Dec 2021 09:04:31 +0000</pubDate>
      <link>https://forem.com/stratdes/what-is-the-cqrs-pattern-4eoe</link>
      <guid>https://forem.com/stratdes/what-is-the-cqrs-pattern-4eoe</guid>
      <description>&lt;p&gt;CQRS stands for Command Query Responsibility Segregation and is an evolution of the CQS, Command Query Separation from Bertrand Meyer. CQRS seems to be stated by &lt;a href="https://twitter.com/gregyoung"&gt;Greg Young&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;CQS&lt;/h2&gt;

&lt;p&gt;CQS basically means that a method should be either a command or a query; in other words, it should change something or return something, but not both. Asking a question should not change the answer.&lt;/p&gt;

&lt;p&gt;Let's see an example, in PHP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function getCounter(): int
{
    $this-&amp;gt;counter++;
    return $this-&amp;gt;counter;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method increases the counter and then returns the new value. But what if we just want to know the value, without changing it? This method both changes something and returns a value, so it violates CQS.&lt;/p&gt;

&lt;p&gt;How should we proceed?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function getCounter(): int
{
    return $this-&amp;gt;counter;
}

public function increaseCounter(): void
{
    $this-&amp;gt;counter++;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we can call each method when needed, without side effects on the getter.&lt;/p&gt;

&lt;p&gt;Let's look at another example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function createUser(array $userData): int {
    $id = $this-&amp;gt;userRepository-&amp;gt;create($userData);

    return $id;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one is very common when identifiers are generated in the database engine, for example with an autoincrement column. In this situation, we should use pre-generated ids instead; the most common are UUIDs. So we could replace the method like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public function createUser(UUID $id, array $userData): int {
    $this-&amp;gt;userRepository-&amp;gt;create($id, $userData);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, our method only changes something (adding a new user) but doesn’t answer anything.&lt;/p&gt;

&lt;h2&gt;CQRS&lt;/h2&gt;

&lt;p&gt;So, where is the evolutionary part of CQRS? CQRS goes further and aims to separate the read model from the write model. That means we use a different model for creating and updating information than for reading it.&lt;/p&gt;

&lt;p&gt;The reason to do this is that reading and writing often have different needs, so having separate models makes it easier to reason about the system. There are also scalability reasons: with separate models, even separate storage systems, it is easier to optimize each one properly; you can even give each model different hardware.&lt;/p&gt;

&lt;p&gt;Now I have to stop here and remind you that CQRS does not necessarily imply Event Sourcing. You can build a write model without persisting the domain events of aggregates to reconstitute them later. You can save the current state in relational tables in a boring database system like &lt;a href="https://www.mysql.com"&gt;MySQL&lt;/a&gt;, &lt;a href="https://mariadb.org"&gt;MariaDB&lt;/a&gt;, or &lt;a href="https://www.postgresql.org"&gt;PostgreSQL&lt;/a&gt; and still do CQRS.&lt;/p&gt;

&lt;p&gt;So, how should we optimize the write model?&lt;/p&gt;

&lt;h2&gt;The write model&lt;/h2&gt;

&lt;p&gt;When we think about the write model, the first thing we should be conscious of is that it needs to be reliable. This is where we do domain validations, transactionality, and any other measure needed to ensure the domain remains valid and consistent.&lt;/p&gt;

&lt;p&gt;Here we have the command concept. A command is an action or a group of actions applied to the write model; it needs to be transactional, so if more than one action is applied, all of them should succeed or fail together. And it needs to be valid, so it does not break any domain rule.&lt;/p&gt;

&lt;p&gt;Usually, these commands are executed through a command bus, so we have a command/command handler pair. Let's see an example in PHP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class CreateUserCommand {
    private UUID $id;
    private string $email;
    private string $password;

    public function __construct(UUID $id, string $email, string $password) {
        // Validation
        if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
            throw new \InvalidArgumentException("Email $email is not valid");
        }
        if(!strlen($password) &amp;gt;= 8) {
            throw new \InvalidArgumentException("Password should be longer or equal than 8 characters.");
        }

        $this-&amp;gt;id = $id;
        $this-&amp;gt;email = $email;
        $this-&amp;gt;password = $password;
    }
}

class CreateUserCommandHandler {
    private UserRepository $userRepository;

    public function __construct(UserRepository $userRepository) {
        $this-&amp;gt;userRepository = $userRepository;
    }

    public function handle(CreateUserCommand $command): void
    {
        // More validation
        if($this-&amp;gt;userRepository-&amp;gt;has($command-&amp;gt;id()) {
            throw new \InvalidArgumentException("A user with the provided id already exists.");
        }

        $this-&amp;gt;userRepository-&amp;gt;save($command-&amp;gt;id(), $command-&amp;gt;email, $command-&amp;gt;password);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this situation, the command would be run like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$this-&amp;gt;commandBus-&amp;gt;handle($command);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command bus can figure out which handler to execute using a naming convention or some other strategy, and it opens a transaction before handling and commits it on success afterwards. In case of error, it rolls back. So we get transactionality from the command bus, and validation from the command and handler.&lt;/p&gt;
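&lt;p&gt;As an illustration, a naive command bus could look like the sketch below; the class name is made up, and the \PDO connection just stands in for whatever transactional storage you use:&lt;/p&gt;

```php
class NamingConventionCommandBus {
    private \PDO $pdo; // stands in for your transactional connection

    public function __construct(\PDO $pdo) {
        $this-&amp;gt;pdo = $pdo;
    }

    public function handle(object $command): void {
        // Naming strategy: FooCommand is handled by FooCommandHandler.
        $handlerClass = get_class($command) . 'Handler';
        $handler = new $handlerClass();

        $this-&amp;gt;pdo-&amp;gt;beginTransaction();
        try {
            $handler-&amp;gt;handle($command);
            $this-&amp;gt;pdo-&amp;gt;commit();
        } catch (\Throwable $e) {
            $this-&amp;gt;pdo-&amp;gt;rollBack();
            throw $e;
        }
    }
}
```

&lt;p&gt;In a real bus, the handler would come from a container, so its dependencies (like the UserRepository above) are injected instead of constructed inline.&lt;/p&gt;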

&lt;p&gt;You may notice I left out the domain part. That is because this is just an example; we should have a User aggregate with all the needed validations inside (in addition to the application command part; both layers should be consistent).&lt;/p&gt;

&lt;p&gt;Now, where is this data stored? Well, it depends on your persistence decisions. If you are practicing &lt;a href="https://alexhernandez.info/blog/what-is-event-sourcing/"&gt;event sourcing&lt;/a&gt;, you should probably use a solution like &lt;a href="https://www.eventstore.com"&gt;EventStore&lt;/a&gt;; if you just store the last state of the aggregates in a relational way, a storage system like MySQL should work for you.&lt;/p&gt;

&lt;h2&gt;The read model&lt;/h2&gt;

&lt;p&gt;Now, let's think about the read model. The read model doesn't need all the validation and transactionality of the write one, because all of that was already done when writing; we only need to copy the information over to the read model.&lt;/p&gt;

&lt;p&gt;There are several approaches to copying the information. If you are doing event sourcing, you should probably listen to the events, synchronously or asynchronously, and update the read model from them. If not, you can do it in the command handler, using synchronous listeners, or even by sending messages to queues to do it asynchronously; the latter is less common, because if you are going that far, you might as well do event sourcing.&lt;/p&gt;

&lt;p&gt;And what should the structure of the information be? If you really want to optimize: exactly the shape you need to read, nothing else. That way you just query and show, without joins or transformations in the middle. This is usually done with a document-based database system, so...&lt;/p&gt;

&lt;p&gt;Where should I store the information? If you use a document-based storage system, MongoDB or Elasticsearch could be very good options. You can still use a relational database such as MySQL or PostgreSQL and use the JSON field types.&lt;/p&gt;

&lt;h2&gt;Advantages of using CQRS&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You can configure different hardware, and scalability in general, for each model, so you can put more power on writing or reading depending on the nature of the project.&lt;/li&gt;
&lt;li&gt;Separating concerns means each model is simpler than the two merged together: easier to reason about, easier to maintain.&lt;/li&gt;
&lt;li&gt;Security: only domain entities change the source of truth (the write model).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Applying CQRS is not easy. No good practice is, actually. But the advantages tend to outweigh the disadvantages.&lt;/p&gt;




&lt;p&gt;This post was originally published &lt;a href="https://alexhernandez.info/blog/what-is-the-cqrs-pattern/?utm_campagin=what-is-the-cqrs-pattern&amp;amp;utm_medium=dev.to"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cqrs</category>
      <category>architecture</category>
      <category>php</category>
    </item>
    <item>
      <title>What Is Event Sourcing?</title>
      <dc:creator>Alex Hernández</dc:creator>
      <pubDate>Sun, 19 Dec 2021 14:50:25 +0000</pubDate>
      <link>https://forem.com/stratdes/what-is-event-sourcing-4phf</link>
      <guid>https://forem.com/stratdes/what-is-event-sourcing-4phf</guid>
      <description>&lt;p&gt;Event Sourcing is a different persistence approach; instead of saving the last state of an object, in event sourcing, we persist all the domain events that have affected this object in its entire life. This is, actually, not an innovative nor revolutionary way to do, as banks, for instance, have been doing it from the beginning, conscious or not.&lt;/p&gt;

&lt;h2&gt;The banking example&lt;/h2&gt;

&lt;p&gt;When you open your bank's webpage and look at one of your accounts, you usually find a table with more or less the following columns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Date&lt;/li&gt;
&lt;li&gt;Concept&lt;/li&gt;
&lt;li&gt;Amount (which can be positive or negative)&lt;/li&gt;
&lt;li&gt;Total&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The interesting thing here is the last column, total. Isn't this a calculated column? Isn't this the sum of the different amounts from bottom to top?&lt;/p&gt;

&lt;p&gt;So, if you were going to model this problem, you would probably end up doing something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-model.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-model.webp" alt="Banking model"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you analyze it a little more, you conclude that transactions are things happening to the account, which is almost the definition of a domain event. The other thing you may conclude is that, having all the domain events related to an account, you can get the "total" value at any point in time.&lt;/p&gt;

&lt;p&gt;This is like saying that, with the &lt;a href="https://alexhernandez.info/glossary/domain-event/" rel="noopener noreferrer"&gt;domain event&lt;/a&gt; stream of an object, you have all the different states of that object. You can instantiate an object in any given state just by "sourcing" all the "events" involved in its history. This is event sourcing.&lt;/p&gt;

&lt;h2&gt;The event store&lt;/h2&gt;

&lt;p&gt;So event sourcing consists of storing all the domain events related to the different objects of our domain, and then using them to get to the different states of these objects as needed in our applications.&lt;/p&gt;

&lt;p&gt;So, the first question would be: where should we store the domain events of an object? And, by the way, we should start calling these generic "objects" entities or aggregates.&lt;/p&gt;

&lt;p&gt;The event store is the storage system we use to persist these events. It can be a table on a database like &lt;a href="https://www.mysql.com" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt;, or a specific product like &lt;a href="https://www.eventstore.com" rel="noopener noreferrer"&gt;EventStore&lt;/a&gt;. Anyway, it will have most of the following fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An identifier of the domain event, usually some flavor of UUID.&lt;/li&gt;
&lt;li&gt;An identifier of the stream, which is usually an &lt;a href="https://alexhernandez.info/glossary/entity/" rel="noopener noreferrer"&gt;entity&lt;/a&gt;/&lt;a href="https://alexhernandez.info/glossary/aggregate/" rel="noopener noreferrer"&gt;aggregate&lt;/a&gt; id, again usually a UUID.&lt;/li&gt;
&lt;li&gt;A version of the domain event: as code changes over time, and so do domain events, we store a version so we can handle each event accordingly. You will find more on upcasting below.&lt;/li&gt;
&lt;li&gt;Data: obviously, the domain event will include some kind of data; in the banking example, the concept and the amount. This is usually a serialized string, most of the time JSON.&lt;/li&gt;
&lt;li&gt;Date: the meaning of this field should be obvious, but I would add that, bearing in mind we could have millions of domain events, this date should have microsecond precision.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the key things to keep in mind is that the past can't change, and neither can the domain events, so you will not need updates or deletes in your storage system; no database locks either. The event store only needs to support append operations (that is, inserts) and should be fast at reading entries grouped by... yes, aggregates (streams, in domain event language).&lt;/p&gt;

&lt;p&gt;Taking this into account, a simple MySQL table with an index on the aggregate_id and created_at fields should be enough for now -wait a bit to read about vertical and horizontal optimization.&lt;/p&gt;
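&lt;p&gt;As a rough sketch of this idea -using an in-memory SQLite database in place of MySQL, and with illustrative names like event_store, append, and read_stream- an append-only store with that index could look like this:&lt;/p&gt;

```python
import json
import sqlite3
import uuid
from datetime import datetime, timezone

# In-memory SQLite stands in for the MySQL table described above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE event_store (
        event_id     TEXT PRIMARY KEY,  -- identifier of the domain event (UUID)
        aggregate_id TEXT NOT NULL,     -- identifier of the stream
        version      INTEGER NOT NULL,  -- version of the domain event
        data         TEXT NOT NULL,     -- serialized payload, JSON here
        created_at   TEXT NOT NULL      -- ISO-8601 date with microseconds
    )
""")
# Index to read entries grouped by aggregate, in chronological order.
conn.execute("CREATE INDEX idx_stream ON event_store (aggregate_id, created_at)")

def append(aggregate_id, version, data):
    """The only write operation: an insert. No updates, no deletes, no locks."""
    conn.execute(
        "INSERT INTO event_store VALUES (?, ?, ?, ?, ?)",
        (str(uuid.uuid4()), aggregate_id, version,
         json.dumps(data), datetime.now(timezone.utc).isoformat()),
    )

def read_stream(aggregate_id):
    """Read the whole stream of an aggregate in chronological order."""
    rows = conn.execute(
        "SELECT data FROM event_store"
        " WHERE aggregate_id = ? ORDER BY created_at, rowid",
        (aggregate_id,),
    )
    return [json.loads(row[0]) for row in rows]
```

&lt;p&gt;Note that the only write operation exposed is an insert; there is no way to update or delete an event.&lt;/p&gt;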

&lt;h2&gt;
  
  
  Reconstituting: how to get an entity from the domain event stream
&lt;/h2&gt;

&lt;p&gt;So we have all the domain events stored in our new event store. How should we reconstitute -or rehydrate- an entity from this event stream?&lt;/p&gt;

&lt;p&gt;Let's work again with the banking example. For the sake of it, we will use the following three domain events:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-events.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-events.webp" alt="Events"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get to the current state of the account, we only need to "apply" the three events in the order they were created -chronological order- using their data to create and modify the entity until we reach the end -the last event, which gives us the current state.&lt;/p&gt;

&lt;p&gt;So, let’s begin with the first one: AccountCreated. AccountCreated could be more complex in a real situation, but for this example, applying an AccountCreated event consists of creating an empty Account object and then setting its id and transactions from the AccountCreated event data -in this case, cb11f55c-6023-11ec-8607-0242ac130002 and an empty array, respectively. And we have the following account object:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-aggregate-state-1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-aggregate-state-1.webp" alt="Aggregate state 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, we apply the first transaction event, TransactionAdded. To apply this kind of event, we just add the id of the transaction to the transactions array of the account object. So, now, we have the following account state:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-aggregate-state-2.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-aggregate-state-2.webp" alt="Aggregate state 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We do the same thing with the other TransactionAdded event, ending up with the following account state:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-aggregate-state-3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-aggregate-state-3.webp" alt="Aggregate state 3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And this is the final state of the account.&lt;/p&gt;

&lt;p&gt;We have done this process from start to end, but we could have stopped at any point, so we can get any past state in the history of the aggregate.&lt;/p&gt;
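&lt;p&gt;The applying steps above can be sketched in a few lines of Python -the Account class and the event shapes are illustrative, not a definitive implementation:&lt;/p&gt;

```python
class Account:
    """Aggregate reconstituted by applying events in chronological order."""
    def __init__(self):
        self.id = None
        self.transactions = []

    def apply(self, event):
        # Dispatch on event type; each applier mutates state and nothing else.
        if event["type"] == "AccountCreated":
            self.id = event["id"]
            self.transactions = []
        elif event["type"] == "TransactionAdded":
            self.transactions.append(event["transaction_id"])

def reconstitute(events):
    """Rebuild the aggregate by applying its event stream, oldest first."""
    account = Account()
    for event in events:  # chronological order matters
        account.apply(event)
    return account

stream = [
    {"type": "AccountCreated", "id": "cb11f55c-6023-11ec-8607-0242ac130002"},
    {"type": "TransactionAdded", "transaction_id": "t-1"},
    {"type": "TransactionAdded", "transaction_id": "t-2"},
]
account = reconstitute(stream)
```

&lt;p&gt;And because reconstitution is just a fold over the stream, passing a prefix of the list -reconstitute(stream[:2]), for instance- gives you the state at that earlier point in history.&lt;/p&gt;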

&lt;p&gt;But… hey, how could I get the list of accounts with a total greater than $100, paginated and ordered by amount? Should I get all the accounts from the system, reconstitute them, filter them in application code, then paginate, and then…? Eh… no, of course not.&lt;/p&gt;

&lt;p&gt;The event store is our write model. Maybe we should talk about CQRS, right?&lt;/p&gt;

&lt;h2&gt;
  
  
  CQRS
&lt;/h2&gt;

&lt;p&gt;CQRS stands for Command Query Responsibility Segregation. In other words, we want to separate reads from writes. So, every software unit -a class, a function, a module, even a whole system- should either return a value or change the environment, but not both.&lt;/p&gt;
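&lt;p&gt;As a minimal sketch of this separation -with in-memory stand-ins for both models and hypothetical handler names- a command changes the environment and returns nothing, while a query returns a value and changes nothing:&lt;/p&gt;

```python
# In-memory stand-ins for the two models.
event_store = []   # append-only list of events (write model)
read_model = {}    # account_id -> document (read model)

def handle_add_transaction(account_id, amount):
    """Command: changes the environment (appends an event), returns nothing."""
    event_store.append(
        {"type": "TransactionAdded", "aggregate_id": account_id, "amount": amount}
    )

def get_account_total(account_id):
    """Query: returns a value from the read model, changes nothing."""
    return read_model.get(account_id, {}).get("total", 0)
```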

&lt;p&gt;Taking this to the extreme, we should have a read model and a write model, that is, one system to store data and another to read data from. In event sourcing, the write model is the event store. But, precisely because the event store is the write model, we should not read from it except when we need to update information. The rest of the time, we should use the read model. So, what is the read model?&lt;/p&gt;

&lt;h2&gt;
  
  
  The read model
&lt;/h2&gt;

&lt;p&gt;As we have a system used only for reading -obviously we will need to write to it to update the model itself, but that part doesn’t need to be optimized because we will probably do it &lt;a href="https://alexhernandez.info/glossary/asynchronous/" rel="noopener noreferrer"&gt;asynchronously&lt;/a&gt;-, we can optimize it for queries.&lt;/p&gt;

&lt;p&gt;There are many different options to build a read model, but a document database is the usual choice, as we can be more flexible, and we don’t need structured data -because we have the write model for that. &lt;a href="https://www.mongodb.com" rel="noopener noreferrer"&gt;MongoDB&lt;/a&gt;, for instance, is the option I use.&lt;/p&gt;

&lt;p&gt;And what should we store in the read model?&lt;/p&gt;

&lt;p&gt;Exactly the information we will need for our queries, exactly with the shape we will use!&lt;/p&gt;

&lt;p&gt;So, for the situation we talked about before -the list of accounts with a total greater than $100, paginated and ordered by amount- we could save a document for each account with the following structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-read-model.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falexhernandez.info%2Fassets%2Fimages%2Fwhat-is-event-sourcing-read-model.webp" alt="Read model"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So now, we can just query this document database and get what we want without needing to reconstitute any aggregate from the event store.&lt;/p&gt;

&lt;p&gt;But you are probably wondering how we maintain this read model database: how do we add or update items?&lt;/p&gt;

&lt;p&gt;It’s easy: by listening to domain events. Every time a domain event happens, we update the read model -or create new items in it. So, if we listen to an AccountCreated event, we add a new document to the read model; if we listen to TransactionAdded, we update the total and the last_movement_at fields. And so on.&lt;/p&gt;
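&lt;p&gt;A projector for these two events could be sketched as follows -the total and last_movement_at fields come from the read model above; the exact event shapes are assumptions of this sketch:&lt;/p&gt;

```python
read_model = {}  # account_id -> document, as in a MongoDB collection

def project(event):
    """Update the read model every time a domain event is listened to."""
    if event["type"] == "AccountCreated":
        # New aggregate: add a new document to the read model.
        read_model[event["aggregate_id"]] = {"total": 0, "last_movement_at": None}
    elif event["type"] == "TransactionAdded":
        # Existing aggregate: update exactly the fields our queries need.
        document = read_model[event["aggregate_id"]]
        document["total"] += event["amount"]
        document["last_movement_at"] = event["occurred_at"]
```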

&lt;p&gt;And these operations could -and probably should- be asynchronous, as long as you push domain events to a queue system like &lt;a href="https://cloud.google.com/pubsub" rel="noopener noreferrer"&gt;Google PubSub&lt;/a&gt;, &lt;a href="https://aws.amazon.com/es/sqs/" rel="noopener noreferrer"&gt;AWS SQS&lt;/a&gt;, or &lt;a href="https://www.rabbitmq.com" rel="noopener noreferrer"&gt;RabbitMQ&lt;/a&gt; and then consume them from a daemon. Bear in mind you will need to manage ordering and duplication.&lt;/p&gt;
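&lt;p&gt;One way to manage ordering and duplication on the consumer side -assuming each event carries a unique event_id and a per-stream sequence number, which is an assumption of this sketch- is to keep a set of processed ids and a buffer for out-of-order events:&lt;/p&gt;

```python
processed = set()   # event ids already applied (persist this in production)
pending = {}        # per-stream buffer for out-of-order events, keyed by seq
next_seq = {}       # next expected sequence number per stream

def consume(event, apply):
    """Apply each event exactly once and in per-stream order."""
    if event["event_id"] in processed:
        return  # duplicate delivery: ignore it
    stream = event["aggregate_id"]
    expected = next_seq.get(stream, 1)
    pending.setdefault(stream, {})[event["seq"]] = event
    # Drain the buffer while the next expected event is available.
    while expected in pending[stream]:
        ready = pending[stream].pop(expected)
        apply(ready)
        processed.add(ready["event_id"])
        expected += 1
    next_seq[stream] = expected
```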

&lt;p&gt;But hey, aren’t you being a little bit tricky here? What happens if we update an entity before the read model gets updated? How can we get the current values to update?&lt;/p&gt;

&lt;h2&gt;
  
  
  When to use the read model and when to use the write model
&lt;/h2&gt;

&lt;p&gt;TL;DR: Use the write model for updating or deleting operations and the read model for everything else.&lt;/p&gt;

&lt;p&gt;So, if your read model is updated asynchronously, you cannot trust it for any writing operation. The source of truth is the event store. So, when you need to update aggregates in your domain, you need to do your checks and data recovery against the write model. But, as you usually update only one aggregate at a time, this operation is cheap.&lt;/p&gt;

&lt;p&gt;What model should we use to show information to the user? The read one. Could we present outdated information in some situations if we trust the read model? Yes. We could. But bear in mind that this delay, this “eventual consistency”, should be a matter of milliseconds. This can be a problem for batch updating operations, but it doesn’t tend to be a problem when showing information to the user because, to begin with, user interfaces are usually slower than that.&lt;/p&gt;

&lt;p&gt;If you have a situation where you need some part of the read model to be fully trustworthy, then you can just update that part of the read model &lt;a href="https://alexhernandez.info/glossary/synchronous/" rel="noopener noreferrer"&gt;synchronously&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Upcasting
&lt;/h2&gt;

&lt;p&gt;Didn’t you mention something like upcasting? Yep. Upcasting is what we need to do when we change the structure of an aggregate or a domain event. As events are applied on reconstitution based on this structure, if we change it, we need some kind of... transformations.&lt;/p&gt;

&lt;p&gt;This is why we stored versions. When we receive an event with a version lower than the current one, we transform -upcast- this event to the next version, and we repeat this until we get to the last version. This way, the aggregate always uses the last version, and the current reconstitution code always works.&lt;/p&gt;

&lt;p&gt;But how do we do this upcasting? It depends.&lt;/p&gt;

&lt;p&gt;The most common situation is when we add a new attribute to the aggregate. In that case, upcasting consists of setting a default value for this attribute. When a field is removed, you can just ignore it, because the applier is not going to read it and nothing will happen. When you change the type of a field, you need to transform it to the new type. At the end of the day, it is a case-by-case transformation.&lt;/p&gt;
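&lt;p&gt;A chain of upcasters can be sketched like this -the currency and amount changes are hypothetical examples of an added attribute and a changed type, not taken from the banking example above:&lt;/p&gt;

```python
# One upcaster per version step; events are lifted version by version
# until they reach the current one.
def upcast_v1_to_v2(event):
    # v2 added a "currency" attribute: set a default value for old events.
    event = dict(event, version=2)
    event.setdefault("currency", "EUR")
    return event

def upcast_v2_to_v3(event):
    # v3 changed the type of "amount" from string to integer.
    event = dict(event, version=3)
    event["amount"] = int(event["amount"])
    return event

UPCASTERS = {1: upcast_v1_to_v2, 2: upcast_v2_to_v3}
CURRENT_VERSION = 3

def upcast(event):
    """Lift an event step by step until it matches the current version."""
    while event["version"] < CURRENT_VERSION:
        event = UPCASTERS[event["version"]](event)
    return event
```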

&lt;h2&gt;
  
  
  Event Sourcing for microservices architectures
&lt;/h2&gt;

&lt;p&gt;In microservices architectures, communication between services is key. And this kind of communication should happen, most of the time, through messaging, that is, sending a message from one microservice to a queue, where another one listens for it. This is asynchronous communication, and it is more reliable than the synchronous kind. So, if we need to send a message, what better message than a domain event? And if we send domain events, what could be better than a pattern that has domain events at its core?&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance issues on reconstitution
&lt;/h2&gt;

&lt;p&gt;As I have told you before, reconstituting an entity means applying all the domain events related to this entity in order.&lt;/p&gt;

&lt;p&gt;What happens when you have a huge number of events related to the same entity? You have a problem of vertical scalability.&lt;/p&gt;

&lt;p&gt;In order to fix that, we can use snapshots. A snapshot is a copy of the state of an entity at a given moment. So, if you have one million events for the same entity, but you take a snapshot every ten or twenty events, to reconstitute the entity you will only need the last snapshot and the events with a date greater than the snapshot date -nine events in the worst case, with snapshots every ten. Problem fixed.&lt;/p&gt;
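&lt;p&gt;A sketch of reconstitution with snapshots -the state shape and applier function here are illustrative:&lt;/p&gt;

```python
SNAPSHOT_EVERY = 10  # take a snapshot every ten events

def reconstitute_with_snapshot(snapshot, events_after_snapshot, apply):
    """Start from the latest snapshot instead of from the very first event."""
    state = dict(snapshot)  # copy of the state captured at snapshot time
    # With snapshots every SNAPSHOT_EVERY events, this loop runs at most
    # SNAPSHOT_EVERY - 1 times, no matter how long the full stream is.
    for event in events_after_snapshot:
        state = apply(state, event)
    return state
```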

&lt;p&gt;But hey, what happens if you have billions of events, no matter the entity, and the indexes just start to fail? Then, you have a problem of horizontal scalability.&lt;/p&gt;

&lt;p&gt;In this situation, you should split the event store table into a number of tables, using a heuristic on the ids, so when you write or read you know exactly which table to use. Then, instead of having one huge table, you will have a number of small tables, and the indexes will work as always. Fixed again.&lt;/p&gt;
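&lt;p&gt;One possible heuristic -hashing the aggregate id into a fixed number of tables; the table count and the naming scheme are illustrative choices- looks like this:&lt;/p&gt;

```python
import hashlib

NUM_TABLES = 16  # number of event store tables

def table_for(aggregate_id):
    """Deterministic heuristic: the same aggregate always maps to the same
    table, so both writes and reads know exactly where to go."""
    digest = hashlib.sha256(aggregate_id.encode("utf-8")).hexdigest()
    return "event_store_{}".format(int(digest, 16) % NUM_TABLES)
```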

&lt;h2&gt;
  
  
  Backups?
&lt;/h2&gt;

&lt;p&gt;In each and every system we need to do backups. That is not the question. The question is what should be backed up. And for event sourcing you have two options: back up everything, or back up only the event store. As the read model is a consequence of the write model, you could recover it just by reapplying all the events in order.&lt;/p&gt;

&lt;p&gt;The problem with this approach is that this recovery could be slower than a regular backup.&lt;/p&gt;

&lt;p&gt;Anyway, this property of event sourcing can also be used to fix problems. If you made a mistake and now the read model has wrong information, you can just create fixing events, and the read model will be updated accordingly. This fixing strategy also has the advantage of being explicit, as you can see the fix in the event store.&lt;/p&gt;

&lt;h2&gt;
  
  
  Full traceability of everything
&lt;/h2&gt;

&lt;p&gt;I imagine it is obvious by now that one of the advantages of using event sourcing is that you have the traceability of everything happening in the system. You have the full history.&lt;/p&gt;

&lt;p&gt;This is especially valuable in a world where data is so important. Understanding how we got to a point is easy when you have the whole event stream of an aggregate -of every aggregate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Event sourcing may be hard to understand when you start. But I really think it’s a very natural way to think once you are used to it.&lt;/p&gt;

&lt;p&gt;It also has challenges, especially when managing the read model -there are situations where you need very resource-consuming tasks- or managing a huge event store -in that situation you’ll probably appreciate a solution like EventStore.&lt;/p&gt;

&lt;p&gt;But it also has very important advantages, like having the full history of the system, being able to rebuild the read model from the event store at any time, the performance gained by having dedicated models for reading and writing, and the natural way it integrates with the usual communication patterns of microservices.&lt;/p&gt;

&lt;p&gt;The key here, as always, is to know in which kind of projects the advantages outweigh the disadvantages!&lt;/p&gt;




&lt;p&gt;Article originally published on &lt;a href="https://alexhernandez.info/blog/what-is-event-sourcing/?utm_source=dev.to&amp;amp;utm_campaign=what-is-event-sourcing&amp;amp;utm_content=article&amp;amp;utm_medium=referral"&gt;alexhernandez.info&lt;/a&gt;&lt;/p&gt;

</description>
      <category>eventsourcing</category>
      <category>microservices</category>
      <category>architecture</category>
      <category>cqrs</category>
    </item>
  </channel>
</rss>
