<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ramon Lima</title>
    <description>The latest articles on Forem by Ramon Lima (@ramonck).</description>
    <link>https://forem.com/ramonck</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F443650%2F2178d23e-3678-4e14-ae70-a24f21f5c1df.jpeg</url>
      <title>Forem: Ramon Lima</title>
      <link>https://forem.com/ramonck</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ramonck"/>
    <language>en</language>
    <item>
      <title>The Low-Code Value</title>
      <dc:creator>Ramon Lima</dc:creator>
      <pubDate>Sun, 25 Oct 2020 02:01:48 +0000</pubDate>
      <link>https://forem.com/ramonck/the-low-code-value-41o6</link>
      <guid>https://forem.com/ramonck/the-low-code-value-41o6</guid>
<description>&lt;p&gt;The biggest challenge today for anyone interested in Low-Code and the Power Platform is pitching it internally, so I have put together the following financial framing to help promote more investment.&lt;br&gt;
From a technical perspective Low-Code is great and awesome, but how can we pitch and work this out internally? How can I tell my management that Low-Code is worth investing in now, and worth continuing to invest in as it grows?&lt;br&gt;
The key points are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Productivity&lt;/li&gt;
&lt;li&gt;Cost reduction&lt;/li&gt;
&lt;li&gt;Re-use&lt;/li&gt;
&lt;li&gt;Anyone becomes a developer&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Productivity
&lt;/h2&gt;

&lt;p&gt;Just saying Low-Code is productive is not enough; we have to prove it every day. The other day I heard a nice comparison about how fast I deployed a solution: 5 hours against 5 months. Even that doesn't tell the full story, because the 5-month effort wasn't a single developer. The application was a chatbot: I built a Power Virtual Agents bot in a couple of hours, while a team worked for 5 months and still didn't get it as good as the 5-hour version. This is huge, and it's a straightforward comparison of 2 developers, 1 project manager, and 1 architect against 1 low-code developer. In this specific case I'm being conservative; if we put everything into the calculator it will certainly be over 5x, and if we include Power Automate and everything else we can comfortably keep it in the 5x range.&lt;/p&gt;

&lt;p&gt;Another thing we have to take into account in terms of speed is the ability to publish any change right away. This is crucial for many business areas and cases, and that speed is built into Low-Code. On regular coding platforms you have to push to GitHub, export or run an action, get a deployment window, and then deploy. It will never be as fast as clicking the publish button.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Reduction
&lt;/h2&gt;

&lt;p&gt;One person developing is often much cheaper than a group of developers building a solution for your business case. Take into account that you would otherwise need infrastructure for a SQL database and everything else, while connectivity to existing databases or web services is often inexpensive when you need it: how much will you save in development work? You avoid managing another project, another contract, and provisioning new infrastructure. This needs to go into the calculator. Using SaaS has huge benefits and they need to be accounted for; structuring it around your business to provide maximum business value at reduced cost is the optimal goal in the end.&lt;/p&gt;

&lt;h2&gt;
  
  
  Re-use
&lt;/h2&gt;

&lt;p&gt;In O365 we're always re-using the same infrastructure and the same environment rather than provisioning something new. This needs to be part of the calculation: if we keep re-using it and making good use of it, we're getting ROI and value from it, from beginning to end. End-to-end benefits. For a new project that requires provisioning new hardware, what is the long-term TCO versus the SaaS? Low-Code consistently comes out as the cheaper strategy, and no matter how you frame the equation, re-use is a huge part of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Anyone becomes a developer
&lt;/h2&gt;

&lt;p&gt;Think about the business areas: anyone can become a developer for their own business processes. That means you don't have to hire new developers; you're leveraging the Citizen Developers within your organization, getting the person who defines the process to actually build the process themselves, while benefiting from lower labor costs. &lt;br&gt;
It becomes a win that you don't have to hire new developers; you can develop the talent internally within your organization.&lt;br&gt;
You can build a chain in your organization where developers create components for your "Citizen Developers", and the "Citizen Developers" then spend less time because they have better building blocks to work with inside the O365 eco-system. Developers do what developers need to do, and Citizen Developers become more productive every day in a combined chain.&lt;br&gt;
There will still be cases complex enough to need IT to develop them, but the majority of cases can certainly be handled by the "Citizen Developers".&lt;/p&gt;

&lt;h2&gt;
  
  
  Thanks
&lt;/h2&gt;

&lt;p&gt;Thank you for reading this article. If you feel something is missing, please let me know.&lt;/p&gt;

</description>
      <category>powerplatform</category>
    </item>
    <item>
      <title>VSCode with WSL2 - Solve the not connecting bug.</title>
      <dc:creator>Ramon Lima</dc:creator>
      <pubDate>Sun, 06 Sep 2020 14:50:31 +0000</pubDate>
      <link>https://forem.com/ramonck/vscode-with-wsl2-solve-the-not-connecting-bug-403p</link>
      <guid>https://forem.com/ramonck/vscode-with-wsl2-solve-the-not-connecting-bug-403p</guid>
      <description>&lt;h2&gt;
  
  
  The Issue
&lt;/h2&gt;

&lt;p&gt;I don't know if you have ever had this issue (hopefully not): you try to connect to your WSL2 instance and VSCode says it's not able to connect. Some people on the internet will tell you to restart, and that might work for a while, but the same issue keeps coming back, and restarting forever is not a fix.&lt;/p&gt;

&lt;h2&gt;
  
  
  WSL2 Cleanup
&lt;/h2&gt;

&lt;p&gt;Cleanup all the current connections within WSL2.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the WSL2 distro you're trying to connect to and go into the following folder: &lt;code&gt;cd ~/.vscode-server/bin/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Remove all the folders there: &lt;code&gt;rm -rf ./*&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
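The two steps above can be sketched as a single helper. This is a minimal POSIX shell sketch, with the directory path as a parameter (inside the distro it would be `~/.vscode-server/bin`):

```shell
# Minimal sketch of the cleanup above: wipe every cached VS Code server build
# so the next Remote - WSL connection downloads a fresh one.
cleanup_vscode_server() {
  dir="$1"                        # e.g. "$HOME/.vscode-server/bin"
  [ -d "$dir" ] || return 0       # nothing to clean
  rm -rf "$dir"/* 2>/dev/null     # remove all cached server builds
  return 0
}
```

Run it as `cleanup_vscode_server ~/.vscode-server/bin` inside the distro, then reconnect from VSCode.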

&lt;h2&gt;
  
  
  Windows VSCode Cleanup
&lt;/h2&gt;

&lt;p&gt;The newest versions of VSCode do this cleanup automatically by default, but it's always good to double-check just in case for conflicting versions of "ms-vscode-remote.remote-wsl-*". If there is more than one version, delete the older folders, or remove Remote - WSL and reinstall it from the Extensions tab in VSCode.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open CMD: &lt;code&gt;WIN + R&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Type in: &lt;code&gt;cmd&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Go into the vscode extensions dir: &lt;code&gt;cd %userprofile%\.vscode\extensions&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;See the folders: &lt;code&gt;dir&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
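If you prefer to script the check, here is a sketch (POSIX shell with GNU coreutils, e.g. run from the WSL side where the Windows profile is mounted; the helper name is mine) that lists every ms-vscode-remote.remote-wsl-* folder except the newest, i.e. the older versions that are candidates for removal:

```shell
# List the older ms-vscode-remote.remote-wsl-* extension folders (all but the
# highest version). Assumes GNU coreutils for `sort -V` and `head -n -1`.
list_old_wsl_exts() {
  ls -d "$1"/ms-vscode-remote.remote-wsl-* 2>/dev/null | sort -V | head -n -1
}
```

Point it at the extensions directory from step 3 and review the output before deleting anything.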

&lt;h2&gt;
  
  
  Thanks
&lt;/h2&gt;

&lt;p&gt;Thank you! Make sure to give this article a like if it helped you or saved your day with VSCode and WSL2.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Workflow on Docker Swarm</title>
      <dc:creator>Ramon Lima</dc:creator>
      <pubDate>Sun, 30 Aug 2020 02:18:05 +0000</pubDate>
      <link>https://forem.com/ramonck/os-workflow-on-swarm-4i1e</link>
      <guid>https://forem.com/ramonck/os-workflow-on-swarm-4i1e</guid>
<description>&lt;p&gt;With the cluster up and running, the first thing that comes to mind is: what's next? What can I do with a cluster and how can I work with it? My first approach here is low-code, high productivity, so I don't have to spend too much time and effort doing anything in the new setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Low-Code = Visual Development
&lt;/h2&gt;

&lt;p&gt;I work with low-code and I love the concept because of the productivity, and believe it or not the first thing I searched for was: is there an open-source workflow engine, or something like it, that I could stick into the cluster I just built and start working with? How about an open-source alternative to Power Automate, or something close to it? Initially I thought I was dreaming too much, but reality proved me wrong: there is such a thing, which I cover below.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Options
&lt;/h2&gt;

&lt;p&gt;Initially I saw a couple of alternatives and liked them, but not as much as my final choice. In order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;N8n - &lt;a href="https://n8n.io/"&gt;https://n8n.io/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Project Flogo - &lt;a href="https://www.flogo.io/"&gt;https://www.flogo.io/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Apache Nifi - &lt;a href="https://nifi.apache.org/"&gt;https://nifi.apache.org/&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Apache NiFi has a nice approach, but the interface felt dated to me. I'm used to Nintex Workflow and Power Automate, which look prettier; it should be about usability, but the UI makes a difference, so I put NiFi on the back burner, because in my head it was somehow using older technology.&lt;/p&gt;

&lt;p&gt;Project Flogo by TIBCO is prettier than NiFi, but the drawback is that TIBCO didn't get much traction around the project, and while it's open source, if you want to customize anything you have to do it in Go (or just write in Go instead of using the UI). It still seemed like a better alternative than NiFi, so I left it in 2nd place.&lt;/p&gt;

&lt;p&gt;After looking at those two projects it was time to implement, so I searched a bit more and found n8n, a Berlin-based startup with a fair-code license approach. The huge difference is that visually it looks a lot like the commercial products (Nintex Workflow and Power Automate), and it combines that with the ability to build your own nodes from code, plus a bunch of nodes out of the box, which is just perfect. Not only that, they also have a workflow marketplace, so you can grab a workflow for your needs and start working with it in your own environment. That settled it: n8n was my choice, and I will start playing around with it in the coming days and weeks. I already found the most important node for me, "Execute Command", which runs shell on the server; that means we can automate docker commands to spin up stacks, services, or whatever else I can think of.&lt;/p&gt;
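As a taste of the automation idea above, here is a sketch of the kind of docker command an "Execute Command" node could run against the swarm. The stack name and compose path are illustrative, not from a real setup, and the tiny helper only builds the command string so the sketch can be checked without a live swarm:

```shell
# Build the "docker stack deploy" command an Execute Command node might run.
# Both arguments (compose file path, stack name) are illustrative.
deploy_cmd() {
  printf 'docker stack deploy -c %s %s' "$1" "$2"
}

# Example: the command that would spin up a hypothetical "demo" stack.
deploy_cmd /srv/stacks/demo/docker-compose.yml demo
# Other node commands could be "docker service scale demo_web=3" or "docker stack rm demo".
```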

&lt;h2&gt;
  
  
  The setup
&lt;/h2&gt;

&lt;p&gt;You can set it up using the following docker-compose in your Swarm or your Kubernetes; you are in charge.&lt;/p&gt;

&lt;p&gt;If you have Swarmpit you can just copy and paste this, hit deploy, and you're done for the day.&lt;/p&gt;

&lt;p&gt;Docker stack deploy: &lt;a href="https://docs.docker.com/engine/reference/commandline/stack_deploy/"&gt;https://docs.docker.com/engine/reference/commandline/stack_deploy/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker stack deploy -c docker-compose.yml n8n&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Deploy the following docker-compose.yml to your environment; please change the values of ADMINUSER and ADMINPASSWORD.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.3'
services:
  n8n:
    image: n8nio/n8n:latest
    command:
     - /bin/sh
     - -c
     - sleep 5; n8n start
    environment:
      N8N_BASIC_AUTH_ACTIVE: 'true'
      N8N_BASIC_AUTH_PASSWORD: ADMINPASSWORD
      N8N_BASIC_AUTH_USER: ADMINUSER
    ports:
     - 5678:5678
    volumes:
     - n8n-web:/root/.n8n
    networks:
     - default
    logging:
      driver: json-file
networks:
  default:
    driver: overlay
volumes:
  n8n-web:
    driver: local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;OBS: Note that I didn't put a database in this docker-compose setup. The project says it works best with Postgres, but I didn't like the current shape of that DB setup, so I'm talking with them in the community channel to see if we can get something better going for Postgres. Without Postgres it just runs on SQLite out of the box.&lt;/p&gt;

&lt;h1&gt;
  
  
  Open ports
&lt;/h1&gt;

&lt;p&gt;TCP: 5678&lt;br&gt;
OBS: For this compose setup, if you have virtual hosts running you can set up a subdomain instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I have seen of n8n
&lt;/h2&gt;

&lt;p&gt;It looks like an active project: it has been highly upvoted on Product Hunt and it's actively hiring in Berlin for more than one position. They're also going to launch an n8n cloud pretty soon, the paid product based on the project. It seems this project has a great future; my best wishes and blessings to these guys, a genius way to enter the workflow market!&lt;/p&gt;

&lt;p&gt;Watch out, Nintex Workflow and Power Automate: here comes n8n.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>swarm</category>
      <category>workflow</category>
    </item>
    <item>
      <title>Free docker cluster mesh with swarm and GCP</title>
      <dc:creator>Ramon Lima</dc:creator>
      <pubDate>Fri, 28 Aug 2020 00:38:18 +0000</pubDate>
      <link>https://forem.com/ramonck/free-docker-cluster-mesh-with-swarm-and-gcp-3e55</link>
      <guid>https://forem.com/ramonck/free-docker-cluster-mesh-with-swarm-and-gcp-3e55</guid>
<description>&lt;p&gt;Imagine you could have your own free docker cluster right now. You'd probably say I'm insane or crazy, or that I'm trying to sell you some crazy deal.&lt;br&gt;
Below I will walk with you through every little step you need to get that free docker cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cluster
&lt;/h2&gt;

&lt;p&gt;It's going to be composed of the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GCP free-tier nodes.&lt;/li&gt;
&lt;li&gt;Docker Swarm only (no Kubernetes).&lt;/li&gt;
&lt;li&gt;Swarmpit as a GUI (Totally optional but useful).&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Mesh cluster
&lt;/h2&gt;

&lt;p&gt;Mesh means we can put any provider's server into this cluster, so this setup basically lets you add anything to your cluster, from old hardware to top-notch nodes. It's all up to you and how much you're willing to spend; following this tutorial everything stays free even after the first year, so don't worry about it.&lt;/p&gt;

&lt;p&gt;With some big providers now charging around $0.10 an hour just for the Kubernetes control plane, for some people that money could instead be another node in a cluster with the setup below, so pay attention.&lt;/p&gt;

&lt;p&gt;In this setup we're using our own management interface. It's not as hyped or full-featured as what some providers offer, but for many setups it's more than enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;What will this setup look like? Let's start with the nodes. Google gives you a free "life-time" tier; I guess that lasts until a lot of folks start using it (after this post, haha).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up two or more accounts on GCP: &lt;a href="https://cloud.google.com/"&gt;https://cloud.google.com/&lt;/a&gt; (you will need at least yourself and your dad or your wife: two credit cards to create the accounts. After you see this working you will get your whole family creating GCP accounts and side-loading f1-micros in, hehe.)&lt;/li&gt;
&lt;li&gt;On each account create the free-tier server Google provides, which is: Series N1 -&amp;gt; Type: f1-micro (1 CPU, 614 MB).&lt;/li&gt;
&lt;li&gt;Change the disk image to Ubuntu -&amp;gt; Ubuntu 20.04 LTS Minimal&lt;/li&gt;
&lt;li&gt;Make sure to enable HTTP and HTTPS traffic on the network. It's good to have this enabled out of the box so we don't need an extra configuration step for these ports later.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you've done this twice already, you're off to a great start. Now we get to the fun technical part.&lt;/p&gt;

&lt;h2&gt;
  
  
  Memory
&lt;/h2&gt;

&lt;p&gt;Think about this: with 614 MB, the Minimal OS image leaves only around 520 MB available, so we'll be working under serious limitations. When people come up to you and say the word "Kubernetes", that alone needs at least 4 GB of RAM to even start the conversation. For folks who already have a Kubernetes setup, optimizing it with the information below can be a great strategy, and for some companies a great way to save money in this pandemic situation. So let's stop talking and continue.&lt;/p&gt;

&lt;p&gt;Docker Swarm, on the other hand, consumes around 65 MB of RAM on its own, plus a small amount for the containerd.io that supports it, so we can round the full operation up to 70 MB. We don't need anything beyond Docker Swarm to have a fully operational cluster without depending on any provider or anything in that fashion. But as a bonus I will also show how to install Swarmpit, which gives you a great UI for setup and for studying Docker Swarm further, which is where I am at this point in time. I have to tell you, I love the low footprint of Swarm; it's crazy how performant it is, especially compared to Kubernetes.&lt;/p&gt;

&lt;p&gt;OBS: I'm not here to say which one is best or which one wins, just trying to give you insights and information. No sides taken, but I had to choose Swarm because of the resource limitations chosen for this setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Swap
&lt;/h2&gt;

&lt;p&gt;Swap to the rescue: because of the memory restrictions of this setup, we will need to configure more swap than usually recommended. It's not great to have a lot of stuff running in swap, for speed reasons, especially since this free server doesn't give you a fast disk either, but we're after the free setup, so we don't really care at this point.&lt;/p&gt;

&lt;p&gt;On any f1-micro server you bring up, do this swap config so you don't run into issues when running stuff on it. You would usually set up swap anyway, but here we go:&lt;/p&gt;

&lt;p&gt;We will need nano installed on the nodes so please make sure to install nano: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update the references of APT &lt;code&gt;sudo apt update&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Install Nano &lt;code&gt;sudo apt install nano&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;OBS: You can use apt or apt-get; either is fine, you're in charge.&lt;/p&gt;

&lt;p&gt;Assuming this is a fresh start, I'm not checking whether you already have swap, and remember you can change any of these numbers if you feel strongly about it; I'm just suggesting what I did.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Give 2 GB of Swap: &lt;code&gt;sudo fallocate -l 2G /swapfile&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Make swap file only accessible to root: &lt;code&gt;sudo chmod 600 /swapfile&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Mark the file as a swap file: &lt;code&gt;sudo mkswap /swapfile&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Have Ubuntu start using the new swap: &lt;code&gt;sudo swapon /swapfile&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Survive a reboot by saving to the /etc/fstab file: &lt;code&gt;echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Change the swappiness value: &lt;code&gt;sudo sysctl vm.swappiness=10&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Change the vfs_cache_pressure value: &lt;code&gt;sudo sysctl vm.vfs_cache_pressure=50&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Persist on reboot, edit the sysctl: &lt;code&gt;sudo nano /etc/sysctl.conf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add the following to the bottom of the file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vm.swappiness=10
vm.vfs_cache_pressure=50
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;To save and exit nano, press: &lt;code&gt;CTRL + X&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Then type: &lt;code&gt;y&lt;/code&gt; and then &lt;code&gt;ENTER&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
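The swap steps above can be collected into one unattended script. This is just a sketch: tee appends to /etc/sysctl.conf in place of the nano edit, and the 2G size and sysctl values are the ones from the steps:

```shell
# One-shot version of the swap steps above (needs sudo rights on the node).
setup_swap() {
  size="${1:-2G}"                         # swap size, 2G as in the steps above
  sudo fallocate -l "$size" /swapfile     # allocate the swap file
  sudo chmod 600 /swapfile                # make it accessible only to root
  sudo mkswap /swapfile                   # mark the file as swap
  sudo swapon /swapfile                   # start using it now
  echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab            # survive a reboot
  printf 'vm.swappiness=10\nvm.vfs_cache_pressure=50\n' | sudo tee -a /etc/sysctl.conf
  sudo sysctl vm.swappiness=10 vm.vfs_cache_pressure=50                 # apply immediately
}
```

Call it as `setup_swap` (or `setup_swap 1G` for a different size) on each node.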

&lt;h2&gt;
  
  
  Open ports
&lt;/h2&gt;

&lt;p&gt;If you're doing this in GCP, click on Firewall (VPC Network), then click Create Firewall Rule.&lt;br&gt;
For our case just set the destination to all servers, then make sure to open the ports for Swarm and Swarmpit (if you want the GUI for Swarm; if not, just open the Swarm ports).&lt;/p&gt;

&lt;h3&gt;
  
  
  Swarm
&lt;/h3&gt;

&lt;p&gt;TCP: 2377,7946&lt;br&gt;
UDP: 7946,4789&lt;/p&gt;

&lt;h3&gt;
  
  
  Swarmpit
&lt;/h3&gt;

&lt;p&gt;TCP: 888 (or whatever other port you want to configure; you can also configure a virtual path, it's up to you)&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker and Swarm
&lt;/h2&gt;

&lt;p&gt;Now that we're done with swap on our nodes and with opening the ports, we'll start the Docker and Swarm setup, which is the product we want to build our cluster with.&lt;/p&gt;

&lt;p&gt;On all nodes do the docker setup (if you prefer, you can follow the instructions linked in the Resources section instead):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install the pre-requisites for docker: &lt;code&gt;sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the Apt key: &lt;code&gt;curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the Apt repository: &lt;code&gt;sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update Apt: &lt;code&gt;sudo apt update&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install docker and containerd.io: &lt;code&gt;sudo apt-get install docker-ce docker-ce-cli containerd.io&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Let your user run docker without root: &lt;code&gt;sudo usermod -aG docker YOURUBUNTUUSER&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
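The six steps above can also be strung together as one script for convenience. This is a sketch using the same commands (apt-key and add-apt-repository are the methods from the steps; newer Docker documentation has since moved to keyring files):

```shell
# Install Docker CE on Ubuntu, following the six steps above in order.
install_docker() {
  set -e
  sudo apt-get update
  sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  sudo apt-get update
  sudo apt-get install -y docker-ce docker-ce-cli containerd.io
  sudo usermod -aG docker "$USER"   # run docker without sudo (re-login required)
}
```

Run `install_docker` once per node, then log out and back in so the group change takes effect.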

&lt;p&gt;Now on the first master node run the following command to start the swarm: &lt;code&gt;docker swarm init&lt;/code&gt;. You'll get a string showing how to add worker nodes to your master; copy and save that string somewhere so you don't lose it. It looks like the following: &lt;code&gt;docker swarm join --token BBBBB-1-3kre8r9q3120o46elb9aaaaav6ohrxxx9byy6bbb99g2jffow1-05y7ck1r4x0qvf6smli06175l 99.999.110.96:2377&lt;/code&gt;&lt;br&gt;
Make sure to replace the IP at the end with your external IP, because it will generate the string with your internal IP.&lt;/p&gt;

&lt;p&gt;On your worker nodes run this string (with your external IP) and you will be adding nodes to your swarm. You can have multiple master nodes as well as multiple worker nodes in your setup, and docker swarm does a fantastic job of scheduling work across them.&lt;/p&gt;

&lt;p&gt;Swarm also has failover out of the box. I haven't gotten there yet; I just got this setup going and wanted to share it, so I will keep exploring and share my findings in another post.&lt;/p&gt;

&lt;p&gt;Run the following command to verify that you see the other nodes from the master node: &lt;code&gt;docker node ls&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Swarmpit (Optional)
&lt;/h2&gt;

&lt;p&gt;Some people don't like GUIs at all; I'm all in for the visuals, the dashboards, and every bit of information you can give me. I searched for a good UI for Swarm and found this great project called Swarmpit. It's beautiful, it helps me understand the main Swarm concepts, and I can use it to manage my swarm, so huge thanks for this work.&lt;/p&gt;

&lt;p&gt;Within the master node please do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a directory for the swarmpit docker compose file: &lt;code&gt;mkdir swarmpit&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create the compose file: &lt;code&gt;cd swarmpit &amp;amp;&amp;amp; nano docker-compose.yml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Put the following content into the file.
OBS: Note that the current version of Swarmpit is 1.9; check the Swarmpit website in the Credits section for the latest version and use that version in the docker-compose.yml instead.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'
services:
 app:
   image: swarmpit/swarmpit:1.9
   environment:
     SWARMPIT_DB: http://db:5984
   volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
   ports:
     - 888:8080
   networks:
     - net
   deploy:
     placement:
       constraints:
         - node.role == manager
 db:
   image: klaemo/couchdb:2.0.0
   volumes:
     - db-data:/opt/couchdb/data
   networks:
     - net
networks:
   net:
     driver: overlay
volumes:
   db-data:
     driver: local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Go back to the folder you were in: &lt;code&gt;cd ..&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Setup a stack for Swarmpit: &lt;code&gt;docker stack deploy -c swarmpit/docker-compose.yml swarmpit&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;To see if the swarmpit is up you can check with &lt;code&gt;docker service ls&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Now it's time to test and see it working: &lt;code&gt;curl -v http://127.0.0.1:888&lt;/code&gt;
OBS: If you see HTML tags, it's up and running.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now you can access Swarmpit at the external IP of your master node (or any of your masters, if you have multiple): &lt;a href="http://EXTERNALIPMASTERNODE:888"&gt;http://EXTERNALIPMASTERNODE:888&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It will ask you for a login and password; the defaults are admin/admin. Please change your password after first login.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;You can think about a load-balancing scenario and also Prometheus for monitoring, but I will leave those to you as homework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thanks
&lt;/h2&gt;

&lt;p&gt;Thank you god, mom, wife, son, the company I work for, friends, these companies that made this possible, all these talents working on the Docker eco-system and special thanks to Google for the f1-micro.&lt;/p&gt;

&lt;h2&gt;
  
  
  Credits / Resources:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/engine/install/ubuntu/"&gt;https://docs.docker.com/engine/install/ubuntu/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/free/docs/gcp-free-tier"&gt;https://cloud.google.com/free/docs/gcp-free-tier&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://swarmpit.io/"&gt;https://swarmpit.io/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devanswers.co/creating-swap-space-ubuntu-18-04/"&gt;https://devanswers.co/creating-swap-space-ubuntu-18-04/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ralph.blog.imixs.com/2017/11/27/lightweight-docker-swarm-environment/"&gt;https://ralph.blog.imixs.com/2017/11/27/lightweight-docker-swarm-environment/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>docker</category>
      <category>cluster</category>
      <category>googlecloud</category>
      <category>swarm</category>
    </item>
  </channel>
</rss>
