<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sergey</title>
    <description>The latest articles on Forem by Sergey (@sshnaidm).</description>
    <link>https://forem.com/sshnaidm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F169969%2Fd5ac3853-2856-4415-b9f7-ba4a3dc84fa7.png</url>
      <title>Forem: Sergey</title>
      <link>https://forem.com/sshnaidm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sshnaidm"/>
    <language>en</language>
    <item>
      <title>Transible — represents cloud configuration as Ansible playbooks</title>
      <dc:creator>Sergey</dc:creator>
      <pubDate>Mon, 02 Dec 2019 16:35:50 +0000</pubDate>
      <link>https://forem.com/sshnaidm/transible-represents-cloud-configuration-as-ansible-playbooks-42dm</link>
      <guid>https://forem.com/sshnaidm/transible-represents-cloud-configuration-as-ansible-playbooks-42dm</guid>
      <description>&lt;h1&gt;
  
  
  What is Transible
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Short description
&lt;/h2&gt;

&lt;p&gt;This tool takes your current cloud configuration and represents it as Ansible playbooks. Its repository on GitHub is: &lt;a href="https://github.com/sshnaidm/transible"&gt;https://github.com/sshnaidm/transible&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  More details
&lt;/h2&gt;

&lt;p&gt;It takes servers, security rules, images, volumes and everything else your cloud includes, and defines their configuration as tasks in Ansible playbooks that are ready for deployment. By running these playbooks you can deploy or redeploy your current cloud.&lt;br&gt;
Simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/sshnaidm/transible
&lt;span class="nb"&gt;cd &lt;/span&gt;transible
./transible.py &lt;span class="nt"&gt;--os-cloud&lt;/span&gt; your_cloud_name
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Why would anybody redeploy their existing cloud?
&lt;/h3&gt;

&lt;h2&gt;
  
  
  Use-cases of Transible
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Freezing cloud config for further IaC management
&lt;/h3&gt;

&lt;p&gt;There are cases when you have a configured cloud with a lot of servers, networks,&lt;br&gt;
images, etc., but it isn't properly managed according to&lt;br&gt;
Infrastructure as Code (IaC) principles, and cloud maintenance turns into a nightmare.&lt;br&gt;
Hundreds of legacy servers with unknown configurations, complex network setups that nobody remembers who created or for what purpose. Just collecting information from the cloud can be a complex task when it's under load and things change quickly. Removing legacy resources that look unused can break other people's work, and it's almost impossible to roll back when it turns out that a deleted server was in use and very important to another team.&lt;br&gt;
It's pretty difficult, and sometimes impossible, to manually create IaC configs for existing infrastructure that can include hundreds of servers or more. &lt;strong&gt;Transible&lt;/strong&gt; comes to solve this problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transible&lt;/strong&gt; can take your cloud and convert everything in it to Ansible playbooks. That way you have all your servers, networks, images and other resources in one place, ready for maintenance and management. Since your configuration now lives in your deployment tool, all subsequent changes can come via Git, keeping to IaC principles. By running the Ansible playbooks you can add, remove or change your cloud config.&lt;/p&gt;
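&lt;p&gt;Once the playbooks are generated, freezing them under Git is just a matter of committing them. A minimal sketch (the &lt;code&gt;playbooks/&lt;/code&gt; and &lt;code&gt;vars/&lt;/code&gt; layout and file names here are illustrative, not Transible's exact output):&lt;br&gt;
&lt;/p&gt;

```shell
# Illustrative only: directory layout and file names are hypothetical.
mkdir -p cloud-config/playbooks cloud-config/vars
cd cloud-config
git init -q .

# Stand-in for a generated vars file describing cloud resources.
cat > vars/images.yml <<'EOF'
images:
  - name: base-image
    min_disk: 8
EOF

git add -A
git -c user.name=ops -c user.email=ops@example.com \
    commit -qm "Freeze current cloud configuration"
git log --oneline
```

&lt;p&gt;From here on, every change to the cloud goes through a commit and a playbook run, instead of manual edits in the cloud dashboard.&lt;/p&gt;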
&lt;h3&gt;
  
  
  Keeping current cloud configuration
&lt;/h3&gt;

&lt;p&gt;There are various use-cases of testing on clouds, and sometimes it's required to keep the most important resources untouched. A dumped configuration of the desired state and all mandatory resources helps ensure you have everything you need before starting the next testing tasks. Just run &lt;strong&gt;Transible&lt;/strong&gt; and get your config dumped into Ansible playbooks. You can edit them and leave only the important parts. Executing these playbooks before every round of cloud work will restore any missing resources.&lt;br&gt;
Any time you want to "snapshot" your cloud in its current state, just run &lt;strong&gt;Transible&lt;/strong&gt;; using the generated playbooks you can return to this config later.&lt;/p&gt;
&lt;h3&gt;
  
  
  Moving current infrastructure to another tenant, cloud, provider, whatever
&lt;/h3&gt;

&lt;p&gt;For example, you need to move your current infrastructure to another cloud at a different hosting provider, but you don't have everything in Git, or some untracked changes were made that you don't want to lose. Converting the current cloud to Ansible will freeze the cloud config, and you can run the generated playbooks with different cloud credentials to deploy everything in a new place.&lt;/p&gt;
&lt;h3&gt;
  
  
  Escaping vendor lock-in
&lt;/h3&gt;

&lt;p&gt;In future &lt;strong&gt;Transible&lt;/strong&gt; versions it will be possible to convert one cloud configuration to another, for example, OpenStack to AWS, or Azure to GCP. This will help prevent hard vendor lock-in and help Cloud Ops maintain and move their infrastructure in more convenient ways.&lt;/p&gt;
&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;Currently, it supports only the OpenStack cloud, while others are under development. Using &lt;a href="https://github.com/openstack/openstacksdk"&gt;openstacksdk&lt;/a&gt;, it requests information from the cloud about current resources. After that, using templates of the &lt;a href="https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html#openstack"&gt;Ansible OpenStack modules&lt;/a&gt;, it creates the playbook configuration.&lt;/p&gt;

&lt;p&gt;But what if we have hundreds of resources or even more? Having 300 server configurations in a playbook that can end up at 10 kLOC definitely won't help maintainability or simplicity. For that, &lt;strong&gt;Transible&lt;/strong&gt; knows how to optimize cloud resources using Ansible's &lt;code&gt;loop&lt;/code&gt; keyword.&lt;br&gt;
All the required data can be separated from the playbooks and saved in the &lt;code&gt;vars/&lt;/code&gt; folder, where it can be managed and changed separately, without touching the actual playbooks.&lt;br&gt;
For example, the &lt;code&gt;images&lt;/code&gt; playbook for OpenStack can look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;os_image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(omit)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;owner&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(omit)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
    &lt;span class="na"&gt;filename&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;filename&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(omit)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
    &lt;span class="na"&gt;min_disk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;min_disk&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(omit)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
    &lt;span class="na"&gt;disk_format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;disk_format&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(omit)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
    &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;properties&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(omit)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
    &lt;span class="na"&gt;cloud&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-cloud&lt;/span&gt;
    &lt;span class="na"&gt;checksum&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;checksum&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(omit)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
    &lt;span class="na"&gt;min_ram&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;min_ram&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(omit)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
    &lt;span class="na"&gt;is_public&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is_public&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(omit)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
  &lt;span class="na"&gt;loop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;images&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;While the images config in the &lt;code&gt;vars/&lt;/code&gt; folder includes the actual images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;new_image&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;....&lt;/span&gt;
    &lt;span class="na"&gt;min_disk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;
    &lt;span class="na"&gt;is_public&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;filename&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/v2/images/...hash.../file&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cloud-image&lt;/span&gt;
    &lt;span class="na"&gt;checksum&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;....&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;....&lt;/span&gt;
    &lt;span class="na"&gt;min_disk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8&lt;/span&gt;
    &lt;span class="na"&gt;is_public&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;filename&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/v2/images/...hash.../file&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Parameters that are not defined in the variables config will be omitted, so we end up with the same playbook behavior as if we hadn't optimized it. The &lt;code&gt;vars&lt;/code&gt; optimization is configurable for each resource type separately in a plugin configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Variables optimization configuration
&lt;/span&gt;&lt;span class="n"&gt;VARS_OPT_NETWORKS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;span class="n"&gt;VARS_OPT_SUBNETS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
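&lt;p&gt;To make the intent concrete, here is a hypothetical sketch of how such per-resource flags could gate the optimization. The names mirror the config above but are not Transible's actual internals:&lt;br&gt;
&lt;/p&gt;

```python
# Hypothetical sketch: per-resource-type switches deciding whether resource
# data is moved into vars/ (optimized) or written inline in the playbook.
VARS_OPT = {
    "networks": False,  # keep networks inline in the playbook
    "subnets": True,    # move subnet data into vars/
}

def should_optimize(resource_type: str) -> bool:
    # Default to optimizing unless a flag explicitly disables it.
    return VARS_OPT.get(resource_type, True)

print(should_optimize("subnets"))   # True
print(should_optimize("networks"))  # False
```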



&lt;p&gt;There are various plugin configuration options that differ across user scenarios, like managing an existing cloud, moving to other providers, or deploying from scratch. Some of them can't be "guessed" from the cloud configuration, which doesn't reveal what the initial deployment configuration was. You can tweak and tune these parameters according to your needs.&lt;br&gt;
Let's take the example of moving to a different OpenStack cloud or provider.&lt;/p&gt;
&lt;h3&gt;
  
  
  Use-case when moving to a new cloud or recreating from scratch
&lt;/h3&gt;

&lt;p&gt;In case you want to recreate your boot volumes and boot servers from them, you will likely configure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;CREATE_NEW_BOOT_VOLUMES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="n"&gt;USE_SERVER_IMAGES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;span class="n"&gt;SKIP_UNNAMED_VOLUMES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;span class="n"&gt;USE_EXISTING_BOOT_VOLUMES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will recreate all required boot and non-boot volumes on the new cloud.&lt;br&gt;
Most likely the new cloud will have new IP pools from your provider, so you can't keep the same floating IPs you had before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;FIP_AUTO&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But if you will have the same floating IP range and would like to keep all floating IPs as they are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;FIP_AUTO&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For subnets with DHCP, you can allow servers to have whatever IP they get from the DHCP server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;NETWORK_AUTO&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But if you want to keep exactly the same IPs in your networks, configure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;NETWORK_AUTO&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Use-case when rerunning on an existing cloud
&lt;/h3&gt;

&lt;p&gt;In case you want to rerun your playbook on an existing environment and continue to use it, most likely you don't want to recreate anything. Then your configuration will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;CREATE_NEW_BOOT_VOLUMES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;span class="n"&gt;SKIP_UNNAMED_VOLUMES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="n"&gt;USE_EXISTING_BOOT_VOLUMES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="n"&gt;NETWORK_AUTO&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;span class="n"&gt;FIP_AUTO&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That way, running the playbook will finish without changing anything, keeping idempotency. (Be aware that not all OpenStack modules are idempotent or support a &lt;code&gt;--check&lt;/code&gt; run.)&lt;br&gt;
In the network configuration you'll have the same IPs in internal networks and subnets as they are today. Although we set &lt;code&gt;FIP_AUTO = True&lt;/code&gt;, the servers won't get different floating IPs: they already have one, so nothing will change for them.&lt;/p&gt;
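&lt;p&gt;Idempotency here simply means that applying the same desired state twice reports zero changes the second time. A toy model of the idea (not Ansible's actual implementation):&lt;br&gt;
&lt;/p&gt;

```python
# Toy model of an idempotent "ensure state" operation: the first apply
# makes changes, the second apply finds nothing left to do.
def apply_state(current: dict, desired: dict) -> int:
    changed = 0
    for key, value in desired.items():
        if current.get(key) != value:
            current[key] = value
            changed += 1
    return changed

server = {}
print(apply_state(server, {"floating_ip": "203.0.113.10"}))  # 1 (changed)
print(apply_state(server, {"floating_ip": "203.0.113.10"}))  # 0 (no change)
```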

&lt;h2&gt;
  
  
  Road map
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Write tests!&lt;/li&gt;
&lt;li&gt;Create playbooks for user roles, user groups and other OpenStack services.&lt;/li&gt;
&lt;li&gt;Create a plugin for Kubernetes.&lt;/li&gt;
&lt;li&gt;Create a plugin for AWS.&lt;/li&gt;
&lt;li&gt;Create a translator from OpenStack to AWS and from AWS to OpenStack.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;More thoughts and ideas? Please post them in the comments or in the &lt;a href="https://github.com/sshnaidm/transible/issues"&gt;issues of the &lt;strong&gt;Transible&lt;/strong&gt; project&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>openstack</category>
      <category>devops</category>
      <category>python</category>
    </item>
    <item>
      <title>Speed up Ansible with Mitogen!</title>
      <dc:creator>Sergey</dc:creator>
      <pubDate>Sun, 26 May 2019 13:25:17 +0000</pubDate>
      <link>https://forem.com/sshnaidm/speed-up-ansible-with-mitogen-2c3j</link>
      <guid>https://forem.com/sshnaidm/speed-up-ansible-with-mitogen-2c3j</guid>
      <description>&lt;h2&gt;
  
  
  Speed up Ansible with Mitogen!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.ansible.com/" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt; is one of most popular &lt;a href="https://en.wikipedia.org/wiki/Configuration_management#Operating_System_configuration_management" rel="noopener noreferrer"&gt;Configuration Management Systems&lt;/a&gt; nowadays, after it was &lt;a href="https://www.redhat.com/en/blog/why-red-hat-acquired-ansible" rel="noopener noreferrer"&gt;acquired by Red Hat&lt;/a&gt; in 2015 Ansible has reached numbers of thousands of &lt;a href="https://github.com/ansible/ansible/graphs/contributors" rel="noopener noreferrer"&gt;contributors&lt;/a&gt; and became maybe one of most used deployment and orchestration tools. Its use-cases are quite impressive.&lt;br&gt;
Ansible works by SSH connections to remote hosts. It opens SSH session, logs in to the shell, copy python code via network and create a temporary file on remote hosts with this code. In the next step, it executes the current file with python interpreter. All this workflow is pretty heavy and there are multiple ways to make it faster and lighter.&lt;/p&gt;

&lt;p&gt;One of these ways is using &lt;a href="https://docs.ansible.com/ansible/2.3/intro_configuration.html#pipelining" rel="noopener noreferrer"&gt;SSH pipelining&lt;/a&gt;, which reuses one SSH session for copying the Python code of multiple tasks instead of opening a new session each time, and this saves a lot of time. (Just don't forget to disable the &lt;code&gt;requiretty&lt;/code&gt; setting for sudo on the remote side in &lt;code&gt;/etc/sudoers&lt;/code&gt;.)&lt;/p&gt;
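&lt;p&gt;Pipelining is enabled in &lt;code&gt;ansible.cfg&lt;/code&gt; (or via the equivalent environment variable):&lt;br&gt;
&lt;/p&gt;

```ini
; Reuse the SSH session for module code instead of copying temp files.
; Requires requiretty to be disabled in /etc/sudoers on the targets.
[ssh_connection]
pipelining = True
```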

&lt;p&gt;A newer way to speed up Ansible is a great Python library called &lt;a href="https://mitogen.networkgenomics.com/" rel="noopener noreferrer"&gt;Mitogen&lt;/a&gt;. If, like me, you were not familiar with it: this library allows fast execution of Python code on a remote host, and Ansible is only one of its use cases. Mitogen uses UNIX pipes on remote machines and passes "pickled" Python code compressed with zlib. This allows it to run fast and without much traffic. If you're interested, you can read the details on its &lt;a href="https://mitogen.networkgenomics.com/howitworks.html" rel="noopener noreferrer"&gt;"How it works"&lt;/a&gt; page, but today we'll focus on its Ansible-related part.&lt;br&gt;
In specific circumstances Mitogen can speed up your Ansible runs several times over and significantly lower your bandwidth usage. Let's check the most popular use cases and figure out whether it's helpful for us.&lt;/p&gt;
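&lt;p&gt;The core idea, shipping compact serialized payloads down a pipe instead of copying whole script files, can be illustrated in a few lines of plain Python. This is a conceptual toy, not Mitogen's actual wire protocol:&lt;br&gt;
&lt;/p&gt;

```python
# Conceptual toy only: Mitogen's real protocol differs, but the principle is
# sending pickled, zlib-compressed payloads and unpacking them remotely.
import pickle
import zlib

call = {"module": "copy", "args": {"dest": "/tmp/demo", "content": "x" * 500}}

wire = zlib.compress(pickle.dumps(call))        # what travels down the pipe
restored = pickle.loads(zlib.decompress(wire))  # what the remote side rebuilds

print(restored == call)                     # True
print(len(wire) < len(pickle.dumps(call)))  # True: the payload shrank
```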

&lt;p&gt;The most popular use cases for me when running Ansible are: creating configuration files on a remote host, installing packages, and downloading and uploading files from and to a remote host. If you'd like other use cases checked, please leave a comment on this article.&lt;/p&gt;

&lt;p&gt;Let's start rolling!&lt;br&gt;
Configuring Mitogen for Ansible is pretty simple:&lt;br&gt;
Install the Mitogen module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;mitogen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then either configure environment variables or set configuration options in the ansible.cfg file; both options are fine.&lt;br&gt;
Let's assume &lt;code&gt;/usr/lib/python2.7/site-packages/ansible_mitogen/plugins/strategy&lt;/code&gt; is the path to your installed Mitogen library.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANSIBLE_STRATEGY_PLUGINS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/lib/python2.7/site-packages/ansible_mitogen/plugins/strategy
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANSIBLE_STRATEGY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mitogen_linear
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[defaults]&lt;/span&gt;
&lt;span class="py"&gt;strategy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;mitogen_linear&lt;/span&gt;
&lt;span class="py"&gt;strategy_plugins&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;/usr/lib/python2.7/site-packages/ansible_mitogen/plugins/strategy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Prepare Ansible in two virtualenvs, with and without Mitogen enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;virtualenv mitogen_ansible
./mitogen_ansible/bin/pip &lt;span class="nb"&gt;install &lt;/span&gt;&lt;span class="nv"&gt;ansible&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2.7.10 mitogen
virtualenv pure_ansible
./pure_ansible/bin/pip &lt;span class="nb"&gt;install &lt;/span&gt;&lt;span class="nv"&gt;ansible&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;2.7.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Please note that Mitogen 0.2.7 doesn't work with Ansible 2.8 (as of May 2019).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Create aliases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;pure-ansible-playbook&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'$(pwd)/pure_ansible/bin/ansible-playbook'&lt;/span&gt;
&lt;span class="nb"&gt;alias &lt;/span&gt;mitogen-ansible-playbook&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ANSIBLE_STRATEGY_PLUGINS=/usr/lib/python2.7/site-packages/ansible_mitogen/plugins/strategy:$(pwd)/mitogen_ansible/lib/python3.7/site-packages/ansible_mitogen/plugins/strategy ANSIBLE_STRATEGY=mitogen_linear $(pwd)/mitogen_ansible/bin/ansible-playbook'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's try a playbook that creates files on the remote host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create files with copy content module&lt;/span&gt;
      &lt;span class="na"&gt;copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;test file {{ item }}&lt;/span&gt;
        &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/file_{{item}}&lt;/span&gt;
      &lt;span class="na"&gt;with_sequence&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;start=1 end={{ n }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And run it with and without Mitogen, creating 10 files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;time &lt;/span&gt;mitogen-ansible-playbook file_creation.yml &lt;span class="nt"&gt;-i&lt;/span&gt; hosts &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10 &amp;amp;&amp;gt;/dev/null

real    0m2.603s
user    0m1.152s
sys     0m0.096s

&lt;span class="nb"&gt;time &lt;/span&gt;pure-ansible-playbook file_creation.yml &lt;span class="nt"&gt;-i&lt;/span&gt; hosts &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10 &amp;amp;&amp;gt;/dev/null

real    0m5.908s
user    0m1.745s
sys     0m0.643s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Right away we see a 2x improvement. Let's check it for 20, 30, ..., 100 files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;time &lt;/span&gt;pure-ansible-playbook file_creation.yml &lt;span class="nt"&gt;-i&lt;/span&gt; hosts &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;100 &amp;amp;&amp;gt;/dev/null

real    0m51.775s
user    0m8.039s
sys     0m6.305s

&lt;span class="nb"&gt;time &lt;/span&gt;mitogen-ansible-playbook file_creation.yml &lt;span class="nt"&gt;-i&lt;/span&gt; hosts &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;100 &amp;amp;&amp;gt;/dev/null

real    0m4.331s
user    0m1.903s
sys     0m0.197s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Eventually, we improved execution time by more than 10x!&lt;/p&gt;

&lt;p&gt;Now let's try different scenarios and see how Mitogen improves them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Scenario of uploading files from the local host to remote (with &lt;code&gt;copy&lt;/code&gt; module):&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgist.githubusercontent.com%2Fsshnaidm%2F092ead17ea4b5204586ad3e16a2f3bc3%2Fraw%2F7a9c9f3bcfda9ec32a45b2f6f39eba3b343c3243%2Fuploading_files.svg%3Fsanitize%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgist.githubusercontent.com%2Fsshnaidm%2F092ead17ea4b5204586ad3e16a2f3bc3%2Fraw%2F7a9c9f3bcfda9ec32a45b2f6f39eba3b343c3243%2Fuploading_files.svg%3Fsanitize%3Dtrue" alt="Uploading files"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scenario of creating files on the remote host with &lt;code&gt;copy&lt;/code&gt; module:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgist.githubusercontent.com%2Fsshnaidm%2F092ead17ea4b5204586ad3e16a2f3bc3%2Fraw%2F7a9c9f3bcfda9ec32a45b2f6f39eba3b343c3243%2Fcreating_files.svg%3Fsanitize%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgist.githubusercontent.com%2Fsshnaidm%2F092ead17ea4b5204586ad3e16a2f3bc3%2Fraw%2F7a9c9f3bcfda9ec32a45b2f6f39eba3b343c3243%2Fcreating_files.svg%3Fsanitize%3Dtrue" alt="Creating files"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scenario with fetching files from the remote host to local:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgist.githubusercontent.com%2Fsshnaidm%2F092ead17ea4b5204586ad3e16a2f3bc3%2Fraw%2F7a9c9f3bcfda9ec32a45b2f6f39eba3b343c3243%2Ffetching_files.svg%3Fsanitize%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgist.githubusercontent.com%2Fsshnaidm%2F092ead17ea4b5204586ad3e16a2f3bc3%2Fraw%2F7a9c9f3bcfda9ec32a45b2f6f39eba3b343c3243%2Ffetching_files.svg%3Fsanitize%3Dtrue" alt="Fetching files"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's also run one of these scenarios on a few (3) remote hosts, for example the uploading-files scenario:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgist.githubusercontent.com%2Fsshnaidm%2F092ead17ea4b5204586ad3e16a2f3bc3%2Fraw%2F7a9c9f3bcfda9ec32a45b2f6f39eba3b343c3243%2Fuploading_files_multiple.svg%3Fsanitize%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgist.githubusercontent.com%2Fsshnaidm%2F092ead17ea4b5204586ad3e16a2f3bc3%2Fraw%2F7a9c9f3bcfda9ec32a45b2f6f39eba3b343c3243%2Fuploading_files_multiple.svg%3Fsanitize%3Dtrue" alt="Uploading files to multiple hosts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, Mitogen saves us both time and bandwidth in these scenarios. But if the bottleneck is not Ansible itself&lt;br&gt;
but, for example, disk or network I/O, then Mitogen can hardly be expected to help. To illustrate,&lt;br&gt;
let's run package installation with yum/dnf and Python module installation with pip.&lt;br&gt;
The packages were pre-cached to avoid any dependency on network glitches:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install packages&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;package&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;samba&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;httpd&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;nano&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ruby&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install pip modules&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;pip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pytest-split-tests&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;bottle&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pep8&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;flask&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Mitogen it takes 12 seconds, just as with pure Ansible.&lt;br&gt;
On the &lt;a href="https://mitogen.networkgenomics.com/ansible_detailed.html" rel="noopener noreferrer"&gt;Mitogen for Ansible page&lt;/a&gt; you can find additional benchmarks&lt;br&gt;
and measurements. As the page states:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Mitogen cannot improve a module once it is executing, it can only ensure the module executes as quickly as possible&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's why it's important to find where your bottlenecks are: if they are related to Ansible operations, Mitogen will&lt;br&gt;
help you eliminate them and speed your playbooks up significantly.&lt;/p&gt;
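&lt;p&gt;For reference, enabling Mitogen is a one-time change in &lt;code&gt;ansible.cfg&lt;/code&gt;. A minimal sketch, following the Mitogen for Ansible documentation (the extraction path below is a placeholder; point it at wherever you unpacked the Mitogen release):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[defaults]
; path to the strategy plugins inside the unpacked Mitogen release (placeholder path)
strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
; Mitogen's drop-in replacement for the default "linear" strategy
strategy = mitogen_linear
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;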

</description>
      <category>ansible</category>
      <category>python</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Docker container for HP servers management with ILO</title>
      <dc:creator>Sergey</dc:creator>
      <pubDate>Tue, 21 May 2019 14:35:29 +0000</pubDate>
      <link>https://forem.com/sshnaidm/docker-container-for-hp-servers-management-with-ilo-48c8</link>
      <guid>https://forem.com/sshnaidm/docker-container-for-hp-servers-management-with-ilo-48c8</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszp1yi1wi5g82zrjag5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszp1yi1wi5g82zrjag5s.png" alt="HP ILO"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, you may wonder: why would I use a Docker container for such a purpose? What’s the problem with opening the ILO web interface and managing the server as usual?&lt;br&gt;
I had the same thought until I got a few old servers that required reprovisioning. The servers are located on a different continent, and the only interface I had was the ILO web interface. And when I had to enter a few manual commands via the Virtual Console, I discovered that this was hardly possible.&lt;/p&gt;

&lt;p&gt;Server Virtual Consoles (on both HP and Dell machines) are usually implemented as Java web applets. But Firefox and Chrome don’t support them anymore, and the newest IcedTea doesn’t work with those old systems anyway. So I had a few options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install old versions of browsers and Java side by side on my system, trying to find a working combination. This option was ruled out, since I didn’t want to pollute my system just for the sake of a few console commands.&lt;/li&gt;
&lt;li&gt;Create a virtual machine with an old system, install Java 6 there and use the Virtual Console as before.&lt;/li&gt;
&lt;li&gt;The same as in point 2, but with a container instead of a virtual machine. Since a few of my colleagues hit the same problem, I’d rather hand them one bash command to run the Virtual Console than share a virtual machine disk, its passwords, etc.
(To be honest, I did point 3 only after point 2.)
Point 3 is what we are going to implement today.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I’ve been inspired mostly by these two projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/jlesage/docker-baseimage-gui" rel="noopener noreferrer"&gt;docker-baseimage-gui&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ktelep/docker-firefox-java" rel="noopener noreferrer"&gt;docker-firefox-java&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Actually, the first project, docker-baseimage-gui, already contains all the configs and tools needed to run desktop apps in a browser from within a container. Usually you define a few specific environment variables and your app becomes accessible via the browser (websocket) or VNC. In our case we go with Firefox and VNC; the websocket option didn’t work well.&lt;/p&gt;
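&lt;p&gt;The start of such a Dockerfile could look like this (a minimal sketch; the base image tag and variable value are assumptions, check the docker-baseimage-gui README for the exact interface):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;# assumed: an Ubuntu-based tag of the GUI base image
FROM jlesage/baseimage-gui:ubuntu-16.04

# name shown in the window title / web interface (assumed value)
ENV APP_NAME="ILO Client"

# the base image launches the application from /startapp.sh
COPY startapp.sh /startapp.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;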

&lt;p&gt;First, let’s install the required packages: Java 6 and IcedTea:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; “deb http://archive.ubuntu.com/ubuntu precise main universe” &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; upgrade &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;firefox &lt;span class="se"&gt;\
&lt;/span&gt;    nano curl &lt;span class="se"&gt;\
&lt;/span&gt;    icedtea-6-plugin &lt;span class="se"&gt;\
&lt;/span&gt;    icedtea-netx &lt;span class="se"&gt;\
&lt;/span&gt;    openjdk-6-jre &lt;span class="se"&gt;\
&lt;/span&gt;    openjdk-6-jre-headless &lt;span class="se"&gt;\
&lt;/span&gt;    tzdata-java


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let’s open the ILO web interface in Firefox and enter the credentials there. Start Firefox:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;bash &lt;span class="nt"&gt;-c&lt;/span&gt; ‘echo “exec openbox-session &amp;amp;” &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.xinitrc’ &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    bash &lt;span class="nt"&gt;-c&lt;/span&gt; ‘echo “firefox &lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;HILO_HOST&lt;span class="o"&gt;}&lt;/span&gt;”&amp;gt;&amp;gt; ~/.xinitrc’ &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    bash &lt;span class="nt"&gt;-c&lt;/span&gt; ‘chmod 755 ~/.xinitrc’


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The variable &lt;code&gt;HILO_HOST&lt;/code&gt; is the URL of our ILO interface, for example &lt;code&gt;https://myhp.example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For automation, let’s add authentication. ILO login is performed via a simple POST request: the response contains a session_key value, which is then passed in subsequent GET requests. Let’s retrieve the session_key with curl if the environment variables &lt;code&gt;HILO_USER&lt;/code&gt; and &lt;code&gt;HILO_PASS&lt;/code&gt; are defined:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/config
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HILO_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HILO_HOST&lt;/span&gt;&lt;span class="p"&gt;%%/&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="nv"&gt;SESSION_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;””
&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;”&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="se"&gt;\”&lt;/span&gt;method&lt;span class="se"&gt;\”&lt;/span&gt;:&lt;span class="se"&gt;\”&lt;/span&gt;login&lt;span class="se"&gt;\”&lt;/span&gt;,&lt;span class="se"&gt;\”&lt;/span&gt;user_login&lt;span class="se"&gt;\”&lt;/span&gt;:&lt;span class="se"&gt;\”&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HILO_USER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\”&lt;/span&gt;,&lt;span class="se"&gt;\”&lt;/span&gt;password&lt;span class="se"&gt;\”&lt;/span&gt;:&lt;span class="se"&gt;\”&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HILO_PASS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\”&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;”
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; “&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HILO_USER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;” &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; “&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HILO_PASS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;” &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
 &lt;/span&gt;&lt;span class="nv"&gt;SESSION_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST “&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HILO_HOST&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/json/login_session” &lt;span class="nt"&gt;-d&lt;/span&gt; “&lt;span class="nv"&gt;$data&lt;/span&gt;” 2&amp;gt;/dev/null | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-Eo&lt;/span&gt; ‘“session_key”:”[^”]+’ | &lt;span class="nb"&gt;sed&lt;/span&gt; ‘s/”session_key”:”//’&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;fi
&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; “SESSION_KEY&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$SESSION_KEY&lt;/span&gt;”
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$SESSION_KEY&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /session_key


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After the session_key is written inside the container, we can start the VNC server:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;exec &lt;/span&gt;x11vnc &lt;span class="nt"&gt;-forever&lt;/span&gt; &lt;span class="nt"&gt;-create&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now just connect with a VNC client to port 5900 on localhost (or whatever port you chose) and you’ll enter the Virtual Console of the HP server.&lt;br&gt;
The code is located in the git repository docker-ilo-client.&lt;/p&gt;

&lt;p&gt;The full one-line command to connect to the ILO Virtual Console:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker run -d --rm --name ilo-client \
    -p 5900:5900 \
    -e HILO_HOST=https://ADDRESS_OF_YOUR_HOST \
    -e HILO_USER=SOME_USERNAME \
    -e HILO_PASS=SOME_PASSWORD \
    sshnaidm/docker-ilo-client


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;where &lt;code&gt;ADDRESS_OF_YOUR_HOST&lt;/code&gt; is the hostname of the ILO, &lt;code&gt;SOME_USERNAME&lt;/code&gt; is the ILO login and &lt;code&gt;SOME_PASSWORD&lt;/code&gt; is the ILO password.&lt;/p&gt;

&lt;p&gt;Then just point any VNC client at &lt;code&gt;vnc://localhost:5900&lt;/code&gt;.&lt;br&gt;
Pull requests and comments are more than welcome.&lt;/p&gt;

&lt;p&gt;A similar project for connecting to Dell iDRAC servers is here: &lt;a href="https://github.com/DomiStyle/docker-idrac6" rel="noopener noreferrer"&gt;docker-idrac6&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
