<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Laurent Tardif</title>
    <description>The latest articles on Forem by Laurent Tardif (@ouelcum).</description>
    <link>https://forem.com/ouelcum</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F321671%2F04f5e1de-25a4-461a-b1bc-3a919d55947e.jpg</url>
      <title>Forem: Laurent Tardif</title>
      <link>https://forem.com/ouelcum</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ouelcum"/>
    <language>en</language>
    <item>
      <title>[Devops / Docker] how to manage files and volume with docker</title>
      <dc:creator>Laurent Tardif</dc:creator>
      <pubDate>Thu, 27 Aug 2020 09:07:25 +0000</pubDate>
      <link>https://forem.com/ouelcum/devops-docker-how-to-manage-files-with-docker-2bfl</link>
      <guid>https://forem.com/ouelcum/devops-docker-how-to-manage-files-with-docker-2bfl</guid>
      <description>&lt;p&gt;When starting using docker, lot of people store file either in the container instance itself either on the file system of the server. in this post, will see the different option to deals with files, and we’ll discuss in which case it’s a good approach.&lt;/p&gt;

&lt;p&gt;With docker, like in general, there’s several way to manage files :  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;directly in the container
&lt;/li&gt;
&lt;li&gt;via a mount point on your server
&lt;/li&gt;
&lt;li&gt;data container
&lt;/li&gt;
&lt;li&gt;using a docker volume
&lt;/li&gt;
&lt;li&gt;using a network file system (Amazon S3, Hadoop, …).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Files stored directly in the container&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
 The main advantage of this solution is its simplicity.&lt;br&gt;&lt;br&gt;
 But there are several drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your container is not persisted, so when it crashes or is removed, you may lose your files. In some cases that doesn’t matter: temporary files, demos, files generated by a script, ….
&lt;/li&gt;
&lt;li&gt;performance: since Docker uses a union filesystem, reading and writing files may not be as fast as expected.
&lt;/li&gt;
&lt;li&gt;file sharing: if you want to share files among several processes, you either run several processes in one container (not very good), or you link two containers strongly together (or put them in the same pod if you use K8s), which may not be very good either.&lt;/li&gt;
&lt;/ul&gt;
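
&lt;p&gt;To see the first drawback in action, here is a small sketch (the container name and commands are just examples): any file written inside the container disappears with it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# write a file inside a throw-away container
docker run --name demo alpine sh -c "echo hello &gt; /tmp/data.txt"

# remove the container: /tmp/data.txt is gone with it
docker rm demo
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;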

&lt;p&gt;&lt;strong&gt;Files mounted on the host&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
One of the advantages of this solution is also simplicity, and it partially solves the problem of sharing files among containers (as long as they run on the same server).&lt;br&gt;&lt;br&gt;
This solution has some drawbacks too:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your container needs to run on that specific server,
&lt;/li&gt;
&lt;li&gt;the container needs write (or read) access to your host, which may not be secure.
&lt;/li&gt;
&lt;li&gt;it should be avoided in production, because you create a hard link between the container and the host, and you lose some of the benefits of isolation.&lt;/li&gt;
&lt;/ul&gt;
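
&lt;p&gt;As a sketch, a host directory can be bind-mounted into a container with the -v flag (the paths here are examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# mount the host directory /srv/data at /data inside the container
docker run -v /srv/data:/data alpine ls /data

# the :ro suffix limits the container to read-only access
docker run -v /srv/data:/data:ro alpine ls /data
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;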

&lt;p&gt;&lt;strong&gt;Files stored in a data container&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Some people do that, …, and I still don’t understand why you would when volumes exist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Files in a cloud provider storage&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
It may look like a perfect solution if your containers run in the cloud, but you create a direct link between the container and the cloud provider, which you may not always want. So use it carefully, being aware of what you are doing. Using a volume may offer you the abstraction needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Files in a Docker volume&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
This solution may be a bit complex for beginners, as it introduces a new concept, but in fact it’s pretty efficient. The full documentation is available &lt;a href="https://docs.docker.com/storage/volumes/"&gt;here&lt;/a&gt;. You can see it as creating a new logical disk dedicated to your needs, where you can configure the size, the name, and the filesystem used.&lt;br&gt;&lt;br&gt;
The main advantages of using a volume are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it eases file sharing among containers
&lt;/li&gt;
&lt;li&gt;it doesn’t tie your container to the server where the volume is stored.
&lt;/li&gt;
&lt;li&gt;it allows you to scale: since you can define the filesystem you want inside it, you can even imagine using a distributed file system like &lt;a href="https://www.gluster.org/"&gt;GlusterFS&lt;/a&gt; (you may have a look at all the drivers available &lt;a href="https://docs.docker.com/engine/extend/legacy_plugins/"&gt;here&lt;/a&gt;). You can even use a cloud provider solution directly. So it’s my favorite solution, as it also offers a level of abstraction between your container and the file system.&lt;/li&gt;
&lt;/ul&gt;
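
&lt;p&gt;A minimal sketch of the volume workflow (the volume name and images are just examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# create a named volume, managed by docker
docker volume create app-data

# two containers can share it: one writes…
docker run -v app-data:/data alpine sh -c "echo hello &gt; /data/msg.txt"

# …and another reads, even after the first one is gone
docker run -v app-data:/data alpine cat /data/msg.txt

# inspect which driver is used and where docker stores the data
docker volume inspect app-data
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;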

&lt;p&gt;I hope this gives you some hints on how to manage files; I will try to give some performance metrics soon.&lt;/p&gt;

&lt;p&gt;Why do we need to share files in the Docker world? Here are some common use cases seen on several projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since a container should run only one process: when you start an application, you may need to generate a configuration file at startup (a configuration file that may change over time because you are using Consul, etcd, Vault, …). So you have a daemon process that watches for changes, updates the configuration file, and then notifies the application to re-read it. Having two processes in your container would be a mess: if one of the two processes dies, you may end up in a strange state. A way to manage this is to have two sidecar containers sharing a volume.&lt;/li&gt;
&lt;li&gt;If you want to push your logs to a central place (ELK, …), you may use the same sidecar container concept.&lt;/li&gt;
&lt;li&gt;Database files: you may store them in a dedicated volume, which allows you to start another psql process using the volume in read-only mode. This gives you access to the data without taking any risk with the process running your DB.&lt;/li&gt;
&lt;/ul&gt;
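
&lt;p&gt;The log-shipping sidecar above can be sketched with a shared volume (the names and images here are hypothetical; a real setup would use an actual log shipper):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume create app-logs

# the application writes its logs into the shared volume
docker run -d --name app -v app-logs:/var/log/app my-app

# the sidecar reads the same volume (read-only) and ships the logs
docker run -d --name shipper -v app-logs:/var/log/app:ro my-log-shipper
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;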

</description>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>[Devops / Docker] How to write a good dockerfile</title>
      <dc:creator>Laurent Tardif</dc:creator>
      <pubDate>Fri, 21 Aug 2020 13:37:20 +0000</pubDate>
      <link>https://forem.com/ouelcum/devops-docker-how-to-write-a-good-docker-file-2af3</link>
      <guid>https://forem.com/ouelcum/devops-docker-how-to-write-a-good-docker-file-2af3</guid>
      <description>&lt;p&gt;Reading or writing a dockerfile, as soon as you get a bit familiar with the syntax is quiet simple. But how many time you spent some time wondering why your image is so big, or why the image you download was so big, how many time lost trying to find the port, the volume defined, have you loose. So, writing a good docker file is not so easy. I’ll try to explain the process that we have defined with &lt;a href="https://github.com/P0ppoff" rel="noopener noreferrer"&gt;Jules&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want some kind of introduction to Docker, have a look at Aurélie’s post: &lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/aurelievache" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F9688%2Fg2T2qehD.jpg" alt="aurelievache"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/aurelievache/understanding-docker-part-1-retrieve-pull-images-3ccn" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Understanding Docker: part 1 – Retrieve &amp;amp; Pull images&lt;/h2&gt;
      &lt;h3&gt;Aurélie Vache ・ Aug 3 '20&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#docker&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#devops&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#beginners&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloud&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;First of all, what’s a good Dockerfile? Is it a secure one? A readable one? One that generates a small Docker image? All of the above? That’s a challenge.&lt;/p&gt;

&lt;p&gt;In this article, we’ll not focus on the security aspects; we’ll explain how to write a readable Dockerfile that is easy to maintain and generates a small Docker image.&lt;/p&gt;

&lt;p&gt;Let’s start with a simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node  &lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; babel.config.js ./  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json ./  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; yarn.lock ./  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; public ./public  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src ./src  &lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn lint  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn build  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we docker build this image, we can notice several points:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;it may download a new version of the referenced base image “node”&lt;/li&gt;
&lt;li&gt;it creates several layers. A layer is an immutable set of files that can be shared among several Docker images, for performance and disk usage optimization. If several images are built from the same root image, the files of that root image are present only once on your disk.&lt;/li&gt;
&lt;/ol&gt;
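
&lt;p&gt;You can check the second point yourself: the layers of the resulting image (and their sizes) are visible with docker history. For example, assuming the image was tagged my-app:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# each COPY and RUN line in the Dockerfile produces one layer
docker history my-app
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;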

&lt;p&gt;If you rebuild this image without any modification, you will notice that Docker uses a cache mechanism: if nothing has changed on your disk (for example, you do not change package.json) and all the previous steps are unchanged, Docker will not redo the step. In summary, Docker does the minimal work needed.&lt;/p&gt;

&lt;p&gt;So, let’s take these two points into account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt;  node: **10.15.0-Alpine**  &lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt;  “babel.config.js” “package.json” “yarn.lock” ./  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; public ./public  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src ./src  &lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn lint  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn build  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, to avoid a random node image being picked, I pinned the base image to a specific version. And to avoid useless layers, I created a single layer with the 3 configuration files.&lt;/p&gt;

&lt;p&gt;Now that we have addressed the first two “issues”, let’s take advantage of the cache mechanism.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt;  node:10.15.0-Alpine  &lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; “babel.config.js” “package.json” “yarn.lock” ./  &lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn  
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; public ./public  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src ./src  &lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn lint  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn build  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, I moved the “RUN yarn” up, just after the copy of the configuration files. That way, as long as the configuration files do not change, my “RUN yarn” step will not be re-executed, and I’ll benefit from the Docker cache mechanism.&lt;/p&gt;

&lt;p&gt;Now, I’ll not use yarn directly to serve my application. Let’s quickly set up an nginx server in front of it, using Docker’s multi-stage build feature. It allows me to reuse the result of one Docker image in another one, defined in the same Dockerfile. Let’s do it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt;  node:10.15.0-Alpine **AS builder**  &lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; “babel.config.js” “package.json” “yarn.lock” ./  &lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn  
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; public ./public  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src ./src  &lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn lint  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn build  
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn serve 

&lt;span class="c"&gt;# Volume inherited from nginx image  &lt;/span&gt;
&lt;span class="c"&gt;# VOLUME /usr/share/nginx/html  &lt;/span&gt;
&lt;span class="c"&gt;# VOLUME /etc/nginx   &lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx:1.12.1-alpine  &lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 80    &lt;/span&gt;
&lt;span class="k"&gt;HEALTHCHECK&lt;/span&gt;&lt;span class="s"&gt; –interval=5m –timeout=3s -CMD curl -f http://localhost/ || exit 1   &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; –from=builder /usr/src/app/dist/ /usr/share/nginx/html/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to do this, I’ve given a logical name to my first stage (I named it “builder”), which allows me to refer to it from the second stage.&lt;/p&gt;

&lt;p&gt;Then I defined a new image, based on nginx, and I copied the website content from the first stage. That way, my nginx image is minimal and contains only the mandatory files (from a security standpoint, the less you ship, the better).&lt;/p&gt;

&lt;p&gt;You can notice I’ve defined the EXPOSE and HEALTHCHECK at the top of my second image definition. Because they are a kind of API for my image, I make them quickly visible to the reader: you can access my image on port 80, and there’s a check that ensures the nginx server responds. The RUN and COPY elements are implementation details, and only a few people will want to read them.&lt;/p&gt;

&lt;p&gt;Of course, we can still improve this image, but if you reach this point, it’s already a very good starting point for people using your image.&lt;/p&gt;
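
&lt;p&gt;As a usage sketch, building and running the final image looks like this (the tag name is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# build the multi-stage image; only the nginx stage ends up in the tag
docker build -t my-app .

# serve the site on the exposed port 80, mapped to 8080 on the host
docker run -d -p 8080:80 my-app
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;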

</description>
      <category>ci</category>
      <category>devops</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
