<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Javier Marasco</title>
    <description>The latest articles on Forem by Javier Marasco (@javiermarasco).</description>
    <link>https://forem.com/javiermarasco</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F407689%2F8196bc19-075b-4fdb-bc80-72134259dc2d.png</url>
      <title>Forem: Javier Marasco</title>
      <link>https://forem.com/javiermarasco</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/javiermarasco"/>
    <language>en</language>
    <item>
      <title>Managed cluster vs unmanaged clusters in Kubernetes</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Tue, 27 Jun 2023 08:30:00 +0000</pubDate>
      <link>https://forem.com/javiermarasco/managed-cluster-vs-unmanaged-clusters-in-kubernetes-3m1f</link>
      <guid>https://forem.com/javiermarasco/managed-cluster-vs-unmanaged-clusters-in-kubernetes-3m1f</guid>
      <description>&lt;p&gt;As we see in the latest article &lt;a href="https://dev.to/javiermarasco/what-is-kubernetes-how-does-it-works-and-why-do-we-need-it-38pl"&gt;here&lt;/a&gt; a Kubernetes cluster can contain a lot of components just to function properly and as you can imagine, install, maintain, update, upgrade and operate such an infrastructure might be haunting.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does it look like to build a Kubernetes cluster?
&lt;/h2&gt;

&lt;p&gt;As discussed, there are two kinds of nodes in our cluster: the &lt;code&gt;master nodes&lt;/code&gt;, also known as &lt;code&gt;control nodes&lt;/code&gt;, and the &lt;code&gt;worker nodes&lt;/code&gt;. Each of these nodes runs different components that support our cluster. A minimal configuration has one &lt;code&gt;control node&lt;/code&gt; and one &lt;code&gt;worker node&lt;/code&gt;, which can be physical machines or virtual machines connected by a network.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;control nodes&lt;/code&gt; run the API server, an etcd database, the scheduler, and some controllers, while the &lt;code&gt;worker nodes&lt;/code&gt; run only three components: the kubelet, kube-proxy, and the container runtime.&lt;/p&gt;
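&lt;p&gt;If you already have access to a cluster you can see most of these components yourself. A quick sketch (pod names vary between distributions, and on managed clusters the control plane pods are hidden from you):&lt;/p&gt;

```shell
# Control-plane components usually run as pods in the kube-system namespace
kubectl get pods -n kube-system

# List the nodes and their roles (control-plane vs worker)
kubectl get nodes -o wide
```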

&lt;p&gt;This might sound simple, right? But then you need to configure all those components and set up the networking those nodes need to communicate. On the networking side there is one additional thing: you will (eventually) need to expose your applications to the internet using an ingress controller, which requires an external IP exposed to the internet from which all incoming traffic to your applications will &lt;code&gt;ingress&lt;/code&gt; your cluster. This means you also need to administer the creation (and maintenance) of those public IPs.&lt;/p&gt;

&lt;p&gt;This becomes a complex task pretty quickly. So if it is this complicated, why would anyone want to build a cluster themselves? The answer is simple: &lt;em&gt;security&lt;/em&gt;. When you build your own cluster, you keep control of everything: patching, kernel versions, OS versions, packages on the nodes. Everything can be customized to your needs. It might sound like overkill, but think of heavily regulated industries like the financial sector. There you want control over every piece of your infrastructure; you don't want a bug in a library running in the OS of your worker node to let malicious actors exploit it and gain root in one of your pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managed clusters and the cloud providers offerings
&lt;/h2&gt;

&lt;p&gt;So, suppose we are not working in a highly regulated industry, but we do need one or more clusters and we don't want to spend months learning how to build a cluster and then maintain it manually. What can we do?&lt;/p&gt;

&lt;p&gt;Luckily, each cloud provider (at least the most popular ones) offers automated ways to build a cluster for you and give you access to it, so you can simply deploy resources into it. This is very convenient for most scenarios where a Kubernetes cluster is needed, even for production workloads. These providers normally offer multiple SLA levels for their clusters: if you are running a development cluster you can opt for a lower SLA with cheaper billing, while for production workloads you can opt for a higher SLA that includes highly available resources but also costs more. You choose what is best for your case and environment.&lt;/p&gt;
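&lt;p&gt;As a sketch of how little work this takes, this is how a managed cluster can be created on Azure (AKS). The resource group, cluster name, and region are placeholders, and the &lt;code&gt;--tier&lt;/code&gt; flag (which selects the SLA level mentioned above) needs a recent version of the Azure CLI:&lt;/p&gt;

```shell
# Create a resource group to hold the cluster (names are illustrative)
az group create --name demo-rg --location westeurope

# "--tier free" for dev/test, "--tier standard" for a production SLA
az aks create --resource-group demo-rg --name demo-aks --node-count 2 --tier free

# Fetch credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group demo-rg --name demo-aks
```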

&lt;p&gt;Those managed clusters have a clear downside: the cloud provider retains control of the &lt;code&gt;control plane&lt;/code&gt; and its &lt;code&gt;control nodes&lt;/code&gt;, meaning you can't decide how to configure them. And even though it is possible to adjust and tune your &lt;code&gt;worker nodes&lt;/code&gt; (apps, libraries, even kernel configurations), as soon as you modify them you lose support from the cloud provider. This might sound like a bad move from the providers, but think about what it would mean for them to support every possible change made in any cluster in the world. It would be impossible to provide proper support, so they stick to a proven configuration and support that; if you move away from it, you are on your own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Patching and upgrading your cluster
&lt;/h2&gt;

&lt;p&gt;We discussed a lot about building and configuring your cluster, but what about version upgrades? What happens when Kubernetes releases a new version? Here again there is a big difference. Cloud providers tend to lag a bit behind the latest release because they need to test it first, pass certain internal validations, and only then expose it for you to consume in a trustworthy manner. This takes time, meaning that if you want to use a shiny new feature from the latest Kubernetes release and you are running a managed cluster, you will need to wait a bit before that version is available for you to pick up, while on an unmanaged cluster you can upgrade whenever you want.&lt;/p&gt;

&lt;p&gt;But what happens if something goes wrong with my upgrade? I moved to a newer version and now everything is broken (see the note after this paragraph for a good example). On a managed cluster it is very complicated to revert to an older Kubernetes version. I would say it is impossible; I do know of cases where the cloud provider managed to revert the version in VERY specific situations, but I would treat it as "something you can't do". On your unmanaged cluster, by contrast, it is as simple as reinstalling the older version or reverting to a backup snapshot taken before the upgrade (because of course you take backups before upgrading your cluster version), and you are back as if nothing happened. Very easy.&lt;/p&gt;
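&lt;p&gt;On a managed cluster the upgrade itself is a couple of commands. A sketch using AKS (resource names are placeholders and the target version is just an example; pick one from the first command's output):&lt;/p&gt;

```shell
# List the Kubernetes versions this managed cluster can move to
az aks get-upgrades --resource-group demo-rg --name demo-aks --output table

# Trigger the upgrade once you have tested against the target version
az aks upgrade --resource-group demo-rg --name demo-aks --kubernetes-version 1.27.3
```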

&lt;blockquote&gt;
&lt;p&gt;A note about this: recently, with the introduction of version 1.25, there was a change in a kernel configuration called cgroups in the OS used by Kubernetes. This caused some applications using an old (but still functional) version of their frameworks to start consuming a lot of memory, making the kubelet constantly kill those pods. The simple solution is to upgrade the framework in your app, but if that is not possible you can also revert the change in the worker node kernel; this last option, however, will make you &lt;code&gt;unsupported&lt;/code&gt; by your cloud provider.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  So, what is the takeaway of this article?
&lt;/h2&gt;

&lt;p&gt;Essentially, you now know what managed and unmanaged clusters are, the advantages and disadvantages of each, and what it means to have a managed cluster and what you can do with it.&lt;/p&gt;

&lt;p&gt;The main point here is to know your situation, know your use case, and know what you expect from Kubernetes and your cluster/s. Will you need to meet very strict standards and provide proof that you have certain configurations? Then you are tied to an unmanaged cluster; you will need to learn and understand the details of Kubernetes and manage everything by yourself, which might sound daunting, but with time and training you will be perfectly fine.&lt;br&gt;
If, on the other hand, you will be deploying applications that you know are secure, you don't need to provide configurations to audits, and you can accept a cloud provider having control over your control plane, then you are more than fine with a managed cluster and you can skip the deep parts of Kubernetes for now (it is always advisable to learn the concepts anyway, but you won't be rushing to learn and understand everything just because you need to build your cluster quickly).&lt;/p&gt;

&lt;p&gt;So what are your thoughts? would you prefer a managed cluster? an unmanaged cluster? would you feel comfortable knowing your control plane is not under your control? Let me know in the comments box 👍&lt;/p&gt;

&lt;p&gt;Thank you for reading 📚 !&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>automation</category>
      <category>devops</category>
    </item>
    <item>
      <title>The ever-growing kubernetes manual</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Thu, 22 Jun 2023 17:13:02 +0000</pubDate>
      <link>https://forem.com/javiermarasco/the-ever-growing-kubernetes-manual-i6f</link>
      <guid>https://forem.com/javiermarasco/the-ever-growing-kubernetes-manual-i6f</guid>
<description>&lt;p&gt;After several months of not posting and trying to think of a way to articulate an idea I had in mind, I think I finally know how to present it to the community.&lt;/p&gt;

&lt;p&gt;This post will serve as an index for a series of articles about Kubernetes from scratch. I am planning to write a more elaborate version of it (possibly in ebook format), but I also wanted to give the community a free version covering the basics of Kubernetes in an easy-to-follow format. It clarifies the basic concepts you will need to go from zero to a point where you feel comfortable enough to deploy apps to Kubernetes and do some basic troubleshooting, using an easy-to-follow narrative so you not only read technical information but also understand the reasoning behind it.&lt;/p&gt;

&lt;p&gt;I will try to add more articles to the list, so if you feel something is missing please let me know and I will add it to the index.&lt;/p&gt;

&lt;p&gt;So, let's jump into it:&lt;/p&gt;

&lt;p&gt;1 - &lt;a href="https://dev.to/javiermarasco/what-is-kubernetes-how-does-it-works-and-why-do-we-need-it-38pl"&gt;What is Kubernetes, how does it works and why do we need it&lt;/a&gt;&lt;br&gt;
2 - Managed cluster vs unmanaged clusters &lt;br&gt;
3 - Basic elements of an application running in Kubernetes&lt;br&gt;
4 - Deployments, replicasets and pods&lt;br&gt;
5 - Services, networking and ingress&lt;br&gt;
6 - Storage classes, volumes and how to store your app data&lt;br&gt;
7 - Secrets, config maps and how they are used&lt;/p&gt;

&lt;p&gt;I will be adding those articles soon and updating this index with links to the corresponding articles. Follow me to get notified every time I upload one of them and don't miss a single post :)&lt;/p&gt;

&lt;p&gt;As always, comments and feedback are welcome and appreciated.&lt;/p&gt;

&lt;p&gt;See you later!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>automation</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>What is Kubernetes, how does it works and why do we need it</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Thu, 22 Jun 2023 17:12:40 +0000</pubDate>
      <link>https://forem.com/javiermarasco/what-is-kubernetes-how-does-it-works-and-why-do-we-need-it-38pl</link>
      <guid>https://forem.com/javiermarasco/what-is-kubernetes-how-does-it-works-and-why-do-we-need-it-38pl</guid>
<description>&lt;p&gt;A lot of people are getting into Kubernetes, and the first thing they normally do is google "How to deploy an application to Kubernetes", which makes sense. But then you find thousands of articles (including the official documentation) explaining how to deploy an application, and you see "deployment", "replica set", "pods", "resource quotas", "secrets"... everything starts getting confusing and complex, and you get overwhelmed by the amount of information.&lt;/p&gt;

&lt;p&gt;I believe the first step with a technology that is new to you is to understand why it exists, what problems it solves, and how it works (conceptually). This gives you a clear picture of the technology and ultimately lets you decide if it is the best fit for your particular case, so let's start with that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A little history and background&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before Kubernetes, applications used to be deployed on virtual machines (and before that, on physical machines), and those machines needed libraries, dependencies, networking configurations, etc. As you can imagine, managing that was very complex, and changes took a long time and required coordination between multiple teams. Then, in 2013, there was a presentation that changed everything: Solomon Hykes presented a new project his company (with two other people) was working on. It was Docker.&lt;/p&gt;

&lt;p&gt;The key difference with Docker was actually nothing new: it was a set of capabilities that had existed in the Linux kernel for a long time, now exposed at the application level in a more comprehensible way. With this "Docker" tool it was possible to pack your code with all its dependencies into a "container" which you could take to any other system where Docker was running, and it would behave the same. That was exactly what was needed!&lt;/p&gt;

&lt;p&gt;Fast forward a few months/years and you have containers everywhere. Everyone is happy with the approach, but the more containers you have, the more complicated they become to manage. So we started using "docker-compose", a way to group containers into "logical units" to deploy them together and have some sense of integration between them. It quickly became obvious that there was a need for a way to manage large numbers of containers, and then "Docker Swarm" appeared: a tool to orchestrate the deployment of complex applications with multiple containers. This worked fine as an orchestrator, but in 2014 Google released Kubernetes as an alternative to Docker Swarm with more functionality. More features were added over time and the community adopted Kubernetes as the de facto orchestrator. Kubernetes used Docker under the hood for quite a long time to execute containers, the same as Docker Swarm, but later the Kubernetes community decided to make it possible to use any container engine you want, adjusting Kubernetes to support virtually any container runtime and removing the need to have Docker as the only option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But... how does it work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes, at the very top level, is very simple in concept. You have your image (your application code and dependencies packed into a single file made of multiple "layers") and you deploy it onto a set of machines running "something" that takes your image and makes it run. This "something" takes care of checking that your image runs on a healthy machine, restarts it when something bad happens to it, kills and restarts it if it starts consuming too many resources, handles communication in/out from the world to your application, etc.&lt;/p&gt;
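&lt;p&gt;To make this concrete, this is roughly what you hand to that "something": a small manifest describing the image you want kept running (a sketch; the names and the image are just examples):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: nginx:1.25   # your packed code + dependencies
```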

&lt;p&gt;To do this, Kubernetes has its own internal components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Server&lt;/li&gt;
&lt;li&gt;Scheduler&lt;/li&gt;
&lt;li&gt;Kubelet&lt;/li&gt;
&lt;li&gt;Kube proxy&lt;/li&gt;
&lt;li&gt;etcd&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The API Server is the interface between Kubernetes and the rest of the universe. Every time you run a command using kubectl (the CLI to manage any Kubernetes cluster), it makes a REST call to the API server of your cluster, passing the parameters and files you give kubectl as the payload of the command.&lt;/p&gt;
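&lt;p&gt;You can actually watch those REST calls by raising kubectl's log verbosity:&lt;/p&gt;

```shell
# At verbosity level 8, kubectl logs the HTTP requests and responses
# it exchanges with the API server
kubectl get pods -v=8
```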

&lt;p&gt;The Scheduler is a process that takes the container (note I am not talking about an image anymore; more on this later) and determines which node in the cluster meets the requirements to run that container (resources, exclusions, affinity, etc.).&lt;/p&gt;

&lt;p&gt;The kubelet is a process running on each "worker" node that does the actual work of making your container run. It performs all the tasks needed to ensure your code runs on the node: resource allocation, resource monitoring, process handling, etc. This is a key part, as it is also responsible for supporting different container engines.&lt;/p&gt;

&lt;p&gt;Kube-proxy is a component also running on each worker node that takes care of all the networking work for our containers. One important point about kube-proxy that many people get confused about: it does not route or sit in the middle of the traffic at all. Kube-proxy writes the networking configuration on the worker node so traffic reaches the correct container (adding and removing iptables rules, for example), which means it can crash and the applications running on your node will continue to work.&lt;/p&gt;
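&lt;p&gt;On a worker node running kube-proxy in its iptables mode you can inspect the rules it maintains (a sketch; it needs root access on the node itself):&lt;/p&gt;

```shell
# Rules written by kube-proxy; the kernel matches traffic against these
# directly, without the kube-proxy process being on the data path
sudo iptables -t nat -L KUBE-SERVICES -n
```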

&lt;p&gt;So we have a lot of components on the control nodes and the worker nodes, containers running, network routes defined, and a lot of information about our infrastructure. But how does the API server "remember" all this? This is where etcd enters the scene. etcd is a highly available and scalable key/value data store (a lot of words, but no worries, it is not that complex) whose role is to store the entire cluster state. It contains information about your cluster, and the API server updates it constantly.&lt;/p&gt;
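&lt;p&gt;As an illustration, on a cluster where you can reach etcd directly (which you cannot on a managed cluster) you could list the keys the API server has stored (a sketch; real setups also need the etcd endpoint and TLS certificate flags):&lt;/p&gt;

```shell
# Every object in the cluster lives under a key in etcd
etcdctl get / --prefix --keys-only
```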

&lt;p&gt;Now you have a basic understanding of how Kubernetes runs a container, and knowing this you can decide if Kubernetes is the right tool for your application.&lt;br&gt;
I often tend not to recommend Kubernetes for small applications that are just in the initial stages of being deployed, or when your application is already running on other infrastructure and a migration to Kubernetes would only bring you more work without much gain.&lt;/p&gt;

&lt;p&gt;Just remember that Kubernetes is a tool, and like any other tool it serves a purpose, and that purpose should drive your decision to move to it or not. Adopting Kubernetes will require a lot of investigation, learning, and effort (translated into time), so be mindful to move to Kubernetes only if it makes sense for your use case.&lt;/p&gt;

&lt;p&gt;I hope this first article of the series helped you to understand the basics of Kubernetes and decide if Kubernetes is the right choice for your next steps.&lt;/p&gt;

&lt;p&gt;If you enjoyed this article please consider following me to get alerted on every new article and consider leaving a message with your ideas and/or feedback.&lt;/p&gt;

&lt;p&gt;Thank you for reading!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>automation</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>HTTPs with Ingress controller, cert-manager and DuckDNS (in AKS/Kubernetes)</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Sat, 19 Feb 2022 14:33:32 +0000</pubDate>
      <link>https://forem.com/javiermarasco/https-with-ingress-controller-cert-manager-and-duckdns-in-akskubernetes-2jd1</link>
      <guid>https://forem.com/javiermarasco/https-with-ingress-controller-cert-manager-and-duckdns-in-akskubernetes-2jd1</guid>
<description>&lt;p&gt;This guide aims to explain a quick and easy way to get our applications in Kubernetes/AKS exposed to the internet with HTTPS, using DuckDNS as the domain name provider.&lt;br&gt;
This guide is also available on my YouTube channel @javi__codes (Spanish only for now, sorry).&lt;/p&gt;
&lt;h2&gt;
  
  
  Pre-requisites and versions:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AKS cluster version: 1.21.7&lt;/li&gt;
&lt;li&gt;Helm 3&lt;/li&gt;
&lt;li&gt;Ingress-controller nginx chart version 4.0.16&lt;/li&gt;
&lt;li&gt;Ingress-controller nginx app version 1.1.1&lt;/li&gt;
&lt;li&gt;cert-manager version 1.2.0&lt;/li&gt;
&lt;li&gt;cert-manager DuckDNS webhook version 1.2.2&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  (1) Add ingress-controller Helm repo
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  (2) Update repository
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  (3) Install ingress-controller with Helm
&lt;/h2&gt;

&lt;p&gt;With this command we will be installing the latest version from the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ingress --create-namespace 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (4) Verify the pods are running fine in our cluster
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                                      READY   STATUS    RESTARTS   AGE
nginx-ingress-ingress-nginx-controller-74fb55cbd5-hjvr9   1/1     Running   0          41m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (5) We need to verify our ingress-controller has a public IP assigned
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc -n ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should see something similar to this. The key part here is having an IP assigned under "EXTERNAL-IP". This might take a few moments to show up, which is expected: in the background, Azure is spinning up a "Public IP" resource for you and assigning it to the AKS cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.33.214    20.190.211.14   80:32321/TCP,443:30646/TCP   38m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (6) Deploy a test application
&lt;/h2&gt;

&lt;p&gt;Now we will deploy a test application that runs inside a pod, with a service that we will use to access the pods. This might feel like overkill: we have only a single pod, and having a service for a single pod seems like a lot. But keep in mind that pods can be rescheduled at any moment and can even change their IPs, while a service doesn't, so reaching our pods through a service is the best (and the intended) option. This also scales better: if we add more pods, we still use the same service to reach them, and the service will load balance between them.&lt;/p&gt;

&lt;p&gt;This is the yaml file for our test application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/http-echo&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-text=Test&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;123!"&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5678&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
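&lt;p&gt;Save the manifest above as &lt;code&gt;deployment.yaml&lt;/code&gt; (the file name is up to you) and apply it:&lt;/p&gt;

```shell
kubectl apply -f deployment.yaml

# Check that both replicas are running
kubectl get pods -l app=echo-app
```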



&lt;h2&gt;
  
  
  (7) Deploy our service
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-svc&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5678&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (8) Let's deploy an ingress resource
&lt;/h2&gt;

&lt;p&gt;Now we need to deploy an ingress resource. This tells our ingress controller how to manage the traffic arriving at the ingress controller's public IP (the one from step 5); basically, we are telling it to forward traffic from the "/" path to our application's service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-echo&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/ssl-redirect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false"&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/use-regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/$1&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/(.*)&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-svc&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are telling the ingress controller to forward all traffic on port 80 to the service &lt;code&gt;echo-svc&lt;/code&gt; on its port 80.&lt;/p&gt;

&lt;h2&gt;
  
  
  (9) Let's test it all together
&lt;/h2&gt;

&lt;p&gt;To test this we will be accessing the ingress using the public IP that we got in step 5:&lt;/p&gt;

&lt;p&gt;Using a web browser, go to &lt;a href="http://IP"&gt;http://IP&lt;/a&gt; (plain HTTP for now; we add HTTPS in the next section)&lt;/p&gt;

&lt;p&gt;Using the command line, run &lt;code&gt;curl http://IP&lt;/code&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Adding certificates with cert-manager for duckDNS
&lt;/h1&gt;

&lt;p&gt;So far so good; the only (small :) ) detail is that our ingress is reachable by an IP and not a domain/subdomain, which is hard for humans to remember, and all our traffic travels unencrypted over http, so we have no security (yet).&lt;br&gt;
We will add cert-manager to generate TLS certificates for our DuckDNS subdomain. cert-manager not only obtains certificates, it also rotates them when they are about to expire (and we can configure how often we want them to expire/rotate).&lt;/p&gt;
&lt;h2&gt;
  
  
  (10) Let's install cert-manager
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.2.0 --set 'extraArgs={--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}' --create-namespace --set installCRDs=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After a moment it will be done creating the needed resources; we can verify this by checking the status of the pods in the &lt;code&gt;cert-manager&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n cert-manager

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Something like the following should appear:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                            READY   STATUS    RESTARTS   AGE
cert-manager-6c9b44dd95-59b6n                   1/1     Running   0          47m
cert-manager-cainjector-74459fcc56-6dfn8        1/1     Running   0          47m
cert-manager-webhook-c45b7ff-hrcnx              1/1     Running   0          47m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (11) Do you need a domain for free? DuckDNS to the rescue!
&lt;/h2&gt;

&lt;p&gt;With all this in place we are ready to request a TLS certificate for our site/application, but first we need to own a domain or a subdomain to point to our public IP (step 5) so we can reach our pods/service using a name instead of an IP.&lt;br&gt;
Another very important point is that cert-manager will only issue certificates if we can prove we own the domain/subdomain (this avoids anyone requesting a certificate for a well-known domain like google.com). For this it has two methods, &lt;code&gt;http-01&lt;/code&gt; and &lt;code&gt;dns-01&lt;/code&gt;; this time we will focus on &lt;code&gt;dns-01&lt;/code&gt;, which basically works like this: we give cert-manager credentials to manage the domain/subdomain (in DuckDNS this is a token). cert-manager then generates a random string, creates a TXT record at the DNS provider with that value, waits a moment and queries public DNS servers for that TXT record. If it finds the record with the correct value, we have proven we own that domain/subdomain, so it removes the TXT record and issues a certificate for it. The process ends with a secret in our K8s/AKS cluster containing the certificate and key for that domain/subdomain; that secret is the one we will tell the ingress controller to use to serve https traffic reaching our ingress.&lt;/p&gt;
&lt;h2&gt;
  
  
  (11) Configuring our DuckDNS account
&lt;/h2&gt;

&lt;p&gt;We need to go to &lt;a href="https://www.duckdns.org/"&gt;https://www.duckdns.org/&lt;/a&gt; and log in with our account/credentials (you have multiple alternatives in the upper right part of the page). Once this is done you will see your token on the screen; that's the token we will need in step 12 of this guide.&lt;/p&gt;

&lt;p&gt;A bit below that we will see a text field where we need to enter the subdomain name we want (something.duckdns.org) and a place to assign an IP (IPv4). In there we can enter a name for our subdomain and, for the IP, the public IP of our ingress (the one from step 5), then click save/update.&lt;/p&gt;

&lt;p&gt;Now we are telling DuckDNS to redirect all the traffic that arrives to that subdomain to the IP we entered, wonderful!&lt;/p&gt;
&lt;h2&gt;
  
  
  (12) Deploy a DuckDNS cert-manager webhook handler
&lt;/h2&gt;

&lt;p&gt;Now it is time to deploy a DuckDNS webhook handler; this is what adds to cert-manager the ability to manage records in DuckDNS. We can either use a helm chart or deploy by cloning the repository where this solution lives. The helm chart didn't work for me, so I will describe the approach using the code in the repo instead.&lt;/p&gt;

&lt;p&gt;Let's clone the repository first&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/ebrianne/cert-manager-webhook-duckdns.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we install it from the cloned repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd cert-manager-webhook-duckdns

helm install cert-manager-webhook-duckdns --namespace cert-manager --set duckdns.token='YOUR_DUCKDNS_TOKEN' --set clusterIssuer.production.create=true --set clusterIssuer.staging.create=true --set clusterIssuer.email='YOUR_EMAIL' --set logLevel=2 ./deploy/cert-manager-webhook-duckdns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will see a new pod in our &lt;code&gt;cert-manager&lt;/code&gt; namespace; we can check with the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n cert-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you will see something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                            READY   STATUS    RESTARTS   AGE
cert-manager-webhook-duckdns-5cdbf66f47-kgt99   1/1     Running   0          56m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (13) ClusterIssuers and cert-manager details
&lt;/h2&gt;

&lt;p&gt;To generate certificates cert-manager offers two ClusterIssuers, one called XXXX-staging and the other XXXX-production. The main difference is that the &lt;code&gt;production&lt;/code&gt; one provides a certificate that every web browser accepts as valid; this is the one we want for our application. But while testing and learning we will make mistakes, and too many failed requests against the production issuer will get us temporarily banned from the service. To avoid this there is the &lt;code&gt;staging&lt;/code&gt; issuer, which provides a certificate our browsers treat as "valid, buuuuuuut": you will see the padlock and the https, yet the certificate description will show it is a &lt;code&gt;staging&lt;/code&gt; certificate. With the &lt;code&gt;staging&lt;/code&gt; issuer we can make as many mistakes as we need to fully understand how this works; once done, we simply switch the ClusterIssuer to the production one and get a new certificate, this time for &lt;code&gt;production&lt;/code&gt;, and since everything worked during our tests in &lt;code&gt;staging&lt;/code&gt; this one should not fail.&lt;/p&gt;

&lt;p&gt;When we installed the DuckDNS webhook we asked the chart to create both ClusterIssuers; here is what I mean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--set clusterIssuer.production.create=true --set clusterIssuer.staging.create=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (14) Let's create an ingress resource using the Staging ClusterIssuer
&lt;/h2&gt;

&lt;p&gt;Create a file called &lt;code&gt;staging-ingress.yaml&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-https-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager-webhook-duckdns-staging&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/$1&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/use-regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;superprueba.duckdns.org&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superprueba-tls-secret-staging&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superprueba.duckdns.org&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/(.*)&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example the subdomain is called &lt;code&gt;superprueba&lt;/code&gt;, I am using the clusterissuer &lt;code&gt;cert-manager-webhook-duckdns-staging&lt;/code&gt;, and the certificate will be stored in a secret called &lt;code&gt;superprueba-tls-secret-staging&lt;/code&gt;; also, all https traffic arriving for &lt;code&gt;superprueba.duckdns.org&lt;/code&gt; is forwarded to the service &lt;code&gt;echo&lt;/code&gt; on port 80.&lt;/p&gt;

&lt;p&gt;The secret name can be anything we want; it is not mandatory for it to contain the name of the subdomain/domain, but it is good practice so we can quickly identify what the secret is for.&lt;/p&gt;

&lt;p&gt;Another important detail is that the ingress resource has to be defined in the same namespace as the service it forwards traffic to, but the ingress CONTROLLER can live (and normally does) in a different namespace.&lt;/p&gt;

&lt;p&gt;Now we apply it with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f staging-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (15) Verify the creation process for our certificate
&lt;/h2&gt;

&lt;p&gt;Now if we run a &lt;code&gt;kubectl get challenge&lt;/code&gt; in the same namespace where we deployed the ingress resource we should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                                        STATE     DOMAIN                       AGE
superprueba-tls-secret-staging-6lmxj-668717679-4070204345   pending   superprueba.duckdns.org      4s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the process cert-manager uses to generate the TXT record in DuckDNS and confirm we own the subdomain/domain (basically that we provided a valid token). Once this process is done and cert-manager confirms we are the owner, the &lt;code&gt;challenge&lt;/code&gt; is deleted and a certificate and key are generated and stored in the secret we specified (&lt;code&gt;superprueba-tls-secret-staging&lt;/code&gt; in our case).&lt;/p&gt;

&lt;p&gt;If we check the status of our certificate while the &lt;code&gt;challenge&lt;/code&gt; is still pending we will see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                             READY   SECRET                           AGE
superprueba-tls-secret-staging   False   superprueba-tls-secret-staging   7m15s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And once it is done and the &lt;code&gt;challenge&lt;/code&gt; is deleted we will see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                             READY   SECRET                           AGE
superprueba-tls-secret-staging   True    superprueba-tls-secret-staging   7m15s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point we can verify that we can access our subdomain &lt;code&gt;superprueba.duckdns.org&lt;/code&gt; with a browser or using curl.&lt;/p&gt;

&lt;p&gt;With curl we would see something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; curl https://superprueba.duckdns.org/
curl: (60) schannel: SEC_E_UNTRUSTED_ROOT (0x80090325) - The certificate chain was issued by an authority that is not trusted.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is correct: we have a certificate, but it is not a &lt;code&gt;production ready&lt;/code&gt; one, just one to verify that our cert-manager configuration is correct. Now we can change the clusterissuer from &lt;code&gt;staging&lt;/code&gt; to &lt;code&gt;production&lt;/code&gt; to obtain a real, valid certificate.&lt;/p&gt;

&lt;h2&gt;
  
  
  (16) Adjusting our ingress resource to request a production certificate
&lt;/h2&gt;

&lt;p&gt;Now let's create a new file called &lt;code&gt;production-ingress.yaml&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-https-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager-webhook-duckdns-production&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/$1&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/use-regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;superprueba.duckdns.org&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superprueba-tls-secret-production&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superprueba.duckdns.org&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/(.*)&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then let's apply it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f production-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, we can run the same verification steps as before to confirm that the production certificate is issued and stored in our secret, and confirm by navigating to our site again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://superprueba.duckdns.org/
Test 123!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (17) Troubleshooting
&lt;/h2&gt;

&lt;p&gt;Ok, that was how to configure this solution when everything works without problems. But sometimes we have a typo, a misconfigured IP or a wrong name somewhere, and it is a pain in the neck to figure out what is wrong if you are just following this tutorial as a first approach to kubernetes.&lt;br&gt;
So here are a few things to check in case something is not working as expected.&lt;/p&gt;

&lt;p&gt;a) Check the cert-manager webhook logs; there you will find all the actions the webhook performs against the duckdns service. If there is a problem with the token you are using, a failure to reach duckdns, etc., this is where you will see it.&lt;/p&gt;

&lt;p&gt;b) Check the logs of cert-manager itself (the core element), the pods called &lt;code&gt;cert-manager-XXXX&lt;/code&gt;. Here you will find information on what cert-manager is doing: requesting a certificate, creating a secret, running a challenge, etc.&lt;/p&gt;

&lt;p&gt;c) Verify the logs of the ingress-controller pods; here we can see the requests reaching our cluster. If requests can't reach our ingress they cannot be routed to any service, so we should see each request being ingested here.&lt;/p&gt;

&lt;p&gt;d) Check that the configuration in DuckDNS points to the correct IP as we configured it. This can be done with &lt;a href="https://digwebinterface.com/"&gt;https://digwebinterface.com/&lt;/a&gt;, a simple page where you input a domain name and it returns the IP it points to.&lt;/p&gt;

&lt;h2&gt;
  
  
  About me
&lt;/h2&gt;

&lt;p&gt;If this article was useful to you or you liked it, please consider giving it a like, writing a comment or subscribing to my space or my other social networks; that helps me understand what content is best to share and what people like to read or see.&lt;/p&gt;

&lt;p&gt;You can subscribe, follow, like or message me on: &lt;br&gt;
Twitter -&amp;gt; @javi_codes&lt;br&gt;
Instagram -&amp;gt; javi_codes&lt;br&gt;
LinkedIn -&amp;gt; javiermarasco&lt;br&gt;
Youtube -&amp;gt; javi_codes&lt;br&gt;
GitHub -&amp;gt; &lt;a href="https://github.com/javiermarasco"&gt;https://github.com/javiermarasco&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the code for this is in the following repository:&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/javiermarasco/https_duckdns"&gt;https://github.com/javiermarasco/https_duckdns&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>webdev</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Let's test our configurations with Powershell and Pester</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Sat, 12 Feb 2022 16:54:32 +0000</pubDate>
      <link>https://forem.com/javiermarasco/lets-test-our-configurations-with-powershell-and-pester-15ol</link>
      <guid>https://forem.com/javiermarasco/lets-test-our-configurations-with-powershell-and-pester-15ol</guid>
      <description>&lt;p&gt;I tend to automate everything, it makes sense that if there is something you are requested to do more than once and the time you need to invest to automate it is not huge, you will spend some time automating it. But I often found myself having a configuration file to not need to deal with modifying my scripts, I simply create a script that does the job and provide the script with configuration files as input.&lt;br&gt;
This is a very nice approach when you don't want to have parameters in your scripts as well.&lt;/p&gt;

&lt;p&gt;But then you share this in your company and more people start using your automation. You, of course, know what the config file should look like, but what if others are not aware? What if that automation ends up in a pipeline? You have the usage documented (of course you do!) but others are probably not aware of it.&lt;/p&gt;

&lt;p&gt;So.... how do you ensure your script is used in a safe way and your configuration file is honored? Well, keep reading and I will show you how I do it :)&lt;/p&gt;
&lt;h2&gt;
  
  
  General idea
&lt;/h2&gt;

&lt;p&gt;Let's say we have a terraform file that will build some resources in the cloud, and we feed this terraform plan a JSON file; inside the terraform plan we decode the JSON and use its contents to provide values to our resources.&lt;/p&gt;

&lt;p&gt;Since this JSON is something you create, the format is whatever makes sense for your needs; it can have any shape and any number of fields, but be aware that the tests might change a bit depending on the structure you give it.&lt;/p&gt;
&lt;h2&gt;
  
  
  Example configuration file
&lt;/h2&gt;

&lt;p&gt;For this example I will go with a general JSON file I made up; it contains arrays, nested objects, booleans and arrays inside nested objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"vnet_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"demo-vnet-name-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"resource_group_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"resource_groupname"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"address_space"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"10.0.0.0/23"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"dns_servers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"10.0.1.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"10.0.0.128"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"vnet_location"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"eastus2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Subnets"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"misubnet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"address_prefixes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"10.0.0.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"10.0.0.128/24"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Enabled"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"vnet_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"demo-vneT-name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"resource_group_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"resource_groupname"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"address_space"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"10.0.2.0/24"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"dns_servers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"10.0.1.12"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"vnet_location"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"eastus3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Subnets"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"misubnet2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"address_prefixes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"10.0.2.0/24"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"10.0.2.128/24"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Enabled"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file is an array that contains 2 definitions to build two hypothetical network resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thinking process
&lt;/h2&gt;

&lt;p&gt;The first thing we want to do is sit down, relax and look at our configuration file, thinking about what makes sense to validate and what doesn't. We don't want to write kilometers of tests when we don't need to validate everything; maybe some fields are fine with any value (like tags, where we probably don't care about the exact value), while others are very important to validate, like a naming convention.&lt;/p&gt;

&lt;p&gt;Once we have seen what we want to test, the next step is to write down a list of the tests we want to address. Let's do that.&lt;/p&gt;

&lt;p&gt;I want to test:&lt;/p&gt;

&lt;p&gt;1) My vnet_name is not empty&lt;br&gt;
2) The vnet_name follows my naming convention (very simple: it needs to have 4 sections)&lt;br&gt;
3) My vnet_name is composed of only lower case letters&lt;br&gt;
4) The location for the resources is one that I "approve" to build on&lt;/p&gt;

&lt;p&gt;With this list, we are ready to start writing our tests.&lt;/p&gt;
&lt;h2&gt;
  
  
  Pester
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What it is
&lt;/h3&gt;

&lt;p&gt;Pester is a test framework for PowerShell. It is very easy to use, provides plenty of methods/keywords for writing assertions, and its installation and configuration couldn't be simpler.&lt;br&gt;
Using Pester we write blocks defined by the keyword "Describe", which are logical ways to group our assertions; inside each "Describe" block we create one or more "It" asserts, each of which runs a validation and is reported as a "Passed" or "Failed" assertion.&lt;/p&gt;

&lt;p&gt;In the output, when you run the "Invoke-Pester" command with the "-Output Detailed" parameter, the "Describe" block groups its "It" asserts, which makes the output easier to read.&lt;/p&gt;
&lt;h3&gt;
  
  
  How to install (Windows)
&lt;/h3&gt;

&lt;p&gt;Installing Pester is as simple as getting it from the PSGallery by following &lt;a href="https://pester-docs.netlify.app/docs/introduction/installation"&gt;this&lt;/a&gt; guide.&lt;/p&gt;

&lt;p&gt;The key steps are:&lt;/p&gt;

&lt;p&gt;1) Open a PowerShell terminal as administrator&lt;br&gt;
2) Run &lt;code&gt;Install-Module -Name Pester -Force -SkipPublisherCheck&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;No big mystery here: it installs Pester as a module on your host and leaves it ready to use.&lt;/p&gt;
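&lt;p&gt;As an optional sanity check (not part of the original steps), you can confirm which Pester versions are now available; &lt;code&gt;Get-Module&lt;/code&gt; and &lt;code&gt;Import-Module&lt;/code&gt; are standard PowerShell cmdlets:&lt;/p&gt;

```powershell
# List every Pester version visible to this host; expect a 5.x entry after the install
Get-Module -Name Pester -ListAvailable | Select-Object Name, Version

# Explicitly load the new major version into the current session
Import-Module Pester -MinimumVersion 5.0
```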
&lt;h3&gt;
  
  
  How it works
&lt;/h3&gt;

&lt;p&gt;As mentioned before, writing a test simply means creating &lt;code&gt;Describe&lt;/code&gt; blocks to group similar assertions and, inside those blocks, writing one &lt;code&gt;It&lt;/code&gt; block for each assertion.&lt;/p&gt;

&lt;p&gt;For the &lt;code&gt;Describe&lt;/code&gt; blocks there is not much to say: you give them a name and nothing more.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;It&lt;/code&gt; asserts are different: you can pass them a &lt;code&gt;TestCases&lt;/code&gt; array of elements that the assert will evaluate one by one, or you can skip that and simply write the validation code inside the &lt;code&gt;It&lt;/code&gt; block.&lt;/p&gt;

&lt;p&gt;When you write an assertion, you perform an operation and then pipe it to the &lt;code&gt;should&lt;/code&gt; operator (which is part of what you install with Pester). The &lt;code&gt;should&lt;/code&gt; operator has parameters you can use to describe what you expect the evaluation to return.&lt;/p&gt;

&lt;p&gt;Some parameters for &lt;code&gt;should&lt;/code&gt; are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Be: Compares the evaluation result to a desired value&lt;/li&gt;
&lt;li&gt;Not: Inverts the boolean of the evaluation&lt;/li&gt;
&lt;li&gt;BeNullOrEmpty: Checks if the evaluation is an empty string or not defined at all&lt;/li&gt;
&lt;li&gt;BeGreaterThan: Checks if the evaluation is greater than a defined value&lt;/li&gt;
&lt;li&gt;BeLessThan: Checks if the evaluation is less than a defined value&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://pester-docs.netlify.app/docs/commands/Should"&gt;Here&lt;/a&gt; you can find the complete list&lt;/p&gt;

&lt;p&gt;Once you have your test written, you can run it with &lt;code&gt;Invoke-Pester -Path &amp;lt;file.ps1&amp;gt;&lt;/code&gt;, and if you like verbose output (like me), add &lt;code&gt;-Output Detailed&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Example Pester test
&lt;/h2&gt;

&lt;p&gt;We are going to use Pester 5.3.1 in this guide. Next, I will split my test into sections and explain each one:&lt;/p&gt;
&lt;h3&gt;
  
  
  Pre-test data
&lt;/h3&gt;

&lt;p&gt;We need a few elements before we can start testing; these define certain values for the tests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="c"&gt;# We retrieve the configuration to test&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$configfile&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Get-Content&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./configuration.json"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ConvertFrom-Json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Depth&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;4&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="c"&gt;# Define the list of approved regions to deploy resources&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$Regions&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;@(&lt;/span&gt;&lt;span class="s1"&gt;'eastus2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'eastus'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="c"&gt;# We create an empty array to pass to our tests&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nv"&gt;$TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;@()&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="c"&gt;# We populate our test cases creating elements named "Instance" for each entry in our config file&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="kr"&gt;foreach&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$item&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kr"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$configfile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nv"&gt;$TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;+=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;@{&lt;/span&gt;&lt;span class="nx"&gt;Instance&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$item&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  "My vnet_name is not empty"
&lt;/h3&gt;

&lt;p&gt;Let's define the test for this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example to verify if the value was defined, this prevents missing important fields.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Describe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Check vnet_name is defined."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;It&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Verify the name is set in &amp;lt;Instance.vnet_name&amp;gt;."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="kr"&gt;Param&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$Instance&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nv"&gt;$Instance&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vnet_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-not&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Benullorempty&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are using "should", "not" and "BeNullOrEmpty" to compare against the value we got from the configuration; Pester already has all of these functions ready for us to use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Check naming convention
&lt;/h3&gt;

&lt;p&gt;In this one we assume our naming convention is something that needs to have 4 segments separated by a "-". This is a very simple check; note that you could even validate each of those segments to see whether its value is correct.&lt;br&gt;
Another useful check is to validate that no other resource has already been created with this name.&lt;br&gt;
Keep in mind you can have multiple "It" asserts inside a "Describe" block.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example to verify namingconvention, this helps to enforce we don't create resources wrongly named.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Describe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Check naming convention for vnet_name."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;It&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Verify the vnet_name for &amp;lt;Instance.vnet_name&amp;gt; matches naming convention length."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="kr"&gt;Param&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$Instance&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nv"&gt;$Instance&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vnet_name&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"-"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-be&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;4&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;    
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Only lower letters are allowed on the vnet_name
&lt;/h3&gt;

&lt;p&gt;This one is very interesting: we are going to use a regular expression to check that the name we are providing is composed of lowercase letters only. Regular expressions are very powerful, and we can write really good tests by using them to define exactly what we are expecting.&lt;/p&gt;

&lt;p&gt;In this example, cmatch performs case-sensitive matches while imatch performs case-insensitive matches.&lt;br&gt;
&lt;/p&gt;
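&lt;p&gt;A quick illustration of the difference between the two operators (the sample values are hypothetical):&lt;/p&gt;

```powershell
# -cmatch is case-sensitive; -imatch (and plain -match) is case-insensitive
"MyVnet" -imatch "^[a-z]*$"   # $true  - case is ignored
"MyVnet" -cmatch "^[^A-Z]*$"  # $false - uppercase letters are present
"myvnet" -cmatch "^[^A-Z]*$"  # $true  - only lowercase letters
```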

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example to validate our names are all lowercase, useful for resources that doesn't support uppercase&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="c"&gt;# here the cmatch uses a regular expression, this can be adjusted to match any patter we need.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Describe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Check name for vnet_name should be all lowercase."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;It&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Verify &amp;lt;Instance.vnet_name&amp;gt; is all lowercase."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="kr"&gt;Param&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$Instance&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nv"&gt;$Instance&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vnet_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-cmatch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^[^A-Z]*$"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-be&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;$true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Validate we are only deploying to approved locations
&lt;/h3&gt;

&lt;p&gt;At the beginning of this section we defined a list of approved locations; we will use it to validate that the location in our configuration is on that list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example on how to validate a value in an array of values, like in this case where an &lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="c"&gt;# approved list of regions is given to the test to validate we build in the approved locations/regions.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Describe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Check location/region to deploy."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;It&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Verify if region for &amp;lt;Instance.vnet_name&amp;gt; is approved."&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$TestCases&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Verbose&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="kr"&gt;Param&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$Instance&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nv"&gt;$Regions&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-contains&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$Instance&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;vnet_location&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;should&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-be&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;$true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final notes
&lt;/h2&gt;

&lt;p&gt;As usual, you can find my other social networks &lt;a href="https://linktr.ee/javi__codes"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you found this useful or have any recommendations, please let me know in the comments, and follow me for future posts; that way I know which content the community wants most, and I can focus on producing more of it. I hope you enjoyed it.&lt;/p&gt;

&lt;p&gt;Thanks for reading!!&lt;/p&gt;

</description>
      <category>powershell</category>
      <category>testing</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>(Spanish) Ingress-controller, cert-manager and DuckDNS on AKS</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Sun, 06 Feb 2022 12:24:55 +0000</pubDate>
      <link>https://forem.com/javiermarasco/ingress-controller-cert-manager-y-duckdns-en-aks-3nf6</link>
      <guid>https://forem.com/javiermarasco/ingress-controller-cert-manager-y-duckdns-en-aks-3nf6</guid>
      <description>&lt;h1&gt;
  
  
  Ingress controller with NGINX and cert-manager using DuckDNS
&lt;/h1&gt;

&lt;p&gt;This guide shows a way to configure an ingress controller and cert-manager (using DuckDNS) to quickly (and for free) get an HTTPS URL pointing to our AKS cluster, where we can expose our applications to the internet.&lt;br&gt;
You can also find this guide as a video on my YouTube channel (javi__codes). More links to my other social networks are at the end of the guide.&lt;/p&gt;

&lt;p&gt;Now, let's get started:&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites and versions:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AKS cluster version: 1.21.7&lt;/li&gt;
&lt;li&gt;Helm 3&lt;/li&gt;
&lt;li&gt;Ingress-controller nginx chart version 4.0.16&lt;/li&gt;
&lt;li&gt;Ingress-controller nginx app version 1.1.1&lt;/li&gt;
&lt;li&gt;cert-manager version 1.2.0&lt;/li&gt;
&lt;li&gt;cert-manager DuckDNS webhook version 1.2.2&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  (1) Add the ingress nginx helm repo
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  (2) Update the repos
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  (3) Install the nginx ingress-controller with helm
&lt;/h2&gt;

&lt;p&gt;This will install the latest version of the chart available in the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ingress --create-namespace 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (4) Verify that the pods are running correctly
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                                      READY   STATUS    RESTARTS   AGE
nginx-ingress-ingress-nginx-controller-74fb55cbd5-hjvr9   1/1     Running   0          41m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (5) Verify that our ingress has a public IP assigned
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc -n ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should see something like this. The important thing is that an IP is assigned under "EXTERNAL-IP"; sometimes it takes a few moments, because behind the scenes Azure has to create a "Public IP" resource and assign it to the AKS cluster. If no IP appears when you run the command, wait a moment and try again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.33.214    20.190.211.14   80:32321/TCP,443:30646/TCP   38m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (6) Deploy a test application
&lt;/h2&gt;

&lt;p&gt;Now we are going to deploy an application running in a pod, plus a service that we will use to reach the application's pods. Deploying a single pod with a service for that one pod may seem pointless, but remember that the pod can be deleted and another one can take its place, and the pod's internal IP is not guaranteed to survive that; to reach it directly we would have to constantly update the IP we use whenever the pod is rescheduled (deleted and recreated elsewhere). A service avoids exactly this: we always reach the pods through the service, and it doesn't matter whether we have 1, 10 or 70 pods, we always use the same service.&lt;/p&gt;

&lt;p&gt;The test application's yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/http-echo&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-text=aplicación&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;de&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;prueba"&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5678&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
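&lt;p&gt;The guide doesn't show the apply step explicitly; assuming you saved the manifest above as &lt;code&gt;echo-app.yaml&lt;/code&gt; (a filename chosen for this example), you would deploy and check it like this:&lt;/p&gt;

```shell
# Create (or update) the deployment from the manifest
kubectl apply -f echo-app.yaml

# Confirm the echo-app pods are running
kubectl get pods -n default -l app=echo-app
```

&lt;p&gt;The same &lt;code&gt;kubectl apply -f&lt;/code&gt; pattern applies to the service and ingress manifests in the next steps.&lt;/p&gt;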



&lt;h2&gt;
  
  
  (7) Deploy a service for our application
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-svc&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5678&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (8) Now deploy an ingress resource
&lt;/h2&gt;

&lt;p&gt;In this step we deploy an "ingress resource". This creates an ingress that tells our ingress controller to redirect traffic entering the cluster through the ingress controller's public IP (the one from step 5) to an internal service in our cluster. Remember that our application's service has no public IP assigned.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-echo&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/ssl-redirect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false"&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/use-regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/$1&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/(.*)&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-svc&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are telling Kubernetes to create an ingress resource that sends all traffic entering through the ingress controller on port 80 to the &lt;code&gt;echo-svc&lt;/code&gt; service on its port 80.&lt;/p&gt;

&lt;h2&gt;
  
  
  (9) Test that everything works
&lt;/h2&gt;

&lt;p&gt;Now we can try accessing our site using the ingress controller's public IP from step 5.&lt;/p&gt;

&lt;p&gt;Using a web browser: &lt;a href="https://IP"&gt;https://IP&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the command line with curl: &lt;a href="https://IP"&gt;https://IP&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Adding certificates with cert-manager and DuckDNS
&lt;/h1&gt;

&lt;p&gt;Well, so far so good, but our ingress is exposed through an IP (not the best way to expose a service, since it is hard to remember) and everything is plain http, with no encryption.&lt;/p&gt;

&lt;p&gt;Let's add cert-manager, a solution that lets us obtain TLS certificates for our web domains and rotate them automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  (10) Install cert-manager
&lt;/h2&gt;
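&lt;p&gt;The install below pulls the chart from the &lt;code&gt;jetstack&lt;/code&gt; repository, a step the guide doesn't show; if that repo isn't registered on your machine yet, add it first:&lt;/p&gt;

```shell
# Register the official jetstack chart repository and refresh the index
helm repo add jetstack https://charts.jetstack.io
helm repo update
```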



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.2.0 --set 'extraArgs={--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}' --create-namespace --set installCRDs=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After it finishes creating its resources, we can verify that the cert-manager pods are running correctly&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n cert-manager

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should see something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                            READY   STATUS    RESTARTS   AGE
cert-manager-6c9b44dd95-59b6n                   1/1     Running   0          47m
cert-manager-cainjector-74459fcc56-6dfn8        1/1     Running   0          47m
cert-manager-webhook-c45b7ff-hrcnx              1/1     Running   0          47m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (11) Need a free temporary domain? DuckDNS to the rescue
&lt;/h2&gt;

&lt;p&gt;At this point we are ready to request a TLS certificate for our site, BUT we need our own internet domain to point at the public IP of our ingress controller (step 5) so that we can reach our services using a domain name (or subdomain).&lt;br&gt;
Another important point is that cert-manager only issues TLS certificates if it can prove that we own the domain we want to use (to avoid handing us a certificate for google.com, for example). It supports two validation methods, but today we will focus on the one called &lt;code&gt;DNS-01&lt;/code&gt;. In this model, cert-manager creates a TXT record with a random value in our DNS zone and then tries to read it back. If it can read the record, that proves we control the domain (because cert-manager was able to create it). At that point cert-manager issues a certificate, deletes the TXT record, and stores the certificate (and its key) in a secret in our Kubernetes cluster.&lt;/p&gt;
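&lt;p&gt;While a challenge is in progress you can even observe that TXT record yourself. As a sketch (the &lt;code&gt;_acme-challenge&lt;/code&gt; prefix is the record name used by the ACME &lt;code&gt;DNS-01&lt;/code&gt; flow, and &lt;code&gt;superprueba&lt;/code&gt; is the example subdomain used later in this guide):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dig TXT _acme-challenge.superprueba.duckdns.org +short
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;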

&lt;p&gt;Once this process is finished, we can create an ingress resource telling it that traffic arriving for that domain (and a specific path) should use a certificate (stored in a secret) and be forwarded to a service (the one in front of our pod).&lt;/p&gt;
&lt;h2&gt;
  
  
  (11) Setting up our DuckDNS account
&lt;/h2&gt;

&lt;p&gt;Go to &lt;a href="https://www.duckdns.org/"&gt;https://www.duckdns.org/&lt;/a&gt; and log in with one of the methods listed at the top of the page.&lt;br&gt;
Once logged in, you will see a TOKEN; that is the one we will use in step 12 of this guide.&lt;/p&gt;

&lt;p&gt;Further down the page there is a section where we can create a subdomain of duckdns.org and assign it an IP. That IP (IPv4) has to be the IP of our ingress controller (the one from step 5 of this guide). Once that is done, save the changes by clicking the button next to where you entered the IP (IPv4).&lt;/p&gt;

&lt;p&gt;With this we have told DuckDNS that every request going to the subdomain we configured should be directed to the IP we entered.&lt;/p&gt;
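&lt;p&gt;If you prefer to script this step instead of clicking through the page, DuckDNS also exposes a simple HTTP update API. A minimal sketch (replace the subdomain, TOKEN and IP with your own values; the service answers &lt;code&gt;OK&lt;/code&gt; on success):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl "https://www.duckdns.org/update?domains=SUBDOMAIN&amp;token=TOKEN&amp;ip=INGRESS_IP"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;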
&lt;h2&gt;
  
  
  (12) Deploying the DuckDNS webhook handler
&lt;/h2&gt;

&lt;p&gt;Now we deploy the DuckDNS webhook; this is the piece that lets cert-manager interact with DuckDNS. The webhook has a Helm chart we could use, but in my case it did not work; what did work for me was cloning the repository and installing from the files in it.&lt;/p&gt;

&lt;p&gt;Clone the repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/ebrianne/cert-manager-webhook-duckdns.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install from the cloned repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd cert-manager-webhook-duckdns

helm install cert-manager-webhook-duckdns --namespace cert-manager --set duckdns.token='YOUR_DUCKDNS_TOKEN' --set clusterIssuer.production.create=true --set clusterIssuer.staging.create=true --set clusterIssuer.email='YOUR_EMAIL' --set logLevel=2 ./deploy/cert-manager-webhook-duckdns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point we will have a new pod in our &lt;code&gt;cert-manager&lt;/code&gt; namespace; we can see it with the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n cert-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and it would look something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                            READY   STATUS    RESTARTS   AGE
cert-manager-webhook-duckdns-5cdbf66f47-kgt99   1/1     Running   0          56m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (12) ClusterIssuers and cert-manager details
&lt;/h2&gt;

&lt;p&gt;Conceptually, cert-manager handles two ways of generating certificates: one certificate issuer called XXXX-Staging and another called XXXX-Production. The main difference is that the Production one gives us a valid certificate we can use in the real world, one our browsers will trust, while the Staging one generates a certificate that browsers will recognize as HTTPS but flag as a "test" certificate.&lt;/p&gt;

&lt;p&gt;The idea is to do all our testing against the Staging issuer: if we make mistakes there and request many bad certificates, it is not a big deal, whereas doing the same against the Production issuer will very likely get us banned for a certain period of time.&lt;/p&gt;

&lt;p&gt;When we installed the DuckDNS webhook, we specified that we wanted to create both the production and staging ClusterIssuers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--set clusterIssuer.production.create=true --set clusterIssuer.staging.create=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (13) Creating an ingress resource using the staging cluster issuer
&lt;/h2&gt;

&lt;p&gt;Create a file named staging-ingress.yaml with this content&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-https-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager-webhook-duckdns-staging&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/$1&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/use-regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;superprueba.duckdns.org&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superprueba-tls-secret-staging&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superprueba.duckdns.org&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/(.*)&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example my DuckDNS subdomain is &lt;code&gt;superprueba&lt;/code&gt;, and I am declaring that we use the &lt;code&gt;cert-manager-webhook-duckdns-staging&lt;/code&gt; ClusterIssuer, store the certificate in a secret named &lt;code&gt;superprueba-tls-secret-staging&lt;/code&gt;, and forward all HTTPS traffic arriving at &lt;code&gt;superprueba.duckdns.org&lt;/code&gt; to the &lt;code&gt;echo&lt;/code&gt; service on its port 80.&lt;/p&gt;

&lt;p&gt;The secret name can be anything; it does not have to contain the subdomain name, but it is good practice to make it descriptive so it is clear what it is used for.&lt;/p&gt;

&lt;p&gt;One important detail is that this ingress resource has to live in the same namespace as the service we are forwarding traffic to. The ingress CONTROLLER can (and I recommend it) live in a different namespace, just like cert-manager.&lt;/p&gt;

&lt;p&gt;Now apply it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f staging-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (14) Verifying the certificate creation process
&lt;/h2&gt;

&lt;p&gt;Now, in the namespace where we deployed the ingress resource and the pods/services, running &lt;code&gt;kubectl get challenge&lt;/code&gt; will show something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                                        STATE     DOMAIN                       AGE
superprueba-tls-secret-staging-6lmxj-668717679-4070204345   pending   superprueba.duckdns.org      4s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the process cert-manager performs to create the TXT record in DuckDNS and prove that we own that domain/subdomain (that the token we provided is correct). Once the process finishes and ownership of the domain/subdomain is verified, this &lt;code&gt;challenge&lt;/code&gt; disappears and a certificate is generated and stored in a secret with the name we specified in the ingress resource (&lt;code&gt;superprueba-tls-secret-staging&lt;/code&gt; in our case).&lt;/p&gt;
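&lt;p&gt;If the &lt;code&gt;challenge&lt;/code&gt; stays in &lt;code&gt;pending&lt;/code&gt; for a long time, these commands give more detail about what cert-manager is waiting for (a sketch, assuming the resources live in the &lt;code&gt;default&lt;/code&gt; namespace):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe challenge -n default
kubectl describe certificate superprueba-tls-secret-staging -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;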

&lt;p&gt;If we check the state of our certificate while the &lt;code&gt;challenge&lt;/code&gt; still exists, we will see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                             READY   SECRET                           AGE
superprueba-tls-secret-staging   False    superprueba-tls-secret-staging   7m15s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And once the process finishes and the &lt;code&gt;challenge&lt;/code&gt; disappears, we see this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                             READY   SECRET                           AGE
superprueba-tls-secret-staging   True    superprueba-tls-secret-staging   7m15s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point we can try accessing our domain &lt;code&gt;superprueba.duckdns.org&lt;/code&gt; using a browser or curl.&lt;/p&gt;

&lt;p&gt;With curl we would see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; curl https://superprueba.duckdns.org/
curl: (60) schannel: SEC_E_UNTRUSTED_ROOT (0x80090325) - The certificate chain was issued by an authority that is not trusted.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is expected: we do have a certificate, but it is not a "production" certificate; it is one meant to confirm that everything is configured correctly in cert-manager and that we are ready to request the definitive certificate.&lt;/p&gt;
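&lt;p&gt;If you still want to inspect the staging certificate, curl can be told to skip trust verification with &lt;code&gt;-k&lt;/code&gt; and to show the TLS handshake details with &lt;code&gt;-v&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -kv https://superprueba.duckdns.org/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;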

&lt;h2&gt;
  
  
  (15) Adjusting our ingress resource to request a production certificate
&lt;/h2&gt;

&lt;p&gt;Now create a new file named &lt;code&gt;production-ingress.yaml&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo-https-ingress&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager-webhook-duckdns-production&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/$1&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/use-regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;superprueba.duckdns.org&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superprueba-tls-secret-production&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superprueba.duckdns.org&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/(.*)&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And of course apply it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f production-ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, we can repeat the same verification steps as before, and finally check with a browser or curl whether our HTTPS site is working. If everything went well, we should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://superprueba.duckdns.org/
aplicación de prueba
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  (16) Troubleshooting
&lt;/h2&gt;

&lt;p&gt;So far we have covered the happy path, where everything worked perfectly: no typos, the right IPs everywhere, no mixed-up secret names (not that any of that has ever happened to me, absolutely not ...). But what do we do when something like that does happen? How do we identify where the problem is? Here are my recommendations (the ones that helped me every time I did NOT have something misconfigured):&lt;/p&gt;

&lt;p&gt;a) Check the logs of the cert-manager webhook pod. In the DuckDNS webhook pod we can see every step cert-manager takes against DuckDNS, and we can tell, for example, whether our TOKEN is wrong or whether DuckDNS is not answering the requests correctly.&lt;/p&gt;

&lt;p&gt;b) Check the logs of the cert-manager pod, the one simply named &lt;code&gt;cert-manager-XXXX&lt;/code&gt;. It shows information about what cert-manager is doing, including the creation or modification of the secret that will hold the certificate. The webhook only handles the communication and verification with DuckDNS; this pod does the work inside our Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;c) Check the ingress controller log. Here we can see when a request reaches our ingress controller, which IP it came from, and whether there is any error in the request. If something is misconfigured in the ingress, this is where we will see what is going on.&lt;/p&gt;

&lt;p&gt;d) Use a tool to verify that the record we set in DuckDNS points to the public IP of our ingress controller. One I use is &lt;a href="https://digwebinterface.com/"&gt;https://digwebinterface.com/&lt;/a&gt;; with it we can check whether the change we made in DuckDNS is configured correctly.&lt;/p&gt;
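&lt;p&gt;The same check can be done from the command line with &lt;code&gt;dig&lt;/code&gt;; the answer should be the public IP of the ingress controller from step 5:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dig +short superprueba.duckdns.org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;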

&lt;h2&gt;
  
  
  Final notes
&lt;/h2&gt;

&lt;p&gt;As always, if this guide helped you or you found it interesting, I would really appreciate you sharing it with as many people as you can. The more people see it, the more likely I am to receive suggestions on how to improve my next articles (or fix any errors in this one), and it encourages me to keep writing and sharing with the rest of the community.&lt;br&gt;
I will also turn this guide into a video and upload it to my YouTube channel. See you there (and while you are at it... subscribe, like, and leave a comment if you enjoy the video, all of that helps me)&lt;/p&gt;

&lt;p&gt;Here are links to my social networks and ways to contact me:&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/javiermarasco/https_duckdns"&gt;https://github.com/javiermarasco/https_duckdns&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Twitter -&amp;gt; @javi_codes&lt;br&gt;
Instagram -&amp;gt; javi_codes&lt;br&gt;
LinkedIn -&amp;gt; javiermarasco&lt;br&gt;
Youtube -&amp;gt; javi_codes&lt;/p&gt;

</description>
      <category>azure</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>(Spanish) PowerGrafana, que es y como se usa</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Tue, 07 Dec 2021 17:27:39 +0000</pubDate>
      <link>https://forem.com/javiermarasco/powergrafana-que-es-y-como-se-usa-15od</link>
      <guid>https://forem.com/javiermarasco/powergrafana-que-es-y-como-se-usa-15od</guid>
      <description>&lt;h2&gt;
  
  
  PowerGrafana, what is it and what is it used for?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Let's start with some context and how it all began
&lt;/h3&gt;

&lt;p&gt;Sometimes we have to monitor an application, service, or component (among other things), and the tool we or our company use is &lt;a href="https://grafana.com"&gt;Grafana&lt;/a&gt;. While we only have a few things to monitor this is not a big problem, but as we add more elements, the dashboards become more complex to maintain and/or configure.&lt;/p&gt;

&lt;p&gt;Imagine you have to deploy 20 applications, each running in an &lt;a href="https://azure.microsoft.com/en-us/services/app-service"&gt;app service&lt;/a&gt;. You would surely want to monitor CPU and memory usage (to start with), and probably a few more things.&lt;br&gt;
In this simplistic scenario we have to configure several panels on a dashboard and, in each panel, set the metrics we want to display (CPU and memory, in our case).&lt;/p&gt;

&lt;p&gt;This sounds simple, but before long we will surely need to add another metric or deploy a new version of our application. Worse, we might deploy a new version of some components but not others, while having to keep monitoring every version (the initial one plus the new one) of each component. As you can see, this becomes ever more complex to represent in Grafana: it means a lot of time clicking through the interface or (the alternative) editing JSON files to paste into the Grafana web interface to generate the dashboards or panels (believe me, it is very easy to make mistakes editing those files).&lt;/p&gt;
&lt;h3&gt;
  
  
  The alternative
&lt;/h3&gt;

&lt;p&gt;PowerGrafana was created to solve this problem (or at least make it easier to handle) by abstracting away all the complexity of dealing with the web interface, the JSON files, or even typing the names of the resources to monitor by hand.&lt;br&gt;
Using a simple PowerShell module we can quickly iterate through our resources and, for each one, create a panel showing CPU and memory usage.&lt;/p&gt;
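&lt;p&gt;As a small sketch of that idea, using the &lt;code&gt;New-GrafanaDashboard&lt;/code&gt; cmdlet whose help is shown below (the list of application names here is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical list of applications we want dashboards for
$apps = @('billing-api', 'orders-api', 'frontend')

foreach ($app in $apps) {
    # One dashboard per application, tagged so it is easy to find in Grafana
    New-GrafanaDashboard -DashboardName "Monitoring - $app" -Tags @('Azure', $app)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;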

&lt;p&gt;Each command ships with its own help, which you can consult by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;PS&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Get-Help&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;New-GrafanaDashboard&lt;/span&gt;&lt;span class="w"&gt; 

&lt;/span&gt;&lt;span class="n"&gt;NAME&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;New-GrafanaDashboard&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;SYNOPSIS&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;Creates&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;dashboard&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Grafana&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;span class="n"&gt;SYNTAX&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;New-GrafanaDashboard&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;-DashboardName&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Object&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="nt"&gt;-Tags&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;CommonParameters&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;span class="n"&gt;DESCRIPTION&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;This&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;cmdlet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;will&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;an&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;empty&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;dashboard&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Grafana&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;that&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;can&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;be&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;used&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;starting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;point&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;grafana&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;monitoring.&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;EXAMPLE&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;New-GrafanaDashboard&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-DashboardName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"My new dashboard"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Tags&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;@(&lt;/span&gt;&lt;span class="s1"&gt;'Web'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'Azure'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'Production'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;RELATED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;LINKS&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;REMARKS&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;To&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;see&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;examples&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Get-Help New-GrafanaDashboard -Examples"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="kr"&gt;For&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;more&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;information&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Get-Help New-GrafanaDashboard -Detailed"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="kr"&gt;For&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;technical&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;information&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Get-Help New-GrafanaDashboard -Full"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="kr"&gt;For&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;online&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;help&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Get-Help New-GrafanaDashboard -Online"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/javiermarasco/PowerGrafana"&gt;Link to PowerGrafana on GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.powershellgallery.com/packages/PowerGrafana/0.1.0"&gt;Link to PowerGrafana in the PowerShell Gallery&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>powershell</category>
      <category>grafana</category>
      <category>monitoring</category>
      <category>automation</category>
    </item>
    <item>
      <title>(Spanish) Prometheus en AKS</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Tue, 07 Dec 2021 17:24:04 +0000</pubDate>
      <link>https://forem.com/javiermarasco/prometheus-en-aks-1i4j</link>
      <guid>https://forem.com/javiermarasco/prometheus-en-aks-1i4j</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;The main intent of this post (as well as the others) is not to be a deeply detailed tutorial covering every aspect of each technology/application described, but a short, concise guide to one particular topic that can help people who are just starting to learn about it. Of course, if you have any suggestions to improve it, feel free to send me a message/comment.&lt;/p&gt;

&lt;h1&gt;
  
  
  ¿Qué es Prometheus?
&lt;/h1&gt;

&lt;p&gt;Arranquemos, vayamos directamente al tema del que estaremos hablando hoy: &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; es una herramienta de monitoreo muy popular en el mundo de Kubernetes, además de ser uno de los proyectos de la CNCF (Cloud Native Computing Foundation), lo que significa que es un producto muy maduro con un gran apoyo de la comunidad.&lt;/p&gt;

&lt;p&gt;Hay varias características que hacen de Prometheus una de las herramientas favoritas para monitorear entornos, algunas se enumeran aquí:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Metodología "pull" (significa que el servidor de Prometheus extrae métricas en lugar de esperar a que la aplicación envíe las métricas al servidor de Prometheus)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Muy rápido al recopilar métricas y realizar agregaciones&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Una arquitectura que permite monitorear no solo los entornos de Kubernetes sino otras aplicaciones como bases de datos o servidores web (usando "exporters")&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Soporta cientos de aplicaciones, hardware, plataformas, etc. para monitorear; &lt;a href="https://prometheus.io/docs/instrumenting/exporters/"&gt;aquí está la lista&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Con esto en mente, continuemos instalándolo y configurándolo para un primer intento.&lt;/p&gt;

&lt;h1&gt;
  
  
  Cómo instalarlo
&lt;/h1&gt;

&lt;p&gt;Para nuestro ejemplo, instalaremos Prometheus en un &lt;a href="https://azure.microsoft.com/en-us/services/kubernetes-service/"&gt;clúster de AKS&lt;/a&gt; (clúster de Kubernetes que se ejecuta como PaaS en Azure).&lt;/p&gt;

&lt;p&gt;Hay varias formas de implementar Prometheus en un clúster de Kubernetes, pero la más simple y la que tiene más sentido es utilizar algo llamado Helm charts. Imaginá un Helm chart como un conjunto de archivos YAML vinculados como un único recurso para implementar: lo instalás como un elemento "único" (un chart) pero, en su lugar, se implementarán los recursos necesarios en tu clúster para que la solución funcione correctamente (como pods, replica sets, deployments, services, secrets, etc.)&lt;/p&gt;

&lt;h1&gt;
  
  
  Qué es Helm
&lt;/h1&gt;

&lt;p&gt;Helm es una tecnología que nos permite empaquetar un conjunto de archivos YAML que se usarán como un todo para hacer funcionar una solución (en este caso, la solución es Prometheus). Al usar Helm eliminás la complejidad de administrar de forma independiente todos los recursos de la solución: en su lugar, proporcionás un archivo de configuración al chart y lo implementás, y el chart creará todos los recursos y los configurará correctamente.&lt;/p&gt;

&lt;p&gt;Una de las ventajas de usar Helm es que se puede confiar en los repositorios donde se mantienen esos charts y usarlos, pero también si querés podés mantener tu propia versión del chart localmente o en un repositorio privado, para que puedas ajustarlo a tus necesidades (como usar una imagen de container personalizada para los deployments del chart en lugar de usar la predeterminada, esto podría ser un requerimiento de seguridad, por ejemplo)&lt;/p&gt;

&lt;p&gt;Otra característica es que se puede almacenar esos charts en los mismos repositorios donde se almacenan sus imágenes de contenedor y versionarlos de la misma manera que lo haces con una imagen de contenedor.&lt;/p&gt;
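&lt;p&gt;Por ejemplo, a partir de Helm 3.8 esto funciona de forma nativa: los charts se pueden publicar en un registro OCI igual que una imagen de contenedor (el nombre del registro acá es hipotético, solo para ilustrar):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# empaquetar el chart en un .tgz versionado
helm package ./mi-chart

# subirlo a un registro OCI (registro de ejemplo)
helm push mi-chart-0.1.0.tgz oci://miregistro.azurecr.io/helm

# descargarlo luego indicando la versión
helm pull oci://miregistro.azurecr.io/helm/mi-chart --version 0.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;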

&lt;p&gt;Cada chart tiene su propio conjunto de archivos porque representa un grupo de recursos que hacen funcionar una aplicación; los recursos necesarios para un chart de Prometheus no son los mismos que los necesarios para cert-manager, por ejemplo, pero la idea es la misma: un conjunto de archivos YAML que, una vez implementados, trabajarán juntos para que la aplicación se ejecute.&lt;/p&gt;

&lt;p&gt;Para utilizar los charts de helm, tenés que tener helm instalado en tu sistema y agregar los repositorios que utilizará, cada chart de helm vive en un repositorio que se debe agregar para descargar el chart y sus archivos.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GkhQCQU5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-repo-add.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GkhQCQU5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-repo-add.png" alt="" width="880" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;agregando un repositorio&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Para este artículo usaremos el chart de Prometheus estándar.&lt;/p&gt;

&lt;h1&gt;
  
  
  Requisitos previos
&lt;/h1&gt;

&lt;p&gt;Bueno, para este artículo necesitaremos:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Un clúster de AKS en funcionamiento, nada especial, solo la implementación base está bien, podés seguir este &lt;a href="https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal"&gt;tutorial&lt;/a&gt; (Haré un tutorial en el futuro)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Una terminal con kubectl y helm instalados&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Agregá el repositorio de helm de prometheus-community&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Instalación
&lt;/h1&gt;

&lt;p&gt;Lo primero que haremos es agregar el repositorio de helm de prometheus-community y actualizar nuestra lista de repositorios locales ejecutando el siguiente comando:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ahora que agregamos el repositorio, podemos revisar todos los charts disponibles para instalar, veamos:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm search repo prometheus-community
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--waGN7lG3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-search-repo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--waGN7lG3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-search-repo.png" alt="" width="880" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;¡Perfecto! Vemos muchos charts allí, esos son elementos diferentes que podemos instalar, pero hoy nos centraremos en el llamado "prometheus".&lt;/p&gt;

&lt;p&gt;Hay una columna llamada &lt;strong&gt;CHART VERSION&lt;/strong&gt;: esta es la versión del chart en sí, no la de Prometheus. Esto se debe a que se pueden hacer modificaciones en la composición del chart y en lo que hay dentro manteniendo la misma versión de Prometheus que la versión anterior del chart. Se pueden ver todas las versiones del chart ejecutando:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm search repo prometheus-community/prometheus &lt;span class="nt"&gt;-l&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"community-prometheus/prometheus-"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;El comando grep elimina los demás charts de la lista&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--42lJx3Gn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/chart-versions.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--42lJx3Gn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/chart-versions.png" alt="" width="880" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Si instalás Prometheus sin especificar la versión del chart, se instalará la última (14.11.1 en el momento en que escribo este artículo).&lt;/p&gt;
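&lt;p&gt;Si querés fijar una versión concreta del chart (por ejemplo la 14.11.1 mencionada arriba), podés indicarla con &lt;code&gt;--version&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm install prometheus prometheus-community/prometheus --version 14.11.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;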

&lt;p&gt;Vamos a instalarlo ahora:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;prometheus prometheus-community/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;el "prometheus" antes del nombre del repositorio/gráfico es el nombre que queremos darle a esta implementación, podés elegir otro nombre.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---SpTAcId--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-install.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---SpTAcId--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-install.png" alt="" width="880" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ahora que tenemos nuestro servidor Prometheus instalado y listo, verifiquemos en Kubernetes lo que implementamos (tenía el namespace "default" seleccionado al instalar el chart, por lo que mi chart se implementó en el namespace "default")&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Una cosa importante: cuando instalás un chart de helm, se instala en un namespace de Kubernetes; si cambiás a otro namespace e intentás listar los charts instalados, no verás el que instalaste en el primero. Dejame ponerlo en una imagen para explicarlo mejor:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Esto es un &lt;code&gt;helm list&lt;/code&gt; en mi namespace "default"&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BZQA6d3R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-list-default.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BZQA6d3R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-list-default.png" alt="" width="880" height="43"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Esto es un &lt;code&gt;helm list&lt;/code&gt; en otro namespace&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xq4ces-j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-list-other.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xq4ces-j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-list-other.png" alt="" width="880" height="60"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
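&lt;p&gt;Para listar las releases sin depender del namespace seleccionado, helm permite indicarlo de forma explícita o listar todos los namespaces a la vez:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# releases instaladas en un namespace concreto
helm list --namespace default

# releases instaladas en todos los namespaces
helm list -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;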

&lt;p&gt;En Kubernetes podemos ver todos los recursos creados automáticamente:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NX0aOXkX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/kubectl-get-all.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NX0aOXkX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/kubectl-get-all.png" alt="" width="880" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Siguiendo los pasos indicados después de la instalación del chart de helm, debemos reenviar un puerto desde nuestra máquina al pod donde se ejecuta el servidor de Prometheus con:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;POD_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt; default &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"app=prometheus, component=server"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{. items [0].metadata.name}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; default port-forward &lt;span class="nv"&gt;$POD_NAME&lt;/span&gt; 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Si estás utilizando WSL para ejecutar tus comandos de Kubernetes, tenés que realizar algunos pasos adicionales para que esto funcione:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Primero verificá cuál es la IP de tu WSL ejecutando &lt;code&gt;wsl hostname -I&lt;/code&gt;, ya que no se puede acceder desde tu máquina host (Windows) con &lt;code&gt;localhost:puerto&lt;/code&gt; si estás exponiendo los puertos dentro de WSL.&lt;/li&gt;
&lt;li&gt;En segundo lugar, el comando port-forward debe incluir &lt;code&gt;--address 0.0.0.0&lt;/code&gt;, como &lt;code&gt;kubectl --namespace default port-forward --address 0.0.0.0 $POD_NAME 9090&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;En tercer lugar, debés usar la IP de WSL (la del paso uno) en lugar de &lt;code&gt;localhost&lt;/code&gt; para acceder a Prometheus&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Con esto ahora podemos ir a nuestro navegador y acceder a &lt;code&gt;localhost:9090&lt;/code&gt; para ver este dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WJsbc39Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/prometheus-dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WJsbc39Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/prometheus-dashboard.png" alt="" width="880" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus recopila muchas métricas de forma predeterminada; para verlas, podés comenzar a escribir en el cuadro de búsqueda y se autocompletará con las métricas disponibles.&lt;/p&gt;

&lt;p&gt;Como ejemplo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H_qQrtZk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/prometheus-metric1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H_qQrtZk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/prometheus-metric1.png" alt="" width="880" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Y ahora solo queda profundizar en las métricas que Prometheus está recopilando: ¿tal vez agregar algunos exporters? ¿configurar el Alertmanager? (ese es un tema para otra publicación) ¿consumir esas métricas desde Grafana? (el artículo sobre esto ya está en camino, guiño)&lt;/p&gt;

&lt;h1&gt;
  
  
  Últimas palabras
&lt;/h1&gt;

&lt;p&gt;Espero que esto te ayude a comenzar con Prometheus, ya que es muy simple de implementar y al mismo tiempo muy poderoso, si tenés algún problema siguiendo esta guía o alguna recomendación, hacémelo saber en la sección de comentarios.&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>aks</category>
      <category>kubernetes</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>(English) PowerGrafana, what is it and how to use it</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Tue, 07 Dec 2021 17:24:03 +0000</pubDate>
      <link>https://forem.com/javiermarasco/powergrafana-what-is-it-and-how-to-use-it-1j3g</link>
      <guid>https://forem.com/javiermarasco/powergrafana-what-is-it-and-how-to-use-it-1j3g</guid>
      <description>&lt;h2&gt;
  
  
  PowerGrafana, what is it and what is it used for?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Let's begin with a bit of context and how it all started
&lt;/h3&gt;

&lt;p&gt;Sometimes we have to monitor an application, service or component (among other things) and the tool we or our company use is &lt;a href="https://grafana.com"&gt;Grafana&lt;/a&gt;. While we have few things to monitor it is not much of a problem, but as we add more elements to the panels it becomes more complex to maintain and/or configure.&lt;/p&gt;

&lt;p&gt;Imagine that you have to deploy 20 applications, each one running in an &lt;a href="https://azure.microsoft.com/en-us/services/app-service"&gt;app service&lt;/a&gt;, so we would surely want to monitor their CPU and memory usage (to begin with) and surely a few more things.&lt;br&gt;
In this simplistic scenario we have to configure some panels in a dashboard and, in each panel, put the metrics that we want to show (CPU and memory in our case).&lt;/p&gt;

&lt;p&gt;This sounds simple, but in a short time we will surely need to add another metric or deploy a new version of our application. Even worse, we could deploy a new version of some components and not others, while having to keep all the versions (the initial one plus the new one) of each component. As you can see, this becomes more and more complex to show in Grafana: it means a lot of time clicking through the interface or (the alternative) editing JSON files to paste into the Grafana web interface to generate the dashboards or panels (believe me, it is very easy to make mistakes when editing those files).&lt;/p&gt;
&lt;h3&gt;
  
  
  The alternative
&lt;/h3&gt;

&lt;p&gt;PowerGrafana was created to solve this problem (or at least make it easier to handle) by removing the complexity of dealing with the web interface, JSON files, or even entering the names of the resources to monitor by hand.&lt;br&gt;
Using a simple PowerShell module we can quickly iterate through our resources and, for each of them, create a panel that shows CPU and memory usage.&lt;/p&gt;

&lt;p&gt;Each command has its own help, which can be accessed by executing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;PS&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Get-Help&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;New-GrafanaDashboard&lt;/span&gt;&lt;span class="w"&gt; 

&lt;/span&gt;&lt;span class="n"&gt;NAME&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;New-GrafanaDashboard&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;SYNOPSIS&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;Creates&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;dashboard&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Grafana&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;span class="n"&gt;SYNTAX&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;New-GrafanaDashboard&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nt"&gt;-DashboardName&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Object&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="nt"&gt;-Tags&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;CommonParameters&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;span class="n"&gt;DESCRIPTION&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;This&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;cmdlet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;will&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;an&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;empty&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;dashboard&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Grafana&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;that&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;can&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;be&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;used&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;starting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;point&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;your&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;grafana&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;monitoring.&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;EXAMPLE&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;New-GrafanaDashboard&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-DashboardName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"My new dashboard"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Tags&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;@(&lt;/span&gt;&lt;span class="s1"&gt;'Web'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'Azure'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'Production'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;RELATED&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;LINKS&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;REMARKS&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nx"&gt;To&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;see&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;examples&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Get-Help New-GrafanaDashboard -Examples"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="kr"&gt;For&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;more&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;information&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Get-Help New-GrafanaDashboard -Detailed"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="kr"&gt;For&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;technical&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;information&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Get-Help New-GrafanaDashboard -Full"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="kr"&gt;For&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;online&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;help&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;type:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Get-Help New-GrafanaDashboard -Online"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/javiermarasco/PowerGrafana"&gt;Link to PowerGrafana at GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.powershellgallery.com/packages/PowerGrafana/0.1.0"&gt;Link to PowerGrafana at the PowerShell gallery&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>powershell</category>
      <category>grafana</category>
      <category>monitoring</category>
      <category>automation</category>
    </item>
    <item>
      <title>(English) Installing Prometheus in AKS</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Wed, 01 Dec 2021 16:14:24 +0000</pubDate>
      <link>https://forem.com/javiermarasco/installing-prometheus-in-aks-2cdp</link>
      <guid>https://forem.com/javiermarasco/installing-prometheus-in-aks-2cdp</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;The main intention of this post (as well as the others) is not to be a deep tutorial covering every detail of the technology/application described, but a short and concise guide on a particular topic that can help people who are just getting started with it. Of course, if you have any recommendation to make this better, feel free to drop me a message/comment.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Prometheus
&lt;/h1&gt;

&lt;p&gt;So, let's jump directly to what we will be talking about today: &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; is a monitoring tool that is very popular in the Kubernetes world and is one of the CNCF (Cloud Native Computing Foundation) projects, which means it is a very mature product with big support in the community.&lt;/p&gt;

&lt;p&gt;There are several characteristics that make Prometheus one of the favorite tools to monitor environments; some are listed here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pull methodology (meaning the Prometheus server pulls metrics instead of waiting for the application to push metrics to it)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Very fast when collecting metrics and doing aggregations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An architecture that allows monitoring not only Kubernetes environments but also other applications like databases or web servers (using exporters)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Support for hundreds of applications, hardware, platforms, etc. to monitor; &lt;a href="https://prometheus.io/docs/instrumenting/exporters/"&gt;here is the list&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this in mind, let's continue to install it and configure it for a first try.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to install it
&lt;/h1&gt;

&lt;p&gt;For our example, we will install Prometheus in an &lt;a href="https://azure.microsoft.com/en-us/services/kubernetes-service/"&gt;AKS cluster&lt;/a&gt; (Kubernetes cluster running as a PaaS in Azure).&lt;/p&gt;

&lt;p&gt;There are multiple ways to deploy Prometheus in a Kubernetes cluster, but the simplest one, and the one that makes the most sense, is to use something called a Helm chart. Think of a Helm chart as a set of YAML files linked as a single resource to be deployed: you install it as a "single" item (one chart), but it will deploy the needed resources in your cluster to make the solution work properly (think of pods, replica sets, deployments, services, secrets, etc.)&lt;/p&gt;
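&lt;p&gt;You can see this for yourself: helm can render a chart's YAML locally without installing anything in the cluster, so a quick way to count the resource kinds the chart would deploy (assuming the prometheus-community repository from the Installation section is already added) is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# render the chart's manifests locally and tally the resource kinds
helm template prometheus prometheus-community/prometheus | grep '^kind:' | sort | uniq -c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;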

&lt;h1&gt;
  
  
  What is Helm
&lt;/h1&gt;

&lt;p&gt;Helm is a technology that allows us to package a bunch of YAML files that will be used as a whole to make a solution work (in this case the solution is Prometheus). By using Helm you remove the complexity of independently managing all the resources the solution needs; instead, you provide a configuration file to the chart and deploy it, and the chart will create all the resources for you and configure them properly.&lt;/p&gt;

&lt;p&gt;One of the advantages of using Helm is that you can rely on the repositories where those charts are maintained and use them directly, but you can also keep your own version of a chart locally or in a private repository so you can tune it to your needs (like using a custom container image in the chart instead of the default one; think of security requirements, for example).&lt;/p&gt;

&lt;p&gt;Another characteristic is that you can store those charts in the same repositories where you store your container images and version them the same way you do with a container image.&lt;/p&gt;
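&lt;p&gt;As of Helm 3.8 this works natively: charts can be pushed to and pulled from an OCI registry just like a container image (the registry name below is only a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# package the chart into a versioned .tgz
helm package ./my-chart

# push it to an OCI registry (example registry)
helm push my-chart-0.1.0.tgz oci://myregistry.azurecr.io/helm

# pull it back later, pinning the version
helm pull oci://myregistry.azurecr.io/helm/my-chart --version 0.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;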

&lt;p&gt;Each chart has its own set of files because it represents a group of resources that work together to make an application run, so the resources required by a helm chart for Prometheus are not the same as those required for cert-manager, for example, but the idea is the same: a set of YAML files that, once deployed, will work together to make the application run.&lt;/p&gt;

&lt;p&gt;In order to use helm charts you need helm installed on your system and the repositories you will be using added; each helm chart lives in a repository that you have to add in order to retrieve the chart and its files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GkhQCQU5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-repo-add.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GkhQCQU5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-repo-add.png" alt="" width="880" height="186"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;adding a repository&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For this article we will use the default Prometheus helm chart.&lt;/p&gt;
&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;For this article we will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An AKS cluster up and running; nothing special, the base deployment is fine. You can follow this &lt;a href="https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal"&gt;tutorial&lt;/a&gt; (I will write one of my own in the future) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A terminal with kubectl and helm installed &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The prometheus-community helm repository added to your local Helm&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Installation
&lt;/h1&gt;

&lt;p&gt;The first thing we will do is add the prometheus-community helm repository and update our local repository list by running the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts 
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can check all the charts we can install from the repository we just added. Let's see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm search repo prometheus-community
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--waGN7lG3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-search-repo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--waGN7lG3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-search-repo.png" alt="" width="880" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Perfect! We see a lot of charts there; they are different elements we can install, but today we will focus on the one called "prometheus".&lt;/p&gt;

&lt;p&gt;You will see there is a column named &lt;strong&gt;CHART VERSION&lt;/strong&gt;; this is the version of the chart itself, not of Prometheus. The composition of the chart can change (and its version with it) while still shipping the same version of Prometheus as before. You can see all the chart versions by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm search repo prometheus-community/prometheus &lt;span class="nt"&gt;-l&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"prometheus-community/prometheus-"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;The grep removes all the other charts from the list&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--42lJx3Gn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/chart-versions.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--42lJx3Gn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/chart-versions.png" alt="" width="880" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you install Prometheus without specifying the chart version you want, the latest will be installed (14.11.1 at the moment I write this article).&lt;/p&gt;
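
&lt;p&gt;If you do want to pin a chart version, you can pass it explicitly (the version number below is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Install a specific chart version instead of the latest
helm install prometheus prometheus-community/prometheus --version 14.11.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;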

&lt;p&gt;Let's install it now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;prometheus prometheus-community/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;*the "prometheus" before the name of the repository/chart is the name we want to give to this deployment, you can choose another name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---SpTAcId--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-install.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---SpTAcId--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-install.png" alt="" width="880" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we have our Prometheus server installed and ready. Let's check what got deployed in Kubernetes (I had the "default" namespace selected while installing the chart, so it was deployed in the "default" namespace).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One important thing is that a helm chart is installed into a namespace in Kubernetes; if you switch to another namespace and list the installed charts, you will not see the one you installed in the other namespace. Let me put this in an image to explain better:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;*This is a &lt;code&gt;helm list&lt;/code&gt; in my "default" namespace&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BZQA6d3R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-list-default.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BZQA6d3R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-list-default.png" alt="" width="880" height="43"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;*This is a &lt;code&gt;helm list&lt;/code&gt; in another namespace&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xq4ces-j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-list-other.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xq4ces-j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/helm-list-other.png" alt="" width="880" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Kubernetes we can see all the resources created automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NX0aOXkX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/kubectl-get-all.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NX0aOXkX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/kubectl-get-all.png" alt="" width="880" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Following the steps printed after the installation of the helm chart, we forward a port from our machine to the pod where the Prometheus server is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;POD_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--namespace&lt;/span&gt; default &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"app=prometheus,component=server"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.items[0].metadata.name}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; default port-forward &lt;span class="nv"&gt;$POD_NAME&lt;/span&gt; 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;If you are using WSL to run your Kubernetes commands, you need a few extra steps for this to work:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, check the IP of your WSL instance by running &lt;code&gt;wsl hostname -I&lt;/code&gt;, as you can't reach ports exposed inside WSL from your host machine (Windows) via localhost:port.&lt;/li&gt;
&lt;li&gt;Second, the port-forward command should include &lt;code&gt;--address 0.0.0.0&lt;/code&gt;, like &lt;code&gt;kubectl --namespace default port-forward --address 0.0.0.0 $POD_NAME 9090&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Third, use the WSL IP (the one from step one) instead of &lt;code&gt;localhost&lt;/code&gt; to access Prometheus&lt;/li&gt;
&lt;/ul&gt;
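
&lt;p&gt;Putting those WSL steps together (the first command runs from Windows, the second inside WSL; the IP you see will differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# From Windows: note the WSL IP, e.g. 172.20.x.x
wsl hostname -I
# Inside WSL: bind the forward to all addresses so Windows can reach it
kubectl --namespace default port-forward --address 0.0.0.0 $POD_NAME 9090
# Then browse to http://&lt;WSL-IP&gt;:9090 instead of localhost:9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;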

&lt;p&gt;With this we can now go to our browser and access &lt;code&gt;localhost:9090&lt;/code&gt; to see this dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WJsbc39Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/prometheus-dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WJsbc39Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/prometheus-dashboard.png" alt="" width="880" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus collects a lot of metrics by default; to check them you can start typing in the search box, and it will autocomplete with the available metrics.&lt;/p&gt;

&lt;p&gt;As an example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H_qQrtZk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/prometheus-metric1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H_qQrtZk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/prometheus/prometheus-metric1.png" alt="" width="880" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now the only thing left is to dig into the metrics Prometheus is collecting: maybe add some exporters, configure Alertmanager (a topic for another post), or consume those metrics from Grafana (an article about this is already in the queue 😉 ) &lt;/p&gt;

&lt;h1&gt;
  
  
  Final words
&lt;/h1&gt;

&lt;p&gt;I hope this helps you get started with Prometheus, as it is very simple to implement and at the same time very powerful. If you have any problem following this guide, or any recommendation, please let me know in the comments section.&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>aks</category>
      <category>kubernetes</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>(Spanish) Working with Helm Charts</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Wed, 01 Dec 2021 15:19:31 +0000</pubDate>
      <link>https://forem.com/javiermarasco/trabajando-con-helm-charts-365f</link>
      <guid>https://forem.com/javiermarasco/trabajando-con-helm-charts-365f</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;This time we are going to look at how to work with Helm Charts. In my previous articles on Prometheus and Grafana we talked a bit about Helm Charts, but not in much detail.&lt;br&gt;&lt;br&gt;
In this article I will download the Grafana chart and we will see how we can modify it to fit our needs.  &lt;/p&gt;

&lt;h1&gt;
  
  
  What are charts?
&lt;/h1&gt;

&lt;p&gt;In the Kubernetes world we often find ourselves with applications running in pods, services for those pods, probably an ingress controller to expose the application to the outside world, and perhaps secrets, configmaps, volumes, or other Kubernetes resources. As our application grows in components and complexity, we end up with more and more yaml (or yml) files, yet our application is "a single thing" composed of multiple files, and managing all these files separately while they form "a single entity" is complicated. This is why Helm Charts appeared: a way to simplify this by gathering all the components to deploy into a single entity (a chart). A second advantage is that we can apply "templating" to these yaml files so that they take values from a configuration file, making the chart even easier to configure. Let's see how this works.&lt;/p&gt;

&lt;h1&gt;
  
  
  Browsing the charts available in a repository
&lt;/h1&gt;

&lt;p&gt;As a first step we will add the Grafana repository (as an example; these steps apply to any helm chart).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I7g11zyL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-repo-add.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I7g11zyL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-repo-add.png" alt="" width="880" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And let's see which charts are available in the repository&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7bp3lWda--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-search-repo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7bp3lWda--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-search-repo.png" alt="" width="880" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will focus on the chart called &lt;code&gt;grafana/grafana&lt;/code&gt;, which is the one we used in our Grafana installation in another article and which deploys the basic components of a minimal Grafana setup.&lt;/p&gt;

&lt;p&gt;The next thing to do is go to a directory where we want to download our chart and run the command &lt;code&gt;helm pull grafana/grafana --untar&lt;/code&gt;; this will download the chart into a directory called &lt;code&gt;grafana&lt;/code&gt; with all the necessary files inside.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VpDqBYZF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/download-chart.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VpDqBYZF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/download-chart.png" alt="" width="880" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And inside this directory are the files that make up the chart&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GgT4a1oX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/download-chart-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GgT4a1oX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/download-chart-2.png" alt="" width="880" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Let's start looking at what is inside a chart
&lt;/h1&gt;

&lt;p&gt;The first thing we can see is a file called &lt;code&gt;Chart.yaml&lt;/code&gt;; this file contains the description of the chart we just downloaded&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6yMyyhnk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/chart.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6yMyyhnk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/chart.png" alt="" width="880" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We see information such as the chart version, the URL of the project the chart belongs to, and a few more things like who maintains the chart, etc.&lt;/p&gt;

&lt;p&gt;Then we see a directory structure&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qs57Fr2V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/directories.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qs57Fr2V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/directories.png" alt="" width="544" height="1075"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only directory we will always find in a chart is &lt;code&gt;templates&lt;/code&gt;, since that is where we put all the yaml files that make up our solution. The other directories are needed by Grafana; in other charts we may find different directories, but always keep in mind that the templates that produce the Kubernetes resources live in the &lt;code&gt;templates&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Finally, in the directory where we downloaded the chart we will find a file called &lt;code&gt;values.yaml&lt;/code&gt;. This file is the heart of a chart's configuration: in it we find every parameter that is configurable for our chart (and, if we wish, we can add more settings to this file and then use them in the yaml files in the &lt;code&gt;templates&lt;/code&gt; directory).&lt;/p&gt;
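
&lt;p&gt;A handy way to inspect (and override) every configurable parameter without editing the chart in place is &lt;code&gt;helm show values&lt;/code&gt;; the file name below is just an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Dump the chart's default values to a local file
helm show values grafana/grafana &gt; my-values.yaml
# Edit my-values.yaml, then install using your overrides
helm install grafana grafana/grafana -f my-values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;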

&lt;h1&gt;
  
  
  What templates look like in a chart
&lt;/h1&gt;

&lt;p&gt;Basically, a template is a yaml file containing instructions so that, when we deploy our chart, the final values come from a configuration file (values.yaml) and we don't have to hardcode a value in the template. Let's see an example:  &lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;deployment.yaml&lt;/code&gt; file we can find this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MzwwV37s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/deployment.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MzwwV37s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/deployment.png" alt="" width="880" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Line 18 references &lt;code&gt;.Values.replicas&lt;/code&gt;; this is how we tell Helm that, for the value of the &lt;code&gt;replicas&lt;/code&gt; attribute in this yaml, it should take the value we have in the &lt;code&gt;values.yaml&lt;/code&gt; file under the key &lt;code&gt;replicas&lt;/code&gt;. The &lt;code&gt;values.yaml&lt;/code&gt; file looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TawB8LqU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/values.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TawB8LqU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/values.png" alt="" width="880" height="638"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On line 24 of &lt;code&gt;values.yaml&lt;/code&gt; we can see it says &lt;code&gt;replicas: 1&lt;/code&gt;; if we deploy the chart like this, we will have a single replica of the &lt;code&gt;grafana&lt;/code&gt; pod&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w57kWeqv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/single-replica.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w57kWeqv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/single-replica.png" alt="" width="877" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But if we change this value to 3, save the &lt;code&gt;values.yaml&lt;/code&gt; file, and apply the chart again, we will now have 3 replicas of our &lt;code&gt;grafana&lt;/code&gt; pod&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JzE1JaXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/3-replicas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JzE1JaXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/helm-charts/3-replicas.png" alt="" width="863" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can apply the same approach to any setting; this way we can alter our application simply by updating values in &lt;code&gt;values.yaml&lt;/code&gt; and running &lt;code&gt;helm upgrade grafana .&lt;/code&gt; (if we are in the directory where our chart was downloaded)&lt;/p&gt;
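
&lt;p&gt;The whole cycle, sketched as commands (the yaml fragment is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# In values.yaml, change the replica count:
#   replicas: 3
# Then, from the chart's directory, re-apply the release
helm upgrade grafana .
# The deployment should now scale to 3 grafana pods
kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;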

&lt;p&gt;&lt;strong&gt;Something really interesting is that these charts can also be stored in the repositories where we store our container images; this way we can version them together with our container images and pull them from wherever we need.&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Beyond the basic configuration
&lt;/h1&gt;

&lt;p&gt;In &lt;code&gt;values.yaml&lt;/code&gt; you will find many commented lines; if you uncomment them you enable new chart settings. This is because helm lets us put conditional code in our templates, something like "if block X exists, deploy resource Y in Kubernetes". An example of this is the Grafana ingress controller. &lt;br&gt;
Line 181 of &lt;code&gt;values.yaml&lt;/code&gt; says "enabled: false", and line 1 of the &lt;code&gt;ingress.yaml&lt;/code&gt; file says &lt;code&gt;{{- if .Values.ingress.enabled -}}&lt;/code&gt;; this is a way to enable or disable (also called a feature flag) a component or setting based on what we define in the chart's values file.&lt;/p&gt;
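
&lt;p&gt;The pattern in the template looks roughly like this (a trimmed sketch, not the chart's exact file):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "grafana.fullname" . }}
{{- end }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;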

&lt;h1&gt;
  
  
  Finally
&lt;/h1&gt;

&lt;p&gt;I hope this article helps you understand a bit better what Helm Charts are and encourages you to create your own; it really isn't very difficult, and it gives you greater control over what you are deploying in your application and which resources belong to which application.&lt;/p&gt;

&lt;p&gt;If this article was helpful, I would appreciate you sharing it so more people can read it and we can all help each other; likewise, if you find something in this article that is not correct, or that could be explained in more depth, let me know in the comments.&lt;/p&gt;

&lt;p&gt;Thank you very much for reading!&lt;/p&gt;

</description>
      <category>helm</category>
      <category>kubernetes</category>
      <category>automation</category>
    </item>
    <item>
      <title>(Spanish) Installing and configuring Grafana on an AKS cluster</title>
      <dc:creator>Javier Marasco</dc:creator>
      <pubDate>Wed, 01 Dec 2021 14:58:38 +0000</pubDate>
      <link>https://forem.com/javiermarasco/instalando-y-configurando-grafana-en-un-cluster-aks-2ac3</link>
      <guid>https://forem.com/javiermarasco/instalando-y-configurando-grafana-en-un-cluster-aks-2ac3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this post we will see how to install &lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt; in a simple way using Helm Charts; we will also see what Grafana is and how it is used&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Grafana?
&lt;/h2&gt;

&lt;p&gt;Let's start by explaining what exactly Grafana is and what we can use it for. Grafana is monitoring software very widely used in all kinds of environments: you can use it at home to monitor your smart home (electricity or gas consumption, or light levels throughout the day) as well as in a company to track resource utilization, CPU and memory consumption of virtual machines or containers, etc.&lt;br&gt;
Grafana can interact with multiple data origins thanks to its "datasources", add-ons that can be installed in Grafana to consume metrics and logs from different sources.&lt;br&gt;
It is important to clarify that Grafana neither produces nor modifies metrics or logs; it simply consumes them and displays them in a user-friendly way.&lt;br&gt;
Normally, creating dashboards, panels, and other resources in Grafana is done by hand through its administration panel, which can be tedious when we deploy some components in the cloud that will be around for a while and we want to monitor them, but will destroy them a few days later. For tasks like that we can use &lt;a href="https://www.powershellgallery.com/packages/PowerGrafana/0.1.0"&gt;PowerGrafana&lt;/a&gt;, a PowerShell module that can create dashboards, panels, and targets programmatically.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to install it
&lt;/h2&gt;

&lt;p&gt;To install Grafana we have several options; depending on our needs we can choose between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running a virtual machine and installing Grafana inside it as a binary&lt;/li&gt;
&lt;li&gt;Running Grafana as a container, either inside a virtual machine or in a container environment&lt;/li&gt;
&lt;li&gt;Deploying Grafana on Kubernetes manually (we create the pod and the basic service for it to work)&lt;/li&gt;
&lt;li&gt;Deploying Grafana on Kubernetes using a Helm Chart (the option we will explore in this post)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will go with Helm Charts because I really believe it is THE way applications should be deployed on Kubernetes: it is much simpler to maintain, its configuration is more manageable, and it also lets us very easily deploy the same application with the same configuration to multiple clusters without any change.&lt;/p&gt;

&lt;p&gt;To begin, we add the Grafana repository and run &lt;code&gt;helm repo update&lt;/code&gt; to refresh the repository list on our machine.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I7g11zyL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-repo-add.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I7g11zyL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-repo-add.png" alt="" width="880" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can inspect that repository and see which Charts are available. Let's see:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7bp3lWda--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-search-repo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7bp3lWda--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-search-repo.png" alt="" width="880" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great, we have the Chart we need, &lt;code&gt;Grafana&lt;/code&gt;, but the same repository also holds other Grafana configurations for different purposes. Let's continue.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;As prerequisites we need very few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Our Kubernetes cluster&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;namespace&lt;/code&gt; in our cluster&lt;/li&gt;
&lt;li&gt;A terminal of our choice (bash, zsh, powershell) with kubectl and helm installed&lt;/li&gt;
&lt;li&gt;The Grafana repository added to our local Helm (as we did in the previous step)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;For this exercise we will install the Grafana resources in a separate &lt;code&gt;namespace&lt;/code&gt; that we will call &lt;code&gt;monitoring&lt;/code&gt;; to create it we run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace monitoring
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8jFa5894--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/create-namespace.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8jFa5894--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/create-namespace.png" alt="" width="880" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In my terminal I have kubens installed, an application that lets us set the namespace we will work in when running kubectl, saving us from specifying &lt;code&gt;-n namespace&lt;/code&gt; with every command&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let's look again at the list of charts we added:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm search repo grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Oy6Zu70_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-search-repo-grafana.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Oy6Zu70_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-search-repo-grafana.png" alt="" width="880" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Perfect. We see there are many charts besides the one we need; this is because the same Helm repository distributes other Grafana configurations (combinations of Grafana with other products).&lt;/p&gt;

&lt;p&gt;Let's install the Grafana chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;grafana grafana/grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bxUwXoNM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-install-grafana.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bxUwXoNM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/helm-install-grafana.png" alt="" width="880" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that the command installed the Grafana chart and is already pointing out some interesting things; let's look at what they are:&lt;/p&gt;

&lt;h3&gt;
  
  
  Warnings:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;W1115 09:12:11.499698    9800 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1115 09:12:11.529960    9800 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1115 09:12:12.040639    9800 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1115 09:12:12.040639    9800 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These warnings show up in several Charts. They appear because the Kubernetes API is constantly evolving: many features mutate over time and get deprecated in favor of others. These particular warnings tell us that the resource kind &lt;code&gt;PodSecurityPolicy&lt;/code&gt; is deprecated as of Kubernetes 1.21 and will be removed entirely in 1.25 (my cluster runs 1.22). Until I upgrade to 1.25 this will keep working, but if I upgrade my cluster (or deploy this chart) to 1.25, this resource kind will no longer exist and the chart will fail.&lt;/p&gt;

&lt;h3&gt;
  
  
  Grafana admin password:
&lt;/h3&gt;

&lt;p&gt;As soon as we finish installing Grafana we will want to log into its dashboard. For that we need the password of the &lt;code&gt;admin&lt;/code&gt; user, which we can find in a &lt;code&gt;secret&lt;/code&gt; created by the same deployment (in the same namespace). The command that &lt;code&gt;helm install&lt;/code&gt; prints is what we need to run to learn the password (like every secret in Kubernetes, it is stored as a &lt;code&gt;base64 encoded string&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring grafana &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.admin-password}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command first retrieves the secret and then pipes it to the &lt;code&gt;base64&lt;/code&gt; command, which decodes it and prints it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;If you don't have the base64 command, you can simply install it with brew (Linux and macOS) or chocolatey (Windows).&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Since I'm running these commands on Windows, my example is:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring grafana &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.admin-password}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my case the output is &lt;code&gt;xzUvc2esXl2aSivqhQQ5X3ZZ01TZ2HMCEPdpWVSJ&lt;/code&gt;; that is the password for my &lt;code&gt;admin&lt;/code&gt; user. This value will be different in every installation.&lt;/p&gt;
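&lt;p&gt;As a quick local sketch of what that pipeline does (no cluster required; the sample value below is made up, not a real Grafana password), we can round-trip a string through base64 ourselves:&lt;/p&gt;

```shell
# Encode a sample value the way Kubernetes stores secret data.
# printf '%s' avoids including a trailing newline in the encoding.
encoded=$(printf '%s' 'my-sample-password' | base64)
echo "$encoded"

# Decode it back, exactly as the jsonpath + base64 pipeline does;
# the trailing `echo` only adds a newline after the decoded value.
printf '%s' "$encoded" | base64 --decode ; echo
```

&lt;p&gt;This is the same round trip that &lt;code&gt;kubectl get secret ... | base64 --decode&lt;/code&gt; performs, just with the value coming out of the cluster instead of a local variable.&lt;/p&gt;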

&lt;h3&gt;
  
  
  Accessing Grafana for the first time!
&lt;/h3&gt;

&lt;p&gt;The last step is to access Grafana. The install output mentions that we can reach Grafana through the service &lt;code&gt;grafana.monitoring.svc.cluster.local&lt;/code&gt;, but that only works if you are running the Kubernetes cluster locally on your machine. If your cluster runs in the cloud (as in my case), you have to port-forward port 3000 of the pod (Grafana's default port) to some free port on your machine (we can use 3000 as well, for convenience). For that we need the pod's name, which we can get simply by running &lt;code&gt;kubectl get pod&lt;/code&gt; or, more specifically, &lt;code&gt;kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command gives us the name of the Grafana pod in the &lt;code&gt;monitoring&lt;/code&gt; namespace that carries the labels &lt;code&gt;name=grafana&lt;/code&gt; and &lt;code&gt;instance=grafana&lt;/code&gt;; in my case it is &lt;code&gt;grafana-5c999c4fd5-czxdw&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now we can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward grafana-5c999c4fd5-czxdw 3000:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NVVGun2v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/grafana-login-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NVVGun2v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/javiermarasco/articles/main/Articles/Images/grafana/grafana-login-1.png" alt="" width="880" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We use &lt;code&gt;admin&lt;/code&gt; as the username and, as the password, the value we obtained earlier: &lt;code&gt;xzUvc2esXl2aSivqhQQ5X3ZZ01TZ2HMCEPdpWVSJ&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;At this point we have Grafana installed and configured in our cluster. Keep in mind that this Grafana configuration is very basic and stores all the information about Dashboards and DataSources in an &lt;code&gt;EmptyDir&lt;/code&gt; volume, which is a volatile kind of storage: when the pod is destroyed, the storage goes with it, so every time the pod is killed we get a new pod with fresh storage and lose any configuration we made.&lt;/p&gt;

&lt;p&gt;If we want to change this behavior, we need to download the chart to our machine and adjust it so that instead of an &lt;code&gt;EmptyDir&lt;/code&gt; it uses a storage class that gives us persistence. In an upcoming article about Helm Charts I will go into how to do this and much more.&lt;/p&gt;
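&lt;p&gt;As a rough sketch of that adjustment (the key names below follow the usual layout of the grafana/grafana chart's &lt;code&gt;persistence&lt;/code&gt; section; verify them against &lt;code&gt;helm show values grafana/grafana&lt;/code&gt; for your chart version before relying on them), a values override enabling a PersistentVolumeClaim might look like this:&lt;/p&gt;

```yaml
# values-persistence.yaml -- hypothetical override file; confirm each key
# against the chart's own values.yaml before applying.
persistence:
  enabled: true              # back Grafana's data with a PVC instead of EmptyDir
  size: 10Gi                 # requested volume size
  storageClassName: default  # a storage class that exists in your cluster
```

&lt;p&gt;Applied with something like &lt;code&gt;helm upgrade grafana grafana/grafana -f values-persistence.yaml&lt;/code&gt;, Grafana's dashboards and data sources would then survive pod restarts.&lt;/p&gt;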

&lt;h1&gt;
  
  
  Finally
&lt;/h1&gt;

&lt;p&gt;I hope this explanation helps you learn the basics of installing Grafana using Helm Charts. If this post was useful to you, please share it with others, and if you have suggestions for more content or improvements, let me know in the comments on this post. Thank you very much!&lt;/p&gt;

</description>
      <category>aks</category>
      <category>grafana</category>
      <category>monitoring</category>
    </item>
  </channel>
</rss>
