<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Thijs Dieltjens</title>
    <description>The latest articles on Forem by Thijs Dieltjens (@thijsdieltjens).</description>
    <link>https://forem.com/thijsdieltjens</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F726250%2F5f009f47-aee1-4c02-ab61-840e8b99bd9f.png</url>
      <title>Forem: Thijs Dieltjens</title>
      <link>https://forem.com/thijsdieltjens</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/thijsdieltjens"/>
    <language>en</language>
    <item>
      <title>Monitoring/logging your K8S NodeJS applications with elasticsearch</title>
      <dc:creator>Thijs Dieltjens</dc:creator>
      <pubDate>Fri, 15 Oct 2021 09:27:42 +0000</pubDate>
      <link>https://forem.com/thijsdieltjens/monitoringlogging-your-k8s-nodejs-applications-30k7</link>
      <guid>https://forem.com/thijsdieltjens/monitoringlogging-your-k8s-nodejs-applications-30k7</guid>
      <description>&lt;p&gt;&lt;em&gt;A quick guide on how to set up everything you need to start logging and monitoring your NodeJS applications hosted on Kubernetes, using Elasticsearch&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We recently moved our application stack towards Kubernetes. While we immediately benefited from its advantages, we suddenly lacked centralized application-level logs for our NodeJS microservices. Previously our Express API was perfectly capable of providing this data on its own. Now it became a lot trickier to aggregate this data when multiple pods ran simultaneously. &lt;/p&gt;

&lt;p&gt;This triggered a web search for the ideal tool(s) to give us a better understanding of performance and of any errors that occur. Given we are a startup (&lt;a href="http://www.bullswap.com"&gt;www.bullswap.com&lt;/a&gt;), we gave preference to a cloud-agnostic, open source solution, and that is how we ended up looking at the Elastic Stack (Elasticsearch, Kibana, APM Server).&lt;/p&gt;

&lt;p&gt;With both Kubernetes and Elasticsearch changing so rapidly, it was not an easy task to get the right information. That is why we wanted to share our end result below, so you do not have to go through the same trouble.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirements&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubectl access to an up-to-date K8S cluster with enough capacity to handle at least 3 GB of additional RAM usage&lt;/li&gt;
&lt;li&gt;A NodeJS application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What are we setting up?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elasticsearch cluster: stores and indexes the data (&lt;a href="https://www.elastic.co/"&gt;https://www.elastic.co/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Kibana: provides data visualization on top of the Elasticsearch data&lt;/li&gt;
&lt;li&gt;APM Server: receives data from APM agents and transforms it into Elasticsearch documents&lt;/li&gt;
&lt;li&gt;APM agents: turn your NodeJS services into sources of monitoring data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All code below should be placed in yaml files and applied using &lt;code&gt;kubectl apply -f {file_name}&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up Elasticsearch&lt;/strong&gt;&lt;br&gt;
To keep everything separate from your regular namespaces, we first set up a new namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we used a lot of the configuration we found in &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes"&gt;this tutorial&lt;/a&gt; to set up a headless Elasticsearch service backed by a three-replica StatefulSet. The setup is described by the following yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.14.1
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should slowly start deploying three new pods. Once they are all running, take a quick glance at the logs of one of them to check that everything is fine :).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up Kibana&lt;/strong&gt;&lt;br&gt;
Now it is time to get Kibana started. Here we set up a new service backed by a single-replica deployment of the kibana image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.14.1
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_HOSTS
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After applying/creating the yaml file and allowing the pods to get ready you should be able to test whether it is working correctly.&lt;br&gt;
You can do so by looking up the pod name and port forwarding it to localhost.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl port-forward kibana-xyz123456789 5601:5601 --namespace=kube-logging&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Navigating to &lt;code&gt;localhost:5601&lt;/code&gt; should show you the Kibana interface. If Kibana notifies you that there is no data available, you can relax, as this is completely normal 😊. &lt;/p&gt;

&lt;p&gt;When everything appears to be working, it can be useful to set up a LoadBalancer/Ingress so you can access Kibana from the internet. If you do so however, make sure you put security in place.&lt;/p&gt;
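&lt;p&gt;As a rough sketch, such an Ingress could look like the one below, pointing at the &lt;code&gt;kibana&lt;/code&gt; service on port 5601. The host name is a placeholder, and you still need to add TLS and authentication for your specific ingress controller before exposing Kibana publicly.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging
spec:
  rules:
  - host: kibana.example.com # placeholder: use your own domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana
            port:
              number: 5601
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;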

&lt;p&gt;&lt;strong&gt;Setting up APM Server&lt;/strong&gt;&lt;br&gt;
I am grateful to &lt;a href="https://medium.com/logistimo-engineering-blog/how-did-i-use-apm-in-kubernetes-ecosystem-8f22d52beb03"&gt;this article&lt;/a&gt; for setting me on the right track. As it is no longer up to date, you can find our configuration below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--------
apiVersion: v1
kind: ConfigMap
metadata:
  name: apm-server-config
  namespace: kube-logging
  labels:
    k8s-app: apm-server
data:
  apm-server.yml: |-
    apm-server:
      host: "0.0.0.0:8200"
      rum:
        enabled: false
    setup.template.settings:
      index:
        number_of_shards: 1
        codec: best_compression
    setup.dashboards.enabled: false
    setup.kibana:
      host: "http://kibana:5601"
    output.elasticsearch:
      hosts: ['http://elasticsearch:9200']
      username: elastic
      password: elastic
---
apiVersion: v1
kind: Service
metadata:
  name: apm-server
  namespace: kube-logging
  labels:
    app: apm-server
spec:
  ports:
  - port: 8200
    targetPort: 8200
    name: http
    nodePort: 31000
  selector:
    app: apm-server
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apm-server
  namespace: kube-logging
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 1
  selector:
    matchLabels:
      app: apm-server
  template:
    metadata:
      labels:
        app: apm-server
    spec:
      containers:
      - name: apm-server
        image: docker.elastic.co/apm/apm-server:7.14.1
        ports:
        - containerPort: 8200
          name: apm-port
        volumeMounts:
        - name: apm-server-config
          mountPath: /usr/share/apm-server/apm-server.yml
          readOnly: true
          subPath: apm-server.yml
      volumes:
      - name: apm-server-config
        configMap:
          name: apm-server-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After applying/creating the yaml file and allowing the pods to get ready you should be able to test whether it is correctly connecting to elasticsearch by looking at the logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final step: sending data&lt;/strong&gt;&lt;br&gt;
The lines below should be loaded before any other &lt;code&gt;require&lt;/code&gt; in your NodeJS application(s). When you add this to an express server, you immediately start receiving data about how transactions (http requests) are handled. You can find useful information such as: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which external services, such as databases or APIs, cause delays in your applications&lt;/li&gt;
&lt;li&gt;Which API calls are slow&lt;/li&gt;
&lt;li&gt;Where and how often errors occur&lt;/li&gt;
&lt;li&gt;NodeJS CPU usage &lt;/li&gt;
&lt;li&gt;...
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apm = require('elastic-apm-node').start({
    // Override service name from package.json
    // Allowed characters: a-z, A-Z, 0-9, -, _, and space
    serviceName: '{CHANGE THIS TO YOUR APPLICATION/SERVICE NAME}',
    // Set custom APM Server URL (default: http://localhost:8200)
    serverUrl: 'http://apm-server.kube-logging.svc.cluster.local:8200'
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
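Beyond the automatic instrumentation, the elastic-apm-node agent also exposes custom spans via `apm.startSpan`, which you can use to time specific external calls yourself. A minimal sketch of the pattern is shown below; a stub object stands in for the real agent so the sketch runs on its own, and the span name is made up:

```javascript
// Stub mimicking the shape of elastic-apm-node's custom-span API;
// in a real service `apm` is the object returned by .start(...).
const apm = {
  startSpan(name) {
    const started = Date.now();
    return {
      name,
      end() { this.durationMs = Date.now() - started; }
    };
  }
};

// Time an external call (simulated here with a timeout) in a custom span.
async function fetchPriceList() {
  const span = apm.startSpan('GET pricing-service'); // hypothetical span name
  try {
    await new Promise((resolve) => setTimeout(resolve, 50)); // pretend HTTP call
  } finally {
    if (span) span.end(); // the real startSpan may return null outside a transaction
  }
  return span;
}

fetchPriceList().then((span) => {
  console.log(span.name); // → GET pricing-service
});
```

With the real agent, these spans show up in the transaction timeline in Kibana, which is how you spot which external services cause delays.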



&lt;p&gt;Send a few requests to your server and you should see a service appear in Kibana (Observability &amp;gt; APM).&lt;br&gt;
Clicking on it should give you a nice overview of transactions, throughput and latency. If for any reason this is not happening, I suggest you take a look at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NodeJS logs (connection issues to APM will be logged here)&lt;/li&gt;
&lt;li&gt;APM logs (issues connecting to elasticsearch will be here)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the case of an express server you will often already catch a lot of the errors yourself and respond with, for example, a 500 status code. Because the request is handled, Elasticsearch will not treat it as an error. While you are able to distinguish these cases based on the HTTP status codes, it can make sense to add the following line wherever you deal with unsuccessful events, so that they are treated as errors.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apm.captureError(error);&lt;/code&gt;&lt;/p&gt;
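A minimal sketch of where such a call could live, for example in an Express-style error-handling middleware. A stub `apm` object stands in for the real agent here so the sketch runs without elastic-apm-node:

```javascript
// Stub standing in for the object returned by require('elastic-apm-node').start(...).
const apm = {
  captured: [],
  captureError(err) { this.captured.push(err.message); }
};

// Report the error to APM, then answer the client with a 500.
function apmErrorHandler(err, req, res, next) {
  apm.captureError(err); // makes the event show up as an error in Kibana
  res.statusCode = 500;
  res.end('Internal Server Error');
}

// Minimal stand-ins for req/res so the sketch runs without Express.
const res = { statusCode: 200, body: null, end(body) { this.body = body; } };
apmErrorHandler(new Error('database unreachable'), {}, res, () => {});

console.log(apm.captured[0]); // → database unreachable
console.log(res.statusCode);  // → 500
```

In a real express app you would register such a handler with `app.use` after all your routes, so every error that reaches it is also recorded by APM.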

&lt;p&gt;Definitely explore the possibilities of Elasticsearch/Kibana/APM Server as it is capable of doing a lot more!&lt;/p&gt;

&lt;p&gt;We hope this article is useful to some. Our goal was to save you the time we spent figuring this out for our construction equipment rental platform, &lt;a href="https://www.bullswap.com"&gt;https://www.bullswap.com&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>node</category>
      <category>elasticsearch</category>
    </item>
  </channel>
</rss>
