<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mario</title>
    <description>The latest articles on Forem by Mario (@mfahlandt).</description>
    <link>https://forem.com/mfahlandt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F59149%2F78e9b870-5926-413f-8a4e-8d80eb56ce85.jpg</url>
      <title>Forem: Mario</title>
      <link>https://forem.com/mfahlandt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mfahlandt"/>
    <language>en</language>
    <item>
      <title>DATACENTER IN A SUITCASE - A REAL SMALL EDGE CASE</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Mon, 15 Jan 2024 14:32:21 +0000</pubDate>
      <link>https://forem.com/mfahlandt/datacenter-in-a-suitcase-a-real-small-edge-case-3edc</link>
      <guid>https://forem.com/mfahlandt/datacenter-in-a-suitcase-a-real-small-edge-case-3edc</guid>
      <description>&lt;p&gt;Usually when we talk about datacenters, we get the impression it’s going to be big. But there are more and more use cases emerging where your datacenter either needs to be portable or is limited in size and power supply. Here we will take a look at how to create a portable datacenter.&lt;/p&gt;

&lt;h2&gt;
  
  
  THE UNUSUAL CLOUD-NATIVE DEMANDS
&lt;/h2&gt;

&lt;p&gt;In the cloud-native space, we’re shifting more components into unconventional spots: the Edge. This means locations that might have limited energy reserves, are prone to outages, or are limited in space. We need systems that are self-sufficient, energy-efficient, and portable.&lt;/p&gt;

&lt;p&gt;Consider the idea: building everything inside a suitcase. We’re talking about leveraging ARM processors, not your typical server processors. These ARM units are powerful and energy-efficient—perfect for a low-energy, highly portable setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  WHY ARM PROCESSORS?
&lt;/h2&gt;

&lt;p&gt;ARM processors, short for Advanced RISC Machines, power nearly every smartphone. They operate on Reduced Instruction Set Computing (RISC) architecture, executing instructions predictably and efficiently.&lt;/p&gt;

&lt;p&gt;The licensing model for ARM is open, enabling various companies to develop specialized processors for different use cases. With the advancement to 64-bit technology, ARM processors have become increasingly viable in the cloud-native realm.&lt;/p&gt;

&lt;h2&gt;
  
  
  THE ARM ADVANTAGE
&lt;/h2&gt;

&lt;p&gt;Energy efficiency is a big win with ARM processors. They consume far less energy compared to traditional x86 architectures, making them ideal for portable data centers.&lt;/p&gt;

&lt;p&gt;Moreover, ARM processors offer hardware customization, allowing tailored designs for specific needs and further enhancing efficiency. Thanks to the flexible licensing, hardware vendors can adopt this as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  CHALLENGES AND SOLUTIONS
&lt;/h2&gt;

&lt;p&gt;However, there are challenges. Compatibility with software not optimized for ARM remains a hurdle. Performance per core might lag compared to x86 processors, and standardization issues persist due to the diversity of ARM options.&lt;/p&gt;
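A practical first step when assessing compatibility is simply checking which architecture a node reports and whether your container images ship an arm64 variant. A small sketch (the image name is just an example, and the docker check is guarded in case docker is not installed):

```shell
# Print the CPU architecture of the current machine
# (typically aarch64 on ARM boards, x86_64 on classic servers).
uname -m

# Check whether a public image provides an arm64 build.
# Guarded so the script still succeeds where docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker manifest inspect nginx:latest | grep '"architecture"' || true
fi
```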

&lt;p&gt;The compatibility problem will resolve over time as more and more software is ported to the ARM architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  THE PRACTICAL SETUP
&lt;/h2&gt;

&lt;p&gt;I recently acquired a mini-ITX board from TuringPi, equipped to house ARM computing modules. Sure, there are other boards, but this one had networking and storage features baked in, making it a convenient choice. Plus, the price was reasonable, around $200.&lt;/p&gt;

&lt;p&gt;Building this setup wasn’t without its challenges. From USB compatibility issues to missing packages on Ubuntu, each step required troubleshooting and adaptation.&lt;/p&gt;

&lt;h2&gt;
  
  
  VIRTUALIZATION: THE MISSING LINK
&lt;/h2&gt;

&lt;p&gt;Virtualization plays a crucial role in this setup. While containers run most software, there are cases where VMs are necessary, especially for non-containerized applications. Virtualization provides a layer of control and safety, ensuring system stability.&lt;/p&gt;

&lt;p&gt;With the help of KubeVirt, we can virtualize our ARM-based hardware nodes the open-source way and create VMs, facilitating a smoother transition to a complete ARM infrastructure.&lt;/p&gt;
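As an illustration, a minimal KubeVirt VirtualMachine manifest for such a node might look like this. This is a sketch, assuming a cluster with KubeVirt installed; the name, sizes, and container disk image are placeholders, not the actual setup described here:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm  # placeholder name
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04  # placeholder image
```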

&lt;h2&gt;
  
  
  FUTURE STEPS AND USE CASES
&lt;/h2&gt;

&lt;p&gt;Moving forward, the plan is to explore multi-cluster deployment, optimize power sources with solar energy, and upgrade hardware for better performance. An automated installation with the help of PXE boot and Tinkerbell to set up the whole infrastructure is also worth considering, to automate and streamline the management of multiple “datacenters”.&lt;/p&gt;

&lt;p&gt;The last question to answer is: what is the use case? As already mentioned, we are focusing on Edge scenarios here, and the possibilities are quite diverse. They range from a small cluster that needs to run inside a store, or in a part of a manufacturing line and shop floor, to some truly remote cases.&lt;/p&gt;

&lt;p&gt;Those remote cases could range from expedition teams that need on-site compute power for research at the location, up to military use cases: a portable station that can be carried by a single soldier and adapts to location changes, yet has the power to support intelligence gathering, data analysis, and general compute workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  CONCLUSION: A WORK IN PROGRESS
&lt;/h2&gt;

&lt;p&gt;This unconventional approach is still a work in progress. While it’s not production-ready yet, it’s an intriguing concept with promising potential for specific environments and scenarios.&lt;/p&gt;

&lt;p&gt;And that’s the gist! While it might sound like a crazy idea at first, the notion of a suitcase-sized data center opens doors to exciting possibilities in the cloud-native landscape.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>linux</category>
    </item>
    <item>
      <title>NodeJS Continuous Deployment in Google Cloud with Kubernetes &amp; Container Builder</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Sun, 17 Mar 2019 23:11:59 +0000</pubDate>
      <link>https://forem.com/mfahlandt/nodejs-continuous-deployment-done-with-container-builder-and-kubernetes-engine-in-google-cloud-570p</link>
      <guid>https://forem.com/mfahlandt/nodejs-continuous-deployment-done-with-container-builder-and-kubernetes-engine-in-google-cloud-570p</guid>
      <description>&lt;p&gt;So you want your app to be deployed to your Kubernetes cluster without caring about any manual step?&lt;br&gt;
I've got you covered; it's super simple to create a continuous deployment pipeline with Google Cloud.&lt;br&gt;
For the sake of understanding I chose a NodeJS Express application, but it also works with React or PHP or any other application layer.&lt;/p&gt;

&lt;p&gt;Let's get started:&lt;/p&gt;
&lt;h2&gt;
  
  
  Because IAM admin
&lt;/h2&gt;

&lt;p&gt;First we need to give Container Builder the rights to access our Kubernetes API. Remember, this does not give access to a certain cluster; it just allows the cloudbuild service account to access the Kubernetes API of our clusters. So jump to the &lt;a href="https://console.cloud.google.com/iam-admin/iam?"&gt;IAM settings page&lt;/a&gt; and look for the cloudbuild service account. If it does not exist, you might have to enable the &lt;a href="https://console.cloud.google.com/cloud-build/"&gt;cloudbuild API&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It should look like this&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9cazgh3pe9rkfykz0gu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9cazgh3pe9rkfykz0gu.png" alt="cloud build service account" width="800" height="24"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to add the rights to access the Kubernetes API of our clusters, so click on the pen and look for the following.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ouize81u4t7sy7krjp0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ouize81u4t7sy7krjp0.png" alt="Kubernetes API access rights" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Prepare the application
&lt;/h2&gt;

&lt;p&gt;I won't go into details on how to set up an Express application and introduce testing to it.&lt;br&gt;
I created a repository with the sample application that we can use:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/mfahlandt"&gt;
        mfahlandt
      &lt;/a&gt; / &lt;a href="https://github.com/mfahlandt/gcp-continuous-deployment-node-demo"&gt;
        gcp-continuous-deployment-node-demo
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      This is an example project to show how you can easily create a continuous deployment to google cloud
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;NodeJS Continuous Deployment done with Container Builder and Kubernetes Engine&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;To find all the details on how to use this repository please refer to the corresponding blog post on &lt;a href="https://dev.to" rel="nofollow"&gt;dev.to&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/mfahlandt/gcp-continuous-deployment-node-demo"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;To give you an overview, we have a basic Express app with two backend routes: retrieve all users, or retrieve a user by id.&lt;br&gt;
We also have a test folder that contains tests for the two routes, written with the help of chai and mocha.&lt;br&gt;
If you download the repository, you can do the following to see if the tests are working.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="nx"&gt;npm&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt;
&lt;span class="nx"&gt;npm&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
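The route logic itself is simple. As a hedged illustration only (the real repository uses Express; the names and sample data here are invented), the lookup behind the two routes can be sketched in plain Node:

```javascript
// Hypothetical sketch of the two routes' lookup logic.
// The actual repository uses Express; data and names are placeholders.
const users = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Grace' },
];

function handleRequest(path) {
  // GET /users: return the full list.
  if (path === '/users') {
    return { status: 200, body: users };
  }
  // GET /users/:id: return a single user or 404.
  const match = path.match(/^\/users\/(\d+)$/);
  if (match) {
    const user = users.find((u) => u.id === Number(match[1]));
    return user
      ? { status: 200, body: user }
      : { status: 404, body: { error: 'user not found' } };
  }
  return { status: 404, body: { error: 'route not found' } };
}
```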



&lt;p&gt;Before the app can run we need the service and the deployment in the Kubernetes Cluster. So let's quickly create a service and a deployment. All of the files you also can find in the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-production
  labels:
    app: server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: server
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: gcr.io/PROJECT_ID/REPOSITORY:master
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only important part here is that you change the project id and the repository to the path that the repository will have.&lt;/p&gt;

&lt;p&gt;After this we only need a service to expose our app to the internet. So quickly apply the service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
kind: Service
apiVersion: v1
metadata:
  name:  server
spec:
  selector:
    app:  server
  ports:
    - name:  server
      protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Ready to deploy
&lt;/h2&gt;

&lt;p&gt;Now we get to the most important part of the whole setup: the cloudbuild.yaml. There we will define all of our continuous deployment steps.&lt;/p&gt;

&lt;p&gt;The first amazing part: all of the important data can be put into substitution variables defined in the build trigger, so you can use the same cloud build for different setups.&lt;/p&gt;

&lt;p&gt;First we install all of the node dependencies and run the tests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'test']

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this we build a Docker image with all the repository files inside and a properly defined environment, so you can easily do a staging deployment as well, or even branch deployments. Then we push it to the Google image registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - build
      - '--build-arg'
      - 'buildtime_variable=$_NODE_ENV'
      - '-t'
      - gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID
      - '.'
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID']

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
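The build step assumes a Dockerfile at the repository root that accepts the buildtime_variable argument. A minimal sketch of such a Dockerfile (the base image version and start command are assumptions, not taken from the repository):

```dockerfile
# Sketch only: base image and npm start script are assumptions.
FROM node:10-alpine
ARG buildtime_variable=production
ENV NODE_ENV=$buildtime_variable
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```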



&lt;p&gt;Also important to note: we tag the image with the unique build ID, so that Kubernetes sees a new image on apply and actually rolls out the change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  - name: 'gcr.io/cloud-builders/kubectl'
    args:
      - set
      - image
      - deployment
      - $_DEPLOYMENT
      - $_DEPLOYMENT=gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=$_CLUSTER_ZONE'
      - 'CLOUDSDK_CONTAINER_CLUSTER=$_CLUSTER_NAME'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally we set the image in the Kubernetes cluster. BAM! Commit hook, automated testing, and, if successful, automated deployment with no downtime.&lt;/p&gt;
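Putting the fragments together, the complete cloudbuild.yaml looks roughly like this (assembled from the snippets above; the substitution values are defined later in the trigger):

```yaml
steps:
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'test']
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - build
      - '--build-arg'
      - 'buildtime_variable=$_NODE_ENV'
      - '-t'
      - gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID
      - '.'
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID']
  - name: 'gcr.io/cloud-builders/kubectl'
    args:
      - set
      - image
      - deployment
      - $_DEPLOYMENT
      - $_DEPLOYMENT=gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=$_CLUSTER_ZONE'
      - 'CLOUDSDK_CONTAINER_CLUSTER=$_CLUSTER_NAME'
```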

&lt;p&gt;Now we open &lt;a href="https://console.cloud.google.com/cloud-build/triggers/add"&gt;the container builder trigger&lt;/a&gt; and choose where our code is located.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbh6n4l2csxlkeivn9j2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbh6n4l2csxlkeivn9j2.png" alt="create trigger" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the last trigger step we can now add the custom variables. This is the first point where we actually define the cluster, so everything is aggregated in one place and ready to go.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqi2aacgbc3qhbkw5hzd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqi2aacgbc3qhbkw5hzd.png" alt="create trigger part 2" width="563" height="796"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we just need to commit to the master and the trigger is started.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqcaquffafcns9bftruj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqcaquffafcns9bftruj.png" alt="Build successful" width="563" height="796"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;YIHA! Now we have continuous deployment without setting up any extra services like Jenkins, Ant, or Chef. Pretty amazing!&lt;/p&gt;

&lt;p&gt;I'm thinking of creating a zero-to-hero cloud tutorial series. Are you interested? Drop me a comment!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>node</category>
      <category>docker</category>
    </item>
    <item>
      <title>Scaling properly a stateful app like Wordpress with Kubernetes Engine and Cloud SQL in Google Cloud</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Sun, 10 Mar 2019 18:15:43 +0000</pubDate>
      <link>https://forem.com/mfahlandt/scaling-properly-a-stateful-app-like-wordpress-with-kubernetes-engine-and-cloud-sql-in-google-cloud-27jh</link>
      <guid>https://forem.com/mfahlandt/scaling-properly-a-stateful-app-like-wordpress-with-kubernetes-engine-and-cloud-sql-in-google-cloud-27jh</guid>
      <description>&lt;p&gt;There are a lot of examples on the web that show how you can run WordPress in Kubernetes. The main issue with these examples: only one pod runs WordPress, and you cannot really scale it.&lt;/p&gt;

&lt;p&gt;So I faced the issue that I needed a highly scalable setup for WordPress, and here is what I came up with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is it so hard to scale stateful apps?
&lt;/h2&gt;

&lt;p&gt;These apps write to the disk directly, and most of the time you cannot prevent it. This is often the case in PHP-based applications that use some kind of plugin system. So files cannot be stored in some kind of bucket but have to live in the filesystem of the application.&lt;/p&gt;

&lt;p&gt;Now you might say: but there is a stateless plugin like &lt;a href="https://de.wordpress.org/plugins/wp-stateless/" rel="noopener noreferrer"&gt;https://de.wordpress.org/plugins/wp-stateless/&lt;/a&gt; that writes to cloud buckets. Yes, this is true, but it does not store the plugins there, or the files that some plugins might write directly into their own folder (sad that this happens, but true).&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do?
&lt;/h2&gt;

&lt;p&gt;We need a couple of things: a scalable database, some kind of shared file base for our application, and the application itself.&lt;/p&gt;

&lt;p&gt;For the sake of brevity we will just use a predefined WordPress Docker image, although you should always try to create additions to these Dockerfiles that fit your own needs. Use them as a base, but extend them.&lt;/p&gt;

&lt;p&gt;So we need a shared disk, and here we encounter our first problem: we need a ReadWriteMany volume in our Kubernetes cluster, and the cloud providers do not offer this.&lt;br&gt;
If you check &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noopener noreferrer"&gt;the Kubernetes documentation&lt;/a&gt;&lt;br&gt;
you will see that neither GCEPersistentDisk nor AzureDisk nor AWSElasticBlockStore supports what we need.&lt;br&gt;
There are options like Cloud Filestore in Google Cloud or AzureFile, but they are way too expensive and too big for our case (we do not need 1 TB to store our WordPress, thank you).&lt;/p&gt;
&lt;h2&gt;
  
  
  NFS to the rescue
&lt;/h2&gt;

&lt;p&gt;But when we look at the list we see the saviour: NFS to the rescue. Let's create the only option we have: a ReadWriteOnce disk that backs our NFS server. So we need a storage class, ideally replicated between zones:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: regionalpd-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
 type: pd-standard
 replication-type: regional-pd
 zones: europe-west3-b, europe-west3-c

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we need to create the Volume Claim&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    name: nfs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: ""
  volumeName: nfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
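Since the claim above sets storageClassName to "" and binds by volumeName, it expects a pre-created PersistentVolume named nfs. A sketch of such a PV, assuming a GCE persistent disk that is also named nfs (the disk itself must be created beforehand, e.g. with gcloud):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: nfs  # assumed name of the pre-created GCE disk
    fsType: ext4
```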



&lt;p&gt;Now let’s create the service for our NFS server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
 name: nfs-server
spec:
 clusterIP: 10.3.240.20
 ports:
   - name: nfs
     port: 2049
   - name: mountd
     port: 20048
   - name: rpcbind
     port: 111
 selector:
   role: nfs-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we add the NFS server itself. The good thing here: we can use a prebuilt image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: nfs-server
spec:
 replicas: 1
 selector:
   matchLabels:
     role: nfs-server
 template:
   metadata:
     labels:
       role: nfs-server
   spec:
     containers:
       - name: nfs-server
         image: gcr.io/google_containers/volume-nfs:0.8
         ports:
           - name: nfs
             containerPort: 2049
           - name: mountd
             containerPort: 20048
           - name: rpcbind
             containerPort: 111
         securityContext:
           privileged: true
         volumeMounts:
           - mountPath: /exports
             name: nfs
     volumes:
       - name: nfs
         gcePersistentDisk:
           pdName: nfs
           fsType: ext4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CloudSQL so secure so much beauty
&lt;/h2&gt;

&lt;p&gt;Alright, so we have a running NFS for our static data. The next big step: connect Cloud SQL. Let’s say you have already set up a Cloud SQL MySQL instance. How do you connect your pods to it?&lt;/p&gt;

&lt;p&gt;We use the Cloud SQL Proxy, which comes as a sidecar to our container. The good thing about this: our MySQL is not exposed, and we can use localhost. Amazing, isn’t it?&lt;/p&gt;

&lt;p&gt;First you have to activate the &lt;a href="https://console.cloud.google.com/flows/enableapi?apiid=sqladmin" rel="noopener noreferrer"&gt;Cloud SQL Admin API&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And you need to create a &lt;a href="https://console.cloud.google.com/iam-admin/serviceaccounts/" rel="noopener noreferrer"&gt;service account&lt;/a&gt; that has access to Cloud SQL.&lt;/p&gt;

&lt;p&gt;Here we give it the role Cloud SQL &amp;gt; Cloud SQL Client.&lt;/p&gt;

&lt;p&gt;Download the created private key; we need it to access the SQL instance.&lt;/p&gt;

&lt;p&gt;Now create a database user, if you have not already done so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud sql users create [DBUSER] --host=% --instance=[INSTANCE_NAME] --password=[PASSWORD]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we need the name of the instance, easy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud sql instances describe [INSTANCE_NAME]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or in the webinterface you find it here:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6dhf4g165bm0igzyyrw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6dhf4g165bm0igzyyrw.png" alt="Google Cloud Webinterface" width="455" height="289"&gt;&lt;/a&gt;&lt;br&gt;
Now we save the credentials to our Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic cloudsql-instance-credentials \
    --from-file=credentials.json=[PROXY_KEY_FILE_PATH]
kubectl create secret generic cloudsql-db-credentials \
    --from-literal=username=[DBUSER] --from-literal=password=[PASSWORD]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  So we are ready to set up our WordPress, aren’t we?
&lt;/h3&gt;

&lt;p&gt;Let’s create the service as a first step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
 name: wlp-service
 labels:
   app: wlp-service
spec:
 type: LoadBalancer
 sessionAffinity: ClientIP
 ports:
   - port: 443
     targetPort: 443
     name: https
   - port: 80
     targetPort: 80
     name: http
 selector:
   app: wordpress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alright, now we have the service up and running; the only thing missing is the deployment itself.&lt;br&gt;
Let's split it up a bit so I can explain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
 name: wordpress
 labels:
   app: wordpress
spec:
 replicas: 2
 strategy:
   type: RollingUpdate
 selector:
   matchLabels:
     app: wordpress
 template:
   metadata:
     labels:
       app: wordpress
   spec:
     containers:
       - name: wordpress
          image: wordpress:php7.3-apache
         imagePullPolicy: Always
         env:
           - name: DB_USER
             valueFrom:
               secretKeyRef:
                 name: "cloudsql-db-credentials"
                 key: username
           - name: DB_PASSWORD
             valueFrom:
               secretKeyRef:
                 name: "cloudsql-db-credentials"
                 key: password
         ports:
           - containerPort: 80
             name: wordpress
           - containerPort: 443
             name: ssl

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would be enough to run WordPress, but without the database or the persistent NFS. One by one: let's add the Cloud SQL Proxy sidecar.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;       - name: cloudsql-proxy
         image: gcr.io/cloudsql-docker/gce-proxy:1.11
         command: ["/cloud_sql_proxy",
                   "-instances=[YOUR INSTANCESTRING THAT WE LOOKED UP]=tcp:3306",
                   "-credential_file=/secrets/cloudsql/credentials.json"]
         securityContext:
           runAsUser: 2  # non-root user
           allowPrivilegeEscalation: false
         volumeMounts:
           - name: cloudsql-instance-credentials
             mountPath: /secrets/cloudsql
             readOnly: true
     volumes:
       - name: cloudsql-instance-credentials
         secret:
           secretName: cloudsql-instance-credentials
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cool, now we can access our Cloud SQL via localhost :) It basically adds a second container to the pod that proxies everything coming in on port 3306 to our Cloud SQL instance, without exposing the traffic to the public net.&lt;/p&gt;
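Since the proxy listens on 127.0.0.1:3306 inside the pod, the WordPress container just points its database host at localhost. Note that the official wordpress image reads WORDPRESS_DB_* variables; if you rely on those rather than a custom entrypoint consuming DB_USER/DB_PASSWORD, the env block would look like this sketch:

```yaml
          env:
            - name: WORDPRESS_DB_HOST
              value: "127.0.0.1:3306"  # the Cloud SQL Proxy sidecar
            - name: WORDPRESS_DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
```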

&lt;p&gt;And now we want to mount our wp-content directory onto the NFS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumeMounts:
           - name: my-pvc-nfs
             mountPath: "/var/www/html/wp-content"
volumes:
        - name:  my-pvc-nfs
        nfs:
            server: 10.3.240.20
            path: "/"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you might ask: but Mario, why the heck do you put in a fixed IP for the NFS? There is a reason: this is the only case I know of where the internal DNS does not work properly.&lt;/p&gt;

&lt;p&gt;And that's it! Now we can scale our pods by creating an HPA (HorizontalPodAutoscaler):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
 name: wordpress
 namespace: default
spec:
 maxReplicas: 10
 metrics:
   - resource:
       name: cpu
       targetAverageUtilization: 50
     type: Resource
 minReplicas: 3
 scaleTargetRef:
   apiVersion: apps/v1
   kind: Deployment
   name: wordpress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All our wp-content files go to the NFS and are shared between the instances. Yes, you are correct: the NFS is now our single point of failure, but an NFS is way more stable than having just one machine running everything. If you use caching like Redis, or increase the FPM cache, you can further reduce the load time.&lt;/p&gt;

&lt;p&gt;Cool isn’t it?&lt;/p&gt;

&lt;p&gt;Are you interested in basic Kubernetes / Cloud walkthroughs? Just let me know&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>wordpress</category>
    </item>
    <item>
      <title>Move PostgreSQL AWS RDS to Google Cloud SQL</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Sun, 03 Mar 2019 18:25:05 +0000</pubDate>
      <link>https://forem.com/mfahlandt/move-postgresql-to-google-cloud-sql-5bma</link>
      <guid>https://forem.com/mfahlandt/move-postgresql-to-google-cloud-sql-5bma</guid>
      <description>

&lt;p&gt;We had the issue that we had to move a large PostgreSQL database away from Amazon's AWS to Google's GCP.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problems were:
&lt;/h2&gt;

&lt;p&gt;A large database: 160 GB+, and we only had the snapshots in AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get a snapshot out of RDS into Storage
&lt;/h3&gt;

&lt;p&gt;To do this we created a new Compute Engine instance and connected to it via SSH. We want to write the dump file directly to a new bucket in Cloud Storage, so we have to mount the bucket as a new volume on the machine:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can either log in or use the service account, but keep in mind that the service account needs the rights to create a bucket.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gsutil mb gs://my-new-bucket/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we have to mount the bucket on our machine. For this we use Cloud Storage FUSE; installing it takes the following steps:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export GCSFUSE_REPO=gcsfuse-lsb_release -c -s
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gcsfuse
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And now we can finally mount it:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcsfuse db /mnt/gcs-bucket
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So we have a place to store the dump. What next? We have to install the same PostgreSQL version on the machine as the remote server runs, to get a working pg_dump:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main 9.5" | sudo tee /etc/apt/sources.list.d/postgresql.list
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
apt-get install postgresql-9.5
sudo apt-get install postgresql-9.5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can finally do the dump:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_dump -h yourRDS.rds.amazonaws.com -p 5432 -F c -O -U postgres DATABASE &amp;gt; /mnt/gcs-bucket/db.dump
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Depending on how large your database is, this will take quite a while. What's next? Create your Cloud SQL instance on GCP. There is an import function for SQL files from a bucket, but sadly not for dumps, so we have to do the restore the hard way.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_restore -h YourNewSQLInstanceIP -n public -U postgres-user -d DATABASE -1 /mnt/gcs-bucket/db.dump
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will take even longer. Be sure to whitelist the IP of the Compute Engine instance so that it can access the Cloud SQL instance.&lt;/p&gt;
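&lt;p&gt;The whitelisting can presumably also be done from the command line; the instance name and IP address below are placeholders, not values from this migration:&lt;/p&gt;

```shell
# Allow the Compute Engine instance's external IP to reach the Cloud SQL instance
gcloud sql instances patch my-sql-instance --authorized-networks=203.0.113.10
```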

&lt;h2&gt;
  
  
  I did everything like you told me, but I receive weird errors
&lt;/h2&gt;

&lt;p&gt;Something like this?&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 4198; 0 0 ACL children_of(integer) postgres
pg_restore: [archiver (db)] could not execute query: ERROR: role "user" does not exist
    Command was: REVOKE ALL ON FUNCTION children_of(root_id integer) FROM PUBLIC;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Easy to answer: users that are referenced in the dump are missing on your new database.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to avoid this?
&lt;/h3&gt;

&lt;p&gt;Easy to answer: create the users. Sadly, you can't simply export them; restrictions in RDS make it impossible to do a pg_dumpall -g (users and roles only):&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_dumpall -h yourRDS.cd8cncmdv7f0.eu-central-1.rds.amazonaws.com -g  -p 5432  -U postgres &amp;gt; /mnt/gcs-bucket/db_roles.dump
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This does not work, and you will receive the error:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_dumpall: query failed: ERROR:  permission denied for relation pg_authid
pg_dumpall: query was: SELECT oid, rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolconnlimit, rolpassword, rolvaliduntil, rolreplication, rolbypassrls, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment, rolname = current_user AS is_current_user FROM pg_authid ORDER BY 2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;AWS RDS does not run this query as the superuser, so you cannot export the roles. However, if you create them manually, the restore will work fine.&lt;/p&gt;
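&lt;p&gt;As a sketch, recreating a missing role by hand could look like this; the role name comes from the error above, while the host, database, and password are placeholder assumptions:&lt;/p&gt;

```shell
# Recreate the role referenced in the dump before running pg_restore again
psql -h YourNewSQLInstanceIP -U postgres -d DATABASE \
  -c "CREATE ROLE \"user\" WITH LOGIN PASSWORD 'changeme';"
```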

&lt;p&gt;Till next time!&lt;/p&gt;


</description>
      <category>googlecloud</category>
      <category>postgres</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>We need you! Join tech communities!</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Thu, 17 Jan 2019 15:04:51 +0000</pubDate>
      <link>https://forem.com/mfahlandt/we-need-you-join-tech-communities-2epg</link>
      <guid>https://forem.com/mfahlandt/we-need-you-join-tech-communities-2epg</guid>
      <description>&lt;p&gt;All of us benefit in some way of communities. One of the obvious examples is this very plattform here. Without it i would probably not blog. Thanks for it at this point to the dev.to team! &lt;/p&gt;

&lt;p&gt;For two and a half years I have been the main organizer of the &lt;a href="https://www.meetup.com/GDG-cloud-munich/"&gt;Google Developer Group&lt;/a&gt; for cloud-related topics in Munich, and I must admit it's not easy. I like the events themselves, but organizing them in your spare time is challenging. I even ran into a stretch of more than six months in which it was impossible for me to organize a meetup. There were multiple reasons. &lt;br&gt;
We organized a two-day conference called &lt;a href="https://dachfest.com/"&gt;dachfest&lt;/a&gt;, put together entirely by a joint force of organizers from different meetups and communities. It was a huge success, and I will write about it in another blog post. But it took all the spare time I normally have for organizing meetups. It would have helped if I had shared the work of organizing the meetup group with someone else. Which brings me to the main point of this article.&lt;/p&gt;

&lt;p&gt;You have probably been to meetups already, or to un-conferences, and so on. &lt;br&gt;
The people organizing them badly need your help. How can you help? And what is in it for you? The last question should not need asking, but I know a lot of people do ask it, so we will try to answer it as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  First things first: Where to find a community
&lt;/h2&gt;

&lt;p&gt;There are a lot of possibilities; most of them depend on the area you are in. In Europe and North America the platform &lt;a href="https://www.meetup.com"&gt;Meetup&lt;/a&gt; is used to organize communities around various topics. I also know of communities that organize via Google Groups or Twitter.&lt;/p&gt;

&lt;h2&gt;
  
  
  Give a talk
&lt;/h2&gt;

&lt;p&gt;I have a hard time finding speakers (and the meetup has 680 members…). Half of the talks at each meetup are held by me. That creates more overhead: you need to prepare a talk for each meetup, which takes time to research, test, and build the presentation. It also gets boring always seeing myself on stage, and since I am not the all-knowing person, it narrows the range of talks.&lt;br&gt;
Everyone can give a talk, so don't be shy. No one will kill you or blame you if something does not go as planned. People are happy that it is not themselves up there. &lt;br&gt;
Most meetups do something like lightning talks, short 5-10 minute talks; try to start there, or team up with a colleague to give a talk. We are all happy to have you, and we will surely spend some time going over your talk with you.&lt;br&gt;
&lt;strong&gt;what's in it for me&lt;/strong&gt;: Training! You improve your skill at speaking in front of a crowd. This will help you when you talk to customers or to your fellow colleagues. You also collect talk references, which some conferences want to see before they consider your submissions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Provide a location and catering
&lt;/h2&gt;

&lt;p&gt;Another important topic is where to meet. I don't want to be tied to one location or company; communities should be independent and not reliant on companies or governments. If you can provide space and some snacks (depending on the region also beer; obviously we are in Bavaria), it is very helpful. The easiest way is to contact the organizers directly. Most of them are super happy to have you host the next meetup, but please be a little flexible with dates and be open to having someone from your side present at the meetup, because it will most likely happen in the evening or on a weekend.&lt;br&gt;
&lt;strong&gt;what's in it for me&lt;/strong&gt;: You get a lot of tech folk into your office; most meetups offer something like five minutes of advertising and recruiting talk. You can also talk to a couple of people who share your problem stack and tech stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Join a community as organizer / create a community
&lt;/h2&gt;

&lt;p&gt;The most time-consuming option, for sure. A lot of communities are looking for co-organizers (me, for example). It's hard to find reliable people who can give talks, organize new meetups, do some advertising for the meetup, find speakers, and so on. &lt;br&gt;
Another option: there is no meetup or group for your topic? Easy, create one! People will join you fast. I created my meetup, and after two weeks about 100 people had joined; about 30 people came to our first meetup. &lt;br&gt;
There is always a need for you. Just do it!&lt;br&gt;
&lt;strong&gt;what's in it for me&lt;/strong&gt;: Time management and organisation skills! &lt;/p&gt;

&lt;h2&gt;
  
  
  OK, that all sounds fine, but I feel uncomfortable around people
&lt;/h2&gt;

&lt;p&gt;No problem, we have you covered: get involved in online communities like this one. Take part in discussions and share your knowledge. Create tutorials or video tutorials. Or one of the coolest things: contribute to open source. &lt;br&gt;
Our whole world depends on open source projects; help us with a little bit of your time.&lt;/p&gt;

&lt;p&gt;Thanks to all the people who are already doing community work and contributing to all of the points above: you rock!&lt;/p&gt;

</description>
      <category>meta</category>
      <category>career</category>
      <category>community</category>
      <category>discuss</category>
    </item>
    <item>
      <title>One year home office, things I have learned</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Mon, 07 Jan 2019 10:48:59 +0000</pubDate>
      <link>https://forem.com/mfahlandt/one-year-homeoffice-things-i-have-learned-3518</link>
      <guid>https://forem.com/mfahlandt/one-year-homeoffice-things-i-have-learned-3518</guid>
      <description>&lt;p&gt;To be honest it's just a little bit more than one year. Actually one year and three months. &lt;/p&gt;

&lt;p&gt;The first and most important rule above all:&lt;/p&gt;

&lt;h3&gt;
  
  
  Wear pants, good pants that you would wear in the office, every working day.
&lt;/h3&gt;

&lt;p&gt;Something I learned pretty fast: when you get up and dress well for the day, you can draw a clear line between work and spare time. You get a clean start to the day, and in the evening you can look forward to getting out of your "work" clothes. &lt;br&gt;
Also, my wife complained about me looking less and less appealing, so this is a must; you do not want to look like the troll under the bridge.&lt;br&gt;
A positive side effect: you are always prepared for video chats that come up by surprise.&lt;/p&gt;

&lt;h3&gt;
  
  
  Set basic work times, but don't get up and sit straight down at your desk
&lt;/h3&gt;

&lt;p&gt;For myself it was important to create some kind of working day, so I set myself fixed times to start work: I start between 8.30am and 9.30am. This also makes it easy for people who want to reach me to know when I am available. But here is the important part: I never get up and sit down directly at my desk. Basically, I created something like a commute. After getting up, the first things I do are prepare coffee and take the dog for a walk; afterwards I have breakfast and then start working. This helps you build a routine and avoid the oversleeping trap.&lt;/p&gt;

&lt;h3&gt;
  
  
  Take breaks, don't eat at your desk!
&lt;/h3&gt;

&lt;p&gt;This is probably one of the biggest issues: not scheduling breaks, or eating at your desk. It is also a mistake I made for a long time and sometimes still make. I try to get up from my desk at least every two hours for a few minutes: stretch, get something to drink, and so on. If you have a portable device, change places. If you want to sit on your couch for some time, do it! But please take breaks and stop working, to refresh your mind. Because of the dog, I have to go outside at least once during working hours. If you don't have something similar, find an activity that gets you away from your workstation. It helps you regain energy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Invest in your equipment
&lt;/h3&gt;

&lt;p&gt;We sit the whole day, so invest in a good setup that fits you. Buy a proper chair; your back will thank you. I underestimated this for a long time, and now I have to see the back doctor every once in a while. Don't make the same mistake. Ask your employer for support: the company saves a lot of money by not having to maintain an office spot for you, so it is more than fair if they pay for a good chair and maybe a desk. Also get a keyboard that feels right for you and a proper webcam and microphone or headset; your clients and colleagues will thank you.&lt;/p&gt;

&lt;h3&gt;
  
  
  Feel so lonely
&lt;/h3&gt;

&lt;p&gt;A big challenge, and one that will not be easy or even solvable for everyone, is the separation and lack of human interaction. You won't see a lot of people. For me this is just what I needed; I like working alone in my house more than working in any kind of office. But you will have less contact with people. I do sports and meet friends outside work, which is enough for me. For a lot of people it is not. Honestly, I cannot help you here. Some colleagues of mine are on Skype the whole day, just to have people to talk to. &lt;/p&gt;

&lt;h3&gt;
  
  
  Oh sweet sweet distraction
&lt;/h3&gt;

&lt;p&gt;Another hard point: distraction. The kitchen could be cleaned, the garden needs some fixing, and there is always procrastination on the interweb ;) &lt;br&gt;
Being home adds more distraction than being in the office. Keep in mind that your family can also be a distraction: when my wife is home, I have the feeling I get less done than when she is not. How to deal with this? The easiest fix for internet distraction is a website blocker with time windows, so you won't end up on Facebook or YouTube. For everything else, get back to the point where we defined work hours, and make it clear to your family that this time is work time.&lt;/p&gt;

&lt;h3&gt;
  
  
  STOP WORKING!
&lt;/h3&gt;

&lt;p&gt;There is also the other side: you may end up unable to stop working. So set yourself a fixed time each day when you will stop. I think this is a bigger issue than distraction, at least for me. Mostly I stop working between 6pm and 7pm, I do not read any more mail after that, and I stopped working on weekends (except for crunch-time projects, but that's another story).&lt;br&gt;
You won't do yourself any good if you work overtime each and every day.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stay fit
&lt;/h3&gt;

&lt;p&gt;Maybe you are missing your walk or the gym after work. Don't! Find yourself a sport to stay fit, because you will get out even less than in an office job. The good thing is that you can go to the gym or the climbing hall in the afternoon: you define your work hours yourself. That's a huge advantage, use it! I stopped for a while and it was bad for me; now I go climbing at least twice a week in a completely empty hall.&lt;/p&gt;

&lt;h3&gt;
  
  
  Last words
&lt;/h3&gt;

&lt;p&gt;This is my perspective, so it does not necessarily apply to you. Are you working from home and have other tips? Let me know! Thinking about starting to work from home? Drop a comment.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>career</category>
      <category>yearinreview</category>
    </item>
    <item>
      <title>Holidays Side Project Time, or maybe NOT?</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Tue, 25 Dec 2018 12:40:51 +0000</pubDate>
      <link>https://forem.com/mfahlandt/holidays-side-project-time-or-maybe-not-2jcf</link>
      <guid>https://forem.com/mfahlandt/holidays-side-project-time-or-maybe-not-2jcf</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pjfcAAJe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/9r0npe7taqckvt12horg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pjfcAAJe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/9r0npe7taqckvt12horg.png" alt="alt text" title="By Nathan Dumlao" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I wrote code on 247 days this year and will probably add two more before it's over. On my Twitter timeline and in a lot of blog posts I read, holidays are side-project time. Four or five years ago I would have agreed, but now it's different. &lt;/p&gt;

&lt;p&gt;Two years ago my parents moved from a day-trip distance to a distance that takes some organisation and at least four days to visit, so we only see them once a year. This makes me feel bad when I remember the days in the past when I wasted my time at my parents' with side projects instead of spending time together. Yes, you read correctly: I wrote wasted. &lt;br&gt;
Each of us writes code in one form or another most of the year, and it should be fun and stay fun. Here is the point: at some moment the joy can disappear and coding becomes a mere necessity. For me that would be the worst moment of my life: having to work a job that brings me no joy anymore. &lt;br&gt;
I can hear the outcry: "Oh, you are not a passionate software developer!" Honestly, I don't see it that way. The most important job for all of us is to look after ourselves and our loved ones. These days especially are made for it. Most of your loved ones have time, and you can get together, play board games, go for a walk, watch movies, and most importantly give your brain a rest from the things it has to do all year long.&lt;br&gt;
Another argument I often hear: side projects are the only way I can try new things. This argument makes me mad, for a simple reason. As an employer, the knowledge of my employees is the most valuable asset I have. So if a developer tells me holidays are the only days in the year to advance their technical knowledge, it means the employer does not give them the chance and the time to learn new things. Ask your employer for time to try new stuff. If the answer is yes, you know your knowledge is valued; if the answer is no, maybe it's time to look for a new job.&lt;/p&gt;

&lt;p&gt;Don't feel bad if you don't work on a side project over the holidays, even if you don't spend that time with loved ones. Maybe you just want to relax by reading a NON-tech book, playing games, or, what the heck, watching a movie or a series that you missed in all the stress this year. &lt;/p&gt;

&lt;p&gt;Step back and take a break; you won't miss out on anything. Get your mindset to a state where it says: heck, I can't wait to get back to coding.&lt;/p&gt;

&lt;p&gt;Now, what is your way? Are you working on a side project right now over the holidays, or did you give yourself a day off? Anyway, happy holidays!&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Fast Knowledge: Kubernetes port forwarding</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Thu, 29 Nov 2018 11:45:50 +0000</pubDate>
      <link>https://forem.com/mfahlandt/fast-knowledge-kubernetes-port-forwarding--33d0</link>
      <guid>https://forem.com/mfahlandt/fast-knowledge-kubernetes-port-forwarding--33d0</guid>
      <description>&lt;p&gt;When i finally discovered the option in Kubernetes to forward a port of a pod to localhost, it changed the way i started working with my cluster.&lt;/p&gt;

&lt;p&gt;I often have MongoDB, MySQL, or Elasticsearch running on pods inside a Kubernetes cluster. Naturally, they have no public access point or port forwarding to the outside of the cluster. But sometimes you want to use a client to access your database, or use your CLI locally to find errors.&lt;/p&gt;

&lt;p&gt;The solution is&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward localport:podport
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, this allows you to easily forward your MongoDB port to localhost:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward pods/mongo-0 27017:27017

or

kubectl port-forward deployment/mongo-0 27017:27017


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you can use your MongoDB client (e.g. Robo 3T) to connect to localhost:27017, and you are connected to your remote NoSQL database.&lt;/p&gt;
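&lt;p&gt;If you prefer the command line over a GUI client, a sketch of the same flow (assuming the mongosh shell is installed locally and the pod name from above) would be:&lt;/p&gt;

```shell
# In one terminal: forward the pod's MongoDB port to localhost
kubectl port-forward pods/mongo-0 27017:27017

# In a second terminal: connect through the forwarded local port
mongosh "mongodb://localhost:27017"
```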

&lt;p&gt;For more information look &lt;a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>devops</category>
      <category>database</category>
    </item>
    <item>
      <title>Remove and modify nested documents in MongoDB</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Mon, 01 Oct 2018 13:05:54 +0000</pubDate>
      <link>https://forem.com/mfahlandt/remove-and-modify-documents-in-nested-array-in-mongodb-nm1</link>
      <guid>https://forem.com/mfahlandt/remove-and-modify-documents-in-nested-array-in-mongodb-nm1</guid>
      <description>&lt;p&gt;I have a rather complex Document Structure to fulfill the approach of have all the data available when you execute one query. However there is the big issue of modifying it. &lt;/p&gt;

&lt;p&gt;So let's have a look at the object that we want to modify&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
    "_id" : ObjectId("5b0bf696cb5dd80010bea0a3"),
    "domains" : [ 
        {
            "_id" : ObjectId("5b0bf696cb5dd80010bea0a4"),
            "DKIMRecordName" : "mailjet._domainkey.example.de.",
            "DKIMRecordValue" : "k=rsa; p=000000000000000000000000",
            "DKIMStatus" : "Not Checked",
            "Domain" : "example.de",
            "IsCheckInProgress" : false,
            "SPFRecordValue" : "v=spf1 include:spf.mailjet.com ?all",
            "SPFStatus" : "Not Checked",
            "emails" : [ 
                {
                    "_id" : ObjectId("5b0bf696cb5dd80010bea0a5"),
                    "CreatedAt" : ISODate("2018-05-28T12:31:18.000Z"),
                    "DNSID" : "2837371580",
                    "Email" : "no-reply@example.de",
                    "EmailType" : "unknown",
                    "Filename" : "",
                    "ID" : "216556",
                    "IsDefaultSender" : false,
                    "Name" : "",
                    "Status" : "Inactive"
                },
                {
                    "_id" : ObjectId("5b0bf696cb5dd45410bea0a5"),
                    "CreatedAt" : ISODate("2018-05-28T12:31:18.000Z"),
                    "DNSID" : "2837371580",
                    "Email" : "newsletter@example.de",
                    "EmailType" : "unknown",
                    "Filename" : "",
                    "ID" : "216556",
                    "IsDefaultSender" : false,
                    "Name" : "",
                    "Status" : "Inactive"
                }
            ]
        }
    ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to modify one of them, use the following&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.yourCollection.update({'domains.emails.Email': 'no-reply@example.de'},
{ $set: {'email.$.emails':  
    {
                    "_id" : ObjectId("5b0bf696cb5dd80010bea0a5"),
                    "CreatedAt" : ISODate("2018-05-28T12:31:18.000Z"),
                    "DNSID" : "2837371580",
                    "Email" : "no-reply@example.de",
                    "EmailType" : "unknown",
                    "Filename" : "",
                    "ID" : "216556",
                    "IsDefaultSender" : false,
                    "Name" : "",
                    "Status" : "Active"
                }

  },
{multi: true})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why do we have to put in the whole object? Sadly, we cannot use a second positional $ operator, so we have to provide the full nested document. Additionally, the field MUST be included in the query document; otherwise the positional operator cannot resolve it and throws the error "The positional operator did not find the match needed from the query."&lt;/p&gt;

&lt;p&gt;What we can also do is a pull operation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.yourCollection.update({'email.emails.Email': 'no-reply@example.de'},
{ $pull: {'email.$.emails':  {'Email': 'no-reply@example.de'}}  },
{multi: true})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will remove the child document from the nested array.&lt;/p&gt;

&lt;p&gt;Be aware that multi: true will not work across the nested arrays here, so you may have to run the query multiple times.&lt;/p&gt;
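&lt;p&gt;To double-check the result afterwards, you could count the remaining matches from the command line; the connection string and database name here are placeholder assumptions:&lt;/p&gt;

```shell
# Should print 0 once the nested email document has been pulled everywhere
mongosh "mongodb://localhost:27017/mydb" --quiet --eval \
  "db.yourCollection.countDocuments({'domains.emails.Email': 'no-reply@example.de'})"
```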

</description>
      <category>mongodb</category>
      <category>nosql</category>
      <category>database</category>
    </item>
    <item>
      <title>How to move Google Cloud images to other accounts</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Thu, 20 Sep 2018 13:11:36 +0000</pubDate>
      <link>https://forem.com/mfahlandt/how-to-move-google-cloud-images-to-other-accounts-1iod</link>
      <guid>https://forem.com/mfahlandt/how-to-move-google-cloud-images-to-other-accounts-1iod</guid>
      <description>&lt;p&gt;I handle multiple accounts for my customers and i have some images that i want to use on some of those accounts.&lt;/p&gt;

&lt;p&gt;You have different options to achieve this. The best one would be to always have a Dockerfile for your images and use it as a baseline; we will look at this option in another article soon.&lt;/p&gt;

&lt;p&gt;The easiest option, however, if we already have an image built from one machine, is to copy it to the other account.&lt;/p&gt;

&lt;p&gt;There are several ways to create images; the easiest is to create one in the Cloud Console.&lt;/p&gt;

&lt;p&gt;Compute Engine =&amp;gt; Images =&amp;gt; [+] Create Image&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fm682lm7t0p3i4gyn2gvr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fm682lm7t0p3i4gyn2gvr.jpg" title="Create Image" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, if you prefer the command line, do it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute images create IMAGE_NAME --source-disk=SOURCE_DISK 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have an image, we want to export it to a Cloud Storage bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute images export --destination-uri gs://bucket-name/imagename.tar.gz --image imagename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this we will find the image as a tar.gz file in our Cloud Storage. Now we have to make it available to other users.&lt;br&gt;
To keep it simple, we just make it publicly available. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fu5ysbbt7iryvqkzje4sr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fu5ysbbt7iryvqkzje4sr.jpg" title="Manage rights" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The trick here is to add a user with the name allUsers and give it Read access.&lt;br&gt;
Now it's freely downloadable.&lt;/p&gt;
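&lt;p&gt;If you prefer the command line, the same public-read grant can be done with gsutil, using the bucket and file names from the export step:&lt;/p&gt;

```shell
# Grant read access to allUsers on the exported image archive
gsutil acl ch -u AllUsers:R gs://bucket-name/imagename.tar.gz
```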

&lt;p&gt;Now we switch to the account where we need the image and import it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
gcloud compute images create imagename     --source-uri gs://bucket-name/imagename.tar.gz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it.&lt;/p&gt;

&lt;p&gt;Please tell me if you want to see video tutorials for Google Cloud!&lt;/p&gt;

&lt;p&gt;Cheers&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Copy Files from and to Kubernetes pods and Docker container</title>
      <dc:creator>Mario</dc:creator>
      <pubDate>Mon, 17 Sep 2018 16:00:35 +0000</pubDate>
      <link>https://forem.com/mfahlandt/copy-files-from-and-to-kubernetes-pods-and-docker-container-4lgh</link>
      <guid>https://forem.com/mfahlandt/copy-files-from-and-to-kubernetes-pods-and-docker-container-4lgh</guid>
      <description>&lt;p&gt;I often want to have some database dumps from my mongo also in my local Setup. To achieve this is pretty easy.&lt;br&gt;
You can either copy files or whole folders.&lt;br&gt;
What we need is the kubectl command line tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Copy Files from a pod to your machine
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="o"&gt;{{&lt;/span&gt;namespace&lt;span class="o"&gt;}}&lt;/span&gt;/&lt;span class="o"&gt;{{&lt;/span&gt;podname&lt;span class="o"&gt;}}&lt;/span&gt;:path/to/directory /local/path
eg:
kubectl &lt;span class="nb"&gt;cp &lt;/span&gt;mongo-0:/dump /local/dump
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Copy Files to a pod
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;cp&lt;/span&gt; /local/path namespace/podname:path/to/directory 
eg:
kubectl &lt;span class="nb"&gt;cp&lt;/span&gt; /local/dump mongo-0:/dump
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The amazing thing is that this works with Docker too.&lt;br&gt;
Just change kubectl to docker and the same pattern works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Copy Files from a docker container to your machine
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;cp &lt;/span&gt;containerID:/path/to/directory /local/path
eg:
docker &lt;span class="nb"&gt;cp &lt;/span&gt;mongo-0:/dump /local/dump
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Copy Files to a docker container
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;cp&lt;/span&gt; /local/path containerID:path/to/directory
eg:
docker &lt;span class="nb"&gt;cp&lt;/span&gt; /local/dump mongo-0:dump
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why do I write such an obvious post? I'm tired of looking up the correct syntax now and then; just looking at my own blog post speeds things up.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
