<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ernesto Freyre</title>
    <description>The latest articles on Forem by Ernesto Freyre (@efreyreg).</description>
    <link>https://forem.com/efreyreg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F222186%2F0e2ae24c-a23b-4307-923c-069fd785716a.jpg</url>
      <title>Forem: Ernesto Freyre</title>
      <link>https://forem.com/efreyreg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/efreyreg"/>
    <language>en</language>
    <item>
      <title>Runtime Config for Frontend applications using TwoFlags</title>
      <dc:creator>Ernesto Freyre</dc:creator>
      <pubDate>Fri, 10 Apr 2020 14:03:46 +0000</pubDate>
      <link>https://forem.com/efreyreg/runtime-config-for-frontend-applications-using-twoflags-42k4</link>
      <guid>https://forem.com/efreyreg/runtime-config-for-frontend-applications-using-twoflags-42k4</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mb5bFluV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A3caWOIyaIgvC-7NIaYOmLg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mb5bFluV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A3caWOIyaIgvC-7NIaYOmLg.jpeg" alt=""&gt;&lt;/a&gt;&lt;a href="https://www.pexels.com/photo/white-and-black-music-mixer-164746/"&gt;&lt;/a&gt;&lt;a href="https://www.pexels.com/photo/white-and-black-music-mixer-164746/"&gt;https://www.pexels.com/photo/white-and-black-music-mixer-164746/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cloudflare Workers has become my favorite serverless platform. The introduction of Workers KV, a storage service, added huge potential for new applications. One of these applications is Feature Flags.&lt;/p&gt;

&lt;h4&gt;What are Feature Flags?&lt;/h4&gt;

&lt;p&gt;Feature Flags (or feature toggles) is a software engineering technique to hide, enable or disable features of an application at runtime. It is also used to provide runtime configuration to applications, making them environment-agnostic.&lt;/p&gt;

&lt;p&gt;Disabling, enabling or enhancing features is achieved by toggling a boolean flag, &lt;strong&gt;false&lt;/strong&gt; by default. Once we want to enable a feature we don’t need to go back to the code, change the logic and redeploy our application. These decisions are centralized, with all the management overhead already taken care of for you.&lt;/p&gt;

&lt;p&gt;We can also progressively enable certain features for a percentage of users or directly allow some users to have the feature enabled via an override.&lt;/p&gt;

&lt;p&gt;Feature Flags are also a good way to abstract runtime configuration: anything an application needs to work, such as API URLs, client IDs and integration IDs (maps keys, logging keys, etc.). Note that these are not secret keys, since everything on the client is by definition public. With runtime configuration set dynamically through an external Feature Flags service, your application builds are totally environment-agnostic. This means you only need to build your application once and then deploy the same build artifacts to any environment, instead of building the application separately for each environment.&lt;/p&gt;
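&lt;p&gt;As a minimal sketch of the idea (the endpoint shape, defaults and flag names here are illustrative, not a specific SDK): at startup the application fetches its configuration from a flags endpoint and merges it over safe defaults, instead of baking the values into the build.&lt;/p&gt;

```typescript
// Sketch only: resolve runtime configuration from a flags service
// instead of baking environment values into the build.
// The resolver URL, defaults and flag names are hypothetical.
interface AppConfig {
  apiUrl: string;
  mapsKey: string;
  maintenance: boolean;
}

const DEFAULTS: AppConfig = {
  apiUrl: 'https://api.example.com',
  mapsKey: '',
  maintenance: false,
};

// Merge whatever flags the service returns over the defaults,
// so a missing flag never breaks the application.
function applyFlags(defaults: AppConfig, flags: Partial<AppConfig>): AppConfig {
  return { ...defaults, ...flags };
}

// The same build artifact works in any environment: only the
// resolved flags differ per deployment.
async function loadConfig(resolverUrl: string): Promise<AppConfig> {
  const res = await fetch(resolverUrl);
  const { flags } = await res.json();
  return applyFlags(DEFAULTS, flags);
}
```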

&lt;h4&gt;Commercial solutions&lt;/h4&gt;

&lt;p&gt;There are several commercial solutions that provide great features and support. One problem common to all of them is the pricing model, which ranges from expensive to limiting in terms of monthly active users.&lt;/p&gt;

&lt;p&gt;This limits the adoption of Feature Flags on smaller projects, projects where cost is a factor, or projects whose usage surpasses the limits imposed by the available subscription plans.&lt;/p&gt;

&lt;p&gt;For these reasons we built TwoFlags, an open-source Feature Flags service built on top of Cloudflare Workers and Workers KV.&lt;/p&gt;

&lt;h4&gt;TwoFlags&lt;/h4&gt;

&lt;p&gt;The TwoFlags name is inspired by the colloquial phrase “just a couple of…”, in this case “just a couple of flags”, which is a fun way of saying you don’t need more than a couple of flags. So: two flags.&lt;/p&gt;

&lt;p&gt;The code can be found here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/twoflags-io"&gt;TwoFlags&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API and Resolver services, written in TypeScript, with install instructions: &lt;a href="https://github.com/twoflags-io/twoflags-api"&gt;https://github.com/twoflags-io/twoflags-api&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;OpenAPI 3.0 spec for the API and Resolver: &lt;a href="https://twoflags-io.github.io/twoflags/"&gt;https://twoflags-io.github.io/twoflags/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;React SDK, written in TypeScript, with documentation: &lt;a href="https://github.com/twoflags-io/react-featureflags"&gt;https://github.com/twoflags-io/react-featureflags&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To deploy your own instance of the TwoFlags API service you will need a Cloudflare (&lt;a href="https://www.cloudflare.com/"&gt;https://www.cloudflare.com/&lt;/a&gt;) account with access to the Workers service (which costs $5/month). Follow the instructions on the API repository.&lt;/p&gt;

&lt;p&gt;Once you have your TwoFlags service in place, let’s create the Flags for our demo.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you need to test it first, send us a message; we can create you a trial account or a demo environment we have for development purposes.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;Creating Environments, Namespaces and Flags&lt;/h4&gt;

&lt;p&gt;TwoFlags has support for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Environments: the top-level organization element. Each Environment contains an id, a name and an origins array. The origins are the set of domains where your application will be deployed; this helps the Flag Resolver identify the environment without the application explicitly doing it (an important feature for environment-agnostic frontend builds).&lt;/li&gt;
&lt;li&gt;Namespaces: the concept that maps to your applications. For each application in your company you will need a Namespace.&lt;/li&gt;
&lt;li&gt;Flags: the actual data toggles. You can enable/disable them, and each has a type: string, boolean, number or segment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Assuming you have a TwoFlags service running:&lt;/p&gt;

&lt;p&gt;Create an Environment: id: local, name: Local, origins: localhost:3000&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
-X POST https://api.twoflags.io/environments \
-d '{"id":"local","name":"Local","origins":["localhost:3000"]}'

$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
https://api.twoflags.io/environments

{"data":[{"id":"local","name":"Local","origins":["localhost:3000"]}],"slots":4}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Create a Namespace: id: frontend, name: Frontend&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
-X POST https://api.twoflags.io/namespaces \
-d '{"id":"frontend","name":"Frontend"}'

$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
https://api.twoflags.io/namespaces

{"data":[{"id":"frontend","name":"Frontend"}],"slots":19}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Create a couple of flags (pun intended):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;maintenance: type: boolean&lt;/li&gt;
&lt;li&gt;color: type: string
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
-X POST https://api.twoflags.io/flags \
-d '{"id":"maintenance","name":"Maintenance", "type":"boolean", "active": true}'

$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
-X POST https://api.twoflags.io/flags \
-d '{"id":"color","name":"Background Color", "type":"string", "active": true}'

$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
https://api.twoflags.io/flags

{"data":[{"id":"maintenance","name":"Maintenance","type":"boolean","active":true},{"id":"color","name":"Background Color","type":"string","active":true}],"slots":98}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;With the Environment, Namespace and Flags in place we only need to set the values of each flag and query the Resolver.&lt;/p&gt;

&lt;p&gt;To query the Resolver we do:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -H 'origin: [http://localhost:3000/'](http://localhost:3000/') \
'https://resolver.twoflags.io?account=&amp;lt;account&amp;gt;&amp;amp;ns=frontend'

{"flags":{},"environment":"local","namespace":"frontend"}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;As you can see, our flags object is empty. This is because we haven’t set any of the values in this environment/namespace.&lt;/p&gt;

&lt;p&gt;Let’s set the values of the flags on the environment/namespace:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
-X PATCH https://api.twoflags.io/values \
-d '{"id":"maintenance","environment":"local", "namespace":"frontend", "value": false}'

$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
-X PATCH https://api.twoflags.io/values \
-d '{"id":"color","environment":"local", "namespace":"frontend", "value": "#007DFF"}'

$ curl -H 'origin: http://localhost:3000' \
'https://resolver.twoflags.io?account=&amp;lt;account&amp;gt;&amp;amp;ns=frontend'

{"flags":{"maintenance":false,"color":"#007DFF"},"environment":"local","namespace":"frontend"}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;As you can see the Resolver also returns the detected environment and the namespace.&lt;/p&gt;
&lt;h4&gt;React Integration&lt;/h4&gt;

&lt;p&gt;The Resolver’s REST endpoint is documented; you can use it to integrate the Feature Flags service into your frontend or backend applications (as long as the namespace doesn’t contain secret flags).&lt;/p&gt;

&lt;p&gt;For React, we already have an integration ready.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/twoflags-io/react-featureflags"&gt;twoflags-io/react-featureflags&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s use our existing flags in a frontend application. For this I will use a template Next.js app (written in TypeScript) with a couple of pages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/outsrc/template-frontend"&gt;outsrc/template-frontend&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it is cloned or used as a base template, we will need to install the React helper component.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone [https://github.com/outsrc/template-frontend](https://github.com/outsrc/template-frontend) demo
$ cd demo
$ yarn
$ yarn add @twoflags/react-featureflags
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;The next step is to create an _app.tsx component inside the src/pages folder. This component wraps all pages, so we can inject the Feature Flags provider there and make it available to the rest of the application.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;Notice we only need to specify the clientID, the featureFlagsAPI and the namespace. The environment will be resolved to local because we will be running our application on localhost:3000, and that’s the origin of the local environment.&lt;/p&gt;

&lt;p&gt;This is the feature that makes our application builds environment-agnostic. We only have to provide environments for all the origins where our application will be deployed.&lt;/p&gt;

&lt;p&gt;With this in place we can start querying for feature flags on our components.&lt;/p&gt;

&lt;p&gt;Time to make some changes to the application to support the feature flags we have in our namespace. First, maintenance, which indicates that our application is in maintenance mode. For this we will create a HOC to wrap our pages; if maintenance mode is enabled, an overlay message will be displayed.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This component (and its style) reads the Feature Flags object containing the maintenance flag, and renders an overlay over the whole app if it evaluates to true.&lt;/p&gt;

&lt;p&gt;To use it on our pages we can wrap them like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export default **_blockOnMaintenance_** (Index)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is the result.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/SDvYiwmdEkA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Un-setting the flag.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/lkfm9Ca0Q2I"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;As you can see, flag updates on the page are not real-time: the React integration constantly polls the Resolver at a pre-determined interval (the default is 30 seconds). However, when the page receives focus the flags are resolved at that moment.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ZM_kKljY5yk"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This works for all the feature flag types we define on our API. Boolean toggles make it easy to keep users from accessing certain parts of our application.&lt;/p&gt;

&lt;p&gt;String flags are mostly used to store integration keys for third-party services, like authentication (Auth0, Okta) or logging (Sentry, New Relic, Loggly). Never use them to store secrets: anything on the browser, or any client, is by definition insecure. This is true for TwoFlags and any other Feature Flags service.&lt;/p&gt;

&lt;h4&gt;Segments&lt;/h4&gt;

&lt;p&gt;Segments are a special flag type that takes a numeric value between 0 and 100 (a percentage) and resolves to only two values, ‘A’ or ‘B’, based on that threshold value and a correlation ID.&lt;/p&gt;

&lt;p&gt;A correlation ID is a hashed user ID. It guarantees that a user always sees the same value while the threshold value of the segment progresses towards 100. The hash avoids leaking the actual user identity to the service.&lt;/p&gt;

&lt;p&gt;This is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flag value = 0: Always A&lt;/li&gt;
&lt;li&gt;Flag value = 100: Always B&lt;/li&gt;
&lt;li&gt;No Correlation ID: Always A&lt;/li&gt;
&lt;li&gt;Flag value 0 &amp;lt; flag &amp;lt; 100 + Correlation ID: A (100-flag percent), B (flag percent) (Ex. flag=20 + Correlation ID, 80% of users get A, 20% of users get B)&lt;/li&gt;
&lt;/ul&gt;
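&lt;p&gt;The resolution rules above can be sketched in a few lines (illustrative only; this is not the actual TwoFlags hashing code):&lt;/p&gt;

```typescript
// Sketch of segment resolution. A correlation id is hashed to a stable
// bucket in [0, 100); users whose bucket falls below the flag's
// threshold get 'B', the rest get 'A'.
function bucket(correlationId: string): number {
  // Simple deterministic string hash (FNV-1a), reduced to 0..99.
  let h = 0x811c9dc5;
  for (let i = 0; i < correlationId.length; i++) {
    h ^= correlationId.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h % 100;
}

function resolveSegment(value: number, correlationId?: string): 'A' | 'B' {
  if (!correlationId || value <= 0) return 'A'; // no id, or 0% -> always A
  if (value >= 100) return 'B';                 // 100% -> always B
  return bucket(correlationId) < value ? 'B' : 'A';
}
```

Because the hash is deterministic, the same user keeps their variant as the threshold moves towards 100.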

&lt;p&gt;This is useful when you want to enable a feature for a percentage of your users and measure conversion on it.&lt;/p&gt;

&lt;p&gt;Let’s create a segment flag&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -H 'Authorization: Bearer &amp;lt;apikey&amp;gt;' \
-X POST https://api.twoflags.io/flags \
-d '{"id":"experiment001","name":"Experiment 001", "type":"segment", "active": true}'

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Setting the value to 50, so 50% of users will get A and 50% will get B&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -H ‘Authorization: Bearer &amp;lt;apikey&amp;gt;’ \
-X PATCH [https://api.twoflags.io/values](https://api.twoflags.io/values) \
-d ‘{“id”:”experiment001",”environment”:”local”, “namespace”:”frontend”, “value”: 50}’
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let’s change our application to use the experiment001 segment flag. First, a new helper component, Experiment, that renders one of two sub-components based on an experiment’s value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/media/3e29ccc0ba624e98a00abb91eaed9af4/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/3e29ccc0ba624e98a00abb91eaed9af4/href"&gt;https://medium.com/media/3e29ccc0ba624e98a00abb91eaed9af4/href&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will be dynamically changing a fictitious user ID; for that we need a unique ID generator function. One of my favorites is cuid (&lt;a href="https://www.npmjs.com/package/cuid"&gt;https://www.npmjs.com/package/cuid&lt;/a&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ yarn add cuid
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And the changes to our index.tsx page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/media/f345becfafc6c25e547da4f4868bc599/href"&gt;&lt;/a&gt;&lt;a href="https://medium.com/media/f345becfafc6c25e547da4f4868bc599/href"&gt;https://medium.com/media/f345becfafc6c25e547da4f4868bc599/href&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the Experiment component tied to the experiment001 flag we just created.&lt;/p&gt;

&lt;p&gt;The button at the bottom creates a new random UID, stores it, displays it and passes it to the uniqueIDUpdater function. This function triggers a feature flags update, so if the new user ID falls into group B we will see a different message on the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6vWWBMN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2MbJ85kkKrIh3_L3oE_Iig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6vWWBMN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2MbJ85kkKrIh3_L3oE_Iig.png" alt=""&gt;&lt;/a&gt;This user is seeing variant A&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JO4mE6AL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AMZQjYP45wHxq5cHeZ5Y8Hw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JO4mE6AL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AMZQjYP45wHxq5cHeZ5Y8Hw.png" alt=""&gt;&lt;/a&gt;This user is seeing variant B&lt;/p&gt;

&lt;p&gt;Once you decide, based on log data, which version you want to keep, you can remove the experiment001 flag and the related code from the page, keeping the winning variant. Overriding variants for specific users is on the roadmap.&lt;/p&gt;

&lt;p&gt;That’s it. We wrote TwoFlags to learn about Cloudflare Workers and Workers KV, taking our own needs into account. We plan to keep improving the code and adding more features. If you want to contribute, let us know; PRs are welcome.&lt;/p&gt;

</description>
      <category>react</category>
      <category>opensource</category>
      <category>programming</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Kubernetes rolling updates, rollbacks and multi-environments</title>
      <dc:creator>Ernesto Freyre</dc:creator>
      <pubDate>Sun, 15 Dec 2019 20:50:57 +0000</pubDate>
      <link>https://forem.com/efreyreg/kubernetes-rolling-updates-rollbacks-and-multi-environments-5657</link>
      <guid>https://forem.com/efreyreg/kubernetes-rolling-updates-rollbacks-and-multi-environments-5657</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CRwSxy5w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AT2f2yMlT14HkVI-GfK7fPw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CRwSxy5w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AT2f2yMlT14HkVI-GfK7fPw.jpeg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://www.pexels.com/@tomfisk?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Tom Fisk &lt;/a&gt;from &lt;a href="https://www.pexels.com/photo/birds-eye-view-photo-of-freight-containers-2226458/?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a previous post (&lt;a href="https://dev.to/efreyreg/deploy-an-app-on-kubernetes-gke-with-kong-ingress-letsencrypt-and-cloudflare-3bk8"&gt;https://itnext.io/deploy-an-app-on-kubernetes-gke-with-kong-ingress-letsencrypt-and-cloudflare-94913e127c2b&lt;/a&gt;) we learned how to deploy an application with two micro-services (frontend and backend) to Kubernetes, using Kong Ingress, LetsEncrypt for TLS certificates, and Cloudflare for proxying and extra security.&lt;/p&gt;

&lt;p&gt;In this post we want to make some updates to our deployed application, roll them back in case of errors and, last but not least, use multiple environments so we can test our application before deploying to production.&lt;/p&gt;

&lt;p&gt;First I will re-deploy my original application. (I always delete unused applications; no need to spend money on hosting them.)&lt;/p&gt;

&lt;p&gt;One nifty feature of kubectl is that you can concatenate all resource files and apply them in bulk, so I combined everything into a single outsrc.yml file.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;To apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f outsrc.yml
namespace/outsrc created
ingress.extensions/outsrc-dev-ingress created
service/service-frontend created
deployment.apps/deployment-frontend created
service/service-backend created
deployment.apps/deployment-backend created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;And after a couple of seconds (container running, TLS certs issued) we have the application back online:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xd3k0Zw2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AcSS5v0MN-kZVi8BPxMQMvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xd3k0Zw2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AcSS5v0MN-kZVi8BPxMQMvg.png" alt=""&gt;&lt;/a&gt;outsrc.dev deployed to Kubernetes&lt;/p&gt;

&lt;p&gt;So far all good. Now we need to make some changes to the application. We will add a page containing the US map and link to it from both the main page and the state page. After adding the code for this feature, we dockerize it:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker build -t outsrc-demo-front .
...

$ docker tag outsrc-demo-front:latest gcr.io/outsrc/outsrc-demo-front: **1.1.0**

$ docker push gcr.io/outsrc/outsrc-demo-front:**1.1.0  
...**
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Notice the version is different; we will use Docker image tags to do a rolling update.&lt;/p&gt;
&lt;h4&gt;Rolling updates&lt;/h4&gt;

&lt;p&gt;Every time we want to update the application deployed on our Kubernetes cluster, we change our deployment resource files and apply them. Each time a change is detected, a rolling update is performed.&lt;/p&gt;

&lt;p&gt;To avoid downtime, Kubernetes updates each replica of our running container one by one, re-routing the services on top.&lt;/p&gt;

&lt;p&gt;To check updates history:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl rollout history deployment/deployment-frontend
deployment.extensions/deployment-frontend
REVISION CHANGE-CAUSE
1 &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Lets update the frontend container image tag:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl set image deployment/deployment-frontend frontend-container=gcr.io/outsrc/outsrc-demo-front: **1.1.0**
deployment.extensions/deployment-frontend image updated
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;We could also modify the deployment resource file, change the image tag and apply it via kubectl. This is my preferred way to handle updates, since it keeps the source of truth in the resource descriptor files.&lt;/p&gt;

&lt;p&gt;Watch the update:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl rollout status -w deployment/deployment-frontend

Waiting for deployment "deployment-frontend" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "deployment-frontend" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "deployment-frontend" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "deployment-frontend" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "deployment-frontend" rollout to finish: 1 old replicas are pending termination...
deployment "deployment-frontend" successfully rolled out
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Now the deployment’s rollout history:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl rollout history deployment/deployment-frontend
deployment.extensions/deployment-frontend
REVISION CHANGE-CAUSE
1 &amp;lt;none&amp;gt;
2 &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;And our application has a US Map page now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t3973gRZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AfG5G3ZZ_h8cCvS_WWsFFuQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t3973gRZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AfG5G3ZZ_h8cCvS_WWsFFuQ.png" alt=""&gt;&lt;/a&gt;New map page deployed&lt;/p&gt;

&lt;p&gt;Oh no, we have a bug! Rollback…&lt;/p&gt;

&lt;p&gt;A user found a bug in our newly deployed version of the application. We need to roll back to the previous known-working version.&lt;/p&gt;

&lt;p&gt;Let’s roll it back:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl rollout undo deployment/deployment-frontend
deployment.extensions/deployment-frontend rolled back

$ kubectl rollout status -w deployment/deployment-frontend
Waiting for deployment "deployment-frontend" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "deployment-frontend" rollout to finish: 1 old replicas are pending termination...
deployment "deployment-frontend" successfully rolled out
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Our application was reverted to the previous version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bo97QeO9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALZrf_CzWCquo1xMGfFTVXg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bo97QeO9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALZrf_CzWCquo1xMGfFTVXg.png" alt=""&gt;&lt;/a&gt;Application version 1.0.0&lt;/p&gt;

&lt;p&gt;Notice that in the history the revision numbers keep going up:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl rollout history deployment/deployment-frontend
deployment.extensions/deployment-frontend
REVISION CHANGE-CAUSE
2 &amp;lt;none&amp;gt;
3 &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;We can always revert to any revision. In this case revision #2 is our map revision; let’s bring it back.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl rollout undo deployment/deployment-frontend --to-revision=2
deployment.extensions/deployment-frontend rolled back

$ kubectl rollout status -w deployment/deployment-frontend
Waiting for deployment "deployment-frontend" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "deployment-frontend" rollout to finish: 1 old replicas are pending termination...
deployment "deployment-frontend" successfully rolled out
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ON1dKgR1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A0ablqsFTm63CMYgzuArCPQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ON1dKgR1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A0ablqsFTm63CMYgzuArCPQ.png" alt=""&gt;&lt;/a&gt;Back to version 1.1.0 (with US Map)&lt;/p&gt;

&lt;p&gt;So far so good. Now, this going back and forth on a live website between features and bugs is not a good thing. Our users will feel frustrated if we roll out a feature just to find it has bugs and then roll it back. That’s one of the reasons we have different &lt;strong&gt;deployment environments&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It is very common to find this set of environments:&lt;/p&gt;

&lt;p&gt;master | staging | production&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;master&lt;/strong&gt;: or development; usually the most up-to-date version, matching the master branch of the repo.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;staging&lt;/strong&gt;: closest to production; usually where the last QA pass is performed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;production&lt;/strong&gt;: what your users see.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this setting, any new feature or bugfix goes from master to staging and then to production.&lt;/p&gt;

&lt;p&gt;For our US States application, let’s create one more environment: &lt;strong&gt;development&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;First we need to select a subdomain. For my application I will choose dev.outsrc.dev (master.outsrc.dev is fine too).&lt;/p&gt;

&lt;p&gt;Next, DNS: let’s create a DNS record pointing our subdomain to the cluster proxy IP. (Remember, our DNS from the previous post is hosted on Cloudflare.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cc0vqyZC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AFHmaA6fzgKHi7GnRInmjrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cc0vqyZC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AFHmaA6fzgKHi7GnRInmjrg.png" alt=""&gt;&lt;/a&gt;dev.outsrc.dev -&amp;gt; CNAME proxy.outsrc.dev&lt;/p&gt;

&lt;p&gt;After this, a DNS lookup of dev.outsrc.dev will return:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dig dev.outsrc.dev
; \&amp;lt;\&amp;lt;\&amp;gt;\&amp;gt; DiG 9.10.6 \&amp;lt;\&amp;lt;\&amp;gt;\&amp;gt; dev.outsrc.dev
;; global options: +cmd
;; Got answer:
;; -\&amp;gt;\&amp;gt;HEADER\&amp;lt;\&amp;lt;- opcode: QUERY, status: NOERROR, id: 38625
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;dev.outsrc.dev. IN A

;; ANSWER SECTION:
dev.outsrc.dev. 300 IN CNAME proxy.outsrc.dev.
proxy.outsrc.dev. 300 IN A 35.209.76.89

;; Query time: 66 msec
;; SERVER: 192.168.1.254#53(192.168.1.254)
;; WHEN: Sun Dec 15 14:48:42 EST 2019
;; MSG SIZE rcvd: 79
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;dev.outsrc.dev points right at our cluster.&lt;/p&gt;

&lt;p&gt;Now let’s create a different set of resource files, in a different namespace: outsrc-dev&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
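&lt;p&gt;The gist embed did not survive in this feed, so here is a minimal sketch of what outsrc-dev.yml could contain. The resource names match the kubectl output below, but the image, labels and ports are assumptions, and the backend resources (same shape as the frontend ones) are elided:&lt;/p&gt;

```yaml
# Sketch of outsrc-dev.yml -- image, labels and ports are assumptions
apiVersion: v1
kind: Namespace
metadata:
  name: outsrc-dev
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-outsrc-dev-ingress
  namespace: outsrc-dev
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - host: dev.outsrc.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: service-frontend
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-frontend
  namespace: outsrc-dev
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-frontend
  namespace: outsrc-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: outsrc/us-states-frontend:latest   # assumed image name
        ports:
        - containerPort: 3000
```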



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f outsrc-dev.yml
namespace/outsrc-dev created
ingress.extensions/dev-outsrc-dev-ingress created
service/service-frontend created
deployment.apps/deployment-frontend created
service/service-backend created
deployment.apps/deployment-backend created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Ready! We can now access &lt;a href="http://dev.outsrc.dev"&gt;http://dev.outsrc.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2QLDtfTj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2Hq44RQeyBjjiQh5VxEhGg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2QLDtfTj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2Hq44RQeyBjjiQh5VxEhGg.png" alt=""&gt;&lt;/a&gt;dev.outsrc.dev&lt;/p&gt;

&lt;p&gt;Now, this is a subdomain we don’t want everybody to be able to access. It should be restricted to the internal development team, product managers, QA teams, designers, test engineers, etc.&lt;/p&gt;

&lt;p&gt;We need to limit who can access this subdomain. There are several ways to achieve this. One solution is using Kong plugins (&lt;a href="https://docs.konghq.com/hub/"&gt;https://docs.konghq.com/hub/&lt;/a&gt;), more specifically the IP Restriction plugin (&lt;a href="https://docs.konghq.com/hub/kong-inc/ip-restriction/"&gt;https://docs.konghq.com/hub/kong-inc/ip-restriction/&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;So now you know why I used Kong Ingress on the previous post.&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Restrict Access by IP with Kong Plugin
&lt;/h4&gt;

&lt;p&gt;First, let’s create a plugin resource descriptor:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
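&lt;p&gt;This gist also did not survive in the feed. A sketch of what ip-restrict.yml could look like, matching the kubectl output below (the CIDR range is a placeholder assumption; note that Kong versions of that era used the whitelist config key, while newer ones use allow):&lt;/p&gt;

```yaml
# Sketch of ip-restrict.yml -- the CIDR range is a placeholder
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: ip-restriction
  namespace: outsrc-dev
plugin: ip-restriction
config:
  whitelist:
  - 203.0.113.0/24   # replace with your office/VPN range
```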



&lt;p&gt;Apply it first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f ip-restrict.yml
kongplugin.configuration.konghq.com/ip-restriction created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After this we can modify our Ingress resource file to signal that all routes should be IP restricted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
 name: dev-outsrc-dev-ingress
 namespace: outsrc-dev
 annotations:
 kubernetes.io/ingress.class: kong
 kubernetes.io/tls-acme: 'true'
 cert-manager.io/cluster-issuer: letsencrypt-production
**plugins.konghq.com: ip-restriction**
spec:
 tls:
 - secretName: dev-outsrc-dev-tls
 hosts:
 - dev.outsrc.dev
 rules:
 - host: dev.outsrc.dev
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And update it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f outsrc-dev.yml
namespace/outsrc-dev unchanged
ingress.extensions/dev-outsrc-dev-ingress configured
service/service-frontend unchanged
deployment.apps/deployment-frontend unchanged
service/service-backend unchanged
deployment.apps/deployment-backend unchanged
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now if you try to access it from an IP address that is not whitelisted, you will be greeted by:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y-bZKdAI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AWg4-1iO4Ll4QyCkDRy3OxQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y-bZKdAI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AWg4-1iO4Ll4QyCkDRy3OxQ.png" alt=""&gt;&lt;/a&gt;Denied!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: One nice thing about this IP restriction plugin is you can whitelist IP addresses and update only the plugin resource. The Ingress will use the updated list.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Rolling updates require only a change to the deployment container.&lt;/li&gt;
&lt;li&gt;Rollbacks are actually quite fast.&lt;/li&gt;
&lt;li&gt;We can roll back to any previously deployed version. (Still not sure what the limits are here.)&lt;/li&gt;
&lt;li&gt;To avoid going back and forth in production, use deployment environments.&lt;/li&gt;
&lt;li&gt;Use different namespaces for your different environments if deploying on the same cluster.&lt;/li&gt;
&lt;li&gt;Optimally, create two clusters, separating the Production environment from Staging and Development.&lt;/li&gt;
&lt;li&gt;Check out Kong plugins; some of them are really nice, or you can write your own. (Ready to learn Lua? Or send me a message, I might help you write it.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy hacking…&lt;/p&gt;

</description>
      <category>kong</category>
      <category>googlecloudplatfor</category>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Make your web application graciously survive a backend general failure.</title>
      <dc:creator>Ernesto Freyre</dc:creator>
      <pubDate>Fri, 06 Dec 2019 16:08:58 +0000</pubDate>
      <link>https://forem.com/efreyreg/make-your-web-application-graciously-survive-a-backend-general-failure-30bc</link>
      <guid>https://forem.com/efreyreg/make-your-web-application-graciously-survive-a-backend-general-failure-30bc</guid>
      <description>&lt;p&gt;&lt;em&gt;(This is not a coding post)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VGLhLYZu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AbJWkXu7X6m52jAb2UXyfIg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VGLhLYZu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AbJWkXu7X6m52jAb2UXyfIg.jpeg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://www.pexels.com/@george-desipris?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;GEORGE DESIPRIS &lt;/a&gt;from &lt;a href="https://www.pexels.com/photo/big-waves-under-cloudy-sky-753619/?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Software engineers’ wishful thinking is: “My application will never fail”, but in the meantime they prepare for the worst. All engineers try to make their applications resilient; good engineers know their application will eventually fail, and prepare for that.&lt;/p&gt;

&lt;p&gt;Frontend deployment has evolved a lot in the last 5 years. JAMstack (&lt;a href="https://jamstack.org/"&gt;https://jamstack.org/&lt;/a&gt;) is gaining adoption across the board: marketing websites, e-commerce, enterprise and consumer applications. The main benefits include better performance, security, cheaper and easier scaling, and an improved developer experience. Serving from an edge CDN gives these web applications some characteristics of mobile applications, for example no need for servers: most of the time, clients (browsers) accessing the application get static cached assets served at the edge by CDNs, which makes startup time really quick. Deploy cycles involve busting the CDN caches so a new version gets served to the user.&lt;/p&gt;

&lt;p&gt;In this scheme, backend API services are called to perform all operations: querying data, processing input data (forms), workflows, etc. But what happens when the backend API is down? Our users still have the application interface, but the backend is not responding. In these cases, most applications have ways to let users know something is not working as expected (the same way mobile applications do).&lt;/p&gt;

&lt;h4&gt;
  
  
  Surviving a backend failure at startup
&lt;/h4&gt;

&lt;p&gt;Frontend applications that are served as static assets are cached in the user’s browser and in the edge CDN, so there is no actual request to the company’s infrastructure to retrieve them unless there is an update and the caches are busted; then the next request gets redirected to origin so the new assets can be cached.&lt;/p&gt;

&lt;p&gt;Since frontend application delivery is separated from the company’s infrastructure, there can be cases where a frontend application is running but the infrastructure is not (a backend API failure, or a more drastic general failure).&lt;/p&gt;

&lt;p&gt;In this case everything the user tries to do in the application will irremediably fail. If the backend API is not responding and the company anticipates that resolution will take some time, a more specialized solution is needed.&lt;/p&gt;

&lt;h4&gt;
  
  
  What can applications do during downtime?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The company can use a feature flag integration to activate a Maintenance Page (LaunchDarkly, Rollout).&lt;/li&gt;
&lt;li&gt;Maintenance Pages can connect users directly to the support team via communication integrations (Intercom, HelpCrunch, LiveChat, etc.)&lt;/li&gt;
&lt;li&gt;Log the errors users are experiencing (Sentry, NewRelic).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are common, sometimes obvious, solutions client applications use to make a general application failure less stressful for the end user.&lt;/p&gt;

&lt;p&gt;Example: while the company’s infrastructure is down, a user accesses the client application and is greeted by a maintenance page stating that the company is going through some issues and is working to solve them; in the meantime, customer support can address the user’s concerns directly via a live chat bubble on the maintenance page. All access logs and page metrics are still recorded as usual.&lt;/p&gt;

&lt;p&gt;This example describes a better experience: certainly better than a cryptic error page, a faulty application that does nothing, or simply an empty white page. Depending on your company’s business, those could represent an irremediable loss of clients, trust, and resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  What else?
&lt;/h4&gt;

&lt;p&gt;Depending on the type of business, other solutions can be implemented that further improve customer satisfaction and display attention to detail and service.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A low-volume, high-cost sales service could provide, if possible, an offline sales experience with a higher human touch. (Insurance is a good example: while backend systems are down, clients can still submit an insurance application form via email or live chat to be processed later.) That is, instead of the regular sales process, a simplified sales form is displayed and the gathered data is sent via the live chat integration. Customer support will follow up with the client once systems are back online. (This works because the form has no backend API dependencies.)&lt;/li&gt;
&lt;li&gt;Showcase your products or publish guides on how to use them, acting as a marketing website in the meantime.&lt;/li&gt;
&lt;li&gt;A resource management system can let clients create and locally (in the browser) store simplified versions of their resources; once the system is back online, those locally stored resources get converted to regular resources and persisted in the central database. Example: a task management application provides a form to create a Task, stored locally in the browser, deferring its storage and processing until the system is back online. The user can still perform some work that will be completed once the system recovers.&lt;/li&gt;
&lt;/ul&gt;
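&lt;p&gt;The last idea can be sketched as a tiny client-side queue. Everything below is illustrative (the function names, storage key and API are my assumptions, not from any real library): tasks are stored locally while the backend is down and flushed once it recovers.&lt;/p&gt;

```javascript
// Sketch: defer work while the backend is offline.
// `storage` is any object with a localStorage-like getItem/setItem API.
function createOfflineQueue (storage, key = 'pending-tasks') {
  const load = () => JSON.parse(storage.getItem(key) || '[]')
  return {
    // Called while the backend is down: remember the task locally.
    enqueue (task) {
      storage.setItem(key, JSON.stringify([...load(), task]))
    },
    // Called once the backend recovers: hand every deferred task to
    // `persist` (e.g. a POST to the backend API) and clear the local copy.
    async flush (persist) {
      const tasks = load()
      for (const task of tasks) await persist(task)
      storage.setItem(key, '[]')
      return tasks.length
    }
  }
}

module.exports = { createOfflineQueue }
```

In the browser this would be backed by window.localStorage; the shim-friendly signature just makes the logic easy to test.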

&lt;h4&gt;
  
  
  Conclusions
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;While joyfully coding your application, prepare for the worst.&lt;/li&gt;
&lt;li&gt;Evaluate how your application might fail, and provide a better experience if that happens.&lt;/li&gt;
&lt;li&gt;Leverage third-party services that can improve the end user’s experience during a failure, especially around communication.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy hacking…&lt;/p&gt;

</description>
      <category>failure</category>
      <category>frontenddev</category>
      <category>webdev</category>
      <category>jamstack</category>
    </item>
    <item>
      <title>Deploy your NextJS Application on a different base path (i.e. not root)</title>
      <dc:creator>Ernesto Freyre</dc:creator>
      <pubDate>Thu, 05 Dec 2019 17:13:25 +0000</pubDate>
      <link>https://forem.com/efreyreg/deploy-your-nextjs-application-on-a-different-base-path-i-e-not-root-3lme</link>
      <guid>https://forem.com/efreyreg/deploy-your-nextjs-application-on-a-different-base-path-i-e-not-root-3lme</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AzAiDd5TnE8E1fBbd6HPD2g.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AzAiDd5TnE8E1fBbd6HPD2g.jpeg"&gt;&lt;/a&gt;Photo by &lt;a href="https://www.pexels.com/@skitterphoto?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels" rel="noopener noreferrer"&gt;Skitterphoto &lt;/a&gt;from &lt;a href="https://www.pexels.com/photo/architectural-design-architecture-brick-wall-bricks-422844/?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels" rel="noopener noreferrer"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of NextJS’s default assumptions is that we will deploy our applications on the root base path of a domain, that is, / . NextJS routing converts each file inside the pages folder to a matching path: a file named ./pages/index.js matches / , and a file named ./pages/about.js will be accessible at /about . This is a pretty simple scheme; it is basically how hyperlinks work. All you have to do to link both pages is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import Link from 'next/link'

const Index = () =&amp;gt; (
  &amp;lt;div&amp;gt;
    ...
    &amp;lt;Link href='/about'&amp;gt;&amp;lt;a&amp;gt;About Us&amp;lt;/a&amp;gt;&amp;lt;/Link&amp;gt;
    ...
  &amp;lt;/div&amp;gt;
)

export default Index


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;For most applications this works right out of the box. Now, some applications do have the requirement to be served under a base path different from / . Usually this is due to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application segmentation: several teams might be responsible for different parts of the application. Example: one team is responsible for the Dashboard (served at /dashboard) while another team owns the Sales process (served at /sales).&lt;/li&gt;
&lt;li&gt;Internationalization: an application’s default language is English; while moving to a new market, the team decided to add support for Spanish. Translations were added and the Spanish version is deployed under the /es base path, so Spanish-speaking users are redirected to /es/dashboard and /es/sales.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NextJS Official documentation includes a section for Multi-Zones (&lt;a href="https://nextjs.org/docs#multi-zones" rel="noopener noreferrer"&gt;https://nextjs.org/docs#multi-zones&lt;/a&gt;) this is the feature that allows NextJS applications to be served under a different base path. The most important part of this feature is using the assetPrefix setting on the next.config.js file.&lt;/p&gt;

&lt;p&gt;The examples for multi-zone listed in the documentation all use Zeit’s Now cloud (&lt;a href="https://zeit.co/home" rel="noopener noreferrer"&gt;https://zeit.co/home&lt;/a&gt;). But this is not a Now cloud exclusive feature. (perhaps this is not clear in the documentation)&lt;/p&gt;

&lt;p&gt;To deploy a NextJS application under a different base path we need a reverse proxy that maps whatever path we decide to serve our application on to the correct URL. Of course, having a reverse proxy for local development is not ideal, but for demonstration purposes we will use NGINX to implement the two use cases described above.&lt;/p&gt;

&lt;p&gt;According to the documentation and the examples, to run our application on a different base path we need to set the assetPrefix setting &lt;strong&gt;AND&lt;/strong&gt; use the same base path in each Link’s as parameter. Since we don’t want to rewrite the same code for every link, let’s abstract that behavior into a custom Link component:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
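&lt;p&gt;The gist embed is missing from this feed, so here is a sketch of the core of such a component (the names are illustrative, not from the original gist). The component itself would wrap next/link, read basePath from publicRuntimeConfig via getConfig() from next/config, and pass the prefixed path as the Link’s as prop so the browser URL carries the prefix:&lt;/p&gt;

```javascript
// Sketch of the prefixing logic behind a custom Link component.
// withBasePath is a hypothetical helper name, extracted here so the pure
// logic is easy to reuse and test; the JSX wrapper around next/link would
// simply call it: as={withBasePath(basePath, as || href)}
function withBasePath (basePath, path) {
  // Treat a missing/empty basePath as the root deployment.
  return (basePath || '') + path
}

module.exports = { withBasePath }
```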



&lt;p&gt;In the Application’s next.config.js file, add this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

module.exports = {
  assetPrefix: process.env.BASE_PATH || '',
  ...
  publicRuntimeConfig: {
    ...
    basePath: process.env.BASE_PATH || '',
    ...
  },
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To run our Application on a different base path we do:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ BASE_PATH=/sales yarn dev
[wait] starting the development server ...
[info] waiting on http://localhost:3000 ...
...


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This also works for static exports or production builds:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ yarn build

# Production build (SSR)
$ BASE_PATH=/sales yarn start

# Static export
$ BASE_PATH=/sales yarn export
$ cd out
$ ws -p 3000


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we do this on development and try to access &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; our application won’t completely work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Av7LH368-SZvrp0_icsxuZw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2Av7LH368-SZvrp0_icsxuZw.png"&gt;&lt;/a&gt;All resources failing to resolve while accessing &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All application’s resources (JS, CSS, Images) will be prefixed with the /sales base path. Without a reverse proxy to do the right mapping it won’t work.&lt;/p&gt;
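&lt;p&gt;To make the mapping concrete: a hand-written NGINX configuration for the /sales case could look roughly like this (a sketch only; the post sets this up through Kong instead):&lt;/p&gt;

```nginx
# Sketch: route outsrc.local/sales to the NextJS dev server
server {
  listen 80;
  server_name outsrc.local;

  location /sales/ {
    # the trailing slash on proxy_pass strips the matched /sales prefix
    proxy_pass http://172.20.10.2:3000/;
  }
}
```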

&lt;h4&gt;
  
  
  Installing and Configuring a local NGINX Reverse Proxy.
&lt;/h4&gt;

&lt;p&gt;There are several ways you can locally install and configure an NGINX reverse proxy. My preferred way is to use Kong (&lt;a href="https://konghq.com/" rel="noopener noreferrer"&gt;https://konghq.com/&lt;/a&gt;) via an NPM package I put together to manage it from the CLI: &lt;a href="https://www.npmjs.com/package/dev-kong" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/dev-kong&lt;/a&gt;. (The only dependency is having Docker installed locally, since this package uses it to run a dockerized Kong instance.)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ npm install -g dev-kong
$ kong --version
0.8.2

$ kong start
Starting Kong
Creating network "t_default" with the default driver

Creating t_kong-database_1 ...
Creating t_kong-database_1 ... done

Creating t_kong_1 ...
Creating t_kong_1 ... done


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once started we have a local NGINX reverse proxy we can control with a CLI.&lt;/p&gt;

&lt;p&gt;Accessing localhost on the browser will give you:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AIi0Ck0M4M_wfl2pCrGaVUw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AIi0Ck0M4M_wfl2pCrGaVUw.png"&gt;&lt;/a&gt;Nothing configured yet.&lt;/p&gt;

&lt;p&gt;We also need a fake or local domain that resolves to the &lt;strong&gt;loopback IP&lt;/strong&gt; address (usually 127.0.0.1). The simplest way to do this is to add the domain (I picked outsrc.local for my tests) to the /etc/hosts file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ sudo sh -c 'echo "127.0.0.1 outsrc.local" &amp;gt;&amp;gt; /etc/hosts'

# Check it
$ cat /etc/hosts
...
...
...
127.0.0.1 outsrc.local


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And finally the mapping on NGINX:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# First get local network IP address (Mac OS only)
$ ipconfig getifaddr en0
172.20.10.2

$ kong add --stripuri sales outsrc.local http://172.20.10.2:3000 /sales
┌──────────────────────────┬──────────────────────────────────────┐
│ http_if_terminated │ true │
├──────────────────────────┼──────────────────────────────────────┤
│ id │ 775a9dc2-4b86-4258-82c8-4f2913f5a219 │
├──────────────────────────┼──────────────────────────────────────┤
│ retries │ 5 │
├──────────────────────────┼──────────────────────────────────────┤
│ preserve_host │ false │
├──────────────────────────┼──────────────────────────────────────┤
│ created_at │ 1575559214000 │
├──────────────────────────┼──────────────────────────────────────┤
│ upstream_connect_timeout │ 60000 │
├──────────────────────────┼──────────────────────────────────────┤
│ upstream_url │ http://172.20.10.2:3000 │
├──────────────────────────┼──────────────────────────────────────┤
│ upstream_read_timeout │ 60000 │
├──────────────────────────┼──────────────────────────────────────┤
│ upstream_send_timeout │ 60000 │
├──────────────────────────┼──────────────────────────────────────┤
│ https_only │ false │
├──────────────────────────┼──────────────────────────────────────┤
│ strip_uri │ true │
├──────────────────────────┼──────────────────────────────────────┤
│ uris │ /sales │
├──────────────────────────┼──────────────────────────────────────┤
│ name │ sales │
├──────────────────────────┼──────────────────────────────────────┤
│ hosts │ outsrc.local │
└──────────────────────────┴──────────────────────────────────────┘


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Show mapped paths:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kong list


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ArYZys9R5Y-Be6M_ZZJA4Lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ArYZys9R5Y-Be6M_ZZJA4Lw.png"&gt;&lt;/a&gt;kong list output&lt;/p&gt;

&lt;p&gt;The table above reads: one endpoint named sales; when accessing outsrc.local/sales, route it to &lt;a href="http://172.20.10.2:3000" rel="noopener noreferrer"&gt;http://172.20.10.2:3000&lt;/a&gt;, removing the /sales prefix from all requests.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(We need to use the local network IP because our NGINX instance runs inside a Docker container while our frontend application runs on the host.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Any number of path mappings can be added. Let’s add one for the dashboard application, which we will run on a different port:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ BASE_PATH=/dashboard yarn dev --port 3010
[wait] starting the development server ...
[info] waiting on http://localhost:3010 ...
...


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And the mapping:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kong add --stripuri dashboard outsrc.local http://172.20.10.2:3010 /dashboard
...


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running kong list again we get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ARWvL_AKN_jsVsF_blRPyyA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ARWvL_AKN_jsVsF_blRPyyA.png"&gt;&lt;/a&gt;sales and dashboard apps running on different ports and different base paths.&lt;/p&gt;

&lt;h4&gt;
  
  
  Demo time. Multiple Apps different base paths
&lt;/h4&gt;

&lt;p&gt;If you followed the previous steps, you already have a local domain pointing to 127.0.0.1 and NGINX installed and running. Now we need an application.&lt;/p&gt;

&lt;p&gt;Let’s clone a repo with an already prepared application, twice:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ git clone --branch efg/custom-name git@github.com:outsrc/template-frontend.git dashboard-app

$ git clone --branch efg/custom-name git@github.com:outsrc/template-frontend.git sales-app


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install dependencies with yarn install and run each application, specifying APP_NAME and BASE_PATH:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ APP_NAME=Dashboard BASE_PATH=/dashboard yarn dev --port 3010

$ APP_NAME=Sales BASE_PATH=/sales yarn dev --port 3000


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Our two mappings are the same so I won’t repeat them here.&lt;/p&gt;

&lt;p&gt;On the browser we get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AozQzzQGeNbjgQvDwPZrSkA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AozQzzQGeNbjgQvDwPZrSkA.png"&gt;&lt;/a&gt;Dashboard application on /dashboard&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A3WYNJO9fZpACWOc7Q_ybnQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A3WYNJO9fZpACWOc7Q_ybnQ.png"&gt;&lt;/a&gt;Sales application on /sales&lt;/p&gt;

&lt;p&gt;Done! We have two NextJS applications running side by side on the same domain, different base paths.&lt;/p&gt;

&lt;h4&gt;
  
  
  Demo time. Same application Spanish Translation
&lt;/h4&gt;

&lt;p&gt;First, let’s clear the current path mappings we have on NGINX:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kong delete sales
Deleted

$ kong delete dashboard
Deleted


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Clone the code branch with Internationalization and the Spanish translation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ git clone --branch efg/with-intl git@github.com:outsrc/template-frontend.git spanish-app
$ cd spanish-app
$ yarn install
...

$ LOCALE=es BASE_PATH=/es yarn dev --port 3010


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will start the Application with the Spanish localization on base path /es&lt;/p&gt;

&lt;p&gt;Mapping the path on NGINX:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ kong add --stripuri spanish outsrc.local http://172.20.10.2:3010 /es


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AGXwvK5nled3qhUU8OWbKAQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AGXwvK5nled3qhUU8OWbKAQ.png"&gt;&lt;/a&gt;Spanish translation mapped to /es&lt;/p&gt;

&lt;p&gt;We get this on the browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ApkfkR3pu2U6ZM_iEzorrqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2ApkfkR3pu2U6ZM_iEzorrqg.png"&gt;&lt;/a&gt;Our Spanish translated application served on /es&lt;/p&gt;

&lt;p&gt;I intentionally left out some important pieces of internationalization, like detecting the user’s browser preferences so we can redirect them to the right path.&lt;/p&gt;
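&lt;p&gt;As a hint of that missing piece, picking a base path from the browser’s preferred languages could be sketched like this (the function name and the supported-locale map are assumptions for illustration):&lt;/p&gt;

```javascript
// Sketch: map the browser's preferred languages (e.g. navigator.languages)
// to the base path of a localized deployment. Unsupported languages fall
// through to the default (English) app at the root path.
function basePathForLocales (languages, supported = { es: '/es' }) {
  for (const lang of languages) {
    // 'es-MX' and 'es' both resolve to the primary language code 'es'
    const code = lang.toLowerCase().split('-')[0]
    if (supported[code] !== undefined) return supported[code]
  }
  return '' // default locale lives at /
}

module.exports = { basePathForLocales }
```

On the server side, the same logic could run against the Accept-Language header before issuing a redirect.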

&lt;h4&gt;
  
  
  Conclusions.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;NextJS DOES support deploying applications on base paths other than the root.&lt;/li&gt;
&lt;li&gt;The key is the combination of assetPrefix and the Link as parameter.&lt;/li&gt;
&lt;li&gt;Deploying to a different base path is not a development-time task; it is an SRE task. Meaning, frontend developers should not focus too much on where applications get deployed (base path), only be ready to support it. Local development should always use the root path.&lt;/li&gt;
&lt;li&gt;It works for static exports.&lt;/li&gt;
&lt;li&gt;Prefer runtime configuration (&lt;a href="https://nextjs.org/docs#runtime-configuration" rel="noopener noreferrer"&gt;https://nextjs.org/docs#runtime-configuration&lt;/a&gt;) over build-time configuration (&lt;a href="https://nextjs.org/docs#build-time-configuration" rel="noopener noreferrer"&gt;https://nextjs.org/docs#build-time-configuration&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;If you really need to use NGINX locally, I recommend you use Kong (via the dev-kong NPM package).&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>nextjs</category>
      <category>nginx</category>
      <category>typescript</category>
      <category>react</category>
    </item>
    <item>
      <title>Deploy an App on Kubernetes (GKE) with  Kong Ingress, LetsEncrypt and Cloudflare.</title>
      <dc:creator>Ernesto Freyre</dc:creator>
      <pubDate>Sat, 30 Nov 2019 14:46:01 +0000</pubDate>
      <link>https://forem.com/efreyreg/deploy-an-app-on-kubernetes-gke-with-kong-ingress-letsencrypt-and-cloudflare-3bk8</link>
      <guid>https://forem.com/efreyreg/deploy-an-app-on-kubernetes-gke-with-kong-ingress-letsencrypt-and-cloudflare-3bk8</guid>
      <description>&lt;h3&gt;
  
  
  Deploy an App on Kubernetes (GKE) with Kong Ingress, LetsEncrypt and Cloudflare.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3PoAg5uC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AzlQFqDU5AOcY0Y9wtIQOEg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3PoAg5uC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AzlQFqDU5AOcY0Y9wtIQOEg.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have a small app (hopefully a couple of microservices, Frontend and Backend) I wouldn’t recommend using Kubernetes to deploy it. There are better fully managed alternatives out there. But…&lt;/p&gt;

&lt;p&gt;If you think you really need it, or you want to provide this kind of service to others, then this guide will serve as a blueprint.&lt;/p&gt;

&lt;h4&gt;
  
  
  Inputs, Outputs and Steps.
&lt;/h4&gt;

&lt;p&gt;Our end result will be: A web application served on our domain with TLS enabled, robust enough to withstand a DDoS attack.&lt;/p&gt;

&lt;p&gt;For this, Kong Ingress will help us serve the frontend on the root path and a backend service on the /api path. LetsEncrypt will provide TLS certificates, and Cloudflare will provide extra security and DNS services. All hosted on a Kubernetes cluster on GKE.&lt;/p&gt;

&lt;p&gt;(Note: Kong Ingress is not strictly necessary since we only have a couple of services, but, considering that once you go the microservices route there is usually an explosion of new internal services, we might as well be prepared.)&lt;/p&gt;

&lt;p&gt;To achieve this we need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend app: We will be using NextJS (&lt;a href="https://nextjs.org/"&gt;https://nextjs.org/&lt;/a&gt;) for its simplicity&lt;/li&gt;
&lt;li&gt;Backend app: Will only resolve static requests. (Connecting to a DB or any other resource is out of this guide’s scope)&lt;/li&gt;
&lt;li&gt;A domain: You can buy one on: Google Domains (&lt;a href="https://domains.google/"&gt;https://domains.google/&lt;/a&gt;), Cloudflare (&lt;a href="https://www.cloudflare.com/"&gt;https://www.cloudflare.com/&lt;/a&gt;) or GoDaddy (&lt;a href="https://www.godaddy.com/"&gt;https://www.godaddy.com/&lt;/a&gt;). The one I picked for this guide is &lt;strong&gt;outsrc.dev&lt;/strong&gt; (on Google Domains, .dev domains on Google Chrome are forced to use TLS)&lt;/li&gt;
&lt;li&gt;A Google Cloud account. (&lt;a href="https://cloud.google.com/"&gt;https://cloud.google.com/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Backend Service
&lt;/h4&gt;

&lt;p&gt;Our backend service is very simple and has no external dependencies. We only want a couple of APIs we can use.&lt;/p&gt;

&lt;p&gt;/states Returns a list of US States, only the 2 letter code.&lt;/p&gt;

&lt;p&gt;/states/&amp;lt;code&amp;gt; Returns an object with the State code and the State’s name&lt;/p&gt;

&lt;p&gt;The code is hosted here: &lt;a href="https://github.com/ernestofreyreg/outsrc-demo-back"&gt;https://github.com/ernestofreyreg/outsrc-demo-back&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is a little bit of JavaScript and, most importantly, a Dockerfile.&lt;/p&gt;
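The two endpoints are simple enough to sketch. This is an illustrative reconstruction, not the actual code from the outsrc-demo-back repository, and the state table is abbreviated:

```javascript
// Hypothetical reconstruction of the two endpoints described above;
// the real implementation lives in the outsrc-demo-back repo.
const STATES = { CA: "California", NY: "New York", TX: "Texas" }; // abbreviated

// Resolve a request path to a status code and a JSON body.
function handle(path) {
  if (path === "/states") {
    // /states -> list of two-letter state codes
    return { status: 200, body: Object.keys(STATES) };
  }
  const match = path.match(/^\/states\/([A-Z]{2})$/);
  if (match) {
    const name = STATES[match[1]];
    if (name) {
      // /states/<code> -> { state, name }
      return { status: 200, body: { state: match[1], name } };
    }
  }
  return { status: 404, body: { error: "Not found" } };
}

// Wiring it to Node's http server on port 3000 (as the Dockerfile expects)
// would look like:
// require("http").createServer((req, res) =&gt; {
//   const out = handle(req.url);
//   res.writeHead(out.status, { "Content-Type": "application/json" });
//   res.end(JSON.stringify(out.body));
// }).listen(3000);
```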

&lt;h4&gt;
  
  
  Frontend Application
&lt;/h4&gt;

&lt;p&gt;Our frontend application is very simple: only 2 pages. The first loads a list of States from the API; the second shows a State’s detail (also from the API). Both pull their data from the backend service.&lt;/p&gt;

&lt;p&gt;/ Front page, shows a list of US States with links&lt;/p&gt;

&lt;p&gt;/state?state=&amp;lt;code&amp;gt; Shows the State code and name and a back link.&lt;/p&gt;

&lt;p&gt;Code was written in Typescript, you can find the source code here: &lt;a href="https://github.com/ernestofreyreg/outsrc-demo-front"&gt;https://github.com/ernestofreyreg/outsrc-demo-front&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It also contains a Dockerfile, since we want to ship this application as a container.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 0: Create a GCP Project.
&lt;/h3&gt;

&lt;p&gt;Go to the Google Cloud Console at (&lt;a href="https://console.cloud.google.com/"&gt;https://console.cloud.google.com/&lt;/a&gt;) and create a new Project. (I named mine Outsrc and the project’s ID is outsrc)&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Build, Tag &amp;amp; Push Docker images for Frontend and Backend.
&lt;/h3&gt;

&lt;p&gt;Let’s clone and build the 2 services for Frontend and Backend.&lt;/p&gt;

&lt;p&gt;First Frontend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone [git@github.com](mailto:git@github.com):ernestofreyreg/outsrc-demo-front.git
$ cd outsrc-demo-front
$ docker build -t outsrc-demo-front .
$ docker tag outsrc-demo-front:latest gcr.io/outsrc/outsrc-demo-front:1.0.0
$ docker push gcr.io/outsrc/outsrc-demo-front:1.0.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Backend:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone [git@github.com](mailto:git@github.com):ernestofreyreg/outsrc-demo-back.git
$ cd outsrc-demo-back
$ docker build -t outsrc-demo-back .
$ docker tag outsrc-demo-back:latest gcr.io/outsrc/outsrc-demo-back:1.0.0
$ docker push gcr.io/outsrc/outsrc-demo-back:1.0.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Once finished, we will see both images in the GCP Console, under the Container Registry service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GFsYFFiG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ARs8AN0qaaGlCV4QXKkpzaw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GFsYFFiG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ARs8AN0qaaGlCV4QXKkpzaw.png" alt=""&gt;&lt;/a&gt;Container Registry after pushing our 2 service images&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2: Create a Kubernetes Cluster on GCP
&lt;/h3&gt;

&lt;p&gt;For this we will use GCP’s Managed Kubernetes service or GKE (Google Kubernetes Engine)&lt;/p&gt;

&lt;p&gt;GCP Console -&amp;gt; Kubernetes Engine -&amp;gt; Clusters -&amp;gt; Create Cluster&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--00tSa-A6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Aj7mKHujqLFs5alN4PNgUew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--00tSa-A6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Aj7mKHujqLFs5alN4PNgUew.png" alt=""&gt;&lt;/a&gt;Creating our Kubernetes Cluster&lt;/p&gt;

&lt;p&gt;I used the First Cluster template (a small pool of nodes, good for experimenting; I just changed the pool size from 1 to 3).&lt;/p&gt;

&lt;p&gt;Creating a cluster takes some time.&lt;/p&gt;

&lt;p&gt;Once created we can connect our local dev box to the cluster, so we can use the kubectl command to control our cluster.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud container clusters get-credentials outsrc-cluster --zone us-west1-a --project outsrc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Check cluster nodes (should show 3 nodes in the cluster)&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes
NAME                                      STATUS   ROLES    AGE   VERSION
gke-outsrc-cluster-pool-1-f00215b6-7d0t   Ready    &amp;lt;none&amp;gt;   11h   v1.14.8-gke.12
gke-outsrc-cluster-pool-1-f00215b6-dvvl   Ready    &amp;lt;none&amp;gt;   11h   v1.14.8-gke.12
gke-outsrc-cluster-pool-1-f00215b6-tct2   Ready    &amp;lt;none&amp;gt;   11h   v1.14.8-gke.12
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 3: Deploy our Applications on Kubernetes
&lt;/h3&gt;

&lt;p&gt;To deploy our application on Kubernetes we need several things:&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 3.1 Namespace
&lt;/h4&gt;

&lt;p&gt;Our apps will share the same Namespace in Kubernetes.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
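The gist embedded above is not rendered in this feed. A namespace definition this minimal is likely all it contains; as a sketch, outsrc-namespace.yml would look like:

```yaml
# outsrc-namespace.yml - illustrative reconstruction of the gist above
apiVersion: v1
kind: Namespace
metadata:
  name: outsrc
```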



&lt;p&gt;We will use the kubectl apply command to create all of our Kubernetes artifacts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f outsrc-namespace.yml
namespace/outsrc created

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   12m
kube-node-lease   Active   12m
kube-public       Active   12m
kube-system       Active   12m
outsrc            Active   19s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;We would need to include the --namespace=outsrc parameter on every command, or… we can:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl config set-context --current --namespace=outsrc
Context "gke_outsrc_us-west1-a_outsrc-cluster" modified.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;All subsequent kubectl commands will already be mapped to the outsrc namespace (you can override this by setting the --namespace=... parameter).&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 3.2: Deployments
&lt;/h4&gt;

&lt;p&gt;Our 2 services Frontend and Backend need Deployment resource files.&lt;/p&gt;

&lt;p&gt;Backend:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
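The gist above is not rendered in this feed. As a rough sketch, outsrc-back-deployment.yml likely resembles the following; the image tag matches the one pushed earlier, and the service: back label is inferred by analogy with the kubectl logs -l service=front command used later in the guide:

```yaml
# outsrc-back-deployment.yml - illustrative reconstruction, not the
# actual gist. Two replicas, as the text below states.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: outsrc-back-deployment
  namespace: outsrc
spec:
  replicas: 2
  selector:
    matchLabels:
      service: back
  template:
    metadata:
      labels:
        service: back
    spec:
      containers:
        - name: outsrc-back
          image: gcr.io/outsrc/outsrc-demo-back:1.0.0
          ports:
            - containerPort: 3000
```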



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f outsrc-back-deployment.yml
deployment.apps/outsrc-back-deployment created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Frontend:&lt;/p&gt;

&lt;p&gt;The Frontend service requires one runtime parameter: the API_URL pointing to the backend service. Since this service will be accessed from the outsrc.dev domain, we need to specify a URL + path where we are going to serve this App’s backend. In this case it will be &lt;a href="https://outsrc.dev/api"&gt;https://outsrc.dev/api&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
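Again, the gist is not rendered here. A sketch of outsrc-front-deployment.yml, with the API_URL environment variable described above (the exact shape of the real gist may differ):

```yaml
# outsrc-front-deployment.yml - illustrative sketch; note the API_URL
# env var pointing at the public /api path.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: outsrc-front-deployment
  namespace: outsrc
spec:
  replicas: 2
  selector:
    matchLabels:
      service: front
  template:
    metadata:
      labels:
        service: front
    spec:
      containers:
        - name: outsrc-front
          image: gcr.io/outsrc/outsrc-demo-front:1.0.0
          ports:
            - containerPort: 3000
          env:
            - name: API_URL
              value: https://outsrc.dev/api
```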



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f outsrc-front-deployment.yml
deployment.apps/outsrc-front-deployment created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;At this point we have 2 services running on our cluster, each with 2 replicas.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get deployments
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
outsrc-back-deployment    2/2     2            2           11m
outsrc-front-deployment   2/2     2            2           2m48s

$ kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
outsrc-back-deployment-5cbf946975-6tshn    1/1     Running   0          18s
outsrc-back-deployment-5cbf946975-prtqk    1/1     Running   0          11m
outsrc-front-deployment-7995b6bdc4-g9krr   1/1     Running   0          35s
outsrc-front-deployment-7995b6bdc4-mlvk2   1/1     Running   0          2m54s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;To check Frontend services logs:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl logs -l service=front
&amp;gt; Ready on http://localhost:3000
&amp;gt; Ready on http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Step 3.3: Services
&lt;/h4&gt;

&lt;p&gt;A Service is a networking resource Kubernetes uses to manage access to running Pods. Let’s create a Service resource definition for each of the Frontend and Backend services.&lt;/p&gt;

&lt;p&gt;Frontend:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;Backend:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
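The two Service gists above are not rendered in this feed. A sketch of the frontend one follows; the backend Service is analogous, named outsrc-back-service and selecting service: back (the service: front label matches the kubectl logs -l service=front command used earlier):

```yaml
# outsrc-front-service.yml - illustrative reconstruction, not the
# actual gist. Exposes the frontend pods on port 3000.
apiVersion: v1
kind: Service
metadata:
  name: outsrc-front-service
  namespace: outsrc
spec:
  selector:
    service: front
  ports:
    - port: 3000
      targetPort: 3000
```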


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f outsrc-front-service.yml
service/outsrc-front-service created

$ kubectl apply -f outsrc-back-service.yml
service/outsrc-back-service created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;How is our cluster looking so far? From the GCP Console we can see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--quBl8HF3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AI-4c1RzRJ4O-hVjxVttfJA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--quBl8HF3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AI-4c1RzRJ4O-hVjxVttfJA.png" alt=""&gt;&lt;/a&gt;Our Frontend and Backend services deployed&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wRp_iY4f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ACSJYz6CQI8Tabtbrex78YA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wRp_iY4f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ACSJYz6CQI8Tabtbrex78YA.png" alt=""&gt;&lt;/a&gt;Backend and Frontend Services are mapped to port 3000&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 4: Install Kong Ingress
&lt;/h3&gt;

&lt;p&gt;An &lt;strong&gt;Ingress&lt;/strong&gt; is an object that allows access to your Kubernetes services from outside the Kubernetes cluster. We are going to use Kong’s Ingress Controller (&lt;a href="https://konghq.com/solutions/kubernetes-ingress/"&gt;https://konghq.com/solutions/kubernetes-ingress&lt;/a&gt;). Even if we don’t strictly need it for this exercise, I included it because of its versatility and support for extensions/plugins.&lt;/p&gt;

&lt;p&gt;According to its GitHub repo (&lt;a href="https://github.com/Kong/kubernetes-ingress-controller"&gt;https://github.com/Kong/kubernetes-ingress-controller&lt;/a&gt;), installation is a single command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f [https://bit.ly/k4k8s](https://bit.ly/k4k8s)
namespace/kong created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
configmap/kong-server-blocks created
service/kong-proxy created
service/kong-validation-webhook created
deployment.apps/ingress-kong created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;(If you have problems installing Kong Ingress Controller please check &lt;a href="https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/gke.md"&gt;https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/gke.md&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Once Kong Ingress is installed, a LoadBalancer will be created with a public IP address. We need this IP address for the next step: DNS.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get service --namespace=kong
NAME                      TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
kong-proxy                LoadBalancer   10.40.12.83   35.212.152.27   80:30435/TCP,443:32312/TCP   72m
kong-validation-webhook   ClusterIP      10.40.9.92    &amp;lt;none&amp;gt;          443/TCP                      72m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;As you can see, the external IP address on the kong-proxy service is 35.212.152.27.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 5: Setup Domain.
&lt;/h3&gt;

&lt;p&gt;I bought the outsrc.dev domain on Google Domains; you can use your preferred provider. Once you have your domain, register on Cloudflare and add your domain there. You will need to point your domain to Cloudflare’s nameservers (you can see that in the next image).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rEA8vOTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AHbieVMGCLecPllCksmXU8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rEA8vOTg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AHbieVMGCLecPllCksmXU8w.png" alt=""&gt;&lt;/a&gt;Using Cloudflare’s nameservers on Google Domains&lt;/p&gt;

&lt;p&gt;Once this step is finished, create the following records on the Cloudflare DNS service:&lt;/p&gt;

&lt;p&gt;First: proxy.outsrc.dev, type A, pointing to the public IP 35.212.152.27. This is the only record that needs to point to the public IP address; all other records will be CNAMEs pointing to proxy.outsrc.dev.&lt;/p&gt;

&lt;p&gt;Second: Main domain where the app will be deployed:&lt;/p&gt;

&lt;p&gt;outsrc.dev, type CNAME, points to proxy.outsrc.dev&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G3fcaqM6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AacP07MK8DZ_kLUOuUxNuqA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G3fcaqM6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AacP07MK8DZ_kLUOuUxNuqA.png" alt=""&gt;&lt;/a&gt;DNS Zone for outsrc.dev&lt;/p&gt;

&lt;p&gt;Also notice we didn’t activate Proxy mode on the main outsrc.dev domain. For now we will use only Cloudflare’s DNS service.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 6: Cert Manager + LetsEncrypt
&lt;/h3&gt;

&lt;p&gt;CertManager is a native &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; certificate management controller. It can help with issuing certificates from a variety of sources, such as &lt;a href="https://letsencrypt.org/"&gt;Let’s Encrypt&lt;/a&gt;, &lt;a href="https://www.vaultproject.io/"&gt;HashiCorp Vault&lt;/a&gt;, &lt;a href="https://www.venafi.com/"&gt;Venafi&lt;/a&gt;, a simple signing key pair, or self signed.&lt;/p&gt;

&lt;p&gt;Let’s install it on the Kubernetes Cluster: (&lt;a href="https://cert-manager.io/docs/installation/kubernetes/"&gt;https://cert-manager.io/docs/installation/kubernetes/&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create namespace cert-manager
namespace/cert-manager created

$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/cert-manager configured
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:auth-delegator created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:webhook-authentication-reader created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:webhook-requester created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;To use CertManager, create a ClusterIssuer: an object that will issue the certificates we need, in this case using a LetsEncrypt issuer.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;




&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
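The two ClusterIssuer gists above are not rendered in this feed. A sketch of the production issuer follows (the email is a placeholder; the staging issuer is identical except for its name and the acme-staging-v02 server URL, and the apiVersion matches the cert-manager v0.12 release installed above):

```yaml
# letsencrypt-production.yml - illustrative reconstruction, not the
# actual gist.
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@outsrc.dev   # placeholder
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - http01:
          ingress:
            class: kong
```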


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f letsencrypt-staging.yml
clusterissuer.cert-manager.io/letsencrypt-staging created

$ kubectl apply -f letsencrypt-production.yml
clusterissuer.cert-manager.io/letsencrypt-production created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;LetsEncrypt, being a free public service, has to protect itself from unfair use, so if you are testing and unsure your DNS zone is correctly configured, I recommend using the letsencrypt-staging issuer.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 7: Ingress
&lt;/h3&gt;

&lt;p&gt;The Ingress resource definition for the outsrc.dev application references several of the elements we already have: the Backend and Frontend services, Kong Ingress, and the LetsEncrypt production issuer.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
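The Ingress gist above is not rendered in this feed. Based on the elements just described, a sketch of outsrc-dev-ingress.yml likely looks like this (the apiVersion matches the ingress.extensions output below; the TLS secret name is an assumption):

```yaml
# outsrc-dev-ingress.yml - illustrative reconstruction, not the
# actual gist. Routes /api to the backend and / to the frontend,
# with a LetsEncrypt certificate requested via cert-manager.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: outsrc-dev-ingress
  namespace: outsrc
  annotations:
    kubernetes.io/ingress.class: kong
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  tls:
    - hosts:
        - outsrc.dev
      secretName: outsrc-dev-tls   # assumed name
  rules:
    - host: outsrc.dev
      http:
        paths:
          - path: /api
            backend:
              serviceName: outsrc-back-service
              servicePort: 3000
          - path: /
            backend:
              serviceName: outsrc-front-service
              servicePort: 3000
```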



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f outsrc-dev-ingress.yml
ingress.extensions/outsrc-dev-ingress created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After creation, the service will be immediately accessible at &lt;a href="http://outsrc.dev"&gt;http://outsrc.dev&lt;/a&gt;, but TLS probably won’t be configured yet: issuing a certificate for the domain takes a short time. After that, this is what we get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VX-Oms69--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEcOrraWaAkqXGe-lbDHj2A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VX-Oms69--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AEcOrraWaAkqXGe-lbDHj2A.png" alt=""&gt;&lt;/a&gt;It works!! With TLS (Thanks LetsEncrypt)&lt;/p&gt;

&lt;p&gt;As you can see, our application is served by our Kubernetes cluster hosted on GKE, using Kong Ingress Controller and LetsEncrypt for TLS. The last step is to activate Cloudflare’s Proxy mode.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8cyanlZN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2OqKJ6towg4WXLURW-XQjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8cyanlZN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A2OqKJ6towg4WXLURW-XQjg.png" alt=""&gt;&lt;/a&gt;Proxy mode for our main domain outsrc.dev&lt;/p&gt;

&lt;p&gt;Once we set Proxy mode on our main outsrc.dev domain, Cloudflare will provision a certificate and proxy all requests to our application, enabling extra security and DDoS protection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k7PdrWST--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AM0OJ0h54YFiCxlSvM_WlFA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k7PdrWST--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AM0OJ0h54YFiCxlSvM_WlFA.png" alt=""&gt;&lt;/a&gt;Cloudflare certificate while proxy mode is on.&lt;/p&gt;

&lt;p&gt;The Frontend service is immediately visible; the Backend not so much (although all the data on the Frontend comes from the Backend). But it is worth testing the Backend service too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl [https://outsrc.dev/api/states/CA](https://outsrc.dev/api/states/CA)
{"state":"CA","name":"California"}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Last Step: Conclusions
&lt;/h3&gt;

&lt;p&gt;As you can see, deploying a fairly simple application on Kubernetes (via GKE) can be straightforward. But that doesn’t mean it is simple, especially securing it (only partially covered in this guide) and maintaining it (we will cover rolling upgrades and rollbacks in other guides).&lt;/p&gt;

&lt;p&gt;Happy hacking…&lt;/p&gt;




</description>
      <category>kubernetes</category>
      <category>letsencrypt</category>
      <category>googlecloudplatfor</category>
      <category>ingress</category>
    </item>
    <item>
      <title>Frontend dockerized build artifacts with NextJS</title>
      <dc:creator>Ernesto Freyre</dc:creator>
      <pubDate>Tue, 26 Nov 2019 17:39:10 +0000</pubDate>
      <link>https://forem.com/efreyreg/frontend-dockerized-build-artifacts-with-nextjs-49c7</link>
      <guid>https://forem.com/efreyreg/frontend-dockerized-build-artifacts-with-nextjs-49c7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4W2Pwv3c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A0tSfmFLTulxJ5P03_x4yHQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4W2Pwv3c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A0tSfmFLTulxJ5P03_x4yHQ.jpeg" alt=""&gt;&lt;/a&gt;Photo by &lt;a href="https://www.pexels.com/@tomfisk?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Tom Fisk &lt;/a&gt;from &lt;a href="https://www.pexels.com/photo/aerial-photography-of-container-van-lot-3063470/?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While deploying Frontend applications there are several ways you can go. None bad, just different use cases. You can dockerize the app (that is, make a docker container with your application assets and runtime) and deploy it to any infrastructure that supports it (Kubernetes, et al.), or you can go the simpler (and more popular by the day) route of creating a static build of your app and serving it over a CDN (Content Delivery Network), with all the benefits this entails (no servers, content at the edge closer to users for a faster experience, etc.).&lt;/p&gt;

&lt;p&gt;Now, you probably want to have runtime environments, most of the time at least 3: development, staging and production. This affects your build and deploy pipelines. Let’s say you have your latest app version working well (tested and all) on the staging environment and decide to deploy that version to production. Depending on how builds are created, you can end up with a broken version of your app on production just by having broken dependencies that are not correctly managed: your build pipeline performs another build of the production branch (or tag), and now we have shipped broken code to our users. Not good.&lt;/p&gt;

&lt;p&gt;Dockerizing our application definitely helps. We can create a docker image per commit: environment agnostic, tagged and stored in our registry. We can promote or run this docker image in any environment with confidence. Since we have NextJS in the title of the post, let’s see how to dockerize a NextJS application.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
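The Dockerfile gist above is not rendered in this feed. A sketch of the two-stage build described below might look like this (the node base image version is an assumption; the real gist may differ in details):

```dockerfile
# Illustrative two-stage Dockerfile for a NextJS app, matching the
# description below; not the actual gist.

# Stage 1: install all dependencies, build, then prune dev dependencies
FROM node:12-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --production

# Stage 2: copy only what is needed to run the production build
FROM node:12-alpine
WORKDIR /app
COPY --from=builder /app/package.json ./
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/.next ./.next
EXPOSE 3000
CMD ["node_modules/.bin/next", "start"]
```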


&lt;p&gt;The Dockerfile described has 2 stages. The first installs all dependencies (including development dependencies), makes a production build, and removes the non-production dependencies. The second stage copies the relevant files, including the build and the production dependencies, giving us a leaner, more compact image we can then run with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -d -p 3000:3000 fe-app-image
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Since we want to run the same image across runtime environments we can also do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Development
$ docker run -d -p 3000:3000 \
-e API=[https://dev-api.myapp.com](https://staging-api.myapp.com) \
fe-app-image

# Staging
$ docker run -d -p 3000:3000 \
-e API=[https://staging-api.myapp.com](https://staging-api.myapp.com) \
fe-app-image

# Production
$ docker run -d -p 3000:3000 \
-e API=[https://api.myapp.com](https://staging-api.myapp.com) \
fe-app-image
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Or even for local development or tests&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Local dev
$ docker run -d -p 3000:3000 \
-e API=[http://1](https://staging-api.myapp.com)92.168.1.87:5000 \
fe-app-image
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Docker images are neat. Now, for our runtime environments we still depend on servers to deploy our app so our users can access it. The other alternative we described was static deploys. That is, build your app so the output is just a bunch of HTML, JS and CSS files we can put in a folder and serve via a CDN. The main problem with this approach is the lack of a runtime. In other words, we cannot make the static build environment agnostic. Injecting environment properties then becomes a problem we need to solve: via config endpoints (fetched before the app loads), environment sniffing (checking the domain the app is running on and inferring env vars from it), or injecting HTTP headers (not sure yet). All require extra work. (If you have solved this problem, please comment with your solutions.)&lt;/p&gt;

&lt;p&gt;What we usually see with static deploys is: every time we want to deploy to a specific environment, we run the build process with the runtime vars so the build has them baked in. This approach works; it is probably what you are using right now if you are doing static deploys at all. But it still has the problem described above: if some dependency changed or is not well managed at build time, we cannot guarantee our build will work the same way.&lt;/p&gt;

&lt;p&gt;How can we protect ourselves from this problem and still do static deploys? (Having no servers to maintain is really appealing.) One approach is to still create a Docker image of your app (using the Dockerfile described above), so build time is separated from deploy time.&lt;/p&gt;

&lt;p&gt;At deploy time, we can pull any image (easy rollbacks FTW) and run it with a changed entrypoint, so instead of running the app we export its static assets. (This is viable in NextJS thanks to the next export command.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Deploying to production
$ docker run \
-e API=[https://api.myapp.com](https://staging-api.myapp.com) \
-v ~/cd-folder/out:/app/out \ 
--entrypoint "node\_modules/.bin/next" \
fe-app-image export

# Copy static assets from ~/cd-folder/out to your production CDN
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Why?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Builds and deploys are separated, so dependency problems are no longer an issue.&lt;/li&gt;
&lt;li&gt;Deploy optionality: we can now choose how to deploy our apps, either to Kubernetes using Docker or as a static deploy to a CDN.&lt;/li&gt;
&lt;li&gt;Easy rollbacks: we can build, tag and store all of our builds in a Docker registry, then choose which version we want to deploy directly from the registry.&lt;/li&gt;
&lt;li&gt;Easier local development: any dev team member, Frontend or not, can run any version of the frontend locally.&lt;/li&gt;
&lt;li&gt;SSR optionality: static deploys don’t fully support SSR, only partial pre-rendering of pages, but you can get SSR back by deploying your app as a Docker container again.&lt;/li&gt;
&lt;li&gt;Easier local automated tests: just run your Docker container pointing to a mountebank server &lt;a href="http://www.mbtest.org/"&gt;http://www.mbtest.org/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
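
&lt;p&gt;A sketch of the build, tag and rollback flow from the list above. The registry host and version tags here are examples, not part of the original setup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build once, tag with the release version and push to a registry
$ docker build -t registry.example.com/fe-app:1.4.2 .
$ docker push registry.example.com/fe-app:1.4.2

# Rollback: run any earlier tag straight from the registry
$ docker run -d -p 3000:3000 \
-e API=https://api.myapp.com \
registry.example.com/fe-app:1.4.1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;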

&lt;p&gt;Happy hacking!&lt;/p&gt;




</description>
      <category>kubernetes</category>
      <category>react</category>
      <category>jamstack</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
