<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Daniel Albuschat</title>
    <description>The latest articles on Forem by Daniel Albuschat (@danielkun).</description>
    <link>https://forem.com/danielkun</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1438%2F10500.png</url>
      <title>Forem: Daniel Albuschat</title>
      <link>https://forem.com/danielkun</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/danielkun"/>
    <language>en</language>
    <item>
      <title>Cryptography: What is the difference between Hashing, Signing and MAC?</title>
      <dc:creator>Daniel Albuschat</dc:creator>
      <pubDate>Sun, 30 Jan 2022 14:04:29 +0000</pubDate>
      <link>https://forem.com/danielkun/cryptography-what-is-the-difference-between-hashing-signing-and-mac-5dbp</link>
      <guid>https://forem.com/danielkun/cryptography-what-is-the-difference-between-hashing-signing-and-mac-5dbp</guid>
<description>&lt;p&gt;&lt;em&gt;I set out to learn everything about cryptography in 2022 and share what I have learned along the way. I started the &lt;a href="https://www.cryptography-primer.info/"&gt;Cryptography Primer&lt;/a&gt; this year, where you can find more information, and which will grow each week. Following is an &lt;a href="https://www.cryptography-primer.info/hash_sign_mac/"&gt;excerpt&lt;/a&gt; from this site explaining everything you need to know to get started with hashes, MACs and digital signatures. Have fun!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;-- &lt;a href="https://twitter.com/dalbuschat"&gt;Daniel&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hashes, MACs and digital signatures are cryptographic primitives, though hashes are also used outside of cryptography - e.g. to validate that a message has not been corrupted during transport.&lt;/p&gt;

&lt;p&gt;Hashes, MACs and digital signatures have a few things in common:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They can be used to validate the "integrity" of a message - this means that you can be sure that the message was not corrupted if it matches the hash, signature or MAC that you compare it with.&lt;/li&gt;
&lt;li&gt;The original message cannot be recovered from them.&lt;/li&gt;
&lt;li&gt;Hence, they &lt;strong&gt;don't&lt;/strong&gt; encrypt messages and are not encryption algorithms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a table showing what each primitive can and cannot do:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Hash&lt;/th&gt;
&lt;th&gt;Message Authentication Code (MAC)&lt;/th&gt;
&lt;th&gt;Digital Signature&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Validate that data has not been tampered with or has been corrupted ("Integrity")&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Validate the sender of a message by using the Private Key ("Authentication")&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Validate the sender of a message by using the Public Key ("Authentication")&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prove that the sender has written and published a message ("Non-Repudiation")&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;What are use-cases for hashes?&lt;/h2&gt;

&lt;p&gt;A hash basically "reduces" an arbitrarily large message into a fixed-size digest in a non-reversible way. In particular, a hash function aims to do this in a way that makes &lt;em&gt;collisions&lt;/em&gt; as unlikely as possible. Nowadays, when you say "hash function", you usually mean cryptographic hash functions. There are non-cryptographic hash functions&lt;sup id="fnref1"&gt;1&lt;/sup&gt;, too (though some refuse to even call those hash functions): most notably CRC&lt;sup id="fnref2"&gt;2&lt;/sup&gt; (cyclic redundancy check), which is often used to verify that data has not been (unintentionally) corrupted during transport.&lt;/p&gt;

&lt;p&gt;But even cryptographic hash functions can be used for non-cryptographic as well as cryptographic use cases:&lt;/p&gt;

&lt;h3&gt;Non-Cryptographic use-cases for hash functions&lt;/h3&gt;

&lt;p&gt;Here are some examples of how hash functions are used in a non-cryptographic context:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate that a message has not been corrupted (or modified) during transport. For example, you can often find hashes next to a download link that can be used to validate that the file has exactly the same content as it is supposed to have after you have downloaded it.&lt;/li&gt;
&lt;li&gt;"Shrink" information to a unique identifier that can be used for lookups. For example, you can look up a whole sentence or even a whole paragraph of text in a database by using its hash, instead of comparing all characters of the paragraph in the database.&lt;/li&gt;
&lt;/ul&gt;
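Both use-cases can be sketched in a few lines of Python with the standard library's hashlib module (the file contents and paragraph below are made-up placeholders):

```python
import hashlib

# Use-case 1: verify that a downloaded file matches the digest
# published next to the download link.
published_digest = hashlib.sha256(b"release-1.0 contents").hexdigest()
downloaded = b"release-1.0 contents"
assert hashlib.sha256(downloaded).hexdigest() == published_digest

# Use-case 2: use the fixed-size digest as a lookup key instead of
# comparing the full text character by character.
paragraph = "a whole paragraph of text that would be slow to compare"
key = hashlib.sha256(paragraph.encode("utf-8")).hexdigest()
index = {key: paragraph}
assert index[key] == paragraph
```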

&lt;h3&gt;Cryptographic use-cases for hash functions&lt;/h3&gt;

&lt;p&gt;Here are some examples of how hash functions are used in a cryptographic context:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Usually digital signatures are not applied to the whole message or data block, but to a hash digest of that message. In this scenario, the collision resistance of the hash function is of utmost importance&lt;sup id="fnref3"&gt;3&lt;/sup&gt;&lt;sup id="fnref4"&gt;4&lt;/sup&gt;.&lt;/li&gt;
&lt;li&gt;Store passwords&lt;sup id="fnref5"&gt;5&lt;/sup&gt;.&lt;/li&gt;
&lt;li&gt;Some MAC algorithms are based on hash functions - these are called "HMAC" (hash-based message authentication code) and basically build a hash on a mixup of the Private Key and the message.&lt;/li&gt;
&lt;/ul&gt;
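As a sketch of the password use-case, the standard library's hashlib.pbkdf2_hmac provides a salted, deliberately slow hash construction. The iteration count below is illustrative, not a tuned recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow; tune for your hardware

def hash_password(password, salt=None):
    # A fresh random salt per password defeats precomputed lookup tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected_digest):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(digest, expected_digest)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong password", salt, stored)
```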

&lt;h2&gt;Comparison of hashing functions&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Digest Sizes&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SHA2&lt;/td&gt;
&lt;td&gt;224, 256, 384, 512 Bit&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;Recommended&lt;/p&gt;
&lt;p&gt;SHA2-family hashing functions are state-of-the-art and considered very secure. SHA2 is not less secure than SHA3.&lt;/p&gt;
&lt;p&gt;Also known as SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SHA3&lt;/td&gt;
&lt;td&gt;224, 256, 384, 512 Bit&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;Recommended&lt;/p&gt;
&lt;p&gt;SHA3-family hashing functions are state-of-the-art and considered as secure as SHA2. SHA3 was invented as an alternative in case SHA2 is ever broken, but it is computationally more expensive than SHA2.&lt;/p&gt;
&lt;p&gt;Also known as SHA3/224, SHA3/256, SHA3/384 and SHA3/512.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SHA1&lt;/td&gt;
&lt;td&gt;160 Bit&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;Not recommended&lt;/p&gt;
&lt;p&gt;There are known practical collision attacks against SHA1.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MD5&lt;/td&gt;
&lt;td&gt;128 Bit&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;Not recommended&lt;/p&gt;
&lt;p&gt;There are known practical collision attacks against MD5.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;What are use-cases for digital signatures?&lt;/h2&gt;

&lt;p&gt;Digital signatures also provide the integrity validation of hashes. But additionally, digital signatures let you verify that the sender of the message is authentic, i.e. that the message originates from the source you expected.&lt;/p&gt;

&lt;p&gt;Because digital signatures use "asymmetric cryptography", you can use the Public Key to validate the integrity and authenticity of the message. This has the advantage that the sender and the recipient do not need to share a common Private Key.&lt;/p&gt;

&lt;p&gt;Use-cases for digital signatures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publish a message and "sign" it so that everyone can verify that it has been written and published by you.&lt;/li&gt;
&lt;li&gt;For example, TLS (and therefore HTTPS, which builds on TLS) uses digital signatures to authenticate the server behind the domain that you have requested data from.&lt;/li&gt;
&lt;li&gt;The underlying building block for this are X.509 certificates, which are also widely used in other systems where anyone must be able to verify that a certificate granting certain permissions or identities can be trusted.&lt;/li&gt;
&lt;li&gt;Mobile platforms such as Apple's iOS and Google's Android use digital signatures to sign apps in their App Store/Play Store so that the system is able to trust these apps (and in turn is able to block running untrusted apps).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Comparison of digital signature algorithms&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;EdDSA&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;Recommended&lt;/p&gt;
&lt;p&gt;This algorithm is a variant of DSA, but uses "twisted Edwards curves", which have a few advantages&lt;sup id="fnref6"&gt;6&lt;/sup&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High performance on a wide range of systems&lt;/li&gt;
&lt;li&gt;More resilient to side-channel attacks&lt;/li&gt;
&lt;li&gt;Does not require a unique random number for each message&lt;/li&gt;
&lt;/ul&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ECDSA&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;Recommended&lt;/p&gt;
&lt;p&gt;This algorithm is a variant of DSA, but uses elliptic curves instead of modular arithmetic. This drastically decreases the key size required for the same security level. Additionally, the same Public/Private Key pair can be used for encryption with ECIES, which may be an advantage.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RSA&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;Recommended&lt;/p&gt;
&lt;p&gt;RSA can be used for digital signatures and asymmetric encryption using the same Public/Private key pair. Compared to ECDSA and EdDSA it has much larger key sizes and is computationally more expensive. However, if your system can profit from using the same Private/Public key pair for signing and encrypting, and the somewhat rarely used ECIES is not a feasible option for you, RSA can be a good fit.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DSA&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;Use EdDSA, ECDSA or RSA instead (preference in this order)&lt;/p&gt;
&lt;p&gt;DSA was the first standardised digital signature algorithm and is still considered secure. However, the large key size and expensive computations make it less practical than its modern successors such as ECDSA and EdDSA.&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;What are MACs used for?&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;You should usually not need to use MACs yourself&lt;/strong&gt;, because they are often part of an "authenticated encryption" cipher such as &lt;a href="https://www.cryptography-primer.info/algorithms/aes/"&gt;AES-GCM&lt;/a&gt; or ChaCha20-Poly1305.&lt;/p&gt;

&lt;p&gt;MACs are similar to digital signatures, but they do not have the advantage of asymmetric cryptography: they require the same secret key for "signing" a message and for authenticating it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MACs are of utmost importance to prevent chosen-ciphertext attacks (CCA) on ciphers&lt;sup id="fnref7"&gt;7&lt;/sup&gt; - every cipher should include message authentication, which is usually accomplished by using a MAC.&lt;/li&gt;
&lt;/ul&gt;
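Such a MAC can be built and verified with Python's standard hmac module; the key and message here are made-up placeholders:

```python
import hashlib
import hmac

key = b"shared secret key"        # must be known to sender and recipient
message = b"amount=100;to=alice"

# The sender computes the tag and transmits it alongside the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

# The recipient recomputes the tag and compares in constant time.
def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)
assert not verify(key, b"amount=999;to=mallory", tag)  # tampered message
```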




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/List_of_hash_functions"&gt;List of hash functions&lt;/a&gt; on Wikipedia ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Cyclic_redundancy_check"&gt;CRC&lt;/a&gt; on Wikipedia ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;&lt;a href="https://www.win.tue.nl/hashclash/rogue-ca/"&gt;MD5 considered harmful today - Creating a rogue CA certificate&lt;/a&gt; from Eindhoven University of Technology ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;&lt;a href="https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html"&gt;Announcing the first SHA1 collision&lt;/a&gt; on Google Security Blog ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;&lt;a href="https://www.vaadata.com/blog/how-to-securely-store-passwords-in-database/"&gt;How to Securely Store Passwords in Database?&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;&lt;a href="https://crypto.stackexchange.com/questions/60383/what-is-the-difference-between-ecdsa-and-eddsa"&gt;What is the difference between ECDSA and EdDSA?&lt;/a&gt; on crypto.stackexchange.com ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;&lt;a href="https://tonyarcieri.com/all-the-crypto-code-youve-ever-written-is-probably-broken"&gt;All the crypto code you’ve ever written is probably broken&lt;/a&gt; as blogged by Tony Arcieri. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>cryptography</category>
      <category>hash</category>
      <category>rsa</category>
      <category>ellipticcurves</category>
    </item>
    <item>
      <title>Request for Comments: Slurping Kubernetes logs</title>
      <dc:creator>Daniel Albuschat</dc:creator>
      <pubDate>Sat, 31 Aug 2019 08:33:41 +0000</pubDate>
      <link>https://forem.com/danielkun/request-for-comments-slurping-kubernetes-logs-5gna</link>
      <guid>https://forem.com/danielkun/request-for-comments-slurping-kubernetes-logs-5gna</guid>
<description>&lt;p&gt;I've investigated many tools and stacks to store and search logs of containers, especially from Kubernetes clusters. In my environment, I have a hefty constraint: I don't want the logs, which may contain personal information, e.g. usernames in URIs, to be stored outside the EU. GDPR and all. That excludes ready-to-use services like Datadog, Dynatrace, New Relic, etc.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.elastic.co/de/what-is/elk-stack"&gt;ELK stack&lt;/a&gt;, consisting of Elasticsearch, Logstash and Kibana, seems pretty popular and can be used on-premise. However, while investigating this and other solutions, I found them all to be very complex, and read comments about them being hard to set up and maintain.&lt;/p&gt;

&lt;p&gt;Thinking about it, I wonder whether there is not a very simple approach that might yield good results. My idea is simple: Just write a small script that fetches logs from containers via &lt;code&gt;kubectl logs --timestamps=true&lt;/code&gt; once a minute or so. Consecutive fetches use  &lt;code&gt;--since-time&lt;/code&gt; of the last received timestamp. The script pushes the results into some database to archive them and make them searchable.&lt;/p&gt;
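A rough sketch of that script in Python (the pod name, namespace and the store callback below are placeholders, and this is a sketch of the idea, not a finished tool):

```python
import subprocess

def build_logs_cmd(pod, namespace, since=None):
    # Assemble the kubectl invocation; --since-time resumes after the
    # last timestamp we have already stored.
    cmd = ["kubectl", "logs", pod, "-n", namespace, "--timestamps=true"]
    if since is not None:
        cmd.append("--since-time=" + since)
    return cmd

def last_timestamp(lines):
    # With --timestamps=true, every line starts with an RFC 3339 timestamp.
    return lines[-1].split(" ", 1)[0] if lines else None

def slurp_once(pod, namespace, since, store):
    # One fetch cycle: run kubectl, push lines to the database,
    # and remember where to resume next time.
    out = subprocess.run(build_logs_cmd(pod, namespace, since),
                         capture_output=True, text=True).stdout
    lines = [line for line in out.splitlines() if line]
    store(lines)  # push into your database of choice
    return last_timestamp(lines) or since
```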

&lt;p&gt;I know, this is a "roll-your-own..." approach, but it seems simple yet effective. I'd like to ask the dev community for feedback on this idea, and for pointers to alternatives that are not complex and can be used on-premise.&lt;/p&gt;

&lt;p&gt;So, what are your thoughts?&lt;/p&gt;

</description>
      <category>help</category>
    </item>
    <item>
      <title>Nginx: Everything about proxy_pass</title>
      <dc:creator>Daniel Albuschat</dc:creator>
      <pubDate>Tue, 20 Aug 2019 18:45:20 +0000</pubDate>
      <link>https://forem.com/danielkun/nginx-everything-about-proxypass-2ona</link>
      <guid>https://forem.com/danielkun/nginx-everything-about-proxypass-2ona</guid>
<description>&lt;p&gt;With the advent of Microservices™, ingress routing and routing between services has become an ever-increasing demand. I currently default to &lt;a href="https://www.nginx.com"&gt;nginx&lt;/a&gt; for this - with no plausible reason or experience to back this decision, just because it seems to be the most used tool currently.&lt;/p&gt;

&lt;p&gt;However, the often-needed &lt;code&gt;proxy_pass&lt;/code&gt; directive has driven me crazy because of its - to me unintuitive - behavior. So I decided to take notes on how it works, what is possible with it, and how to circumvent some of its quirks.&lt;/p&gt;

&lt;h1&gt;First, a note on https&lt;/h1&gt;

&lt;p&gt;By default &lt;code&gt;proxy_pass&lt;/code&gt; does not verify the certificate of the endpoint if it is https (how can this be the default behavior, really?!). This can be useful internally, but usually you want to do this very explicitly. And in case that you use publicly routed endpoints, which I have done in the past, make sure to set &lt;code&gt;proxy_ssl_verify&lt;/code&gt; to &lt;code&gt;on&lt;/code&gt;. You can also authenticate against the upstream server that you &lt;code&gt;proxy_pass&lt;/code&gt; to using client certificates and more, make sure to have a look at the available options at &lt;a href="https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/"&gt;https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;A simple example&lt;/h1&gt;

&lt;p&gt;A &lt;code&gt;proxy_pass&lt;/code&gt; is usually used when there is an nginx instance that handles many things, and delegates some of those requests to other servers. Some examples are ingress in a Kubernetes cluster that spreads requests among the different microservices that are responsible for the specific locations. Or you can use nginx to directly deliver static files for a frontend, while some server-side rendered content or API is delivered by a WebApp such as ASP.NET Core or flask.&lt;/p&gt;

&lt;p&gt;Let's imagine we have a WebApp running on &lt;a href="http://localhost:5000"&gt;http://localhost:5000&lt;/a&gt; and want it to be available on &lt;a href="http://localhost:8080/webapp/"&gt;http://localhost:8080/webapp/&lt;/a&gt;, here's how we would do it in a minimal nginx.conf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;daemon&lt;/span&gt; &lt;span class="no"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;events&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/webapp/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://127.0.0.1:5000/api/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can save this to a file, e.g. nginx.conf, and run it with&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nginx -c $(pwd)/nginx.conf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now, you can access &lt;a href="http://localhost:8080/webapp/"&gt;http://localhost:8080/webapp/&lt;/a&gt; and all requests will be forwarded to &lt;a href="http://localhost:5000/api/"&gt;http://localhost:5000/api/&lt;/a&gt;.&lt;br&gt;
Note how the /webapp/ prefix is "cut away" by nginx. That's how locations work: they cut off the part specified in the &lt;code&gt;location&lt;/code&gt; specification and pass the rest on to the "upstream". "Upstream" is the name for whatever sits behind nginx.&lt;/p&gt;
&lt;h1&gt;To slash or not to slash&lt;/h1&gt;

&lt;p&gt;Except when you use variables in the &lt;code&gt;proxy_pass&lt;/code&gt; upstream definition, as we will learn below, the location and the upstream definition are tied together very simply. That's why you need to pay attention to the slashes: strange things can happen when you don't get them right.&lt;/p&gt;

&lt;p&gt;Here is a handy table that shows you how the request will be received by your WebApp, depending on how you write the &lt;code&gt;location&lt;/code&gt; and &lt;code&gt;proxy_pass&lt;/code&gt; declarations. Assume all requests go to &lt;a href="http://localhost:8080:"&gt;http://localhost:8080:&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;location&lt;/th&gt;
&lt;th&gt;proxy_pass&lt;/th&gt;
&lt;th&gt;Request&lt;/th&gt;
&lt;th&gt;Received by upstream&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;/webapp/&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:5000/api/"&gt;http://localhost:5000/api/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;/webapp/foo?bar=baz&lt;/td&gt;
&lt;td&gt;/api/foo?bar=baz&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/webapp/&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:5000/api"&gt;http://localhost:5000/api&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;/webapp/foo?bar=baz&lt;/td&gt;
&lt;td&gt;/apifoo?bar=baz&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/webapp&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:5000/api/"&gt;http://localhost:5000/api/&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;/webapp/foo?bar=baz&lt;/td&gt;
&lt;td&gt;/api//foo?bar=baz&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/webapp&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:5000/api"&gt;http://localhost:5000/api&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;/webapp/foo?bar=baz&lt;/td&gt;
&lt;td&gt;/api/foo?bar=baz&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/webapp&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:5000/api"&gt;http://localhost:5000/api&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;/webappfoo?bar=baz&lt;/td&gt;
&lt;td&gt;/apifoo?bar=baz&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In other words: you almost always want a trailing slash, you never want to mix with and without trailing slash, and you only want to omit the trailing slash when you intend to concatenate a path component directly (which I guess is quite rarely the case). Note how query parameters are preserved!&lt;/p&gt;
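As a mental model (not nginx's actual implementation), the rule behind the table above boils down to replacing the matched location prefix with the proxy_pass path, character for character:

```python
def upstream_request(location, proxy_path, request):
    # nginx cuts off the matched location prefix and glues the
    # remainder onto the proxy_pass path; the query string rides along.
    path, _, query = request.partition("?")
    if not path.startswith(location):
        return None  # this location would not match the request
    joined = proxy_path + path[len(location):]
    return joined + ("?" + query if query else "")

# The rows of the table above:
assert upstream_request("/webapp/", "/api/", "/webapp/foo?bar=baz") == "/api/foo?bar=baz"
assert upstream_request("/webapp/", "/api",  "/webapp/foo?bar=baz") == "/apifoo?bar=baz"
assert upstream_request("/webapp",  "/api/", "/webapp/foo?bar=baz") == "/api//foo?bar=baz"
assert upstream_request("/webapp",  "/api",  "/webapp/foo?bar=baz") == "/api/foo?bar=baz"
assert upstream_request("/webapp",  "/api",  "/webappfoo?bar=baz") == "/apifoo?bar=baz"
```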
&lt;h1&gt;$uri and $request_uri&lt;/h1&gt;

&lt;p&gt;You have two ways to circumvent the cutting-off of the &lt;code&gt;location&lt;/code&gt;: first, you can simply repeat the location in the &lt;code&gt;proxy_pass&lt;/code&gt; definition, which is quite easy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/webapp/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://127.0.0.1:5000/api/webapp/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That way, your upstream WebApp will receive /api/webapp/foo?bar=baz in the above examples.&lt;/p&gt;

&lt;p&gt;Another way to repeat the location is to use $uri or $request_uri. The difference is that $request_uri preserves the query parameters, while $uri discards them:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;location&lt;/th&gt;
&lt;th&gt;proxy_pass&lt;/th&gt;
&lt;th&gt;request&lt;/th&gt;
&lt;th&gt;received by upstream&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;/webapp/&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:5000/api%24request_uri"&gt;http://localhost:5000/api$request_uri&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;/webapp/foo?bar=baz&lt;/td&gt;
&lt;td&gt;/api/webapp/foo?bar=baz&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;/webapp/&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:5000/api%24uri"&gt;http://localhost:5000/api$uri&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;/webapp/foo?bar=baz&lt;/td&gt;
&lt;td&gt;/api/webapp/foo&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Note how in the &lt;code&gt;proxy_pass&lt;/code&gt; definition, there is no slash between "api" and $request_uri or $uri. This is because a full URI will always include a leading slash, which would lead to a double-slash if you wrote "api/$uri".&lt;/p&gt;

&lt;h1&gt;Capture regexes&lt;/h1&gt;

&lt;p&gt;While this is not exclusive to &lt;code&gt;proxy_pass&lt;/code&gt;, I find it generally handy to be able to use regexes to forward parts of a request to an upstream WebApp, or to reformat it. Example: your public URI should be &lt;a href="http://localhost:8080/api/cart/items/123"&gt;http://localhost:8080/api/cart/items/123&lt;/a&gt;, and your upstream API handles it in the form of &lt;a href="http://localhost:5000/cart_api?items=123"&gt;http://localhost:5000/cart_api?items=123&lt;/a&gt;. In this case, or in more complicated ones, you can use a regex to capture parts of the request URI and transform them into the desired format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;location ~ ^/api/cart/&lt;/span&gt;&lt;span class="s"&gt;([a-z]*)/(.*)&lt;/span&gt;$&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="kn"&gt;   proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://127.0.0.1:5000/cart_api?&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;Use try_files with a WebApp as fallback&lt;/h1&gt;

&lt;p&gt;A use-case I came across was that I wanted nginx to serve all static files in a folder, and to forward the request to a backend if the file is not available. For example, this was the case for a Vue single-page application (SPA) that is delivered through flask - because the master HTML needs some server-side tuning - and I wanted nginx to handle the static files instead of flask. (This is recommended by the official &lt;a href="http://docs.gunicorn.org/en/stable/deploy.html"&gt;gunicorn docs&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;You might have everything for your SPA except for your index.html available at /app/wwwroot/, and &lt;a href="http://localhost:5000/"&gt;http://localhost:5000/&lt;/a&gt; will deliver your server-tuned index.html.&lt;/p&gt;

&lt;p&gt;Here's how you can do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;location /spa/ &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="kn"&gt;   root /app/wwwroot/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;   try_files &lt;/span&gt;&lt;span class="nv"&gt;$uri&lt;/span&gt;&lt;span class="s"&gt; @backend&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;location @backend &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="kn"&gt;   proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://127.0.0.1:5000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that you cannot specify any paths in the &lt;code&gt;proxy_pass&lt;/code&gt; directive inside the @backend location. Nginx will tell you:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /home/daniel/projects/nginx_blog/nginx.conf:28&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That's why your backend should receive any request and return the index.html for it, or at least for the routes that are handled by the frontend's router.&lt;/p&gt;

&lt;h1&gt;Let nginx start even when not all upstream hosts are available&lt;/h1&gt;

&lt;p&gt;One reason that I have used 127.0.0.1 instead of localhost so far is that nginx is very picky about hostname resolution. For some inexplicable reason, nginx will try to resolve all hosts defined in &lt;code&gt;proxy_pass&lt;/code&gt; directives at startup, and will fail to start when they cannot be resolved. However, especially in microservice environments, it is very fragile to require all upstream services to be available at the time the ingress, load balancer or some intermediate router starts.&lt;/p&gt;

&lt;p&gt;You can circumvent nginx's requirement for all hosts to be available at startup by using variables inside the &lt;code&gt;proxy_pass&lt;/code&gt; directives. HOWEVER, for some unfathomable reason, if you do so, you need a dedicated &lt;code&gt;resolver&lt;/code&gt; directive to resolve these hostnames. For Kubernetes, you can use kube-dns.kube-system here. For other environments, you can use your internal DNS, or for publicly routed upstream services you can even use a public DNS such as 1.1.1.1 or 8.8.8.8.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Additionally&lt;/em&gt;, using variables in &lt;code&gt;proxy_pass&lt;/code&gt; changes completely how URIs are passed on to the upstream. When just changing&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;https://localhost:5000/api/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;set&lt;/span&gt; &lt;span class="nv"&gt;$upstream&lt;/span&gt; &lt;span class="s"&gt;https://localhost:5000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;proxy_pass&lt;/span&gt; &lt;span class="nv"&gt;$upstream&lt;/span&gt;&lt;span class="n"&gt;/api/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;... which you might think should result in exactly the same, you might be surprised. The former will hit your upstream server with &lt;code&gt;/api/foo?bar=baz&lt;/code&gt; with our example request to &lt;code&gt;/webapp/foo?bar=baz&lt;/code&gt;. The latter, however, will hit your upstream server with &lt;code&gt;/api/&lt;/code&gt;. No foo. No bar. And no baz. :-(&lt;/p&gt;

&lt;p&gt;We need to fix this by putting the request together from two parts: First, the path after the location prefix, and second the query parameters. The first part can be captured using the regex we learned above, and the second (query parameters) can be forwarded using the built-in variables &lt;code&gt;$is_args&lt;/code&gt; and &lt;code&gt;$args&lt;/code&gt;. If we put it all together, we will end up with a config like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;daemon&lt;/span&gt; &lt;span class="no"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;events&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;access_log&lt;/span&gt; &lt;span class="n"&gt;/dev/stdout&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;error_log&lt;/span&gt; &lt;span class="n"&gt;/dev/stdout&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="c1"&gt;# My home router in this case:&lt;/span&gt;
        &lt;span class="kn"&gt;resolver&lt;/span&gt; &lt;span class="mi"&gt;192&lt;/span&gt;&lt;span class="s"&gt;.168.178.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="p"&gt;~&lt;/span&gt; &lt;span class="sr"&gt;^/webapp/(.*)$&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;# Use a variable so that localhost:5000 might be down while nginx starts:&lt;/span&gt;
            &lt;span class="kn"&gt;set&lt;/span&gt; &lt;span class="nv"&gt;$upstream&lt;/span&gt; &lt;span class="s"&gt;http://localhost:5000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="c1"&gt;# Put together the upstream request path using the captured component after the location path, and the query parameters:&lt;/span&gt;
            &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="nv"&gt;$upstream&lt;/span&gt;&lt;span class="n"&gt;/api/&lt;/span&gt;&lt;span class="nv"&gt;$1$is_args$args&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;While localhost is not a great example here, this works with your services' arbitrary DNS names, too. I find this &lt;em&gt;very&lt;/em&gt; valuable in production, because nginx refusing to start over a probably quite unimportant service can be a real hassle while wrangling a production issue. However, it makes the location directive much more complex. From a simple &lt;code&gt;location /webapp/&lt;/code&gt; with a &lt;code&gt;proxy_pass http://localhost:5000/api/&lt;/code&gt; it has become this behemoth. I think it's worth it, though.&lt;/p&gt;

&lt;h1&gt;
  
  
  Better logging format for proxy_pass
&lt;/h1&gt;

&lt;p&gt;To debug issues now, or simply to have enough information at hand when investigating issues in the future, you can maximize the information logged about what is going on in a &lt;code&gt;location&lt;/code&gt; that uses &lt;code&gt;proxy_pass&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I found this handy &lt;code&gt;log_format&lt;/code&gt;, which I enhanced with the custom variable &lt;code&gt;$upstream&lt;/code&gt; that we defined above. If you consistently name the variable &lt;code&gt;$upstream&lt;/code&gt; in all your locations that use &lt;code&gt;proxy_pass&lt;/code&gt;, you can use this &lt;code&gt;log_format&lt;/code&gt; and have the often much-needed information in your log:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;log_format upstream_logging '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream: $request     upstream_response_time $upstream_response_time msec $msec request_time $request_time';&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Here is a full example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;daemon&lt;/span&gt; &lt;span class="no"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;events&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;log_format&lt;/span&gt; &lt;span class="s"&gt;upstream_logging&lt;/span&gt; &lt;span class="s"&gt;'[&lt;/span&gt;&lt;span class="nv"&gt;$time_local&lt;/span&gt;&lt;span class="s"&gt;]&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt; &lt;span class="s"&gt;-&lt;/span&gt; &lt;span class="nv"&gt;$remote_user&lt;/span&gt; &lt;span class="s"&gt;-&lt;/span&gt; &lt;span class="nv"&gt;$server_name&lt;/span&gt; &lt;span class="s"&gt;to:&lt;/span&gt; &lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$upstream&lt;/span&gt;&lt;span class="s"&gt;":&lt;/span&gt; &lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$request&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt; &lt;span class="s"&gt;upstream_response_time&lt;/span&gt; &lt;span class="nv"&gt;$upstream_response_time&lt;/span&gt; &lt;span class="s"&gt;msec&lt;/span&gt; &lt;span class="nv"&gt;$msec&lt;/span&gt; &lt;span class="s"&gt;request_time&lt;/span&gt; &lt;span class="nv"&gt;$request_time&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/webapp/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;access_log&lt;/span&gt; &lt;span class="n"&gt;/dev/stdout&lt;/span&gt; &lt;span class="s"&gt;upstream_logging&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="kn"&gt;set&lt;/span&gt; &lt;span class="nv"&gt;$upstream&lt;/span&gt; &lt;span class="s"&gt;http://127.0.0.1:5000/api/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="nv"&gt;$upstream&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;However, I have not found a way to log the actual URI that is forwarded to $upstream, which would be one of the most important things to know when debugging &lt;code&gt;proxy_pass&lt;/code&gt; issues.&lt;/p&gt;
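One possible workaround is a sketch I have not battle-tested, and the variable name $upstream_uri is my own invention: since we assemble the upstream URI ourselves anyway, we can store it in a variable of its own, so that proxy_pass and log_format are guaranteed to reference the exact same value:

```nginx
log_format upstream_logging '[$time_local] $remote_addr to: $upstream$upstream_uri "$request"';
server {
    listen 8080;
    location ~ ^/webapp/(.*)$ {
        access_log /dev/stdout upstream_logging;
        set $upstream http://localhost:5000;
        # Build the upstream URI once; proxy_pass and the log line now always agree:
        set $upstream_uri /api/$1$is_args$args;
        proxy_pass $upstream$upstream_uri;
    }
}
```

This only works for locations where you construct the upstream URI explicitly, but those are exactly the locations that are hard to debug.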

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;I hope that you have found helpful information in this article that you can put to good use in your development and production nginx configurations.&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>microservices</category>
      <category>http</category>
    </item>
    <item>
      <title>Kubernetes: Certificates, Tokens, Authentication and Service Accounts</title>
      <dc:creator>Daniel Albuschat</dc:creator>
      <pubDate>Sun, 19 May 2019 19:22:53 +0000</pubDate>
      <link>https://forem.com/danielkun/kubernetes-certificates-tokens-authentication-and-service-accounts-4fj7</link>
      <guid>https://forem.com/danielkun/kubernetes-certificates-tokens-authentication-and-service-accounts-4fj7</guid>
      <description>&lt;p&gt;Mostly for personal/learning experiences, I have created quite a few Kubernetes clusters, such as the one on my &lt;a href="https://dev.to/danielkun/kubernetes-its-alive-2ndc"&gt;Raspberry Pi rack&lt;/a&gt;. I also created two clusters for a production and a staging environment on ultra-cheap cloud servers from &lt;a href="https://hetzner.cloud"&gt;Hetzner Cloud&lt;/a&gt;. Luckily, none of those environments where serious business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; I'm not a Kubernetes expert, nor am I a security expert, so make sure that you second-source the information you find in this post before you rely on it. I just wanted to share the experience and insights I gained during this trip - thanks!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why was that lucky?
&lt;/h2&gt;

&lt;p&gt;Because &lt;strong&gt;I accidentally leaked the certificates&lt;/strong&gt; for my admin access to the staging cluster. I was trying to set up a CI/CD pipeline for an open source project using &lt;a href="https://circleci.com/"&gt;CircleCI&lt;/a&gt;. While I was testing out the steps one by one, I dumped the content of &lt;code&gt;${HOME}/.kube/config&lt;/code&gt;, which had been created from a BASE64-encoded environment variable, as described in &lt;a href="https://blog.lwolf.org/post/how-to-create-ci-cd-pipeline-with-autodeploy-k8s-gitlab-helm/"&gt;this blog post&lt;/a&gt;. That was fatal, because a) the job logs of open source projects are &lt;strong&gt;publicly visible&lt;/strong&gt; and b) jobs and their logs &lt;strong&gt;can not be deleted&lt;/strong&gt; manually - I had to reach out to support for that. &lt;em&gt;Ouch!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So let's dig into what happened here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a cluster
&lt;/h2&gt;

&lt;p&gt;First of all, I created the cluster manually using &lt;code&gt;kubeadm&lt;/code&gt;, following &lt;a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/"&gt;the official docs&lt;/a&gt;. Doing so, I got a cluster with RBAC enabled, and a &lt;code&gt;kube-config&lt;/code&gt; was created for me that includes a user identified by a certificate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accessing the cluster
&lt;/h2&gt;

&lt;p&gt;After &lt;code&gt;kubeadm&lt;/code&gt; created the cluster successfully, it instructs you what to do to access your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="s2"&gt;"Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:"&lt;/span&gt;

  &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
  &lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
  &lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;admin.conf&lt;/code&gt; that kubeadm creates includes a user identified by a certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes-admin&lt;/span&gt;
  &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;client-certificate-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;BASE64 ENCODED X509 CERTIFICATE&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;client-key-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;&amp;lt;BASE64 ENCODED PRIVATE KEY FOR THE CERTIFICATE&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Following these instructions is, as of now, your only way to access this cluster using &lt;code&gt;kubectl&lt;/code&gt;, so you should go ahead and do it. After you have copied the &lt;code&gt;admin.conf&lt;/code&gt;, you have &lt;code&gt;cluster-admin&lt;/code&gt; access. You are &lt;code&gt;root&lt;/code&gt;, so to say.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is that?
&lt;/h2&gt;

&lt;p&gt;What &lt;code&gt;kubeadm&lt;/code&gt; did was create a new CA (Certificate Authority) root certificate, which is the master certificate for your cluster. It looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: May 19 11:11:04 2019 GMT
            Not After : May 16 11:11:04 2029 GMT
        Subject: CN = kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    &amp;lt;REDACTED&amp;gt;
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
         &amp;lt;REDACTED&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So... it doesn't really contain much besides the info that it is a CA and that its CN (= Common Name) is kubernetes. That's because this cert only acts as a root for other certs that are used for different purposes on the cluster. You can have a look at &lt;code&gt;/etc/kubernetes/pki&lt;/code&gt; to take a peek at some of the certs that are used in your cluster and have been signed by the CA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;daniel@kube-box:~# &lt;span class="nb"&gt;ls&lt;/span&gt; /etc/kubernetes/pki/ &lt;span class="nt"&gt;-1&lt;/span&gt;
apiserver.crt
apiserver-etcd-client.crt
apiserver-etcd-client.key
apiserver.key
apiserver-kubelet-client.crt
apiserver-kubelet-client.key
ca.crt
ca.key
etcd
front-proxy-ca.crt
front-proxy-ca.key
front-proxy-client.crt
front-proxy-client.key
sa.key
sa.pub
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It is possible to allow access to clients that authenticate themselves using certificates that are trusted by the CA. This is enabled by passing this &lt;code&gt;ca.crt&lt;/code&gt; to &lt;code&gt;kube-apiserver&lt;/code&gt; in the &lt;code&gt;--client-ca-file&lt;/code&gt; parameter. This is what &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/"&gt;the docs&lt;/a&gt; have to say about it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--client-ca-file string
If set, any request presenting a client certificate signed by one of the
authorities in the client-ca-file is authenticated with an identity 
corresponding to the CommonName of the client certificate.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
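This trust check can be simulated offline with plain openssl, no cluster required. The following sketch (file names are made up) creates a stand-in CA like the one kubeadm generates, plus a client certificate signed by it; verification succeeds precisely because the client certificate chains to the CA:

```shell
# A throwaway stand-in for the cluster CA (CN=kubernetes, like kubeadm's):
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
    -keyout ca.key -out ca.crt -days 365 2>/dev/null
# A certificate request for the admin identity seen in admin.conf:
openssl req -newkey rsa:2048 -nodes -subj "/O=system:masters/CN=kubernetes-admin" \
    -keyout client.key -out client.csr 2>/dev/null
# Sign it with the CA, as kubeadm does for admin.conf's credentials:
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out client.crt -days 365 2>/dev/null
# This chain check is essentially what --client-ca-file lets the API server do:
openssl verify -CAfile ca.crt client.crt
```

Any certificate that does not chain to `ca.crt` fails this check, which is why only certs signed by the cluster CA are accepted.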



&lt;p&gt;Back to your &lt;code&gt;kube-config&lt;/code&gt;: The certificate that is included in BASE64 in your &lt;code&gt;admin.conf&lt;/code&gt; is signed by that exact CA. This is why it is trusted by the cluster. Let's have a look at the certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s1"&gt;'client-certificate-data: '&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HOME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/.kube/config | &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/.*client-certificate-data: //'&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
   openssl x509 &lt;span class="nt"&gt;--in&lt;/span&gt; - &lt;span class="nt"&gt;--text&lt;/span&gt;

Certificate:
    Data:
        Version: 3 &lt;span class="o"&gt;(&lt;/span&gt;0x2&lt;span class="o"&gt;)&lt;/span&gt;
        Serial Number: 3459994011761527671 &lt;span class="o"&gt;(&lt;/span&gt;0x30045e38cc064b77&lt;span class="o"&gt;)&lt;/span&gt;
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN &lt;span class="o"&gt;=&lt;/span&gt; kubernetes
        Validity
            Not Before: May 12 10:54:39 2019 GMT
            Not After : May 11 10:54:42 2020 GMT
        Subject: O &lt;span class="o"&gt;=&lt;/span&gt; system:masters, CN &lt;span class="o"&gt;=&lt;/span&gt; kubernetes-admin
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: &lt;span class="o"&gt;(&lt;/span&gt;2048 bit&lt;span class="o"&gt;)&lt;/span&gt;
                Modulus:
                    &amp;lt;REDACTED&amp;gt;
                Exponent: 65537 &lt;span class="o"&gt;(&lt;/span&gt;0x10001&lt;span class="o"&gt;)&lt;/span&gt;
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Client Authentication
    Signature Algorithm: sha256WithRSAEncryption
         &amp;lt;REDACTED&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;What does this cert tell us (and the cluster)?&lt;/p&gt;

&lt;p&gt;a) It is issued and trusted by our &lt;code&gt;kubernetes&lt;/code&gt; cluster&lt;br&gt;
b) It identifies the Organisation (&lt;code&gt;O&lt;/code&gt;) &lt;code&gt;system:masters&lt;/code&gt;, which is interpreted as a group by kubernetes&lt;br&gt;
c) It identifies the Common Name (&lt;code&gt;CN&lt;/code&gt;) &lt;code&gt;kubernetes-admin&lt;/code&gt;, which is interpreted as a user by kubernetes&lt;/p&gt;

&lt;p&gt;In other words: this certificate logs you in as the user &lt;code&gt;kubernetes-admin&lt;/code&gt; in the group &lt;code&gt;system:masters&lt;/code&gt;. This is why you don't need to provide a group name in the &lt;code&gt;kube-config&lt;/code&gt;, and why you can rename the user in the &lt;code&gt;kube-config&lt;/code&gt; at will without changing the actual identity that is being logged in.&lt;/p&gt;
&lt;h2&gt;
  
  
  Where are the permissions defined?
&lt;/h2&gt;

&lt;p&gt;In RBAC-enabled clusters, permissions are defined in &lt;code&gt;Roles&lt;/code&gt; (per namespace) or &lt;code&gt;ClusterRoles&lt;/code&gt; (for all namespaces). These permissions are then granted to subjects (users, groups and service-accounts) using &lt;code&gt;RoleBindings&lt;/code&gt; and &lt;code&gt;ClusterRoleBindings&lt;/code&gt;. So what you have to look for are &lt;code&gt;RoleBindings&lt;/code&gt; and &lt;code&gt;ClusterRoleBindings&lt;/code&gt; that grant permissions to the group &lt;code&gt;system:masters&lt;/code&gt; or the user &lt;code&gt;kubernetes-admin&lt;/code&gt;. You can do this by having a look at the output of&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get rolebindings --all-namespaces &amp;amp;&amp;amp; kubectl get clusterrolebindings&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The default setup that &lt;code&gt;kubeadm&lt;/code&gt; created for me yielded one hit for that search, the &lt;code&gt;ClusterRoleBinding&lt;/code&gt; named &lt;code&gt;cluster-admin&lt;/code&gt;, which grants permissions to a &lt;code&gt;ClusterRole&lt;/code&gt; with the same name. Here's the definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-admin&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;nonResourceURLs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-admin&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-admin&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Group&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;system:masters&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So there we have it! The group &lt;code&gt;system:masters&lt;/code&gt;, which the certificate authenticates as, is granted '*' permissions on all resources with all verbs - in other words, full access.&lt;/p&gt;

&lt;p&gt;Unvalidated assumption: I think that users and groups are in this case only defined in the certificates, and new users can be "created" by issuing a new certificate with the CommonName set to the desired username and the Organisation set to the desired group. This username and group can then, without further ado, be used in &lt;code&gt;ClusterRoleBindings&lt;/code&gt; and &lt;code&gt;RoleBindings&lt;/code&gt;. I did not take the time to validate this, though - it should be possible by issuing a new certificate with &lt;code&gt;openssl&lt;/code&gt;, signed with the cluster's CA. It'd be great if someone could confirm or debunk this assumption in a comment!&lt;/p&gt;
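For reference, the openssl side of that experiment would look roughly like this. It is only a sketch with made-up names (user "jane", group "developers") and it only shows how such a certificate is minted, not whether a live cluster accepts it:

```shell
# A throwaway stand-in for the cluster CA (/etc/kubernetes/pki/ca.crt and ca.key):
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
    -keyout ca.key -out ca.crt -days 365 2>/dev/null
# A key and CSR for a hypothetical user "jane" in the hypothetical group "developers":
openssl req -newkey rsa:2048 -nodes -subj "/O=developers/CN=jane" \
    -keyout jane.key -out jane.csr 2>/dev/null
# Sign the CSR with the "cluster CA", mirroring what kubeadm did for kubernetes-admin:
openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out jane.crt -days 365 2>/dev/null
# O and CN in the subject are what the API server would read as group and user:
openssl x509 -in jane.crt -noout -subject
```

If the assumption holds, referencing the user `jane` or the group `developers` in a `RoleBinding` would then grant this certificate the bound permissions.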

&lt;h2&gt;
  
  
  The mystery
&lt;/h2&gt;

&lt;p&gt;So I got this far and found out how the user and group are identified and how permissions are granted to them. My guess was that if I deleted the &lt;code&gt;ClusterRoleBinding&lt;/code&gt;, or rather removed the group &lt;code&gt;system:masters&lt;/code&gt; from it, the certificate should no longer have access to the cluster. If that had the expected result, I would lose all access to the cluster and would have successfully locked myself out for good. So I first added a &lt;code&gt;serviceaccount&lt;/code&gt;, created a &lt;code&gt;kube-config&lt;/code&gt; that logs in using a token for that &lt;code&gt;serviceaccount&lt;/code&gt;, and verified that this access worked. We will see later how to do this. Then, with the safety net in place, I removed the &lt;code&gt;system:masters&lt;/code&gt; subject from the &lt;code&gt;ClusterRoleBinding&lt;/code&gt;. To my surprise, this did not lock the user out. I could still fully access the cluster using the old &lt;code&gt;kube-config&lt;/code&gt;… maybe someone can explain this behaviour in a comment?&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternative 1: Replace the CA
&lt;/h2&gt;

&lt;p&gt;One sure-as-hell way to make the leaked certificate useless is to replace the CA of the cluster. This would require a restart of the cluster, though. And it would require re-issuing all the certificates that we have seen above, and maybe some more. I rated the chance of totally fucking everything up and wasting multiple hours on the trip at about 99%, so I abandoned the plan. :-)&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternative 2: Rebuild the whole cluster
&lt;/h2&gt;

&lt;p&gt;Luckily, it was a staging cluster, so I had plenty of freedom. Before starting my investigations, I powered off all nodes. Then, after not finding a proper solution to only make the leaked certificate useless, I killed the whole cluster using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm reset
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /etc/kubernetes
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/kubelet
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And recreated from scratch with kubeadm. (Which is so great, by the way!!)&lt;/p&gt;

&lt;p&gt;Then I went ahead and made a few dozen more commits reading 'NOT printing the content of the kube-config anymore', 'Getting CI/CD to work', 'Maybe now it works', 'Uhm what?', 'That gotta work', 'fuck CI/CD', … :-)&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons learned: Use service-accounts with tokens
&lt;/h2&gt;

&lt;p&gt;(Or other authentication methods like OpenID, as recommended in &lt;a href="//dev.to/petermbenjamin/kubernetes-security-best-practices-hlk"&gt;this awesome post&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;So my lesson learned is to do what I've seen at the big managed Kubernetes providers: use a service-account and its access token for authentication. Here I'll show how to set up a super-user that uses a token instead of a cert:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system create serviceaccount admin
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To grant super-user permissions, the easiest way is to create a new &lt;code&gt;ClusterRoleBinding&lt;/code&gt; to bind this service-account to the &lt;code&gt;cluster-admin&lt;/code&gt; &lt;code&gt;ClusterRole&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create clusterrolebinding add-on-cluster-admin &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--clusterrole&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster-admin &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--serviceaccount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kube-system:admin
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Use your new service-account
&lt;/h3&gt;

&lt;p&gt;Your admin user is now ready and armed. Now we need to log in with this user. I assume that you have the &lt;code&gt;admin.conf&lt;/code&gt; in &lt;code&gt;${HOME}/.kube/config&lt;/code&gt;. We now want to add the new user, identified by its token, and add a new context that uses this user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;TOKENNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system get serviceaccount/admin &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.secrets[0].name}'&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="nv"&gt;TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system get secret &lt;span class="nv"&gt;$TOKENNAME&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.token}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;
kubectl config set-credentials admin &lt;span class="nt"&gt;--token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$TOKEN&lt;/span&gt;
kubectl config set-context admin@kubernetes &lt;span class="nt"&gt;--cluster&lt;/span&gt; kubernetes &lt;span class="nt"&gt;--user&lt;/span&gt; admin
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now go ahead and try your new, shiny service-account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config use-context admin@kubernetes
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system get all
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If this went well, you should go ahead and delete the certificate-based user and the corresponding context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config &lt;span class="nb"&gt;unset &lt;/span&gt;users.kubernetes-admin
kubectl config delete-context kubernetes-admin@kubernetes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Yay! Now we have a &lt;code&gt;kube-config&lt;/code&gt; that only includes token-based access. This is great, because it is very easy to revoke that token should this config ever be leaked or published.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to invalidate a leaked token
&lt;/h3&gt;

&lt;p&gt;This is easy! Just delete the secret that corresponds to the user's token. We already saw how to find the correct secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system get serviceaccount/admin &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You will see a field "name" in the "secrets" array. This is the name of the secret that holds this service-account's token. Now go ahead and simply delete it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system delete secrets/token-admin-xyz123
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then wait a few seconds, and try to access your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;daniel@kube-box:~# kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system get all
error: You must be logged &lt;span class="k"&gt;in &lt;/span&gt;to the server &lt;span class="o"&gt;(&lt;/span&gt;Unauthorized&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Woohoo!&lt;/p&gt;

&lt;p&gt;But how do you regain access? Well, if you're on your master node, simply copy the &lt;code&gt;admin.conf&lt;/code&gt; back to &lt;code&gt;${HOME}/.kube/config&lt;/code&gt; and repeat the steps from "Use your new service-account". Kubernetes will have created and assigned a new token by now.&lt;/p&gt;

&lt;p&gt;I hope that this helped, and I'd love to hear feedback, errata, etc. in the comments!&lt;/p&gt;

&lt;p&gt;Also make sure to read the VERY comprehensible and awesome post &lt;a href="//dev.to/petermbenjamin/kubernetes-security-best-practices-hlk"&gt;"Kubernetes Security Best-Practices"&lt;/a&gt; by Peter Benjamin.&lt;/p&gt;

&lt;p&gt;And a big thank you to Andreas Antonsson, vaizki and Alan J Castonguay, who have helped me on &lt;a href="https://kubernetes.slack.com/archives/C09NXKJKA/p1558179148148800"&gt;the official Kubernetes Slack channel&lt;/a&gt; to get a better understanding of what is going on.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Where is HTTPS for IoT? (Update)</title>
      <dc:creator>Daniel Albuschat</dc:creator>
      <pubDate>Mon, 10 Sep 2018 21:47:44 +0000</pubDate>
      <link>https://forem.com/danielkun/where-is-https-for-iot-49ao</link>
      <guid>https://forem.com/danielkun/where-is-https-for-iot-49ao</guid>
      <description>&lt;p&gt;&lt;em&gt;Updated, 10/22/2018:&lt;/em&gt; Added proposal 1.a, with added HTTP Public Key Pinning&lt;/p&gt;

&lt;p&gt;By now everyone should know that HTTPS is secure, and that HTTPS is important. Browsers even declare HTTP websites as "insecure" in this day and age. And yet, when you look at IoT devices, such as your Smart Home gadgets, they are commonly using HTTP. This means that they communicate without encryption.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; I'm not a security expert, so please bear with me if you find information that may appear to be not totally correct or incomplete. I'll be happy to receive your feedback in a comment and update the article!&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  The problem with IoT (transport-layer) security
&lt;/h1&gt;

&lt;p&gt;IoT devices are often cheap, and hence the engineering efforts need to be cheap and fast, too. Often there is just no budget or time to take security into consideration. So even absolute security basics such as authentication are often not in place. There are more than enough gadgets out there that you can take complete control of once you have access to someone else's LAN.&lt;/p&gt;

&lt;p&gt;This is not only a problem of no-name products from Asia, but even of devices from high-profile companies, such as the Google Home. There is &lt;a href="https://rithvikvibhu.github.io/GHLocalApi/" rel="noopener noreferrer"&gt;unofficial documentation&lt;/a&gt; of the local Google Home API that the Google Home App uses. According to this documentation - and I hope that by now the API has been revoked and redesigned - at least at some point in time the communication was unencrypted (HTTP) and unauthenticated. Wow.&lt;/p&gt;

&lt;p&gt;While not using authentication is something that you can - and must - blame any engineer for, you cannot blame anyone for not using HTTPS on their IoT device. Why? Because it is plain impossible*. At least when using HTTPS the way it is designed to work. We'll see why this is the case later, but the fact is that the mechanisms of secure communication via HTTPS are designed in a way that they cannot easily be used for local communication. Ouch.&lt;br&gt;
*) Yes I know, this is an exaggeration, but how else should I have gotten your attention? :-)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/mqudsi" rel="noopener noreferrer"&gt;Mahmoud Al-Qudsi&lt;/a&gt; has written &lt;a href="https://neosmart.net/blog/2017/lets-stop-punishing-iot-devices-that-embrace-https-shall-we/" rel="noopener noreferrer"&gt;a nice blog-post&lt;/a&gt; on this.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Here's a shocking inline picture that should increase your blood pressure instantly:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kd9exwqpcyh2qgl14gc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kd9exwqpcyh2qgl14gc.png" alt="Browser shows unsecure hint" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Immediate security measures for all IoT developers
&lt;/h1&gt;

&lt;p&gt;Before we deep-dive into HTTPS, there are a few very important security measures that every developer of IoT devices absolutely must follow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use authentication for all your API endpoints.&lt;/li&gt;
&lt;li&gt;Use unique, strong passwords for each device, generated during the manufacturing process.&lt;/li&gt;
&lt;li&gt;Close all ports that are not used by your public API, using a firewall (iptables/netfilter).&lt;/li&gt;
&lt;li&gt;Don't expose your services that are written in whatever language with whatever HTTP server implementation directly - instead use a well-maintained, well-tested HTTP server as your ingress, such as &lt;a href="https://www.nginx.com/" rel="noopener noreferrer"&gt;nginx&lt;/a&gt;, which is available for ARM, too.&lt;/li&gt;
&lt;li&gt;Allow firmware updates by your users, or even automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gcir81b7383mqalgdp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gcir81b7383mqalgdp3.png" alt="Scribbled login form" width="553" height="410"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Benefits of using HTTPS in IoT
&lt;/h1&gt;

&lt;p&gt;But why should you even consider using HTTPS? Well, if you assume that your home network is a sacred place that only you have access to, you may feel safe with HTTP. But once you accept that it will always be possible for someone to gain access to more or less all of your LAN, you will feel safer knowing that, even with such an intrusion, your IoT devices are still safe without the proper credentials or authorization.&lt;/p&gt;

&lt;p&gt;One way that someone can have access to your LAN is a very old, very easy attack called "&lt;a href="https://en.wikipedia.org/wiki/DNS_rebinding" rel="noopener noreferrer"&gt;DNS rebinding&lt;/a&gt;", which we will talk about later for another reason. DNS rebinding allows an attacker &lt;a href="https://medium.com/@brannondorsey/attacking-private-networks-from-the-internet-with-dns-rebinding-ea7098a2d325" rel="noopener noreferrer"&gt;to access vital components&lt;/a&gt; such as your router - and routers often have &lt;a href="https://threatpost.com/popular-d-link-router-riddled-with-vulnerabilities/127907/" rel="noopener noreferrer"&gt;a lot of security flaws&lt;/a&gt;. Are you convinced, yet? So how can HTTPS help?&lt;/p&gt;

&lt;p&gt;HTTPS prevents communication sniffing and data manipulation, and offers verification of your peer. This should make you feel safer, since even when your router is compromised, your bank account login is still safe. Also, HTTPS verifies the identity of the server, so that you can be sure who you are communicating with - and in case a crime happens, you know whom to sue. Yay!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpiwbu755be24rrqqw3cl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpiwbu755be24rrqqw3cl.png" alt="HTTPS browser badge" width="683" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's remember these three important promises that HTTPS guarantees, since we will come back to them later:&lt;/p&gt;

&lt;p&gt;1) Privacy: No communication sniffing&lt;br&gt;
2) Integrity: No data manipulation&lt;br&gt;
3) Identification: You know who you are talking to (or: whose software you are talking to)&lt;/p&gt;
&lt;h1&gt;
  
  
  The core problem
&lt;/h1&gt;

&lt;p&gt;The core problem with HTTPS is that a fundamental component of it is the host name. Browsers compare the host name that you entered into the address bar with the host name that the server's certificate has been issued to. If these don't match, the browser will not connect to the server - except when your users wade through barely visible links to finally add an exception for your device, while the browser tells them along the way that what they are doing is highly insecure. Frightening!&lt;/p&gt;

&lt;p&gt;And that's not all: &lt;a href="https://community.digicert.com/en/blogs.entry.html/2013/12/19/important-changes-to-ssl-certificates-on-intranets-what-you-need-to-know.html" rel="noopener noreferrer"&gt;Since November 1st 2015&lt;/a&gt;, this domain must not be an IPv4 or IPv6 address and must be an FQDN with a public top-level domain. No certificates for 192.168.1.20 or raspberry.local!&lt;/p&gt;

&lt;p&gt;In other scenarios, where no such control is available, such as built-in clients in other IoT gadgets like the Homee that can trigger arbitrary https endpoints (often used for &lt;a href="https://ifttt.com/" rel="noopener noreferrer"&gt;IFTTT&lt;/a&gt;, but can point to your local device, too), communication will be plain impossible.&lt;/p&gt;

&lt;p&gt;So, this bites you, when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want to provide a secure web frontend hosted on your IoT device&lt;/li&gt;
&lt;li&gt;You want to provide a secure, local API on your IoT device&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  No solution in sight?
&lt;/h1&gt;

&lt;p&gt;So we seem to be shit out of luck. A versatile solution for Transport Layer Security for local HTTPS connections seems to be impossible as of now. HTTPS has been designed with the public internet in mind, and does not contain mechanisms that handle local usage properly. But what options do we have to still secure our IoT communication? Spoiler: None of them work out of the box in all environments.&lt;/p&gt;
&lt;h2&gt;
  
  
  Solution #1: Ignore invalid hosts in SSL certificates
&lt;/h2&gt;

&lt;p&gt;If you are lucky enough to build all clients yourself - i.e. when you build a mobile or desktop app to connect to your device, or your devices connect to each other - you can still establish an HTTPS connection. Nearly all HTTPS clients allow you to handle certificate errors in your code. &lt;a href="http://www.nakov.com/blog/2009/07/16/disable-certificate-validation-in-java-ssl-connections/" rel="noopener noreferrer"&gt;Here's an example for Java&lt;/a&gt;. Just please make sure that you only ignore errors regarding the host name, and still let the connection fail when the certificate has other errors, such as when it has expired.&lt;/p&gt;
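&lt;p&gt;&lt;em&gt;The same idea works in Go. A minimal sketch (the root CA bytes and the device address are your own inputs, not anything standardized): disable the built-in verification and redo everything except the host-name check yourself with &lt;code&gt;crypto/x509&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"net/http"
)

// newDeviceClient returns an http.Client that still verifies expiry and the
// chain of trust against the given root CA, but deliberately skips the
// host-name check - the device is only reachable via a private IP.
func newDeviceClient(rootCAPEM []byte) (*http.Client, error) {
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(rootCAPEM) {
		return nil, errors.New("could not parse root CA certificate")
	}
	cfg := &tls.Config{
		// Turn off the default verification (which includes the host name)...
		InsecureSkipVerify: true,
		// ...and redo everything except the host-name check ourselves.
		VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
			if len(rawCerts) == 0 {
				return errors.New("server presented no certificate")
			}
			leaf, err := x509.ParseCertificate(rawCerts[0])
			if err != nil {
				return err
			}
			opts := x509.VerifyOptions{
				Roots:         roots,
				Intermediates: x509.NewCertPool(),
				// DNSName is left empty on purpose: expiry and chain
				// of trust are checked, the host name is not.
			}
			for _, raw := range rawCerts[1:] {
				ic, err := x509.ParseCertificate(raw)
				if err != nil {
					return err
				}
				opts.Intermediates.AddCert(ic)
			}
			_, err = leaf.Verify(opts)
			return err
		},
	}
	return &http.Client{Transport: &http.Transport{TLSClientConfig: cfg}}, nil
}
```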

&lt;p&gt;Pro: Works in offline scenarios and not a single byte leaves your local network.&lt;br&gt;
Con: Only works when you control all clients. :-(&lt;/p&gt;
&lt;h2&gt;
  
  
  Solution #2: Use public domain names for private IPs
&lt;/h2&gt;

&lt;p&gt;This is how &lt;a href="https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/" rel="noopener noreferrer"&gt;Plex is doing it&lt;/a&gt;. You create dynamic domain names for all your devices, e.g. with a GUID in the subdomain or anything, and resolve them to your device's private IP. This requires some infrastructure to programmatically add DNS entries and create certificates for those. This is easily possible today with Cloud DNS providers from Google, Azure &amp;amp; Co. That's rather complex, but works perfectly - as long as your user's routers do not do DNS rebinding protection.&lt;/p&gt;

&lt;p&gt;If you happen to only have customers that use routers that don't do DNS rebinding protection, or you control the local network infrastructures and internet access to resolve the domain name, or run a local DNS server, you are cool. This is a very viable and maybe the best solution then. If you don't control the network infrastructure, especially the router, the risk is high that DNS rebinding protection becomes more popular and your solution begins to fail for more and more users.&lt;/p&gt;

&lt;p&gt;Pro: Except for the domain lookup, all traffic is local and marked safe - even in security-sensitive browsers - and it works in many scenarios without a hassle.&lt;br&gt;
Con: Very troublesome for the non-tech-savvy user who happens to use a router with DNS rebinding protection. :-(&lt;/p&gt;
&lt;h2&gt;
  
  
  Solution #3: Roll your own crypto! Well, not really
&lt;/h2&gt;

&lt;p&gt;I've actually seen this done in production. You can rely on well-tested and proven, high-quality JavaScript cryptography libraries that allow you to implement things like certificate validation (except the hostname, yo!) and encryption for the payload. There's even standardization for such a library &lt;a href="https://www.w3.org/TR/WebCryptoAPI/" rel="noopener noreferrer"&gt;under way in the W3C&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You should totally make sure that you have the proper knowledge to tackle this, or have your solution verified by someone who knows this kind of stuff. It's very easy to make mistakes.&lt;/p&gt;

&lt;p&gt;Pro: It's pretty much guaranteed to work and can be very safe when you do it right.&lt;br&gt;
Con: Your browser will still tell your users that communication is insecure, despite all the effort. And you cannot communicate with your device using standard clients, such as curl. :-(&lt;/p&gt;
&lt;h2&gt;
  
  
  Solution #4: Public reverse proxy
&lt;/h2&gt;

&lt;p&gt;Crazy: To gain security that is trusted by all browsers and clients, you have to open your local network to the public internet! To get HTTPS for local devices, you could set up a publicly accessible reverse proxy that forwards incoming traffic to your local device through a secure tunnel. A simple home-grown setup could look like this:&lt;/p&gt;

&lt;p&gt;1) Create a unique and not-easily-guessable subdomain for each of your IoT devices, such as 09460e83-1759-4fbd-afed-9ee4adf8b288.iot.example.com&lt;br&gt;
2) Generate a certificate via Let's Encrypt for that domain and deploy it to your IoT device.&lt;br&gt;
3) Set up an ssh connection from your IoT device that tunnels incoming traffic on the HTTPS port to the local web server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh user@09460e83-1759-4fbd-afed-9ee4adf8b288.iot.example.com &lt;span class="nt"&gt;-R&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;:443:localhost:443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And zap, there you go, you have HTTPS to your local device.&lt;/p&gt;

&lt;p&gt;There are services that make this even easier, such as &lt;a href="https://ngrok.com/" rel="noopener noreferrer"&gt;https://ngrok.com/&lt;/a&gt; and &lt;a href="https://webhookrelay.com/" rel="noopener noreferrer"&gt;https://webhookrelay.com/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Pro: Will be displayed as secure in all browsers.&lt;br&gt;
Con: It does not work locally without internet access and exposes your device to the public internet. :-( But at least the access point cannot be found by guessing or scraping your service. Combined with very good passwords, this is not &lt;em&gt;that&lt;/em&gt; bad, but it's still much more exposed than it ought to be.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution #5: Port forwarding and dynamic DNS
&lt;/h2&gt;

&lt;p&gt;This is basically the same as Solution #4, but even less secure. Your device can connect to a dynamic DNS service (either built by yourself or you use one of the popular ones) and keep up a domain that points to the router's public IP address. Then it gets a certificate from Let's Encrypt for that domain, and via UPnP it can programmatically set up a port forwarding to itself. But heck, your device is now publicly accessible from the internet, and public IP enumeration will make it detectable by hackers.&lt;/p&gt;

&lt;p&gt;Pro: Not that complex to set up.&lt;br&gt;
Con: Very insecure. First, you don't want to use port forwarding at all, because it drills a hole into your local network that is otherwise cut off from the public net. Second, this particular setup makes your device publicly discoverable by enumerating ISPs' IP ranges, and it is very likely to be targeted by script kiddies and crackers. Don't do this! :-(&lt;br&gt;
Additionally - and I am glad that this is the case by now - most routers don't allow UPnP by default, and it must be enabled manually.&lt;/p&gt;

&lt;h1&gt;
  
  
  What would a proper solution look like?
&lt;/h1&gt;

&lt;p&gt;Well, the problem that we have here is trust. Your browser does not trust your device, because the browser cannot verify that the device contains only software from someone trustworthy. On the public web, this is done through the hostname, because domains are registered to an entity that you can track down and stick your foot up their ass if they do something malicious. But how can you verify this with devices that can potentially be spoofed?&lt;/p&gt;

&lt;p&gt;It seems that Apple has a solution for this in its HomeKit ecosystem. The strongest variant seems to be possible with a special Authentication Coprocessor. But since sometime in 2017, a software authentication solution has been offered, too. This solution is by no means open or publicly documented (as is typical for Apple), so I don't have a clue how it works. I bet it is pretty secure, though, and would very much like to know how it works!&lt;/p&gt;

&lt;p&gt;Short of knowing Apple's secrets, I've come up with my own proposals for how a secure system could be designed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Proposal #1: Simply drop the host name check in HTTPS for private IPs
&lt;/h2&gt;

&lt;p&gt;This would be a no-brainer for browsers, and IMHO not that bad for security. The full certificate is checked - expiration date, chain of trust, and so on - but the contained domain is ignored. Instead, the user is shown the company that the certificate has been issued to, and asked whether she trusts this company - similar to how desktop applications ask you whether the setup may run as admin/root. The security would still be very high - at least much higher than the current state of the art.&lt;/p&gt;

&lt;p&gt;How well does this solution deliver on the three promises of HTTPS?&lt;/p&gt;

&lt;p&gt;1) Privacy: The communication is encrypted and cannot be sniffed, even when someone has taken complete control over your local network - except for the device you are connecting to.&lt;br&gt;
2) Integrity: This holds true in the same way.&lt;br&gt;
3) Identification: This is not as strong as it is with internet domains. By typing in the domain, or clicking on a link, you already tell the browser beforehand who you trust. When browsing to an IP, you don't make verifiable assumptions about the identity of your peer. That's why I'd add the one-time popup telling you the identity of whoever controls the device with the IP you entered - which is, IMHO, basically as strong as typing in a domain name.&lt;/p&gt;

&lt;h2&gt;
  
  
  Proposal #1.a: With added Public Key Pinning
&lt;/h2&gt;

&lt;p&gt;This proposal is my favorite. Browsers act like described in Proposal #1, but with added HTTP Public Key Pinning of the most immediate certificate - it must not be any arbitrary certificate from the certificate chain, but the immediate certificate that verifies the very host that you are just connecting to. The browser will "pin" this certificate to this host - which may be an IP address - and warn the user when it connects to the same host, but receives a different certificate this time. This is how ssh has been doing it for ages. This means that it's considered secure enough for a remote shell that gives you potential root access to a device, so I guess it's secure enough to connect to IoT devices that are only accessible from your local network, via a usually limited API, too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Proposal #2: Let the manufacturer validate the device's authenticity
&lt;/h2&gt;

&lt;p&gt;This proposal is more sophisticated than #1, but more complex, too. As always, for a bit more security, you'll need exponentially more effort.&lt;/p&gt;

&lt;p&gt;I think there is no general method to verify that a device has not been tinkered with. But, for sure, the manufacturer should be the instance that knows best how to check the device for authenticity. So why not let the manufacturer do this? Browsers could implement a mechanism following this scheme:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to an IoT device using HTTPS&lt;/li&gt;
&lt;li&gt;The browser reads the certificate and validates the usual stuff, like expiration date and chain of trust - but NOT the hostname&lt;/li&gt;
&lt;li&gt;The HTTPS response contains a randomly generated session id and a random token, and the certificate includes a URL to an online validation web service (used later)&lt;/li&gt;
&lt;li&gt;The device is expected to connect to that web service, and send it a random token for the given session id&lt;/li&gt;
&lt;li&gt;The manufacturer can choose to only accept this token after it did some kind of validation of the device's integrity&lt;/li&gt;
&lt;li&gt;Before any more communication is done, let the user verify that the manufacturer from the certificate is the institution that she expected and trusts. This is much like what desktop operating systems do when installing software. As in Proposal #1, this is the important part that establishes the trust.&lt;/li&gt;
&lt;li&gt;When the user confirms, the browser contacts the validation web service (it must be HTTPS!), sends it the session ID and reads back the token. This token is compared to the one contained in the HTTPS response from the device&lt;/li&gt;
&lt;li&gt;Only when the tokens match is the connection considered successful&lt;/li&gt;
&lt;li&gt;As a bonus: the device's certificate and the validation service's certificate must be issued to the same company. This would further enhance the trustworthiness.&lt;/li&gt;
&lt;/ul&gt;
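&lt;p&gt;&lt;em&gt;The final comparison step of the scheme above can be sketched like this in Go. The &lt;code&gt;fetchToken&lt;/code&gt; callback is hypothetical and stands in for the HTTPS call to the manufacturer's validation web service.&lt;/em&gt;&lt;/p&gt;

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// verifyDevice models the last step: the browser has read sessionID and
// deviceToken from the device's HTTPS response, fetches the token for the
// same session from the manufacturer's validation service, and accepts the
// device only when both tokens match.
func verifyDevice(sessionID, deviceToken string,
	fetchToken func(sessionID string) (string, error)) error {
	serviceToken, err := fetchToken(sessionID)
	if err != nil {
		return fmt.Errorf("validation service unreachable: %w", err)
	}
	// Constant-time comparison, as is customary for secret tokens.
	if subtle.ConstantTimeCompare([]byte(deviceToken), []byte(serviceToken)) != 1 {
		return fmt.Errorf("token mismatch: device not vouched for by manufacturer")
	}
	return nil
}
```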

&lt;p&gt;How well does this solution deliver on the three promises of HTTPS?&lt;/p&gt;

&lt;p&gt;1) and 2) are the same as in the first proposal, and as strong as for public domains&lt;br&gt;
3) is a bit stronger in this proposal, because you couple the trusted, well-tested public domain verification with the local device verification. Plus, the manufacturer can opt to implement additional steps to verify that the device has not been tampered with. This would be pretty strong - IMHO not a bit less secure than public domain verification - but of course it'd be quite a hassle to implement. Plus: it requires an active internet connection for both the device and the browser. (But again, the I in IoT stands for "Internet".)&lt;/p&gt;

&lt;p&gt;I think, however, that the first proposal is trustworthy enough. You just want to verify that the device you are connecting with is the one you expect to. If this is the case, because you recognize the manufacturer's name, this should be good enough™, at least for my taste. And with the added Public Key Pinning of proposal 1.a, you get quite a solid security setup.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;The fact is that local HTTPS connections will only be possible without additional tinkering after the standards have been changed. I think there is a real need for the IETF, browser developers and whoever else is required to have a look at this issue and provide a solution to IoT manufacturers. It has always been common to offer local web pages as front-ends for IoT devices, and it will become even more popular. Because of the problems discussed in this post, which have become even more problematic over the last months and years, currently nearly all manufacturers fall back to HTTP to circumvent all the trouble. This is a real pity and should be changed ASAP!&lt;/p&gt;

</description>
      <category>https</category>
      <category>iot</category>
    </item>
    <item>
      <title>Go: Asynchronous Real-Time Broadcasting using Channels and WebSockets</title>
      <dc:creator>Daniel Albuschat</dc:creator>
      <pubDate>Mon, 09 Apr 2018 21:00:14 +0000</pubDate>
      <link>https://forem.com/danielkun/go-asynchronous-and-safe-real-time-broadcasting-using-channels-and-websockets-4g5d</link>
      <guid>https://forem.com/danielkun/go-asynchronous-and-safe-real-time-broadcasting-using-channels-and-websockets-4g5d</guid>
      <description>&lt;p&gt;Recently, I had the need to execute a long-running command on a server and send the result to - potentially - multiple clients, in real-time. I naturally chose to use WebSockets as a transport layer. But when implementing the backend in Go, I was looking for a solution to distribute the results that &lt;code&gt;exec.Command&lt;/code&gt; spit out to multiple clients in a thread-safe manner. In Go, &lt;a href="https://gobyexample.com/channels" rel="noopener noreferrer"&gt;channels&lt;/a&gt; are the tool of choice when sending data asynchronously and thread-safe, but channels communicate between two endpoints only. There's no way to "hook into" a channel from multiple clients. So I needed a different solution. On my search, I learned that you can &lt;a href="https://www.goin5minutes.com/blog/channel_over_channel/" rel="noopener noreferrer"&gt;send channels over channels&lt;/a&gt;, which makes channels a much more versatile tool than they already had been for me. Using this, much of the constrains that I thought channels had, have been lifted.&lt;/p&gt;

&lt;p&gt;Equipped with this knowledge, you can write stuff like this, which is basically a client/server infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;clientChan&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;serverChan&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;clientChan&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, we sent a channel into a channel! This doesn't do much, indeed. We need to receive the &lt;code&gt;clientChan&lt;/code&gt; and send something back, to have a true client/server feeling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="s"&gt;"Hello, World!"&lt;/span&gt;
&lt;span class="p"&gt;}(&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;clientChan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;clientChan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yay, we made it! We sent a channel via a channel, just to send back a string. That in itself might be already breathtaking (right?), but it gets even better: We can use this as a building block to send multiple clients to the server, to make the server "broadcast" stuff to multiple clients, in real-time. We use &lt;code&gt;select&lt;/code&gt; for that, so it's even very light on CPU usage.&lt;/p&gt;

&lt;p&gt;Let's have a look at the server code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;clients&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;clients&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;clients&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="c"&gt;// Broadcast the number of clients to all clients:&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;clients&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"%d client(s) connected."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;clients&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The server just listens endlessly for new clients, adds all of them to a list and sends a text stating the number of total clients to all clients, whenever a new client connects.&lt;/p&gt;

&lt;p&gt;This is what the client code can look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;clientName&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;clientChan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;clientChan&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"%s: %s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;clientName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this code glues it all together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Start the server:&lt;/span&gt;
&lt;span class="n"&gt;serverChan&lt;/span&gt;&lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;// Connect the clients:&lt;/span&gt;
&lt;span class="n"&gt;client1Chan&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;client2Chan&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;serverChan&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;client1Chan&lt;/span&gt;
&lt;span class="n"&gt;serverChan&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;client2Chan&lt;/span&gt;

&lt;span class="c"&gt;// Notice that we have to start the clients in their own goroutine,// because we would have a deadlock otherwise:&lt;/span&gt;
&lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Client 1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;client1Chan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Client 2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;client2Chan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;// Just a dirty hack to wait for everything to finish up.&lt;/span&gt;
&lt;span class="c"&gt;// A clean and safe approach would have been too much boilerplate code&lt;/span&gt;
&lt;span class="c"&gt;// for this blog-post&lt;/span&gt;
&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will output, in some semi-random order:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client 1: 1 client(s) connected.
Client 1: 2 client(s) connected.
Client 2: 2 client(s) connected.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voilà! In theory, this should scale "indefinitely", just as indefinitely as everything in computer science does. :-)&lt;/p&gt;
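&lt;p&gt;If you want to run the pattern in one piece, here is my own condensed sketch of the code above as a single self-contained program. It replaces the &lt;code&gt;time.Sleep&lt;/code&gt; hack with a &lt;code&gt;sync.WaitGroup&lt;/code&gt; and lets each client print only its first message, so the program terminates cleanly:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// broadcastServer receives client channels on register and, whenever a
// new client joins, announces the current client count to all clients.
func broadcastServer(register chan chan string) {
	var clients []chan string
	for c := range register {
		clients = append(clients, c)
		for _, client := range clients {
			client <- fmt.Sprintf("%d client(s) connected.", len(clients))
		}
	}
}

func main() {
	register := make(chan chan string, 4)
	go broadcastServer(register)

	var wg sync.WaitGroup
	for i := 1; i <= 2; i++ {
		c := make(chan string, 4)
		register <- c
		wg.Add(1)
		go func(name string, ch chan string) {
			defer wg.Done()
			// Each client prints only its first message, then exits:
			fmt.Printf("%s: %s\n", name, <-ch)
		}(fmt.Sprintf("Client %d", i), c)
	}
	wg.Wait() // wait until every client has printed one message
}
```

&lt;p&gt;Running it prints "Client 1: 1 client(s) connected." and "Client 2: 2 client(s) connected." (the order of the two lines may vary, since the clients run concurrently).&lt;/p&gt;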

&lt;p&gt;Now we can hook this up to our WebSocket server. I chose &lt;code&gt;github.com/gorilla/websocket&lt;/code&gt; as my WebSocket library. In this example, the server will simply count its uptime in seconds and push this info to all clients. We can choose whether new clients get all output that has happened since the server started, or just the new messages since the client connected. In my implementation, I chose to send only the new messages for now.&lt;/p&gt;
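&lt;p&gt;For the other option - replaying everything a late client has missed - the server could additionally record every message it broadcasts. The following is only a hypothetical sketch of that variant, not the code used in the rest of this post:&lt;/p&gt;

```go
package main

import "fmt"

// historyServer broadcasts messages to all registered clients, but also
// records every message and replays the full history to each newly
// connected client, so latecomers miss nothing.
func historyServer(register chan chan string, messages chan string) {
	var clients []chan string
	var history []string
	for {
		select {
		case c := <-register:
			for _, old := range history {
				c <- old // catch the new client up on everything it missed
			}
			clients = append(clients, c)
		case msg := <-messages:
			history = append(history, msg)
			for _, c := range clients {
				c <- msg
			}
		}
	}
}

func main() {
	register := make(chan chan string, 1)
	messages := make(chan string, 4)
	go historyServer(register, messages)

	messages <- "1 seconds uptime"
	messages <- "2 seconds uptime"

	// This client connects "late" but still receives both messages:
	c := make(chan string, 8)
	register <- c
	fmt.Println(<-c)
	fmt.Println(<-c)
}
```

&lt;p&gt;Note that an unbounded history grows forever; a real server would cap or window it.&lt;/p&gt;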

&lt;p&gt;First off, let's write a slightly modified server that counts its uptime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;uptimeServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;clients&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
    &lt;span class="n"&gt;uptimeChan&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c"&gt;// This goroutine will count our uptime in the background, and write&lt;/span&gt;
    &lt;span class="c"&gt;// updates to uptimeChan:&lt;/span&gt;
    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;
            &lt;span class="n"&gt;target&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}(&lt;/span&gt;&lt;span class="n"&gt;uptimeChan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c"&gt;// And now we listen to new clients and new uptime messages:&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;clients&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;clients&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;uptime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;uptimeChan&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="c"&gt;// Send the uptime to all connected clients:&lt;/span&gt;
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;clients&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"%d seconds uptime"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;uptime&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was rather easy. However, if you've read carefully, you will notice that clients are being registered via &lt;code&gt;append(clients, client)&lt;/code&gt;, but never removed. This will lead to orphaned clients that will cause the server to crash when it tries to write to them. We will leave it at that for the moment, in order to not clutter the code with boilerplate. So beware that your server will crash when you close browser tabs in the following sections.&lt;/p&gt;
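&lt;p&gt;For completeness, here is one way such client removal could be sketched (my own hypothetical variant, not the code used below): the server keeps its clients in a map and listens on a second channel for channels to unregister, which the HTTP handler would feed as soon as, for example, a read on the WebSocket connection fails:&lt;/p&gt;

```go
package main

import "fmt"

// registryServer keeps clients in a map so they can be removed again.
// A handler would send its channel to unregister when, for example,
// ws.ReadMessage() returns an error (i.e. the client disconnected).
func registryServer(register, unregister chan chan string, messages chan string) {
	clients := make(map[chan string]bool)
	for {
		select {
		case c := <-register:
			clients[c] = true
		case c := <-unregister:
			delete(clients, c)
			close(c) // readers of c see the close and can stop
		case msg := <-messages:
			for c := range clients {
				c <- msg
			}
		}
	}
}

func main() {
	register := make(chan chan string)
	unregister := make(chan chan string)
	messages := make(chan string, 1)
	go registryServer(register, unregister, messages)

	c := make(chan string, 1)
	register <- c
	messages <- "hello"
	fmt.Println(<-c)

	// Unregister the client; the server closes its channel:
	unregister <- c
	_, open := <-c
	fmt.Println("channel still open:", open)
}
```

&lt;p&gt;Closing the channel on unregister gives the reading side a clean signal to stop, instead of crashing the server with a write to an orphaned client.&lt;/p&gt;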

&lt;p&gt;Next, we write our WebSocket server around our new &lt;code&gt;uptimeServer&lt;/code&gt; goroutine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// This upgrader is needed for WebSocket connections later:&lt;/span&gt;
&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;upgrader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;websocket&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Upgrader&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ReadBufferSize&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;WriteBufferSize&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;CheckOrigin&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt; &lt;span class="c"&gt;// Disable CORS for testing&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Start the server and keep track of the channel that it receives&lt;/span&gt;
&lt;span class="c"&gt;// new clients on:&lt;/span&gt;
&lt;span class="n"&gt;serverChan&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;uptimeServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;serverChan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;// Define a HTTP handler function for the /status endpoint, that can receive&lt;/span&gt;
&lt;span class="c"&gt;// WebSocket-connections only... so note that browsing it with your browser will fail.&lt;/span&gt;
&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HandleFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/status"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Upgrade this HTTP connection to a WS connection:&lt;/span&gt;
    &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;upgrader&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Upgrade&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c"&gt;// And register a client for this connection with the uptimeServer:&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;serverChan&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;
    &lt;span class="c"&gt;// And now check for uptimes written to the client indefinitely.&lt;/span&gt;
    &lt;span class="c"&gt;// Yes, we are lacking proper error and disconnect checking here, too:&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;ws&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NextWriter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;websocket&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TextMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ListenAndServe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":8080"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Phew, that's a harder nut to crack. As the inline comments explain, we set up the WebSocket infrastructure, start our "server" that counts the uptime, and then define an HTTP handler function to receive client requests. Each request registers a client with the uptime-server, and the handler then writes every text received from the uptime-server to its WebSocket client.&lt;/p&gt;

&lt;p&gt;You can compile and start this, and test it out with this minimalistic JavaScript example:&lt;br&gt;
&lt;a href="https://jsfiddle.net/L31wy1gL/13/" rel="noopener noreferrer"&gt;https://jsfiddle.net/L31wy1gL/13/&lt;/a&gt;&lt;br&gt;
(BTW: That's why we disable CORS in the code; to be able to reach the WebSocket server from jsfiddle/codepen/your dummy index.html, etc.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu53g42hlvvh155v70vxd.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu53g42hlvvh155v70vxd.gif" alt="Minimalistic JavaScript client" width="1307" height="892"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's where I'll end this blog post. You have seen a lot:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Send channels into channels
• Use this to implement a "client"-registry in a goroutine
• Send broadcast messages to these clients
• Wrap all this into a WebSocket server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But you still have much to do:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Implement proper error handling
• Implement proper handling for disconnecting clients
• Maybe turn this into a library that you can re-use across your project(s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I hope you enjoyed this read and would be grateful for suggestions and feedback in the comments.&lt;/p&gt;

</description>
      <category>go</category>
      <category>websockets</category>
    </item>
    <item>
      <title>Using Elm on a Raspberry Pi</title>
      <dc:creator>Daniel Albuschat</dc:creator>
      <pubDate>Sun, 04 Feb 2018 22:33:09 +0000</pubDate>
      <link>https://forem.com/danielkun/using-elm-on-a-raspberry-pi-42pi</link>
      <guid>https://forem.com/danielkun/using-elm-on-a-raspberry-pi-42pi</guid>
      <description>&lt;p&gt;Running Elm on your Raspberry Pi&lt;/p&gt;

&lt;p&gt;Elm is an awesome programming language that makes you feel comfortable, empowered and safe. Since I went out on a journey to put a Kubernetes cluster on a few Raspberry Pis and built a frontend for a few experiments (see &lt;a href="https://github.com/daniel-kun/kube-alive"&gt;kube-alive&lt;/a&gt;), I needed to compile Elm code on Raspberry Pis. Unfortunately, there are no official Elm binaries for the ARM CPU architecture, distributed via npm or in any other way.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    daniel@kubepi-red:~ &lt;span class="nv"&gt;$ &lt;/span&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;elm
    &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; elm@0.18.0 &lt;span class="nb"&gt;install&lt;/span&gt; /home/daniel/node_modules/elm
    &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; node install.js

    Unfortunately, there are currently no Elm Platform binaries available &lt;span class="k"&gt;for &lt;/span&gt;your operating system and architecture.

    If you would like to build Elm from &lt;span class="nb"&gt;source&lt;/span&gt;, there are instructions at https://github.com/elm-lang/elm-platform#build-from-source

    npm WARN This failure might be due to the use of legacy binary &lt;span class="s2"&gt;"node"&lt;/span&gt;
    npm WARN For further explanations, please &lt;span class="nb"&gt;read&lt;/span&gt;
    /usr/share/doc/nodejs/README.Debian
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So I set out to compile Elm for ARM myself, using a Raspberry Pi running Raspbian (stretch). This turned out to be rather tedious because of two factors that make a bad combination: not all versions of Elm's dependencies (especially Cabal) work as expected and are compatible with each other, and compiling one version of Cabal takes literally &lt;em&gt;days&lt;/em&gt; (well, at least a bit more than 24 hours).&lt;/p&gt;

&lt;p&gt;The good news is that the process is pretty straightforward and should work without hassle if you follow the steps in this tutorial and use the exact same versions I did. It will still take approximately two days to install and compile everything you need. The even better news is that I have already done this for you and can provide a much faster way to run Elm on your Raspberry Pi via Docker.&lt;/p&gt;

&lt;h1&gt;
  
  
  The shortcut: Use a prebuilt docker image
&lt;/h1&gt;

&lt;p&gt;If you don't want to wait that long to build Elm, you can use my pre-built docker image that you can run on your raspi. Find it on Docker Hub as &lt;a href="https://hub.docker.com/r/danielkun/elm-raspbian-arm32v7/"&gt;danielkun/elm-raspbian-arm32v7&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Just run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; danielkun/elm-raspbian-arm32v7 bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and inside that container you can use the &lt;code&gt;elm&lt;/code&gt; commands as you are used to. Note that everything that happens in this &lt;code&gt;docker run&lt;/code&gt; session will be discarded once you close the shell. That's why you should mount a directory as a volume into the container - these changes are made directly to your filesystem and will be permanent.&lt;/p&gt;

&lt;p&gt;To run the elm docker container with the current directory mounted to /code and everything set up correctly, execute&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;:/code"&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; &lt;span class="s2"&gt;"/code"&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"HOME=/tmp"&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="nv"&gt;$UID&lt;/span&gt;:&lt;span class="nv"&gt;$GID&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8000:8000 danielkun/elm-raspbian-arm32v7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to stick with this for longer, you can make your local "elm" command an alias for this. All parameters will be appended, so you can run "elm make", "elm reactor", etc. as if Elm were installed on your local machine. Set up the alias like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;elm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'docker run -it --rm -v "$(pwd):/code" -w "/code" -e "HOME=/tmp" -u $UID:$GID -p 8000:8000 danielkun/elm-raspbian-arm32v7 elm'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Credit goes to &lt;em&gt;maport&lt;/em&gt; from &lt;a href="https://github.com/maport/docker-elm"&gt;https://github.com/maport/docker-elm&lt;/a&gt; for this trick)&lt;/p&gt;

&lt;h1&gt;
  
  
  Building Elm yourself
&lt;/h1&gt;

&lt;p&gt;If you're not satisfied with using a docker image, you can still build Elm yourself, provided you bring enough time.&lt;/p&gt;

&lt;p&gt;Elm has only a few dependencies, but they are massive, and not all of them can be obtained as binaries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;llvm-3.5 (for ghc, can be obtained as binary)&lt;/li&gt;
&lt;li&gt;ghc 7.10.3 (can be obtained as binary)&lt;/li&gt;
&lt;li&gt;cabal-install 1.22.6 (must be compiled, hence slooooow to install)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Besides these larger dependencies, there are a few packages that you can install via apt, and many that will be downloaded and compiled while building Elm via cabal.&lt;/p&gt;

&lt;p&gt;So let's start with installing the packages that we can obtain from apt, since this is the easiest part:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; wget curl make libgmp-dev libnss3 apt-utils libnss-lwres libnss-mdns netbase ca-certificates cmake g++ libtinfo-dev git zlib1g-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As part of the building process and to use the elm commands later, we need a few directories for llvm, cabal and finally elm in our PATH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'export PATH="$PATH:$HOME/elm/llvm-3.5/bin:$HOME/.cabal/bin:$HOME/elm/Elm-Platform/0.18/.cabal-sandbox/bin/"'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bashrc &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;source&lt;/span&gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, download everything we'll need later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/elm-downloads &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; ~/elm-downloadswget http://releases.llvm.org/3.5.2/clang+llvm-3.5.2-armv7a-linux-gnueabihf.tar.xz
wget https://downloads.haskell.org/~ghc/7.10.3/ghc-7.10.3-armv7-deb8-linux.tar.bz2
wget http://www.haskell.org/cabal/release/cabal-install-1.22.6.0/cabal-install-1.22.6.0.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then extract the packages in a dedicated directory for building Elm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/elm &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; ~/elm
&lt;span class="nb"&gt;tar &lt;/span&gt;xf ~/elm-downloads/clang+llvm-3.5.2-armv7a-linux-gnueabihf.tar.xz &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mv &lt;/span&gt;clang+llvm-3.5.2-armv7a-linux-gnueabihf llvm-3.5
&lt;span class="nb"&gt;tar &lt;/span&gt;xf ~/elm-downloads/ghc-7.10.3-armv7-deb8-linux.tar.bz2
&lt;span class="nb"&gt;tar &lt;/span&gt;xf ~/elm-downloads/cabal-install-1.22.6.0.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;LLVM is now ready to be used from where it has been extracted. GHC, however, needs to be installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/elm/ghc-7.10.3/
./configure
make &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That should be fairly quick, since it's not building anything, just copying files around. Next, go for the hard (and time-consuming) task: building cabal-install&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/elm/cabal-install-1.22.6.0
./bootstrap.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now grab a coffee. And your wife and kids, and go outside for a few hours. Then go to sleep, wake up the next morning, go to work, and after work it might have finished. Mind you, I used the most recent Raspberry Pi 3 and it still took around 10 hours or so. If you have a Raspberry Pi 2, for example, it might take even longer.&lt;/p&gt;

&lt;p&gt;But it will finish eventually. And when it does, you can download and compile Elm. (Remember: your PATH is already set up correctly.) Note that I had to fix a small issue with the installation script: it sets "split-objs: True", which does not seem to be compatible with this GHC or cabal version, so I changed it to "split-objs: False".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sSL&lt;/span&gt; https://raw.githubusercontent.com/elm-lang/elm-platform/master/installers/BuildFromSource.hs &lt;span class="se"&gt;\&lt;/span&gt;
 | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s2"&gt;"s/split-objs: True/split-objs: False/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; BuildFromSource.hs &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; runhaskell BuildFromSource.hs 0.18
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This, again, takes about a day or so. I haven't measured exactly. It takes so long because there are quite a few cabal dependencies that will be downloaded and built before Elm is built.&lt;/p&gt;

&lt;p&gt;Hooray! You should have a working &lt;code&gt;elm&lt;/code&gt; command now! Have fun hacking away on your raspi and make some cool IoT projects.&lt;/p&gt;

&lt;p&gt;Finally, you can clean up quite a bit by removing no longer necessary downloads and temporary files:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rm -rf ~/elm-downloads ~/elm/ghc-7.10.3 ~/elm/cabal-install-1.22.6.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Elm for the win!&lt;/p&gt;

</description>
      <category>elm</category>
      <category>raspberrypi</category>
      <category>haskell</category>
    </item>
    <item>
      <title>Kubernetes: It's alive!</title>
      <dc:creator>Daniel Albuschat</dc:creator>
      <pubDate>Sat, 20 Jan 2018 19:29:28 +0000</pubDate>
      <link>https://forem.com/danielkun/kubernetes-its-alive-2ndc</link>
      <guid>https://forem.com/danielkun/kubernetes-its-alive-2ndc</guid>
      <description>&lt;p&gt;I recently found an interest in &lt;a href="http://kubernetes.io" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; and learned about it at night, while working at something not web-related at day. As part of my learning journey I wanted to quickly see and experience how Kubernetes actually works in action. So I decided to write a few services that can be used to trigger and observe certain behavior of Kubernetes. I started with Load Balancing, Self Healing of Services and Auto Scaling depending on CPU utilization.&lt;/p&gt;

&lt;p&gt;In this blog post I will explain how each service works and how Kubernetes behaves in practice. Note that this was the first time I wrote Go, so I can't guarantee that the code isn't shitty or that it doesn't violate Go conventions and best practices that I don't yet know. :-)&lt;/p&gt;

&lt;p&gt;This blog post addresses anyone who already has a Kubernetes cluster up and running. You should know what a Pod, a Replica Set and a Deployment are, be able to build Docker containers, and have used a Docker Registry.&lt;/p&gt;

&lt;p&gt;If you do not run a Kubernetes cluster already, I can recommend the book &lt;a href="https://www.amazon.com/Kubernetes-Running-Dive-Future-Infrastructure/dp/1491935677/ref=sr_1_1?ie=UTF8&amp;amp;qid=1516176454&amp;amp;sr=8-1&amp;amp;keywords=kubernetes+up+and+running" rel="noopener noreferrer"&gt;"Kubernetes: Up &amp;amp; Running"&lt;/a&gt; and/or the blog post &lt;a href="https://www.hanselman.com/blog/HowToBuildAKubernetesClusterWithARMRaspberryPiThenRunNETCoreOnOpenFaas.aspx" rel="noopener noreferrer"&gt;"How to Build a Kubernetes Cluster with ARM Raspberry Pi"&lt;/a&gt; by Scott Hanselman. If you don't know what Kubernetes is or how it works, you can read the excellent series of blog posts &lt;a href="https://blog.giantswarm.io/understanding-basic-kubernetes-concepts-i-introduction-to-pods-labels-replicas/" rel="noopener noreferrer"&gt;"Understanding Basic Kubernetes Concepts"&lt;/a&gt; by Puja Abbassi from &lt;a href="https://giantswarm.io" rel="noopener noreferrer"&gt;giantswarm.io&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  For the impatient
&lt;/h1&gt;

&lt;p&gt;Before running kube-alive on your cluster, make sure that you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kubectl installed and configured to a running cluster (check that "kubectl get nodes" gives you a list of at least one node in state "Ready")&lt;/li&gt;
&lt;li&gt;bash&lt;/li&gt;
&lt;li&gt;Your cluster runs on Linux on amd64 or ARM CPUs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you do not have a cluster up and running already, I recommend the already mentioned article by Scott Hanselman to start a cluster on Raspberry Pis, or you can use &lt;a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="noopener noreferrer"&gt;Minikube&lt;/a&gt; to run a local cluster on your PC or Mac.&lt;/p&gt;

&lt;p&gt;If you just want to deploy kube-alive to your cluster and see it in action, you can do this with this single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sSL&lt;/span&gt; https://raw.githubusercontent.com/daniel-kun/kube-alive/master/deploy.sh | bash

Using 192.168.178.79 as the exposed IP to access kube-alive.
deployment &lt;span class="s2"&gt;"getip-deployment"&lt;/span&gt; created
service &lt;span class="s2"&gt;"getip"&lt;/span&gt; created
deployment &lt;span class="s2"&gt;"healthcheck-deployment"&lt;/span&gt; created
service &lt;span class="s2"&gt;"healthcheck"&lt;/span&gt; created
deployment &lt;span class="s2"&gt;"cpuhog-deployment"&lt;/span&gt; created
service &lt;span class="s2"&gt;"cpuhog"&lt;/span&gt; created
horizontalpodautoscaler &lt;span class="s2"&gt;"cpuhog-hpa"&lt;/span&gt; created
deployment &lt;span class="s2"&gt;"frontend-deployment"&lt;/span&gt; created
service &lt;span class="s2"&gt;"frontend"&lt;/span&gt; created

FINISHED!
You should now be able to access kube-alive at http://192.168.178.79/.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Load-Balancing
&lt;/h1&gt;

&lt;p&gt;The most basic capability of Kubernetes is load balancing between multiple services of the same kind. To observe whether a request was served by the same or by different instances, I decided to let the service return its host's IP address. In order to run a service in Kubernetes, you need to a) write the service, b) build a container hosting the service, c) push the container to a registry, d) create an object in Kubernetes that runs your container and finally e) make the service accessible from outside the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase A: Writing the Service
&lt;/h2&gt;

&lt;p&gt;So let's dig into the code. I wrote a server in Go that serves on port 8080, parses the output of the command "ip a" and returns the container's IP address.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"fmt"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"bufio"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"os/exec"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"log"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"strings"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"net/http"&lt;/span&gt;

&lt;span class="c"&gt;/**
getip starts an HTTP server on 8080 that returns nothing but this container's IP address (the last one outputted by "ip a").
**/&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;getIP&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Left out for brevity, see &lt;/span&gt;
&lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c"&gt;//raw.githubusercontent.com/daniel-kun/kube-alive/master/src/getip/main.go &lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HandleFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;getIP&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"'getip' server starting, listening to 8080 on all interfaces.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ListenAndServe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":8080"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
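&lt;p&gt;For reference, here is a minimal sketch of what the elided &lt;code&gt;getIP&lt;/code&gt; body might do: scan "ip a"-style output and keep the last "inet" address. This is an illustrative assumption, not the actual code; see the linked main.go for the real implementation.&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// lastIPv4 returns the last IPv4 address found in "ip a"-style output.
// Illustrative sketch only; the real getIP lives in the linked main.go.
func lastIPv4(output string) string {
	ip := ""
	for _, line := range strings.Split(output, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 {
			if fields[0] == "inet" {
				// fields[1] looks like "10.1.0.7/24"; strip the prefix length.
				ip = strings.SplitN(fields[1], "/", 2)[0]
			}
		}
	}
	return ip
}

func main() {
	sample := "    inet 127.0.0.1/8 scope host lo\n    inet 10.1.0.7/24 scope global eth0"
	fmt.Println(lastIPv4(sample)) // prints 10.1.0.7
}
```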



&lt;h2&gt;
  
  
  Phase B: Building the Container
&lt;/h2&gt;

&lt;p&gt;Since everything running in Kubernetes must be a container, I wrote a Dockerfile to run this service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; golang&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; main.go /go/src/getip/main.go&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;go &lt;span class="nb"&gt;install &lt;/span&gt;getip
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; /go/bin/getip&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Dockerfile is simple: It uses a golang base container that is prepared to compile and run Go code. It then copies over the only source code file, main.go, and compiles and installs it to /go/bin/ using "go install".&lt;/p&gt;

&lt;p&gt;The installed binary /go/bin/getip is set as the Entrypoint, so that when no argument is given to docker run, it executes our service.&lt;/p&gt;

&lt;p&gt;You can build the container using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that there is a "." at the end of the command, meaning that you must have cd'ed to the getip source directory before executing docker build.&lt;/p&gt;

&lt;p&gt;After docker build finishes, you will be able to see the new container with a new, randomly generated image id via&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The container is only available locally on the machine where it was built. Since Kubernetes will run this container on any node that it sees fit, the container must be made available on all nodes. That's where a Docker Registry comes into play: it is basically a remote repository for Docker containers that is reachable from all nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase C: Pushing the Container to a Registry
&lt;/h3&gt;

&lt;p&gt;I first tried to set up a local registry, which can be done, but the setup is not &lt;br&gt;
portable across clusters. That's why I decided to simply use Docker's own registry, &lt;a href="https://hub.docker.com" rel="noopener noreferrer"&gt;https://hub.docker.com&lt;/a&gt;. To push your freshly built container, you first need to register at Docker Hub, then tag the container with the repository, desired container name and an optional tag. If no tag is given, "latest" is assumed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker tag &amp;lt;your-repository&amp;gt;/getip &amp;lt;image &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="c"&gt;# tag the docker image with your repository name and the service name, such as "getip"&lt;/span&gt;
docker login &lt;span class="c"&gt;# enter your username and password of http://hub.docker.com now.&lt;/span&gt;
docker push &amp;lt;your-repository&amp;gt;/getip &lt;span class="c"&gt;# and then push your container&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is now available to be pulled (without authorization) by anyone - including your Kubernetes nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase D: Define a Replica Set
&lt;/h3&gt;

&lt;p&gt;To let Kubernetes know that it should run this container as a service, and to run multiple instances of this service, you should use a Replica Set. I wrapped the Replica Set into a Deployment to easily upgrade the service later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1beta2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getip-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getip&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getip&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getip&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getip&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your-repository&amp;gt;/getip&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I set the number of replicas to 4, which means that Kubernetes will do everything it can to always have exactly 4 instances running at any time. However, this does not give us a single URL to connect to these instances. We will use a Service to load-balance between these instances:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getip&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getip&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This service provides a single, load-balanced URL to access the individual service instances. It remaps the default HTTP port 80 to the service's own port 8080 in the process. The service will be available as &lt;a href="http://getip.default.svc.cluster.local" rel="noopener noreferrer"&gt;http://getip.default.svc.cluster.local&lt;/a&gt; or, even shorter, as &lt;a href="http://getip" rel="noopener noreferrer"&gt;http://getip&lt;/a&gt; from any running Kubernetes Pod.&lt;/p&gt;

&lt;p&gt;However, this service is only available from inside Kubernetes and not from "outside" the cluster.&lt;/p&gt;
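&lt;p&gt;From inside the cluster, you can already watch the load balancing at work by querying the service URL repeatedly and counting how many distinct instance IPs answer. Here is a rough sketch; run it from a pod, since &lt;code&gt;http://getip&lt;/code&gt; only resolves in-cluster, and note that the request count of 20 is an arbitrary choice:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// distinct counts how many different response bodies appear in the slice.
func distinct(bodies []string) int {
	seen := map[string]bool{}
	for _, b := range bodies {
		seen[b] = true
	}
	return len(seen)
}

func main() {
	// http://getip only resolves inside the cluster, so run this from a pod.
	bodies := []string{}
	for i := 0; i != 20; i++ {
		resp, err := http.Get("http://getip/")
		if err != nil {
			fmt.Println("request failed (are you running inside the cluster?):", err)
			return
		}
		body, readErr := io.ReadAll(resp.Body)
		resp.Body.Close()
		if readErr == nil {
			bodies = append(bodies, string(body))
		}
	}
	// With 4 replicas behind the service, up to 4 distinct IPs should appear.
	fmt.Printf("%d requests were answered by %d distinct instances\n", len(bodies), distinct(bodies))
}
```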

&lt;h2&gt;
  
  
  Phase E: Publish the Service
&lt;/h2&gt;

&lt;p&gt;I decided to build my own nginx container to serve the static HTML and JavaScript files that make up the frontend, and to publish the services from a specific IP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;events&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# empty&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="n"&gt;/www/data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;# for the frontend SPA&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="c1"&gt;# Forward traffic to &amp;lt;yourip&amp;gt;/getip to the getip service.&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/getip&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://getip/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="c1"&gt;# I have left out the other services like "cpuhog" and "healthcheck" here for brevity.&lt;/span&gt;
        &lt;span class="c1"&gt;# See their code on https://github.com/daniel-kun/kube-alive/&lt;/span&gt;

        &lt;span class="c1"&gt;# Allow WebSocket connections to the Kubernetes API:&lt;/span&gt;
        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/api&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default/api&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
              &lt;span class="kn"&gt;proxy_http_version&lt;/span&gt; &lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
              &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Upgrade&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
              &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Connection&lt;/span&gt; &lt;span class="s"&gt;"Upgrade"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
              &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Authorization&lt;/span&gt; &lt;span class="s"&gt;"Bearer&lt;/span&gt; &lt;span class="s"&gt;%%SERVICE_ACCOUNT_TOKEN%%"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So we see that nginx expects the SPA in /www/data/, which will be the target of COPY commands in our Dockerfile. The service getip is reached via Kubernetes DNS, which automatically resolves a service's name to its Cluster IP, which in turn load-balances requests across the service instances. The third location, /api, is used by the frontend to receive information about running pods. (Currently, the full API is exposed with full admin privileges, so this is highly insecure - do it in isolated environments only! I will fix this in the near future.)&lt;/p&gt;

&lt;p&gt;Here's the Dockerfile for the frontend service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; nginx.conf /etc/nginx/&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; index.html /www/data/&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; output/main.js /www/data/output/main.js&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; run_nginx_with_service_account.sh /kube-alive/&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; /kube-alive/run_nginx_with_service_account.sh &lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The shell script &lt;code&gt;run_nginx_with_service_account.sh&lt;/code&gt; will substitute variables in the &lt;code&gt;nginx.conf&lt;/code&gt; to use the Kubernetes Service Account token in the authorization header to let nginx handle the authorization so that the frontend does not have to.&lt;/p&gt;
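&lt;p&gt;The script itself is not shown here, but its core substitution step can be sketched as follows. This is an assumption of how it might work, using the standard Kubernetes service-account token mount path and the &lt;code&gt;%%SERVICE_ACCOUNT_TOKEN%%&lt;/code&gt; placeholder from the nginx.conf above; the actual script lives in the kube-alive repository:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// substituteToken replaces the %%SERVICE_ACCOUNT_TOKEN%% placeholder from
// the nginx.conf shown above with the pod's service-account token.
func substituteToken(conf, token string) string {
	return strings.ReplaceAll(conf, "%%SERVICE_ACCOUNT_TOKEN%%", strings.TrimSpace(token))
}

func main() {
	// The token path is the standard Kubernetes service-account mount.
	token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		fmt.Println("no service-account token (not running in a pod?):", err)
		return
	}
	conf, err := os.ReadFile("/etc/nginx/nginx.conf")
	if err != nil {
		fmt.Println("cannot read nginx.conf:", err)
		return
	}
	out := substituteToken(string(conf), string(token))
	if err := os.WriteFile("/etc/nginx/nginx.conf", []byte(out), 0o644); err != nil {
		fmt.Println("cannot write nginx.conf:", err)
		return
	}
	// The real script would now start nginx.
}
```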

&lt;p&gt;So now we are prepared to put the last piece of the puzzle into place: A Replica Set to run the frontend and a Service that externally publishes the frontend. Note that I wrapped the Replica Set into a Deployment again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1beta2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your-repository&amp;gt;/frontend_amd64&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
  &lt;span class="na"&gt;externalIPs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;put your external IP, e.g. of your cluster's master, here&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! You can &lt;code&gt;kubectl apply&lt;/code&gt; this after you have inserted a valid &lt;code&gt;externalIP&lt;/code&gt;, and everything should be up and running for your first experiment with Kubernetes' load balancing.&lt;/p&gt;

&lt;p&gt;Browsing to the IP that your &lt;code&gt;kubectl&lt;/code&gt; is configured against should give you this UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffridjed7sbd6j04wajye.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffridjed7sbd6j04wajye.gif" alt="Kubernetes Experiment #1: Load Balancing" width="1716" height="1418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;… followed by more experiments. They all follow the same concept as getip; you can have a look at their code and deployment yamls here:&lt;/p&gt;

&lt;h3&gt;
  
  
  Self-Healing
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlmdli2g8juf2tghlywh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlmdli2g8juf2tghlywh.gif" alt="Kubernetes Experiment #2: Self-Healing" width="1714" height="1418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code of the Self-Healing experiment is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/daniel-kun/kube-alive/tree/master/src/healthcheck" rel="noopener noreferrer"&gt;https://github.com/daniel-kun/kube-alive/tree/master/src/healthcheck&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/daniel-kun/kube-alive/blob/master/deploy/healthcheck.yml" rel="noopener noreferrer"&gt;https://github.com/daniel-kun/kube-alive/blob/master/deploy/healthcheck.yml&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Rolling Updates
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykkyf7trx7wxq2x4qvdi.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykkyf7trx7wxq2x4qvdi.gif" alt="Kubernetes Experiment #3: Rolling Updates" width="1714" height="1418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code of the Rolling Updates experiment is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/daniel-kun/kube-alive/tree/master/src/incver" rel="noopener noreferrer"&gt;https://github.com/daniel-kun/kube-alive/tree/master/src/incver&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/daniel-kun/kube-alive/blob/master/deploy/incver.yml" rel="noopener noreferrer"&gt;https://github.com/daniel-kun/kube-alive/blob/master/deploy/incver.yml&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Auto Scaling (cpu-based)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fog3sh78vzddtdbyzmk6o.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fog3sh78vzddtdbyzmk6o.gif" alt="Kubernetes Experiment #4: Auto Scaling" width="1714" height="1418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code of the Auto Scaling experiment is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/daniel-kun/kube-alive/tree/master/src/cpuhog" rel="noopener noreferrer"&gt;https://github.com/daniel-kun/kube-alive/tree/master/src/cpuhog&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/daniel-kun/kube-alive/blob/master/deploy/cpuhog.yml" rel="noopener noreferrer"&gt;https://github.com/daniel-kun/kube-alive/blob/master/deploy/cpuhog.yml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I named the service "cpuhog" because it uses as much CPU as it can for 2 seconds for every request.&lt;/p&gt;

&lt;p&gt;I plan to add more experiments in the future, such as an experiment for rolling updates using Deployments.&lt;/p&gt;

&lt;p&gt;I hope that you found this blog post and the kube-alive services useful, and I would be thankful if you could leave feedback in the comments. Maybe one day kube-alive will be a starting point for beginners and engineers who are evaluating Kubernetes to see its behavior live in action.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Update from 01/25/2018:&lt;/em&gt; I removed the security warning, because security in the latest version of kube-alive has been tightened. Only a portion of the API is exposed (pods in the kube-alive namespace), and the frontend runs with a service account that has access only to the dedicated kube-alive namespace and can only read and list Pods. Hence, not much more information is available via the API than is visible in the frontend anyway.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Update from 03/08/2018:&lt;/em&gt; Updated the gifs to the new, improved visuals and added the rolling updates experiment.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>container</category>
      <category>docker</category>
    </item>
    <item>
      <title>Hacky and Clean Programming</title>
      <dc:creator>Daniel Albuschat</dc:creator>
      <pubDate>Tue, 09 May 2017 20:09:47 +0000</pubDate>
      <link>https://forem.com/danielkun/hacky-and-clean-programming</link>
      <guid>https://forem.com/danielkun/hacky-and-clean-programming</guid>
<description>&lt;p&gt;When I program, I do it in one of two "modes". Each mode serves a different purpose, and I use them at different stages to achieve a goal. It is very important to be aware of the mode you are currently in, because programming in the wrong mode can seriously harm either productivity or code quality.&lt;/p&gt;

&lt;p&gt;I call these two modes the "hacker mode" and the "clean mode".&lt;/p&gt;

&lt;h1&gt;Hacker mode&lt;/h1&gt;

&lt;p&gt;In "hacker mode", I try out either new technologies or architectural ideas. For example, when your current goal is to write a library to control a Bluetooth  device, I would first go into "hacker mode" and try things out and figure out what would be the most efficient and most stable way to use the raw Bluetooth API.&lt;/p&gt;

&lt;p&gt;When hacking, I do not care about code quality. I may violate my own guidelines, use ugly casts, name variables "x", "foo" or "data". The uglier, the better, because I will definitely throw that code away. This is a very important aspect that should not be violated: you have to be aware that you will throw the code away, and you have to actually do it in the end. If what you hacked somehow does not work out, throw it away and start a new hacking session - and throw that one away, too, once you have learned the important lessons. No clean code. No docs. No comments. No unit tests.&lt;/p&gt;

&lt;p&gt;And if you are afraid that, when rewriting what you hacked together, the Second System Effect (as described by Fred Brooks in The Mythical Man-Month) might kick in, I can assure you that I have never experienced it with this method. I think the Second System Effect applies only at larger scales, and only when effort was put into producing high quality in the first system, too.&lt;/p&gt;

&lt;p&gt;After I have hacked away, figured out which calls must be made in which order and with which parameters for best results, and found a good structure for the API and the implementation, I sit back, have a good look at it and memorize the important parts. Then I stash the code away for later reference and mentally mark it as "To Be Deleted".&lt;/p&gt;

&lt;p&gt;Now I can switch to "clean mode".&lt;/p&gt;

&lt;h1&gt;Clean mode&lt;/h1&gt;

&lt;p&gt;I am programming in "clean mode" when I have a good mental model of what I want to create - when I know the important key parts of the implementation and have an idea of how the API and the architecture should look. Often this information comes from a hacking session. Sometimes it is just there because I have been thinking about the problem for days, months or even years and have now finally decided that everything is clear and can be put into code.&lt;/p&gt;

&lt;p&gt;Most of the time, when programming in clean mode, I am doing &lt;em&gt;DDD - Documentation Driven Development&lt;/em&gt;. Don't worry, this is not the new hip paradigm that you missed. It's essentially TDD, but I am writing the documentation of the code even before writing the tests.&lt;/p&gt;

&lt;h2&gt;Most important: The Docs&lt;/h2&gt;

&lt;p&gt;One major argument for TDD is that you get "a feeling for how your code works out when it is put into actual use". The same argument applies to writing docs. &lt;em&gt;(Important note: I am talking about documentation that describes the functionality and properties of a class/function/etc. I am not talking about inline code comments, which describe details of the implementation.)&lt;/em&gt; When writing docs, I always try to state the important details, not the obvious. By thinking about the non-obvious, you will learn whether the overall design is slick or has a few rough edges. With docs and tests combined, you can be pretty sure that the design is sound and that it works out well in practice. You usually even come up with new test cases this way.&lt;/p&gt;

&lt;p&gt;There are a few easy rules to follow when writing docs. This is especially important when writing them before the actual implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First&lt;/strong&gt;, it must be clear what the &lt;em&gt;purpose&lt;/em&gt; of the element you are documenting is. If there is not one core function, but multiple tasks that are accomplished, you are most likely doing it wrong. Describe the core function of the element in one sentence, i.e. in a @brief. If the core function of the element can be unambiguously deduced from its name, you can skip this part.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second&lt;/strong&gt;, state the preconditions that must be met to use the element. The fewer preconditions you have to name, the better. If there are no preconditions beyond those enforced by the type system, you are doing it right. If the type system is not strong enough to express the constraints on the input parameters, document them in detail. Rumor has it that Haskell has an awesome type system, but unfortunately I have not yet had a chance to use it for productive work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Third&lt;/strong&gt;, and maybe most important for maintenance, state the side effects that may happen and the conditions under which they are likely to happen. If you are a really good programmer, you write functional code and can skip this part: you simply do not have side effects.&lt;/p&gt;

&lt;p&gt;In addition to these, the standard rules obviously apply: describe each parameter in detail (again: not the obvious!), the possible exceptions thrown, how the return value has to be interpreted, etc.&lt;/p&gt;
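&lt;p&gt;As a sketch of these three rules, here is a small, entirely hypothetical function (the name, the framing scheme and the size limit are invented for illustration, not taken from any real Bluetooth API) documented accordingly:&lt;/p&gt;

```python
def frame(payload: bytes, max_size: int = 20) -> bytes:
    """Frame a raw payload for transmission by prepending its length.

    Preconditions (not expressible in the type system, so stated here):
    ``payload`` must be non-empty and at most ``max_size`` bytes.

    Side effects: none - the function is pure, so this section could be
    omitted; it is spelled out only to illustrate the rule.

    :param payload: the raw bytes to frame; the length prefix is added
        by this function and must not be included by the caller.
    :return: one length byte followed by the payload.
    :raises ValueError: if a precondition is violated.
    """
    if not payload or len(payload) > max_size:
        raise ValueError("payload must be 1..max_size bytes")
    return bytes([len(payload)]) + payload
```

&lt;p&gt;Note that the docstring spends its words on the non-obvious: the precondition the types cannot enforce, and the fact that the caller must not add the length byte itself.&lt;/p&gt;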

&lt;p&gt;While I do this, I always have two aspects in mind:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What actual use does the element have for a potential user and which goals would she target?&lt;/li&gt;
&lt;li&gt;How will I possibly implement this element?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And it is very important not to get distracted by the implementation and create a bad API that merely wraps the underlying technology instead of resembling the tasks that a user of the class wants it to accomplish. But it is also important not to create APIs that cannot be implemented in any reasonable way, or performance will suffer considerably. You should avoid leaky abstractions at (nearly) all costs, but there is often a limit to this. &lt;em&gt;(This is often worth prototyping in hacker mode.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When I am done writing the docs, I begin writing the tests and see how my idea of how the API might be put to use actually performs. I use the docs I have written and the side effects and corner cases I have described to derive test cases, and therefore get pretty decent coverage.&lt;/p&gt;
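&lt;p&gt;To sketch how documented corner cases turn into test cases, consider this small, hypothetical function (invented for illustration): each corner case named in the docstring maps directly onto one assertion.&lt;/p&gt;

```python
def extension_of(name: str) -> str:
    """Return the file extension of ``name``, without the leading dot.

    Documented corner cases: a name with no dot yields an empty string,
    a trailing dot yields an empty string, and only the last dot counts.
    """
    _, dot, ext = name.rpartition(".")
    return ext if dot else ""

# Each documented corner case above translates directly into one test:
def test_extension_of():
    assert extension_of("archive.tar.gz") == "gz"  # only the last dot counts
    assert extension_of("README") == ""            # no dot
    assert extension_of("trailing.") == ""         # trailing dot
```

&lt;p&gt;Reading the docstring and the tests side by side is exactly the coverage check described above: if a behaviour is worth documenting, it is worth a test, and vice versa.&lt;/p&gt;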

&lt;p&gt;I often draw some rough pseudo-UML to visualize the dependencies and relationships.&lt;/p&gt;

&lt;p&gt;When actually implementing the functionality, I apply all the lessons I have learned from Clean Code. I keep the docs in mind and update them when necessary. This may also lead to redesigning the API and therefore the tests. I consciously take the risk that implementation details that do not fit into the API are costly, because I have experienced many times that this approach is worth it: it results in well-architected, maintainable, clean and, most importantly, easy-to-use code.&lt;/p&gt;

&lt;h1&gt;Final notes&lt;/h1&gt;

&lt;p&gt;As stated in the beginning, it is most important to distinguish these two modes of programming and to apply the correct one in each situation. I would also advise against mixing the two: do not write hacky code in a clean code base, and do not write clean code while hacking. It is just not worth it, because you either harm your code base's quality or are less productive than you could be.&lt;/p&gt;

&lt;p&gt;Only hack in fresh, isolated code bases that are stripped down to the minimum. This of course requires you to isolate parts and think in terms of good, small components to build your software from, which is valuable in itself.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
