<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kaspar Von Grünberg</title>
    <description>The latest articles on Forem by Kaspar Von Grünberg (@kvgruenberg).</description>
    <link>https://forem.com/kvgruenberg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F421451%2Fc371fc6a-19a9-4986-83ef-73ce02e4443d.jpg</url>
      <title>Forem: Kaspar Von Grünberg</title>
      <link>https://forem.com/kvgruenberg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kvgruenberg"/>
    <language>en</language>
    <item>
      <title>Oops, we're multi-cloud. A hitchhiker's guide to surviving.</title>
      <dc:creator>Kaspar Von Grünberg</dc:creator>
      <pubDate>Wed, 10 Feb 2021 15:00:11 +0000</pubDate>
      <link>https://forem.com/kvgruenberg/oops-we-re-multi-cloud-a-hitchhiker-s-guide-to-surviving-jom</link>
      <guid>https://forem.com/kvgruenberg/oops-we-re-multi-cloud-a-hitchhiker-s-guide-to-surviving-jom</guid>
      <description>&lt;p&gt;Over the last few years, enterprises have adopted multicloud strategies in an effort to increase flexibility and choice and reduce vendor lock-in. According to Flexera's 2020 State of the Cloud Report most companies embrace multicloud, with 93 percent of enterprises having a multicloud strategy. In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers. Multicloud makes so many things more complicated that you need a damn good reason to justify this. At &lt;a href="https://humanitec.com"&gt;Humanitec&lt;/a&gt;, we see hundreds of ops and platform teams a year, and I am often surprised that there are several valid reasons to go multi-cloud. I also observe that those teams which succeed are those that take the remodelling of workflows and tooling setups seriously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is multicloud computing?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Put simply, multicloud means that an application, or several parts of it, runs on different cloud providers. These may be public or private but typically include at least one public provider. It may mean that data storage or specific services run on one cloud provider and others on another, or that your entire setup runs on different cloud providers in parallel. This is distinct from hybrid cloud, where some components run on-premises while other parts of your application run in the cloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why adopt the multicloud approach?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you don’t have a very specific reason, I usually recommend staying well away from multi-cloud. As the author and enterprise architect &lt;a href="https://martinfowler.com/articles/oss-lockin.html"&gt;Gregor Hohpe&lt;/a&gt; puts it: “Excessive complexity is nature's punishment for organizations that are unable to make decisions.” Multicloud significantly increases the complexity around developer workflows, staffing, tooling, and security. The core risk remains that it adds redundancy to your workflows. If you were already struggling to manage dozens of deployment scripts in different versions for one target infrastructure, all of this doubles as you go multi-cloud.&lt;/p&gt;

&lt;p&gt;Often multicloud happens involuntarily. Legacy is a common culprit: generations of teams chose particular vendors based on what they needed at a particular time. As people left, new people came on board and added more vendors based on personal preferences and skill sets, without decommissioning the legacy cloud solutions. This can also happen when a company is acquired by another with its own vendor preferences.&lt;/p&gt;

&lt;p&gt;At Humanitec, we analyze and work with hundreds of platform teams and get a close-up view of their operational setups. It felt counterintuitive, but during the last two years I have come across several valid and compelling reasons to adopt multicloud:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Avoid cloud-vendor lock-in&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Companies may opt for a multicloud approach to avoid the risk of cloud vendor lock-in. If your cloud-based applications depend on proprietary capabilities specific to your cloud platform’s offerings (Amazon Kinesis, for example), this can leave you in a state of cloud vendor lock-in. You are then beholden to product changes and price increases without recourse. Having data locked in to a single provider also increases the risk if something goes wrong. Further, using only one cloud provider could prove constraining as a company grows. However, as Gregor Hohpe asserts:&lt;/p&gt;

&lt;p&gt;“Many enterprises are fascinated with the idea of portable multicloud deployments and come up with ever more elaborate and complex (and expensive) plans that'll ostensibly keep them free of cloud provider lock-in. However, most of these approaches negate the very reason you'd want to go to the cloud: low friction and the ability to use hosted services like storage or databases.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature and pricing optimization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Critically, the use of multiple vendors provides the opportunity to pick the best bits of various cloud providers. Every cloud vendor is different when it comes to features and pricing. One may excel at integration with specific technologies, one is better at hosting VMs, another may offer better support, and yet another may be cheaper. A company may choose a more expensive cloud provider because it offers greater security for sensitive data while using another for less critical data.&lt;/p&gt;

&lt;p&gt;Further, while workloads can be built to be vendor neutral, some are better served from specific cloud platforms. Apps that use APIs native to AWS such as Alexa skills are best served by using Amazon Web Services, for example. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance and risk minimization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Risk minimization and compliance requirements are common in mission-critical sectors like public infrastructure, healthcare, and banking. Healthcare organizations, for example, may be required under HIPAA regulations to store patient data on-premises or in a private cloud. Such industries may also opt to run parallel structures on different cloud providers, so that if one of the major cloud vendors were to fail, they could simply continue running on another provider in a disaster recovery case.&lt;/p&gt;

&lt;p&gt;Other examples can be found in the banking sector. Compliance there, for example, requires that systems cannot be subject to price shocks. An alternative must therefore be available in the hypothetical event that a cloud provider suddenly increases its prices sharply.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Legal requirements and geographical enforcement&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes business activity in a particular country requires a multicloud approach. While you typically run your architecture by default on one big US vendor (AWS, Azure, GCP), to operate your app in China or Russia you’d be required to run on a Chinese or Russian vendor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edge computing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Distributed models such as edge computing are a common way to support mission-critical IoT applications that use predictive analytics, such as autonomous vehicles, smart city technology, industrial manufacturing, and remote monitoring of off-shore oil platforms. Edge computing is also used to pre-process data such as video and mobile data locally, and it is useful in scenarios that require low latency with little to no lag or downtime. While edge computing does the heavy lifting of collecting, processing, and analyzing at the edge, the data that goes to the cloud for deeper and historical processing is still significant in size (data from a connected car alone is about 6TB) and can be more manageable in a designated cloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges with multicloud setups&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As mentioned before, such multicloud environments can create an abundance of unanticipated challenges.&lt;/p&gt;

&lt;p&gt;As &lt;a href="https://twitter.com/kelseyhightower/status/1164203419822772224"&gt;Kelsey Hightower notes&lt;/a&gt;, it may significantly compromise your effectiveness as an organization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skilling up and the shortage of cloud specialists&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers who work with the same cloud service provider over time gain deep domain knowledge of its specific tools, processes, and configurations. This knowledge is hard-baked into a company’s skills and accomplishments and generates significant value. Shifting to multiple vendors shatters this expertise, as developers now have to contend with upskilling, relearning, and certification. How transferable is the existing skillset? What is the cost (time, money, and developer frustration) of upskilling? Do you need to hire new staff with specialist knowledge? I find that even with comparatively simple things like authorization management there are huge differences between providers. Ever tried applying what you learned about IAM on GCP to AWS? Good luck.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The lack of a single interface&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multicloud setups are hard to manage because it’s hard to keep track of what is running where. This includes network connectivity, workload allocation across clouds, data transmission rates, storage, archiving, backup, and disaster recovery. It can be hard to integrate information from different sources and gain an understanding of the movements within and across multiple clouds. Each cloud provider may have its own dashboard with specific APIs and its own interface rules, procedures, and authentication protocols. There’s also the challenge of migrating and accessing data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Management of multiple delivery and deployment processes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A multicloud setup requires multiple deployment pipelines, which adds complexity. From managing config files to database provisioning to deployment pipelines: with every cloud provider, the amount of work increases substantially, diverting developers from other essential tasks. More complexity also increases the workload on ops teams. More tooling and more layers of abstraction also mean a greater risk of misconfiguration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration, portability, and interoperability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The integration of multiple cloud vendors with your existing application and databases can be challenging. Clouds typically differ when it comes to APIs, functions, and containerization. Are you able to migrate components to more than one cloud without having to make major modifications in each system? How does interoperability flow between your public and private clouds? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud sprawl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Techopedia defines cloud sprawl as the uncontrolled proliferation of an organization’s cloud presence. It happens when an organization inadequately controls, monitors, and manages its different cloud instances, resulting in numerous individual instances that may then be forgotten but continue to use up resources and incur costs, since most organizations pay for public cloud services. No vendor is going to tell you you’re running on too many machines while the money keeps coming in.&lt;/p&gt;

&lt;p&gt;As well as unnecessary costs, cloud sprawl brings the risk of data-integrity and security problems. Unmanaged or unmonitored workloads may be running in QA or dev/test environments using real production data, and they are potential attack vectors for hackers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As alluded to with cloud sprawl, one of the biggest challenges you’re likely to face with a multicloud environment is security. Security protocols may differ between clouds and may require extensive customization. It’s essential to be able to see the security of the whole multicloud environment at any one time to prevent cyber attacks and respond to security vulnerabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Siloed vendor features and services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While there are similarities between cloud providers, each vendor also offers siloed features and services designed to make its product more compelling than those of its competitors. A multicloud environment becomes more complex when a lack of equivalent features creates difficulties such as replicating architecture across clouds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to survive multicloud challenges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standardize to the lowest common denominator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The more things vary between clouds, the worse it gets for you. Look out for whatever helps you standardize layers so you have to worry less about switching context between clouds. There are certain scenarios where this is impossible; the authentication differences mentioned above are an example. You just have to deal with those, and it will require specializing at least one colleague.&lt;/p&gt;

&lt;p&gt;But as you move higher up the stack, there are strategies you can use. Provisioning resources with IaC scripts or Terragrunt is one of them. In my opinion, making sure workloads are strictly containerized is all but mandatory: no containers, no multi-cloud. Next, make sure you use the managed Kubernetes offerings (and only those). Yes, K8s can be a beast, but it equalizes configurations and practices across the board, and there are great operating layers (hint: such as Humanitec) that can help you manage this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define a single source of truth for configurations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Dealing with configurations is hard enough on one cloud already, especially if you work with unstructured scripts or you don’t have your shit together when it comes to versioning. I wrote a piece on this recently if you want to dive in. But if you streamline this so that you don’t use unstructured scripts but instead define a baseline template that works for all clouds (with Kubernetes as a common denominator), this gets a lot easier. If a developer wants to change anything, she applies changes to this template through a CLI, API, or UI, and at deployment time you create manifests for each deploy. You then route the manifests to a specified cluster and simply save the information deploy = manifest + target-cluster. This way you have a defined, auditable structure and history of all deploys across clouds. Maybe you also want to build a dashboard that shows which state of which config is applied to which cluster? That will make things a lot easier.&lt;/p&gt;
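&lt;p&gt;A minimal sketch of that flow, with purely illustrative names: render a per-environment manifest from the baseline template, then record deploy = manifest + target-cluster so you get an auditable history across clouds.&lt;/p&gt;

```python
import copy
import hashlib
import json

# Hypothetical baseline template: one definition that works for all clouds,
# with Kubernetes as the common denominator. All names are illustrative.
BASELINE = {
    "image": "registry.example.com/checkout:1.4.2",
    "replicas": 2,
    "env": {"LOG_LEVEL": "info"},
}

def render_manifest(baseline, overrides):
    """Apply per-environment overrides to the baseline template."""
    manifest = copy.deepcopy(baseline)
    manifest["env"].update(overrides.get("env", {}))
    for key in ("image", "replicas"):
        if key in overrides:
            manifest[key] = overrides[key]
    return manifest

def record_deploy(manifest, target_cluster, history):
    """Persist deploy = manifest + target-cluster for an auditable history."""
    entry = {
        "cluster": target_cluster,
        "manifest": manifest,
        "digest": hashlib.sha256(
            json.dumps(manifest, sort_keys=True).encode()
        ).hexdigest()[:12],
    }
    history.append(entry)
    return entry

history = []
manifest = render_manifest(BASELINE, {"replicas": 4, "env": {"LOG_LEVEL": "debug"}})
record_deploy(manifest, "gke-europe-west3-prod", history)
```

&lt;p&gt;A real implementation would render full Kubernetes manifests and persist the history in a database, but the shape of the record is the point: every deploy is the combination of one versioned config state and one target cluster.&lt;/p&gt;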

&lt;p&gt;&lt;strong&gt;Abstract multi-cloud entirely from application developers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you think multi-cloud is only draining for the ops team, you are wrong. It heavily disrupts the workflows of your application development team too. Waiting times for new environments, colleagues, or pieces of infrastructure increase significantly. I wrote a piece on how those little minutes pile up. What I see a lot is teams responding with endless training slots to get their front-end developers up to speed on Helm charts. The truth is they don’t give a shit. They are benchmarked against their ability to write TypeScript, and YAML doesn’t cut it. Use the config management approach outlined above, get them a slick UI or CLI, and let the only touchpoint they have with the clouds be specifying what environment type in which cloud vendor they want to spin up. Don’t do this, and your ops team will be an extended help desk drowning in TicketOps tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Look at Internal Developer Platforms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because they do pretty much all of the above out of the box. They standardize configs, help ops teams orchestrate infrastructure, log deploys so every developer can roll back, provide an RBAC layer on top of all clouds to manage permissions, and help you manage environments and environment variables across clouds, workloads, and applications. The community page on &lt;a href="https://internaldeveloperplatform.org"&gt;Internal Developer Platforms&lt;/a&gt; provides a good overview of what they actually are.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Don’t call us, we’ll call you: Automate delivery workflows with webhooks</title>
      <dc:creator>Kaspar Von Grünberg</dc:creator>
      <pubDate>Thu, 14 Jan 2021 10:22:52 +0000</pubDate>
      <link>https://forem.com/kvgruenberg/don-t-call-us-we-ll-call-you-automate-delivery-workflows-with-webhooks-1jij</link>
      <guid>https://forem.com/kvgruenberg/don-t-call-us-we-ll-call-you-automate-delivery-workflows-with-webhooks-1jij</guid>
      <description>&lt;p&gt;🗣🗣 Announcing Humanitec's new feature "Webhooks" and a webinar on automating delivery workflows on Kubernetes. 🗣🗣&lt;/p&gt;

&lt;p&gt;Super excited to announce that we’re shipping webhooks for optimizing your continuous delivery workflows when you are building your Internal Developer Platform.&lt;/p&gt;

&lt;p&gt;👨‍🌾 Users can set up webhooks to notify other tools and systems when an event happens in their Humanitec organisation, such as the start or completion of a deployment. The actions could be as simple as posting a Slack message or as complex as running a suite of integration tests or initializing data.&lt;/p&gt;
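&lt;p&gt;As a sketch of the receiving side, here is what such an integration might look like. The event names and payload shape below are invented for illustration only; consult the Humanitec docs for the actual webhook schema.&lt;/p&gt;

```python
import json

# Hypothetical event names and payload shape -- not the real schema.
def handle_webhook(raw_body: bytes) -> str:
    """Turn an incoming webhook body into a notification message."""
    event = json.loads(raw_body)
    kind = event.get("type")
    env = event.get("environment", "unknown")
    if kind == "deployment.started":
        return f"🚀 Deployment to {env} started"
    if kind == "deployment.finished":
        return f"✅ Deployment to {env} finished -- kicking off integration tests"
    return f"ℹ️ Unhandled event: {kind}"

body = json.dumps({"type": "deployment.finished", "environment": "staging"}).encode()
print(handle_webhook(body))
```

&lt;p&gt;In practice this handler would sit behind an HTTP endpoint and forward the message to Slack or trigger a test suite, but the core is just mapping events to actions.&lt;/p&gt;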

&lt;p&gt;🦴 Check it out on our docs site or join this webinar, where we show you hands-on! Link to the webinar in the comments.&lt;/p&gt;

&lt;p&gt;Check the announcement: &lt;a href="https://humanitec.com/blog/webhooks"&gt;https://humanitec.com/blog/webhooks&lt;/a&gt;&lt;br&gt;
Let me show it to you: &lt;a href="https://humanitec.com/webinars/automating-delivery-workflows"&gt;https://humanitec.com/webinars/automating-delivery-workflows&lt;/a&gt;&lt;br&gt;
Sign up for a free trial: &lt;a href="https://humanitec.com/lp/request-a-free-trial"&gt;https://humanitec.com/lp/request-a-free-trial&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>programming</category>
    </item>
    <item>
      <title>Every second matters - the hidden costs of unoptimized developer workflows</title>
      <dc:creator>Kaspar Von Grünberg</dc:creator>
      <pubDate>Mon, 20 Jul 2020 09:32:11 +0000</pubDate>
      <link>https://forem.com/kvgruenberg/every-second-matters-the-hidden-costs-of-unoptimized-developer-workflows-25jf</link>
      <guid>https://forem.com/kvgruenberg/every-second-matters-the-hidden-costs-of-unoptimized-developer-workflows-25jf</guid>
      <description>&lt;p&gt;We tend to underestimate how inefficient workflows impact developer productivity and distract from the task at hand. In this article we explain the first- and second-order effects of inefficient developer workflows. We share real-life examples and analyse at what point you should invest in automation vs. doing things manually.&lt;/p&gt;

&lt;p&gt;"We've already automated our setup so much, there is nothing left to do." We hear this sentence frequently when talking to DevOps practitioners and engineering managers. No-one is ever automated to a degree that is enough. This article shares best practices and how these relate to the order costs of non-automated tasks. &lt;/p&gt;

&lt;h2&gt;Seconds pile up&lt;/h2&gt;

&lt;p&gt;Unlocking a smartphone using a 5-digit PIN takes 2.21 seconds, including failed attempts. It gets more interesting if you multiply this by the number of times the average adult (US data) unlocks their phone every day &lt;a href="https://www.statista.com/statistics/1050339/average-unlocks-per-day-us-smartphone-users/"&gt;(79 times)&lt;/a&gt;. Multiply this by 365 and you end up with roughly 63,700 seconds, which equals about 1,060 minutes or nearly 18 hours a year spent unlocking your freaking phone. That's an entire (waking) day that you spend unlocking your phone!&lt;/p&gt;

&lt;p&gt;You buy the new iPhone with Face ID. All of a sudden you have almost one extra day for leisure, every year.&lt;/p&gt;

&lt;p&gt;Compare the mobile phone scenario to a 10-person engineering team with an average cost per headcount of $70 per hour. You end up with annual costs of roughly $12,000. Beware of seconds, they matter.&lt;/p&gt;
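&lt;p&gt;The back-of-the-envelope math, for anyone who wants to check it:&lt;/p&gt;

```python
# Reproducing the figures above from their inputs.
SECONDS_PER_UNLOCK = 2.21
UNLOCKS_PER_DAY = 79        # average US adult, per the Statista figure above
HOURLY_RATE = 70            # USD per engineering headcount, as above
TEAM_SIZE = 10

seconds_per_year = SECONDS_PER_UNLOCK * UNLOCKS_PER_DAY * 365
hours_per_year = seconds_per_year / 3600
team_cost = hours_per_year * HOURLY_RATE * TEAM_SIZE

print(f"{seconds_per_year:,.0f} s = {hours_per_year:.1f} h per person per year")
print(f"Team of {TEAM_SIZE}: ${team_cost:,.0f} per year")
```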

&lt;h2&gt;The first-order effect is known, but the follow-on effects are ignored&lt;/h2&gt;

&lt;p&gt;In the above scenario we looked at direct time lost to inefficiency. But an often overlooked component is distraction. Whatever takes you out of deep-focus mode is fatal to your productivity. Even little distractions lead to an enormous time to resume the interrupted task, according to &lt;a href="https://web.archive.org/web/20150206014318/https://www.cc.gatech.edu/~vector/papers/sqj.pdf"&gt;a study&lt;/a&gt; from the Georgia Institute of Technology. The top 10% of people manage to get back into focus after 1 minute, but the average person needs 15 minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DgewwEkT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efdd4de2a67a53bf419759d_QcH_pAMspbbdVvBCDdZ57EAYm9klm-Y96CxpxdjKIzwvGqWqIY-scc85ZQFr_x_0f9_nCKooyx7HH2g2qtQZpIr_bGRq-H1bI-hxd-mgmg9X33sw30PBjWUuTdTxF6e-1gSw4zAI.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DgewwEkT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efdd4de2a67a53bf419759d_QcH_pAMspbbdVvBCDdZ57EAYm9klm-Y96CxpxdjKIzwvGqWqIY-scc85ZQFr_x_0f9_nCKooyx7HH2g2qtQZpIr_bGRq-H1bI-hxd-mgmg9X33sw30PBjWUuTdTxF6e-1gSw4zAI.png" alt="Distraction in software development"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://web.archive.org/web/20150206014318/https://www.cc.gatech.edu/~vector/papers/sqj.pdf"&gt;Source: Parnin, C. &amp;amp; Rugaber, S. (2010)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you apply this to our calculation above, we end up with a 6-digit cost for an engineering team of 10 using PINs rather than Face ID. It's worth hunting down this kind of time waste in your &lt;a href="https://humanitec.com/blog/7-things-that-kill-your-developer-productivity"&gt;developer workflow&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Reasons for losing time in the engineering workflow&lt;/h2&gt;

&lt;h3&gt;Insufficient test automation&lt;/h3&gt;

&lt;p&gt;You open a pull request, there is no automated integration test, and the service goes into production after a manual review failed to identify the edge case. It fails in one specific scenario that surfaces in a user complaint a week later. By that time, you have already moved on to another feature, which you now need to interrupt to get back into the specific task from a week ago, and on goes the wheel.&lt;/p&gt;

&lt;p&gt;The first-order effect in this case is the time spent fixing the feature a second time. The second-order effect is the time spent getting into the code again. &lt;a href="https://www.researchgate.net/figure/255965523_fig1_Figure-3-IBM-System-Science-Institute-Relative-Cost-of-Fixing-Defects"&gt;Research by IBM&lt;/a&gt; demonstrates how the cost of fixing a bug increases through the stages of the lifecycle. The better the hit rate of your QA automation, the fewer of these cost effects you produce.&lt;/p&gt;

&lt;h3&gt;Vulnerability scans before commit vs. after&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/nigelsimpson/"&gt;Nigel Simpson&lt;/a&gt; is the Director of Enterprise Technology at a Fortune 100 company. He's laser focused on reaching the highest degree of developer experience possible for his teams. His article &lt;a href="https://www.linkedin.com/pulse/self-aware-software-lifecycle-nigel-simpson/"&gt;The Self-Aware Software Lifecycle&lt;/a&gt; is definitely worth a read as are all of his other articles. Previously his intervention teams would ship a feature, then colleagues in security would analyse the shipped packages for open source vulnerabilities. They'd reject the package due to risk profiles meaning the developers  had to fix the package again and ship it once more. Nigel introduced &lt;a href="http://snyk.io/"&gt;Snyk&lt;/a&gt;, which allows developers to get these screenings in real time while committing. Nigel explained the first and second order affects to me in his own words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Developers didn't have a sense of ownership over the security of the applications they were developing. Since a security organization had established security analysis as a service, and were subject matter experts in vulnerability remediation, developers didn't focus on security until late in the development process. As a result, the findings of the security review generated re-engineering disruption shortly before applications were due to be released. By introducing a developer-oriented security analysis tool, Snyk, developers gained actionable intelligence about vulnerabilities, enabling them to take action sooner with less disruption to the development schedule."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Time spent waiting for container builds&lt;/h3&gt;

&lt;p&gt;Travis CI takes twice as long as Semaphore to build an average web app (tested on a containerized 7-service PHP ecommerce app). We have 10 developers at Humanitec and do around 110 builds in an average month, which take us 743 minutes. We are using GitHub Actions. If we switched to a provider optimized for lower container build times, such as Semaphore, we would save 196 minutes every month. These are your first-order effects. You could of course still do another task in the meantime, but if you factor in the cost of distraction you end up somewhere in this range.&lt;/p&gt;
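&lt;p&gt;Translating those minutes into money, using the $70-per-hour rate from the earlier example (an assumption, and counting only the developer who is actually waiting on the build):&lt;/p&gt;

```python
# Rough first-order savings from faster container builds.
MINUTES_SAVED_PER_MONTH = 196   # figure from the comparison above
HOURLY_RATE = 70                # USD, assumption carried over from earlier

hours_saved_per_year = MINUTES_SAVED_PER_MONTH * 12 / 60
savings = hours_saved_per_year * HOURLY_RATE
print(f"{hours_saved_per_year:.1f} h/year, roughly ${savings:,.0f} in waiting time")
```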

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NaQUa2fc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efdd4df65fee52b543fa046_c72fafoSQFltd5x4lQV-bHsUrOicqZp7C0b2Sb79KYRFFnWuqq_63_72sTDFuEJQUIDdosD91f_TeSP0-9HnA6yJFMcpdQ7SwD3c-BhM9HetrwJEBrtyn3urKsN1yCO9tVYVPUr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NaQUa2fc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efdd4df65fee52b543fa046_c72fafoSQFltd5x4lQV-bHsUrOicqZp7C0b2Sb79KYRFFnWuqq_63_72sTDFuEJQUIDdosD91f_TeSP0-9HnA6yJFMcpdQ7SwD3c-BhM9HetrwJEBrtyn3urKsN1yCO9tVYVPUr1.png" alt="Cointainer Build Time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://web.archive.org/web/20150206014318/https://www.cc.gatech.edu/~vector/papers/sqj.pdf"&gt;Source: Parnin, C. &amp;amp; Rugaber, S. (2010)&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Static vs. dynamic environment setups&lt;/h3&gt;

&lt;p&gt;When developing a web application, the defined development environment consists of: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Application configuration (e.g. environment variables, access keys to 3rd party APIs).&lt;/li&gt;
&lt;li&gt;  Infrastructure configuration (e.g. a K8s cluster, DNS configuration, SSL certificate, databases).&lt;/li&gt;
&lt;li&gt;  Containerized code as build artefacts from your CI pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If someone is told to &lt;a href="https://humanitec.com/environment-management"&gt;spin up a new environment&lt;/a&gt;, the first time this is usually done manually or semi-manually. The second time, when you set up your staging environment, you start over again. Someone needs a feature branch? Same config, same work. This is what we call a static environment setup. It comes with several negative first- and second-order effects.&lt;/p&gt;

&lt;p&gt;Conversely, dynamic environments allow you to create any environment with any configuration for as long as needed. Afterwards, they can be torn down to minimize running costs.&lt;/p&gt;

&lt;p&gt;This is only possible if: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The environment setup is scripted.&lt;/li&gt;
&lt;li&gt;  All dependencies are extracted into configuration variables.&lt;/li&gt;
&lt;li&gt;  Resources like K8s clusters and databases can be provisioned and customized for your application.&lt;/li&gt;
&lt;/ul&gt;
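&lt;p&gt;The prerequisites above can be sketched as a single function: an environment is nothing but a name plus configuration variables, so any environment can be created, customized, and torn down on demand. All names here are illustrative.&lt;/p&gt;

```python
# Minimal sketch of a scripted environment setup, assuming every
# dependency has been extracted into a configuration variable.
def environment_spec(name: str, base_config: dict, **overrides) -> dict:
    """Derive a complete environment definition from a name and config vars."""
    config = {**base_config, **overrides}
    return {
        "name": name,
        "cluster": f"k8s-{name}",          # cluster provisioned per environment
        "dns": f"{name}.example.com",      # DNS entry derived from the name
        "config": config,
    }

base = {"DB_SIZE": "small", "API_KEY_REF": "vault://third-party"}
feature_env = environment_spec("feature-login", base, DB_SIZE="tiny")
```

&lt;p&gt;Because everything is derived from the name and the config dict, a feature-branch environment is one function call, and tearing it down is just deleting what the spec created.&lt;/p&gt;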

&lt;p&gt;As a first-order effect, it suddenly becomes possible to create and use additional environments on demand. This enables feature development and feature testing without blocking the static environments for others.&lt;/p&gt;

&lt;p&gt;The second-order effects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Less time spent seeking specific DevOps and infrastructure knowledge.&lt;/li&gt;
&lt;li&gt;  No waiting for someone else to build a new environment for you.&lt;/li&gt;
&lt;li&gt;  No need to ask colleagues for favours, which also means you can test sub-feature branches and reduce your error rate.&lt;/li&gt;
&lt;li&gt;  No need to pay for a permanent second static development environment just to develop and test two features at the same time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Scripting Hell&lt;/h3&gt;

&lt;p&gt;A lot of the teams we see script everything and tell us "it's stable, and if something comes up we go in and change it." If you go down the microservice alley, the YAML files start piling up. Add &lt;a href="https://humanitec.com/kubernetes"&gt;Kubernetes&lt;/a&gt; on top and the pile keeps growing. Say you are running on top of GCP and they roll out an enforced update to their cluster configs. So you go into every file and change the setting and its dependencies. Picture a really small, simple app with only 5 microservices: that is already 10 places you are working on now.&lt;/p&gt;
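&lt;p&gt;A hypothetical helper for exactly this situation, a provider-enforced change that must hit the same key in every service's config file. This is a crude regex sketch, not a substitute for structured config management; all file names and keys are invented.&lt;/p&gt;

```python
import re
import tempfile
from pathlib import Path

def bump_setting(directory, key, new_value):
    """Rewrite `key: value` lines in every *.yml file under `directory`;
    return the number of files changed."""
    pattern = re.compile(rf"^(\s*{re.escape(key)}\s*:\s*).*$", re.MULTILINE)
    changed = 0
    for path in Path(directory).rglob("*.yml"):
        text = path.read_text()
        new_text = pattern.sub(rf"\g<1>{new_value}", text)
        if new_text != text:
            path.write_text(new_text)
            changed += 1
    return changed

# Demo on a throwaway directory with two service configs.
demo = Path(tempfile.mkdtemp())
(demo / "svc-a.yml").write_text("cluster:\n  version: '1.17'\n")
(demo / "svc-b.yml").write_text("cluster:\n  version: '1.17'\nreplicas: 2\n")
n = bump_setting(demo, "version", "'1.18'")
print(f"updated {n} files")
```

&lt;p&gt;The point is not the script itself but the failure mode it hints at: as soon as the same fact lives in N files, every enforced change is an N-file operation, and the files you forget are the ones that bite you.&lt;/p&gt;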

&lt;p&gt;Your first order effects are easy to calculate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Understand the problem (30 minutes), &lt;/li&gt;
&lt;li&gt;  Fix all the files (30 minutes), &lt;/li&gt;
&lt;li&gt;  Get back into the other task (15 minutes).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The second order effects are even more significant: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Key-person dependency: the calculation above assumes you know exactly which files to update. If you don't, you end up in a badly documented setting, trying to find your way through the scripts.&lt;/li&gt;
&lt;li&gt;  Increased security incidents: our favourite example is database backup scripts that you forgot to update. (The second-order effects of a hacked MongoDB are something I probably don't have to calculate for you.)&lt;/li&gt;
&lt;li&gt;  It's very hard to achieve true continuous delivery with scripting. Even ecommerce giant Zalando has a platform team of 110 people scripting their workflows full-time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;When to start investing in further automation?&lt;/h2&gt;

&lt;p&gt;The graphic below explores whether automation is worth the effort, based on two questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  How often do you repeat the task in a given time-frame?&lt;/li&gt;
&lt;li&gt;  How long does every run-through take?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Multiply the two and you get the most time you should spend automating the task. Keep the second-order effects in mind and it really pays off: you'll be working not only faster but more effectively.&lt;/p&gt;
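&lt;p&gt;The chart's logic reduces to one line: the time a task would consume over some horizon is the ceiling for the time worth spending on automating it. A sketch using the chart's five-year horizon, with an invented example task:&lt;/p&gt;

```python
def automation_budget_hours(times_per_day, seconds_saved, horizon_years=5):
    """Upper bound on hours worth spending to automate a recurring task."""
    return times_per_day * seconds_saved * 365 * horizon_years / 3600

# E.g. a 30-second manual step performed 5 times a day:
budget = automation_budget_hours(5, 30)
print(f"Worth up to {budget:.0f} hours of automation work")
```

&lt;p&gt;This only counts the first-order effect; add the distraction costs discussed earlier and the real budget is larger.&lt;/p&gt;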


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SRVSCqgB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efde128e61f7089dab69aad_is_it_worth_the_time_2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRVSCqgB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5efde128e61f7089dab69aad_is_it_worth_the_time_2x.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://xkcd.com/1205/"&gt;xkcd.com&lt;/a&gt;; (&lt;a href="https://creativecommons.org/licenses/by-nc/2.5/"&gt;CC BY-NC 2.5&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion 
&lt;/h2&gt;

&lt;p&gt;Even for my very personal day-to-day work, I set aside an hour a month to look at how I can optimize. I will sort the applications on my devices for fast access depending on usage, I will look into my project management setup, or simply the way I organize my inbox.&lt;/p&gt;

&lt;p&gt;I very much encourage every team to regularly (maybe once a quarter) spend an afternoon together and reflect on your workflow. We do backlog grooming for everything else, right? Take the time and think about how you want to work together as a team. It will pay off faster than you think.&lt;/p&gt;

&lt;p&gt;If you liked the article or if you want to discuss &lt;a href="https://humanitec.com/blog/7-things-that-kill-your-developer-productivity"&gt;developer workflows&lt;/a&gt; in more detail, feel free to register for one of our free &lt;a href="https://humanitec.com/webinars"&gt;webinars&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>7 things in your DevOps workflow that kill your developer productivity</title>
      <dc:creator>Kaspar Von Grünberg</dc:creator>
      <pubDate>Wed, 01 Jul 2020 11:18:03 +0000</pubDate>
      <link>https://forem.com/kvgruenberg/7-things-in-your-devops-workflow-that-kill-your-developer-productivity-4oa1</link>
      <guid>https://forem.com/kvgruenberg/7-things-in-your-devops-workflow-that-kill-your-developer-productivity-4oa1</guid>
      <description>&lt;p&gt;There's a lot written about how the way developers structure their daily work can cause unproductivity. An example is when unnecessary meetings are scheduled across the day so nobody can get into deep focus mode. Today I want to look into the biggest killers in developer productivity: the way you configure and setup your DevOps workflow. In almost all situations I've come across there were some quick-wins that help you avoid most of the problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Killer #1: Go all in on microservices without the proper tooling
&lt;/h2&gt;

&lt;p&gt;When teams work in a monolithic setup everything sort of works. The toolchain is prepared to handle this one monolith well, but yes, changing one small thing requires the deployment of the whole monolith. End to end tests need to be run in order to verify that everything is still fine. The bigger the monolith is, the less efficient this will be. So the team goes ahead and adopts microservices. Their first experience is great, colleagues can work on individual services independently, deployment frequency goes up and everybody is happy.&lt;/p&gt;

&lt;p&gt;The problems start when teams get carried away with microservices and take the "micro" a little too seriously. From a tooling perspective you will now have to deal with a lot more YAML files and Dockerfiles, with dependencies between the variables of these services, routing issues, etc. They need to be maintained, updated, cared for. Your CI/CD setup, as well as your organizational structure and probably your headcount, needs a revamp.&lt;/p&gt;

&lt;p&gt;If you go into microservices for whatever reason, make sure you plan sufficient time to restructure your tooling setup and workflow. Just count the number of scripts in various places you need to maintain. Think about how long this will take, who is responsible and what tools might help you keep this under control. If you choose tools, make sure they have a community of users also using them for microservice setups.&lt;/p&gt;

&lt;h2&gt;
  
  
  Killer #2: Adopting Containers without a plan for externalising configuration
&lt;/h2&gt;

&lt;p&gt;Containerization is an awesome technology for a lot of situations. However it comes with a price tag and can have an impact on your productivity. Containers add overhead from a security perspective and through necessary configuration and environment management etc. They can also hurt your productivity and developer experience if you don't agree on certain conventions as a team.&lt;/p&gt;

&lt;p&gt;The most common mistake I see is building config files or &lt;a href="https://humanitec.com/blog/environment-configs-kubernetes"&gt;environment variables&lt;/a&gt; into your container. The core idea of &lt;a href="https://humanitec.com/blog/benefits-of-containerization"&gt;containerization&lt;/a&gt; is portability. By hard-coding configuration you will have to start writing files and pipelines for every single environment. You want to change a URL? Nice, go ahead and change this in 20 different places and then rebuild everything.&lt;/p&gt;

&lt;p&gt;Before you start using containers at scale and in production, sit down as a team and agree on the config conventions that are important to you. Make sure to consistently cover this in code reviews and retros. Retrofitting this after the fact is a pain.&lt;/p&gt;
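&lt;p&gt;A minimal sketch of what externalized configuration can look like in application code. The variable names (&lt;code&gt;API_URL&lt;/code&gt;, &lt;code&gt;DB_HOST&lt;/code&gt;, &lt;code&gt;DEBUG&lt;/code&gt;) are illustrative, not a convention:&lt;/p&gt;

```python
import os

def load_config() -> dict:
    """Read deployment-specific settings from the environment instead of
    baking them into the image; defaults only serve local development."""
    return {
        "api_url": os.environ.get("API_URL", "http://localhost:8080"),
        "db_host": os.environ.get("DB_HOST", "localhost"),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }
```

&lt;p&gt;The same image now runs unchanged in every environment; only the injected variables differ.&lt;/p&gt;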

&lt;h2&gt;
  
  
  Killer #3: Adopt Kubernetes the wrong way 
&lt;/h2&gt;

&lt;p&gt;All the cool kids are hyped about this open source project called &lt;a href="https://humanitec.com/kubernetes"&gt;Kubernetes&lt;/a&gt;. However, Kubernetes is hard to keep running and hard to integrate into your developer flow while keeping productivity and experience high. A lot can go wrong:&lt;/p&gt;

&lt;p&gt;Kubernetes worst case: Colleague XY really wanted to get his hands dirty and found a starter guide online. They set up a cluster on bare metal and it worked great with the test app. They then started migrating the first application and asked their colleagues to start interacting with the cluster using &lt;a href="https://humanitec.com/blog/deploy-with-kubectl-hands-on-with-kubernetes"&gt;kubectl&lt;/a&gt;. Half of the team is now preoccupied learning this new technology. The poor person maintaining the cluster will be on it full-time the second the first production workload hits the fan. The &lt;a href="https://humanitec.com/blog/continuous-integration-vs-continuous-delivery-vs-continuous-deployment"&gt;CI/CD&lt;/a&gt; setup is completely unprepared for dealing with this and overall productivity goes down as the entire team tries to juggle Kubernetes.&lt;/p&gt;

&lt;p&gt;What can be done to prevent this: Kubernetes is an awesome technology and can help achieve a PaaS like developer experience if done right. After all, it descends from Borg - the platform Google built to make it easy for their Software Engineers to build massively scalable applications. Thus, it's kind of an open source interpretation of Google's internal platform.&lt;/p&gt;

&lt;p&gt;Best practices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Wherever possible teams should not set up and run the barebone cluster themselves but use a managed Kubernetes service. Read the reviews on which managed Kubernetes service suits your needs best. At the time of writing this article, &lt;a href="https://humanitec.com/blog/how-to-set-up-a-kubernetes-cluster-on-gcp"&gt;Google Kubernetes Engine (GKE)&lt;/a&gt; is by far the best from a pure tech perspective (though the permission schema is still a pain - what is your problem with permissions, Google?), closely followed by Azure Kubernetes Service (AKS). Amazon's Elastic Kubernetes Service (EKS) is racing to catch up.&lt;/li&gt;
&lt;li&gt; Use automation platforms or a Continuous Delivery API as offered by Humanitec. They allow you to run your workload on K8s out of sight of your developers. There is almost zero value in exposing everyone to the complexity of the entire setup. I know the argument that "everybody should be able to do everything", but the pace of change is so fast and the degree of managed automation so high that it really doesn't make sense.&lt;/li&gt;
&lt;li&gt; If teams really want developers to manage the Kubernetes cluster themselves, they should give them adequate time to really understand the architecture, design patterns, kubectl etc. and to really focus on this.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Killer #4: Forget to deal with Continuous Delivery 
&lt;/h2&gt;

&lt;p&gt;"Wait, I already have a CI Tool". There is a common misconception that the job is done well if there is a Continuous Integration setup. You are still missing &lt;a href="https://humanitec.com/blog/benefits-and-best-practices-of-continuous-delivery"&gt;Continuous Delivery&lt;/a&gt;! The confusion is not helped by a lot of these vendors coining the term "CI/CD tool" giving you the impression you've nailed Continuous Delivery if you have Jenkins, CircleCI etc. - that's not the case.&lt;/p&gt;

&lt;p&gt;A well-tuned Continuous Delivery setup, either self-scripted or "as-a-Service", is much more the "glue" in a team's toolchain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  It allows all the different components, from source control system to CI-Pipeline, from database to cluster and from DNS setup to IaC to be integrated into a streamlined and convenient developer experience.&lt;/li&gt;
&lt;li&gt;  It's a way to structure, maintain and manage the growing amount of yml and configuration scripts. If done well this allows your developers to dynamically spin up environments with the artefacts built by the CI-Pipeline and fully configured with databases provisioned and everything set up.&lt;/li&gt;
&lt;li&gt;  It can act as a version control system for configuration states with an auditable record on what is deployed where, in what config and it allows you to roll back and forth as well as manage blue/green/canary deploys. &lt;/li&gt;
&lt;li&gt;  Well-thought-through CD setups have a game-changing effect on developer productivity. They make developers self-serving with fewer dependencies within the team while increasing the maintainability of your setup. &lt;/li&gt;
&lt;li&gt;  Teams using these practices ship more frequently, faster, show overall higher performance and happiness. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Killer #5: Unmaintainable test automation in a limited test setup
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://humanitec.com/blog/robot-framework-to-improve-testing"&gt;Efficient testing is not possible without automation&lt;/a&gt;. With continuous delivery comes continuous responsibility to not break anything.\&lt;br&gt;
You need to continuously make sure to not fall into the &lt;a href="https://blogs.agilefaqs.com/2011/02/01/inverting-the-testing-pyramid/"&gt;trap of inverting your test pyramid&lt;/a&gt;. For this you need to be able to run the right kind of tests at the right point of your development lifecycle.&lt;/p&gt;

&lt;p&gt;Sufficient CI tooling will help you to put your unit and integration tests into the right place while CD tooling with configuration management and environment management helps you to run your automated end to end tests in a reliable way.&lt;/p&gt;

&lt;p&gt;Well done setups allow developers or testers to dynamically spin up environments that are preconfigured. Strictly externalize your configuration and make sure to have a configuration management that injects these variables at deployment time. This leads to a number of positive improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Run the right tests at the right time, while providing efficient feedback to the development team&lt;/li&gt;
&lt;li&gt;  Developers gain autonomy and you reduce key person dependencies,&lt;/li&gt;
&lt;li&gt;  QAs are now able to test subsets through feature-environments, &lt;/li&gt;
&lt;li&gt;  QA can parallelize testing which will save time while being able to test on subsets of your data. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Killer #6: Manage your databases yourself
&lt;/h2&gt;

&lt;p&gt;The teammate who just left was responsible for setting up MongoDB for a client project and of course used the open source project to run it themselves. And of course the handover was 'flawless' and of course the database wasn't protected properly and one evening this shows up where the data was supposed to be:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fUf2mZdU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5ef20ebc811a33683a2b5e0b_database-will-be-deleted.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fUf2mZdU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://assets.website-files.com/5c73bbfe3312822f153dd310/5ef20ebc811a33683a2b5e0b_database-will-be-deleted.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;And of course: &lt;/p&gt;

&lt;p&gt;You check the backups. &lt;/p&gt;

&lt;p&gt;There was a syntax error. &lt;/p&gt;

&lt;p&gt;You now have to reverse engineer all the data. &lt;/p&gt;

&lt;p&gt;This is a real-life example, and it happens frequently.&lt;/p&gt;

&lt;p&gt;Self-managed DBs are an operational and security risk. They are distracting, boring and unnecessary. Use Cloud SQL or other managed offerings and sleep well. We commonly see managed offerings from companies such as &lt;a href="https://aiven.io/"&gt;Aiven.io&lt;/a&gt;. These companies offer most databases, they can run them on all the big cloud providers for you, and their offerings are more feature-rich, mature and sophisticated. Also, they are often cheaper and combine zero lock-in with greater developer convenience, a combination I'd always prefer when I can get it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Killer #7: Go multi-cloud for no reason
&lt;/h2&gt;

&lt;p&gt;There is a difference between just going multi-cloud and trying to design your systems to be cloud-agnostic and portable. The latter has a lot of advantages, such as dynamic environments, and makes more sense than going multi-cloud. Sure, there's historical legacy: one team had been using GCP, another department started with AWS, and here you are. Other reasons include specialization: one might argue that GPUs run more efficiently on GCP than on AWS, or point to cost. But for these effects to really surface, you need sufficient size. Uncomplicated multi-cloud setups require a high degree of automation and the shielding of provisioning and setup tasks from developers. Otherwise one ends up in scripting hell.&lt;/p&gt;

&lt;p&gt;As a general rule: don't do multi-cloud if not absolutely necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope these points help you avoid the biggest mistakes in this field. Remember what Nicole Forsgren, Jez Humble and Gene Kim write in their book &lt;a href="https://itrevolution.com/book/accelerate/"&gt;"Accelerate"&lt;/a&gt;: "the top 1% of teams ship 10x more often".&lt;/p&gt;

&lt;p&gt;This is because they are getting the most out of what is possible today. I spend an hour a month looking at my personal workflows, my to-do lists, the way I organize my apps. Why? Because inefficient flows really add up over the weeks. Tiny things such as searching for your photo app distract your brain. Stop and spend an afternoon a month making sure your productivity is streamlined. It will help you focus on innovation rather than configuration and will make for a happier team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/kaspar-von-gr%C3%BCnberg-73872380/"&gt;Reach out &lt;/a&gt;directly to me if you have ideas, comments or suggestions. &lt;a href="https://humanitec.com/webinars"&gt;Or register for one of our free webinars to get in touch with us.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>devops</category>
      <category>microservices</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
