<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ambassador Labs</title>
    <description>The latest articles on Forem by Ambassador Labs (@ambassadorlabs).</description>
    <link>https://forem.com/ambassadorlabs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3456%2F37bf9d47-e354-4f18-a3b6-7f4bfdcfab64.jpg</url>
      <title>Forem: Ambassador Labs</title>
      <link>https://forem.com/ambassadorlabs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ambassadorlabs"/>
    <language>en</language>
    <item>
      <title>The Developer Experience and the Role of the SRE Are Changing, Here's How</title>
      <dc:creator>Kate Packard</dc:creator>
      <pubDate>Wed, 28 Jul 2021 20:38:26 +0000</pubDate>
      <link>https://forem.com/ambassadorlabs/the-developer-experience-and-the-role-of-the-sre-are-changing-here-s-how-675</link>
      <guid>https://forem.com/ambassadorlabs/the-developer-experience-and-the-role-of-the-sre-are-changing-here-s-how-675</guid>
      <description>&lt;h2&gt;
  
  
  Mario Loria of CartaX talked with Ambassador about the changing developer experience, the changing role of the SRE, and delivering visibility and a self-service platform for developers
&lt;/h2&gt;

&lt;p&gt;The application development landscape has fundamentally changed in recent years. In a recent interview with Ambassador Labs, Mario Loria from CartaX said he believes this is still uncharted territory, particularly for developers in the cloud-native space. As he sees it, site reliability engineers (SREs) play a key role in guiding developers through the learning curve toward comprehensive self-service of the supporting platforms and ecosystem, and ultimately to service ownership. This requires a major shift in company and management culture, in developer (and SRE) mindset and tooling, and in the insight needed to make the journey to full lifecycle ownership not just smoother and more transparent but also technically feasible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two worlds colliding: The monolith and service-oriented architecture
&lt;/h2&gt;

&lt;p&gt;The traditional monolith continues to exist in parallel with cloud-native application development. The operations side of the equation, according to Mario, understands that this has caused a big shift in deploying, releasing, and operating applications, and now the role of SREs is to help developers understand and own this shift. Developers know how to code, but building in the necessary understanding (and ownership) of the “ship” and “run” aspects of the lifecycle introduces a steep learning curve. For developers, this means taking on new responsibilities with the support of SREs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling developers to own the full application lifecycle
&lt;/h2&gt;

&lt;p&gt;In a cloud-native, service-oriented architecture, how much responsibility for an application's lifecycle should a developer own?&lt;/p&gt;

&lt;p&gt;According to Mario, developers should own the full life cycle of services but in most cases don't: "It should not be up to me as an SRE to define how your application gets deployed or at what point it needs to be rolled back, or at what point it needs to be changed, or when its health check should be modified." Developers should be capable of making these determinations, and empowered to do so.&lt;/p&gt;

&lt;p&gt;Many SREs' experience has forced them to "ride to the rescue" when issues arise. Developers have generally been conditioned to hand over deployment and operations to SRE teams rather than investigate how they might handle issues on their own. This isn't illogical; this is how pre-cloud (and pre-DevOps) monolithic application development worked. Getting to a place in which developers can own the full application life cycle isn't always a linear path, nor is it always organizationally sanctioned or supported.&lt;/p&gt;

&lt;p&gt;How do an organization and its developers round the bend on the learning curve?&lt;/p&gt;

&lt;h2&gt;
  
  
  Understand the changing developer experience to support developer ownership
&lt;/h2&gt;

&lt;p&gt;Mario shared a set of prescriptions on how organizations, SRE teams, and developers might better embrace and support the new development paradigm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zC9JQOt1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t1ne5wjnmrrbzlg2oiys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zC9JQOt1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t1ne5wjnmrrbzlg2oiys.png" alt="Chart describing how orgs, SRE teams, and devs can support the new dev paradigm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Refocus culture and top-down support
&lt;/h3&gt;

&lt;p&gt;An organization and its leadership need to get behind the end-to-end "developer-as-service-owner" mindset as part of their higher-level strategy. Leadership needs to make clear, from the top down, that there is a focus (and accountability) on ownership. Developers learning this new culture come to it from their own history and experience, which can vary considerably. Changing the developer mindset starts with empathy — understanding their goals, practices, and skills — and closely follows with good communication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rethink communication patterns
&lt;/h3&gt;

&lt;p&gt;Mario believes that supporting developer ownership, and getting developers to embrace this model as well, comes down to communication. Communication has to change from an "it's their [SRE] problem" to a "this is our [developer] problem" frame of reference. Even down to the individual developer and their experience, each problem they encounter is likely something that affects the business. Everyone is pushing toward the same goals. If that fundamental idea of shared responsibility is broken, it's not really possible to change the culture to adopt and sustain the new developer experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consider SRE-supported developer education and tooling
&lt;/h3&gt;

&lt;p&gt;Mario's views on the role of SRE in supporting the new developer experience, and more fundamentally, developer education can be summed up with the old adage: give someone a fish, and they eat for a day; teach someone to fish, and they eat for a lifetime. Mario's take is that developers should start triaging and debugging issues themselves when they run into them. As he frames it, it's not that developers should never ask for SRE support, but that the role of SREs in cloud-native organizations should be to support developers in learning to gather intelligence, troubleshoot basic issues, and understand what the components of their applications are doing without SRE intervention.&lt;/p&gt;

&lt;p&gt;The SRE role, in this framework, centers on SRE-led education to help developers gain a better understanding of how to handle their services themselves rather than asking SREs to fix whatever breaks. Mario believes this is the way SREs and developers should work together in the long term: helping put the developer in the driver's seat while training them to drive high-performance machines (applications).&lt;/p&gt;

&lt;h3&gt;
  
  
  Reshape the developer and the SRE experience
&lt;/h3&gt;

&lt;p&gt;To persuade developers to take on the responsibility, the SRE function needs to provide the platform surface area, or control plane, and the visibility to make this possible. That is, giving developers control and providing visibility into what the repercussions of their actions can be. Many questions surface that developers will need to answer effectively in order to, as Mario put it, “shift left”, make more decisions, and do more autonomously with interactive self-service.&lt;/p&gt;

&lt;p&gt;SREs can create the platforms, or pure infrastructure, to empower developers. Developers need to ship their code safely, and the more they understand about how to do this, the more self-sufficient they become. Meanwhile, the more SREs can focus on creating the "platform as a service", or “paved path”, that reduces cognitive load for a developer taking on full code-ship-run ownership, the more clarity the developer gets into the processes they need to ship and run their code without breaking anything.&lt;/p&gt;

&lt;p&gt;In many ways, this reshapes not just the developer experience but also the SRE experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ease the climb: Centralizing information and creating the single pane of glass
&lt;/h3&gt;

&lt;p&gt;"In terms of developing, deploying, and operating [in the cloud]... especially if you think of someone you just onboarded into the team, there's a mountain they have to climb to actually understand how we manage our services end-to-end. Going back to the frontend we should aim to create something like a single pane of glass, where instead of having all these different places to look or tune, or to try and understand what's going on, we can see everything transparently…".&lt;/p&gt;

&lt;p&gt;Giving the developer the tools needed to bring everything together is likely to hinge on something like a developer control plane (DCP), where information can be centralized and made visible.&lt;/p&gt;

&lt;p&gt;From an operational standpoint, considerable thought needs to go into how much time and effort the business wants to invest in building or running all the constituent components themselves. While the specifics depend on the use case, cloud-native development is all about achieving speed, and centralizing distributed information into a single pane of glass creates a frictionless way to pull disparate sources of information together and, in a sense, "ease the climb".&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Developers should work with SREs as collaborators, not first responders
&lt;/h2&gt;

&lt;p&gt;For developers who want to thrive in the cloud-native development space, learning to rely on SREs in a new way will be a key success factor. SREs should become trusted partners for deploying, releasing, and running services, and not just treated as first responders who are responsible for dealing with incidents. Developers should take the opportunity to share their pain points and also learn about tooling and best practices from SRE teams, with the goal of “paving the path” to developer autonomy, self-service, and full service ownership.&lt;/p&gt;

</description>
      <category>sre</category>
      <category>devex</category>
    </item>
    <item>
      <title>Debugging Go Microservices in Kubernetes with VScode</title>
      <dc:creator>Peter ONeill</dc:creator>
      <pubDate>Fri, 09 Apr 2021 14:04:58 +0000</pubDate>
      <link>https://forem.com/ambassadorlabs/debugging-go-microservices-in-kubernetes-with-vscode-kli</link>
      <guid>https://forem.com/ambassadorlabs/debugging-go-microservices-in-kubernetes-with-vscode-kli</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Tutorial: Learn to debug Go microservices locally while testing against dependencies in a remote Kubernetes cluster&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F11668%2F1%2AkFNAnhDkPcDDqPAXVWyafg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F11668%2F1%2AkFNAnhDkPcDDqPAXVWyafg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many organizations adopt cloud native development practices with the dream of shipping features faster. Although the technologies and architectures may change when moving to the cloud, the fact that we all still add the occasional bug to our code remains constant. The snag is that many of your existing local debugging tools and practices can’t be used when everything is running in a container or on the cloud.&lt;/p&gt;

&lt;p&gt;Easy and efficient debugging is essential to being a productive engineer, but when you have a large number of microservices running in Kubernetes, the approach you take to debugging has to change. For one, you typically can’t run all of your dependent services on your local machine. This then opens up the challenges of remote debugging (and the associated fiddling with debug modes and exposing ports correctly). However, there is another way, and the &lt;a href="https://www.getambassador.io/products/telepresence" rel="noopener noreferrer"&gt;CNCF Telepresence tool&lt;/a&gt; enables it.&lt;/p&gt;

&lt;p&gt;This article walks you through using &lt;a href="https://www.getambassador.io/products/telepresence/" rel="noopener noreferrer"&gt;Telepresence&lt;/a&gt; to seamlessly connect your local development machine to a remote Kubernetes cluster, allowing you to use your favorite debugging tools with all of your microservices and giving you the ability to comfortably debug your remote apps with your existing skills.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Difficulty with Debugging Applications Running in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Splitting your application into microservices introduces a number of challenges, particularly with debugging. Splitting an application into several, if not dozens of, microservices creates a complex dependency tree that becomes nearly impossible to replicate in staging.&lt;/p&gt;

&lt;p&gt;Sure, you can use unit tests with tools like &lt;a href="https://github.com/golang/mock" rel="noopener noreferrer"&gt;GoMock&lt;/a&gt; and &lt;a href="https://github.com/prashantv/gostub" rel="noopener noreferrer"&gt;GoStub&lt;/a&gt; to simulate external dependencies or introduce synthetic data into the mix, but that still leaves you unsure whether your service will work with data from your actual services.&lt;/p&gt;
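&lt;p&gt;For a sense of what such a simulated dependency looks like, here is a minimal sketch of stubbing a remote service behind an interface; the names (ColorService, renderBackground) are hypothetical and not taken from the Edgey Corp code:&lt;/p&gt;

```go
package main

import "fmt"

// ColorService models a remote dependency of the service under test.
type ColorService interface {
	GetColor() string
}

// stubColorService stands in for the real remote service and
// always returns a fixed value, so the test never touches the network.
type stubColorService struct{}

func (stubColorService) GetColor() string { return "green" }

// renderBackground is the hypothetical function under test.
func renderBackground(svc ColorService) string {
	return fmt.Sprintf("background: %s", svc.GetColor())
}

func main() {
	fmt.Println(renderBackground(stubColorService{})) // prints "background: green"
}
```

This verifies your logic against synthetic data, but it cannot tell you how the service behaves against the live dependency tree running in the cluster.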

&lt;p&gt;And after you have deployed your service into your cluster, running a remote debugging session can be tricky to get right, due to the complicated configuration of ports and protocols that need to be set via your Kubernetes service YAML. You often lose the ability to use your favorite debugging tools and strategies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Deploy a Sample Microservice Application
&lt;/h2&gt;

&lt;p&gt;In this article, we’re going to be working with the sample Edgey Corp application written in Go. There are detailed instructions for deploying this application in my &lt;a href="https://blog.getambassador.io/go-kubernetes-rapidly-developing-golang-microservices-bfe36cfb5893" rel="noopener noreferrer"&gt;previous blog post&lt;/a&gt;. Or just apply the YAML from the URL to your Kubernetes cluster, clone the repository, and jump right into the action.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Apply the manifest for the Edgey Corp app to your K8s cluster&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml]&lt;span class="o"&gt;(&lt;/span&gt;https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Git clone the code for the Edgey Corp app to your machine&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone &lt;span class="o"&gt;[&lt;/span&gt;https://github.com/datawire/edgey-corp-go.git]&lt;span class="o"&gt;(&lt;/span&gt;https://github.com/datawire/edgey-corp-go.git&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that we have the Edgey Corp app running in our Kubernetes cluster and the code on our local machine, let’s pop it open with VScode.&lt;/p&gt;

&lt;p&gt;Let’s check that the application is running successfully. Run &lt;code&gt;kubectl get pods&lt;/code&gt; to see that the pods for the three microservices are up and running: verylargedatastore, verylargejavaservice, and dataprocessingservice.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
NAME READY STATUS RESTARTS AGE
verylargedatastore-cd998dfc6-k5bkw 1/1 Running 0 19s
verylargejavaservice-77748f79d6-dx4kj 1/1 Running 0 20s
dataprocessingservice-f5b644d95-s98hp 1/1 Running 0 20s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the sample application up and running we can configure VScode to start debugging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Configure VScode and Delve
&lt;/h2&gt;

&lt;p&gt;VScode is a great all-purpose IDE with countless extensions to get you up and running quickly. We are going to use the Go extension, maintained by the Go team at Google, together with the Go debugger, Delve. Here are links to each tool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://code.visualstudio.com/Download" rel="noopener noreferrer"&gt;Visual Studio Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=golang.Go" rel="noopener noreferrer"&gt;Go for Visual Studio Code Extension&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/go-delve/delve/tree/master/Documentation/installation" rel="noopener noreferrer"&gt;Delve the Go debugger&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll want to make sure that Delve is both installed to your local system and connected to VScode through the Go extension.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Delve to your local system
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go install github.com/go-delve/delve/cmd/dlv@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Connect Delve to VScode
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open VScode now&lt;/li&gt;
&lt;li&gt;Access the command palette with (command+shift+p)&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select Go: Install/Update Tools&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2Ama0Vk4VyFLwuJATL" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2Ama0Vk4VyFLwuJATL"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the box for dlv and any other tools you might find useful&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click Ok&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AYSYFL3PxFcXN0Oif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AYSYFL3PxFcXN0Oif"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Run your Go code in debugging mode
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open the project directory for the Edgey Corp Go app in VScode&lt;/li&gt;
&lt;li&gt;Open up the file dataprocessingservice/main.go&lt;/li&gt;
&lt;li&gt;Click the debug icon on the right side.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Lastly, click “Run and Debug”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AQpM5RjnbTP234HbO" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AQpM5RjnbTP234HbO"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should now see the debug console starting to log output.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;API server listening at: 127.0.0.1:48808
Welcome to the DataProcessingGoService!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s check that it’s working by navigating to &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;localhost:3000&lt;/a&gt; in your browser. You can also navigate to &lt;a href="http://localhost:3000/color" rel="noopener noreferrer"&gt;localhost:3000/color&lt;/a&gt; to see how the Go application responds to the color endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2420%2F0%2AThpJxNqjmgEFwBWk" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2420%2F0%2AThpJxNqjmgEFwBWk"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Intercept Your Service with Telepresence
&lt;/h2&gt;

&lt;p&gt;Debugging one microservice in isolation is fine, but it still leaves us guessing what will happen when it is connected to the rest of the microservices in our application. Let’s introduce Telepresence here to debug our local Go microservice as part of the larger application.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install the Telepresence CLI&lt;/p&gt;

&lt;p&gt;macOS&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo curl -fL [https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence](https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence) -o /usr/local/bin/telepresence
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Linux&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo curl -fL [https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence](https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence) -o /usr/local/bin/telepresence
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make the binary executable&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chmod a+x /usr/local/bin/telepresence
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Connect your local machine to your Kubernetes cluster.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ telepresence connect
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Test the connection by cURLing our front-end service via its Kubernetes service name.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl [http://verylargejavaservice.default:8080/color](http://verylargejavaservice.default:8080/color)
“green”
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Notice two things here:&lt;/p&gt;

&lt;p&gt;A) You are able to refer to the remote Service directly via its internal cluster name, as if your development machine were inside the cluster&lt;/p&gt;

&lt;p&gt;B) The color returned by the remote DataProcessingService is “green”, versus the local result you saw above of “blue”&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Let’s take our connection to the cluster a step further now by initiating an intercept from our remote dataprocessingservice to our local debug session running on &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;. This will send any traffic destined for the remote service down to our debugger.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;telepresence intercept dataprocessingservice &lt;span class="nt"&gt;--port&lt;/span&gt; 3000
Launching Telepresence Daemon v2.1.2 &lt;span class="o"&gt;(&lt;/span&gt;api v3&lt;span class="o"&gt;)&lt;/span&gt;
Connecting to traffic manager…
Connected to context peteroneilljr-office-hours &lt;span class="o"&gt;(&lt;/span&gt;https://34.67.161.22&lt;span class="o"&gt;)&lt;/span&gt;
Using deployment dataprocessingservice
intercepted
Intercept name: dataprocessingservice
State : ACTIVE
Destination : 127.0.0.1:3000
Intercepting : all TCP connections
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Let’s try to cURL our front-end service once again. This time we should see our local debugger logging the interaction between the front-end service and our local dataprocessingservice.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="o"&gt;[&lt;/span&gt;http://verylargejavaservice.default:8080/]&lt;span class="o"&gt;(&lt;/span&gt;http://verylargejavaservice.default:8080/&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Awesome! Now that our local debugger is hooked up to the remote cluster we should be all set to start debugging!&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 4: Step Through Your Breakpoints
&lt;/h2&gt;

&lt;p&gt;Let’s set a breakpoint on the getColor function. Hover just to the left of the line number and click to place the red dot. Once it’s set, visit &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;localhost:3000&lt;/a&gt; in your browser, and VScode should pop to the foreground. Click the play button to resume the request, and the webpage should finish loading.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AqAb8oiaF11niS3BX" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AqAb8oiaF11niS3BX"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that your IDE is processing traffic directly from the cluster, let’s open up our sample application in a web browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://verylargejavaservice.default:8080" rel="noopener noreferrer"&gt;http://verylargejavaservice.default:8080&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;VScode will pop open on the breakpoint again. This time, let’s click the step button (the one with the clockwise rotating arrow) until execution reaches the fmt.Println and you see c: “green” in the Variables window. If you try to adjust the variable directly in the IDE Variables pane, you will receive the following error.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Failed to set variable — literal string can not be allocated because function calls are not allowed without using ‘call’
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;More details about the bug here: &lt;a href="https://github.com/golang/vscode-go/issues/1173" rel="noopener noreferrer"&gt;https://github.com/golang/vscode-go/issues/1173&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To work around this issue we can adjust the variable in the debug terminal with the command &lt;code&gt;call c = "orange"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AfzzykE46ruEa51u0" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AfzzykE46ruEa51u0"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s resume from the breakpoint and send the response back to the web browser. Click the play button in the top command bar.&lt;/p&gt;

&lt;p&gt;Awesome! You should see the verylargejavaservice page load with an orange background. We’ve successfully made a request to our front-end service running in our remote cluster; Telepresence intercepted the traffic and routed it through our local debugging session, where we inspected the data and updated the color variable to orange before sending the response back.&lt;/p&gt;

&lt;p&gt;Introducing Telepresence into the development flow gives us a bridge from our local development environment to our remote Kubernetes cluster, allowing us to test and debug traffic from the remote cluster as if it were on our local machine. In other words, it gives us feedback on how our local service will perform when running in the remote cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: (Bonus) Clone your Pod’s environment variables to your debug session
&lt;/h2&gt;

&lt;p&gt;In some scenarios, a service can inherit environment variables from the cluster or its configuration. In cases like this, cloning the remote deployment’s environment variables to our local service is necessary to ensure closer parity between the two environments. To do this, we will leverage Telepresence to copy the environment variables from the remote service and save them to a .env file. Then we will tell VScode to add these variables to the debug environment.&lt;/p&gt;

&lt;p&gt;To let VScode know that we want to include an environment file, we need to create a launch configuration specifying where the file is going to be.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Stop your last debugging session if it is still running.&lt;/li&gt;
&lt;li&gt;Click the Gear Icon in the top right corner to open the launch.json file.&lt;/li&gt;
&lt;li&gt;From here click the Add Configuration button.&lt;/li&gt;
&lt;li&gt;Select Go: Launch File from the dropdown menu.&lt;/li&gt;
&lt;li&gt;Add the line &lt;code&gt;"envFile": "${workspaceFolder}/go-debug.env"&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AUVJa0F7EJnPuaWPS" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AUVJa0F7EJnPuaWPS"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Launch with env file"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"go"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"request"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"launch"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"debug"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"program"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${file}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"envFile"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${workspaceFolder}/go-debug.env"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that VS Code knows where the file will be located, let’s generate the file with Telepresence. This time, run the intercept from the terminal window within VS Code to ensure the file is created in the project directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;telepresence intercept dataprocessingservice &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3000 &lt;span class="nt"&gt;--env-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;go-debug.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AoqM3wW4uiFddwjM6" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2AoqM3wW4uiFddwjM6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, start the debug session. Click the debug icon on the right side and click the launch session button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AoxEFynFBcBO9nQZy" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AoxEFynFBcBO9nQZy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now your local process is running with all the same environment variables as if it were running in the remote cluster.&lt;/p&gt;

&lt;p&gt;You can also &lt;a href="https://www.getambassador.io/docs/latest/telepresence/reference/volume/" rel="noopener noreferrer"&gt;locally access volumes mounted&lt;/a&gt; into your remote Services. This is useful if you are storing configuration, tokens, or other state required for the proper execution of the service. We’ll cover this in more detail in a future tutorial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn More About Telepresence
&lt;/h2&gt;

&lt;p&gt;Today, we’ve learned how to use Telepresence to easily debug a Go microservice running in Kubernetes. Now, instead of trying to mock out dependencies or fiddle around with remote debugging, we can iterate quickly with an instant feedback loop when locally debugging using our favorite IDE and tools.&lt;/p&gt;

&lt;p&gt;If you want to learn more about Telepresence, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.getambassador.io/docs/latest/telepresence/quick-start/qs-go/" rel="noopener noreferrer"&gt;Read the docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Watch the &lt;a href="https://www.youtube.com/watch?v=W_a3aErN3NU" rel="noopener noreferrer"&gt;demo video&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read more about &lt;a href="https://www.getambassador.io/docs/latest/telepresence/howtos/intercepts/#intercepts" rel="noopener noreferrer"&gt;Intercepts&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Learn about &lt;a href="https://www.getambassador.io/docs/pre-release/telepresence/howtos/preview-urls/#collaboration-with-preview-urls" rel="noopener noreferrer"&gt;Preview URLs&lt;/a&gt; for easy collaboration with teammates&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://a8r.io/Slack" rel="noopener noreferrer"&gt;Join our Slack channel&lt;/a&gt; to connect with the Telepresence community&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>vscode</category>
      <category>telepresence</category>
      <category>go</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Go &amp; Kubernetes: Rapidly Developing Golang Microservices</title>
      <dc:creator>Peter ONeill</dc:creator>
      <pubDate>Wed, 03 Mar 2021 19:06:55 +0000</pubDate>
      <link>https://forem.com/ambassadorlabs/go-kubernetes-rapidly-developing-golang-microservices-3nlf</link>
      <guid>https://forem.com/ambassadorlabs/go-kubernetes-rapidly-developing-golang-microservices-3nlf</guid>
      <description>&lt;h3&gt;
  
  
  Build a cloud development environment with Telepresence &amp;amp; Golang
&lt;/h3&gt;

&lt;p&gt;Kubernetes is a container orchestration platform that enables its users to deploy and scale their microservice applications at any scale: from one service to thousands of services. Unleashing the power of Kubernetes is often more complicated than it may initially seem — the learning curve for application developers is particularly steep. Knowing what to do is just half the battle; then you have to choose the best tools for the job. So how do Go developers create a development workflow on Kubernetes that is fast and effective?&lt;/p&gt;

&lt;p&gt;Application developers face two unique challenges when trying to create productive development workflows on Kubernetes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Most development workflows are optimized for local development, and Kubernetes applications are designed to be native to the cloud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As Kubernetes applications evolve into complex microservice architectures, the development environments also become more complex as every microservice adds additional dependencies. These services quickly start to need more resources than are available in your typical local development environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this tutorial, we’ll set up a development environment for Kubernetes and make a change to a Golang microservice. Normally, we’d have to wait for a container build, a push to a registry, and a deploy to see the effect of our code change. Instead, we’ll use &lt;a href="http://www.getambassador.io/products/telepresence/" rel="noopener noreferrer"&gt;Telepresence&lt;/a&gt; to see the results of our change instantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Deploy a Sample Microservices Application
&lt;/h2&gt;

&lt;p&gt;For our example, we’ll make code changes to a Go service running between a resource-intensive Java service and a large datastore. We’ll start by deploying a sample microservice application consisting of 3 services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VeryLargeJavaService&lt;/strong&gt;: A memory-intensive service written in Java that generates the front-end graphics and web pages for our application&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DataProcessingService&lt;/strong&gt;: A Golang service that manages requests for information between the two services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VeryLargeDataStore&lt;/strong&gt;: A large datastore service that contains the sample data for our Edgey Corp store.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: We’ve called these VeryLarge services to emphasize the fact that your local environment may not have enough CPU and RAM, or you may just not want to pay for all that extra overhead for every developer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2A7J_48_5o8juPX5E6" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3200%2F0%2A7J_48_5o8juPX5E6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this architecture diagram, you’ll notice that requests from users are routed through an ingress controller to our services. For simplicity’s sake, we’ll skip the step of &lt;a href="https://www.getambassador.io/docs/latest/topics/install/install-ambassador-oss/#kubernetes-yaml" rel="noopener noreferrer"&gt;deploying an ingress controller&lt;/a&gt; in this tutorial. If you’re ready to use Telepresence in your own setup and need a simple way to set up an ingress controller, we recommend checking out the &lt;a href="https://www.getambassador.io/products/edge-stack/" rel="noopener noreferrer"&gt;Ambassador Edge Stack&lt;/a&gt; which can be easily configured with the &lt;a href="https://app.getambassador.io/initializer" rel="noopener noreferrer"&gt;K8s Initializer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s deploy the sample application to your Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml]&lt;span class="o"&gt;(&lt;/span&gt;https://raw.githubusercontent.com/datawire/edgey-corp-go/main/k8s-config/edgey-corp-web-app-no-mapping.yaml&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Set up your local Go development environment
&lt;/h2&gt;

&lt;p&gt;We’ll need a local development environment so that we can edit the &lt;code&gt;DataProcessingService&lt;/code&gt; service. As you can see in the architecture diagram above, the &lt;code&gt;DataProcessingService&lt;/code&gt; is dependent on both the &lt;code&gt;VeryLargeJavaService&lt;/code&gt; and the &lt;code&gt;VeryLargeDataStore&lt;/code&gt;, so in order to make a change to this service, we’ll have to interact with these other services as well. Let’s get started!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repository for this application from GitHub.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    git clone &lt;span class="o"&gt;[&lt;/span&gt;https://github.com/datawire/edgey-corp-go.git]&lt;span class="o"&gt;(&lt;/span&gt;https://github.com/datawire/edgey-corp-go.git&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Change directories into the DataProcessingService
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="nb"&gt;cd &lt;/span&gt;edgey-corp-go/DataProcessingService
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Start the Go server:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    go build main.go &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ./main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;See your service running!
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    10:23:41 app | Welcome to the DataProcessingGoService!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;In another terminal window, run &lt;code&gt;curl localhost:3000/color&lt;/code&gt; to see that the service is returning blue.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="nv"&gt;$ &lt;/span&gt;curl localhost:3000/color

    "blue"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Rapid Development with Telepresence
&lt;/h2&gt;

&lt;p&gt;Instead of waiting for a container image to be built, pushed to a repository, and deployed to our Kubernetes cluster, we are going to use Telepresence, an open source Cloud Native Computing Foundation project. Telepresence creates a bidirectional network connection between your local development environment and the Kubernetes cluster to enable &lt;a href="https://www.getambassador.io/use-case/local-kubernetes-development/" rel="noopener noreferrer"&gt;fast, efficient Kubernetes development&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download Telepresence (~60MB):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="c"&gt;# Mac OS X&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-fL&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence]&lt;span class="o"&gt;(&lt;/span&gt;https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/local/bin/telepresence

    &lt;span class="c"&gt;#Linux&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-fL&lt;/span&gt; https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/local/bin/telepresence
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Make the binary executable
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+x /usr/local/bin/telepresence
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Test Telepresence by connecting to the remote cluster
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="nv"&gt;$ &lt;/span&gt;telepresence connect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Send a request to the Kubernetes API server:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-ik&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;https://kubernetes.default.svc.cluster.local]&lt;span class="o"&gt;(&lt;/span&gt;https://kubernetes.default.svc.cluster.local&lt;span class="o"&gt;)&lt;/span&gt;

    HTTP/1.1 401 Unauthorized
    Cache-Control: no-cache, private
    Content-Type: application/json
    Www-Authenticate: Basic &lt;span class="nv"&gt;realm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"kubernetes-master"&lt;/span&gt;
    Date: Tue, 09 Feb 2021 23:21:51 GMT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great! You’ve successfully configured Telepresence. Right now, Telepresence is intercepting the request you’re making to the Kubernetes API server and routing it over its direct connection to the cluster instead of over the Internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Intercept Your Golang Service
&lt;/h2&gt;

&lt;p&gt;An intercept is a routing rule for Telepresence. We can create an intercept to route traffic intended for the &lt;code&gt;DataProcessingService&lt;/code&gt; in the cluster and instead route all of the traffic to the &lt;em&gt;local&lt;/em&gt; version of the &lt;code&gt;DataProcessingService&lt;/code&gt; running on port 3000.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the intercept
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    telepresence intercept dataprocessingservice — port 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Access the application directly with Telepresence. Visit &lt;a href="http://verylargejavaservice:8080" rel="noopener noreferrer"&gt;http://verylargejavaservice:8080&lt;/a&gt;. Again, Telepresence is intercepting requests from your browser and routing them directly to the Kubernetes cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, we’ll make a code change. Open &lt;code&gt;edgey-corp-go/DataProcessingService/main.go&lt;/code&gt; and change the value of the color variable from &lt;code&gt;blue&lt;/code&gt; to &lt;code&gt;orange&lt;/code&gt;. Save the file, stop the previous server instance, and start it again with &lt;code&gt;go build main.go &amp;amp;&amp;amp; ./main&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reload the page in your browser and see how the color has changed from blue to orange!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s it! With Telepresence, we saw how quickly we can go from editing a local service to seeing how those changes will look when deployed with the larger application. When you compare it to our original process of building and deploying a container after every change, it’s easy to see how much time you can save, especially as you make more complex changes or run even larger services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn More about Telepresence
&lt;/h2&gt;

&lt;p&gt;Today, we’ve learned how to use Telepresence to rapidly iterate on a Golang microservice running in Kubernetes. Now, instead of waiting for slow local development processes, we can iterate quickly with an instant feedback loop and a productive cloud native development environment.&lt;/p&gt;

&lt;p&gt;If you want to learn more about Telepresence, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.getambassador.io/docs/latest/telepresence/quick-start/qs-go/" rel="noopener noreferrer"&gt;Read the docs&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Watch the &lt;a href="https://www.youtube.com/watch?v=W_a3aErN3NU" rel="noopener noreferrer"&gt;demo video&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Read more about &lt;a href="https://www.getambassador.io/docs/latest/telepresence/howtos/intercepts/#intercepts" rel="noopener noreferrer"&gt;Intercepts&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn about &lt;a href="https://www.getambassador.io/docs/pre-release/telepresence/howtos/preview-urls/#collaboration-with-preview-urls" rel="noopener noreferrer"&gt;Preview URLs&lt;/a&gt; for easy collaboration with teammates&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://d6e.co/slack" rel="noopener noreferrer"&gt;Join our Slack channel&lt;/a&gt; to connect with the Telepresence community&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our next tutorial, we’ll use Telepresence to set up a local Kubernetes development environment and then use Delve to set breakpoints and debug a broken service. To be notified when more tutorials are available, make sure to &lt;a href="http://www.getambassador.io" rel="noopener noreferrer"&gt;check out our website&lt;/a&gt; or &lt;a href="http://www.twitter.com/ambassadorlabs" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>go</category>
      <category>kubernetes</category>
      <category>microservices</category>
      <category>telepresence</category>
    </item>
    <item>
      <title>Developing Python Applications on Kubernetes</title>
      <dc:creator>Richard Li</dc:creator>
      <pubDate>Tue, 02 Mar 2021 21:29:59 +0000</pubDate>
      <link>https://forem.com/ambassadorlabs/developing-python-applications-on-kubernetes-5339</link>
      <guid>https://forem.com/ambassadorlabs/developing-python-applications-on-kubernetes-5339</guid>
      <description>&lt;p&gt;Kubernetes has become the de-facto standard for running cloud applications. With Kubernetes, users can deploy and scale containerized applications at any scale: from one service to thousands of services. The power of Kubernetes is not free — the learning curve is particularly steep, especially for application developers. Knowing what to do is just half the battle, then you have to choose the best tools to do the job. So how do Python developers create a development workflow on Kubernetes that is fast and effective?&lt;/p&gt;

&lt;p&gt;There are two unique challenges with creating productive development workflows on Kubernetes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Most development workflows are optimized for local development, and Kubernetes applications are designed to be native to the cloud.&lt;/li&gt;
&lt;li&gt;Most Kubernetes applications either start off or evolve into a microservices architecture. Thus, your development environment becomes more complex as every microservice adds additional dependencies to test code. And in turn, these &lt;a href="https://www.getambassador.io/resources/eliminate-local-resource-constraints/" rel="noopener noreferrer"&gt;services quickly become too resource-intensive&lt;/a&gt; and exceed the limits of your local machine.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this tutorial, we’ll walk through how to set up a realistic development environment for Kubernetes. Typically, we’d have to wait for a container build, a push to a registry, and a deploy to see the impact of our change. Instead, we’ll use Telepresence and see the results of our change instantly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.getambassador.io/products/telepresence/" rel="noopener noreferrer"&gt;Telepresence&lt;/a&gt; is an open source project that lets you run your microservice locally, while creating a bi-directional network connection to your Kubernetes cluster. This approach enables the microservice running locally to communicate to other microservices running in the cluster, and vice versa. Since you’re running the microservice locally, you’re able to &lt;a href="https://www.getambassador.io/use-case/local-kubernetes-development/" rel="noopener noreferrer"&gt;benefit&lt;/a&gt; from any workflow or tool that you run locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Deploy a Sample Microservices Application
&lt;/h2&gt;

&lt;p&gt;For our example, we’ll make code changes to a Go service running between a resource-intensive Java service and a large datastore. We’ll start by deploying a sample microservice application consisting of 3 services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;VeryLargeJavaService&lt;/code&gt; A memory-intensive service written in Java that generates the front-end graphics and web pages for our application&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DataProcessingService&lt;/code&gt; A Python service that manages requests for information between the two services.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VeryLargeDataStore&lt;/code&gt; A large datastore service that contains the sample data for our Edgey Corp store.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: We’ve called these VeryLarge services to emphasize the fact that your local environment may not have enough CPU and RAM, or you may just not want to pay for all that extra overhead for every developer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjs5j7k6m3etv8xno5pc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjs5j7k6m3etv8xno5pc.png" alt="image" width="800" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this architecture diagram, you’ll notice that requests from users are routed through an ingress controller to our services. For simplicity’s sake, we’ll skip the step of &lt;a href="https://www.getambassador.io/docs/latest/topics/install/install-ambassador-oss/#kubernetes-yaml" rel="noopener noreferrer"&gt;deploying an ingress controller&lt;/a&gt; in this tutorial. If you’re ready to use Telepresence in your own setup and need a simple way to set up an ingress controller, we recommend checking out the &lt;a href="https://www.getambassador.io/products/edge-stack/" rel="noopener noreferrer"&gt;Ambassador Edge Stack&lt;/a&gt; which can be easily configured with the &lt;a href="https://app.getambassador.io/initializer" rel="noopener noreferrer"&gt;K8s Initializer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s deploy the sample application to your Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/datawire/edgey-corp-python/master/k8s-config/edgey-corp-web-app-no-mapping.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: This tutorial assumes you have access to a Kubernetes cluster with &lt;code&gt;kubectl&lt;/code&gt; access. If you don’t, some options include MicroK8s and Docker Desktop’s built-in Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Set up your local Python development environment
&lt;/h2&gt;

&lt;p&gt;We’ll need a local development environment so that we can edit the &lt;code&gt;DataProcessingService&lt;/code&gt; service. As you can see in the architecture diagram above, the &lt;code&gt;DataProcessingService&lt;/code&gt; is dependent on both the &lt;code&gt;VeryLargeJavaService&lt;/code&gt; and the &lt;code&gt;VeryLargeDataStore&lt;/code&gt;, so in order to make a change to this service, we’ll have to interact with these other services as well. Let’s get started!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repository for this application from GitHub:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/datawire/edgey-corp-python.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Install the application dependencies with &lt;code&gt;pip&lt;/code&gt; (you may need to type &lt;code&gt;pip3&lt;/code&gt; if you have Python 3 installed):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd edgey-corp-python/DataProcessingService/
pip install flask requests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run the application (you may need to type &lt;code&gt;python3&lt;/code&gt; if you have Python 3 installed):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Test the application. In another terminal window, we’ll send a request to the service, which should return blue.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl localhost:3000/color
blue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Make Code Changes with Telepresence
&lt;/h2&gt;

&lt;p&gt;To test a code change with Kubernetes, you typically need to build a container image, push the image to a repository, and deploy it to the Kubernetes cluster. This takes minutes.&lt;/p&gt;

&lt;p&gt;Telepresence is an open source, Cloud-Native Computing Foundation project that solves exactly this problem. By creating a bidirectional network connection between your local development environment and the Kubernetes cluster, Telepresence enables fast, local development.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download Telepresence (~60MB):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Mac OS X
sudo curl -fL https://app.getambassador.io/download/tel2/darwin/amd64/latest/telepresence -o /usr/local/bin/telepresence
# Linux
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o /usr/local/bin/telepresence
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Make the binary executable:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chmod a+x /usr/local/bin/telepresence
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Test Telepresence by connecting to the remote Kubernetes cluster:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;telepresence connect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Send a request to the Kubernetes API server:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -ik https://kubernetes.default.svc.cluster.local
HTTP/1.1 401 Unauthorized
Cache-Control: no-cache, private
Content-Type: application/json
Www-Authenticate: Basic realm="kubernetes-master"
Date: Tue, 09 Feb 2021 23:21:51 GMT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations! You’ve successfully configured Telepresence. Telepresence is intercepting the request you’re making to the Kubernetes API server and routing it over its direct connection to the cluster instead of over the Internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Set up an Intercept
&lt;/h2&gt;

&lt;p&gt;An intercept is a routing rule for Telepresence. We can create an intercept that routes traffic intended for the &lt;code&gt;DataProcessingService&lt;/code&gt; in the cluster to the local version of the &lt;code&gt;DataProcessingService&lt;/code&gt; running on port 3000.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the intercept:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;telepresence intercept dataprocessingservice --port 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Access the application directly with Telepresence. In your browser, go to &lt;a href="http://verylargejavaservice:8080" rel="noopener noreferrer"&gt;http://verylargejavaservice:8080&lt;/a&gt;. Again, Telepresence is intercepting requests from your browser and routing them directly to the Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;Now, let’s make a code change. Open &lt;code&gt;edgey-corp-python/DataProcessingService/app.py&lt;/code&gt; and change &lt;code&gt;DEFAULT_COLOR&lt;/code&gt; from &lt;code&gt;blue&lt;/code&gt; to &lt;code&gt;orange&lt;/code&gt;. Save the file.&lt;/li&gt;
&lt;li&gt;Reload the page in your browser and see how the color has changed from blue to orange.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s it! With Telepresence, we saw how quickly we can go from editing a local service to seeing how those changes will look when deployed with the larger application. When you compare it to our original process of building and deploying a container after every change, it’s easy to see how much time you can save, especially as you make more complex changes or run even larger services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn More about Telepresence
&lt;/h2&gt;

&lt;p&gt;Developers at organizations adopting Kubernetes typically face slow feedback loops caused by inefficient local development environments. Today, we’ve learned how to use Telepresence to set up fast, efficient development environments for Kubernetes and get back to the instant feedback loops you had with your legacy applications.&lt;/p&gt;

&lt;p&gt;If you want to learn more about Telepresence, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Watch a &lt;a href="https://www.youtube.com/watch?v=W_a3aErN3NU" rel="noopener noreferrer"&gt;demo video&lt;/a&gt;, which shows more details on different features in Telepresence&lt;/li&gt;
&lt;li&gt;Check out the &lt;a href="https://www.getambassador.io/docs/latest/telepresence/quick-start/qs-python/" rel="noopener noreferrer"&gt;Python Quickstart for Telepresence&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Learn about &lt;a href="https://www.getambassador.io/docs/pre-release/telepresence/howtos/preview-urls/#collaboration-with-preview-urls" rel="noopener noreferrer"&gt;Preview URLs&lt;/a&gt; for easy collaboration with teammates&lt;/li&gt;
&lt;li&gt;&lt;a href="https://d6e.co/slack" rel="noopener noreferrer"&gt;Join our Slack channel&lt;/a&gt; to connect with the Telepresence community&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our next tutorial, we’ll use Telepresence to set up a local Kubernetes development environment and then use PyCharm to set breakpoints and debug a broken service. To be notified when more tutorials are available, make sure to check out our &lt;a href="https://www.getambassador.io" rel="noopener noreferrer"&gt;website&lt;/a&gt; or follow us on &lt;a href="http://www.twitter.com/ambassadorlabs" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://medium.com/python-pandemonium/developing-python-applications-on-kubernetes-75be68a3f0f9" rel="noopener noreferrer"&gt;Python Pandemonium&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>kubernetes</category>
      <category>telepresence</category>
      <category>microservices</category>
    </item>
    <item>
      <title>SRE vs Platform Engineering</title>
      <dc:creator>Richard Li</dc:creator>
      <pubDate>Tue, 16 Feb 2021 21:08:00 +0000</pubDate>
      <link>https://forem.com/ambassadorlabs/the-rise-of-cloud-native-engineering-organizations-4dge</link>
      <guid>https://forem.com/ambassadorlabs/the-rise-of-cloud-native-engineering-organizations-4dge</guid>
      <description>&lt;p&gt;Over the past decade, engineering and technology organizations have converged on a common set of best practices for building and deploying cloud-native applications. These best practices include continuous delivery, containerization, and building observable systems.&lt;/p&gt;

&lt;p&gt;At the same time, cloud-native organizations have radically changed how they’re organized, moving from large departments (development, QA, operations, release) to smaller, independent development teams. These application development teams are supported by two new functions: site reliability engineering and platform engineering. SRE and platform engineering are the spiritual successors of traditional operations teams, bringing the discipline of software engineering to different aspects of operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j8q19xyfmslnilcxoe1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j8q19xyfmslnilcxoe1.png" alt="image" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Site Reliability Engineering and Platform Engineering
&lt;/h1&gt;

&lt;p&gt;Platform engineering teams apply software engineering principles to accelerate software delivery. Platform engineers ensure application development teams are productive in all aspects of the software delivery lifecycle.&lt;/p&gt;

&lt;p&gt;Site reliability engineering teams apply software engineering principles to improve reliability. Site reliability engineers minimize the frequency and impact of failures that can impact the overall reliability of a cloud application.&lt;/p&gt;

&lt;p&gt;These two teams are frequently confused and the terms are sometimes used interchangeably. Indeed, some organizations consolidate SRE and platform engineering into the same function. This occurs because both roles apply a common set of principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Platform as product. These teams spend time understanding their internal customers, building roadmaps, having a planned release cadence, writing documentation, and doing all the things that go into a software product.&lt;/li&gt;
&lt;li&gt;Self-service platforms. These teams build their platforms for internal use. Best practices are encoded into these platforms, so their users don’t need to worry about them -- they just push the button. In the &lt;a href="https://puppet.com/resources/report/2020-state-of-devops-report/" rel="noopener noreferrer"&gt;Puppet Labs 2020 State of DevOps report&lt;/a&gt;, Puppet Labs found that highly evolved DevOps organizations had more self-service infrastructure than organizations at a low stage of DevOps evolution.&lt;/li&gt;
&lt;li&gt;A constant focus on &lt;a href="https://sre.google/sre-book/eliminating-toil/" rel="noopener noreferrer"&gt;eliminating toil&lt;/a&gt;. As defined in the Google SRE book, toil is manual, repetitive, automatable, tactical work. The best SRE and platform teams identify toil, and work to eliminate it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Platform Engineering
&lt;/h1&gt;

&lt;p&gt;Platform engineers constantly examine the entire software development lifecycle from source to production. From this introspective process, they build a workflow that enables application developers to rapidly code and ship software. A basic workflow typically includes a source control system connected with a continuous integration system, along with a way to deploy artifacts into production.&lt;/p&gt;

&lt;p&gt;As the number of application developers using the workflow grows, the needs of the platform evolve. Different teams of application developers need similar but distinct workflows, so self-service infrastructure becomes important. Common platform engineering targets for self-service include CI/CD, alerting, and deployment workflows.&lt;/p&gt;

&lt;p&gt;In addition to self-service, education and collaboration become challenges. Platform engineers find they increasingly spend time educating application developers on best practices and how to best use the platform. Application developers also find that they depend on other teams of application developers, and look to the platform engineering team to give them the tools to collaborate productively with different teams.&lt;/p&gt;

&lt;h1&gt;
  
  
  Site Reliability Engineering
&lt;/h1&gt;

&lt;p&gt;Site reliability engineers create and evolve systems to automatically run applications, reliably. The concept of site reliability engineering originated at Google, and is documented in detail in the Google SRE Book. Ben Treynor Sloss, the SVP at Google responsible for technical operations, described SRE as “what happens when you ask a software engineer to design an operations team.” &lt;/p&gt;

&lt;p&gt;SREs define service level objectives and build systems to help services achieve these objectives. These systems evolve into a platform and workflow that encompass monitoring, incident management, eliminating single points of failure, failure mitigation, and more.&lt;/p&gt;
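&lt;p&gt;As a rough illustration (not from the original article), the arithmetic behind an SLO-driven system fits in a few lines: an availability target implies an error budget, and the budget already spent tells the team how much reliability headroom remains. Function names and numbers below are hypothetical.&lt;/p&gt;

```python
# Sketch: error-budget arithmetic implied by an availability SLO.
# Names and numbers are illustrative, not from the article.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime, in minutes, over the window for a given SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative: SLO violated)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% availability SLO over 30 days allows about 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 10.0), 2))  # 0.77
```

&lt;p&gt;The point of framing objectives this way is that a burned-down budget becomes an objective signal for slowing releases or prioritizing reliability work.&lt;/p&gt;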

&lt;p&gt;A key part of SRE culture is to treat every failure as a failure in the reliability system. Rigorous post-mortems are critical to identifying the root cause of the failure, and corrective actions are introduced into the automatic system to continue to improve reliability.&lt;/p&gt;

&lt;h1&gt;
  
  
  SRE and Platform Engineering at New Relic
&lt;/h1&gt;

&lt;p&gt;One of us (Bjorn Freeman-Benson) managed the engineering organization at New Relic until 2015 as it grew from a handful of customers to tens of thousands of customers, all sending millions of requests per second into the cloud. New Relic had independent SRE and platform engineering teams that followed the general principles outlined above.&lt;/p&gt;

&lt;p&gt;One of the reasons these teams were built separately was that the people who thrived in these roles differed. While both SREs and platform engineers need strong systems engineering skills in addition to classic programming skills, the roles dictate very different personality types. SREs tend to enjoy crisis management and get an adrenaline rush out of troubleshooting an outage. SRE managers thrive under intense pressure and are good at recruiting and managing similarly minded folks. On the other hand, platform engineers are more typical software engineers, preferring to work without interruption on big, complex problems. Platform engineering managers prefer to operate on a consistent cadence.&lt;/p&gt;

&lt;h1&gt;
  
  
  DevOps and GitOps
&lt;/h1&gt;

&lt;p&gt;Over the past decade, DevOps has become a popular term to describe many of these practices. More recently, GitOps has also emerged as a popular term. How do DevOps and GitOps relate to platform and SRE teams?&lt;/p&gt;

&lt;p&gt;Both DevOps and GitOps are loosely codified sets of principles for managing different aspects of infrastructure. The core principles of both philosophies -- automation, infrastructure as code, and the application of software engineering -- are very similar.&lt;/p&gt;

&lt;p&gt;DevOps is a broad movement that began with a focus on eliminating traditional silos between development and operations. Over time, strategies such as infrastructure automation and engineering applications with operations in mind have gained widespread acceptance as better ways to build highly reliable applications.&lt;/p&gt;

&lt;p&gt;GitOps is an approach for application delivery. In GitOps, declarative configuration is used to codify the desired state of the application at any moment in time. This configuration is managed in a versioned source control system as the single source of truth. This ensures auditability, reproducibility, and consistency of configuration.&lt;/p&gt;
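&lt;p&gt;The reconciliation idea at the heart of GitOps can be sketched in a few lines of Python. This is a toy model: the dictionaries stand in for the Git repository and the live cluster, and the operation tuples for real API calls.&lt;/p&gt;

```python
# Toy GitOps reconciler: diff the desired state (committed to Git)
# against the live state and emit the operations needed to converge.
# Data and names are hypothetical, not a real Kubernetes API.

desired = {  # single source of truth, versioned in Git
    "web": {"image": "web:1.4", "replicas": 3},
    "api": {"image": "api:2.1", "replicas": 2},
}
live = {     # what is actually running right now
    "web": {"image": "web:1.3", "replicas": 3},
}

def reconcile(desired: dict, live: dict) -> list:
    """Return the operations that make `live` match `desired`."""
    ops = [("apply", name, spec)
           for name, spec in desired.items()
           if live.get(name) != spec]
    ops += [("delete", name) for name in live if name not in desired]
    return ops

for op in reconcile(desired, live):
    print(op)
# ('apply', 'web', {'image': 'web:1.4', 'replicas': 3})
# ('apply', 'api', {'image': 'api:2.1', 'replicas': 2})
```

&lt;p&gt;Because the desired state is declarative and versioned, re-running the reconciler against an already-converged cluster produces no operations -- which is what gives GitOps its auditability, reproducibility, and consistency.&lt;/p&gt;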

&lt;blockquote&gt;
&lt;p&gt;DevOps is a set of guiding principles for SRE, while GitOps is a set of guiding principles for platform engineering.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Unlocking application development productivity
&lt;/h1&gt;

&lt;p&gt;Site reliability engineering and platform engineering are two functions that are critical to optimizing engineering organizations for building cloud-native applications. The SRE team works to deliver infrastructure for highly reliable applications, while the platform engineering team works to deliver infrastructure for rapid application development. Together, these two teams unlock the productivity of application development teams.&lt;/p&gt;

&lt;p&gt;This story was originally published on the &lt;a href="https://www.getambassador.io/resources/rise-of-cloud-native-engineering-organizations/" rel="noopener noreferrer"&gt;Ambassador blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Optimize the Kubernetes Developer Experience with Version 0</title>
      <dc:creator>Richard Li</dc:creator>
      <pubDate>Fri, 10 Jul 2020 13:39:46 +0000</pubDate>
      <link>https://forem.com/ambassadorlabs/optimize-the-kubernetes-developer-experience-with-version-0-28n7</link>
      <guid>https://forem.com/ambassadorlabs/optimize-the-kubernetes-developer-experience-with-version-0-28n7</guid>
      <description>&lt;p&gt;One of the core promises of microservices is development team autonomy, which should, in theory, translate into faster and better decision making. But sometimes, this theory doesn’t translate into reality.&lt;/p&gt;

&lt;p&gt;Why is this the case?&lt;/p&gt;

&lt;p&gt;There are a multitude of reasons why microservices don’t work well. Microservices, cloud-native, and Kubernetes represent a new approach and a culture shift, and there are good and bad ways to approach the challenge.&lt;/p&gt;

&lt;p&gt;One of the keys to success is enabling a consistent developer experience for each microservice from day 0, which is critical for unlocking team autonomy and development velocity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bootstrapping a Microservice
&lt;/h2&gt;

&lt;p&gt;Creating microservices should be cheap and easy. This enables app dev teams to quickly build and ship new microservices to address specific business needs without being encumbered by preexisting code. At the same time, this agility and flexibility do come at a cost — applications become distributed, dynamic organisms that can be harder to develop, test, and debug.&lt;/p&gt;

&lt;h2&gt;
  
  
  Better Developer Experience == Better Customer Experience
&lt;/h2&gt;

&lt;p&gt;In a &lt;a href="https://www.getambassador.io/podcasts/gene-kim-on-developer-productivity-the-five-ideals-and-platforms/" rel="noopener noreferrer"&gt;recent Ambassador podcast&lt;/a&gt;, Gene Kim spoke about how a great developer experience is critical to delivering value to customers. By creating a great developer experience, developers can ship more code, which results in happier customers.&lt;/p&gt;

&lt;p&gt;We’ve seen a similar trend in organizations that successfully adopt microservices: an emphasis on the developer experience. While it may not be a “strategic” initiative in the organization, usually there’s someone at the company who is passionate about creating a great developer workflow and is able to spend time working on continuously improving that developer workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Microservices Developer Experience
&lt;/h2&gt;

&lt;p&gt;With a monolith, there’s a common application that is the target for the development workflow. With microservices, there is no longer a single common application. Every new microservice requires a developer workflow. Without due care, it’s easy to have a smorgasbord of microservices, all with poor developer workflows. In this situation, velocity actually decreases since microservices can’t be easily and rapidly shipped. This defeats the entire rationale for adopting microservices in the first place, and development slows.&lt;/p&gt;

&lt;p&gt;At the same time, &lt;a href="https://blog.getambassador.io/why-it-ticketing-systems-dont-work-with-microservices-18e2be509bf6" rel="noopener noreferrer"&gt;microservices present an opportunity for improving the developer experience&lt;/a&gt;. By optimizing the developer experience of each microservice, teams can build the best possible developer experience for the team (and not the organization), and continue to optimize that experience as the application and team evolve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer Experience, Defined
&lt;/h2&gt;

&lt;p&gt;A developer experience is the workflow a developer uses to develop, test, deploy, and release software. The developer experience typically consists of both an inner dev loop and an outer dev loop. &lt;a href="https://blog.getambassador.io/four-approaches-for-microservice-testing-inner-dev-loops-in-kubernetes-bcf779668179" rel="noopener noreferrer"&gt;The inner dev loop&lt;/a&gt; is a single developer workflow. A single developer should be able to set up and use an efficient inner dev loop to code and test changes quickly. The inner dev loop is typically used for pre-commit changes. The outer dev loop is a shared developer workflow that is orchestrated by a continuous integration system. The outer dev loop is used for post-commit changes and includes automated builds, tests, and deploys.&lt;/p&gt;

&lt;p&gt;Engineering a good inner and outer dev loop is key to a great developer experience and unlocking the potential of microservices. So how can an engineer help in building a great developer experience?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Version 0 Strategy
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.getambassador.io/learn/kubernetes-glossary/version-0/" rel="noopener noreferrer"&gt;version 0 strategy&lt;/a&gt; involves shipping an end-to-end development and deployment workflow as the first milestone — before any features are coded. A good test of a version 0 milestone is if a developer on a different team is able to independently code, test, and release a change to the microservice without consulting the original team. This implies a version 0 has a development environment, a deployment workflow, and documentation that explains how to get started and ship. With a version 0 in place, the microservices team then begins with feature development, knowing that their ability to rapidly iterate and ship is already in place.&lt;/p&gt;

&lt;p&gt;The Version 0 approach works well for a number of reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The codebase is very simple, so there is no reverse engineering of obscure dependencies, monkey patches, or any other gremlins to get a working environment&lt;/li&gt;
&lt;li&gt;With no features, there is less pressure from external parties who want to implement changes and adjustments to the roadmap&lt;/li&gt;
&lt;li&gt;A great developer experience accrues benefits over time, so starting with Version 0 maximizes the period over which those benefits accrue&lt;/li&gt;
&lt;li&gt;Most importantly, version 0 sets the tone for the microservice, which is that developer experience is important!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Version 0 for Engineers
&lt;/h2&gt;

&lt;p&gt;Any engineer can adopt the version 0 practice (and should!). &lt;a href="https://www.getambassador.io/resources/enabling-full-cycle-development" rel="noopener noreferrer"&gt;A development team should have full autonomy over a microservice&lt;/a&gt;, which includes the development timeline and workflow! So starting with a Version 0 will help the team rapidly bootstrap the microservice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Version 0 for Managers
&lt;/h2&gt;

&lt;p&gt;Managers can support Version 0 across the organization by asking engineering teams that are creating new microservices to start with a Version 0. As engineering organizations grow, the organization could choose to assign platform engineers focused on development workflows. These platform engineers should not implement Version 0, but instead provide tools, templates, and best practices to the microservice teams on how best to build a version 0. The Netflix engineering team adopted &lt;a href="https://netflixtechblog.com/full-cycle-developers-at-netflix-a08c31f83249#0df4" rel="noopener noreferrer"&gt;this approach&lt;/a&gt; to developer empowerment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Every engineer has felt the pain of a bad developer workflow. A trivial one-line fix takes a half-day to complete. Microservices can exacerbate this problem. The Version 0 strategy is a simple but powerful strategy that will help integrate developer experience into your organization’s development workflow.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was originally published on the &lt;a href="https://blog.getambassador.io/k8s-might-slow-you-down-but-theres-one-thing-you-can-do-about-it-the-version-0-strategy-c81a1a0ff6e" rel="noopener noreferrer"&gt;Ambassador blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>microservices</category>
      <category>ambassador</category>
    </item>
  </channel>
</rss>
