<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Yuval Oren</title>
    <description>The latest articles on Forem by Yuval Oren (@yuvalo).</description>
    <link>https://forem.com/yuvalo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F112452%2F8ad8a6a8-8638-4d64-81b8-3b9a5f804810.jpg</url>
      <title>Forem: Yuval Oren</title>
      <link>https://forem.com/yuvalo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/yuvalo"/>
    <language>en</language>
    <item>
      <title>9 Ways to Speed Up Your CI/CD Pipelines</title>
      <dc:creator>Yuval Oren</dc:creator>
      <pubDate>Thu, 02 Apr 2020 13:21:47 +0000</pubDate>
      <link>https://forem.com/yuvalo/9-ways-to-speed-up-your-ci-cd-pipelines-5e19</link>
      <guid>https://forem.com/yuvalo/9-ways-to-speed-up-your-ci-cd-pipelines-5e19</guid>
      <description>&lt;p&gt;Do you know that one web service you have that builds multiple executables, a database migration script, and downloads a million libraries? &lt;br&gt;&lt;br&gt;
You wait an hour just to watch it get to 80% when it fails. For the third time by now, which means you have to make more pipeline modifications. And of course, wait another hour to see if that worked.&lt;/p&gt;

&lt;p&gt;Oh, and think about all those poor developers waiting on their Pull Request build to finish before merging to master.&lt;/p&gt;

&lt;p&gt;As a DevOps consultant, I spend much of my life waiting on builds. I see them in all flavors, shapes, and sizes. It’s not uncommon to catch me sitting there with a thousand-yard stare, after hitting that build button for the 100th time that day, expecting a different result.&lt;br&gt;&lt;br&gt;
What they don’t tell you before you join this business...&lt;br&gt;
I once joked that my spiritual name would be “The one who stares at build logs.”&lt;/p&gt;

&lt;p&gt;Anyway, there are things you can do to make your life and your developers’ lives easier by making your build pipeline faster. Every minute you shave off increases development cadence and reduces resource costs. And yes, it helps your mental stability.&lt;/p&gt;

&lt;p&gt;So here is a list of things you can do to speed up overall pipeline runtime:&lt;/p&gt;
&lt;h2&gt;
  
  
  Cache Modules
&lt;/h2&gt;

&lt;p&gt;Downloading modules at build time takes a significant portion of a build. Whether you are using NPM / Maven / Gradle / PIP, dependencies tend to get bloated, and you pay for it in wait time.&lt;br&gt;
You can use caching to speed things up, instead of starting from scratch on every build.&lt;/p&gt;

&lt;p&gt;Back in the old days, before everyone started to use Docker-based agents for their build (Jenkins / Gitlab / Whatever you cool kids use today), this wasn’t always a problem. The builds sometimes shared these libraries, such as a shared .m2 folder for Maven, and the first build to introduce a library took on the download time. &lt;br&gt;&lt;br&gt;
That introduced other issues, such as conflicting versions, and race conditions when running multiple builds at the same time, but that is a story for another time.&lt;/p&gt;

&lt;p&gt;There are several solutions you can use in Dockerized build environments:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use a shared volume that includes the cache and attach it to the container.&lt;/li&gt;
&lt;li&gt;Pre-build “build images” that include all the third-party libraries.&lt;/li&gt;
&lt;li&gt;Cache third party packages in a local repository such as Nexus / Artifactory.&lt;/li&gt;
&lt;/ol&gt;
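
&lt;p&gt;As an illustration, the shared-cache approach (option 1) might look something like this in GitLab CI - the paths and cache key here are just examples, so adjust them for your project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .gitlab-ci.yml - keep the Maven repository between builds
variables:
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - .m2/repository/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;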

&lt;p&gt;One other thing that I would recommend is to lock in versions of your dependencies. Not only will you save time by not downloading new versions, but it will also help you avoid conflicts.&lt;/p&gt;
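
&lt;p&gt;For example, in the NPM world, committing your &lt;code&gt;package-lock.json&lt;/code&gt; and installing with &lt;code&gt;npm ci&lt;/code&gt; gives you exactly the pinned versions on every build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Installs exactly what the lockfile specifies - no version resolution
npm ci
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
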
&lt;h2&gt;
  
  
  Build only what you need
&lt;/h2&gt;

&lt;p&gt;“We only run tiny microservices at our company,” - Said no one, ever.&lt;/p&gt;

&lt;p&gt;If you are building actual microservices that take just a few minutes to compile, then I’m proud of you. You are one of a select few who got what microservices are all about and were able to pull it off.&lt;/p&gt;

&lt;p&gt;The rest of us may still need to live with not-so-micro services, sometimes monolithic applications that take a long time to compile and test. &lt;br&gt;&lt;br&gt;
Even if you are using microservices, sometimes a Mono Repo makes sense, and then you have the problem of building everything, even if only one module changed.&lt;/p&gt;

&lt;p&gt;In this case, the solution is straightforward but not always simple - build only the modules that are relevant for that commit.&lt;/p&gt;

&lt;p&gt;Angular is an excellent example of a framework with a nifty little tool, Nx, that helps you build only the modules whose files changed, while respecting dependencies if needed. &lt;br&gt;&lt;br&gt;
Given two git commits,  Nx calculates the changes and then outputs the affected modules for you to build.&lt;/p&gt;
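
&lt;p&gt;The Nx commands look roughly like this (check the Nx documentation for the exact flags in your version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# List the projects affected by the changes between two commits
nx affected:apps --base=master --head=HEAD

# Build and test only the affected projects
nx affected:build --base=master --head=HEAD
nx affected:test --base=master --head=HEAD
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;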

&lt;p&gt;With Maven, you will need to do some of the heavy lifting yourself, but using the dependency tree and other tricks allow you to calculate what to build.&lt;/p&gt;
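
&lt;p&gt;A rough sketch of the idea with Maven - detect the changed top-level modules from git, then build only those plus whatever depends on them. The change-detection line here is deliberately simplistic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Top-level directories touched by this commit (simplistic example)
MODULES=$(git diff --name-only HEAD~1 HEAD | cut -d/ -f1 | sort -u | paste -sd, -)

# -pl limits the build to those modules, -amd also builds their dependents
mvn install -pl "$MODULES" -amd
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
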
&lt;h2&gt;
  
  
  Parallel all the things
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gRExH7QB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ecy1226uyxbxikyomkq7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gRExH7QB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ecy1226uyxbxikyomkq7.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When possible, use parallel processing.&lt;br&gt;
For the build phase, Maven allows you to pass the -T flag to specify the number of threads available for the build. In Gradle, it’s the --parallel flag.  &lt;br&gt;&lt;br&gt;
If you have the resources, use them.&lt;/p&gt;
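
&lt;p&gt;For example, with Maven:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Four threads for the whole build
mvn -T 4 clean install

# Or one thread per available CPU core
mvn -T 1C clean install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;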

&lt;p&gt;Another phase that can benefit from parallelization is unit testing. These are usually individual, well-scoped tests that should have no problem running alongside each other. In Ruby, you can use gems like knapsack_pro; in JUnit, set the parallel parameter in the Maven Surefire plugin configuration -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;configuration&amp;gt;
    &amp;lt;parallel&amp;gt;all&amp;lt;/parallel&amp;gt;
&amp;lt;/configuration&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  More resources
&lt;/h2&gt;

&lt;p&gt;Sometimes it’s just a matter of underpowered build machines.&lt;br&gt;
More CPUs means more threads.&lt;br&gt;
More memory, well, gives your jobs more memory.&lt;/p&gt;

&lt;p&gt;It’s perfectly ok to try and save money on your DevOps infrastructure, but make sure that you fully understand how it affects the bigger picture. Faster builds, or more builds in parallel, makes for quicker development and delivery.&lt;/p&gt;

&lt;p&gt;This is especially true now that you can have auto-scaled / on-demand workers that don’t have to be up 24/7. Take some of that cost-saving and move it towards faster builds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use mocks
&lt;/h2&gt;

&lt;p&gt;Starting up and connecting to third-party services during unit tests is, in most cases, redundant and wrong. You can use mocks to simulate a connection to these services and run the tests against them.&lt;/p&gt;

&lt;p&gt;For unit tests, you don’t need an actual Redis service; use a mock instead. You are testing your own code, not the Redis driver.&lt;br&gt;
Oh, and mocks require fewer resources to run and administer, which is always a bonus.&lt;/p&gt;
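
&lt;p&gt;A minimal sketch of the idea with JUnit and Mockito - the &lt;code&gt;Cache&lt;/code&gt; and &lt;code&gt;UserService&lt;/code&gt; types here are made-up stand-ins for your own Redis-backed code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical cache interface wrapping your Redis access
Cache cache = Mockito.mock(Cache.class);
Mockito.when(cache.get("user:42")).thenReturn("alice");

// The code under test receives the mock - no Redis server involved
UserService service = new UserService(cache);
assertEquals("alice", service.lookupName("user:42"));
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;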

&lt;h2&gt;
  
  
  Does this test spark joy? Clean up your old tests
&lt;/h2&gt;

&lt;p&gt;Time goes on, and code keeps piling up. We developers and engineers tend to be hoarders, and as much fun as it is to delete old code, we usually avoid doing it.&lt;br&gt;
Tests take time, and running an unnecessary test is just a waste.&lt;br&gt;
So go ahead and do some deleting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cPMUprWs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vf2iutn3z85rid2x4vix.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cPMUprWs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vf2iutn3z85rid2x4vix.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Are these really unit tests?
&lt;/h2&gt;

&lt;p&gt;Speaking of unit tests, make sure that your unit tests are just that.&lt;br&gt;&lt;br&gt;
It’s common to see unit tests that cover more than just a small piece of code - sometimes the whole application, and at times, interactions with external components.&lt;br&gt;&lt;br&gt;
Take a close look at your unit tests, and see if one is, in fact, an integration test. If so, try moving it further down the pipeline.&lt;br&gt;&lt;br&gt;
You still run the test, just in a later phase; and if a build is bound to fail, it’s better to fail fast at an earlier stage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tweak liveness/readiness/health thresholds
&lt;/h2&gt;

&lt;p&gt;This one tip can shave off a few minutes in the deployment phase, and if you haven’t done it already, it’s well worth your time optimizing.&lt;/p&gt;

&lt;p&gt;When you have some sort of load balancing or “service” layer on top of your application, you usually find a health check mechanism to tell which instance/pod/container is ready to accept traffic.&lt;/p&gt;

&lt;p&gt;It goes like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An instance running the application is added to the load balancer, or a pod launches in a deployment.&lt;/li&gt;
&lt;li&gt;The load balancer starts polling the app with the configured health check. There is usually a grace period before starting to poll, to let the app warm up.&lt;/li&gt;
&lt;li&gt;If the application is healthy for enough polling requests, it is considered up, and the application is ready to accept traffic. Usually, there is a configurable threshold for the number of successful poll requests.&lt;/li&gt;
&lt;li&gt;As the application lives, the load balancer keeps polling it, and if the health check fails enough times, again, with a different threshold, the application is taken out of the pool.&lt;/li&gt;
&lt;li&gt;It goes back to the pool if it passes the same number of polls as in #3.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These settings are usually generic and, in many cases, remain at their default values.&lt;br&gt;
In your scenario, it may not be necessary to wait 5 minutes (just an example) for the application, when it shouldn’t take more than 1 minute to be ready.&lt;/p&gt;

&lt;p&gt;Here are some of the things you may want to go over and tweak:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make sure that your health check reflects the actual status of your application.&lt;/li&gt;
&lt;li&gt;Give your applications time to start by setting the delay.&lt;/li&gt;
&lt;li&gt;Tweak the timeouts, intervals, and thresholds for deciding when the app is down or when it’s up and ready to accept traffic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In Kubernetes, you will find two layers of checks - the liveness probe, which determines whether to restart the pod, and the readiness probe, which tells the service when to add the pod to the active endpoint list. A pod can be live, but not ready.&lt;/p&gt;
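
&lt;p&gt;In a Kubernetes Deployment, these knobs live on the container spec. A sketch with made-up paths and numbers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15   # give the app time to start
  periodSeconds: 10
  failureThreshold: 3       # restart after 3 consecutive failures
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  successThreshold: 1       # one success puts the pod back in rotation
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;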

&lt;p&gt;A word of caution - Make sure that when making these changes, they make sense for production, and don’t cause issues such as flapping.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use fresh images
&lt;/h2&gt;

&lt;p&gt;Don’t waste time on updates and system installations - keep your images fresh and up to date.&lt;br&gt;
Consider running a daily task that builds the base images used by your applications.&lt;br&gt;
The same goes for AMIs and other VM images.&lt;/p&gt;

&lt;p&gt;You gain two things from that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You control these updates before things go to production, which gives you another safety net. You want to fail earlier in your pipeline and not when auto-scaling is launching new instances.&lt;/li&gt;
&lt;li&gt;Builds and deployments run faster as there are fewer updates to install.&lt;/li&gt;
&lt;/ol&gt;
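
&lt;p&gt;One way to do this is a scheduled CI job that rebuilds and pushes the base image every night - the registry name and Dockerfile path here are, of course, placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Run daily from a CI scheduler (cron, GitLab schedules, a Jenkins timer...)
docker build -t registry.example.com/myapp-base:latest -f Dockerfile.base .
docker push registry.example.com/myapp-base:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;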

&lt;p&gt;&lt;a href="https://pushbuildtestdeploy.com/signup/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WPhEtQ88--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wcqji1l9emr8hguypbf5.png" alt="Signup to pushbuildtestdeploy"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>pipeline</category>
      <category>cicd</category>
    </item>
    <item>
      <title>I Swear I've Seen This Error Before</title>
      <dc:creator>Yuval Oren</dc:creator>
      <pubDate>Wed, 19 Feb 2020 11:14:51 +0000</pubDate>
      <link>https://forem.com/yuvalo/i-swear-i-ve-seen-this-error-before-4p9g</link>
      <guid>https://forem.com/yuvalo/i-swear-i-ve-seen-this-error-before-4p9g</guid>
      <description>&lt;center&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3ABq4Ncm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wvbuej7bsrsaxr2mnxjo.png" alt=""&gt;&lt;/center&gt;

&lt;p&gt;I’m currently in the midst of a substantial Kubernetes migration project, and the other day, after a messy merge, an error came up that I’ve definitely seen before. &lt;br&gt;&lt;br&gt;
I remember solving it, but as time passed, the solution slipped away from me. I couldn’t remember what did the trick.&lt;br&gt;
Well, I bet git will tell me. &lt;br&gt;&lt;br&gt;
Scrolling down the history log did highlight a few attempts to solve it, but nothing too specific.&lt;/p&gt;

&lt;p&gt;So how do you deal with errors and issues during your work? What is the best way to document them? These are not necessarily documented bugs that your QA team or automation processes caught, but the stack traces, exceptions, and errors you deal with during development.&lt;/p&gt;

&lt;p&gt;Here are a few things you can do to help future you:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Add the errors to the git commit message
&lt;/h2&gt;

&lt;p&gt;Make a habit of pasting the error message / exception / stack trace into your git commit messages.&lt;br&gt;&lt;br&gt;
It’s simple, and the value is right there - you can search for the error and view the solution right there in the code.&lt;br&gt;
Really, there isn’t much to it.&lt;/p&gt;
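
&lt;p&gt;For example (the error text here is invented):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Put the error in the commit body, where it's searchable
git commit -m "Fix startup crash" -m "java.lang.IllegalStateException: config not loaded"

# Months later, find the fix again
git log --grep="IllegalStateException"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;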

&lt;h2&gt;
  
  
  2. Take the time and create an issue
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---vvX7uyv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/s71byx7e7dk88c1rjdfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---vvX7uyv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/s71byx7e7dk88c1rjdfz.png" alt=""&gt;&lt;/a&gt;&lt;em&gt;&lt;a href="https://twitter.com/brianhogg"&gt;@brianhogg&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can help your team by creating an issue in your bug/ticketing system, describing the full scenario and linking to the git commit.&lt;br&gt;&lt;br&gt;
It has more visibility and allows for discussion - someone may have a better view of things and direct you to a better solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Keep a Development Journal
&lt;/h2&gt;

&lt;p&gt;I like to keep, at the least, an open Markdown file (I never did manage to pick up &lt;a href="https://orgmode.org/"&gt;org-mode&lt;/a&gt;) for notes during development. For me, it tells the story of why I did things a certain way, and the writing process forces me to think before committing to things. &lt;br&gt;&lt;br&gt;
It’s also a great place to keep short notes, errors, and things you struggled with for future reference.&lt;br&gt;&lt;br&gt;
Later on, you can use these entries to create project documentation or defend your solution.&lt;/p&gt;

&lt;p&gt;In any case, this is another excellent place to document errors, allowing you to understand the full story behind the issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Write a blog post
&lt;/h2&gt;

&lt;p&gt;Even better, you can share your solution with the world. There are still so many non-documented errors that, at best, show up in a long and still open git issue thread. &lt;br&gt;&lt;br&gt;
The post doesn’t have to be long, as long as you have a searchable error and a solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Answer Stack Overflow questions
&lt;/h2&gt;

&lt;p&gt;I’ll admit that I can do much better at this, but often, such issues are posted as open questions on Stack Overflow. Sometimes there is an answer, but your use case is not 100% similar, and nothing works.&lt;br&gt;
By the time I do solve the issue, I have a million open tabs, and I forget to write down what worked for me.&lt;/p&gt;

&lt;p&gt;Be a better community member than me and post answers on Stack Overflow.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>git</category>
    </item>
    <item>
      <title>Should I Use This Helm Chart?</title>
      <dc:creator>Yuval Oren</dc:creator>
      <pubDate>Fri, 13 Sep 2019 12:27:47 +0000</pubDate>
      <link>https://forem.com/yuvalo/should-i-use-this-helm-chart-ja3</link>
      <guid>https://forem.com/yuvalo/should-i-use-this-helm-chart-ja3</guid>
      <description>&lt;p&gt;There are so many Helm charts out there, and it's very tempting to just pick one and go, but making a rash decision can come back and haunt you later down the road.&lt;/p&gt;

&lt;p&gt;I have to admit that the official Helm repository has come a long way; where it used to serve mostly half-baked Charts, by now &lt;em&gt;most&lt;/em&gt; of them even work!&lt;/p&gt;

&lt;p&gt;So, before you blindly pick a Helm chart and incorporate it into your project, I recommend going over the list below and making sure that you do your due diligence.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;tldr: Things you should keep in mind when choosing a Helm Chart:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What does a good implementation look like?&lt;/li&gt;
&lt;li&gt;Is this the right repo for this application?&lt;/li&gt;
&lt;li&gt;Read the documentation - Readme.md&lt;/li&gt;
&lt;li&gt;Is this project active enough?&lt;/li&gt;
&lt;li&gt;Is this chart stable?&lt;/li&gt;
&lt;li&gt;Will you have to modify it, or can you use it as is?&lt;/li&gt;
&lt;li&gt;Is the Chart overkill for my requirements?&lt;/li&gt;
&lt;li&gt;What about security?&lt;/li&gt;
&lt;li&gt;Take it for a test drive.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What does a good implementation look like?
&lt;/h2&gt;

&lt;p&gt;A Helm chart is there to make your life easier. It lets you draw on public or official knowledge about how to optimally implement an application. Helm Charts also tend to be very generalized to support multiple use cases, and by doing so may lead you to an implementation that is not "right" for your requirements.&lt;br&gt;
To avoid this pitfall, and to make the right decision when picking a Chart, you need some understanding of the underlying solution.&lt;/p&gt;

&lt;p&gt;The first thing I do before even thinking about Kubernetes is taking the time to research how the application works.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What does a successful implementation look like?&lt;/li&gt;
&lt;li&gt;Is there a way to make it highly available? If so, what are my options?&lt;/li&gt;
&lt;li&gt;What about security? TLS, Hardening, AAA&lt;/li&gt;
&lt;li&gt;What are my options for running this solution at scale?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only then do I move on to Kubernetes. With the building blocks in mind, my decision is more informed and less &lt;a href="https://en.wikipedia.org/wiki/Cargo_cult"&gt;cargo culting&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I then look at:&lt;/p&gt;

&lt;h3&gt;
  
  
  Redundancy
&lt;/h3&gt;

&lt;p&gt;Some charts implement redundancy in ways that may or may not fit your overall architecture. Sometimes the Chart is written in a way that is geared towards a different use case.&lt;br&gt;
One example is Redis Cluster vs. Sentinels. These are two different implementations of HA for Redis, and there are separate Helm Charts for each.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running the version you need
&lt;/h3&gt;

&lt;p&gt;There are cases where charts support specific versions of an application and could be a little behind the latest releases.&lt;br&gt;
Chart elements such as Configmaps and Secrets that contain the application's settings may break with the latest release.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Discovery
&lt;/h3&gt;

&lt;p&gt;How will your applications find and work with each other?&lt;br&gt;
The Kafka Chart used to support internal connections but failed to solve the problem of exposing the brokers outside the cluster. That is no longer the case, but it's an excellent example of a potential time sink that could be avoided.&lt;br&gt;
A quick look at how the Kubernetes services are configured should give you a clear view of how to interact with the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scale
&lt;/h3&gt;

&lt;p&gt;Most Charts are built for scale, but that may look a little different for your use case. Make sure that your view of "scale" corresponds with the Chart.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is this the right repo for this application?
&lt;/h2&gt;

&lt;p&gt;Well, I'm betting that you started your journey with a quick Google search for a chart. Then one of four things happened:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You ended up on the official Helm repository by clicking on the first result.&lt;/li&gt;
&lt;li&gt;The top result is a blog post from the company that created the service (elastic.co, for example). It may lead you to their official Helm repo.&lt;/li&gt;
&lt;li&gt;A collection of very random GitHub projects.&lt;/li&gt;
&lt;li&gt;No real results.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I usually go back and forth between scenarios #1 and #2, and try to figure out which one answers my requirements better. Both are good starting points.&lt;/p&gt;

&lt;p&gt;If I end up in the third scenario, it usually means that I'm only going to use the projects as inspiration and try not to use them "as is".&lt;br&gt;
If there are no results at all, well, you have much work ahead of you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Readme - Hadoop as an example
&lt;/h2&gt;

&lt;p&gt;Every good Chart has an informative and useful Readme file. Even if the documentation is very short, it could give you valuable information or point you in a better direction.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This Chart is primarily intended to be used for YARN and MapReduce job execution where HDFS is just used as a means to transport small artifacts within the framework and not for a distributed filesystem. Data should be read from cloud based datastores such as Google Cloud Storage, S3 or Swift.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Taken from the stable/hadoop Chart.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;See, if you want to use it for storing large data sets, it may not be the best choice.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/helm/charts/tree/master/stable/jenkins"&gt;stable/Jenkins&lt;/a&gt; chart has an excellent example of a readme file (and overall Chart). You can see what options are available for overriding, it gives you hints for possible problems, and features you can use.&lt;/p&gt;

&lt;p&gt;Sometimes a Chart plainly states that it's deprecated and contains a link to a different one.&lt;/p&gt;

&lt;p&gt;A Readme file is there to inform and document but also for marketing and establishing trust. Can you trust this Chart?&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Activity
&lt;/h2&gt;

&lt;p&gt;While you're looking at the Readme file on GitHub, look for other clues on the page:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many watchers, stars, and forks does this repo have?&lt;/li&gt;
&lt;li&gt;Read some of the issues, and see if they are addressed.&lt;/li&gt;
&lt;li&gt;Is the project maintainer looking at pull requests?&lt;/li&gt;
&lt;li&gt;When was the last commit?&lt;/li&gt;
&lt;li&gt;Does it seem like this Chart is actively maintained?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just like choosing an external library, the above points are a great indicator of whether you should use this Chart or not.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stable Enough?
&lt;/h2&gt;

&lt;p&gt;In our current state of DevOps, everything is in Beta or Alpha, and experimental projects run in production for lack of a better option.&lt;br&gt;
Helm Charts are no different, and you may find yourself looking at "Incubator" charts instead of "Stable."&lt;br&gt;
Even under "stable", you may see an indication that the Chart is not 100% production-ready.&lt;/p&gt;

&lt;p&gt;Another favorite of mine is the fabulous PR blog post from a vendor announcing their official Chart, or even an Operator that is just around the corner. Too bad the post dates back a year, and the Chart is still in "alpha."&lt;br&gt;
Look for the safer option, but don't rule out anything just because of a label. It may be safer to bet on a new way of doing something than to use the wrong solution.&lt;br&gt;
The steps below may help you decide for yourself if this Chart is stable enough for &lt;em&gt;your&lt;/em&gt; needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Will you have to modify it?
&lt;/h2&gt;

&lt;p&gt;Charts are written to support as many use cases as possible (and, along the way, make the templates completely unreadable), but they can't solve every problem.&lt;br&gt;
Maybe you have a unique configuration that requires additional settings, or you need to add elements that are just not there in the templates.&lt;br&gt;
If you realize that it will take too many modifications to make the Chart operable, consider writing your own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overkill?
&lt;/h2&gt;

&lt;p&gt;Every company has its own tolerance for failure, its resources, and its outlook. Some charts launch a full-blown service architecture worthy of supporting Amazon on Black Friday (not really), while in your case the application will run on an on-prem Kubernetes cluster that caters to a much smaller set of users.&lt;br&gt;
Is the overhead worth it? From hardware resources to supporting the Chart itself, try to keep it simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;This is a huge subject on its own, and a while ago I wrote a short article about &lt;a href="https://pushbuildtestdeploy.com/security-code-review-for-public-kubernetes-and-helm-code/"&gt;reviewing Helm and Kubernetes code with a security mindset.&lt;/a&gt; I highly recommend you go over it. There is some overlap, but it should give you a good idea of what you need to keep in mind when evaluating the Helm Chart.&lt;/p&gt;

&lt;h2&gt;
  
  
  Take it for a spin
&lt;/h2&gt;

&lt;p&gt;This one is obvious - take the Helm Chart for a test drive. Spend a few hours trying to implement it on your Dev cluster or a local machine.&lt;br&gt;
You can learn a lot by implementing a Chart and configuring it for your use case.&lt;br&gt;
It gives you a glimpse of the effort it will take to customize the Chart and a better estimate of the project scope. In some cases, you may want to rule it out quickly and save yourself time later down the road.&lt;/p&gt;
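
&lt;p&gt;A quick way to start the test drive without touching a cluster (flags may differ between Helm versions; the values file here is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Download the chart locally so you can read the templates
helm fetch stable/jenkins --untar

# Render the manifests with your overrides, without installing anything
helm template ./jenkins -f my-values.yaml

# Or do a dry run against your dev cluster
helm install --dry-run --debug ./jenkins -f my-values.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;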

</description>
      <category>devops</category>
      <category>helm</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Hide your shameful commits with Git Squash</title>
      <dc:creator>Yuval Oren</dc:creator>
      <pubDate>Wed, 07 Aug 2019 11:39:22 +0000</pubDate>
      <link>https://forem.com/yuvalo/hide-your-shameful-commits-with-git-squash-1c7l</link>
      <guid>https://forem.com/yuvalo/hide-your-shameful-commits-with-git-squash-1c7l</guid>
      <description>&lt;p&gt;We all do it, and I'm sure that you do too. You know, the rapid commits when you are testing something, then fixing a typo, then commit again, push, test and on and on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CaDYnBZV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/cbzshtbo4g1oxgac5lw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CaDYnBZV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/cbzshtbo4g1oxgac5lw6.png" alt="Real Git log from one of my projects"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I usually do it when I'm working on deployment or build code, positive that it's just a tiny fix, one little modification and that's it. So I commit with a meaningless message, push, test, and see an error.&lt;br&gt;
Let's call it &lt;em&gt;BDD&lt;/em&gt;: Brute-Force Driven Development.&lt;/p&gt;

&lt;p&gt;Now I'm left with a branch full of, well, embarrassing commits that are heading straight to a pull request and code review.&lt;/p&gt;

&lt;p&gt;However, I do have a little trick up my sleeve that helps me push those changes like that 10x engineer on my team.&lt;/p&gt;

&lt;p&gt;Use git squash. Other than looking smarter, you can keep the history of the branch cleaner and much more readable for others.&lt;/p&gt;
&lt;h2&gt;
  
  
  Git Squash
&lt;/h2&gt;

&lt;p&gt;Squashing commits can be done in a few ways, where your end goal is to rewrite the commit history and leave just one commit instead of multiple meaningless ones. You can choose to leave the commit message history or rewrite that as well, so it's another opportunity to communicate the changes you introduce.&lt;/p&gt;
&lt;h3&gt;
  
  
  git log - before
&lt;/h3&gt;

&lt;p&gt;The best way to understand git squash is to look at the git log. In this example, I have a feature branch that has three commits.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* 510b129 (HEAD -&amp;gt; docker-rmi) missing flag
* cd62deb typo #2
* dba34d5 typo
* 46e95a5 Listing local docker images
* e30e77d (master) Starting the build process
* 9409666 Adding the build script
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  git log after
&lt;/h3&gt;

&lt;p&gt;After performing a "squash", the git log looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* 617c65c (HEAD -&amp;gt; docker-rmi) Listing all the local docker images
* 46e95a5 Listing local docker images
* e30e77d (master) Starting the build process
* 9409666 Adding the build script
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It's like we went back in time!&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fuz8ZfJb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/qqajkuvljinx0ol6sf87.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fuz8ZfJb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/qqajkuvljinx0ol6sf87.jpg" alt="Are you telling me you built a time machine?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that instead of all the "typo" commit messages and the missing flag, we now have just one commit message that includes all of the changes we made.&lt;/p&gt;

&lt;p&gt;There are a few ways to squash commits, and I'll show you two that cover different use cases. &lt;a href="https://stackoverflow.com/a/5201642/1092477"&gt;There is a wonderful thread on StackOverflow if you want to see more methods.&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Squash on the same branch
&lt;/h3&gt;

&lt;p&gt;When you want to alter the branch history by squashing the last few commits, the &lt;code&gt;git reset --soft&lt;/code&gt; command comes in handy.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;WARNING&lt;/em&gt; - Make sure you don't have uncommitted changes. Either commit or stash them before running the reset command.&lt;/p&gt;

&lt;p&gt;If you want to squash your last three commits, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;git reset &lt;span class="nt"&gt;--soft&lt;/span&gt; HEAD~3 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git commit
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Similarly, if you want to squash everything back to a specific commit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;git reset &lt;span class="nt"&gt;--soft&lt;/span&gt; 46e95a5 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git commit
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To push your changes, you have to use the &lt;code&gt;--force&lt;/code&gt; flag, as you altered the branch history (&lt;code&gt;--force-with-lease&lt;/code&gt; is a safer choice if others may have pushed to the branch in the meantime).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;git push &lt;span class="nt"&gt;--force&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Again, for more, see the link above to the StackOverflow thread.&lt;/p&gt;

&lt;h3&gt;
  
  
  Squash on merge
&lt;/h3&gt;

&lt;p&gt;If you are a confident person and just want to keep the master (or any other) branch clean, you can use the &lt;code&gt;git merge --squash&lt;/code&gt; command. Most pull request UIs offer a squash merge as well. Note that &lt;code&gt;--squash&lt;/code&gt; stages the combined changes but does not commit them, so you finish with a &lt;code&gt;git commit&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout master
git merge &lt;span class="nt"&gt;--squash&lt;/span&gt; featurebranch
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;By default, the prepared commit message includes all the original messages, but you can rewrite it as well.&lt;br&gt;
Your original branch keeps its full history, so you can still reference it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do this now
&lt;/h3&gt;

&lt;p&gt;Like everything in Git, knowing this method adds another tool to your belt. But then again, there is always that "fear" of running commands for the first time on your existing codebase.&lt;br&gt;
So go ahead, create a clean git repo, and try these methods out in a safe environment. Your hand will be less likely to shake once you've seen how it works on your own.&lt;/p&gt;
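&lt;p&gt;Here is a minimal sketch of such a sandbox (the path, file, and commit messages are made up for illustration): it recreates a messy feature branch like the one in the log above and squashes the top three commits with &lt;code&gt;git reset --soft&lt;/code&gt;:&lt;br&gt;&lt;/p&gt;

```shell
# Throwaway repo for practicing a squash (path is hypothetical)
rm -rf /tmp/squash-playground
git init /tmp/squash-playground
cd /tmp/squash-playground
git config user.email "you@example.com"  # local identity so commits work anywhere
git config user.name "Squash Playground"

echo "docker build ." > build.sh
git add build.sh
git commit -m "Adding the build script"

# A feature branch with one real commit and three messy fixups
git checkout -b docker-rmi
echo "docker images" >> build.sh
git commit -am "Listing local docker images"
echo "docker rmi" >> build.sh
git commit -am "typo"
echo "# cleanup step" >> build.sh
git commit -am "typo #2"
echo "docker rmi -f" >> build.sh
git commit -am "missing flag"

# Squash the three fixups into a single, meaningful commit
git reset --soft HEAD~3
git commit -m "Listing all the local docker images"
git log --oneline
```

Since it's a throwaway repo, you can also experiment with `git push --force` against a second local clone without risking anyone's work.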

&lt;p&gt;Want to learn more? &lt;br&gt;
&lt;a href="https://pushbuildtestdeploy.com/signup"&gt;Head over to my signup page&lt;/a&gt; and never miss a post.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>git</category>
    </item>
    <item>
      <title>Making Sense of a Chaotic AWS Account</title>
      <dc:creator>Yuval Oren</dc:creator>
      <pubDate>Thu, 01 Aug 2019 08:13:55 +0000</pubDate>
      <link>https://forem.com/yuvalo/making-sense-of-a-chaotic-aws-account-i4h</link>
      <guid>https://forem.com/yuvalo/making-sense-of-a-chaotic-aws-account-i4h</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fe8nn4i9vrw5361tpc8xx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fe8nn4i9vrw5361tpc8xx.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've all been there: you're given access to an AWS account at your new job or project, and now you're expected to deliver results.&lt;br&gt;
Yeah, it's always overwhelming and can trigger imposter syndrome even in the most experienced DevOps engineers, especially if the account is particularly messy.&lt;br&gt;
Oh, and they're always messy.&lt;/p&gt;

&lt;p&gt;How do you make sense of what is happening in this AWS account? How can you step into your new role with confidence and not waste time fumbling because you don't know how things work?&lt;/p&gt;

&lt;p&gt;I've compiled a list of steps to help you find that confidence and shorten your orientation period. Who knows, you may even become the go-to person for all the AWS account questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Follow the money
&lt;/h2&gt;

&lt;p&gt;An excellent trick I found to see which services the company uses, or which projects they abandoned, is to look at the billing dashboard.&lt;/p&gt;

&lt;p&gt;The billing dashboard shows you how the organization is spending its money and is an easy way to see which regions and services they are using.&lt;br&gt;
It can save you the legwork of going service by service, dashboard by dashboard, to figure out what is going on.&lt;br&gt;
A glance at the dashboard can even hint at whether the company favors managed services or tends to run its own.&lt;/p&gt;

&lt;p&gt;Who knows, something may immediately jump out and tell you they are overspending on resources. Use it &lt;strong&gt;later&lt;/strong&gt; to suggest improvements (make sure you know what you are doing before rushing in headfirst).&lt;/p&gt;
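&lt;p&gt;If you prefer the terminal, the same breakdown is available through the Cost Explorer API. A sketch, assuming the AWS CLI is configured and Cost Explorer is enabled for the account (the dates are placeholders):&lt;br&gt;&lt;/p&gt;

```shell
# Monthly spend per AWS service (dates are placeholders; adjust to your billing period)
aws ce get-cost-and-usage \
  --time-period Start=2019-06-01,End=2019-07-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE
```

Swapping `Key=SERVICE` for `Key=REGION` gives the same view per region, which pairs nicely with the regions you'll map out next.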

&lt;h2&gt;
  
  
  The user story
&lt;/h2&gt;

&lt;p&gt;Eventually, every system is there to serve a user, and you can use that to understand both what the company does and how they use their AWS account.&lt;/p&gt;

&lt;p&gt;Try to sit down and follow the user story. Ask yourself how users interact with the system and then how the data flows through it.&lt;br&gt;
For example, if most of the clients interact with a website, follow the HTTP request: think about authentication, load balancers, the backend servers, message queues, and anything else you encounter.&lt;br&gt;
Every step reveals another system in the underlying architecture.&lt;/p&gt;

&lt;p&gt;Do you remember those "choose your own adventure" books? You had to keep a small map on the side and update it at every turn. (I'd pick a monster any day over finding that misconfigured Kafka cluster lurking around the corner.)&lt;/p&gt;

&lt;h2&gt;
  
  
  The lay of the land - VPC Networking
&lt;/h2&gt;

&lt;p&gt;It may be my networking background, but one of the first things I do in a new environment, before I get confident enough to do any actual work, is get a clear picture of the network topology.&lt;br&gt;
It's something you can start doing straight away by taking small steps, which I find helpful when overwhelmed, especially when I don't know where to start. It's like therapy.&lt;/p&gt;

&lt;p&gt;Ask your new peers if they already have a network topology diagram to get you started, and I promise that even if they do have something, it's outdated and filled with gaps.&lt;br&gt;
You can use pen and paper, a whiteboard, or one of the network diagram tools out there.&lt;br&gt;
As you go along, keep asking questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why did they decide on doing things a certain way?&lt;/li&gt;
&lt;li&gt;Is this resource still being used?&lt;/li&gt;
&lt;li&gt;How are things connected?&lt;/li&gt;
&lt;li&gt;Which security layers are present?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every question closes another gap in your understanding.&lt;/p&gt;
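&lt;p&gt;Much of that map can be pulled straight from the AWS CLI. A sketch, assuming read-only credentials (add &lt;code&gt;--region&lt;/code&gt; for each region you spotted in the billing dashboard):&lt;br&gt;&lt;/p&gt;

```shell
# VPCs and their address space
aws ec2 describe-vpcs \
  --query 'Vpcs[].[VpcId,CidrBlock,IsDefault]' --output table

# Subnets, grouped by VPC and availability zone
aws ec2 describe-subnets \
  --query 'Subnets[].[VpcId,SubnetId,AvailabilityZone,CidrBlock]' --output table

# Links between networks: peering connections and VPN gateways
aws ec2 describe-vpc-peering-connections \
  --query 'VpcPeeringConnections[].[VpcPeeringConnectionId,Status.Code]' --output table
aws ec2 describe-vpn-gateways \
  --query 'VpnGateways[].[VpnGatewayId,State]' --output table
```

Each table becomes a box or an edge on your diagram, and the gaps between what the CLI shows and what people told you are exactly the questions worth asking.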

&lt;h2&gt;
  
  
  Existing documentation
&lt;/h2&gt;

&lt;p&gt;This one is a bit obvious, but try to see if the company has any documentation of the architecture.&lt;br&gt;
Most companies will, at the least, have a Confluence account or a similar documentation solution.&lt;br&gt;
You may have to dig a little deeper or use what they have as a reference while researching, but you may find some answers in there.&lt;/p&gt;

&lt;p&gt;Remember that documentation doesn't have to be in the form of written documents. Configuration as code is more common today, and you may find Terraform or another provisioning tool configuration. Even if the code is messy, you pretty much just struck gold. However, don't solely rely on it. Keep building your mental map of the architecture.&lt;/p&gt;

&lt;p&gt;Bonus points if you publish your own documents and make the next person's onboarding easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Just talk to people
&lt;/h2&gt;

&lt;p&gt;Another obvious step, but there are a few things you should keep in mind when interviewing colleagues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Be friendly and non-judgmental. Sometimes we look at a system and think, what the? Why did they do things the way they did? Well, you are new to the project and don't know the full story. Maybe they did things a certain way because newer solutions didn't exist when they started. They could have had problems that you never encountered. Just keep an open mind.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remember that everyone has an agenda. When you start asking people about the system, each person tells a different story, emphasizing different things, blaming others, and telling you about the radical changes they want to introduce.  Try to stay objective and don't get sucked into their politics and worldview. They may be right, but make sure you have the full picture before making up your mind.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you are replacing someone who is leaving soon (it happens quite often: you are brought in just a day before the person who knows the system like the back of their hand leaves for another job), you may be expected to sit down and cover everything in one day, and then you're on your own. Even if they care dearly about the project and fully cooperate, the time you have is not enough. So instead of trying to cover everything, try to learn the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Passed-on knowledge - undocumented information that people "remember": that cron job that updates the whole system every night, or a renegade server that halts every once in a while and requires a good old-fashioned reboot every other week.&lt;/li&gt;
&lt;li&gt;How they overcome common issues.&lt;/li&gt;
&lt;li&gt;The pressing matters, and what your next steps should be.&lt;/li&gt;
&lt;li&gt;A list of people you should contact regarding different areas of expertise.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Try to be respectful of their time, and remember that you may need to keep this relationship going for the next few years.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Tools
&lt;/h2&gt;

&lt;p&gt;You may be tempted to dive headfirst into tools that do the work automatically at the push of a button. There are benefits to using tools, but remember that no tool is perfect, and you may be missing valuable data points that live in people's memory rather than in the API.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/duo-labs/cloudmapper" rel="noopener noreferrer"&gt;CloudMapper&lt;/a&gt; - Helps you create network diagrams and inventory reports for your AWS account.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/cycloidio/terracognita" rel="noopener noreferrer"&gt;TerraCognita&lt;/a&gt; - Tries to do the same, but outputs Terraform configuration files.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/dtan4/terraforming" rel="noopener noreferrer"&gt;Terraforming&lt;/a&gt; - Another tool to export your configuration to Terraform.&lt;/li&gt;
&lt;li&gt;And of course, &lt;a href="https://cloudcraft.co/" rel="noopener noreferrer"&gt;CloudCraft&lt;/a&gt;, which creates beautiful diagrams by connecting to your AWS account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, use these tools as what they are, tools, to help you paint the picture, but don't rely on them blindly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to start?
&lt;/h2&gt;

&lt;p&gt;Grab a notebook and a pen, log in to your AWS console, and start taking notes.&lt;br&gt;
I find that the billing dashboard and following the user story are good starting points, and they will help you come better prepared to meetings with different stakeholders. Just figure out what works best for you and try to enjoy the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Want to learn more?
&lt;/h2&gt;

&lt;p&gt;Head over to &lt;a href="https://pushbuildtestdeploy.com/" rel="noopener noreferrer"&gt;Push Build Test Deploy&lt;/a&gt; and continue reading.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>You Owe Them</title>
      <dc:creator>Yuval Oren</dc:creator>
      <pubDate>Tue, 13 Nov 2018 07:40:04 +0000</pubDate>
      <link>https://forem.com/yuvalo/you-owe-them-3ah9</link>
      <guid>https://forem.com/yuvalo/you-owe-them-3ah9</guid>
      <description>&lt;p&gt;Last week I attended a conference and listened to a talk by &lt;a href="https://twitter.com/pl4n3th" rel="noopener noreferrer"&gt;Aleth Gueguen&lt;/a&gt; about GDPR. A recurring theme on the talk was maintaining a reasonable level of security and keeping the end user’s data private. Most GDPR talks tend to go over the list of things you need to comply with, and how to do everything by the book, but this was a bit different. Aleth showed more concern for the users and less for the risks for the organization.&lt;/p&gt;

&lt;p&gt;Later that week, on a long layover, I had a chance to sit down with Aleth (over a Vienna Schnitzel). One of the things we talked about was the spirit of GDPR compliance.&lt;/p&gt;

&lt;p&gt;The spirit of the regulation is to keep your user’s data private and safe. It is not about checking boxes and sending out annoying emails about policy changes.&lt;/p&gt;

&lt;p&gt;That got me thinking about security practices in general, especially after a few long client meetings that seemed to be missing the point.&lt;/p&gt;

&lt;p&gt;We tend to focus on the tools, processes, best practices, and what we do daily, but our motives are “selfish” and aimed at the company’s interests. The actual people whose data we hold usually come in second, if at all. And even when they do, it’s because of legal liability and not so much out of concern for the users.&lt;/p&gt;

&lt;p&gt;The same goes not just for security experts, but for developers and DevOps engineers. You should not be implementing security just because you have to, or because a big client made you go through a due diligence process.&lt;/p&gt;

&lt;p&gt;When you are writing new code, be mindful of the end users, the people whose data you will be processing and serving. It’s not about the shiny new UI, or a neat feature, or a fully automatic deployment that runs 100 times a day in production. In most cases, your work is not as important to the users, or your client’s users, as not having their trust breached. By you.&lt;/p&gt;

&lt;p&gt;I believe that in the future we will see more litigation related to data breaches, and more directors will, and should, be found accountable. In most cases, I can’t honestly say that tech leaders are doing even the minimum required, let alone using the now highly available security measures to protect their users.&lt;/p&gt;

&lt;p&gt;So the next time you read new compliance guidelines, try to understand the spirit of them, not just how to implement them. It is a way to peek into the future (if only that were the case for new JS frameworks).&lt;/p&gt;

&lt;h3&gt;
  
  
  What can you do about it?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Keep the users in mind&lt;/li&gt;
&lt;li&gt;When retaining information think — Do I really need to keep it?&lt;/li&gt;
&lt;li&gt;Try to map out where you are keeping personal information.&lt;/li&gt;
&lt;li&gt;Think about some non-obvious data that could be used for exposing your customers.&lt;/li&gt;
&lt;li&gt;When possible — use encryption.&lt;/li&gt;
&lt;li&gt;When there is an opportunity to improve security — do so, don’t brush it off for later.&lt;/li&gt;
&lt;li&gt;Don’t settle for “this is the way it’s always been, why change now?”&lt;/li&gt;
&lt;li&gt;And my all-time favorite — “We don’t have time for security, we need to release it asap”. Get it out of your vocabulary, unless you have done a proper risk assessment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Want to learn more about DevSecOps and Security? &lt;a href="https://pinesec.com/devsecops-thursday" rel="noopener noreferrer"&gt;Join the DevSecOps Thursday list&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://pinesec.com" rel="noopener noreferrer"&gt;pinesec.com&lt;/a&gt; on November 12, 2018.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>gdpr</category>
      <category>security</category>
    </item>
  </channel>
</rss>
