<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: shimib</title>
    <description>The latest articles on Forem by shimib (@shimib).</description>
    <link>https://forem.com/shimib</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F285635%2Fc378766f-5392-4b17-9dd5-89d0237d1272.jpeg</url>
      <title>Forem: shimib</title>
      <link>https://forem.com/shimib</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shimib"/>
    <language>en</language>
    <item>
      <title>Helm V3, Latest &amp; Greatest of Kubernetes</title>
      <dc:creator>shimib</dc:creator>
      <pubDate>Wed, 19 Aug 2020 17:53:39 +0000</pubDate>
      <link>https://forem.com/jfrog/helm-v3-latest-greatest-of-kubernetes-1eff</link>
      <guid>https://forem.com/jfrog/helm-v3-latest-greatest-of-kubernetes-1eff</guid>
      <description>&lt;p&gt;Helm is becoming the de facto standard for managing Kubernetes deployments. &lt;br&gt;
Although not the only tool in the landscape, it’s by far more popular than the alternatives.&lt;br&gt;
The reason for using Helm is quite obvious: managing your K8S deployments by hand requires a lot of YAML manipulation which usually leads to high maintenance and duplication.&lt;/p&gt;

&lt;p&gt;Recently, Helm v3 was released and I wanted to describe the changes and new features in detail.&lt;/p&gt;

&lt;h1&gt;Removal of Tiller&lt;/h1&gt;

&lt;p&gt;If you have worked with previous versions of Helm, you know that one of the mandatory installation components was Tiller, the Helm server that had to be installed in your K8s cluster.&lt;br&gt;
You might have asked: why is it needed? Can’t all the operations be performed from the client side?&lt;br&gt;
Well, when Helm v2 was released in 2016, some of the K8s features that we are now used to (e.g., Custom Resource Definitions (CRDs)) weren’t available yet.&lt;br&gt;
These days, there is really no need for Tiller.&lt;br&gt;
In version 3, Tiller is no more ☺&lt;/p&gt;

&lt;p&gt;With the removal of Tiller, there is no longer a centralized namespace where all release information is stored (previously, the namespace where Tiller was installed). This information is now stored in the namespace of the release itself.&lt;br&gt;
Your releases now live in their own namespace (and yes, you have to create that namespace yourself).&lt;/p&gt;
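&lt;p&gt;As a quick sketch (the release, chart and namespace names here are hypothetical), installing into a dedicated namespace now looks like this:&lt;/p&gt;

```shell
# Helm v3 no longer creates the namespace for you:
kubectl create namespace my-app
helm install my-release ./my-chart --namespace my-app

# Release metadata is stored as Secrets in that same namespace:
kubectl get secrets --namespace my-app --selector owner=helm
```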

&lt;p&gt;Security is also now handled where it should be, i.e., by K8s RBAC.&lt;/p&gt;

&lt;h1&gt;XDG-based Directory Structure&lt;/h1&gt;

&lt;p&gt;Starting with Helm v3, the directory structure and configuration are based on the XDG Base Directory Specification.&lt;br&gt;
For those not familiar with it, the XDG specification defines standard environment variables (such as $XDG_CONFIG_HOME, $XDG_CACHE_HOME and $XDG_DATA_HOME) for locating configuration, cache and data directories.&lt;/p&gt;

&lt;p&gt;In version 3, $HELM_HOME is no more ☺&lt;/p&gt;

&lt;p&gt;Also, the “helm init” and “helm home” commands no longer exist.&lt;/p&gt;
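&lt;p&gt;To see where everything ended up on your machine, Helm v3 provides the “helm env” command; the defaults below assume a Linux machine with the XDG variables unset:&lt;/p&gt;

```shell
# Resolved locations (defaults on Linux when XDG variables are unset):
#   configuration: ~/.config/helm
#   cache:         ~/.cache/helm
#   data:          ~/.local/share/helm
helm env
```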

&lt;h1&gt;Library Charts&lt;/h1&gt;

&lt;p&gt;Starting with v3, a chart can have a type (a metadata property in Chart.yaml) of either “application” or “library” (“application” by default).&lt;br&gt;
Library charts hold common, reusable chart definitions and are intended to be used by a containing application chart; they cannot be installed on their own.&lt;/p&gt;
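&lt;p&gt;For illustration, a minimal Chart.yaml for a hypothetical library chart might look like this (note the type field, and the dependencies now declared directly in Chart.yaml):&lt;/p&gt;

```yaml
apiVersion: v2
name: common-helpers      # hypothetical library chart
description: Reusable template helpers
type: library             # "application" is the default
version: 0.1.0
dependencies: []          # since v3, declared here instead of requirements.yaml
```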

&lt;p&gt;Regarding chart dependencies, the separate requirements.yaml file is gone; dependencies are now declared in Chart.yaml itself.&lt;/p&gt;

&lt;h1&gt;A Smooth Migration?&lt;/h1&gt;

&lt;p&gt;While experimenting with Helm v3, I ran into some issues when I had a chart deployed with v2 and tried to delete and replace it using the v3 client.&lt;br&gt;
I got some weird errors when trying to reinstall the chart (e.g., “already exists”), even though “helm ls” didn’t display anything about my release.&lt;br&gt;
This happened because the two versions store release information in different locations (v2 keeps it in Tiller’s namespace, while v3 keeps it in the release’s own namespace).&lt;br&gt;
I had to go back to the Helm v2 client to purge my release.&lt;br&gt;
So, keep that in mind and follow the proper &lt;a href="https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/"&gt;migration guides&lt;/a&gt;.&lt;/p&gt;
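&lt;p&gt;For reference, the official “2to3” plugin handles this migration; a rough sketch (the release name is hypothetical):&lt;/p&gt;

```shell
# Install the official migration plugin:
helm plugin install https://github.com/helm/helm-2to3

helm 2to3 move config          # migrate repositories and plugins to v3
helm 2to3 convert my-release   # convert a v2 release to v3
helm 2to3 cleanup              # finally, remove Tiller and the v2 data
```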

&lt;h1&gt;ChartCenter&lt;/h1&gt;

&lt;p&gt;Alongside the release of Helm v3, I was also excited to hear the announcement of ChartCenter. &lt;br&gt;
ChartCenter (&lt;a href="https://chartcenter.io/"&gt;https://chartcenter.io/&lt;/a&gt;) provides you with all the information you need about the charts you depend on, including security vulnerability scanning information powered by JFrog Xray.&lt;br&gt;
In the site’s UI you can dig deep into the subcomponents of the included containers and see the vulnerable components, down to the application’s dependencies.&lt;br&gt;
Not only do I now have a “go-to” place for fetching my infrastructure charts, I can also assure myself that my dependencies have no critical security vulnerabilities.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Best Practices with Container Registry</title>
      <dc:creator>shimib</dc:creator>
      <pubDate>Wed, 04 Dec 2019 21:53:15 +0000</pubDate>
      <link>https://forem.com/jfrog/best-practices-with-container-registry-48d4</link>
      <guid>https://forem.com/jfrog/best-practices-with-container-registry-48d4</guid>
      <description>&lt;p&gt;With the announcement of JFrog Container Registry, I wanted to share several thoughts about some patterns and best practices regarding container registries.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Use multiple repositories&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;In many organizations there are clear and well-defined policies regarding permissions. In some companies, members of one team can’t access the images of other teams (in some cases, they shouldn’t even be aware that those projects exist).&lt;/p&gt;

&lt;p&gt;Although you could probably define fine-grained permissions based on image names/paths, an easier and more maintainable approach is to define permissions per repository!&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Use Build Promotions&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;The main idea here is to ‘promote’ your builds (images) to more mature repositories as your artifacts graduate through the CI/CD pipeline.&lt;br&gt;
This means having different repositories for the different CI/CD phases (e.g., dev, qa, pre-prod and prod).&lt;br&gt;
Your runtime (e.g., a K8s cluster) should then pull images only from a production-level repository.&lt;br&gt;
This best practice goes hand in hand with the previous one of having multiple repositories.&lt;/p&gt;
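&lt;p&gt;With JFrog Artifactory, for example, a promotion can be sketched with the JFrog CLI roughly like this (the repository and image names are hypothetical):&lt;/p&gt;

```shell
# Promote the image from the dev repo to the prod repo,
# keeping a copy in the source repo:
jfrog rt docker-promote my-app docker-dev-local docker-prod-local --copy
```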

&lt;p&gt;&lt;b&gt;Always use Virtual Repositories&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;A virtual repository is an aggregation of one or more repositories. It allows you to manage the group as a whole while providing your clients (developers, build jobs and consuming sites) a single, stable URI.&lt;br&gt;
By always exposing your repositories through virtual ones (even when there is only a single underlying repository), you essentially guarantee that maintenance will not require changes on the consuming side (jobs, developers, etc.).&lt;/p&gt;
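&lt;p&gt;As a small sketch (the registry and repository names are hypothetical), clients only ever see the virtual repository:&lt;/p&gt;

```shell
# Developers, build jobs and clusters all pull through the virtual repo:
docker pull registry.example.com/docker-virtual/my-app:1.0.0

# The local/remote repositories aggregated behind docker-virtual can be
# reorganized later without touching any consumer.
```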

&lt;p&gt;&lt;b&gt;Always Publish Build-info&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;If your container registry supports it, always publish build information along with the image you build.&lt;br&gt;
For example, dependency information about what actually resides in the image itself.&lt;br&gt;
Let’s discuss an example:&lt;br&gt;
Suppose you are building a Java web application (WAR file) and put that in a Docker image.&lt;br&gt;
From Docker’s point of view, your image doesn’t have a direct dependency on the Java application; Docker only knows about layers.&lt;br&gt;
Having in your build information the dependencies themselves (i.e., the Java application and its own dependencies) will greatly help with traceability, scanning and management of the process.&lt;/p&gt;
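&lt;p&gt;With the JFrog CLI, for example, publishing build-info alongside the image can be sketched as follows (the registry, repository, build name and build number are hypothetical):&lt;/p&gt;

```shell
# Push the image and associate it with a named build:
jfrog rt docker-push registry.example.com/docker-dev-local/my-app:1.0.0 docker-dev-local --build-name=my-app-build --build-number=42

# Publish the collected build-info (layers plus captured dependencies):
jfrog rt build-publish my-app-build 42
```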

&lt;p&gt;&lt;b&gt;High Availability&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Well, I can’t finish a discussion of container registry best practices without mentioning HA.&lt;br&gt;
You should be aware that your K8s cluster becomes quite unusable if it can’t access the container registry.&lt;br&gt;
It is therefore extremely important to connect your runtime container environments to a stable, enterprise-grade Docker registry that provides High Availability.&lt;/p&gt;

</description>
      <category>repository</category>
      <category>docker</category>
      <category>containers</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
