<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: JFrog</title>
    <description>The latest articles on Forem by JFrog (@jfrog).</description>
    <link>https://forem.com/jfrog</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F1129%2Fa2621fb0-c128-4891-9807-759bd3dd2955.png</url>
      <title>Forem: JFrog</title>
      <link>https://forem.com/jfrog</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jfrog"/>
    <language>en</language>
    <item>
      <title>Implementing the JFrog Xray “Summary View” in Slack</title>
      <dc:creator>Alex Hung</dc:creator>
      <pubDate>Mon, 15 Nov 2021 17:06:41 +0000</pubDate>
      <link>https://forem.com/jfrog/implementing-the-jfrog-xray-summary-view-in-slack-o3j</link>
      <guid>https://forem.com/jfrog/implementing-the-jfrog-xray-summary-view-in-slack-o3j</guid>
      <description>&lt;p&gt;Have you ever wanted to get your engineering teams real-time information about security issues happening during software development? As you may know, JFrog Xray already allows you to scan the entire composition of your binaries and enables you to send alerts to your teams using webhooks, but now with our new Slack integration we make it quite easy for entire channels to be updated in real time. The integration further allows teams to discuss new CVE's with developers on other teams as well. In order to make the JFrog Xray notifications super digestible in Slack, we built a whole new way to view your vulnerabilities and license compliance issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The integration with Slack uses JFrog Xray’s security and license compliance policies to trigger webhook events whenever a new violation is detected. Once configured, Xray sends a webhook event to our Slack integration, which then transforms each issue in the event payload into interactive UI cards. In this blog, we’re going to walk through how we implemented one specific feature: transforming the payload to give the end user better usability with our “Summary View” card.&lt;/p&gt;
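&lt;p&gt;To make this concrete, here is a minimal sketch (not the actual integration code) of a handler that receives a parsed Xray webhook payload, shaped like the sample shown later in this post, and pulls out the fields a notification needs:&lt;/p&gt;

```javascript
// Hypothetical handler sketch: given a parsed Xray webhook payload
// (field names follow the sample payload later in this post), extract
// what a Slack notification needs.
function handleXrayEvent(payload) {
  const issues = payload.issues || [];
  return {
    watch: payload.watch_name,
    policy: payload.policy_name,
    topSeverity: payload.top_severity,
    issueCount: issues.length,
  };
}
```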

&lt;h2&gt;
  
  
  Summary View - Why We Transform the Payload
&lt;/h2&gt;

&lt;p&gt;When JFrog Xray scans your binaries and components, it uses a “watch” to tell it which repositories to scan artifacts from. The webhook fires with a payload of vulnerability data based on how you set up your “policy” and its rules in Xray, and that payload includes every single vulnerability that was introduced. When building a notifications app, we realized this can be daunting. Imagine uploading a new artifact and discovering it has hundreds of vulnerabilities: hundreds of notifications in a Slack channel produce a lot of noise. The resulting information overload can leave users so overwhelmed by the volume of messages that they mute or ignore the channel completely, which defeats the purpose of the notifications.&lt;/p&gt;

&lt;p&gt;That is why we built what we call a “Summary view” of all the issues that come through the Xray payload. Christian Bongiorno (a senior software developer on the JFrog Partner team) created a transformed payload and we want to show you how it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Xray Notification Watches and Policies
&lt;/h2&gt;

&lt;p&gt;Before Slack can receive messages from Xray, an admin needs to assign your repositories inside Artifactory to a &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Configuring+Xray+Watches"&gt;watch&lt;/a&gt;, which marks those repositories as monitored. You must also decide which &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Creating+Xray+Policies+and+Rules"&gt;policy&lt;/a&gt; and rules should kick off a notification to Slack. These rules can be based on the severity level you want to be notified about (low, medium, high) or on specific license compliance issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T9RFv7rT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i966qfklzuhnyvo8tkmd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T9RFv7rT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i966qfklzuhnyvo8tkmd.jpeg" alt="Xray Process" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have &lt;a href="https://www.youtube.com/watch?v=88hwwMJsS58"&gt;set up policies and watches in Xray&lt;/a&gt;, you can send notifications to the Slack channels where your teams monitor these events.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Create Summary View Notifications in Slack
&lt;/h2&gt;

&lt;p&gt;To create a notification, in the Slack app Home tab, click on the Create Notification button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rvfr2vAT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/md3by5ugl0wrezigin5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rvfr2vAT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/md3by5ugl0wrezigin5u.png" alt="Slack App Home tab" width="880" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select Xray Violation from the dropdown menu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--arzaPKE2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwzsyr5a3xcx2slbji9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--arzaPKE2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwzsyr5a3xcx2slbji9i.png" alt="Slack Create Notification modal - 1" width="512" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Watch text box, type the name of the Xray watch you want to use for this notification. The box autocompletes as you type and should show all the Xray watches on your JFrog platform.&lt;/p&gt;

&lt;p&gt;Next, select the channel you want the notification sent to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hWyRBLlT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pjgeb84z0a5cnk5lj50n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hWyRBLlT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pjgeb84z0a5cnk5lj50n.png" alt="Slack Create Notification modal - 2" width="512" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next screen will ask you if you want to get notifications by individual CVE or by Summary View.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Notifications by Component - Summary
&lt;/h2&gt;

&lt;p&gt;By default, the format type View by Component (Summary) is selected for you. This format groups all the issues for an artifact into categories based on severity (High, Medium, Low, Unknown), and each category includes up to 5 violations. To see the full list of issues, use the Open in platform button, which opens Xray in your browser and takes you to the full list of Xray issues. This view helps your teams understand the extent to which a specific component is affected by vulnerabilities.&lt;/p&gt;

&lt;p&gt;Here is an example of the summary view message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--crL9jPXR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s98d8nyuppwuq1h9bqix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--crL9jPXR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s98d8nyuppwuq1h9bqix.png" alt="Xray summary message" width="577" height="688"&gt;&lt;/a&gt;&lt;/p&gt;
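&lt;p&gt;The grouping behavior described above can be sketched as follows (an assumed illustration, not the integration’s actual code): bucket issues by severity and keep at most five per category for the card.&lt;/p&gt;

```javascript
// Sketch (assumed, not the actual integration code) of the Summary View
// grouping: bucket issues by severity, capping each category at 5.
const MAX_PER_CATEGORY = 5;
const SEVERITIES = ["High", "Medium", "Low", "Unknown"];

function summarize(issues) {
  const groups = Object.fromEntries(SEVERITIES.map((s) => [s, []]));
  for (const issue of issues) {
    const bucket = groups[issue.severity] || groups.Unknown;
    if (bucket.length >= MAX_PER_CATEGORY) continue; // cap each category
    bucket.push(issue);
  }
  return groups;
}
```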

&lt;h2&gt;
  
  
  Getting Notifications by Issue
&lt;/h2&gt;

&lt;p&gt;Alternatively, you can get a notification for each individual issue. This view is useful when you already have clean artifacts in production and just want to be notified whenever a new vulnerability pops up.&lt;/p&gt;

&lt;p&gt;Here is an example of an individual security violation message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hIFe9767--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afrhnfltn9c8pz90w6ig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hIFe9767--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/afrhnfltn9c8pz90w6ig.png" alt="Xray issue message" width="880" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To avoid flooding the channel, our integration automatically switches to Summary view mode if the webhook event contains more than 40 individual issues. We found that users can digest the summary view much faster when there are more than 40 issues. &lt;/p&gt;

&lt;h2&gt;
  
  
  How We Built the Transformation Code
&lt;/h2&gt;

&lt;p&gt;As we started making this integration available, we found that many current JFrog Xray customers wanted to know how we transform the Xray event data into a “Summary view” card. We’ve made the template code available in the rest of this post.&lt;/p&gt;

&lt;p&gt;First, this is what the default Xray payload looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"created"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2021-05-28T19:37:50.075822379Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"top_severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"watch_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"slack_watch_test"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"policy_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"slack"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"issues"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"security"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"JFrog"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"created"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2021-04-08T04:02:38.999Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"summary"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A flaw was found in the Nosy driver in the Linux kernel. This issue allows a device to be inserted twice into a doubly-linked list, leading to a use-after-free when one of these devices is removed. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability. Versions before kernel 5.12-rc6 are affected"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A flaw was found in the Nosy driver in the Linux kernel. This issue allows a device to be inserted twice into a doubly-linked list, leading to a use-after-free when one of these devices is removed. The highest threat from this vulnerability is to confidentiality, integrity, as well as system availability. Versions before kernel 5.12-rc6 are affected"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"impacted_artifacts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"manifest.json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"display_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"artifactory-fluentd:1.11.2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default/integrations/artifactory-fluentd/1.11.2/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"pkg_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Docker"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"sha256"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10fd87ba58132673ac65ee8c11a01510509f93846bdb5f20300ba5981aa75eb0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"sha1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"depth"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"parent_sha"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10fd87ba58132673ac65ee8c11a01510509f93846bdb5f20300ba5981aa75eb0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"infected_files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"linux-libc-dev:4.19.132-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"sha256"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"391e2df82c21b15e12cd8207d3257baf60b10c824c400e94bb1bd6128c131d55"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"depth"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"parent_sha"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"c5b1980eb2a26b21e083b2930ec5cae78f473a19d8fc6affbe6b71792fbf6ae2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"display_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"debian:buster:linux-libc-dev:4.19.132-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"pkg_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Debian"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"cve"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CVE-2021-3483"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, when an Xray webhook event request arrives in our Slack integration, our transformer code extracts only the relevant information from the payload and then sorts the issues by severity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;normalize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;violation&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;violation&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;issues&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;issue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;issue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;impacted_artifacts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;artifact&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;artifact&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;infected_files&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;watch_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;violation&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;watch_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;issue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;issue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;pkg_type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;artifact&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pkg_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;issue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;artifact&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;default/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;artifact&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;issue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;issue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cve&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;issue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;})))).&lt;/span&gt;&lt;span class="nx"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;normalizedViolations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;normalize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;violation&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;reports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;normalizedViolations&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;SEVERITY_MAPPING&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;SEVERITY_MAPPING&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;severity&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
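&lt;p&gt;The &lt;code&gt;SEVERITY_MAPPING&lt;/code&gt; constant is referenced but not shown in the snippet above; a plausible definition (an assumption on our part, not the actual constant) that sorts High first and Unknown last would be:&lt;/p&gt;

```javascript
// Assumed definition of SEVERITY_MAPPING: lower numbers sort first,
// so High-severity issues lead the report.
const SEVERITY_MAPPING = { High: 0, Medium: 1, Low: 2, Unknown: 3 };

const reports = [{ severity: "Low" }, { severity: "High" }, { severity: "Medium" }];
reports.sort((a, b) => SEVERITY_MAPPING[a.severity] - SEVERITY_MAPPING[b.severity]);
// reports is now ordered High, Medium, Low
```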



&lt;p&gt;It then checks whether the number of issues exceeds the 40-issue limit and, if so, switches the format to the summary view.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;messageFormat&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;ISSUE_MESSAGE_FORMAT&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;reports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;SLACK_APP_MAX_ISSUES_PER_ENTRY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;messageFormat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;SUMMARY_MESSAGE_FORMAT&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;forcedSummaryFormat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, it transforms the data into a Slack UI card using the format mapper module that corresponds to the chosen format type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mapper&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;lookupFormatMapper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;messageFormat&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;mapper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;reports&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;jpdOrigin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;policyOrWatchName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;forcedSummaryFormat&lt;/span&gt;&lt;span class="p"&gt;)?.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;format&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
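&lt;p&gt;The mapper modules themselves are not shown in this post; a minimal sketch of how &lt;code&gt;lookupFormatMapper&lt;/code&gt; might work (names and shapes here are our assumptions) is a plain table from format name to a function that turns normalized reports into card-shaped objects:&lt;/p&gt;

```javascript
// Hypothetical sketch of lookupFormatMapper: a table from format name
// to a mapper function. The card shapes below are illustrative only.
const FORMAT_MAPPERS = {
  summary: (reports) => [{ type: "summary", issueCount: reports.length }],
  issue: (reports) => reports.map((r) => ({ type: "issue", id: r.id })),
};

const lookupFormatMapper = (format) => FORMAT_MAPPERS[format];
```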



&lt;p&gt;In the Slack integration, we use the &lt;a href="https://api.slack.com/web"&gt;Slack Web API&lt;/a&gt; to send the message to the target channel. We take this transformer code (the examples above) and make its output available to the Slack platform. That is how we turn ordinary Xray webhook events into a “Summary View” card.&lt;/p&gt;

&lt;p&gt;Our next goal is to make the summary view adjustable, giving users more options and ways to build the summary. For now, we’ve made the code available on GitHub so you can see how to create a custom summary from the payload that comes from JFrog Xray webhooks: &lt;a href="https://github.com/jfrog/partner-integrations/tree/main/Slack/Sample"&gt;https://github.com/jfrog/partner-integrations/tree/main/Slack/Sample&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the steps in the &lt;a href="https://github.com/jfrog/partner-integrations/blob/main/Slack/Sample/README.md"&gt;README.md&lt;/a&gt; to try this out for yourself!&lt;/p&gt;

&lt;p&gt;To learn more about the JFrog App for Slack, visit us: &lt;a href="https://jfrog.com/integration/slack/"&gt;https://jfrog.com/integration/slack/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>javascript</category>
      <category>jfrog</category>
      <category>xray</category>
    </item>
    <item>
      <title>Time to make some order with GoCenter </title>
      <dc:creator>Batel Zohar</dc:creator>
      <pubDate>Wed, 16 Dec 2020 12:42:46 +0000</pubDate>
      <link>https://forem.com/jfrog/time-to-make-some-order-with-gocenter-5c31</link>
      <guid>https://forem.com/jfrog/time-to-make-some-order-with-gocenter-5c31</guid>
      <description>&lt;p&gt;Go is becoming one of the world’s fastest-growing software languages. To keep increasing my skill set as a developer I started learning  Go a few months ago. Here is a snapshot of my journey and some insights I learned along the way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dependency Management
&lt;/h3&gt;

&lt;p&gt;Learning a new language can be overwhelming, so I decided to start with the basics: dependency management. Since version 1.11, Go has &lt;a href="https://jfrog.com/blog/go-big-with-pseudo-versions-and-gocenter/"&gt;supported modules&lt;/a&gt;, a feature that makes dependency version information explicit and easier to maintain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Go module
&lt;/h3&gt;

&lt;p&gt;A module is a collection of &lt;a href="https://golang.org/ref/spec#Packages"&gt;Go packages&lt;/a&gt; stored in a file tree with a go.mod file at its root. The go.mod file defines the module’s module path, which is also the import path used for the root directory, and its dependency requirements, which are the other modules needed for a successful build. Each dependency requirement is written as a module path and a specific &lt;a href="http://semver.org/"&gt;semantic version&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s start with a &lt;a href="https://github.com/jfrog/project-examples/tree/master/golang-example"&gt;simple example:&lt;/a&gt; &lt;a href="https://github.com/rsc/hello"&gt;hello world&lt;/a&gt;. In this example, the go.mod file looks like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "rsc.io/hello"

require "rsc.io/quote" v1.5.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a simple go run and go build, we now have a basic hello world example. Let’s make it a bit more complicated by adding YAML support. To do this, we will add the following requirement (I found that version 2.2.7 is recommended), so let’s give it a go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gopkg.in/yaml.v2 v2.2.7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
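&lt;p&gt;After that, the go.mod file from the hello example would look roughly like this (a sketch in the same syntax as the earlier example; the exact form depends on your Go version):&lt;/p&gt;

```
module "rsc.io/hello"

require "rsc.io/quote" v1.5.1
require "gopkg.in/yaml.v2" v2.2.7
```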



&lt;p&gt;Then I realized that I was using a vulnerable package, and I found that &lt;a href="http://search.gocenter.io/"&gt;GoCenter&lt;/a&gt; provided an amazing way to better understand Go packages. &lt;a href="http://gocenter.io/"&gt;GoCenter&lt;/a&gt; has the following features:&lt;/p&gt;

&lt;h3&gt;
  
  
  Proxy my dependencies
&lt;/h3&gt;

&lt;p&gt;First, we can use GoCenter as a &lt;a href="https://search.gocenter.io"&gt;GOPROXY&lt;/a&gt;, redirecting all module download requests to GoCenter, which can be faster than resolving them directly from the VCS.&lt;/p&gt;

&lt;p&gt;To change the GOPROXY setting, just use the following commands:&lt;/p&gt;

&lt;p&gt;For macOS and Linux:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export GOPROXY=https://gocenter.io&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;For Windows:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;set GOPROXY=https://gocenter.io&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For PowerShell:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$env:GOPROXY="https://gocenter.io"&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Protect your binaries
&lt;/h3&gt;

&lt;p&gt;I tried to learn a bit more about the yaml package, and this is how it looks on GoCenter:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yTlrSTxa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r1okzv622wrcehyp4l6e.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yTlrSTxa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r1okzv622wrcehyp4l6e.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, I found out that my version is vulnerable and contains &lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2019-11254"&gt;CVE-2019-11254&lt;/a&gt;, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f_36BfHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9infe9g63km7ezkiq4c3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f_36BfHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9infe9g63km7ezkiq4c3.png" alt="CVE-2019-11254 of yaml.v2 go module"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also noticed that GoCenter scans the dependencies listed in a go.mod file and identifies every vulnerability. Under the dependencies tab we get detailed information about vulnerable components at every level of the dependency tree; clicking the orange triangle forwards us to the affected package, where we can check its vulnerability page, as in the following example of hashicorp/vault:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kOYOTz9E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dpbinjxo9i4m08fntbm5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kOYOTz9E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dpbinjxo9i4m08fntbm5.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Learn more about your packages
&lt;/h3&gt;

&lt;p&gt;So I clicked on the versions tab and saw that version 2.2.8 contains a fix, so I upgraded to the latest version, 2.4.0. It seems they also added some documentation and examples:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1Rd38ylp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hk5qc7trtc7j567h1oio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1Rd38ylp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hk5qc7trtc7j567h1oio.png" alt="As you can see the package yaml and an overview "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I love metrics. GoCenter’s metrics are colorful and present a lot of information in a great visual way, so I can easily see that the package has a lot of downloads and 37 contributors:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s_gQxvYO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4zhj42z48euy6d5bdkmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s_gQxvYO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4zhj42z48euy6d5bdkmn.png" alt="The number of open issues forks contributors and much more"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Advanced mode: private GOPROXY
&lt;/h3&gt;

&lt;p&gt;Another advantage for developers is the ability to improve our resolution time by integrating our &lt;a href="https://www.jfrog.com/confluence/display/JFROG/JFrog+Artifactory"&gt;JFrog Artifactory server&lt;/a&gt; and creating our own &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Go+Registry"&gt;private Go repository.&lt;/a&gt; We want a private Go repository so that we pull from a &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Virtual+Repositories"&gt;virtual repository&lt;/a&gt; that aggregates a remote repository pointing to GoCenter and a local repository containing our project. A benefit of this approach is that we don’t need to manage Artifactory ourselves; we can just use the &lt;a href="https://jfrog.com/platform/free-trial/"&gt;SaaS version&lt;/a&gt;, which is free within certain limits. &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;To sum it all up: as I keep learning to write Go, I will continue to use GoCenter as a proxy for my dependencies, for vulnerability scanning of my modules, for keeping track of package versions, and for beautiful metrics that give me a great visualization of the data. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>DevOps 101: Package Management</title>
      <dc:creator>Kat Cosgrove</dc:creator>
      <pubDate>Wed, 25 Nov 2020 16:14:39 +0000</pubDate>
      <link>https://forem.com/jfrog/devops-101-package-management-1i14</link>
      <guid>https://forem.com/jfrog/devops-101-package-management-1i14</guid>
      <description>&lt;p&gt;When you’re new to an industry, you encounter a lot of new concepts. This can make it really difficult to get your feet underneath you on an unfamiliar landscape, especially for junior engineers. What’s all this jargon? What does DevOps really mean? What’s all this software? Is DevOps a methodology, or a toolset? Is any of this actually going to make my life easier, or is it just a bunch of industry buzzwords? A lot of the documentation out there assumes you already have additional context and experience, or are proficient in some related tooling, and that doesn’t exactly make it easy to learn. DevOps has a ton of jargon in it, though. We're absolutely swimming in abbreviations and abstractions, and sometimes it's difficult to define a term satisfactorily without needing to define three more for context. It’s like running into a brick wall. &lt;/p&gt;

&lt;p&gt;Here, I'll explain package managers.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What even is that?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;From Wikipedia, a package manager or package-management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs for a computer's operating system in a consistent manner.&lt;/p&gt;

&lt;p&gt;If you’re a developer, you have probably already used one of these. For instance, if you’re writing Node.js, you’re probably using NPM. That’s a package manager. If you’re using Linux and you’ve ever installed something with apt-get, APT is a package manager. In plain English, a package manager is what handles the heavy lifting involved in installing something (including that thing’s dependencies), validating that it is what it says it is (to an extent, by comparing checksums), keeping track of versions, upgrades, and uninstalling. This is accomplished in part through the use of a large amount of metadata, defining characteristics about a particular package, alongside the actual application binary. Modern package managers are why you can reliably run &lt;code&gt;pip install Django==1.3.3&lt;/code&gt; or whatever and be pretty sure you are getting version 1.3.3 of Django, or just &lt;code&gt;pip install Django&lt;/code&gt; and take whatever is the most recent version.&lt;/p&gt;
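&lt;p&gt;As a small sketch of that checksum validation step (using hypothetical data, not a real package), here is how comparing a download against a published SHA-256 digest might look in Go:&lt;/p&gt;

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verify reports whether the SHA-256 digest of the downloaded bytes
// matches the digest published in the package's metadata.
func verify(data []byte, wantHex string) bool {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]) == wantHex
}

func main() {
	pkg := []byte("pretend this is a package archive")

	// The "published" digest, computed here just for the demo.
	sum := sha256.Sum256(pkg)
	published := hex.EncodeToString(sum[:])

	fmt.Println(verify(pkg, published))              // true: archive is intact
	fmt.Println(verify(append(pkg, '!'), published)) // false: archive was modified
}
```

&lt;p&gt;Real package managers also verify signatures and fetch these digests from signed metadata; this only illustrates the basic integrity check.&lt;/p&gt;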

&lt;h2&gt;
  
  
  &lt;strong&gt;The Before Times&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We didn’t always have this, though. The earliest versions of what you could call a package manager are from the mid to late 90s, with Debian introducing dpkg in 1994 and RedHat’s RPM in 1997. Before then, and until package management really took off, we had to do a lot of things manually and we had way less information about the things we were installing.&lt;/p&gt;

&lt;p&gt;You would get a compressed directory, usually in the form of a .tar file. Decompress that, and you would find a readme file with instructions to follow. Some kind of config script would be there which, when run, would tell your C compiler what to do, where new binaries should go, what application dependencies to look for and where, etc. If anything went wrong, it would exit and you would have to go install some more dependencies. If it found everything it needed and the config script completed, it would spit out a Makefile. Run the &lt;code&gt;make&lt;/code&gt; command and, if everything compiles correctly, run &lt;code&gt;make install&lt;/code&gt; to finally install the application. Updates were just as involved, if not more so. Obviously, this is a time-consuming process. Imagine having to do that for every piece of software on your computer, and every piece of software required to run all of it. A lot of things could go wrong on your journey from downloading a .tar file to actually getting the thing installed.&lt;/p&gt;

&lt;p&gt;Understandably, everyone thought this was exhausting and the release of the first package managers for Linux was A Big Deal. It immediately changed computing and application development for the better, forever. Those early package managers pretty much just gave you install, update, and uninstall, but over time, package managers have bundled all of that work and more into simpler commands and encoded extra information alongside the application binary and its dependencies, like version numbers, checksums, dependency graphs, and more. As the popularity of package managers for Linux grew, so did the demand for something similar for other languages. Thus, a whole series of package managers for other languages were born, from CPAN for Perl to PyPI for Python to Cargo for Rust, all with the goal of making it easier to distribute and use software.&lt;/p&gt;

&lt;p&gt;This introduced a whole new problem, though. Applications are easier to install, update, manage, and remove, so we start building more complex applications, and with Continuous Integration becoming popular in the 90s, we start releasing more often, too. One organization might be using multiple languages, and we also start caring about security. This is where binary repository managers enter the playing field.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What’s a binary repository manager?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A binary repository manager organizes all of those binaries we now have into a system of repositories. Some of them support just one or two package types for very specialized applications, but the one I’ll talk about supports more. This isn’t the same thing as source control, which is where your code lives, but more of an extension of it. While your code might live in a repository on GitHub or BitBucket or whatever, the result of that code being built or compiled -- your artifacts -- live in a binary repository manager like &lt;a href="https://jfrog.com/artifactory/" rel="noopener noreferrer"&gt;JFrog Artifactory&lt;/a&gt;. For DevOps to really work, this is an important part of the equation. We’re delivering updates far more often now, and we need to organize our build artifacts sensibly so that our other tools, like our CI/CD system, can interact with them and ultimately deploy the updates to the user. Without one, it’s much more difficult to track version numbers, control access, promote builds from testing to production, collect metadata, or detect security problems.  &lt;/p&gt;

&lt;p&gt;Try to imagine writing code and keeping it organized without tools like Git and GitHub. That sounds miserable, right? It’s the same for binaries and a binary repository manager, especially on teams that have multiple languages in one house. If you use a binary repository manager like Artifactory, you can store your Go, JavaScript, Docker images, Python, and Java binaries all in one tool, plus 22 other package types.&lt;/p&gt;

&lt;p&gt;Artifactory breaks up the repositories for each package type into three different classes: Local, remote, and virtual. &lt;strong&gt;Local&lt;/strong&gt; repositories are what they sound like: repositories for your build artifacts resulting from local code that exists on your machine. &lt;strong&gt;Remote&lt;/strong&gt; repositories are also fairly self-explanatory; they contain remote build artifacts, like your project's dependencies. This functions sort of like a cache, so that after the first download, your project pulls its dependencies from the associated remote repository rather than from NPM or PyPi or whatever. If you are using Docker, this is particularly helpful, since it limits the number of pulls you need to make against Docker Hub and you won't hit their new limit for pulls from anonymous users as quickly. &lt;strong&gt;Virtual&lt;/strong&gt; repositories are a little weirder -- they create a kind of envelope around the local and remote repositories for your project, and this is what you'll be interacting with most frequently.&lt;/p&gt;

&lt;p&gt;A lot of headaches are saved here, from an organizational standpoint. Things get released faster because things are more organized and easier to integrate with a CI/CD solution, and there’s no jumping around between a dozen tools that all do the same thing but for different package types. This improvement alone decreases the likelihood of a bad build making it out into the wild, because we humans are really bad at repetitive tasks, and this takes over a lot of repetition for us.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Cool cool cool, how do I try using one of these things? Sounds enterprisey.&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It's definitely a thing that's more beneficial to whole companies or dev teams than for an individual person on a side project, but it's still good to learn. There are a handful available, but Artifactory is free up to a certain amount of storage and data transfer. &lt;a href="https://jfrog.com/artifactory/start-free/#saas" rel="noopener noreferrer"&gt;Try it here&lt;/a&gt; on the cloud provider of your choice. It also comes with Xray, a vulnerability detection tool, so throw in a CI/CD system and you're getting pretty close to an end-to-end DevOps solution to play with and learn on. If you don't quite understand CI/CD (or you just need some recommendations for where to start!), check out the previous article in this series: &lt;a href="https://dev.to/jfrog/devops-101-ci-cd-49il"&gt;DevOps 101: CI/CD&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Summarize this for me, high school essay style.&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In conclusion, package managers are a set of tools that make it easier for you to install, use, update, and remove applications. They go further than just automating the steps we used to have to take manually, with config scripts and Makefiles, by also installing your dependencies and managing a bunch of extra metadata we didn’t have clear access to before. The next leap from there is the use of a binary repository as an extension of our source code repositories, to manage all of these binaries and build artifacts produced by our package managers. Doing so gives us more insight into what’s going on with our builds, a simpler way to control who has access to what, and a place to keep all of our build artifacts regardless of the languages or tools involved in our applications. This is a boon to your developers from a sanity and organization standpoint, to your users in the form of faster updates, and to your legal team in the form of faster detection of critical vulnerabilities. The invention of the package manager is possibly one of the most important innovations in computing in decades, and a universal binary repository manager is one of the most important parts of a functional DevOps pipeline.&lt;/p&gt;

&lt;p&gt;I hope I’ve helped you understand what package management is and what it does for you. If you’re still confused, that’s okay too -- there's a lot going on in this space. Stay tuned for more!&lt;/p&gt;

&lt;p&gt;Oh, and if you have specific requests, reach out to me on &lt;a href="https://twitter.com/Dixie3Flatline" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;! My DMs are open.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>beginners</category>
      <category>packagemanagement</category>
    </item>
    <item>
      <title>De(v)lightful continuous benchmarks with Go</title>
      <dc:creator>Omer Karjevsky</dc:creator>
      <pubDate>Thu, 05 Nov 2020 10:41:14 +0000</pubDate>
      <link>https://forem.com/jfrog/de-v-lightful-continuous-benchmarks-with-go-43oo</link>
      <guid>https://forem.com/jfrog/de-v-lightful-continuous-benchmarks-with-go-43oo</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftkxbgbukzg75cucged41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftkxbgbukzg75cucged41.png" alt="We want easy benchmark testing!"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following article is not going to tell you about the importance of benchmarking your Go applications.&lt;/p&gt;

&lt;p&gt;Even if we understand the importance, many times benchmark tests are ignored when developing new application features, or when improving existing ones. This can be a result of many reasons, but chiefly it is a result of the hassle required to perform a meaningful benchmark. &lt;/p&gt;

&lt;p&gt;Here, we will describe a method for making benchmarking an easy, low-friction step in the feature development lifecycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we need
&lt;/h2&gt;

&lt;p&gt;First, let’s describe the basic requirements when benchmarking an application feature:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The benchmark result should tell us something meaningful&lt;/li&gt;
&lt;li&gt;The benchmark result should be reliable and reproducible&lt;/li&gt;
&lt;li&gt;Benchmark tests should be isolated from one another&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The benchmark tooling available in the Go testing package is a useful and accepted method to meet the aforementioned requirements, but in order to have the best workflow, we want to include additional nice-to-have ones as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The benchmark test should be (very) easy to write&lt;/li&gt;
&lt;li&gt;The benchmark suite should run as fast as possible, so it can be integrated into CI&lt;/li&gt;
&lt;li&gt;The benchmark results should be actionable&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Easy, right?
&lt;/h2&gt;

&lt;p&gt;At first glance, all of these requirements should also be covered by the existing tooling.&lt;br&gt;
Looking at the example from Dave Cheney’s great &lt;a href="https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go" rel="noopener noreferrer"&gt;blog post&lt;/a&gt;, writing a benchmark for the &lt;code&gt;Fib(int)&lt;/code&gt; function should be very easy, and the duplicated code between this test and another would be negligible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;BenchmarkFib10&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;B&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// run the Fib function b.N times&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;N&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Fib&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
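&lt;p&gt;For completeness, here is a minimal recursive &lt;code&gt;Fib&lt;/code&gt; in the spirit of that post (my own sketch, not the exact code from the post), so the benchmark above has something to run against:&lt;/p&gt;

```go
package main

import "fmt"

// Fib returns the nth Fibonacci number, computed naively on purpose:
// the recursion gives the benchmark something non-trivial to measure.
func Fib(n int) int {
	if n > 1 {
		return Fib(n-1) + Fib(n-2)
	}
	return n
}

func main() {
	fmt.Println(Fib(10)) // 55
}
```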



&lt;h2&gt;
  
  
  Not in the real world
&lt;/h2&gt;

&lt;p&gt;For the purpose of this article, let’s take a look at a more challenging use case. This time, we want to benchmark a function which performs a SELECT statement on an SQL database.&lt;br&gt;
The stateful nature of this flow requires us to perform a setup step before running the actual test.&lt;/p&gt;

&lt;p&gt;What would such a test look like? Probably something similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;Benchmark_dao_FindByName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;B&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctxCancel&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithCancel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;ctxCancel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c"&gt;// when working with multiple DB types, or randomized ports, &lt;/span&gt;
    &lt;span class="c"&gt;// this can involve some logic, for example, &lt;/span&gt;
    &lt;span class="c"&gt;// resolving the connection info from env var / config.&lt;/span&gt;
    &lt;span class="n"&gt;dbDriver&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;resolveDatabaseDriver&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;dbUrl&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;resolveDatabaseURL&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dbUrl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;require&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NoError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;dao&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewDAO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// populate the db with enough entries for the test to be meaningful&lt;/span&gt;
    &lt;span class="n"&gt;populateDatabaseWithTestData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dao&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// ignore setup time from benchmark&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResetTimer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c"&gt;// run the FindByName function b.N times&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;N&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dao&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FindByName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"last_inserted_name"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;require&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NoError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yikes, even with the most simplistic example, look at all that setup code. Now imagine writing that for each new benchmark test in a real application codebase!&lt;/p&gt;

&lt;p&gt;Another thing to note here is that if we forgot (yes, mistakes happen) to reset the test timer where required, or to run the tested code inside the required &lt;code&gt;b.N&lt;/code&gt; loop, the test would become invalid without any clear indication.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what can we do?
&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://grnh.se/f6768jaq1" rel="noopener noreferrer"&gt;JFrog&lt;/a&gt;, we want processes to be as pain-free as possible, so we can shift-left responsibilities to the dev teams without having it affect productivity and development times too much. An example of this mindset is writing benchmark tests as part of the application test suite, instead of being part of the end-to-end suite or the QA pipeline.&lt;/p&gt;

&lt;p&gt;So, looking at the example above, we see that the experience of writing new benchmark tests would be painful for our developers - we need to fix that. Let’s look at how we can reduce the setup code as much as possible.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Wrap application setup and benchmark state
&lt;/h4&gt;

&lt;p&gt;Most of the setup logic from the example above can be deferred to a wrapping utility: resolving the DB connection info and creating the application object, so that they can be used in the test.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;BenchSpec&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
    &lt;span class="c"&gt;// Runs b.N times, after the benchmark timer is reset.&lt;/span&gt;
    &lt;span class="c"&gt;// The provided context.Context and container.AppContainer &lt;/span&gt;
    &lt;span class="c"&gt;// are closed during b.Cleanup().&lt;/span&gt;
    &lt;span class="n"&gt;Test&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="n"&gt;myapp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Application&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;runBenchmark&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;B&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;spec&lt;/span&gt; &lt;span class="n"&gt;BenchSpec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Helper&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctxCancel&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithCancel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Cleanup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctxCancel&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// when working with multiple DB types, or randomized ports, &lt;/span&gt;
    &lt;span class="c"&gt;// this can involve some logic, for example, &lt;/span&gt;
    &lt;span class="c"&gt;// resolving the connection info from env var / config.&lt;/span&gt;
    &lt;span class="n"&gt;dbDriver&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;resolveDatabaseDriver&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;dbUrl&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;resolveDatabaseURL&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dbUrl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;require&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NoError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;dao&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewDAO&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c"&gt;// did you try https://github.com/google/wire yet? ;)&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;myapp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewApplication&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dao&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// populate the db with enough entries for the test to be meaningful&lt;/span&gt;
    &lt;span class="n"&gt;populateDatabaseWithTestData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;spec&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;B&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;N&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;spec&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;Benchmark_dao_FindByName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;B&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;runBenchmark&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BenchSpec&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"case #1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Test&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="n"&gt;myapp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Application&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Dao&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FindByName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"last_inserted_name"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;require&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NoError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, the actual test code is trivial: all we need to do is pass the method we want to benchmark into the &lt;code&gt;runBenchmark&lt;/code&gt; helper function. This function handles all the heavy lifting for us.&lt;/p&gt;

&lt;p&gt;It also keeps us from making mistakes: note that the &lt;code&gt;BenchSpec.Test&lt;/code&gt; argument &lt;code&gt;t&lt;/code&gt; is scoped down to &lt;code&gt;testing.TB&lt;/code&gt; instead of exposing the full &lt;code&gt;*testing.B&lt;/code&gt; object.&lt;/p&gt;

&lt;p&gt;The wrapper utility can, of course, be extended to accept multiple test specifications at once, so that we can run multiple tests against a single database/application initialization.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Use pre-populated DB docker images
&lt;/h4&gt;

&lt;p&gt;Populating the database in each test takes too much time. Using a pre-populated &lt;a href="https://jfrog.com/knowledge-base/a-beginners-guide-to-understanding-and-building-docker-images/" rel="noopener noreferrer"&gt;docker image&lt;/a&gt; allows us to run our tests against a well-known DB state, without having to wait.&lt;/p&gt;

&lt;p&gt;As an added bonus, since the DB container no longer needs to run initialization actions in most cases, we can use an image that starts up much faster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Actionability
&lt;/h2&gt;

&lt;p&gt;Now that we have an easy way to add benchmarks for our new features, what do we do with them? The output will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% go &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"^$"&lt;/span&gt; &lt;span class="nt"&gt;-bench&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"."&lt;/span&gt; ./... &lt;span class="nt"&gt;-count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 &lt;span class="nt"&gt;-benchmem&lt;/span&gt;

PASS
Benchmark_dao_FindByName   242    4331132 ns/op     231763 B/op      4447 allocs/op

ok      jfrog.com/omerk/benchleft       3.084s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simply looking at the output and deciding if performance is good enough is nice, but can we get more actionable results from our benchmark suite?&lt;/p&gt;

&lt;p&gt;When working on a change to our application, we rely on our CI to make sure we don’t break anything. Why not use the same flow to make sure we don’t degrade our application performance?&lt;/p&gt;

&lt;p&gt;So, let’s run the benchmark suite as part of our CI. But getting the results on our working branch is not enough; we also need to compare them to a stable baseline. To do that, we can run the benchmark suite on both our working branch and a release branch (or the main branch) of our git repository.&lt;/p&gt;

&lt;p&gt;By using a tool like &lt;a href="https://godoc.org/golang.org/x/perf/cmd/benchstat" rel="noopener noreferrer"&gt;benchstat&lt;/a&gt;, we can compare the results of two separate benchmark runs. The output of such a comparison looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% benchstat &lt;span class="s2"&gt;"stable.txt"&lt;/span&gt; &lt;span class="s2"&gt;"head.txt"&lt;/span&gt;

name                                                                   old &lt;span class="nb"&gt;time&lt;/span&gt;/op    new &lt;span class="nb"&gt;time&lt;/span&gt;/op    delta
pkg:jfrog.com/omerk/benchleft goos:darwin goarch:amd64 _dao_FindByName 4.04ms ± 8%    5.43ms ±40%  +34.61%  &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.008 &lt;span class="nv"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5+5&lt;span class="o"&gt;)&lt;/span&gt;

name                                                                   old alloc/op   new alloc/op   delta
pkg:jfrog.com/omerk/benchleft goos:darwin goarch:amd64 _dao_FindByName 10.9kB ± 0%    10.9kB ± 0%   +0.19%  &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.032 &lt;span class="nv"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5+5&lt;span class="o"&gt;)&lt;/span&gt;

name                                                                   old allocs/op  new allocs/op  delta
pkg:jfrog.com/omerk/benchleft goos:darwin goarch:amd64 _dao_FindByName 8.60k ± 0%     8.60k ± 0%   +0.01%  &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.016 &lt;span class="nv"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4+5&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As can be seen above, we get a clear picture of the performance differences (if any) in time, memory, and number of allocations per operation.&lt;/p&gt;

&lt;p&gt;Storing the stable benchmark results in a bucket or artifact repository (such as &lt;a href="https://jfrog.com/artifactory/" rel="noopener noreferrer"&gt;JFrog Artifactory&lt;/a&gt;) is one option, but to keep performance variance to a minimum, we would prefer to run the suite on both branches on the same CI agent when possible (using the tips above to keep our tests fast is crucial here).&lt;/p&gt;

&lt;p&gt;Here is an example of a (simplistic) bash script that will help us get actionable results from our benchmark suite:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; pipefail

&lt;span class="nv"&gt;BENCH_OUTPUT_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BENCH_OUTPUT_DIR&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;out&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;DEGRADATION_THRESHOLD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DEGRADATION_THRESHOLD&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;5&lt;/span&gt;&lt;span class="p"&gt;.00&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;MAX_DEGRADATION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;span class="nv"&gt;THRESHOLD_REACHED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0

&lt;span class="k"&gt;function &lt;/span&gt;doBench&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
   &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;outFile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   go &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"^$"&lt;/span&gt; &lt;span class="nt"&gt;-bench&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"."&lt;/span&gt; ./... &lt;span class="nt"&gt;-count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 &lt;span class="nt"&gt;-benchmem&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;outFile&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;calcBench&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
   &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;metricName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
   &lt;span class="nv"&gt;MAX_DEGRADATION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BENCH_OUTPUT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/result.txt"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; 2 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;metricName&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 3 | &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 1 | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'match($0,/(\+[0-9]+\.[0-9]+%)/) {print substr($0,RSTART,RLENGTH)}'&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"+%"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
   &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MAX_DEGRADATION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
       &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Benchmark &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;metricName&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; - no degradation"&lt;/span&gt;
       &lt;span class="k"&gt;return
   fi
   if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;THRESHOLD_REACHED&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"0"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
       &lt;/span&gt;&lt;span class="nv"&gt;THRESHOLD_REACHED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MAX_DEGRADATION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &amp;gt; &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DEGRADATION_THRESHOLD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | bc &lt;span class="nt"&gt;-l&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
   &lt;span class="k"&gt;fi
   &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Benchmark &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;metricName&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; degradation: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MAX_DEGRADATION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;% | threshold: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DEGRADATION_THRESHOLD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;%"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BENCH_OUTPUT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

git checkout stablebranch &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git pull
doBench &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BENCH_OUTPUT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/stable.txt"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; git checkout - &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;

git checkout -
doBench &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BENCH_OUTPUT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/head.txt"&lt;/span&gt;

benchstat &lt;span class="nt"&gt;-sort&lt;/span&gt; delta &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BENCH_OUTPUT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/stable.txt"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BENCH_OUTPUT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/head.txt"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BENCH_OUTPUT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/result.txt"&lt;/span&gt;

calcBench &lt;span class="s2"&gt;"time/op"&lt;/span&gt;
calcBench &lt;span class="s2"&gt;"alloc/op"&lt;/span&gt;
calcBench &lt;span class="s2"&gt;"allocs/op"&lt;/span&gt;

&lt;span class="nb"&gt;exit&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;THRESHOLD_REACHED&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;doBench()&lt;/code&gt; runs the full benchmark suite and writes the results into the specified file. &lt;code&gt;calcBench()&lt;/code&gt; takes the comparison output and checks whether our specified degradation threshold has been reached.&lt;br&gt;
&lt;code&gt;benchstat&lt;/code&gt; is invoked with the &lt;code&gt;-sort delta&lt;/code&gt; flag, so the highest delta is the one that enters our calculation.&lt;/p&gt;

&lt;p&gt;The script performs the benchmarks against “stablebranch” and then against our working branch, compares the two results, and calculates against the threshold.&lt;/p&gt;

&lt;p&gt;The exit code can be used to fail our CI pipeline in case of a large enough performance degradation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Considerations
&lt;/h2&gt;

&lt;p&gt;It’s important to remember that your CI pipeline (such as &lt;a href="https://jfrog.com/pipelines/" rel="noopener noreferrer"&gt;JFrog Pipelines&lt;/a&gt;) is a tool meant to increase productivity and observability. Adding the benchmark step can be very helpful but does not entirely alleviate the developer’s responsibility.&lt;/p&gt;

&lt;p&gt;When following the examples above, you may notice that newly added benchmark tests are not considered when automating the degradation calculation, as there is no stable result to compare against. For this case, developer discretion is required to decide on a good base value.&lt;/p&gt;

&lt;p&gt;There will also inevitably be cases where changes would cause expected and acceptable performance degradation. The CI pipeline should support these cases.&lt;/p&gt;

&lt;p&gt;Lastly, regarding database-dependent benchmarks, it would be best to keep the tests isolated from the specific data stored in your snapshots - such tests should be based on the amount of data, and not the contents. This would allow updating the database snapshot without having to worry about breaking previously written tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Utilizing this methodology, we can reduce both the cognitive load of writing benchmark tests and the time it takes, which ultimately allows us to include them as part of our feature development lifecycle with very little added effort. On top of that, having actionable benchmark results as part of our CI pipelines allows us to catch issues much earlier in the development lifecycle.&lt;/p&gt;

&lt;p&gt;Taking full ownership and responsibility of feature delivery becomes easier, letting our developers focus on development instead of verification.&lt;/p&gt;

&lt;p&gt;Interested in working on high-performance products with team members who care about improving workflows while keeping high standards? &lt;a href="https://grnh.se/f6768jaq1" rel="noopener noreferrer"&gt;Come join us at JFrog R&amp;amp;D&lt;/a&gt;! &lt;/p&gt;

</description>
      <category>go</category>
      <category>benchmark</category>
      <category>ci</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Docker new Download Rate Limits</title>
      <dc:creator>Batel Zohar</dc:creator>
      <pubDate>Fri, 23 Oct 2020 10:31:44 +0000</pubDate>
      <link>https://forem.com/jfrog/docker-new-download-rate-limits-4jl7</link>
      <guid>https://forem.com/jfrog/docker-new-download-rate-limits-4jl7</guid>
      <description>&lt;p&gt;The new &lt;a href="https://docs.docker.com/docker-hub/download-rate-limit/" rel="noopener noreferrer"&gt;Docker announcement&lt;/a&gt; could be a bit confusing, but in this blog post, I’ll try to summarize it and make it simpler to understand. On November 1st, Docker is planning to add a new &lt;a href="https://www.docker.com/pricing" rel="noopener noreferrer"&gt;subscription level&lt;/a&gt;, and here’s how this may affect us.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;There are two main issues Docker users will now be facing: new image pull rate limits, and the image retention policy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pull rate limits
&lt;/h2&gt;

&lt;p&gt;This new limitation means that anonymous users are limited to 100 image pulls every 6 hours, and authenticated free users to 200 pulls every 6 hours.&lt;/p&gt;

&lt;p&gt;To better understand why rate limits were introduced: &lt;a href="https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/" rel="noopener noreferrer"&gt;Docker found that most Docker users pulled images at a rate you would expect for normal workflows.&lt;/a&gt; However, a small number of anonymous users has an outsized impact; roughly 30% of all downloads on Docker Hub come from only 1% of its anonymous users.&lt;/p&gt;

&lt;p&gt;The challenge comes when pulling an existing image: even if the layers are already cached locally and nothing is downloaded, the pull still counts against the limit. &lt;/p&gt;
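&lt;p&gt;Docker Hub reports your current allowance in the &lt;code&gt;ratelimit-limit&lt;/code&gt; and &lt;code&gt;ratelimit-remaining&lt;/code&gt; response headers of image manifest requests, with values like &lt;code&gt;100;w=21600&lt;/code&gt; (100 pulls per 21600-second window). As a small illustrative sketch (not an official client), such a header value can be parsed like this:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRateLimit parses a Docker Hub rate-limit header value such as
// "100;w=21600" (100 pulls per 21600-second window) into its two parts.
func parseRateLimit(v string) (pulls, windowSeconds int, err error) {
	parts := strings.Split(v, ";")
	pulls, err = strconv.Atoi(strings.TrimSpace(parts[0]))
	if err != nil {
		return 0, 0, err
	}
	for _, p := range parts[1:] {
		p = strings.TrimSpace(p)
		if strings.HasPrefix(p, "w=") {
			windowSeconds, err = strconv.Atoi(strings.TrimPrefix(p, "w="))
			if err != nil {
				return 0, 0, err
			}
		}
	}
	return pulls, windowSeconds, nil
}

func main() {
	pulls, window, err := parseRateLimit("100;w=21600")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d pulls every %d hours\n", pulls, window/3600) // prints "100 pulls every 6 hours"
}
```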

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmn9vluq2v204pia3nmyf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmn9vluq2v204pia3nmyf.png" alt="From Scaling Docker to Serve Millions More Developers: Network Egress"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From Scaling Docker to Serve Millions More Developers: Network Egress&lt;/p&gt;

&lt;h2&gt;
  
  
  Image retention policy
&lt;/h2&gt;

&lt;p&gt;Images stored in free Docker Hub repositories whose manifest has not been pushed or pulled in the last 6 months will be removed &lt;a href="https://www.docker.com/blog/docker-hub-image-retention-policy-delayed-and-subscription-updates/" rel="noopener noreferrer"&gt;in mid-2021&lt;/a&gt;. This policy does not apply to images stored by paid Docker Hub subscription accounts, Docker verified publishers or &lt;a href="https://docs.docker.com/docker-hub/official_images/" rel="noopener noreferrer"&gt;official Docker Images.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's take the following example of a free subscription user who pushed a tagged image called "&lt;strong&gt;batelt/froggy:v1&lt;/strong&gt;" to Docker Hub on Oct 21, 2019. If this tagged image has never been pulled since it was pushed, it will be considered inactive by mid-2021, when the new policy takes effect, and the image and any tag pointing to it will be subject to deletion.&lt;/p&gt;

&lt;p&gt;According to its documentation, Docker will also provide tooling, in the form of a UI and APIs, that will allow users to easily manage their images.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;My favorite solution is to use &lt;a href="https://www.jfrog.com/confluence/display/jfrog/Installing+Artifactory" rel="noopener noreferrer"&gt;JFrog Artifactory&lt;/a&gt;, an artifact repository manager. It allows us to store and protect our Docker images within a &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Docker+Registry" rel="noopener noreferrer"&gt;private Docker registry&lt;/a&gt;, keep them cached using a &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Repository+Management#RepositoryManagement-RemoteRepositories" rel="noopener noreferrer"&gt;remote repository&lt;/a&gt;, and reduce our requests to Docker Hub by serving the locally cached images, as shown in the following diagram.&lt;/p&gt;
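&lt;p&gt;As an illustrative sketch (the server name &lt;code&gt;myorg.jfrog.io&lt;/code&gt; and repository name &lt;code&gt;docker-remote&lt;/code&gt; are hypothetical placeholders for your own setup), pulling through an Artifactory remote repository looks like this:&lt;/p&gt;

```shell
# Hypothetical names: myorg.jfrog.io (Artifactory server), docker-remote (remote repo).
docker login myorg.jfrog.io

# The first pull fetches from Docker Hub and caches the layers in Artifactory;
# subsequent pulls are served from the cache, so they do not count
# against Docker Hub's rate limits.
docker pull myorg.jfrog.io/docker-remote/library/ubuntu:20.04

# Optionally re-tag so the rest of your tooling keeps using the short name.
docker tag myorg.jfrog.io/docker-remote/library/ubuntu:20.04 ubuntu:20.04
```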

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqaqhjxnb80mew7k23d60.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqaqhjxnb80mew7k23d60.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use Artifactory and how it works
&lt;/h2&gt;

&lt;p&gt;JFrog provides us with the ability to host our own secure private Docker registries and proxy external Docker registries. It even provides us with smart &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Checksum-Based+Storage" rel="noopener noreferrer"&gt;checksum-based storage&lt;/a&gt;, which utilizes storage to its maximum potential.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Choose Artifactory version (&lt;a href="https://jfrog.com/platform/free-trial/" rel="noopener noreferrer"&gt;Cloud&lt;/a&gt; or &lt;a href="https://jfrog.com/platform/free-trial/#hosted" rel="noopener noreferrer"&gt;On-prem&lt;/a&gt;) &lt;/li&gt;
&lt;li&gt;Create Docker repositories (&lt;a href="https://www.jfrog.com/confluence/display/JFROG/Docker+Registry#DockerRegistry-LocalDockerRepositories" rel="noopener noreferrer"&gt;local&lt;/a&gt; &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Docker+Registry#DockerRegistry-RemoteDockerRepositories" rel="noopener noreferrer"&gt;remote&lt;/a&gt; and &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Docker+Registry#DockerRegistry-VirtualDockerRepositories" rel="noopener noreferrer"&gt;virtual&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Configure repository advanced configuration&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Artifactory version
&lt;/h2&gt;

&lt;p&gt;If you don’t want to manage Artifactory yourself, you can use the SaaS version, which is free (with some limitations). Alternatively, you can use JFrog Container Registry, which supports Docker and Helm repositories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Docker repository
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnlicv834hnjqq7bzucyv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnlicv834hnjqq7bzucyv.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure repository advanced configuration
&lt;/h2&gt;

&lt;p&gt;Now we can easily &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Advanced+Settings" rel="noopener noreferrer"&gt;configure advanced repository options&lt;/a&gt;, such as how long Artifactory waits before checking the remote repository for a newer version of a requested artifact, so that our local cache serves most requests and saves pulls from Docker Hub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9807zntc4qshzbtye3g5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9807zntc4qshzbtye3g5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.jfrog.com/confluence/display/JFROG/JFrog+Artifactory" rel="noopener noreferrer"&gt;Learn more about Artifactory Pro&lt;/a&gt; that contains 27 different package types like Maven NPM and much more.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How to use Helm to Support Cloud-Native Development</title>
      <dc:creator>Batel Zohar</dc:creator>
      <pubDate>Mon, 21 Sep 2020 14:55:42 +0000</pubDate>
      <link>https://forem.com/jfrog/how-to-use-helm-to-support-cloud-native-development-h6g</link>
      <guid>https://forem.com/jfrog/how-to-use-helm-to-support-cloud-native-development-h6g</guid>
      <description>&lt;p&gt;One day my boss informed me that we are moving towards developing our software on the cloud. As a developer, this is one of my pet peeves as I hate spending time setting up complicated environments and prefer to focus on writing code and quickly releasing their software. And then I immediately got my first assignment to create a new software product using the CI/CD pipeline on the cloud. The questions that immediately arose were: When? How? And Why do we need it? &lt;/p&gt;

&lt;p&gt;Since then I have learned many lessons and would like to share them with you on how to prepare and configure your environment to achieve &lt;a href="https://jfrog.com/artifactory/cloud-automation/"&gt;DevOps automation in the cloud&lt;/a&gt; for your containerized secured applications using a CI/CD pipeline in Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why go to the cloud?
&lt;/h3&gt;

&lt;p&gt;Let’s first look at the main reasons for going to the cloud:&lt;/p&gt;

&lt;h4&gt;
  
  
  Achieve unlimited scalability:
&lt;/h4&gt;

&lt;p&gt;Setting up your &lt;a href="https://jfrog.com/blog/accelerating-software-delivery-in-the-cloud/"&gt;environment on the cloud&lt;/a&gt; provides unlimited scalability allowing you to grow according to your needs and is easily achieved by using cloud storage providers (Amazon S3, Google GCS or Microsoft Azure) in your environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Hardware costs are no longer an issue:
&lt;/h4&gt;

&lt;p&gt;A huge advantage of moving to cloud computing is the decrease in hardware costs, whereby you pay only for exactly what you use. Instead of purchasing in-house equipment, hardware needs are left to the cloud vendor. Adding new hardware every release or every quarter to meet your increasing needs can be very expensive and inconvenient; cloud computing alleviates these issues because resources can be acquired quickly and easily. &lt;/p&gt;

&lt;h4&gt;
  
  
  Gain redundancy:
&lt;/h4&gt;

&lt;p&gt;And the most important thing you gain is redundancy. A number of cloud storage providers can ensure your data is stored on multiple machines and complies with any regulations that need to be applied (like keeping two copies of all the data in two different regions).&lt;/p&gt;

&lt;p&gt;This blog post shows you how to prepare and configure a cloud-based environment to achieve an automated CI/CD pipeline for your containerized secured applications using Kubernetes.&lt;/p&gt;

&lt;p&gt;So let’s get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring a cloud-based automated CI/CD pipeline
&lt;/h3&gt;

&lt;p&gt;Before you start, prepare the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A version control system (such as Git)&lt;/li&gt;
&lt;li&gt;Kubernetes to deploy and manage your containerized applications&lt;/li&gt;
&lt;li&gt;A CI/CD pipeline tool like Jenkins, TeamCity, Bamboo etc.&lt;/li&gt;
&lt;li&gt;A binary repository manager&lt;/li&gt;
&lt;li&gt;A compliance auditor&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 1: Setting up your cloud environment
&lt;/h3&gt;

&lt;p&gt;Let's start by creating a Kubernetes cluster. You can easily choose one of the available &lt;a href="https://dzone.com/articles/5-hosted-kubernetes-platforms"&gt;Kubernetes platforms&lt;/a&gt; in the cloud. Then isolate your environment by creating separate environments for development, staging, and production as displayed in the following diagram: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_0fsiASP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j3f7uutmhp17090ga3dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_0fsiASP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j3f7uutmhp17090ga3dg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We recommend applying the &lt;a href="https://dzone.com/articles/the-shift-left-principle-and-devops-1"&gt;“shift left”&lt;/a&gt; principle, whereby tasks typically performed at later stages are moved to earlier stages in the pipeline, for example running security testing or selecting the &lt;a href="https://www.gnu.org/philosophy/free-sw.html"&gt;FOSS license.&lt;/a&gt; As mentioned earlier, the aim is to move faster and reduce delivery time while improving the quality of each release. At the same time, teams face increasing pressure to shorten testing, which means developers need to be integrated into the testing cycle earlier. &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Configuring the VCS server
&lt;/h3&gt;

&lt;p&gt;When you finish developing the app, proceed to configure the &lt;a href="https://www.tutorialspoint.com/jenkins/jenkins_git_setup.htm"&gt;VCS server&lt;/a&gt; to trigger the build after a successful pull request.&lt;/p&gt;
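&lt;p&gt;As an illustration, with GitHub as the VCS, a webhook that notifies the CI server on pull-request events can be registered through the API. OWNER/REPO, the token, and the Jenkins endpoint are placeholders:&lt;/p&gt;

```shell
# Register a webhook so the CI server is notified on pull-request events.
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/OWNER/REPO/hooks \
  -d '{
        "name": "web",
        "active": true,
        "events": ["pull_request"],
        "config": {
          "url": "https://jenkins.example.com/github-webhook/",
          "content_type": "json"
        }
      }'
```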

&lt;h3&gt;
  
  
  Step 3: Running the build
&lt;/h3&gt;

&lt;p&gt;Run the build on your CI server (for example, Jenkins) to create the application’s Docker image, then run the unit tests against the Docker container.&lt;/p&gt;
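&lt;p&gt;A minimal sketch of this step; the image name and the test command are hypothetical and depend on your project:&lt;/p&gt;

```shell
# Build the application image, tagged with the CI build number
docker build -t myapp:"${BUILD_NUMBER:-dev}" .

# Run the unit tests inside a container ("npm test" is a placeholder)
docker run --rm myapp:"${BUILD_NUMBER:-dev}" npm test
```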

&lt;h3&gt;
  
  
  Step 4: Testing the build
&lt;/h3&gt;

&lt;p&gt;After the build is tested, create a Docker image and upload it to your private &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Docker+Registry"&gt;Docker registry&lt;/a&gt; and private &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Helm+Chart+Repositories"&gt;Helm repository,&lt;/a&gt; then run a number of tests against the running Docker container, including integration tests.&lt;/p&gt;
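&lt;p&gt;Sketched with placeholder registry, repository, and chart names, this step might look like the following:&lt;/p&gt;

```shell
# Tag and push the image to a private Docker registry
docker tag myapp:1.0.0 myregistry.jfrog.io/docker-local/myapp:1.0.0
docker push myregistry.jfrog.io/docker-local/myapp:1.0.0

# Package the chart and upload it to a private Helm repository
helm package mychart/
curl -u "$REPO_USER:$REPO_PASS" -T mychart-1.0.0.tgz \
  "https://myregistry.jfrog.io/artifactory/helm-local/"
```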

&lt;h3&gt;
  
  
  Step 5: Scanning the build
&lt;/h3&gt;

&lt;p&gt;Approximately three decades ago, Richard Stallman changed the developer’s world forever by introducing the &lt;a href="https://www.gnu.org/gnu/thegnuproject.en.html"&gt;GNU Project,&lt;/a&gt; the first open-source coding project that included requirements for scanning external code. In that spirit, I recommend using &lt;a href="https://chartcenter.io"&gt;ChartCenter&lt;/a&gt; as the source for our Helm charts. Taking &lt;a href="https://chartcenter.io/bitnami/redis"&gt;our DB&lt;/a&gt; as an example, the chart view makes it easy to see information about its dependencies:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UNm-1lyP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4p3qx3i60gmhmmms306x.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UNm-1lyP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4p3qx3i60gmhmmms306x.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Afterward, I can check the security report very easily and calculate the risk of this specific version:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YZD3Dy0W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/l1v246o0miawzp62p8w3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YZD3Dy0W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/l1v246o0miawzp62p8w3.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 6: Deploying to development
&lt;/h3&gt;

&lt;p&gt;Now we can proceed to deploy the app to the development environment and perform additional tests against this cluster.&lt;/p&gt;

&lt;p&gt;Add the ChartCenter Helm repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add center https://repo.chartcenter.io
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
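&lt;p&gt;Once the repository is added, the Redis chart discussed earlier can be deployed to the development namespace (the release and namespace names are illustrative):&lt;/p&gt;

```shell
# Refresh the local index of the newly added repository
helm repo update

# Install the Redis chart from ChartCenter into the development environment
helm install my-redis center/bitnami/redis --namespace development
```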



&lt;h3&gt;
  
  
  Step 7: Deploying to staging
&lt;/h3&gt;

&lt;p&gt;After running a full test cycle in the development environment, deploy the application to the isolated Kubernetes staging cluster, run the staging tests, and proceed to the next step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 8: Deploying to production
&lt;/h3&gt;

&lt;p&gt;Run a set of sanity tests and deploy the application to the isolated production cluster. Be ready to perform a fast rollback, if necessary.  &lt;/p&gt;
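&lt;p&gt;With Helm, the rollback side of this is straightforward; the release name here is illustrative:&lt;/p&gt;

```shell
# Inspect the release history in production
helm history myapp --namespace production

# Roll back to a previous known-good revision (here, revision 1)
helm rollback myapp 1 --namespace production
```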

&lt;h3&gt;
  
  
  It’s time to have some fun!
&lt;/h3&gt;

&lt;p&gt;So you see it ain’t that scary and the steps are out there, so go ahead and develop your next app in the cloud. To make it even easier, follow our &lt;a href="https://github.com/eldada/jenkins-pipeline-kubernetes"&gt;6-step CI/CD pipeline for a simple static website application based on the official nginx Docker image.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>googlecloud</category>
      <category>azure</category>
      <category>aws</category>
    </item>
    <item>
      <title>How Everyone Can Win in the New Era of Docker Hub Limits</title>
      <dc:creator>Melissa McKay</dc:creator>
      <pubDate>Wed, 26 Aug 2020 16:34:58 +0000</pubDate>
      <link>https://forem.com/jfrog/how-everyone-can-win-in-the-new-era-of-docker-hub-limits-3knd</link>
      <guid>https://forem.com/jfrog/how-everyone-can-win-in-the-new-era-of-docker-hub-limits-3knd</guid>
      <description>&lt;h4&gt;
  
  
  AN OBSERVATION OF HUMAN BEHAVIOR
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;“Don't take anything for granted, because tomorrow is not promised to any of us.”&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Kirby Puckett&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There is a behavior I’ve observed repeatedly over the years that I’m certain boils down to one of the basics of human psychology. The crux of it is this: when something is readily available to us in vast quantities, the less we appreciate said thing. Put a slightly different way, we tend to be less concerned with how much we use of something when we have plenty to spare. It’s an interesting phenomenon to watch play out, and I see it every single time I open a brand new tube of toothpaste. It’s the same pattern every time - the first glob out of the tube looks remarkably like the marketing images, full coverage in the nice shape of a wave. But as I get to the end of the tube, I become predictably stingy. It turns out that the small dabs of toothpaste that I use after obsessively smoothing out and rolling up the tube is all I really need, and yet, somehow a brand new tube equates to a full, beautiful wavy glob. Come to think of it, pretty much anything I use that comes in a tube or other similar container has the same fate.&lt;/p&gt;

&lt;p&gt;Another real-life example of this behavior I regularly struggle with is how much ice cream I have access to. There is a direct relationship between my dieting success and the amount of ice cream in my freezer. I’m not saying ice cream is evil (it is), it’s the availability of too much ice cream that results in my repeated failures. My lack of self-control in this situation is a completely separate matter that I don’t wish to discuss.&lt;/p&gt;

&lt;p&gt;You might not immediately relate to the toothpaste or ice cream scenarios (are you even human?), but there is a fairly long list of essential things we have all taken for granted at one point or another - the availability of running water in your home, the ease of flipping a switch to read in the evening, cool and breathable air! Of course, all of this is in varying degrees depending on our history and current access to these things. But that is exactly the point I’m making. We intrinsically know that these resources greatly improve our well-being and are of utmost importance (some essential to life!), and yet until we are faced with some kind of limiting factor, it’s difficult for us to appreciate them in the way we should.&lt;/p&gt;

&lt;p&gt;To be clear, none of this is meant to shame or guilt anyone. This is all just an observation of something that is completely natural and probably even beneficial to us as human beings. If we spent our days worrying about everything that is essential to us and how our lives would be without them, we would be nothing but shriveling heaps of tears and angst at the end of the day. Living our lives is very much like spinning plates - it’s the wobbly plate that gets our immediate attention. The management of our resources is very much related to the quantity available to us and we are left to figure out how to deal with whatever crisis is at hand when we hit unexpected limits. We re-evaluate our needs and then we find clever solutions to subsist on what we have. And round and round we go.&lt;/p&gt;

&lt;p&gt;This brings me to our current wobbly plate in our DevOps world.&lt;/p&gt;

&lt;p&gt;Given the title of this article, you can probably tell where I’m going with this. Let’s set aside deep discussions of human behavior and life on this planet for another time, and instead, let’s figure out how to apply what we’ve learned from our observations so far to the latest happenings in DevOps tooling and resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  WHAT IS THE PROBLEM? WHY IS THIS COMING UP NOW?
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;“For want is nexte to waste, and shame doeth synne ensue,” (waste not, want not)&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Richard Edwards&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt; recently updated their &lt;a href="https://www.docker.com/legal/docker-terms-service"&gt;terms of service, (&lt;em&gt;see section 2.5&lt;/em&gt;)&lt;/a&gt;, for their free service level accounts to include a rate limit (a limit on the number of pushes and pulls of Docker images over a span of six hours), as well as a retention policy for inactive images (images that are not pushed or pulled for the last six months are deleted). If you can imagine, these changes have lit quite the firestorm of discussion on social media. Everyone relying on these services is having to come to terms with these new limitations to be sure that their pipelines will not be adversely affected.&lt;/p&gt;

&lt;p&gt;Let’s break this down.&lt;/p&gt;

&lt;p&gt;Prior to these changes, developers and full-scale CI/CD systems were able to push and pull Docker images from Docker Hub without any limitations. On top of that, free storage! This is a pretty incredible service and frankly very easy to take advantage of. You know how when you have more storage, you store more things. This behavior permeates my own life across the board. My digital photo album is an excellent example. My house is another example. I moved from an apartment to a home and I magically have more stuff! Like goldfish, we tend to fill the space we’re in and then forget what we have.* Again, this is just a natural human behavior. But the moment that storage is assigned a price, (or a retention policy in the case of Docker images stored in Docker Hub), we now must take a step back and figure out how to manage our storage a little more thoughtfully. We must clean out our closets, so to speak.&lt;/p&gt;

&lt;p&gt;The new limits imposed by Docker Hub are a bit of a call to action to define some netiquette around the use of these free services. This is both a jolt to re-evaluate and consider our use of the affected resources and an opportunity to save ourselves from some of the negative consequences of taking these high-value resources for granted. For those DevOps professionals out there that are already following best practices, this &lt;a href="https://www.docker.com/pricing/resource-consumption-updates"&gt;announcement from Docker&lt;/a&gt; is far from a deal-breaker for the use of Docker Hub and will certainly not result in their software development and distribution pipelines grinding to a halt. We’ll talk in the next section about what those best practices are, but first, let’s discuss the real elephant in the room, and perhaps the real fear that the Docker terms of service update has unveiled.&lt;/p&gt;

&lt;p&gt;There seems to be an unhealthy reliance on external resources when it comes to critical internal operations. Specifically, if my team of developers cannot access a Docker image when required in their personal development environments (and requests from developers could be multiple times a day depending on the circumstances), their progress on the next feature or bug fix is potentially blocked. In the same way, if my CI/CD system that is responsible for building my next software release cannot access the binaries it needs, my team may end up in a position where they cannot release. The same can be said for every intermediary step of the pipeline including initial integration and deployment to quality assurance test environments. By taking for granted the access to and storage of the most integral building blocks of our software, our software binaries, many find themselves completely at the mercy of an external service.&lt;/p&gt;

&lt;p&gt;Docker Hub is not the only organization out there whose free service level offering is subject to limitations. It is not an uncommon occurrence that near the end of the month, &lt;a href="https://www.boost.org/"&gt;Boost&lt;/a&gt;, (one of the most popular library projects in C++), reaches a point where the distributable is no longer accessible because the organization’s &lt;a href="https://github.com/boostorg/boost/issues/383"&gt;monthly download allowance has been exceeded&lt;/a&gt;. Docker and Boost have intentional limitations set. Some services will degrade or encounter downtime when demand is too high or because of any number of other reasons. For example, &lt;a href="https://www.nuget.org/"&gt;NuGet Gallery&lt;/a&gt;, the central repository for .NET packages, provides a &lt;a href="https://status.nuget.org/"&gt;status page&lt;/a&gt; to let stakeholders know what is going on when there is an outage. The most unfortunate scenario which has more to do with uncontrolled risk rather than free service limits is when a remote binary that your build relies upon just up and disappears, like what happened during the infamous &lt;a href="https://blog.npmjs.org/post/141577284765/kik-left-pad-and-npm"&gt;NPM left-pad debacle of 2016&lt;/a&gt;. All of these examples call attention to the problems and potential productivity killers that teams face when relying on remote resources for software binaries. Another important point to make here... &lt;em&gt;this is not a new problem!&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  HOW DO I DEAL? HOW DO I MITIGATE RISK FOR MY DEVOPS TEAM?
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;“There is a store of great value in the house of the wise, but it is wasted by the foolish man.”&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Proverbs 21:20 (BBE)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So now that I have a full understanding of the real value of my Docker images and other binaries and can better evaluate the methods I use to store and retrieve them, what can I do to help keep my builds and my whole software development pipeline alive and drama free? Obviously, taking the stance of never using free services like Docker Hub is unacceptable as this will put you at a disadvantage. These services are valuable and certainly have their place. But 100% reliance on them is clearly unhealthy. Expecting them to meet an unbounded need is also unrealistic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Take an inventory of your software project.&lt;/strong&gt;&lt;br&gt;
It’s important to know exactly what libraries and packages your software project is pulling in. Understand exactly where your binaries are coming from. For Docker images, make sure you understand thoroughly what is happening when you build your images. For example, are there any lines in your Dockerfile that pull from npm, perform a pip install, or update other software packages? All of these actions potentially reach out to remote service providers and will count against any download limits.&lt;/p&gt;
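&lt;p&gt;A quick first pass can be done with standard shell tools. The sketch below writes a sample Dockerfile (a stand-in for your real one) and lists the unique base images it references, each of which is a pull that counts against registry limits:&lt;/p&gt;

```shell
# A sample Dockerfile standing in for your real one
printf 'FROM node:14 AS build\nRUN npm ci\nFROM nginx:alpine\n' > Dockerfile.sample

# List the unique external base images it references
grep -i '^FROM' Dockerfile.sample | awk '{print $2}' | sort -u
```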

&lt;p&gt;&lt;strong&gt;Step 2: Utilize multiple levels of caching.&lt;/strong&gt;&lt;br&gt;
Given that many remote offerings like Docker Hub, Boost, npm, NuGet Gallery, and many others have very real limitations and possibly unplanned downtime, it’s important to mitigate both the risk of not being able to access your binaries when needed as well as eliminate unnecessary polling for these resources. One of the most valuable things you can do is set up a caching proxy like JFrog's &lt;a href="https://www.jfrog.com/confluence/display/JFROG/JFrog+Artifactory"&gt;Artifactory&lt;/a&gt;, &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Remote+Repositories"&gt;(a remote repository)&lt;/a&gt;, for these remote resources. The next level of cache that will play an important role is a developer’s local environment. Developers should be set up to pull required resources from the caching proxy rather than repeatedly from the remote service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Modify CI/CD pipelines to pull from cache.&lt;/strong&gt;&lt;br&gt;
Even if your CI/CD processes involve building your code from scratch on brand-new, temporary instances, set them up to pull from the proxy set up in Step 2 rather than repeatedly pulling from remote sources. A misbehaving pipeline can easily run into throttling and other download limitations if left unchecked. It is better for your CI/CD pipelines to utilize internal resources that you have control over rather than leave them attempting to pull from remote sources that may be unavailable. If you set up your pipelines this way, you will be more empowered to troubleshoot and resolve any internal issues you experience in order to complete your pipeline processes successfully rather than be relegated to the priority queue of an external service. &lt;/p&gt;

&lt;p&gt;I expect nothing less than a ton of buzz and discussion about this move by Docker and even the thoughts I’ve written here. This is a good thing. My hope is that this move will bring to light the realities of providing a service that so many in the industry have come to rely on and ultimately what it means to be a responsible user of community resources. I also hope that we come to fully appreciate the costs associated with access to and storage of our most valuable software building blocks - that we are more thoughtful about where we put them and how we get to them since they are fundamental to our organization’s software.&lt;/p&gt;

&lt;p&gt;* &lt;small&gt;This is actually an entirely untrue statement about goldfish, but you get my meaning. Blogs like this one perpetuate these falsehoods, so here is a resource to hopefully make up for it:  &lt;a href="https://www.tfhmagazine.com/articles/freshwater/goldfish-myths-debunked"&gt;https://www.tfhmagazine.com/articles/freshwater/goldfish-myths-debunked&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockerhub</category>
      <category>cicd</category>
      <category>artifactory</category>
    </item>
    <item>
      <title>Helm V3, Latest &amp; Greatest of Kubernetes</title>
      <dc:creator>shimib</dc:creator>
      <pubDate>Wed, 19 Aug 2020 17:53:39 +0000</pubDate>
      <link>https://forem.com/jfrog/helm-v3-latest-greatest-of-kubernetes-1eff</link>
      <guid>https://forem.com/jfrog/helm-v3-latest-greatest-of-kubernetes-1eff</guid>
      <description>&lt;p&gt;Helm is becoming the de facto standard for managing Kubernetes deployments. &lt;br&gt;
Although not the only tool in the landscape, it’s by far more popular than the alternatives.&lt;br&gt;
The reason for using Helm is quite obvious: managing your K8S deployments by hand requires a lot of YAML manipulation which usually leads to high maintenance and duplication.&lt;/p&gt;

&lt;p&gt;Recently, Helm v3 was released and I wanted to describe the changes and new features in detail.&lt;/p&gt;

&lt;h1&gt;
  
  
  Removal of Tiller
&lt;/h1&gt;

&lt;p&gt;If you have worked with previous versions of Helm, one of the mandatory installation components was the Tiller Helm server that needed to be installed in your K8s cluster.&lt;br&gt;
You might have asked, why is it needed? Can’t all the operations be performed from the client side?&lt;br&gt;
Well, when Helm v2 was released in 2016, some of the K8s features that we are now used to (e.g., Custom Resource Definitions (CRDs)) weren’t available yet.&lt;br&gt;
These days, there is really no need for Tiller.&lt;br&gt;
In version 3, Tiller is no more ☺&lt;/p&gt;

&lt;p&gt;With the removal of Tiller, there is no longer a centralized namespace where all releases’ information is stored (the namespace where Tiller was installed). This information is now stored in the namespace of the release itself.&lt;br&gt;
Your releases now live under their own namespace (yes, you have to create the namespace).&lt;/p&gt;

&lt;p&gt;Security is also now handled where it should, i.e., by K8s RBAC.&lt;/p&gt;

&lt;h1&gt;
  
  
  XDG-based Directory Structure
&lt;/h1&gt;

&lt;p&gt;Starting with Helm v3, directory structure and its configuration are based on the XDG Base Directory Specification.&lt;br&gt;
For those not familiar, the XDG specification defines standard environment variables for locating the home directory and various subfolders.&lt;/p&gt;

&lt;p&gt;In version 3, $HELM_HOME is no more ☺&lt;/p&gt;

&lt;p&gt;Also, the “helm init” and “helm home” commands no longer exist.&lt;/p&gt;
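&lt;p&gt;You can inspect where the Helm v3 client now keeps its files with the “helm env” command; the locations shown in the comment are typical Linux defaults and may differ on your system:&lt;/p&gt;

```shell
# Print Helm's client-side environment, including its XDG-based paths
helm env

# Typical Linux defaults:
#   HELM_CONFIG_HOME -> ~/.config/helm
#   HELM_CACHE_HOME  -> ~/.cache/helm
#   HELM_DATA_HOME   -> ~/.local/share/helm
```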

&lt;h1&gt;
  
  
  Library Charts
&lt;/h1&gt;

&lt;p&gt;Starting with v3, a chart can have the type (meta-data chart property) of either “application” or “library” (“application” by default).&lt;br&gt;
Library charts are common charts that are reusable and intended for use in a containing application.&lt;/p&gt;
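&lt;p&gt;For example, a library chart declares its type in Chart.yaml; the name, description, and version here are illustrative:&lt;/p&gt;

```yaml
apiVersion: v2
name: common-helpers
description: Reusable named templates shared by application charts
type: library
version: 0.1.0
```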

&lt;p&gt;Regarding chart dependencies, requirements and dependencies have been moved into the chart.yaml itself.&lt;/p&gt;

&lt;h1&gt;
  
  
  A Smooth Migration?
&lt;/h1&gt;

&lt;p&gt;While experimenting with Helm v3, I ran into some issues where I had a chart deployed with v2 and tried to delete and replace it using a v3 client.&lt;br&gt;
I got some weird errors when trying to reinstall the chart (e.g., “already exists”), even though “helm ls” didn’t display anything about my chart.&lt;br&gt;
This error occurred because the different versions of Helm store their catalog in different locations.&lt;br&gt;
I had to revert to Helm v2 client to purge my chart.&lt;br&gt;
So, keep that in mind and follow proper &lt;a href="https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/"&gt;migration guides&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  ChartCenter
&lt;/h1&gt;

&lt;p&gt;Combined with the release of Helm v3, I was also excited to hear the announcement of ChartCenter. &lt;br&gt;
ChartCenter (&lt;a href="https://chartcenter.io/"&gt;https://chartcenter.io/&lt;/a&gt;) provides you with all the information you need about the charts you depend on, including security vulnerabilities scanning information powered by JFrog Xray.&lt;br&gt;
On the site’s UI you can dig deep into the subcomponents of the included containers and see the vulnerable components down to the application’s dependencies.&lt;br&gt;
Not only do I now have a “go-to” place for fetching my infrastructure chart, I can also assure myself that my dependencies have no critical security vulnerabilities.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Best Practices for Onboarding Security &amp; Compliance Scanning Tools</title>
      <dc:creator>Eran Blumenthal</dc:creator>
      <pubDate>Sun, 02 Aug 2020 14:12:10 +0000</pubDate>
      <link>https://forem.com/jfrog/best-practices-for-onboarding-security-compliance-scanning-tools-322g</link>
      <guid>https://forem.com/jfrog/best-practices-for-onboarding-security-compliance-scanning-tools-322g</guid>
      <description>&lt;p&gt;Introducing, adding, or replacing a new binary security and compliance analysis tool into your SDLC, if not handled correctly, can be very disruptive to the SDLC and organization.&lt;br&gt;
This article will go over what I believe is a best practice for onboarding such tools in order to reduce disruption, improve adoption, and shift left to create a &lt;a href="https://jfrog.com/devops-tools/what-is-devsecops/"&gt;DevSecOps behaviour&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, let’s begin by describing the scenario I think about when talking about onboarding &lt;a href="https://jfrog.com/xray/"&gt;security tools&lt;/a&gt;. What usually happens, especially when introducing a new tool, is that all screens go red and alerts appear from every direction. In such scenarios, it is a reasonable knee-jerk reaction to require a system lockdown; disqualify builds, reject dependencies, etc. However, such a reaction, even though it may seem valid, is counterproductive. Such behaviour will, in theory, bring production to a halt and cause frustration and conflicts. Naturally, in real life, in most cases, no one will really stop the business. Rather, the organisation may develop Alert Fatigue and ignore the tool, which is definitely not something we want. Keep reading to find out how to avoid this scenario and onboard DevSecOps tools with minimal disruption and maximum gain.&lt;/p&gt;

&lt;p&gt;The following is a list of 5 suggestions when onboarding SCA (Software Composition Analysis) tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  Involve R&amp;amp;D
&lt;/h3&gt;

&lt;p&gt;When you involve R&amp;amp;D in the process, you have a real chance of achieving a DevSecOps process that is manageable and productive. There is approximately one security engineer per 100-200 developers, so having a single engineer review and control all security issues is only feasible when that work is attached to development. Otherwise it will cause bottlenecks, delays, redundant work, and a lot of frustration.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  A “watch” per application team
&lt;/h3&gt;

&lt;p&gt;Each security tool uses a different name, but most of them have some notion of scope. By scope I mean ‘some way to group a set of resources (repositories, folders, builds, etc.) for the application of relevant security and compliance governing rules’.&lt;br&gt;
Creating a “watch” per application team allows giving every team their own “world” and responsibility. In turn, this provides the ability to &lt;a href="https://jfrog.com/webinar/shift-left-with-artifactory-pro-x/"&gt;“shift left”&lt;/a&gt; the responsibility of security governance.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  Dev tools integration
&lt;/h3&gt;

&lt;p&gt;Try to bring the information into the development tools (e.g., CI server, IDE), which will provide governance-related information in the developer’s environment.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  Start small and work in cycles
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Choose one team to start with.

&lt;ul&gt;
&lt;li&gt;Considerations for choosing the right team:

&lt;ul&gt;
&lt;li&gt;A vanguard team - a team which is open to changes and testing new ways to improve (especially if they care about security).&lt;/li&gt;
&lt;li&gt;A new app team - if you have the option to, choose a team which is starting a new project/service (i.e. starting “greenfield”)&lt;/li&gt;
&lt;li&gt;Or a team with less “integrations”/dependency with other teams.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Work with the chosen team to establish and improve the process before expanding to other teams.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start with “Critical” issues&lt;/strong&gt; -  As mentioned in the introduction, introducing a new SCA tool will most likely generate tens or hundreds of alerts of all types and severities.
Even in small scopes, trying to handle everything at once might not be reasonable. If you see that even the number of critical issues is too high and your tool supports it, try to be even more granular than Low/Medium/High/Critical. Try to go by CVSS score; start with 9.9-10, then 9.8-10, 9.7-10, etc. The granularity of this filtering should be based on your needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make a decision for each “critical”&lt;/strong&gt; -  Decide whether the issue needs to be fixed and can be fixed, or whether to whitelist it. This might be a decision to whitelist the issue temporarily or permanently, or to define a deadline for fixing it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do not use “brute force actions”&lt;/strong&gt; -  Most tools allow you to take actions: email notifications, webhooks, Slack, Jira, etc. Some tools provide additional capabilities such as blocking downloads, failing builds, or similar.
Until you finish the cleanup of the manageable existing issues (i.e. issues you can resolve or ignore), avoid any action that disrupts the development lifecycle, such as blocking the download of a needed dependency or failing a build based on the new tool’s scan results.

&lt;ul&gt;
&lt;li&gt;Only when builds pass without violations should you consider enabling hard actions in the future.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  Add external notifications (Jira, Slack, etc.)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You probably want to add external notifications only for new issues, otherwise you may be risking “Alert Fatigue”.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
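&lt;p&gt;The gradual CVSS-band filtering suggested under “Start with Critical issues” can be sketched in Go. This is only an illustration; the &lt;code&gt;vulnerability&lt;/code&gt; struct and its field names are assumptions, not Xray's actual schema:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sort"
)

// vulnerability is a minimal stand-in for one SCA finding; the struct
// and its field names are illustrative assumptions, not Xray's schema.
type vulnerability struct {
	ID   string
	CVSS float64
}

// filterByCVSS keeps only findings at or above minScore, highest
// first, so a team can widen the band gradually: 9.9-10, then
// 9.8-10, and so on.
func filterByCVSS(vulns []vulnerability, minScore float64) []vulnerability {
	var out []vulnerability
	for _, v := range vulns {
		if v.CVSS >= minScore {
			out = append(out, v)
		}
	}
	sort.Slice(out, func(i, j int) bool { return out[i].CVSS > out[j].CVSS })
	return out
}

func main() {
	findings := []vulnerability{
		{ID: "CVE-A", CVSS: 9.8},
		{ID: "CVE-B", CVSS: 7.5},
		{ID: "CVE-C", CVSS: 10.0},
	}
	// First cycle: only the 9.8+ band is in scope.
	for _, v := range filterByCVSS(findings, 9.8) {
		fmt.Printf("%s (%.1f)\n", v.ID, v.CVSS)
	}
}
```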

&lt;p&gt;After handling criticals, repeat the same process for major issues. For minor issues, consider whether you want to handle them at this point in time, or rather move on to the next project/team and only then deal with them.&lt;/p&gt;
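&lt;p&gt;The “new issues only” notification filter from suggestion 5 can be sketched as a simple seen-set. The type and names here are illustrative assumptions, not part of any tool's API:&lt;/p&gt;

```go
package main

import "fmt"

// seenIssues records which violation IDs have already triggered a
// notification; the name and shape are illustrative assumptions.
type seenIssues map[string]bool

// notifyNew returns only IDs not seen before, marking them seen, so a
// channel is pinged once per issue instead of on every rescan.
func (s seenIssues) notifyNew(ids []string) []string {
	var fresh []string
	for _, id := range ids {
		if !s[id] {
			s[id] = true
			fresh = append(fresh, id)
		}
	}
	return fresh
}

func main() {
	seen := seenIssues{}
	fmt.Println(seen.notifyNew([]string{"CVE-1", "CVE-2"})) // both are new
	fmt.Println(seen.notifyNew([]string{"CVE-2", "CVE-3"})) // only CVE-3 is new
}
```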

&lt;p&gt;This post provided a general best-practice flow for introducing a new SCA tool into your organisation/environment. Hopefully this will help you remove any tension between Development and Security, as well as lay some DevSecOps foundations.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: A version of this blog post is also &lt;a href="https://jfrog.com/blog/best-practices-for-onboarding-jfrog-xray/"&gt;published here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>sca</category>
      <category>shiftleft</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>JFrog Xray &amp; Microsoft Teams</title>
      <dc:creator>John Peterson</dc:creator>
      <pubDate>Thu, 30 Jul 2020 22:50:09 +0000</pubDate>
      <link>https://forem.com/jfrog/jfrog-xray-microsoft-teams-4gbh</link>
      <guid>https://forem.com/jfrog/jfrog-xray-microsoft-teams-4gbh</guid>
      <description>&lt;h1&gt;JFrog Xray &amp;amp; Microsoft Teams&lt;/h1&gt;

&lt;p&gt;More than ever, we need to be made aware when security issues arise. JFrog Xray is a great product that brings operational awareness to your software development lifecycle. Combined with Microsoft Teams, we have the channel of communication we need to ensure our team is always on top of the latest security concerns.&lt;/p&gt;

&lt;h3&gt;Getting Started&lt;/h3&gt;

&lt;p&gt;Let's get started by opening up Microsoft Teams. We will need to create a new connector on the channel we want to deliver our messages into. Click the more options "..." icon to bring up the menu of options for the channel. Select "Connector app" to bring up the popup.&lt;/p&gt;
&lt;p&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X6uXdI_x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/okftblpbi341udgn34x2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X6uXdI_x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/okftblpbi341udgn34x2.png"&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;p&gt;Inside of the Connector popup search for "Incoming Webhooks" and click the "Add" button which will then bring you to the Configure screen shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BtXt6Bh8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1e36bqvyn5bgqby1gn1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BtXt6Bh8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1e36bqvyn5bgqby1gn1a.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the name "Xray Webhook", then &lt;a href="https://github.com/jfrog/xray_msteam/raw/master/images/xray.png"&gt;download&lt;/a&gt; the Xray image and upload it into the new webhook.&lt;/p&gt;

&lt;p&gt;Scroll down in the popup after adding the webhook to grab the URL for the incoming webhook.&lt;/p&gt;

&lt;h3&gt;Deploy the integration server&lt;/h3&gt;

&lt;p&gt;Download the GitHub repo &lt;a href="https://github.com/jfrog/xray_msteam"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Build the code using Go:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;go build&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Export the environment variable MICROSOFT_TEAM_WEBHOOK with the URL of the incoming webhook:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export MICROSOFT_TEAM_WEBHOOK=&lt;a href="http://the-incoming-webhook-url"&gt;http://the-incoming-webhook-url&lt;/a&gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run the integration server&lt;/p&gt;
&lt;p&gt;
&lt;code&gt;./xray_msteam&lt;/code&gt;
&lt;/p&gt;
&lt;p&gt;Grab the hostname/IP address of the machine running the integration server; we will supply this to the Xray webhook in the format below:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;a href="http://ip/host:8080/api/send"&gt;http://ip/host:8080/api/send&lt;/a&gt;&lt;/code&gt;&lt;br&gt;
This is the endpoint to which Xray will send messages. Last stop: let's configure Xray to send the outgoing webhook.&lt;/p&gt;
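&lt;p&gt;To give a feel for what the integration server does, here is a minimal Go sketch of turning one Xray issue into the simple &lt;code&gt;{"text": ...}&lt;/code&gt; JSON body that a Teams incoming webhook accepts. The &lt;code&gt;xrayIssue&lt;/code&gt; fields are illustrative assumptions; see the &lt;a href="https://github.com/jfrog/xray_msteam"&gt;jfrog/xray_msteam&lt;/a&gt; repo for the real payload handling:&lt;/p&gt;

```go
package main

import (
	"encoding/json"
	"fmt"
)

// xrayIssue holds the handful of fields this sketch pulls out of an
// Xray webhook event; the names are illustrative assumptions.
type xrayIssue struct {
	Severity string
	CVE      string
	Summary  string
}

// buildTeamsPayload turns one issue into the minimal {"text": ...}
// JSON body that a Teams incoming webhook accepts; an integration
// server would POST this to the MICROSOFT_TEAM_WEBHOOK URL.
func buildTeamsPayload(i xrayIssue) []byte {
	text := fmt.Sprintf("[%s] %s: %s", i.Severity, i.CVE, i.Summary)
	b, _ := json.Marshal(map[string]string{"text": text})
	return b
}

func main() {
	issue := xrayIssue{Severity: "High", CVE: "CVE-2020-0001", Summary: "example finding"}
	fmt.Println(string(buildTeamsPayload(issue)))
}
```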

&lt;h3&gt;Xray Outgoing Webhook&lt;/h3&gt;

&lt;p&gt;As an admin user, open up the JFrog Unified Platform and go to the administration settings for Xray shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gKKp06k0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b6shza7wzkz78x76cdga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gKKp06k0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b6shza7wzkz78x76cdga.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Webhooks and then on + New Webhook to open the new webhook screen. In this screen, give it a name and supply the URL of the integration server. Save the new webhook.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HbZ_LpgZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eofe5i0j35sb1djfknvi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HbZ_LpgZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/eofe5i0j35sb1djfknvi.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to add the new webhook to an Xray policy as a new rule. This is what will trigger the webhook when new violations are found in a watch associated with this policy. Click on Policies and create a new policy, or update an existing one to add a new rule using the webhook, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IlJhGk5l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/466suhr46zl7nk23qnvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IlJhGk5l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/466suhr46zl7nk23qnvx.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;That's it! You're done!&lt;/h3&gt;

&lt;p&gt;Congrats! Now watch the messages flow...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d_a_6aBs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cyh9spn9xfiroe55fgyh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d_a_6aBs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cyh9spn9xfiroe55fgyh.png"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>jfrog</category>
      <category>msteam</category>
      <category>security</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Hosting helm charts: A Maintainer’s Perspective</title>
      <dc:creator>Prasanna Raghavendra</dc:creator>
      <pubDate>Tue, 07 Jul 2020 02:07:02 +0000</pubDate>
      <link>https://forem.com/jfrog/hosting-helm-charts-a-maintainer-s-perspective-55ko</link>
      <guid>https://forem.com/jfrog/hosting-helm-charts-a-maintainer-s-perspective-55ko</guid>
      <description>&lt;p&gt;Helm charts is becoming one of the best ways to package any kubernetes application and indeed becoming &lt;a href="https://www.cncf.io/cncf-helm-project-journey/"&gt;popular&lt;/a&gt; among contributors and contributing companies. It provides a consistent management of any application on kubernetes and a very easy way to override default parameters.&lt;/p&gt;

&lt;p&gt;With the release of &lt;a href="https://chartcenter.io"&gt;ChartCenter by JFrog&lt;/a&gt;, and given that I own hosting JFrog's charts, it was important for me to go over this space with a fine-tooth comb.&lt;/p&gt;

&lt;p&gt;I wanted to take a step back and see who the providers of chart hosting are, and came across this nice &lt;a href="https://codeengineered.com/blog/2020/helm-find-charts/"&gt;blog&lt;/a&gt;. Let us take this a step further, compare them, and see how we can leverage these options.&lt;/p&gt;

&lt;p&gt;So, here goes my review of these options, against the following 8 criteria.&lt;/p&gt;

&lt;h4&gt;
  
  
  Host helm repo centrally
&lt;/h4&gt;

&lt;p&gt;We need to ensure that our consumers can download these charts fast and reliably. We need to host on a platform that provides very good content caching, so we can be sure the charts are available 24/7.&lt;/p&gt;

&lt;p&gt;ChartCenter does provide this by running on the well-proven Artifactory, backed by an operations team who have managed Bintray.&lt;/p&gt;

&lt;p&gt;The other providers seem to work only as directories, as they point install commands directly at the maintainer’s repo. This leaves maintainers to bear the burden of hosting the helm repo effectively, behind a CDN.&lt;/p&gt;

&lt;h4&gt;
  
  
  Clarify chart dependencies
&lt;/h4&gt;

&lt;p&gt;A maintainer would like to ensure that consumers are clear on a chart's dependencies in both directions: where the chart is referenced, so the consumer can choose a larger chart if needed, and what it references further down, so the consumer knows what they are picking before they install, without having to open the chart.&lt;/p&gt;

&lt;p&gt;ChartCenter does provide this effectively. &lt;/p&gt;

&lt;p&gt;I did not see this in any of the others.&lt;/p&gt;

&lt;h4&gt;
  
  
  Number of charts hosted
&lt;/h4&gt;

&lt;p&gt;This is important to ensure that, as a maintainer, you are hosting on a site that is popular and where more consumers are. It is clear from the numbers (provided in the summary table below) that all are in early stages and will surely evolve in the next few months.&lt;/p&gt;

&lt;h4&gt;
  
  
  Download metrics
&lt;/h4&gt;

&lt;p&gt;Providing information on who is downloading which version of the application is very useful. I found all of them having gaps here.&lt;/p&gt;

&lt;p&gt;While ChartCenter does provide download information (by, of course, allowing downloads directly from ChartCenter), it reports by chart version, which is not that relevant in the world of applications. It is preferable to have this information by application version, and also to provide a trend with download regions, something that was a given in Bintray.&lt;/p&gt;

&lt;p&gt;Others, being only a portal, cannot provide download functionality and hence no download information.&lt;/p&gt;

&lt;h4&gt;
  
  
  Related charts
&lt;/h4&gt;

&lt;p&gt;When a consumer comes in discovery mode, having related content makes a lot of sense to help the consumer pick the right application.&lt;/p&gt;

&lt;p&gt;Artifact Hub provides this well, and I felt it had relevant charts as well when I tried wordpress.&lt;/p&gt;

&lt;p&gt;ChartCenter, Kubeapps hub and Helm hub seem to be lacking in this area.&lt;/p&gt;

&lt;h4&gt;
  
  
  Deep curation
&lt;/h4&gt;

&lt;p&gt;Having thoughts and reflection (provided by the host) on charts helps deepen one's understanding on how to compare, install and use applications.&lt;/p&gt;

&lt;p&gt;Bitnami (Kubeapps Hub) includes their perspective in a blog for each chart (the popular ones, of course). This gives a very good viewpoint on how one may end up using the chart.&lt;br&gt;
The other three players have not been present in this space.&lt;/p&gt;

&lt;h4&gt;
  
  
  Make overall usage safe and secure
&lt;/h4&gt;

&lt;p&gt;Having a security report at a chart level provides another nice perspective to know what is going on with the images bundled. This is an area where every chart maintainer is catching up, clearing up their internal security, and working to depend on the cleanest and latest charts.&lt;/p&gt;

&lt;p&gt;ChartCenter is the only one providing this perspective. They are extending this further to allow providers to give their viewpoint on some of the open issues. This is an interesting direction ChartCenter has taken, and I like it very much.&lt;/p&gt;

&lt;h4&gt;
  
  
  Smarter set-me-up for better onboarding
&lt;/h4&gt;

&lt;p&gt;Not all charts are completely independent. Especially with extension products, where you expect some other service to already be running, charts require a few mandatory values to be filled in before they work.&lt;/p&gt;

&lt;p&gt;Almost all of the providers seem to provide vanilla install commands that ignore some of the mandatory values to be passed, ultimately leading to install commands that are actually incorrect.&lt;/p&gt;

&lt;p&gt;Summary:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Criteria&lt;/th&gt;
&lt;th&gt;HelmHub (CNCF)&lt;/th&gt;
&lt;th&gt;Kubeapps Hub (Bitnami)&lt;/th&gt;
&lt;th&gt;Artifact hub (OSS)&lt;/th&gt;
&lt;th&gt;ChartCenter (JFrog)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Chart hosting&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chart dependency&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deep curation&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chart numbers&lt;/td&gt;
&lt;td&gt;Charts: 1355 Versions: NA&lt;/td&gt;
&lt;td&gt;Charts: 1400 Versions: NA&lt;/td&gt;
&lt;td&gt;Charts: 820 Versions: 21k&lt;/td&gt;
&lt;td&gt;Charts: 1521 Versions: 29k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Download metrics (application level)&lt;/td&gt;
&lt;td&gt;No (dependent on hosting)&lt;/td&gt;
&lt;td&gt;No (dependent on hosting)&lt;/td&gt;
&lt;td&gt;No (dependent on hosting)&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Related charts&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Smart installs commands&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As you can see in the table above, all of the hubs are hosting a limited number of charts, while this ecosystem is starting to grow. Soon you will see more and more maintainers on-boarding onto many of these.&lt;/p&gt;

&lt;p&gt;If your requirement is to have completely hosted support, ChartCenter stands out as the only option. If you want to be aligned with the CNCF ecosystem, you could look at Helm Hub. If you would like to have third-party validation along with hosting, Bitnami is a good solution.&lt;/p&gt;

&lt;p&gt;However, you could also use ChartCenter for hosting and have the other three list your charts, so you have the best of breed.&lt;/p&gt;

&lt;p&gt;Let me know: what do you think?&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>chartcenter</category>
      <category>helmhub</category>
      <category>artifacthub</category>
    </item>
    <item>
      <title>Go Big or Go Home: A Quick Review of Artifactory and CodeArtifact Repository Types and Capabilities</title>
      <dc:creator>Melissa McKay</dc:creator>
      <pubDate>Sat, 20 Jun 2020 00:38:31 +0000</pubDate>
      <link>https://forem.com/jfrog/go-big-or-go-home-a-quick-review-of-artifactory-and-codeartifact-repository-types-and-capabilities-1ghn</link>
      <guid>https://forem.com/jfrog/go-big-or-go-home-a-quick-review-of-artifactory-and-codeartifact-repository-types-and-capabilities-1ghn</guid>
      <description>&lt;p&gt;This article is part of a series written by JFrog's Developer Advocates. The index can be found here:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/jbaruch" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fR3HMKKe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--Q11_WZFN--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/218863/3dcd84ba-80bd-48e1-8804-7e3f6d7815a8.jpg" alt="jbaruch image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/jfrog/jfrog-artifactory-vs-aws-codeartifact-comparison-in-10-ish-parts-521n" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;JFrog Artifactory vs AWS CodeArtifact: Comparison in 10-ish parts&lt;/h2&gt;
      &lt;h3&gt;JBaruch 🎩 ・ Jun 19 ・ 2 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#jfrog&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#artifactory&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#codeartifact&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;I’m the first to admit that I fit well into the developer stereotype of someone that gets easily distracted by the new, shiny thing. It’s just part of my nature, and I accept it. Plus, I absolutely enjoy exploring new implementations, technologies, or strategies that promise to elicit that “Ah, much better!” feeling. From my perspective, the end result of that activity is always a success, because my effort can only go one of two ways: I either find something new that improves some aspect of my life and make changes, or I confirm that I've already made the best choice.&lt;/p&gt;

&lt;p&gt;AWS just dropped &lt;a href="https://aws.amazon.com/codeartifact/"&gt;CodeArtifact&lt;/a&gt; into their vast array of services within the AWS ecosystem. Curious, I couldn’t stop myself from checking it out and comparing it with the user experience I have had with &lt;a href="https://jfrog.com/artifactory/"&gt;JFrog Artifactory&lt;/a&gt; - both from my previous development experience and in my current position at JFrog. Here are three questions that I went in with and the answers I came out with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Will it support my Go modules?
&lt;/h2&gt;

&lt;p&gt;As far as package type support goes, CodeArtifact hits the big ones - &lt;a href="https://docs.aws.amazon.com/codeartifact/latest/ug/welcome.html"&gt;Maven, PyPI &amp;amp; npm&lt;/a&gt;. My first venture was to find a way to support my Go modules, but I was unsuccessful. Other notable misses include NuGet, Bower, and Docker. &lt;/p&gt;

&lt;h4&gt;
  
  
  CodeArtifact Documentation
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;"Each repository contains three unique endpoints, one for each package format: npm, pypi, and maven."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Create Repository Screen in Artifactory
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8is2LX99--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bwa0jmf1u41ktuga75qr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8is2LX99--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bwa0jmf1u41ktuga75qr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Support for more package types may very well be on the roadmap for the future, but Artifactory is way ahead of the game here, &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Package+Management#PackageManagement-SupportedPackageTypes"&gt;currently supporting 26 different package types&lt;/a&gt; including a Generic type for any other currently unsupported or even custom package types. AWS has a separate Docker registry solution, Amazon Elastic Container Registry (&lt;a href="https://aws.amazon.com/ecr/"&gt;ECR&lt;/a&gt;), but Artifactory includes support for &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Docker+Registry"&gt;multiple Docker registries&lt;/a&gt; and the ability to manage my Docker images along with all of the other artifact repositories. Not to belabor the point, but I’ve gotten used to how nice it feels to have all of my artifacts managed in the same place. I liken it to having all of my clothes in my closet, right where I would expect them to be. In fact, this is actually a critical point! If I don't have my image management in the same tool, how can I correlate packages with the images they are contained in?&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I resolve dependencies?
&lt;/h2&gt;

&lt;p&gt;Organizing repositories is quite a bit different in CodeArtifact than in Artifactory. In Artifactory, there is a clear delineation between &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Repository+Management#RepositoryManagement-LocalRepositories"&gt;&lt;em&gt;local&lt;/em&gt; repositories&lt;/a&gt;, (places where your internally built artifacts go), &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Repository+Management#RepositoryManagement-RemoteRepositories"&gt;&lt;em&gt;remote&lt;/em&gt; repositories&lt;/a&gt;, (caches of artifacts obtained from 3rd parties), and &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Repository+Management#RepositoryManagement-VirtualRepositories"&gt;&lt;em&gt;virtual&lt;/em&gt; repositories&lt;/a&gt;, (aggregated repositories consisting of local and remote repositories of your choosing). Virtual repositories are especially nice because you can add any repositories you like and specify a priority order for resolution.&lt;/p&gt;

&lt;p&gt;CodeArtifact appears to have a similar capability. When creating a repository in CodeArtifact, there is an option to add any number of "upstream" repositories, the order of which determines the order of resolution. However, I was immediately disappointed that I was limited to a relatively small number of public repositories. Artifactory is ahead of the game again, by allowing users to define any external repository they need.&lt;/p&gt;
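&lt;p&gt;The priority-ordered resolution that both virtual repositories and CodeArtifact upstream lists perform can be sketched as a first-match walk. This is a toy illustration; the types and names are assumptions for the example, not either product's API:&lt;/p&gt;

```go
package main

import "fmt"

// repo is a toy stand-in for a local or remote repository: a name plus
// the set of package names it can serve. Both fields are assumptions
// for illustration only.
type repo struct {
	name     string
	packages map[string]bool
}

// resolve walks an ordered list of repositories, as a virtual
// repository (or a CodeArtifact upstream list) does, and returns the
// first one that can serve the requested package.
func resolve(order []repo, pkg string) (string, bool) {
	for _, r := range order {
		if r.packages[pkg] {
			return r.name, true
		}
	}
	return "", false
}

func main() {
	order := []repo{
		{name: "local-npm", packages: map[string]bool{"my-internal-lib": true}},
		{name: "npmjs-cache", packages: map[string]bool{"left-pad": true}},
	}
	if name, ok := resolve(order, "left-pad"); ok {
		fmt.Println("resolved from", name)
	}
}
```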

&lt;h4&gt;
  
  
  CodeArtifact Available Public Upstream Repos
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oHmMZ5Xc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvscahueutr01zfnwrll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oHmMZ5Xc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvscahueutr01zfnwrll.png" alt="CodeArtifact Public Upstream Repos"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Artifactory Basic Remote Repo Configuration
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k5ctVHhM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/upiann8bhkyusrwihyzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k5ctVHhM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/upiann8bhkyusrwihyzy.png" alt="Artifactory Basic Remote Repo Configuration"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What statistics can I get from my artifacts and packages?
&lt;/h2&gt;

&lt;p&gt;According to &lt;a href="https://docs.aws.amazon.com/codeartifact/latest/ug/packages.html"&gt;the documentation&lt;/a&gt;, CodeArtifact provides information about the artifacts stored in the repositories using CodeArtifact CLI and API commands. There's of course the basic stuff: name, version, license info, contents info, and some dependency information. I admit, at this point, I did not feel like going through the pains of actually getting an artifact uploaded to a CodeArtifact to see what the UI would look like. I spent a little time looking for an example screenshot in their documentation and didn't find any.&lt;/p&gt;

&lt;p&gt;Part of my lackluster desire to pursue CodeArtifact much further at this time is that, in contrast, the Artifactory package detail screens are, hands down, beautiful. You can search for a specific package and then view information (in addition to the basics that CodeArtifact provides) about the activity, the number of downloads, the number of versions available, and which repositories include the package. On top of that, if you have CI/CD integrated, you can drill down further into specific package information and determine &lt;a href="https://www.jfrog.com/confluence/display/JFROG/Build+Integration"&gt;which builds&lt;/a&gt; in the system used this package. Artifactory is undoubtedly a mature tracing solution that is invaluable in troubleshooting any build or dependency issues.&lt;/p&gt;

&lt;h4&gt;
  
  
  Artifactory Package Detail Screen
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_8wVg8kC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rmqsc7b75icvnoq0nucz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_8wVg8kC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rmqsc7b75icvnoq0nucz.png" alt="Artifactory Package Details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This investigation is over for now, but I know there will be more to come. My exploratory efforts have fallen into that second outcome - confirmation that JFrog Artifactory is a better choice. I can't wait to see what's next!&lt;/p&gt;

&lt;p&gt;Return to the index:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/jbaruch" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fR3HMKKe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--Q11_WZFN--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/218863/3dcd84ba-80bd-48e1-8804-7e3f6d7815a8.jpg" alt="jbaruch image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/jfrog/jfrog-artifactory-vs-aws-codeartifact-comparison-in-10-ish-parts-521n" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;JFrog Artifactory vs AWS CodeArtifact: Comparison in 10-ish parts&lt;/h2&gt;
      &lt;h3&gt;JBaruch 🎩 ・ Jun 19 ・ 2 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#jfrog&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#artifactory&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#codeartifact&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>jfrog</category>
      <category>artifactory</category>
      <category>aws</category>
      <category>codeartifact</category>
    </item>
  </channel>
</rss>
