<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sven Ruppert</title>
    <description>The latest articles on Forem by Sven Ruppert (@svenruppert).</description>
    <link>https://forem.com/svenruppert</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F158103%2Fba2b1411-7b5b-404b-a5a5-2a5daacce6b4.jpg</url>
      <title>Forem: Sven Ruppert</title>
      <link>https://forem.com/svenruppert</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/svenruppert"/>
    <language>en</language>
    <item>
      <title>The quick wins of DevSecOps</title>
      <dc:creator>Sven Ruppert</dc:creator>
      <pubDate>Fri, 22 Jan 2021 12:32:15 +0000</pubDate>
      <link>https://forem.com/svenruppert/the-quick-wins-of-devsecops-548a</link>
      <guid>https://forem.com/svenruppert/the-quick-wins-of-devsecops-548a</guid>
      <description>&lt;p&gt;Hello and welcome to my DevSecOps post. Here in Germany, it's winter right now, and the forests are quiet. The snow slows down everything, and it's a beautiful time to move undisturbed through the woods.&lt;/p&gt;

&lt;p&gt;Out there you can pursue your own thoughts, and I found myself thinking about a subject that customers and conference participants ask me about repeatedly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The question is almost always:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What are the quick wins, the low-hanging fruit, if you want to engage more with the topic of security in software development?&lt;/p&gt;

&lt;p&gt;And I want to answer this question right &lt;strong&gt;now!&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For the lazy ones: you can watch this as a YouTube video as well &lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/lNqADishl8w"&gt;
&lt;/iframe&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's start with the definition of a phrase that is often used in the business world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Make Or Buy
&lt;/h2&gt;

&lt;p&gt;Even as a software developer, you will often hear this phrase in meetings with the company's management and sales teams. &lt;/p&gt;

&lt;p&gt;The phrase is "&lt;strong&gt;Make or Buy&lt;/strong&gt;". Typically, we have to decide whether to build something ourselves or spend money to buy the requested functionality. The bought component may offer less, more, or different functionality, so we may have to adapt it to our context. &lt;/p&gt;

&lt;p&gt;But as software developers, we face the same question every day. I am talking about dependencies. Should we write the source code ourselves or just add the next dependency? Who will be responsible for fixing bugs, and what is the total cost of this decision? But first, let's take a look at the make-or-buy trade-off across the full tech stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Make versus buy on all layers
&lt;/h2&gt;

&lt;p&gt;If we look at all layers of a cloud-native stack and compare the share of "make" with the share of "buy", we see that the "buy" component is the bigger one in every layer. But first things first.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W7psDFBb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xrncrgwe5wnoms4g7fdp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W7psDFBb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xrncrgwe5wnoms4g7fdp.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first step is the development of the application itself. &lt;/p&gt;

&lt;p&gt;Assuming that we are working with Java and using Maven as the dependency manager, we are most likely adding more lines of code indirectly, as dependencies, than we are writing ourselves. The dependencies are the more prominent part, and they are developed by third parties. We have to be careful here, and it is good advice to check these external binaries for known vulnerabilities.&lt;/p&gt;

&lt;p&gt;We should apply the same scrutiny to compliance and license usage. The next layer is the operating system, in our case Linux. &lt;/p&gt;

&lt;p&gt;And again, we add a few configuration files; the rest is existing binaries. &lt;/p&gt;

&lt;p&gt;The result is an application running inside an operating system, which is a composition of external binaries held together by our configuration.&lt;/p&gt;

&lt;p&gt;The two following layers, Docker and Kubernetes, lead us to the same result. And we have not even looked at the tool stack of the production line itself yet.&lt;/p&gt;

&lt;p&gt;All programs and utilities that are directly or indirectly used under the umbrella called DevSecOps are dependencies as well.&lt;/p&gt;

&lt;p&gt;In every layer, the dependencies are by far the most significant part.&lt;/p&gt;

&lt;p&gt;Checking these binaries against known vulnerabilities is the first logical step.&lt;/p&gt;

&lt;h2&gt;
  
  
  One-time and recurring efforts for compliance and vulnerabilities
&lt;/h2&gt;

&lt;p&gt;Comparing the effort of scanning for known vulnerabilities with scanning for compliance issues, we see a few differences.&lt;/p&gt;

&lt;p&gt;Let's start with the Compliance issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance issues:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first step is to define which licenses are allowed in which part of the production line. This definition of allowed licenses covers the dependencies used during coding as well as the tools and runtime environments. The resulting list of non-critical license types should be checked by a specialised lawyer. With this allow-list of license types, we can let the machine scan the full tool stack on a regular basis. Whenever the machine finds a violation, we have to remove the offending element and replace it with one that is licensed under an allow-listed license.&lt;/p&gt;
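&lt;p&gt;A minimal sketch of such an allow-list check might look like the following. The license identifiers and the dependency-to-license mapping are made-up examples; a real setup would pull this data from the package manager's metadata or from a scanner such as Xray.&lt;/p&gt;

```python
# Minimal sketch of a license allow-list check.
# The allow-list and the dependency->license mapping are hypothetical
# examples; real data would come from the package manager's metadata.
ALLOWED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

def find_license_violations(dependencies):
    """Return the names of dependencies whose license is not allow-listed."""
    return sorted(
        name for name, license_id in dependencies.items()
        if license_id not in ALLOWED_LICENSES
    )

# Invented example dependency set:
deps = {
    "commons-lang3": "Apache-2.0",
    "some-gui-lib": "GPL-3.0-only",  # not allow-listed -> must be replaced
    "slf4j-api": "MIT",
}

violations = find_license_violations(deps)
```

&lt;p&gt;Everything the machine reports as a violation then goes into the replacement workflow described above.&lt;/p&gt;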

&lt;p&gt;&lt;strong&gt;Vulnerabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The recurring effort on this side is low compared to the amount of work that vulnerabilities produce. Handling a found vulnerability needs a slightly different workflow. Without much preparation, the machine can again do the scanning on a regular basis. But the identification of a vulnerability triggers a workflow that includes human interaction: the vulnerability must be classified internally, and this classification leads to the decision about what the following action will be.&lt;/p&gt;
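&lt;p&gt;The human part of that workflow, classify first and then decide on an action, can be sketched like this. The severity thresholds and action names are invented for illustration; they are not an official classification scheme.&lt;/p&gt;

```python
# Sketch of an internal triage step: map a classified vulnerability
# to a follow-up action. Thresholds and action names are hypothetical.
def triage(cvss_score, reachable):
    """Decide the follow-up action for one reported vulnerability."""
    if not reachable:  # the vulnerable code path cannot be triggered
        return "document-and-monitor"
    if cvss_score >= 9.0:
        return "hotfix-now"
    if cvss_score >= 7.0:
        return "fix-in-next-release"
    return "backlog"
```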

&lt;h3&gt;
  
  
  Compliance issues: singular points in your full stack
&lt;/h3&gt;

&lt;p&gt;There is one other difference between compliance issues and vulnerabilities. A compliance issue is a singular point inside the overall environment. Just this single part is defective, and it does not influence other elements of the environment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gIRlbb-S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/65coi40ktdrd8267zl0l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gIRlbb-S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/65coi40ktdrd8267zl0l.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Vulnerabilities: can be combined into different attack vectors.
&lt;/h3&gt;

&lt;p&gt;Vulnerabilities are a bit different. They do not only exist at the point where they are located. They can also be combined with other existing vulnerabilities in any other layer of the environment, forming different attack vectors. Every possible attack vector must be identified and evaluated on its own. A set of minor vulnerabilities in different layers of the application can combine into a highly critical risk.&lt;/p&gt;
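&lt;p&gt;As a toy illustration of why findings must not be evaluated in isolation, the sketch below enumerates cross-layer combinations of minor vulnerabilities and flags those whose combined severity crosses a threshold. The layers, findings, scores, and the simple additive risk model are all invented purely for demonstration.&lt;/p&gt;

```python
# Toy model: minor vulnerabilities in different layers combine into
# attack vectors. Findings, scores and the additive threshold model
# are invented purely for illustration.
from itertools import combinations

# (layer, finding, severity 0..10) - each one looks harmless on its own
findings = [
    ("app",    "info leak in error page", 3.0),
    ("os",     "outdated ssl library",    4.0),
    ("docker", "container runs as root",  4.5),
]

def attack_vectors(findings, threshold=8.0):
    """All multi-finding combinations whose summed severity crosses the threshold."""
    vectors = []
    for r in range(2, len(findings) + 1):
        for combo in combinations(findings, r):
            if sum(severity for _, _, severity in combo) >= threshold:
                vectors.append(tuple(name for _, name, _ in combo))
    return vectors

critical = attack_vectors(findings)
```

&lt;p&gt;Here no single finding reaches the threshold, yet two of the combinations do; that is exactly why every possible combination has to be evaluated.&lt;/p&gt;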

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EyLw_MNa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5wlzxpr0si3q6de2l3ax.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EyLw_MNa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5wlzxpr0si3q6de2l3ax.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Vulnerabilities: the timeline from discovery until the fix is in production
&lt;/h2&gt;

&lt;p&gt;The next thing I want to look at is the timeline from the moment a vulnerability is found until the fix is in production. Once a vulnerability exists in a binary, we have nearly no control over the time until it is found. It depends on the finder whether the vulnerability is reported to the creator of the binary, to a commercial security service, to a government, or is sold on a darknet marketplace. But even assuming that the information is reported to the binary's creator, it will take some time until the data is publicly available. We have no control over the duration from the discovery of the vulnerability to the moment the information becomes public. The next period is driven by the commercial aspect of this business. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a consumer, the only way to get the information as early as possible is to spend money. &lt;br&gt;
This state of affairs is not nice, but it is mostly the truth.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Nevertheless, at some point, the information becomes consumable for us. If you are using &lt;strong&gt;JFrog Xray&lt;/strong&gt;, even from the free tier, you will get the information very fast. &lt;strong&gt;JFrog&lt;/strong&gt; consumes different security information resources and merges everything into a single vulnerability database. As soon as this database is fed with new information, all &lt;strong&gt;JFrog Xray&lt;/strong&gt; instances are updated. From this point on, you can act.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lyCCm79w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/38n00jl7shtaxwidiyat.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lyCCm79w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/38n00jl7shtaxwidiyat.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Test coverage is your safety belt; try mutation testing
&lt;/h2&gt;

&lt;p&gt;Until now, the only thing you can do to speed up the information flow is to spend money on a professional security information aggregator. But as soon as the information is consumable for you, the clock is ticking. How fast the security fix will be up and running in production depends on your environment. A fully automated CI pipeline is one of the critical factors in minimising this amount of time. &lt;/p&gt;

&lt;p&gt;But even more critical is excellent and robust test coverage. &lt;/p&gt;

&lt;p&gt;Good test coverage allows you to switch dependency versions immediately and push the change into production after a green test run. I recommend using a more substantial coverage metric than pure line coverage. The technique called mutation testing is a powerful one.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Mutation Test Coverage&lt;br&gt;
If you want to know more about this one, check out my YouTube channel. &lt;br&gt;
I have a video that explains the theoretical part and the practical one for Java and Kotlin.&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/6Vej7YEOF8g"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The need for a single point that understands all repo types&lt;/h2&gt;

&lt;p&gt;To get a picture of the full impact graph based on all known vulnerabilities, it is crucial to understand all package managers used by the dependencies. Focusing on just one layer of the tech stack is by far not enough. &lt;/p&gt;

&lt;p&gt;JFrog Artifactory provides this information, including the vendor-specific metadata that is part of each package manager.&lt;/p&gt;

&lt;p&gt;JFrog Xray consumes all this knowledge and can scan all binaries hosted inside the repositories that are managed by Artifactory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zNq_iO_T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hhnlxa5e3enzhuwtf2yl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zNq_iO_T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hhnlxa5e3enzhuwtf2yl.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Vulnerabilities - IDE plugin
&lt;/h2&gt;

&lt;p&gt;Shift Left means that vulnerabilities must be eliminated as early as possible in the production pipeline. One early stage, right after the concept phase, is the coding itself. The moment you start adding dependencies to your project, you are possibly adding vulnerabilities as well.&lt;/p&gt;

&lt;p&gt;The fastest way to get feedback regarding your dependencies is the &lt;strong&gt;JFrog IDE Plugin&lt;/strong&gt;. This plugin connects your IDE to your JFrog Xray instance. The free tier gives you access to vulnerability scanning. The plugin is open source and available for IntelliJ, VS Code, Eclipse, and more. If you need additional features, create a feature request on GitHub, or fork the repository, add your changes, and open a merge request.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Try it out by yourself - &lt;a href="https://jfrog.com/artifactory/start-free/"&gt;JFrog Free Tier&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  How to use the IDE plugin?
&lt;/h2&gt;

&lt;p&gt;If you add a dependency to your project, the IDE plugin understands this information based on the package manager in use. The plugin is connected to your JFrog Xray instance, which is queried whenever your project's dependency definition changes. The information provided by Xray includes the known vulnerabilities of the added dependency. If a fixed version of the dependency is available, the new version number is shown as well.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to see the IDE plugin in action without registering for a Free Tier, have a look at my YouTube video.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/PsghzAf-ODU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With the JFrog Free Tier, you have the tools in your hands to practice Shift Left and push it right into your IDE.&lt;/p&gt;

&lt;p&gt;Create repositories for all the technologies involved, use Artifactory as a proxy for your binaries, and let Xray scan the full stack.&lt;/p&gt;

&lt;p&gt;With this, you have a complete impact graph of your full stack, and you get the information about known vulnerabilities as early as possible in your production line. &lt;/p&gt;

&lt;p&gt;You don't have to wait until your CI pipeline starts complaining. This will save you a lot of time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Youtube
&lt;/h3&gt;

&lt;p&gt;If you liked this blog post, I would appreciate having you as a new subscriber on YouTube. &lt;/p&gt;

&lt;p&gt;I have two channels, one in &lt;a href="https://www.youtube.com/channel/UCNkQKejDX-pQM9-lKZRpEwA"&gt;English&lt;/a&gt; and one in &lt;a href="https://www.youtube.com/user/svenruppert"&gt;German&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;On my channel, you will find videos about the topics Core Java, Kotlin and DevSecOps.&lt;/p&gt;

&lt;p&gt;Please give me a thumbs up and see you on my channel.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>devsecops</category>
      <category>jfrog</category>
      <category>security</category>
    </item>
    <item>
      <title>Chartcenter or - Do not reinvent the wheel</title>
      <dc:creator>Sven Ruppert</dc:creator>
      <pubDate>Wed, 21 Oct 2020 15:13:23 +0000</pubDate>
      <link>https://forem.com/svenruppert/chartcenter-or-do-not-reinvent-the-wheel-3bgm</link>
      <guid>https://forem.com/svenruppert/chartcenter-or-do-not-reinvent-the-wheel-3bgm</guid>
      <description>&lt;p&gt;Who doesn't know the times when there was only one server in operation? A single computer that combines all the services that are required within the company. In the beginning, it was a simple printing service so that you only had one central printer within the company. Services were added later, such as a shared file storage system, email server and so on and so on. The requirements were continually being revised upwards, and gradually more and more servers were offering the necessary services alone and later in a network.&lt;br&gt;
The first server failure in a company could be perceived as a warning event. A person who looked after the server and subsequently the server became an independent IT department. It quickly became apparent that different optimization goals, unfortunately, stood in each other's way. As an example, I would like to mention the cost development, which sadly took an unpleasant course in connection with the design of the reliability of individual services.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first virtual server
&lt;/h2&gt;

&lt;p&gt;A new age in IT began with the development of virtualization. Services could now be separated from each other much more strictly on the same physical hardware. Running several services on one hardware server was easy to implement again. With this new technology, however, new security weak points were introduced into the existing IT landscape, and the requirements for the IT department changed once more. Over time, the approach established itself of consuming this virtualization not as a bare component kit: more and more often, prefabricated virtual machines could be obtained for the respective software products. The configuration within these modules could therefore be provided directly by the manufacturers, which represented a big leap forward for the success of this technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  One server becomes many servers
&lt;/h2&gt;

&lt;p&gt;Over time, the demand for computing power has grown steadily. The hardware could only keep up to a limited extent, unless you wanted to let the hardware costs grow towards infinity. It turned out that many commodity servers could be combined in such a way that a higher level of reliability is achieved than with one particularly expensive server. So it made sense to look for a way to manage this zoo of small servers efficiently. After several approaches by different companies and the most diverse communities, one combination has prevailed on the market worldwide: Docker as a para-virtualizer together with Kubernetes as a management tool is the current industry standard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Complexity
&lt;/h2&gt;

&lt;p&gt;Unfortunately, it has always been the case with distributed systems that the underlying mechanisms are not trivial. On the one hand, location transparency is desirable, so that availability is not reduced even by changing the version of individual components. On the other hand, it is by no means trivial to provide the services required to manage this location transparency. So it was time for IT to work out how declarative approaches can make this controllable for the general public in IT. One method that is becoming more and more popular is described by the term "infrastructure as code". The aim here is to define a description language that allows the required IT system to be described in its entirety. These definitions are then kept in version control systems, e.g. git, so that you can easily switch between different versions of a description.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't reinvent the wheel
&lt;/h2&gt;

&lt;p&gt;One of the essential principles in IT is to reuse existing knowledge efficiently. In this case, it means taking over existing descriptions of partial systems and composing the required overall system from them. What sounds simple is tricky in the details. Some questions arise immediately in practical use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where do I put my experiences?
&lt;/h2&gt;

&lt;p&gt;The question of the right place to store your experience, so that others can access it, is essential. In the past, the approach of creating a superset has proven prevalent, be it with tools like Maven or npm, or with operating systems like Linux. It enormously simplifies access when there is a central authority that can be used as the initial entry point. For defining infrastructure compositions on Kubernetes, HELM is the industry standard: HELM is the package manager for Kubernetes, similar to what Maven is for the Java world. To offer a central entry point for the community, &lt;a href="https://chartcenter.io/"&gt;https://chartcenter.io/&lt;/a&gt; was created. It is a superset of different sources that forms an efficient and user-friendly central collection point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust
&lt;/h2&gt;

&lt;p&gt;The next question that arises is the question of trust. In other words, one can ask about the vulnerability or security of the components offered. The charts offered are, in turn, very complex units consisting of program elements and their configuration. These sub-systems have their own characteristics and their own dependencies. The complete dependency graph is hard for a human to grasp, and manual control is almost impossible. Here IT itself has to be put to use again.&lt;br&gt;
At ChartCenter, Xray from JFrog is used to check the definitions stored there. All binaries that are used directly and indirectly in these compositions are examined for known security gaps. The result is a complete dependency graph that shows where security gaps are present and how they affect the overall context.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first steps
&lt;/h2&gt;

&lt;p&gt;To start composing your own environment, you also need to know what is actually available. Tools that support navigation in this component repository using full-text search and taxonomies help here. &lt;a href="https://chartcenter.io/"&gt;https://chartcenter.io&lt;/a&gt; offers a very intuitive and user-friendly graphical interface for identifying the essential components in the shortest possible time. The additional information based on the README of each element (for example &lt;a href="https://chartcenter.io/bitnami/postgresql"&gt;https://chartcenter.io/bitnami/postgresql&lt;/a&gt;) helps to make the first decisions quickly.&lt;br&gt;
The initial commands for using a selected component are provided there as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, one can say that &lt;a href="https://chartcenter.io/"&gt;https://chartcenter.io &lt;/a&gt; is the next logical step to meet the requirements of IT. To make existing components efficiently usable, a central location is required where the knowledge about the existence and availability of the ingredients is collected. Access is free, and it supports every developer who wants to use these components as well as everyone who wants to make their knowledge and skills available to the general public. This approach supports every open-source project directly and indirectly and, with the security information offered, helps to harden the IT world a little further.&lt;br&gt;
The next step is to visit the website and get your own picture of how easy it is to use these components. &lt;/p&gt;

&lt;p&gt;Cheers Sven&lt;/p&gt;

</description>
      <category>devops</category>
      <category>devsecops</category>
      <category>helm</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>AWS-CodeArtifact versus JFrog-Artifactory</title>
      <dc:creator>Sven Ruppert</dc:creator>
      <pubDate>Fri, 19 Jun 2020 17:57:18 +0000</pubDate>
      <link>https://forem.com/svenruppert/aws-codeartifact-versus-jfrog-artifactory-1bi0</link>
      <guid>https://forem.com/svenruppert/aws-codeartifact-versus-jfrog-artifactory-1bi0</guid>
      <description>&lt;p&gt;Welcome, AWS-CodeArtifact to the world of repository managers.&lt;br&gt;
Amazon has marked the Managed Service AWS CodeArtifact as a GA, thereby giving the general public access. But what is this service all about, and how does it compare to JFrog-Artifactory? We'll take a quick look at that here in detail.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;btw: read other parts of the series here: &lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/jbaruch" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fR3HMKKe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--Q11_WZFN--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/218863/3dcd84ba-80bd-48e1-8804-7e3f6d7815a8.jpg" alt="jbaruch image"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/jfrog/jfrog-artifactory-vs-aws-codeartifact-comparison-in-10-ish-parts-521n" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;JFrog Artifactory vs AWS CodeArtifact: Comparison in 10-ish parts&lt;/h2&gt;
      &lt;h3&gt;JBaruch 🎩 ・ Jun 19 '20 ・ 2 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#jfrog&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#artifactory&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#codeartifact&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;



&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Bird's-eye view
&lt;/h2&gt;

&lt;p&gt;In summary, one can say that Amazon has entered an existing market in which some competitors have a much longer history. You can see that in the variety of functions on the JFrog side; there is still significant potential on the Amazon side. As with all Amazon products, the use of this service is fully tied to the AWS cloud itself. Looking at the price model, Amazon uses the billing model that is typical for this platform and difficult to predict, based on read and write cycles. Anyone who wants to foresee these costs must know their development processes down to very fine-grained actions and be able to estimate them. If this is not the case, the bill can quickly reach regions that were not foreseen. Here, the simple license model from JFrog has a clear advantage: there are no surprises in the planning. With JFrog Artifactory, you also have no vendor lock-in in terms of the runtime environment and IT architecture. You can be sure that you retain the free decision in the future to move towards the cloud, or even out of it again. Migration scenarios are possible in any mix between on-premises and SaaS.&lt;br&gt;
In terms of supported technologies, the JFrog products have a distinct advantage, and the design options for the repository structures are much more flexible compared to the CodeArtifact options. Amazon has just three package managers in its portfolio. And also in the way the compositions can be set up, the virtual repositories are a feature of Artifactory that leads the field unbeaten. It remains to be seen what Amazon will bring up next.&lt;/p&gt;
&lt;h2&gt;
  
  
  Repositories and how to use binaries efficiently
&lt;/h2&gt;

&lt;p&gt;After we have seen what the basic options for repository composition look like, the question arises which advantages and disadvantages result from this in production. At this point, I assume that we are in a DevOps or already in a DevSecOps environment. For each change made available in a source code repository, a build process starts, which then leads to a binary package stored in an artifact repository. This behavior inevitably leads to a sharp increase in the resources required to keep these binary packages available for further processing. On the other hand, you also want to prevent partial results, or results that caused a termination during processing, from bleeding into other production processes. It is therefore essential to make a very clean separation here. How can this be implemented in the two systems mentioned?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1cRIUEoA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/aea9hcwrnyzznfdakyw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1cRIUEoA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/aea9hcwrnyzznfdakyw6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition to the previously mentioned approach, the number of newly created binary fragments usually decreases with every further step in the build pipeline. For example: several feature branches feed a develop branch, from which a release branch can then be created. With every further development stage, i.e. with every additional step within the production chain, the frequency with which artifacts are generated decreases. As soon as a new binary has been created in the develop branch that contains the source code changes of a corresponding feature branch, all artifacts created for that feature branch can be removed. Removing can mean that they are deleted, or just hidden from the active repository structures of the subsequent production stages.&lt;/p&gt;

&lt;p&gt;Now we come to our candidate from Amazon. Here you can build a tree of repositories. This structure enables you to establish a caching process. But how do we isolate the individual stages of the production chain? Based on the tree structure, and the current restriction that a repository can only ever have one higher-level repository, only strict top-down structures can be mapped. Cross-sectional structures cannot be represented this way: a subsequent stage cannot consume a sub-selection of a higher-level element. To isolate the binaries of the feature branches, you have to set up a dedicated structure and then, during a merge into the develop branch, rebuild all associated binary packages to store the result again in the repository belonging to the develop branch. This approach has some conceptual weaknesses that I would like to point out here explicitly.&lt;/p&gt;

&lt;p&gt;One point is that you have to reproduce this procedure in every project, so it is only a matter of time before bumps and errors creep into this structure. Additionally, running this procedure means that the results of a previously executed build process must be produced again.&lt;br&gt;
It is better to minimize this effort, which brings temporal, financial and ecological advantages, and which also follows the DRY principle very clearly.&lt;/p&gt;

&lt;p&gt;What is the solution with Artifactory? Here you can assign a dedicated private local repository to each production stage in which the created artifacts are stored. Because all operations in the JFrog products are offered via REST as well, it is possible to automatically assign a repository to a newly created feature branch. If you want to map the isolation even more explicitly, create a separate artifact repository for each processing step in a build plan. This solution has the additional advantage that a partially completed build process can be restarted at the last finished step without having to go through all previous stages again. DRY!&lt;/p&gt;
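&lt;p&gt;As a sketch of that automation: Artifactory exposes repository management under the REST endpoint PUT /api/repositories/{repositoryKey} with a JSON body containing "rclass" and "packageType". The branch naming scheme, the base URL, and the token below are hypothetical placeholders, so treat this as an outline rather than a drop-in script.&lt;/p&gt;

```python
# Sketch: derive a per-feature-branch local repository configuration and
# send it to Artifactory's repository REST endpoint. The naming scheme,
# base URL and token are hypothetical placeholders.
import json
from urllib import request

BASE_URL = "https://artifactory.example.com/artifactory"  # placeholder

def repo_config(branch, package_type="maven"):
    """Build the JSON body for creating a local repo for one branch."""
    return {
        "key": "libs-" + branch.replace("/", "-") + "-local",
        "rclass": "local",  # a plain local repository
        "packageType": package_type,
    }

def create_repo(branch, token):
    """PUT /api/repositories/{key} creates the repository in Artifactory."""
    cfg = repo_config(branch)
    req = request.Request(
        BASE_URL + "/api/repositories/" + cfg["key"],
        data=json.dumps(cfg).encode(),
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,
        },
    )
    request.urlopen(req)  # raises on a non-2xx response

cfg = repo_config("feature/login-form")
```

&lt;p&gt;Hooking such a call into the pipeline step that creates a feature branch gives every branch its own isolated repository automatically.&lt;/p&gt;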

&lt;p&gt;The needed cross-sectional repositories can be defined using virtual repositories. Within these virtual repositories, only those elements are active that are necessary for the respective production stage. This concept ensures that no artifacts from previous steps that do not belong to precisely this production stage can bleed into exactly this instance of the outcome to be generated. Maintenance of the artifact inventory can also be implemented very efficiently, since the partial repositories that are no longer required can be deleted or deactivated directly.&lt;/p&gt;
&lt;h2&gt;
  
  
  Last words
&lt;/h2&gt;

&lt;p&gt;With this brief overview, the topic is far from exhausted. Over the next few weeks, I will gradually look at the various sub-areas and make a direct comparison.&lt;br&gt;
If particular aspects of these comparisons are of interest to you, please contact me directly. The order of the areas has not yet been determined. But if you want to get a realistic picture, I cordially invite you to start a trial and try it out for yourself.&lt;/p&gt;

&lt;p&gt;Cheers Sven&lt;/p&gt;


</description>
    </item>
  </channel>
</rss>
