<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Cloudogu GmbH</title>
    <description>The latest articles on Forem by Cloudogu GmbH (@cloudogu).</description>
    <link>https://forem.com/cloudogu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4415%2Fefdf0543-7376-4879-8e51-744ac8ea2ad0.png</url>
      <title>Forem: Cloudogu GmbH</title>
      <link>https://forem.com/cloudogu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cloudogu"/>
    <language>en</language>
    <item>
      <title>Low-Ops Platform Cloudogu EcoSystem</title>
      <dc:creator>flxebrt</dc:creator>
      <pubDate>Wed, 14 Jun 2023 10:14:59 +0000</pubDate>
      <link>https://forem.com/cloudogu/low-ops-platform-cloudogu-ecosystem-534j</link>
      <guid>https://forem.com/cloudogu/low-ops-platform-cloudogu-ecosystem-534j</guid>
      <description>&lt;p&gt;Effectiveness with high efficiency is the key to success in our fast-paced and dynamic business world. Increasing digitalization requires quick responses to changing environments and requirements. To increase efficiency and the ability to respond while saving costs, it is important to optimize processes. Improving the integration of thematically related tools is one way to reduce administration and management work - which is becoming even more relevant in light of the current shortage of skilled workers.&lt;/p&gt;

&lt;p&gt;In this blog post, we'll go into more detail about what makes the Cloudogu EcoSystem a low-ops platform and how it helps to support enterprises' digital transformation capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is low-ops?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cloudogu.com/en/glossary/low-ops/"&gt;Low-Ops&lt;/a&gt; is based on the phases of the &lt;a href="https://cloudogu.com/en/glossary/devops/"&gt;DevOps loop&lt;/a&gt; and is a concept that aims for &lt;a href="https://cloudogu.com/en/blog/comparison-administation-effort"&gt;little administration&lt;/a&gt; or operations ("ops" for short) by simplifying the deployment, operation, and monitoring of software tools through automation, standardization, and generalization.&lt;/p&gt;

&lt;p&gt;SaaS solutions already remove the administration burden from users, but for many use cases, the use of SaaS tools is not desired, for example, due to concerns such as dependence on the provider's service, data protection, and data security. Operating on one's own (&lt;a href="https://cloudogu.com/en/glossary/on-premises/"&gt;on-premises&lt;/a&gt;) or secure external infrastructures (private cloud) is common in these cases. This is exactly where the Cloudogu EcoSystem offers advantages by greatly simplifying the operation of a variety of tools on self-managed infrastructures.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cloudogu EcoSystem as a low-ops platform
&lt;/h2&gt;

&lt;p&gt;Cloudogu's &lt;a href="https://cloudogu.com/en/glossary/cloudogu-ecosystem/"&gt;Cloudogu EcoSystem&lt;/a&gt; is the first &lt;a href="https://cloudogu.com/en/glossary/scaling/"&gt;scalable &lt;/a&gt;low-ops platform that enables diverse software tools from different vendors (packaged as &lt;a href="https://cloudogu.com/en/glossary/dogu/"&gt;Dogus&lt;/a&gt;) to be administered in unison, minimizing management overhead. The Cloudogu EcoSystem is a virtual platform that combines all these approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation:&lt;/strong&gt; It offers minimal administrative overhead, as it allows, for example, automatic data migration during upgrades. The Backup Dogu offers the possibility to automate a backup strategy, i.e. defined backup cycles according to a retention policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Standardization:&lt;/strong&gt; Uniform containerization of all tools on the platform further standardizes and simplifies administration and operation, because the tools can be administered and operated in bundles rather than individually.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Generalization:&lt;/strong&gt; The platform offers cross-tool services such as backup, user management, administration and restore for all tools operated on it. This enables, for example, cross-tool single sign-on or &lt;a href="https://cloudogu.com/en/blog/ecosystem-backup-and-restore-dogu"&gt;backup and restore&lt;/a&gt;. The holistic backup and restore solution enables simple administration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Cloudogu EcoSystem currently offers a selection of standard tools in the areas of project management, software development and wiki. Cloudogu regularly expands the toolstack of the Cloudogu EcoSystem and also responds to special customer requests. Each software tool (Dogu) of the toolchain runs in special containers. From the available tools, the appropriate Dogus for specific use cases can be selected as needed. The Dogus communicate with each other by means of plug-ins, and the interaction of the Dogus is continuously being expanded. Automated updates of the Dogus - including data migration if required - eliminate manual intervention and save time. Thus, a multitude of tools can be administered in one step, even without special administration knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Low-ops improves transformability
&lt;/h2&gt;

&lt;p&gt;By reducing administration, the low-ops idea supports &lt;a href="https://cloudogu.com/en/glossary/transformability/"&gt;transformability&lt;/a&gt;, i.e. the ability to successfully manage changes in one's own environment. An important characteristic of transformability is adaptability.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The only constant in life is change" &lt;em&gt;Heraclitus&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This includes both the ability to adapt to a dynamic environment and the ability to help shape that environment through proactive change. The Cloudogu EcoSystem gives companies the flexibility to use new tools, or to try them out first, to take advantage of new helpful features. The processes of installing and operating Dogus are designed to be very simple and efficient. Companies also remain flexible in the type of operation (cloud or on-premises) and can tailor the environment to their own needs. This allows them to adapt flexibly to changes in their environment and promotes continuous, iterative development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Cloudogu EcoSystem is a low-ops platform that significantly reduces the administration effort of the tools operated on it. The toolchain of the Cloudogu EcoSystem links different Dogus while enabling their central administration. The Cloudogu EcoSystem enables easy deployment of complete working environments and offers many possibilities for projects in a dynamic environment that demands digital mutability.&lt;/p&gt;

</description>
      <category>lowops</category>
      <category>administration</category>
      <category>devops</category>
    </item>
    <item>
      <title>GitOps and Kubernetes – Secure Handling of Secrets</title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Wed, 18 Jan 2023 16:11:28 +0000</pubDate>
      <link>https://forem.com/cloudogu/gitops-and-kubernetes-secure-handling-of-secrets-5965</link>
      <guid>https://forem.com/cloudogu/gitops-and-kubernetes-secure-handling-of-secrets-5965</guid>
      <description>&lt;p&gt;Especially in well established companies, it may be the case that a certain way of handling secrets has become established, which is convenient, but not necessarily the safest solution. One example of this is the storage of secrets directly in the CI server. The use of a key management system (KMS) for storing secrets is a more secure solution, but it initially requires a lot of effort for the changeover. That is why KMS are not always used in the Kubernetes environment at present. GitOps with its declarative approach forces a rethink in the handling of secrets. &lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes and Secrets
&lt;/h2&gt;

&lt;p&gt;With Kubernetes, Secrets are basically stored unencrypted in the API server's underlying data store (etcd). This means that anyone who has access to the etcd can also obtain or modify secrets. Therefore, at least these measures should be taken:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable encryption of Secrets using Encryption at Rest, &lt;/li&gt;
&lt;li&gt;implement a least-privilege approach for Secrets by configuring RBAC rules, &lt;/li&gt;
&lt;li&gt;restrict access to Secrets to specific containers by mounting Secrets only in containers where they are needed,&lt;/li&gt;
&lt;li&gt;find a way to get Secrets into the cluster (for example, using the CI server), and &lt;/li&gt;
&lt;li&gt;evaluate the use of an external provider for storing Secrets (KMS).&lt;/li&gt;
&lt;/ul&gt;
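
&lt;p&gt;To illustrate the first point: in a Secret manifest (and, by default, in etcd), values are merely base64-encoded, which is an encoding, not encryption. A minimal sketch with a placeholder value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# base64 can be reversed by anyone, no key required:
# "czNjcjN0" decodes straight back to "s3cr3t"
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: czNjcjN0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;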

&lt;p&gt;This shows that even without GitOps, secure and appropriate handling of Secrets in Kubernetes is not trivial. The declarative approach of GitOps reinforces this, since Secrets must also be stored, or at least referenced, outside the cluster, namely in source code management. This enforces secure handling of Secrets. Most of the solutions described below are not exclusive to GitOps, but apply generally when using Kubernetes.&lt;/p&gt;
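
&lt;p&gt;The least-privilege approach from the list above can be sketched with a namespaced Role that grants read access to exactly one Secret (all names here are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-db-credentials
  namespace: my-app
rules:
- apiGroups: [""]
  resources: ["secrets"]
  # only this one Secret, and read-only
  resourceNames: ["db-credentials"]
  verbs: ["get"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;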

&lt;h2&gt;
  
  
  GitOps enforces different handling of secrets
&lt;/h2&gt;

&lt;p&gt;With GitOps, the state of a project is described declaratively in source code management, and the deployment environment continuously synchronizes with Git. The CI server is only used for the build; the deployment is done by the GitOps operator. This approach takes automation to a new level, but it also means a new way of handling secrets must be found. Since the entire state is stored in Git, a way must inevitably be found to store secrets securely, because secrets stored directly and unencrypted in Git offer a large attack surface. There are basically two approaches to the secure handling of secrets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encrypted storage of the Secrets in Git&lt;/li&gt;
&lt;li&gt;Storing the secrets in a Key Management System (KMS) and referencing them in Git.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Encrypted storage in Git
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Sealed Secrets (Operator)
&lt;/h4&gt;

&lt;p&gt;An option that works easily with GitOps is the &lt;a href="https://github.com/bitnami-labs/sealed-secrets" rel="noopener noreferrer"&gt;Sealed Secrets&lt;/a&gt; operator from Bitnami. Secrets encrypted with it can only be decrypted by the operator running inside the cluster, not even by the original author. For encryption, there is a CLI (and a &lt;a href="https://github.com/bakito/sealed-secrets-web" rel="noopener noreferrer"&gt;third-party web UI&lt;/a&gt;) that requires a connection to the cluster. The disadvantage is that the key material is stored in the cluster, the secrets are bound to the cluster, and one has to take care of backups and operation.&lt;/p&gt;
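
&lt;p&gt;What ends up in Git is then a SealedSecret resource instead of a plain Secret. A sketch with placeholder names and a truncated placeholder ciphertext, produced locally with the &lt;code&gt;kubeseal&lt;/code&gt; CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# created e.g. with: kubeseal --format yaml &amp;lt; my-secret.yaml &amp;gt; my-sealed-secret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: my-app
spec:
  encryptedData:
    password: AgBy...   # ciphertext; only the in-cluster operator can decrypt it
  template:
    metadata:
      name: db-credentials
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;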

&lt;h3&gt;
  
  
  Use of Key Management System (KMS)
&lt;/h3&gt;

&lt;p&gt;When using a KMS, the first step is to choose a suitable tool. Since the major public cloud providers such as Google, Microsoft, Amazon, etc. usually offer a KMS that is easy to use, and HashiCorp Vault has established itself as the solution for on-premises clusters, this first step is quickly completed.&lt;br&gt;
The second step is then to integrate the KMS into the cluster. There are several ways to do this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running a special operator&lt;/li&gt;
&lt;li&gt;Mounting secrets in the file system using Container Storage Interface (CSI)&lt;/li&gt;
&lt;li&gt;Injecting a sidecar into pods&lt;/li&gt;
&lt;li&gt;GitOps operator with either native support for KMS or via plugin&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  External Secrets (Operator)
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://github.com/external-secrets/external-secrets" rel="noopener noreferrer"&gt;External Secrets&lt;/a&gt; is an operator that integrates external KMS such as Hashicorp Vault or those of the major cloud providers. It reads secrets from the external APIs and injects them into Kubernetes secrets. The operator is a new implementation after the merge of similar projects from GoDaddy and ContainerSolutions.&lt;/p&gt;
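
&lt;p&gt;An ExternalSecret resource references an entry in the KMS and tells the operator which Kubernetes Secret to create from it. A sketch with placeholder names, assuming a previously configured SecretStore:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend   # a SecretStore pointing to e.g. HashiCorp Vault
    kind: SecretStore
  target:
    name: db-credentials  # the Kubernetes Secret the operator creates
  data:
  - secretKey: password
    remoteRef:
      key: secret/database
      property: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;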

&lt;h4&gt;
  
  
  Secrets Store CSI Driver
&lt;/h4&gt;

&lt;p&gt;Mounting secrets into the file system of pods is another exciting option, as the CSI driver is an official part of Kubernetes. It is developed by a Kubernetes Special Interest Group, making it potentially very durable. The CSI driver supports provider plugins for popular KMS vendors, which in turn are developed by the vendors themselves.&lt;/p&gt;
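
&lt;p&gt;With the CSI driver, a SecretProviderClass describes where the secrets come from, and pods mount them as a volume. A rough sketch for the Vault provider; all names and paths are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-db-creds
spec:
  provider: vault
  parameters:
    roleName: "my-app"
    objects: |
      - objectName: "password"
        secretPath: "secret/data/database"
        secretKey: "password"
# in the pod spec, the secrets are then mounted read-only:
#   volumes:
#   - name: secrets
#     csi:
#       driver: secrets-store.csi.k8s.io
#       readOnly: true
#       volumeAttributes:
#         secretProviderClass: "vault-db-creds"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;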

&lt;h4&gt;
  
  
  Hashicorp Vault k8s (Sidecar Injector)
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://github.com/hashicorp/vault-k8s" rel="noopener noreferrer"&gt;HashiCorp Vault k8s&lt;/a&gt; is an operator that modifies pods via a mutating webhook, injecting sidecars (additional containers) that connect the pod to Vault and provide secrets. This has the major advantage that no Secret objects are created in Kubernetes. The disadvantage is that this approach only works with Vault.&lt;/p&gt;
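
&lt;p&gt;The injector is controlled via pod annotations. A sketch with a placeholder role and secret path:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# placed on the pod (or pod template) metadata
annotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/role: "my-app"
  # renders the secret at the given Vault path to /vault/secrets/db-creds
  vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/database"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;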

&lt;h4&gt;
  
  
  SOPS
&lt;/h4&gt;

&lt;p&gt;Mozilla SOPS (Secrets OPerationS), already established before Kubernetes, offers even more options – at the expense of a more complex configuration. Here, the key material can come from the key management systems (KMS) of the major cloud providers, a self-hosted HashiCorp Vault, or self-managed PGP keys. Since it is not Kubernetes-native, SOPS does not include an operator, but there are several ways to use it with GitOps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flux v2 provides native support.&lt;/li&gt;
&lt;li&gt;ArgoCD supports SOPS via the &lt;a href="https://github.com/argoproj-labs/argocd-vault-plugin" rel="noopener noreferrer"&gt;vault plugin&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/jkroepke/helm-secrets" rel="noopener noreferrer"&gt;helm secrets&lt;/a&gt; plugin can also be used in ArgoCD with manual configuration.&lt;/li&gt;
&lt;li&gt;A third-party &lt;a href="https://github.com/isindir/sops-secrets-operator" rel="noopener noreferrer"&gt;sops-secrets&lt;/a&gt; operator is available as well.&lt;/li&gt;
&lt;/ul&gt;
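
&lt;p&gt;With SOPS, a &lt;code&gt;.sops.yaml&lt;/code&gt; file in the repository defines which keys encrypt which files. A sketch with a placeholder PGP fingerprint:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .sops.yaml
creation_rules:
  - path_regex: .*secret.*\.yaml
    # encrypt only the values under data/stringData, keep the rest readable
    encrypted_regex: ^(data|stringData)$
    pgp: "0xDEADBEEF..."   # placeholder key fingerprint
# encrypt in place: sops --encrypt --in-place my-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;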

&lt;h2&gt;
  
  
  Example: Hashicorp Vault with External Secrets Operator in the GitOps Playground
&lt;/h2&gt;

&lt;p&gt;You can see firsthand and try out the management of secrets with HashiCorp Vault and synchronization into the cluster with the External Secrets Operator in the GitOps Playground.&lt;br&gt;
The GitOps Playground is an open source project for trying out GitOps including a sample application. &lt;a href="https://github.com/cloudogu/gitops-playground" rel="noopener noreferrer"&gt;To the GitOps Playground&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The declarative approach of GitOps makes it necessary to rethink the handling of secrets. Both a curse and a blessing is that there are many options enabling secure handling of secrets. Thanks to the different approaches, there are suitable operators for different requirements. We would be happy if you post questions, feedback or suggestions about other operators, the GitOps Playground or GitOps in general in our &lt;a href="https://community.cloudogu.com/c/gitops/23" rel="noopener noreferrer"&gt;GitOps community&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>tooling</category>
      <category>productivity</category>
      <category>development</category>
    </item>
    <item>
      <title>DevOps Toolchain Cloudogu EcoSystem DevStarter</title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Thu, 18 Aug 2022 15:30:00 +0000</pubDate>
      <link>https://forem.com/cloudogu/devops-toolchain-cloudogu-ecosystem-devstarter-1oni</link>
      <guid>https://forem.com/cloudogu/devops-toolchain-cloudogu-ecosystem-devstarter-1oni</guid>
      <description>&lt;p&gt;DevOps is a methodology or collection of approaches, practices and methods to deliver new software features faster. Depending on how the previous ways of working were in an organization, adopting DevOps can mean a fundamental change in ways of working and, consequently, can take a long time. It is important to note that DevOps cannot be introduced by simply using certain tools, because tools only support the implementation of the ways of working, the mindset however needs to be developed independently. Nevertheleess, choosing the right tools can help significantly with the introduction of DevOps.&lt;/p&gt;

&lt;p&gt;Since working methods and processes often change when DevOps is introduced, new tools are usually also required to support these changes and to automate processes. That’s why it’s important to have both a flexible infrastructure that allows tools to be added easily and flexible tools that are intuitive and support cross-functional collaboration.&lt;/p&gt;

&lt;p&gt;That’s why we’re introducing the &lt;a href="https://partner.cloudogu.com/devstarter/?mtm_source=devTo&amp;amp;mtm_medium=onpage&amp;amp;mtm_content=DevStarter-page&amp;amp;mtm_placement=textlink"&gt;Cloudogu EcoSystem DevStarter&lt;/a&gt;, a platform that, even in its basic form, runs a variety of tools that users need for DevOps. The following graphic shows you which tools can be used in which DevOps phases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K4ozhNDg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6pv1ccugvasltzztgs33.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K4ozhNDg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6pv1ccugvasltzztgs33.jpg" alt="Image description" width="880" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Easy Redmine (Plan, Code and Operate)
&lt;/h2&gt;

&lt;p&gt;The Easy Redmine project management tool offers, in addition to classic project management functions for Agile and Waterfall, other helpful features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Issue tracker&lt;/li&gt;
&lt;li&gt;Time tracking&lt;/li&gt;
&lt;li&gt;Resource planning&lt;/li&gt;
&lt;li&gt;Dashboards&lt;/li&gt;
&lt;li&gt;Project planning via Agile board, Gantt chart and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, Easy Redmine offers a help desk extension, making it particularly helpful in the phases &lt;strong&gt;plan&lt;/strong&gt; and &lt;strong&gt;code&lt;/strong&gt; (task planning and processing) as well as &lt;strong&gt;operate&lt;/strong&gt; (help desk).&lt;/p&gt;

&lt;h2&gt;
  
  
  BlueSpice MediaWiki (Plan and Code)
&lt;/h2&gt;

&lt;p&gt;BlueSpice is an easy-to-use wiki that can be used in any department or task area in a company to document information or collect and discuss ideas. In software development, the wiki can be used to collect detailed requirements in the &lt;strong&gt;plan&lt;/strong&gt; phase so that they can then be referred to during development (&lt;strong&gt;code&lt;/strong&gt; phase). Of course, the wiki can also be used in other phases of the software lifecycle to record information.&lt;/p&gt;

&lt;h2&gt;
  
  
  SCM-Manager (Code)
&lt;/h2&gt;

&lt;p&gt;The source code management tool &lt;a href="https://scm-manager.org"&gt;SCM-Manager&lt;/a&gt; can be used to manage Git, Mercurial as well as Subversion repositories. This makes the tool much more flexible than other solutions that only support one kind of repository, e.g. Git. This makes the tool particularly valuable for companies that still have Mercurial repositories, for example, but would like to switch to Git. But even if only one repository type is used, SCM-Manager has many advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Variety of integrations with issue trackers like Easy Redmine available.&lt;/li&gt;
&lt;li&gt;Easy integration into continuous development processes&lt;/li&gt;
&lt;li&gt;Many functions of the tool can also be used via a GUI&lt;/li&gt;
&lt;li&gt;Extensive code review process available&lt;/li&gt;
&lt;li&gt;Simple operation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tool is used by teams in the &lt;strong&gt;code&lt;/strong&gt; phase to version the source code of software applications and then use it for builds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins (Build and Deploy)
&lt;/h2&gt;

&lt;p&gt;The continuous integration server Jenkins is the central building block of the development pipeline. The tool can be connected to a variety of other tools and used to automate steps. Automated builds as well as deployments of applications are only the most obvious functions. A development pipeline can consist of these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pulling the current state of the source code from the version management (SCM-Manager).&lt;/li&gt;
&lt;li&gt;Performing the &lt;strong&gt;build&lt;/strong&gt; and running unit tests using artifacts (Nexus Repository)&lt;/li&gt;
&lt;li&gt;Start of a static code analysis (SonarQube)&lt;/li&gt;
&lt;li&gt;Upon successful code analysis, storing the built version (Nexus Repository)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt;ment of the built version&lt;/li&gt;
&lt;/ol&gt;
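
&lt;p&gt;The steps above can be described declaratively in a Jenkinsfile. A rough sketch, assuming a Maven project, the SonarQube Scanner plugin, and a hypothetical deployment script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                checkout scm            // pull sources from SCM-Manager
                sh './mvnw verify'      // build and run unit tests
            }
        }
        stage('Static Code Analysis') {
            steps {
                withSonarQubeEnv('sonarqube') {
                    sh './mvnw sonar:sonar'
                }
            }
        }
        stage('Publish') {
            steps {
                sh './mvnw deploy'      // store the built version in Nexus
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh staging' // hypothetical deployment script
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;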

&lt;h2&gt;
  
  
  SonarQube (Test)
&lt;/h2&gt;

&lt;p&gt;SonarQube is an open source tool that can be used to perform static code analysis. The tool provides a variety of &lt;strong&gt;test&lt;/strong&gt;s for many programming languages and allows quality gates to be set to enforce minimum code quality requirements. Examples of quality gates are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;% coverage of code with unit tests&lt;/li&gt;
&lt;li&gt;Rate of comments in the code&lt;/li&gt;
&lt;li&gt;Number of (potential) bugs in the code&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Nexus Lifecycle (Test)
&lt;/h2&gt;

&lt;p&gt;Recent incidents have repeatedly shown the extent of the damage that security vulnerabilities in open source components can cause. In the case of security vulnerabilities in very popular components, the news even makes it into the major media. For the majority of open source components, however, it can be very time-consuming to stay up to date on possible vulnerabilities. This is where Nexus Lifecycle comes in. The tool analyzes dependencies in your software projects and automatically informs you about known vulnerabilities. It also gives you information about possible licensing restrictions of open source software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nexus Repository (Build and Release)
&lt;/h2&gt;

&lt;p&gt;The Nexus Repository tool is an artifact repository management tool that can be used, among other things, to manage and store &lt;strong&gt;build&lt;/strong&gt; artifacts before they are &lt;strong&gt;deploy&lt;/strong&gt;ed. It can also be used to manage binaries, providing a centralized source of information within the organization and speeding up builds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Elasticsearch (Monitor)
&lt;/h2&gt;

&lt;p&gt;Elasticsearch can be used to collect and process a wide variety of &lt;strong&gt;monitor&lt;/strong&gt;ing data and to use that data for alerting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further DevOps tools
&lt;/h2&gt;

&lt;p&gt;As an addition to the tools included in the &lt;a href="https://partner.cloudogu.com/devstarter/?mtm_source=devTo&amp;amp;mtm_medium=onpage&amp;amp;mtm_content=DevStarter-page&amp;amp;mtm_placement=textlink"&gt;Cloudogu EcoSystem DevStarter&lt;/a&gt;, there are of course other tools widely used in the DevOps environment to enable even better collaboration between development and operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes, a system for deploying, scaling and managing container applications.&lt;/li&gt;
&lt;li&gt;Docker, a tool for isolating applications through container virtualization.&lt;/li&gt;
&lt;li&gt;Terraform, an infrastructure-as-code tool that can be used to create, modify and improve infrastructure.&lt;/li&gt;
&lt;li&gt;Selenium, a framework for creating automated tests for web applications.&lt;/li&gt;
&lt;li&gt;Nagios, a tool for monitoring services in complex IT infrastructures.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>tooling</category>
      <category>opensource</category>
      <category>news</category>
    </item>
    <item>
      <title>Kubernetes least privilege implementation using the Google Cloud as an example</title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Fri, 06 May 2022 14:43:58 +0000</pubDate>
      <link>https://forem.com/cloudogu/kubernetes-least-privilege-implementation-using-the-google-cloud-as-an-example-35ng</link>
      <guid>https://forem.com/cloudogu/kubernetes-least-privilege-implementation-using-the-google-cloud-as-an-example-35ng</guid>
      <description>&lt;p&gt;Everyone knows it: granting privileges is always a balance between security, usability and maintenance effort. If permissions are granted very generously, the effort is very low and there are rarely any hurdles to use; however, security is compromised. If permissions are granted sparingly, security is higher, but there are costly processes and a lot of administrative overhead.&lt;/p&gt;

&lt;p&gt;Kubernetes offers many possibilities with its Role-based access control (RBAC), which are also extensively documented (&lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/"&gt;https://kubernetes.io/docs/reference/access-authn-authz/rbac/&lt;/a&gt;). Unfortunately, there are not many practical tips for actual implementation. To break out of this predicament, we have written plugins that let you use a Kubernetes sudo context as a simple but effective entry point for managing permissions. This blog article illustrates how to install the plugins and configure the cluster, using a managed Kubernetes from the Google Cloud Platform as an example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yMMFAcXz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wrawchkhb2q4h5vxwykz.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yMMFAcXz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wrawchkhb2q4h5vxwykz.gif" alt="Kubernetes sudo context" width="880" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Permissions of developers in the cluster
&lt;/h2&gt;

&lt;p&gt;The solution presented here refers to permissions of humans, since applications should basically only have read-only access to their own secrets and the configmap in the cluster. So this is straightforward from the point of view of assigning permissions. However, when it comes to permissions for developers, it becomes much more complex, because people’s tasks and roles can change over time and because permissions can be used to protect people from making careless mistakes. This results in the high maintenance effort mentioned at the beginning.&lt;/p&gt;

&lt;p&gt;A solution with minimal maintenance effort is to give all developers the same, extensive authorizations. However, this creates the risk that harmful changes can be made accidentally at any time. Especially in productive environments, this can lead to critical downtimes and even data loss.&lt;/p&gt;

&lt;p&gt;That’s why we decided to take an approach that briefly uses additional privileges to execute commands, similar to the sudo command in Linux.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing the least-privilege approach using sudo-context
&lt;/h2&gt;

&lt;p&gt;To implement sudo-style permissions, these things must be implemented:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using the impersonate feature of Kubernetes&lt;/li&gt;
&lt;li&gt;Setting up the sudo context&lt;/li&gt;
&lt;li&gt;Granting permissions in the cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will now describe these steps in detail.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up the impersonate feature
&lt;/h3&gt;

&lt;p&gt;The sudo function is based on the internal “impersonate” feature of the Kubernetes API. It allows commands to be executed as a different user, group, or service account. The first step is to enable the sudo function in the cluster. There are different ways to do this depending on the use case. How these are installed and configured on the client and cluster side is described below.&lt;/p&gt;

&lt;h4&gt;
  
  
  kubectl-sudo plugin
&lt;/h4&gt;

&lt;p&gt;With the &lt;a href="https://github.com/postfinance/kubectl-sudo/blob/master/bash/kubectl-sudo"&gt;kubectl-sudo plugin&lt;/a&gt;, kubectl commands that require more extensive rights can be executed explicitly as a member of the admin group. This reduces the chance of accidentally modifying or deleting resources on the cluster, for example when running scripts or being in the wrong namespace.&lt;/p&gt;

&lt;p&gt;The plugin only works for kubectl; other tools that use kubeconfig (Helm, fluxctl, k9s, etc.) cannot use it. Here is a simple example of how to use the plugin: &lt;code&gt;kubectl sudo get pod&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  helm-sudo plugin
&lt;/h4&gt;

&lt;p&gt;In the Kubernetes environment, Helm charts are very important. So, to be able to use the functionality for Helm as well, we have developed a corresponding &lt;a href="https://github.com/cloudogu/helm-sudo"&gt;plugin&lt;/a&gt; that can be used analogously to kubectl-sudo. Analogous to the kubectl-sudo plugin, here is an example for the Helm plugin: &lt;code&gt;helm sudo list&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  sudo context for other tools
&lt;/h3&gt;

&lt;p&gt;Alternatively and for all other tools like fluxctl or k9s there is the possibility to create a sudo context in kubeconfig. This can then be used as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl --context SUDO-mycontext # alternative to kubectl-sudo    
kgpo --context SUDO-mycontext # also works with aliases!     
helm --kube-context SUDO-mycontext   
fluxctl --context SUDO-mycontext     
k9s --context SUDO-mycontext # Changes also in k9s possible ":ctx"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If auto-completion features have been installed for certain tools, they will automatically detect the available contexts and can be selected with &lt;code&gt;Tab&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Attention: When using the sudo context, always make sure to specify the namespace in which a command is to be executed. By default, only the current namespace is stored in kubeconfig when setting up the context. If a command is to be executed in a different namespace, this must be explicitly specified with a parameter: &lt;code&gt;kubectl --context SUDO-mycontext --namespace mynamespace get secret&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The sudo context should only ever be passed as a parameter. It should never be set as the active context, as this would grant permanent admin rights and thus undermine the protection against accidental changes. This is analogous to the &lt;code&gt;sudo su&lt;/code&gt; command under Linux, with which a user has all permissions and is not stopped from performing risky actions.&lt;/p&gt;

&lt;p&gt;However, neither &lt;code&gt;kubectl sudo&lt;/code&gt; nor &lt;code&gt;helm sudo&lt;/code&gt; requires the namespace to be specified each time; the commands are always executed in the current namespace of the current context. Therefore, for helm and kubectl, the use of the sudo plugins is preferable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up the local tools
&lt;/h3&gt;

&lt;p&gt;To create a sudo context, this &lt;a href="https://github.com/cloudogu/sudo-kubeconfig/blob/master/create-sudo-kubeconfig.sh"&gt;script&lt;/a&gt; is available. Download it with &lt;code&gt;wget -P /tmp/ "https://raw.githubusercontent.com/cloudogu/sudo-kubeconfig/0.1.0/create-sudo-kubeconfig.sh"&lt;/code&gt;&lt;br&gt;
Then only these steps are necessary to interactively create a kubeconfig for the currently selected context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x /tmp/create-sudo-kubeconfig.sh
/tmp/create-sudo-kubeconfig.sh

kubectl --context SUDO-mycontext get pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If needed, the two plugins already mentioned, kubectl-sudo and helm-sudo, can be installed via bash:&lt;/p&gt;

&lt;p&gt;Optional: install kubectl-sudo&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo bash -c 'curl -fSL https://raw.githubusercontent.com/postfinance/kubectl-sudo/master/bash/kubectl-sudo -o /usr/bin/kubectl-sudo &amp;amp;&amp;amp; chmod a+x /usr/bin/kubectl-sudo'    

kubectl sudo get pod 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Optional: install helm-sudo&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm plugin install  https://github.com/cloudogu/helm-sudo --version=0.0.2

helm sudo list 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Technical realization of the authorization
&lt;/h3&gt;

&lt;p&gt;Now that the prerequisites for using the sudo function have been created on the local computer, the authorizations must be set up. We will show the steps of the technical realization using a &lt;strong&gt;managed Kubernetes cluster on the Google Cloud Platform&lt;/strong&gt; as an example.&lt;/p&gt;

&lt;h4&gt;
  
  
  RBAC
&lt;/h4&gt;

&lt;p&gt;With the sudo function, we can now assume the role of other users, groups, and service accounts in order to execute commands (impersonation). For impersonation to grant us broader rights, it must be authorized using Kubernetes’ role-based access control (RBAC).&lt;/p&gt;

&lt;p&gt;Impersonation is implemented by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kubectl-sudo: &lt;code&gt;kubectl --as=$USER --as-group=system:masters "$@"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/cloudogu/helm-sudo/blob/main/run.sh#L18"&gt;helm-sudo&lt;/a&gt;: &lt;code&gt;helm --kube-as-user ${USER} --kube-as-group system:masters "$@"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;and &lt;a href="https://gist.github.com/schnatterer/2b1f2ca2bd66bad2644e6958aae9af6e/#file-create-sudo-kubeconfig-sh-L50"&gt;create-sudo-kubeconfig.sh&lt;/a&gt;: &lt;code&gt;as-groups: [ system:masters ]&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In an existing Kubernetes cluster, two resources must be created to give users access to the impersonate feature:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A ClusterRole that allows the use of the impersonate feature.&lt;/li&gt;
&lt;li&gt;A ClusterRoleBinding that allows individual users or groups to use the previously created ClusterRole.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sudoer.yaml
# Creates a ClusterRole that allows impersonating users,
# groups and serviceaccounts
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sudoer
rules:
  - apiGroups: [""]
    verbs: ["impersonate"]
    resources: ["users", "groups", "serviceaccounts"]

# cluster-sudoers.yaml
# Allows users to use kubectl sudo on all resources in the cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-sudoers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sudoer
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: admins@email.com
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: user1@email.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
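&lt;p&gt;After both manifests have been applied, it can be verified that the impersonation is actually permitted. &lt;code&gt;kubectl auth can-i&lt;/code&gt; is suitable for this; the following sketch assumes the two resources above were saved as &lt;code&gt;sudoer.yaml&lt;/code&gt; and &lt;code&gt;cluster-sudoers.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f sudoer.yaml -f cluster-sudoers.yaml

# should answer "yes" for users listed in the ClusterRoleBinding
kubectl auth can-i impersonate users
kubectl auth can-i impersonate groups

# impersonating system:masters should grant full cluster rights
kubectl auth can-i '*' '*' --as="$USER" --as-group=system:masters
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;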



&lt;p&gt;With the ClusterRoleBinding above, everyone listed in it has sudo permissions in all namespaces. It is also possible to grant sudo permissions only for certain namespaces, which is a good approach if, for example, different teams have separate namespaces. Note, however, that a ClusterRoleBinding is itself cluster-scoped and cannot be restricted by adding a &lt;code&gt;namespace&lt;/code&gt; attribute under &lt;code&gt;metadata&lt;/code&gt;. To limit sudo rights to certain namespaces, the impersonated identity itself must only have rights there, for example a service account in that namespace instead of the cluster-wide &lt;code&gt;system:masters&lt;/code&gt; group.&lt;/p&gt;
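&lt;p&gt;Since a ClusterRoleBinding is itself cluster-scoped, one way to grant sudo-like rights only within a single namespace relies on service accounts: unlike users and groups, they are namespaced resources, so the right to impersonate them can be granted with a namespaced Role and RoleBinding. A heavily simplified sketch (all names, such as &lt;code&gt;team-a&lt;/code&gt; and &lt;code&gt;deployer&lt;/code&gt;, are examples; the &lt;code&gt;deployer&lt;/code&gt; service account would additionally need its own rights in the namespace):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# team-a-sudoers.yaml
# Allows user1 to impersonate the "deployer" service account,
# but only within the "team-a" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sa-sudoer
  namespace: team-a
rules:
  - apiGroups: [""]
    verbs: ["impersonate"]
    resources: ["serviceaccounts"]
    resourceNames: ["deployer"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-sudoers
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: sa-sudoer
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: user1@email.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Commands would then be executed with &lt;code&gt;kubectl --as system:serviceaccount:team-a:deployer ...&lt;/code&gt; instead of the sudo plugins, which always impersonate &lt;code&gt;system:masters&lt;/code&gt;.&lt;/p&gt;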

&lt;p&gt;At first glance, it may look as if anonymous changes to the cluster are now possible because a different role is assumed. However, the &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging#console"&gt;audit logs in the Google Cloud Platform&lt;/a&gt; record the actual user principal behind a change, so every change can still be traced back to a user. This works similarly in managed clusters of other cloud providers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Google Cloud Platform (GCP)
&lt;/h4&gt;

&lt;p&gt;In order for RBAC and the adjustments just described to take effect in GCP at all, Google Cloud’s own authorization must first be bypassed. People who have the role &lt;code&gt;Owner&lt;/code&gt;, &lt;code&gt;Editor&lt;/code&gt; or &lt;code&gt;Kubernetes Engine Admin&lt;/code&gt; in GCP are normally allowed to execute anything in the cluster, even if RBAC does not explicitly permit it.&lt;/p&gt;

&lt;p&gt;Therefore, a custom role must be created once under IAM in GCP that only allows authentication against the Kubernetes cluster. This role, which we call “Kubernetes Engine Authentication”, is assigned the following permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;container.apiServices.get&lt;/li&gt;
&lt;li&gt;container.apiServices.list&lt;/li&gt;
&lt;li&gt;container.clusters.get&lt;/li&gt;
&lt;li&gt;container.clusters.getCredentials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This role is now assigned to all users who need access to the cluster; all other permissions are then granted by RBAC within the cluster. The role can also be assigned to entire groups, which are in turn managed via GSuite groups. To do this, however, propagation of groups must be enabled when the cluster is created (&lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control?hl=de#groups-create-cluster"&gt;Google Groups for RBAC&lt;/a&gt;). Unfortunately, this setting cannot be activated retroactively for an existing cluster; the cluster has to be recreated for this purpose.&lt;/p&gt;
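&lt;p&gt;With the gcloud CLI, creating and assigning such a role could look roughly like this (the project ID, the role ID and the user are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create the custom role with the four permissions listed above
gcloud iam roles create kubernetesEngineAuthentication \
    --project=my-project \
    --title="Kubernetes Engine Authentication" \
    --permissions=container.apiServices.get,container.apiServices.list,container.clusters.get,container.clusters.getCredentials

# assign the custom role to a user
gcloud projects add-iam-policy-binding my-project \
    --member=user:user1@email.com \
    --role=projects/my-project/roles/kubernetesEngineAuthentication
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;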

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By using RBAC and a sudo context, the effort for maintaining permissions and the level of security are in a good balance. On the one hand, there is no need to maintain permissions for each individual person; on the other hand, the risk of unwanted changes, e.g. because one is working in the wrong namespace, is significantly reduced.&lt;/p&gt;

&lt;p&gt;Let’s take this scenario: a developer is testing changes to a deployment in their local dev cluster. During the workday, they also do some minor work on the production cluster in GCP. At the end of the day they want to delete the test deployment, but forget to switch back to the local context first. Something like this has certainly happened to many people before, and it can lead to downtime or, in the worst case, to data loss.&lt;/p&gt;

&lt;p&gt;However, if changes to the production cluster can only be applied using the sudo context or a sudo plugin, the accidental deletion fails and the developer notices the mistake. Susceptibility to accidental errors thus decreases, while ease of use, ease of implementation and a high level of security are maintained. Since we started using RBAC and the associated sudo context ourselves at Cloudogu, we have been working much more securely on our Kubernetes clusters.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
      <category>permissions</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>GitLab DevSecOps Report 2021 - Proactively prevent vulnerabilities </title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Mon, 20 Sep 2021 10:40:20 +0000</pubDate>
      <link>https://forem.com/cloudogu/gitlab-devsecops-report-2021-proactively-prevent-vulnerabilities-12c5</link>
      <guid>https://forem.com/cloudogu/gitlab-devsecops-report-2021-proactively-prevent-vulnerabilities-12c5</guid>
      <description>&lt;p&gt;Web security or security-aware software development should no longer be a luxury. That's why terms like DevOps or DevSecOps have become an integral part of our industry. In other words, agile software development that is focused on security is one of the most important approaches to modern development. Or is it? &lt;/p&gt;

&lt;h2&gt;
  
  
  What does GitLab's DevSecOps Report 2021 have to say about this?
&lt;/h2&gt;

&lt;p&gt;The report contains some very interesting findings on the importance of security in software development:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;99% of applications contain at least 4 vulnerabilities, 80% even have more than 20.&lt;/li&gt;
&lt;li&gt;More than 90% of participants say that security scans run for more than 3 hours, with about a third running for more than 8 hours.&lt;/li&gt;
&lt;li&gt;For more than two-thirds of participants, it takes more than 4 hours to fix a vulnerability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These numbers show two things: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All applications contain vulnerabilities, although automated tests are already used to prevent them.&lt;/li&gt;
&lt;li&gt;It is quite costly to fix vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, the report shows that a large percentage of companies have already been victims of successful attacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More than 70% have lost critical data.&lt;/li&gt;
&lt;li&gt;Two-thirds have experienced operational disruptions and&lt;/li&gt;
&lt;li&gt;More than 60% have seen negative impacts to their brand.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Based on these serious impacts, we might assume that security is becoming a higher priority. However, nearly 80% of DevOps teams reported just the opposite, saying they were under pressure to shorten release cycles. As a result, more than 50% of organizations reported sometimes skipping security scans to meet deadlines. &lt;/p&gt;

&lt;h2&gt;
  
  
  Preventing cyber-attacks and IT vulnerabilities
&lt;/h2&gt;

&lt;p&gt;These results show that companies are in a dilemma: meet deadlines or risk the repercussions of successful cyber-attacks against themselves or their products. The simple solution would be to &lt;em&gt;simply&lt;/em&gt; value security over new features. But nothing is simple when you must constantly innovate to succeed in today's fast-paced world.&lt;br&gt;
Another solution to the dilemma is to equip development teams with the knowledge and tools to prevent security vulnerabilities from the start, when the code is first written. There are several ways to do this.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuing education in any form to proactively improve IT security.
&lt;/h3&gt;

&lt;p&gt;There are a variety of different offerings in the area of training: classroom courses, eLearning, micro-learning, self-study, competitions, etc. Each of these forms of learning has its place, as everyone has different preferences and strengths when it comes to learning. In addition, the different forms of learning offer advantages at different levels of prior knowledge, so a combination is often very helpful. For example, the basics can first be learned in a classic training course and then internalized through micro-learning or a competition. The important thing is to &lt;a title="Want developers to code with security awareness? Bring the training to them. | Cloudogu Blog" href="https://cloudogu.com/en/blog/security-learning-strategies"&gt;bring the training to the developers&lt;/a&gt; and not the other way round.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Classical &lt;strong&gt;training&lt;/strong&gt; courses have, among other things, the advantage that they impart knowledge in a short period of time without distractions, and that individual questions and requirements can be addressed. A disadvantage is that they often cannot take place promptly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;eLearning&lt;/strong&gt; offers the freedom to work on the learning content at one's own pace, even in between other tasks. However, this often leads to the problem that keeping up with the lessons is easily lost in the daily work routine alongside other duties.&lt;/li&gt;
&lt;li&gt;The situation is similar with &lt;strong&gt;micro-learning&lt;/strong&gt;, in which learning content is broken down into small modules and ideally integrated into the daily work routine in a context-related manner. An example of this is the Secure Code Warrior plugin for SCM-Manager (see below). The contextual integration has the advantage that the learning units do not compete with other tasks, because they are integrated into the tasks themselves.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;self-study&lt;/strong&gt; there is no fixed curriculum. This has the advantage that developers acquire exactly the knowledge they really need; the disadvantage is that all content must be researched independently.&lt;/li&gt;
&lt;li&gt;At first glance, &lt;strong&gt;competitions&lt;/strong&gt; seem suitable &lt;em&gt;only&lt;/em&gt; for deepening existing knowledge. However, they also offer the opportunity to gain new knowledge by working on problems that are new and have to be solved creatively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6QISwRHU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yej0sstvi3m37sjzkqpw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6QISwRHU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yej0sstvi3m37sjzkqpw.jpg" alt="Secure Code Warrior tournament by Cloudogu"&gt;&lt;/a&gt;&lt;br&gt;
Learn more about the free tournament &lt;a title="Secure Code Warrior tournament by Cloudogu" href="https://my.cloudogu.com/scw-tournament"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Micro-learning: improving security through continuous and contextual learning
&lt;/h3&gt;

&lt;p&gt;Contextual learning offers the opportunity to closely integrate practice and theory to improve learning outcomes. For this purpose, suitable learning content, e.g. in the form of micro-learning, is displayed during the processing of tasks. An example of this is the integration of videos and tasks on security vulnerabilities in the code review process.&lt;/p&gt;

&lt;p&gt;Through such integrations, learning content is provided exactly when team members are working on tasks with potential security vulnerabilities. An example of this is the Secure Code Warrior plugin for SCM Manager mentioned earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;GitLab's DevSecOps Report 2021 shows that software security, while perceived as an important issue, is prioritized lower than the development of new features in many organizations. This prioritization is unlikely to change much in the future. Therefore, it is necessary to change from a reactive to a proactive approach in order to meet the security requirements while &lt;a title="Is it possible to shorten release cycles and improve security at the same time? | Cloudogu Blog" href="https://dev.to/cloudogu/shorter-release-cycles-through-improved-security-496j"&gt;keeping release cycles short&lt;/a&gt;. This can be achieved through different types of training. &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>security</category>
      <category>webdev</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Learn to secure your app while coding it ...</title>
      <dc:creator>jeromesch</dc:creator>
      <pubDate>Fri, 17 Sep 2021 10:05:29 +0000</pubDate>
      <link>https://forem.com/cloudogu/learn-to-secure-your-app-while-coding-it-2bdd</link>
      <guid>https://forem.com/cloudogu/learn-to-secure-your-app-while-coding-it-2bdd</guid>
      <description>&lt;p&gt;Every Dev considers his Application as "safe" until he get's proven wrong.&lt;br&gt;
Look up at the OWASP Top10 and tell me what you can check as "done" in your current project state:&lt;br&gt;
-Broken Access Control &lt;br&gt;
-Cryptographic Failures&lt;br&gt;
-Injection &lt;br&gt;
-Insecure Design&lt;br&gt;
-Security Misconfiguration&lt;br&gt;
-Vulnerable and Outdated Components&lt;br&gt;
-Identification and Authentication Failures&lt;br&gt;
-Software and Data Integrity Failures&lt;br&gt;
-Security Logging and Monitoring Failures&lt;br&gt;
-Server-Side Request Forgery &lt;/p&gt;

&lt;p&gt;If you want to test your "secure coding skills", there is currently a tournament about exactly that:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://community.cloudogu.com/t/secure-coding-tournament-how-to-take-part/189"&gt;https://community.cloudogu.com/t/secure-coding-tournament-how-to-take-part/189&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Secure Code Warrior (SCW) automatically reviews the code you write against the vulnerabilities listed above, rates your overall score, and shows where and how you can improve your skills.&lt;/p&gt;

&lt;p&gt;Supported languages and technologies:&lt;br&gt;
Kubernetes&lt;br&gt;
Java&lt;br&gt;
C# / MVC&lt;br&gt;
JavaScript / React&lt;br&gt;
Go&lt;br&gt;
PHP&lt;br&gt;
Python&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>cybersecurity</category>
      <category>javascript</category>
      <category>react</category>
    </item>
    <item>
      <title>Shorter release cycles through improved security</title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Tue, 07 Sep 2021 15:55:43 +0000</pubDate>
      <link>https://forem.com/cloudogu/shorter-release-cycles-through-improved-security-496j</link>
      <guid>https://forem.com/cloudogu/shorter-release-cycles-through-improved-security-496j</guid>
      <description>&lt;p&gt;The world’s reliance on software is already great and will continue to grow. That's why the security of applications will also become increasingly important. This development is further reinforced by the global pandemic, as more businesses and services have increased their online availability. The U.S. Federal Bureau of Investigation (FBI), for example, has reported a &lt;a rel="nofollow" title="Article about increased cyber attacks during the COVID pandemic" href="https://www.imcgrupo.com/covid-19-news-fbi-reports-300-increase-in-reported-cybercrimes/"&gt;300% increase in cybercrime&lt;/a&gt; since the beginning of the pandemic: This shows that with the growing reliance on software and applications, the risk of attacks is also increasing.&lt;/p&gt;

&lt;p&gt;Software developing companies are therefore in a dilemma: on the one hand, they have to release new products and upgrades quickly in order to be successful on the market. On the other hand, these should also be secure in order to avoid successful attacks. However, security is often seen as a brake on rapid development - and is therefore unfortunately neglected.&lt;br&gt;
Note: some tests, penetration tests for example, can take a very long time (2 weeks) - a contradiction to short development cycles.&lt;/p&gt;

&lt;p&gt;This perception stems from the fact that development and security teams worked separately for a very long time. Development teams write new code and get it ready for release; before the release, the security team looks at the code again and makes security requests before go-live. This loop can prolong the development process considerably. It can also create tension between the teams, as the feedback is often perceived more as criticism than as a security improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevSecOps for better integration of security in software development
&lt;/h2&gt;

&lt;p&gt;The DevSecOps approach offers an answer to these challenges. It is an evolution of the DevOps approach that allows development and operations teams to work more closely together. In DevSecOps, security teams are integrated into DevOps teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  From being reactive to being proactive in security.
&lt;/h2&gt;

&lt;p&gt;No matter how teams are organized, in most cases (security) mistakes are corrected reactively. This is usually done in these ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated tests are written to check for known vulnerabilities.&lt;/li&gt;
&lt;li&gt;A bug is reported that was discovered either in manual tests or in production. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal should be to detect bugs as early as possible so that the effort to fix them is as low as possible. The shift-left approach does exactly that and has been around for a very long time. However, no matter how early a bug is found in the development process, someone from the team still has to go back into old code to correct it. Reducing the effort further only works by proactively preventing bugs. For more information about how expensive it is to fix security issues, see &lt;a title="GitLab DevSecOps Report – Proactively prevent vulnerabilities" href="https://dev.to/cloudogu/gitlab-devsecops-report-2021-proactively-prevent-vulnerabilities-12c5"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn to write secure code and not discover vulnerabilities
&lt;/h2&gt;

&lt;p&gt;In order to proactively prevent (security) bugs, developers must be provided with appropriate security guidelines and knowledge about security vulnerabilities. The Open Web Application Security Project (OWASP) has defined 10 measures for this purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define security requirements&lt;/li&gt;
&lt;li&gt;Use security frameworks and libraries&lt;/li&gt;
&lt;li&gt;Secure access to databases&lt;/li&gt;
&lt;li&gt;Encrypt and escape data&lt;/li&gt;
&lt;li&gt;Validate all input&lt;/li&gt;
&lt;li&gt;Implement digital identities&lt;/li&gt;
&lt;li&gt;Enforce authorization systems&lt;/li&gt;
&lt;li&gt;Protect data everywhere&lt;/li&gt;
&lt;li&gt;Log and monitor security-relevant events&lt;/li&gt;
&lt;li&gt;Handle all errors and exceptions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Learning approaches to increase security in software development
&lt;/h2&gt;

&lt;p&gt;In order for developers to be able to implement these measures, they must of course have the necessary knowledge. There is a wide variety of approaches for this, which can be used depending on the topic and personal preference. The important thing is that the learning material should not only be &lt;a title="Want developers to code with security awareness? Bring the training to them. | Cloudogu Blog" href="https://cloudogu.com/en/blog/security-learning-strategies"&gt;easily accessible, but also timely&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security training in DevSecOps
&lt;/h3&gt;

&lt;p&gt;Traditionally, security is not considered one of the core tasks of developers, which is the implementation of new features. That's why training on security topics needs to be fun, challenging, and engaging in order to impart this critical knowledge. It is important that trainings are conducted using real code, in the respective language, and matched to prior knowledge, so that the knowledge transfers directly.&lt;br&gt;
Since the goal is to strengthen the security awareness of the developers, it makes sense to conduct trainings regularly in small units rather than, for example, attending a single training once a year.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tournaments for more web security
&lt;/h3&gt;

&lt;p&gt;Learning, as mentioned before, should be fun, challenging, and engaging for developers. All of this is achieved through contests where participants compete to win by fixing security vulnerabilities. Such contests can last as little as a few hours or as long as several days. And because the topic of web security is so important to us, we have planned a competition on the topic of security to coincide with the launch of the Secure Code Warrior &lt;a title="Secure Code Warrior plugin for SCM-Manager" href="https://my.cloudogu.com/scw-for-scm-manager"&gt;plugin in our SCM-Manager&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6QISwRHU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yej0sstvi3m37sjzkqpw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6QISwRHU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yej0sstvi3m37sjzkqpw.jpg" alt="Secure Code Warrior tournament by Cloudogu"&gt;&lt;/a&gt;&lt;br&gt;
Learn more about the free tournament &lt;a title="Secure Code Warrior tournament by Cloudogu" href="https://my.cloudogu.com/scw-tournament"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automated real-time coaching
&lt;/h3&gt;

&lt;p&gt;Meanwhile, there are extensions for development environments that check the code in real time against specified security policies. This way, developers get feedback on whether or not their code meets security specifications right as they are writing the code. Ideally, they are given direct suggestions for corrections, as with a spell checker. In addition, suitable exercises can also be suggested. This is very much in the spirit of gamification or e-learning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Assessing the security level
&lt;/h3&gt;

&lt;p&gt;In order to define measures that improve security, it is first necessary to determine the current status; this is the only way to identify weak points. Ideally, the status quo should not be determined just once, but progress should be monitored continuously.&lt;br&gt;
One way to see the team's current state of knowledge is to introduce badges or awards for developers who have completed certain trainings, for example.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To make software applications more secure without extending the development time, it is necessary to proactively write secure code. To achieve this, traditional structures must be broken down and more responsibility for the security of applications must be transferred directly to the developers. The basic prerequisite for this is that the developers have the necessary knowledge. Fortunately, there are now many interactive and engaging ways of imparting knowledge, such as training, tournaments, real-time coaching and assessments, which provide knowledge with a high level of practical relevance in small learning units.&lt;/p&gt;

</description>
      <category>devsecops</category>
      <category>devops</category>
      <category>security</category>
      <category>codequality</category>
    </item>
    <item>
      <title>Automation assistants: GitOps tools in comparison</title>
      <dc:creator>schnatterer</dc:creator>
      <pubDate>Thu, 12 Aug 2021 13:20:06 +0000</pubDate>
      <link>https://forem.com/cloudogu/automation-assistants-gitops-tools-in-comparison-28ja</link>
      <guid>https://forem.com/cloudogu/automation-assistants-gitops-tools-in-comparison-28ja</guid>
      <description>&lt;p&gt;If you want to switch from classic CI/CD environments to GitOps, then you can choose from any of a large number of available tools. However, it is not always easy to tell which features they support and how suitable they are for your project at first glance. This article provides help in making a decision.&lt;/p&gt;

&lt;p&gt;The term GitOps is a combination of the name of the source code management system Git and the abbreviation Ops as in operations. The idea for adding this additional tool to the DevOps toolbox comes from the Kubernetes environment, and it promises a new level of IT automation. Like the continuous delivery approach, GitOps relies on maintaining all information in source code management. The difference, however, is that the deployment environment synchronizes its state directly from Git, and the CI server is not responsible for the roll-out. The configuration must therefore be versioned in Git. Since this is treated like code, it is referred to as "Infrastructure as Code".&lt;/p&gt;

&lt;p&gt;It is no wonder then that there now is a growing number of GitOps tools to choose from. But what range of features do they offer? Is a single one sufficient, and can it automate "everything"? This article answers these and similar questions on the basis of specific examples. It lays out selection criteria and illustrates them by comparing the well-known GitOps tools ArgoCD and Flux v2.&lt;/p&gt;

&lt;h2&gt;
  
  
  A confusing market
&lt;/h2&gt;

&lt;p&gt;Compiling an overview of GitOps tools available on the market is not as trivial a task as it might sound. On the one hand, this is due to the fact that there is a certain amount of hype surrounding the term, and vendors like to add the term GitOps to their products for marketing purposes. On the other hand, it is difficult to clearly define the term GitOps, and it is used on different levels of the stack (from physical infrastructure to the applications that are running in the cloud) in varying levels of maturity. The &lt;a rel="noreferrer noopener" title="Article by Schlomo Schapiro in iX magazine" href="https://www.heise.de/select/ix/2021/4/2032116550453239806"&gt;article “Hands off!”&lt;/a&gt; (published in iX 4/2021 - only available in German) delves into the topic in more detail.&lt;/p&gt;

&lt;p&gt;Websites such as &lt;a rel="noreferrer noopener" title="To awesome-gitops" href="https://github.com/weaveworks/awesome-gitops"&gt;awesome-gitops&lt;/a&gt;, which was launched by Weaveworks, or &lt;a rel="noreferrer noopener" title="To gitops.tech" href="https://www.gitops.tech/"&gt;gitops.tech&lt;/a&gt;, which was put together by INNOQ employees, provide an introductory overview of the available tools. When you take a closer look, you will see that the listed tools can be used to perform a wide variety of tasks related to implementing GitOps, and of course they also differ from one another in terms of their adoption, maturity, and how actively they are maintained. This article identifies three categories from the various use cases: tools for Kubernetes, supplementary tools, and tools close to the infrastructure. In addition, we compiled tables that summarize the tools and their properties. The tables also contain various Git and GitHub-based metrics (current as of February 2021) that allow you to better assess their adoption, maturity, and how actively they are maintained.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools for Kubernetes
&lt;/h2&gt;

&lt;p&gt;When it comes to GitOps tools, the first thing that usually comes up is the topic of operators for Kubernetes. In general, an operator (which is often also called a “custom controller”) is an application that runs in the Kubernetes cluster and automates operational tasks there (see Figure 1). This operator pattern is also used to implement GitOps. The GitOps operator is used to run the reconciliation loop, which synchronizes the target state declared in the Git repositories with the actual state of the cluster. In the event of differences (e.g., due to a new commit in Git), the operator takes care of convergence to the target state by applying Kubernetes resources to the API server. Experience has shown that additional features that go beyond the core feature set are required for efficient operation.&lt;br&gt;
These include observability and a command line interface (CLI) or a user interface (UI). We will learn more about this later in the “Criteria for selecting the right tool” sidebar (at the end of the article).&lt;/p&gt;
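&lt;p&gt;The reconciliation loop can be sketched in a few lines. The following is an illustrative simulation only, not a real operator: the dictionaries stand in for the state declared in Git and the state on the API server, and all names are hypothetical.&lt;/p&gt;

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """Return the changes needed to make `actual` match `desired`."""
    changes = {}
    for name, manifest in desired.items():
        if actual.get(name) != manifest:
            changes[name] = manifest  # create or update drifted resources
    for name in set(actual) - set(desired):
        changes[name] = None  # prune resources that were removed from Git
    return changes


def apply_changes(cluster: dict, changes: dict) -> dict:
    """Apply the computed changes to the (simulated) cluster state."""
    for name, manifest in changes.items():
        if manifest is None:
            cluster.pop(name, None)
        else:
            cluster[name] = manifest
    return cluster


# One iteration of the loop: a commit bumped the image tag and removed a service.
git_state = {"deployment/app": {"image": "app:2.0"}}
cluster_state = {"deployment/app": {"image": "app:1.0"}, "service/old": {}}
cluster_state = apply_changes(cluster_state, reconcile(git_state, cluster_state))
print(cluster_state)  # the cluster now matches the state declared in Git
```

&lt;p&gt;Running the loop again immediately afterwards yields no changes, which is exactly the convergence property the operator pattern relies on.&lt;/p&gt;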

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2FGitOps-Tools-Kubernetes.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2FGitOps-Tools-Kubernetes.jpg" alt="Table with GitOps tools for Kubernetes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a rel="noreferrer noopener" title="To the Weaveworks blog post" href="https://www.weave.works/blog/gitops-operations-by-pull-request"&gt;blog post&lt;/a&gt; by Weaveworks, which coined the term GitOps in 2017, also names the first GitOps operator: &lt;a rel="noreferrer noopener" title="Flux" href="https://github.com/fluxcd/flux"&gt;Flux&lt;/a&gt;. In the meantime, it has been completely rewritten as &lt;a rel="noreferrer noopener" title="To Flux v2" href="https://github.com/fluxcd/flux2"&gt;Flux v2&lt;/a&gt;. In addition to Flux and Flux v2, the Flux project develops further components. Weaveworks has since handed the project over to the Cloud Native Computing Foundation (CNCF), where it has reached the second maturity level, the incubation phase.&lt;/p&gt;

&lt;p&gt;&lt;a rel="noreferrer noopener" title="To ArgoCD" href="https://github.com/argoproj/argo-cd"&gt;ArgoCD&lt;/a&gt; offers an alternative to Flux. It belongs to the Argo project, which is also based at the CNCF, and which is, just like Flux, in the second maturity level (incubator phase). A comprehensive comparison of the two GitOps operators can be found later in the article.&lt;/p&gt;

&lt;p&gt;A newer competitor is &lt;a rel="noreferrer noopener" title="To Fleet" href="https://github.com/rancher/fleet"&gt;Fleet&lt;/a&gt;, which is developed by Rancher. Its distinguishing feature is that it can manage not just one cluster, but a whole fleet of them. &lt;a rel="noreferrer noopener" title="PipeCD" href="https://github.com/pipe-cd/pipe"&gt;PipeCD&lt;/a&gt; is similarly young and has an even broader focus. Like Fleet, it promises the ability to manage multiple Kubernetes clusters, and it also offers a UI. In addition, it can handle Terraform and some services from the major cloud providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitOps combined with CI
&lt;/h2&gt;

&lt;p&gt;Jenkins X also offers a broader focus, but in a different area. Contrary to what the name suggests, it differs greatly from the well-known Jenkins server. It's not a monolithic tool, but rather it consists of different components, such as Tekton for running pipelines and Kaniko for building images. At the heart of Jenkins X is a &lt;a rel="noreferrer noopener" title="Jenkins X CLI" href="https://github.com/jenkins-x/jx"&gt;CLI&lt;/a&gt; that the developers have rewritten for the current &lt;a rel="noreferrer noopener" title="Jenkins X Version 3" href="https://github.com/jenkins-x/jx-cli"&gt;version 3&lt;/a&gt; along with some fundamental architectural changes.&lt;/p&gt;

&lt;p&gt;Overall, Jenkins X is more powerful than ArgoCD and Flux, and therefore more difficult to integrate into existing workflows, but it offers a significantly larger range of features. It is opinionated: it relieves the user of having to make many decisions, but it also reduces flexibility. Jenkins X is a complete continuous integration and continuous delivery (CI/CD) package. In contrast, the pure GitOps operators need an additional CI server for many use cases, which, for example, automates tests, builds images, and makes them available in the registry.&lt;/p&gt;

&lt;p&gt;&lt;a rel="noreferrer noopener" title="To werf" href="https://github.com/werf/werf"&gt;Werf&lt;/a&gt; positions itself somewhere between a pure GitOps operator and a full CI/CD approach. The project was started under the name dapp, and then renamed werf in early 2019. Like an operator, it can apply Kubernetes resources from Git to a cluster. However, it runs outside of the cluster. This means that it does not utilize the pull principle, which is often associated with GitOps, in which the cluster itself pulls its target state from Git. Unlike ArgoCD and Flux, werf can also build images. An operator that runs in Kubernetes is planned (as of version v1.2 beta).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2FGitOps-CIOps_en.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2FGitOps-CIOps_en.jpg" alt="Comparison of CIOps and GitOps pipelines"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  GitOps loves operators
&lt;/h2&gt;

&lt;p&gt;A central point of GitOps is the complete declarative description of the state in Git. During the process of converting from the imperative, classic CI/CD (now referred to as CI-Ops in some places to differentiate it from GitOps) certain questions will naturally arise. How do you continue to use templating tools such as Helm or Kustomize that were previously executed by the CI server?&lt;/p&gt;

&lt;p&gt;The answer to this is usually: by using additional operators. These expand the Kubernetes API server using so-called Custom Resource Definitions (CRDs) and then listen for changes to the associated Custom Resources (CRs). These CRs allow the desired state to be described declaratively, which is a perfect match for GitOps.&lt;/p&gt;

&lt;p&gt;Flux brings with it Helm and Kustomize operators, which allow Helm releases and Kustomizations to be described declaratively via CR. If such a CR is applied to the cluster (typically using the GitOps operator), the Helm or Kustomize operator takes over the templating or overlay. As an alternative to the operator, the result of templating can be written to Git (for example, via a CI pipeline). The advantages include reduced efforts for maintaining infrastructure and more transparency in the Git repository. However, these advantages are counterbalanced by the need to create comprehensive and difficult-to-maintain YAML descriptions in Git.&lt;/p&gt;
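&lt;p&gt;As an illustration, a Helm release described declaratively as a CR could look roughly like this (field names follow the Flux v2 helm-controller API as of early 2021; chart name, version, and namespace are placeholders):&lt;/p&gt;

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      version: "5.x"
      sourceRef:
        kind: HelmRepository
        name: podinfo
  values:          # overrides that Helm would otherwise receive via --set or -f
    replicaCount: 2
```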

&lt;p&gt;Storing the entire state in Git inevitably means that secrets end up there, too. If they are left unencrypted, they present a large attack surface. The answer to the question of how encryption and decryption can be combined with GitOps is yet again: through additional operators.&lt;/p&gt;

&lt;h2&gt;
  
  
  Components for better security
&lt;/h2&gt;

&lt;p&gt;One simple option that works well together with GitOps is Bitnami's &lt;a rel="noreferrer noopener" title="To Sealed Secrets" href="https://github.com/bitnami-labs/sealed-secrets"&gt;Sealed Secrets&lt;/a&gt; operator. It manages the key material in the cluster itself. There is a CLI for encryption that requires a connection to the cluster.&lt;/p&gt;
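&lt;p&gt;In practice, a developer encrypts a regular Secret with the kubeseal CLI and commits only the result to Git. A SealedSecret checked into the repository might look roughly like this (names and the ciphertext are placeholders):&lt;/p&gt;

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: my-app
spec:
  encryptedData:
    # ciphertext produced by kubeseal; only the operator in the cluster
    # holds the private key needed to decrypt it
    password: AgBy8hCi...
```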

&lt;p&gt;&lt;a rel="noreferrer noopener" title="To SOPS" href="https://github.com/mozilla/sops"&gt;SOPS&lt;/a&gt; that was developed by Mozilla offers significantly more options, though at the expense of a more complex configuration. Here, the key material can come from the key management systems (KMS) of the major cloud providers, from your own HashiCorp Vault, or from configured PGP keys. SOPS itself does not contain an operator, but there are different ways to use it with GitOps. Flux v2 offers native support. There is also the &lt;a rel="noreferrer noopener" title="To helm secrets" href="https://github.com/jkroepke/helm-secrets"&gt;helm-secrets&lt;/a&gt; plug-in, which can also be used in ArgoCD with the manual configuration. There is also a &lt;a rel="noreferrer noopener" title="To sops-secrets" href="https://github.com/isindir/sops-secrets-operator"&gt;sops-secrets&lt;/a&gt; operator that has been developed by a third party.&lt;/p&gt;
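&lt;p&gt;SOPS is configured via a .sops.yaml file in the repository, which defines which files to encrypt and which keys to use. A minimal sketch (the PGP fingerprint is a placeholder):&lt;/p&gt;

```yaml
# .sops.yaml in the repository root
creation_rules:
  - path_regex: .*secret.*\.yaml
    encrypted_regex: ^(data|stringData)$   # encrypt only the secret payload, keep metadata readable
    pgp: "0000000000000000000000000000000000000000"  # placeholder fingerprint
```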

&lt;p&gt;&lt;a rel="noreferrer noopener" title="To Kamus" href="https://github.com/Soluto/kamus"&gt;Kamus&lt;/a&gt; may represent a compromise between Sealed Secrets and SOPS. It was created especially for the GitOps use case and includes an operator. It can either manage the key material itself or obtain it from the KMS of the cloud providers. Another special feature is that Kamus encrypts secrets directly for an application. They are then decrypted by the application itself or by an init container. This means that the unencrypted secret is never present on the API server and ideally also not in an environment variable within the container.&lt;/p&gt;

&lt;p&gt;If you are using an external KMS in any case, then there are other options, such as the &lt;a rel="noreferrer noopener" title="To kubernetes-external-secrets" href="https://github.com/external-secrets/kubernetes-external-secrets"&gt;kubernetes-external-secrets&lt;/a&gt; operator that was originally started by GoDaddy and the &lt;a rel="noreferrer noopener" title="To externalsecret-operator" href="https://github.com/ContainerSolutions/externalsecret-operator"&gt;externalsecret-operator&lt;/a&gt; from Container Solutions. If you use HashiCorp Vault, you also have the option of using the &lt;a rel="noreferrer noopener" title="To Vault Secrets" href="https://github.com/ricoberger/vault-secrets-operator"&gt;Vault Secrets&lt;/a&gt; operator. This works similarly to the Sealed Secrets Operator, but instead of managing its own key material, it retrieves the secrets from Vault. The &lt;a rel="noreferrer noopener" title="To CNCF Technology Radar" href="https://www.cncf.io/announcements/2021/02/23/cncf-provides-insights-into-secrets-management-tools-with-latest-end-user-technology-radar/"&gt;CNCF Technology Radar&lt;/a&gt; from January 2021 provides an overview of the types of tools that are available for secrets management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2FAdditional-GitOps-Tools.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2FAdditional-GitOps-Tools.jpg" alt="Table with additional GitOps tools"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Supplementary GitOps operators can also be used for deployment strategies, such as canary releases, A/B tests, and blue/green deployments, which have now been grouped under the term “progressive delivery”. The resources of most GitOps operators are not sufficient for this. One solution is &lt;a rel="noreferrer noopener" title="To flagger" href="https://github.com/fluxcd/flagger"&gt;Flagger&lt;/a&gt;. The tool that was launched by Weaveworks is now being developed as part of the Flux project. The Argo project also has an operator for this use case: &lt;a rel="noreferrer noopener" title="To Argo Rollouts" href="https://github.com/argoproj/argo-rollouts/"&gt;Argo Rollouts&lt;/a&gt;. Both offer CRs for implementing progressive delivery strategies in interaction with various ingress controllers and service meshes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools Close to Infrastructure
&lt;/h2&gt;

&lt;p&gt;The term GitOps arose originally in the context of application deployments in Kubernetes. The tools for this use case are very mature. They are not limited to it, however: GitOps operators can also be used to roll out Kubernetes clusters. One scenario is to use the &lt;a rel="noreferrer noopener" title="To Kubernetes Cluster API" href="https://github.com/kubernetes-sigs/cluster-api"&gt;Kubernetes Cluster API&lt;/a&gt;, which was started as kube-deploy and renamed Cluster API (CAPI) in 2018. This can be implemented as follows: A GitOps operator runs in a management cluster and applies the CRs (defined by CAPI CRDs) stored in Git to the cluster. An infrastructure provider also running in the cluster reads these CRs and applies them to a target cluster.&lt;/p&gt;
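&lt;p&gt;For illustration, the target state stored in Git for such a management cluster could contain a Cluster CR along these lines (API versions as of early 2021; names and the AWS provider are placeholder choices):&lt;/p&gt;

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:              # handled by the infrastructure provider
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: demo-cluster
```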

&lt;p&gt;As soon as this cluster is in place, the question arises of how applications can be rolled out there. Flux and ArgoCD can access the API server directly for this purpose. For other operators, such as Fleet and PipeCD, the architecture is specially designed for this case. They each offer one component for the management and target cluster: With Fleet, a manager operator connects to the agent operators, and with PipeCD, a control plane connects to daemons. So this does not require access to the API server from outside the cluster.&lt;/p&gt;

&lt;p&gt;In addition to creating Kubernetes clusters, there is also an increasing number of opportunities to use various Infrastructure-as-Code (IaC) tools, such as Terraform, with GitOps. As was already mentioned, PipeCD offers support for Terraform. Terraform's vendor, HashiCorp, now also offers an official &lt;a rel="noreferrer noopener" title="To Terraform Kubernetes Operator" href="https://github.com/hashicorp/terraform-k8s"&gt;Terraform Kubernetes operator&lt;/a&gt;. However, it needs access to HashiCorp's Terraform Cloud. Alternatively, there are also third-party operators that can function without Terraform Cloud, such as the one developed by &lt;a rel="noreferrer noopener" title="To Rancher" href="https://github.com/rancher/terraform-controller"&gt;Rancher&lt;/a&gt;. However, it is still in alpha stage.&lt;/p&gt;

&lt;p&gt;Another popular alternative for GitOps with Terraform is &lt;a rel="noreferrer noopener" title="To Atlantis" href="https://github.com/runatlantis/atlantis"&gt;Atlantis&lt;/a&gt;: When a pull request is created, it generates the Terraform plan from the Terraform files found in the Git repository and adds it to the pull request as a comment. After merging, it applies the Terraform plan. Atlantis is compatible with various Git providers. It can be flexibly hosted as a binary, a Docker image, or a Helm chart for Kubernetes. This makes it one of the few tools that will be of interest to those who want to implement GitOps without Kubernetes. Ansible Tower (and thus its open-source upstream &lt;a rel="noreferrer noopener" title="To AWX" href="https://github.com/ansible/awx"&gt;AWX&lt;/a&gt;) is also independent of Kubernetes. Red Hat considers its range of features to be &lt;a rel="noreferrer noopener" title="Comparable to a GitOps operator" href="https://www.ansible.com/blog/ops-by-pull-request-an-ansible-gitops-story"&gt;comparable to a GitOps operator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2FTools-close-to-Infrastructure.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2FTools-close-to-Infrastructure.jpg" alt="Table with tools close to infrastructure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The example of Atlantis is symbolic of GitOps outside of Kubernetes. The pull principle of GitOps is more difficult to implement without a platform like Kubernetes that can run an operator. What often remains is "Operations by Pull Request". However, with CI-Ops the creation, modification, or merging of pull requests also triggers a CI/CD pipeline, which raises the question of whether the use of pull requests alone is sufficient for operations to be called GitOps. The distinction is blurrier here. In fact, Atlantis does not refer to itself as a GitOps tool. But there are other examples, such as the Terraform alternative Pulumi. It works together with CI tools, such as GitHub Actions and GitLab CI, and it can be used to comment on pull requests on the associated platforms. Pulumi &lt;a rel="noreferrer noopener" title="Pulumi calls this GitOps" href="https://www.pulumi.com/product/github-actions/"&gt;calls this GitOps&lt;/a&gt;.&lt;/p&gt;
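&lt;p&gt;To give an impression of the "Operations by Pull Request" workflow, a minimal Atlantis repository configuration could look like this (directory and project names are placeholders):&lt;/p&gt;

```yaml
# atlantis.yaml in the repository root
version: 3
projects:
  - name: production
    dir: terraform/production
    autoplan:
      when_modified: ["*.tf"]   # comment a new plan on the PR when .tf files change
```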

&lt;p&gt;Finally, there is one tool in particular that we should not leave out when creating a list of GitOps tools that are close to the infrastructure: &lt;a rel="noreferrer noopener" title="To Ignite" href="https://github.com/weaveworks/ignite"&gt;Ignite&lt;/a&gt;, which was also launched by Weaveworks. It allows you to manage virtual machines (VMs) via GitOps. In order to do this, it runs a daemon on the physical host that can start and stop VMs in accordance with a description that is stored in a Git repository. Firecracker, which was originally launched by AWS, is used as virtualization technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  So much and yet so little
&lt;/h2&gt;

&lt;p&gt;Ironically, despite the abundance of tools, there are use cases that cannot yet be automated using GitOps. The selection of GitOps tools that are close to the infrastructure is significantly smaller than that for Kubernetes: the closer you get to the physical infrastructure, the thinner the tool support becomes. Examples of operational tasks that cannot yet be done via GitOps are starting physical machines, for example via the Preboot Execution Environment (PXE), or performing firmware upgrades on devices such as switches.&lt;/p&gt;

&lt;p&gt;But even with the additional operators, not all use cases in operation can be fully automated with GitOps. One example is the Horizontal Pod Autoscaler in Kubernetes. To be compatible with GitOps, it would have to write the change in the number of replicas to a Git repository instead of sending it directly to the API server.&lt;/p&gt;

&lt;p&gt;Another, even more complex example are the topics of persistence, backup, restore, and disaster recovery. Backup creation can be automated reliably using such operators as Velero. However, restoring a backup requires manual intervention. You can roll back to the state described in the GitOps repository, but this does not restore the state saved by the backup operator. Conversely, performing a manual restore using the backup operator does not reset the state in the GitOps repository.&lt;/p&gt;

&lt;p&gt;Despite the abundant number of tools, there are still gaps in the feature set of the GitOps tool chain. Given the current dynamic of development, there is room for optimism that these gaps will be closed in the foreseeable future. In general, many of the available GitOps tools are based on Kubernetes. Even setting up the cluster or other infrastructure requires a Kubernetes cluster. Implementing GitOps without Kubernetes might require a bit of pioneering.&lt;/p&gt;

&lt;p&gt;From the abundance of tools described above, it is now important to find the right one that satisfies your own requirements. The decision that you make will depend heavily on your use case. The "Criteria for selecting the right tool" sidebar (at the end of the article) will show you what to look out for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operators in comparison: ArgoCD vs. Flux v2
&lt;/h2&gt;

&lt;p&gt;An example can better illustrate the criteria listed in the sidebar, so it makes sense to compare the two best-known GitOps operators, ArgoCD and Flux v2. As both projects continue to develop rapidly, the comparison presented here can only be treated as a snapshot.&lt;/p&gt;

&lt;p&gt;In order to gain an initial overview of similarities and differences, it is worth taking a look at the feature lists.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Both Flux v2 and ArgoCD have the following capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RBAC&lt;/li&gt;
&lt;li&gt;Multi-tenancy&lt;/li&gt;
&lt;li&gt;Observability (health status, notifications, and metrics)&lt;/li&gt;
&lt;li&gt;CLI for automation and CI integration&lt;/li&gt;
&lt;li&gt;Start of synchronization via a webhook&lt;/li&gt;
&lt;li&gt;Calling webhooks using events (ArgoCD: hooks, Flux: notifications)&lt;/li&gt;
&lt;li&gt;Rollback/roll-anywhere for certain commits in the Git repository&lt;/li&gt;
&lt;li&gt;Support for Helm and Kustomize&lt;/li&gt;
&lt;li&gt;Multi-cluster support&lt;/li&gt;
&lt;li&gt;Execution of the container as an unprivileged user&lt;/li&gt;
&lt;li&gt;Security context is configurable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Features only in ArgoCD:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support for Ksonnet, Jsonnet, and others via the plug-in mechanism&lt;/li&gt;
&lt;li&gt;Web interface to administer and monitor applications in real time&lt;/li&gt;
&lt;li&gt;SSO integrations for UI and CLI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Features only in Flux v2:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Definition of dependencies in Helm and Kustomize applications&lt;/li&gt;
&lt;li&gt;Support for SOPS&lt;/li&gt;
&lt;li&gt;Automatic updating of the image version in the Git repository&lt;/li&gt;
&lt;li&gt;Authentication of the CLI via Kubeconfig&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Installation and configuration
&lt;/h3&gt;

&lt;p&gt;ArgoCD can either be installed using simple Kubernetes resources and then patched or it can be rolled out using a configurable Helm Chart. All of the common settings are available, such as ingress and service accounts. Repositories, projects, applications, and various SSO integrations can also be preconfigured for deployment. The RBAC configuration is also extensive. You can define policies, roles, and groups yourself.&lt;/p&gt;

&lt;p&gt;Another installation variant is the additional &lt;a rel="noreferrer noopener" title="To ArgoCD operator" href="https://github.com/argoproj-labs/argocd-operator/"&gt;ArgoCD operator&lt;/a&gt;. This allows the actual ArgoCD components to be installed and configured via CRD. It is not documented how you can configure ArgoCD yourself via GitOps. This is conceivable, for example, using the ArgoCD operator. It remains to be determined whether this will work reliably and, above all, whether it supports continued operation via GitOps in the event of an error.&lt;/p&gt;

&lt;p&gt;The Flux v2 operators are rolled out primarily via the CLI. However, it can also be used to generate all the necessary Kubernetes resources and then deploy them to the cluster. There is currently no Helm chart, which limits the usability of the common application mechanisms. During the initial installation, Flux sets up a Git repository. From this point forward, you can also configure the operator with GitOps. The CLI is extensive and allows you to control all aspects of the Flux v2 portfolio. You can use it to create, delete, read, start, and stop all Flux-v2-specific resources. This makes Flux v2 particularly suitable for scripting. This may seem contradictory for GitOps at first, but in practice there is often a gradual migration: the CLI can be used to create resources, which are then checked into a Git repository. This is useful for integrating GitOps into existing CI/CD processes.&lt;/p&gt;
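&lt;p&gt;A typical example of this gradual approach, sketched with placeholder repository URLs and paths: instead of applying resources directly, the flux CLI's --export flag renders the CRs so they can be committed to the GitOps repository.&lt;/p&gt;

```shell
# Generate a GitRepository CR without touching the cluster and check it in
flux create source git my-app \
    --url=https://github.com/example/my-app \
    --branch=main \
    --interval=1m \
    --export > clusters/production/my-app-source.yaml

git add clusters/production/my-app-source.yaml
git commit -m "Add GitRepository source for my-app"
```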

&lt;h3&gt;
  
  
  Application configuration
&lt;/h3&gt;

&lt;p&gt;ArgoCD stores the target state internally through projects and applications. Repositories, clusters, and permissions can be defined in projects. In turn, any number of applications can be assigned to a project, where they will have access to the project's clusters and repositories. ArgoCD saves these applications and projects as CRs in the cluster. The repository and the path to a deployment can then be found in an application. In addition, you can also describe configurations for Helm, Kustomize, etc. in declarative form there as well. ArgoCD then applies this deployment, which is defined via the application, to the cluster. However, ArgoCD does not save Helm releases as such in the cluster, but converts them into simple Kubernetes resources using &lt;code&gt;helm template&lt;/code&gt; and applies them to the cluster. In contrast to Flux, the Helm releases cannot be queried via &lt;code&gt;helm ls&lt;/code&gt; and there is no history of the releases in the cluster.&lt;/p&gt;

&lt;p&gt;All relevant elements are defined using CRDs, which can be configured using GitOps. Specifically, this means that you can maintain the projects, applications, etc., which are defined as CRs in your own Git repository.&lt;/p&gt;
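&lt;p&gt;For illustration, an Application CR maintained in Git could look roughly like this (repository URL, paths, and names are placeholders; the automated sync policy is optional):&lt;/p&gt;

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app   # placeholder
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc       # the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes in the cluster
```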

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2Fargo-cd-ui.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2Fargo-cd-ui.jpg" alt="Screenshot of ArgoCD UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Flux v2, the repositories form the central point. They are used both for the deployment of workloads and the configuration of Flux itself. Flux can roll out Kubernetes resources, Helm Release CRs, and Kustomize CRs as workloads. The Helm Release CRD and Kustomize CRD also offer all of the known features of these tools in descriptive form. Since the Git repositories are also defined as CR, they can also be managed via GitOps.&lt;/p&gt;
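&lt;p&gt;A minimal sketch of this setup, with placeholder names and URLs (API versions as of early 2021): a GitRepository CR defines the source, and a Kustomization CR tells Flux what to apply from it.&lt;/p&gt;

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m              # how often to poll the repository
  url: https://github.com/example/my-app
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  path: ./k8s               # directory within the repository
  prune: true               # remove resources deleted from Git
  sourceRef:
    kind: GitRepository
    name: my-app
```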

&lt;h3&gt;
  
  
  UI
&lt;/h3&gt;

&lt;p&gt;ArgoCD comes with a web interface for administering and monitoring applications. Since GitOps is an asynchronous process, you do not receive immediate feedback when changes are made to deployments. The included notification mechanisms provide a remedy for this issue (see the Observability section). They provide feedback through various channels, such as chat or e-mail. In addition, ArgoCD offers the opportunity to monitor deployments in real time via the UI (see Figure 2). If parts of the deployment fail, you can identify them via the UI and view error messages and logs.&lt;/p&gt;

&lt;p&gt;This allows developers to analyze their deployments and correct errors without having to access the cluster. For authentication, there are interfaces for common protocols, such as LDAP and OIDC. Via configurable roles and groups, users can be granted access to the projects and applications for which they are responsible. The developers of Flux v2 are &lt;a rel="noreferrer noopener" title="Development of Flux v2 UI" href="https://github.com/fluxcd/webui"&gt;currently working on a web interface&lt;/a&gt;. However, it is still in an experimental state.&lt;/p&gt;

&lt;h3&gt;
  
  
  CLI
&lt;/h3&gt;

&lt;p&gt;All relevant features can be performed from the CLI for ArgoCD. Users can create and delete objects, such as applications or projects, as well as change states, such as rolling back applications or triggering synchronizations. This makes the CLI suitable for all types of automation and integration in CI pipelines. The CLI communicates with the ArgoCD server, which makes exposing the Kubernetes API server unnecessary.&lt;/p&gt;

&lt;p&gt;The Flux v2 CLI can also be used to access all relevant features. It also offers the option of rolling out additional tenants, which can be advantageous when automating a multi-tenancy environment. The CLI communicates directly with the Kubernetes API server, which must therefore be accessible from outside.&lt;/p&gt;

&lt;h3&gt;
  
  
  Authorization
&lt;/h3&gt;

&lt;p&gt;ArgoCD relies on an organizational structure based on projects and applications. Applications, Git repositories, clusters and permissions (RBAC), as well as allow and deny lists for resource types can be assigned to a project. Applications offer the same options for restricting permissions. This allows users to assign permissions very easily and very accurately, including at both the project as well as the application levels.&lt;/p&gt;

&lt;p&gt;Flux v2 currently only supports restricting the operators, Helm releases, and Kustomization CRs via RBAC. Simple Kubernetes resources, such as ConfigMaps and services, can only be controlled via the operator's permissions. At the project level, this might require several instances of the Flux operator to assign project-specific permissions. Alternatively, access to the GitOps repository can also be implemented via a CI pipeline, which then only allows permitted resources to be pushed into the GitOps repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Observability
&lt;/h3&gt;

&lt;p&gt;To send notifications about the synchronization status between Git and the cluster, ArgoCD utilizes the ArgoCD Notifications component, a notification system that is delivered with the ArgoCD deployment. Events that arise when applications are rolled out can thus be forwarded to various channels. ArgoCD Notifications currently supports the SMTP, Slack, Opsgenie, Grafana, Telegram, and webhook channels.&lt;/p&gt;

&lt;p&gt;As an alternative to ArgoCD Notifications, you could use Argo Kube Notifier and Kube Watch. However, these have to be operated in addition to ArgoCD. In any case, ArgoCD Notifications is best tailored to ArgoCD. It offers useful triggers and templates and can be installed and configured using the project's own Helm Chart.&lt;/p&gt;

&lt;p&gt;To monitor the components and managed deployments, ArgoCD exposes two sets of Prometheus metrics: "Application metrics" for monitoring the status of the synchronization and the health status of deployments as well as "API Server Metrics" for monitoring the requests and responses to the API server. In addition, there are ready-made Grafana dashboards that are based on these metrics. This makes it very easy to implement a monitoring cockpit for the entire system.&lt;/p&gt;

&lt;p&gt;Flux v2 has a special controller for Observability, the notification controller. As is true for all other Flux components, alerts and notifications are configured via corresponding CRDs. They can be used to set up providers, i.e., channels, such as chats and webhooks, alert rules and recipients. The associated CRs can also be maintained in a Git repository and deployed via Flux v2. The Notification Controller is currently suitable for the Slack, Discord, Microsoft Teams, RocketChat, and Webhooks channels.&lt;/p&gt;
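&lt;p&gt;For illustration, a Provider and an Alert maintained via GitOps could look roughly like this (channel names and the referenced secret are placeholders; API version as of early 2021):&lt;/p&gt;

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: deployments        # placeholder channel
  secretRef:
    name: slack-webhook-url   # Secret holding the webhook address
---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: on-call
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: error        # only forward errors
  eventSources:
    - kind: Kustomization
      name: '*'               # all Kustomizations
```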

&lt;p&gt;There is also the option of attaching the status to a Git commit. This can be done with the following providers: GitHub, GitLab, Bitbucket, and Azure DevOps. At the moment, Flux v2 cannot connect a provider via the SMTP protocol in order to send e-mails in this way.&lt;/p&gt;

&lt;p&gt;There are a number of Prometheus metrics and Grafana dashboards that are available for monitoring the controllers and deployments (see Figure 3). On the one hand, this allows you to monitor the error-free execution of the controller, providing you with statistics on CPU and memory consumption, for example. On the other hand, you can monitor the state of all Flux v2 CRDs, such as Helm releases and Kustomizations. This also makes it possible to send alerts by e-mail again. The alert rules can be defined in either Grafana or Prometheus Alert Manager. Prometheus and Grafana can also be installed and configured using the Flux CLI. In a multi-tenant environment, it can be used to automate deployment using scripts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2Fflux-grafana.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudogu.com%2Fimages%2Fblog%2F2021%2Fflux-grafana.jpg" alt="Screenshot of Flux metrics visualized in Grafana"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Both Flux v2 and ArgoCD offer many features that reflect years of practical experience with GitOps. Both focus strongly on the GitOps core features, which makes them easy to integrate into existing CI/CD infrastructure. However, if you start out with a greenfield approach, you have to build up that surrounding infrastructure separately.&lt;/p&gt;

&lt;p&gt;Overall, ArgoCD offers more configuration options (for example, for authorization) and provides a graphical user interface. However, this UI has to be configured and operated, and it also presents a larger attack surface. Whether a UI is needed at all, and thus justifies the extra effort, depends on the use case. Users of OpenShift might encounter less effort, since &lt;a rel="noreferrer noopener" title="ArgoCD is integrated into the platform as OpenShift GitOps" href="https://www.openshift.com/blog/announcing-openshift-gitops"&gt;ArgoCD is integrated into the platform as OpenShift GitOps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Flux has certain small features that ArgoCD doesn't have, such as support for SOPS and automatic updates to new image versions. However, the latter is the reason that Flux v2 has not yet appeared in a stable version. It could be difficult to opt for a product with a 0.x version number when it is the central component in the supply chain. However, we expect a stable release soon.&lt;br&gt;
One way to see ArgoCD and Flux v2 in action and compare their features is the &lt;a title="GitOps Playground on GitHub" href="https://github.com/cloudogu/k8s-gitops-playground" rel="noopener noreferrer"&gt;GitOps Playground&lt;/a&gt; project started by the authors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Would you like some more?&lt;/strong&gt;&lt;br&gt;
There is a large number of tools that either associate themselves with GitOps or are often named in that context. The authors are available for discussions about the topic in the &lt;a rel="noreferrer noopener" title="GitOps discussion forum at heise online" href="https://www.heise.de/forum/iX/GitOps/forum-467865/"&gt;GitOps discussion forum at heise online&lt;/a&gt;. To head off the question often asked of market overviews, “Why is program X not mentioned?”, here are a few reasons why this article does not consider some candidates. Left out are tools that&lt;br&gt;
  &lt;/p&gt;
&lt;ul&gt;

    &lt;li&gt;Are only invoked imperatively or in the CI/CD process and therefore lack a reconciliation loop (e.g., templating tools), even if they adorn themselves with the term GitOps;&lt;/li&gt;

    &lt;li&gt;Are no longer developed actively;&lt;/li&gt;

    &lt;li&gt;Are generally not recommended for use in production;&lt;/li&gt;

    &lt;li&gt;Have a highly limited use case, for example tools that are tailored for a specific cloud provider;&lt;/li&gt;

    &lt;li&gt;Are proprietary;&lt;/li&gt;

    &lt;li&gt;Are still very new and therefore not yet widely used.&lt;/li&gt;

  &lt;/ul&gt;
&lt;p&gt;Details about the aforementioned standardization of the term GitOps can be found in the article “Hands Off”, published in iX 4/2021.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Criteria for selecting the right tool&lt;/strong&gt;&lt;br&gt;
Most GitOps tools are currently available in the Kubernetes environment. This is no wonder, because Kubernetes is the oldest and most mature GitOps use case. Based on the authors' own experience, this sidebar summarizes the important requirements you should consider when making a decision. Many of these criteria apply not just to Kubernetes, so they can also prove helpful for using GitOps in other environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation and configuration of the GitOps operator&lt;/strong&gt;&lt;br&gt;
As is well known, there are many ways to roll out applications in a Kubernetes cluster. This also applies to GitOps operators. Common options are rolling out via Helm chart, via CLI, or in the form of plain Kubernetes resources. Another aspect is how the operator's configuration can be changed once it is deployed. An interesting point to note here: can the operator itself be configured via GitOps?&lt;/p&gt;
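&lt;p&gt;With Flux v2, for example, two of these variants can be sketched with the CLI – repository owner, name, and path are placeholders:&lt;/p&gt;

```shell
# Render the operator as plain Kubernetes manifests...
flux install --export > flux-system.yaml

# ...or bootstrap it so that the operator's own configuration
# is itself managed via GitOps (placeholders in angle brackets).
flux bootstrap github \
  --owner=<org> \
  --repository=<fleet-repo> \
  --path=clusters/production
```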

&lt;p&gt;In addition, it is interesting to know which parts can be adapted at runtime (for example, using CRs – custom resources) and which parts require the operator to be restarted. Does the tool permit a multi-tenancy solution? An operator capable of managing multiple clusters can be helpful for implementing multi-tenancy. Operators can support this using their own components in the target cluster, or they can communicate directly with its API server. Depending on the infrastructure, direct access to the API server may or may not be desired.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application configuration&lt;/strong&gt;&lt;br&gt;
The Git repository must be defined and configured so that the GitOps operator can synchronize the cluster with it. It is important to be able to map existing project structures with the operator used. It is also interesting to find out whether the operator can handle multiple Git repositories and whether these can be added and configured without a restart. In addition to the use of plain Kubernetes resources, there are other ways of rolling out applications in a Kubernetes cluster. These include Kustomize, Helm, Ksonnet, and Jsonnet.&lt;/p&gt;
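&lt;p&gt;In Flux v2, for instance, this configuration could consist of a GitRepository CR plus a Kustomization CR pointing at a path inside it – the repository URL, names, and intervals below are examples:&lt;/p&gt;

```yaml
# Hypothetical source and sync definition: the operator polls the repository
# and applies the manifests found under ./deploy.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/my-app
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-app
  path: ./deploy
  prune: true
```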

&lt;p&gt;Which application types an operator supports, and whether they can be configured in accordance with the existing requirements, will influence whether you decide to use it. Hooks can also play a role, i.e., reacting to events during the deployment in order to send messages, perform checks, or influence the deployment. Whether you need to map dependencies between resources may also be relevant. Examples include applying CRDs before their associated CRs, or ensuring a database runs before the application is started. Occasionally, tools also offer additional opportunities for automation, such as building images, writing new image versions to Git, or executing entire pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UI&lt;/strong&gt;&lt;br&gt;
A graphical user interface allows easy access to the operator configuration and the application resources. By monitoring cluster objects and manipulating manifests, it enables error analysis and handling without direct access to the cluster. Note, however, that such changes are not synchronized back to the Git repository; only the associated CRs in the cluster are changed. Single sign-on (SSO) is advantageous, since existing user management can then be used to grant users access to the UI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLI&lt;/strong&gt;&lt;br&gt;
A CLI can be used to develop scripts that automate GitOps processes or integrate them into CI pipelines. It is conceivable that new clusters, including operators, are rolled out this way, or that the reconciliation loop is triggered from existing workflows. The more extensively the CLI is used, the more important the range of actions it supports becomes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authorization&lt;/strong&gt;&lt;br&gt;
The assignment of permissions is of central importance for GitOps, since in principle everything that ends up in the Git repository is also applied to the cluster. Critical or security-relevant objects could be changed, or the cluster could be compromised or otherwise rendered unusable. One countermeasure is to restrict the operator's access to certain types of resources. Role-based access control (RBAC) is suitable for ensuring that the operator is not rolled out with the permissions of a cluster administrator. Additional restrictions using allow and deny lists on resources and resource types can further improve security.&lt;/p&gt;
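&lt;p&gt;As a sketch, a namespace-scoped Role could restrict a GitOps operator's service account to a handful of resource types – the namespace, name, and resource list are illustrative:&lt;/p&gt;

```yaml
# Hypothetical RBAC Role: the operator may manage Deployments, Services,
# and ConfigMaps in one namespace, but nothing cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitops-operator
  namespace: team-a
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```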

&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;&lt;br&gt;
Due to the asynchronous nature of GitOps, feedback mechanisms play an important role: only after the operator has deployed resources to the cluster is it possible to know whether the deployment was successful. Additional tools are required to receive these notifications (e.g., chat, e-mail, metrics, or commit status). Here you must consider which tools can be used and how much effort is necessary to integrate them into your own workflow.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>gitops</category>
      <category>kubernetes</category>
      <category>flux</category>
    </item>
    <item>
      <title>Scrum vs. Kanban – How to select the best agile methodology for you</title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Wed, 28 Jul 2021 11:56:23 +0000</pubDate>
      <link>https://forem.com/cloudogu/scrum-vs-kanban-how-to-select-the-best-agile-methodology-for-you-44n7</link>
      <guid>https://forem.com/cloudogu/scrum-vs-kanban-how-to-select-the-best-agile-methodology-for-you-44n7</guid>
      <description>&lt;p&gt;In recent years it has become common to use agile methods in software development. The most widespread ones are Scrum and Kanban. The “&lt;a href="https://explore.digital.ai/state-of-agile/14th-annual-state-of-agile-report"&gt;State of Agile&lt;/a&gt;” survey for 2020, for example, found that a vast majority of companies (~75%) use Scrum or Scrum hybrids. Second place is held by Kanban and “Scrumban” with about 15%. That is why we want to compare these two methodologies.&lt;/p&gt;

&lt;p&gt;This post will be about Scrum and Kanban in general. For detailed information about the tools visit for example &lt;a href="http://www.scrum.org/"&gt;www.scrum.org&lt;/a&gt; or &lt;a href="http://limitedwipsociety.ning.com/"&gt;http://limitedwipsociety.ning.com/&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Agile?
&lt;/h2&gt;

&lt;p&gt;What are the reasons to use agile approaches in projects? Let’s answer this question with some figures: the State of Agile survey found that a large majority of participants (70%) using the agile approach improved their ability to manage changing priorities. Before people start working with agile tools they have expectations about the benefits, and after the project’s completion they can say whether those expectations were met or not. The following image shows the importance of several aspects and whether they improved by using agile methodologies or tools. (The data is from the 8th State of Agile survey, but the results have stayed almost the same over the years.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Therq-W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3rgv6j1seitvs1lgi10j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Therq-W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3rgv6j1seitvs1lgi10j.png" alt="Important of aspects"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that it is important for 75% of the participants of the survey to accelerate the “time to market” (blue bar). For 83% of projects this aspect improved by using agile methodologies (yellow bar). This behavior can be seen for all of the shown aspects: important aspects improved for the vast majority of projects. What the survey didn’t talk about, are the reasons why for some participants those aspects didn’t improve.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Agile Manifesto
&lt;/h2&gt;

&lt;p&gt;There are a lot of agile methodologies and tools that have similarities and differences. All are based on the same principles, which are written down in the “&lt;a href="http://agilemanifesto.org/"&gt;Manifesto for Agile Software Development&lt;/a&gt;”. The manifesto states that certain aspects are more important than others and that it is necessary to internalize those facts. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4ZvcxUXz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y5k70urgq0srkdj0d6h7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4ZvcxUXz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y5k70urgq0srkdj0d6h7.png" alt="Agile manifesto"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example the manifesto says that responding to change is more important than following a plan. An important factor for the success of agile projects is that the participants have the right mindset to work in an agile project environment. Another important advice is that you shouldn’t limit yourself to just one tool. Combine aspects from different tools that fit your needs, but be aware of the fact that you are combining several tools.&lt;/p&gt;

&lt;p&gt;There is a large number of agile methodologies. Many of them have quite similar approaches and use the same principles. Some of them impose a lot of restrictions, others leave a lot of free space. The &lt;strong&gt;Agile Manifesto&lt;/strong&gt; states the basic principles, and each methodology uses its own set of tools and rules. The challenge is to find the methodology and tools that best fit your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison of Scrum and Kanban
&lt;/h2&gt;

&lt;p&gt;After showing the most common reasons for using agile approaches and their basic principles, we can now compare the two methodologies that are being used the most: Scrum and Kanban. We want to provide a first insight to the mindset of Scrum and Kanban teams and show similarities and differences of the two tools.&lt;/p&gt;

&lt;p&gt;It is said that working on projects with agile tools helps organizations to complete projects faster. This is because tools like Scrum or Kanban are process tools – they use transparency to reveal optimization potential and thereby help teams work more effectively. Users are expected to experiment, continuously adjusting to changing circumstances and customizing their environment. Another similarity of the two tools is that neither is very prescriptive – they provide a framework and leave space to adjust the methodology to your conditions. Nevertheless, Scrum is more prescriptive than Kanban.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workload
&lt;/h3&gt;

&lt;p&gt;One difference between the tools is that Scrum limits the work in progress (WIP) indirectly, whereas Kanban limits the WIP directly. In Scrum, the WIP limit is set by the workload committed to for one iteration, or sprint. In the case of the following board, the maximum workload is 4 (there can be a maximum of 4 cards in the WIP column), because there are only four cards. The Kanban board differs in one little detail: the &lt;strong&gt;2&lt;/strong&gt; in the WIP column. This limits the WIP to two simultaneous tasks. Therefore it would be allowed to start task &lt;strong&gt;C&lt;/strong&gt; immediately, but task &lt;strong&gt;D&lt;/strong&gt; can only be started after either &lt;strong&gt;B&lt;/strong&gt; or &lt;strong&gt;C&lt;/strong&gt; is finished. The Scrum team, on the other hand, is allowed to start tasks &lt;strong&gt;C&lt;/strong&gt; and &lt;strong&gt;D&lt;/strong&gt; immediately. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u5iIv_4K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5zzpt9c7lgklv7ptppx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u5iIv_4K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5zzpt9c7lgklv7ptppx3.png" alt="Scrum board start"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6gpY5ydy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5xax4ayzczpi0qmui6p0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6gpY5ydy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5xax4ayzczpi0qmui6p0.png" alt="Kanban board start"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Response Time for new Requirements
&lt;/h3&gt;

&lt;p&gt;Another difference between the two tools is the way they respond to new requirements. Let’s say you have the following situation in your Scrum or Kanban project, and someone turns up and wants to add task E to the board. How do the teams react to the new card? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JxJd9nkV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xckydo2mqvnp7f402rhu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JxJd9nkV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xckydo2mqvnp7f402rhu.png" alt="Scrum new item"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gZydR-87--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eb8x5cnwv3mh02qatpjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gZydR-87--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eb8x5cnwv3mh02qatpjh.png" alt="Kanban new item"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Scrum team typically says something like: “Sorry, but we have committed to A, B, C and D for this sprint. Of course you can add E to the next sprint.”&lt;/p&gt;

&lt;p&gt;The Kanban team would say something like: “Of course you can add E to the board. But to do that you have to remove either C or D, because the limit of 2 tasks is reached right now. Or you wait until we have finished A or B.”&lt;/p&gt;

&lt;p&gt;Depending on the timing of the new requirement, the Scrum response time can be anything between one day (in case the requirement comes up one day before the new sprint starts) and the full sprint length. The average response time is therefore half the sprint length. In Kanban, the response time is as long as it takes for capacity to become available. This can be instant (by removing another task) or the time it takes to complete another task.&lt;/p&gt;
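&lt;p&gt;The claimed average for Scrum is easy to check with a small simulation – the sprint length of 10 days is an arbitrary example:&lt;/p&gt;

```python
import random

random.seed(42)

sprint_length = 10  # days; hypothetical sprint length

# A new requirement arrives at a uniformly random point within the sprint.
# In Scrum it has to wait until the next sprint starts, so its response
# time is the remainder of the current sprint.
waits = [sprint_length - random.uniform(0, sprint_length) for _ in range(100_000)]
avg_wait = sum(waits) / len(waits)

print(round(avg_wait, 1))  # roughly 5.0, i.e., half the sprint length
```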

&lt;h3&gt;
  
  
  Comparison
&lt;/h3&gt;

&lt;p&gt;The following tables show several similarities, differences, and the basic work rules of the two methodologies to point out their strengths and weaknesses. The comparison should also help you find the approach (or certain aspects of it) that you can use in your projects.&lt;/p&gt;

&lt;h4&gt;
  
  
  Similarities
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Both…&lt;/strong&gt;&lt;br&gt;
… are pull scheduling systems. This means that the team chooses when and how much work to commit to.&lt;br&gt;&lt;br&gt;
… are based on continuous and empirical process optimization.&lt;br&gt;&lt;br&gt;
… emphasize responding to change over following a plan.&lt;br&gt;&lt;br&gt;
… are Lean and Agile.&lt;br&gt;&lt;br&gt;
… limit WIP.&lt;br&gt;&lt;br&gt;
… use transparency to drive process improvement.&lt;br&gt;&lt;br&gt;
… focus on delivering releasable software early and often.&lt;br&gt;&lt;br&gt;
… are based on self-organizing teams.&lt;br&gt;&lt;br&gt;
… require breaking the work into pieces.&lt;br&gt;&lt;br&gt;
… help to continuously optimize the release plan based on empirical data.&lt;/p&gt;

&lt;h4&gt;
  
  
  Work Rules
&lt;/h4&gt;

&lt;p&gt;The different approaches of the two tools are best illustrated by their basic work rules.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scrum&lt;/th&gt;
&lt;th&gt;Kanban&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Split your organization into small, cross-functional, self organizing teams. Split your work into a list of small, concrete deliverables. Sort the list by priority and estimate the relative effort of each item. Split time into short fixed-length iterations with potentially shippable code demonstrated after each iteration.&lt;/td&gt;
&lt;td&gt;Visualize the workflow: Split the work into pieces, write each item on a card and put it on the wall. Use named columns to illustrate where each item is in the workflow.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optimize the release plan and update priorities in collaboration with the customer, based on insights gained by inspecting the release after each iteration.&lt;/td&gt;
&lt;td&gt;Limit Work in Progress (WIP) – assign explicit limits to how many items may be in progress at each workflow state.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optimize the process by having a retrospective after each iteration.&lt;/td&gt;
&lt;td&gt;Measure the lead time (average time to complete one item), optimize the process to make lead time as small and predictable as possible.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Differences
&lt;/h4&gt;

&lt;p&gt;To emphasize the differences between the two tools a bit more the following table shows some aspects that are prescribed or optional in the tools.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scrum&lt;/th&gt;
&lt;th&gt;Kanban&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Timeboxed iterations prescribed&lt;/strong&gt;.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Timeboxed iterations optional&lt;/strong&gt;. Can have separate cadences for planning, release, and process improvement. Can be event-driven instead of timeboxed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Team commits&lt;/strong&gt; to a specific amount of work for each iteration.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Commitment optional&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uses &lt;strong&gt;velocity&lt;/strong&gt; as default metric for planning and process improvement.&lt;/td&gt;
&lt;td&gt;Uses &lt;strong&gt;Lead Time&lt;/strong&gt; as default metric for planning and process improvement.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Cross-functional teams&lt;/strong&gt; prescribed.&lt;/td&gt;
&lt;td&gt;Cross-functional teams optional. &lt;strong&gt;Specialist teams allowed&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Items must be broken down&lt;/strong&gt; so they can be completed within 1 sprint.&lt;/td&gt;
&lt;td&gt;No particular item size is prescribed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Burndown chart prescribed&lt;/strong&gt;.&lt;/td&gt;
&lt;td&gt;No particular type of diagram is prescribed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;WIP limited indirectly&lt;/strong&gt; (per sprint)&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;WIP limited directly&lt;/strong&gt; (per workflow state)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Estimation prescribed&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Estimation optional&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Cannot add items to ongoing iteration&lt;/strong&gt;.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Can add new items whenever capacity is available&lt;/strong&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;A sprint backlog is owned by one specific team&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;A Kanban board may be shared by multiple teams or individuals&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Prescribes 3 roles&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Doesn't prescribe any roles&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;A Scrum board is reset&lt;/strong&gt; between each sprint&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;A Kanban board is persistent&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Prescribes a prioritized product backlog&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Priority setting is optional&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  How to choose between Scrum and Kanban?
&lt;/h2&gt;

&lt;p&gt;As you can see, the two tools have basic similarities, but they differ greatly in the details. If you want to use one of them, or certain aspects, you should be aware that both are more than just boards with a lot of cards and talking about those cards. Of course, using a board is a start, but there are many more opportunities to improve a project. If you stick to the rules and tools of the methodologies, they can help you a lot. Keep in mind, though, that you don't have to stick to one methodology. You are free to combine aspects, tools and rules of different methodologies to set up your project management system.&lt;/p&gt;

&lt;p&gt;So how do Scrum and Kanban look in real life? We will show an example of how projects proceed on the different boards. This again shows some advantages and disadvantages of the two methodologies and can help you to find the solution for your own project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Examples of Scrum and Kanban in use
&lt;/h3&gt;

&lt;p&gt;The following example represents a project that is less trivial than the one in the previous paragraph. There is a product backlog containing several tasks and a production process that consists of several steps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the case of Scrum, the team is in the first sprint, which consists of the tasks A, B, C, D, E and F. A is already done, B to E are currently in progress, and F is still to do. During the upcoming sprints the team will commit to the tasks G to N.&lt;/li&gt;
&lt;li&gt;The Kanban board consists of the backlog, from which a maximum of 2 tasks can be selected for prioritization. The development section allows a maximum of 3 tasks at once (tasks from both columns – ongoing and done – count). After development the features need to be tested, and after that they are in the production environment and thereby done. You can see that the board represents the different states of the production process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KRYCZg16--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xotqoqzeilyadxu3wpv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KRYCZg16--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xotqoqzeilyadxu3wpv1.png" alt="Scrum in use"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QLLxxoVh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hl6v8rclb1f70hd6phne.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QLLxxoVh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hl6v8rclb1f70hd6phne.png" alt="Kanban in use"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Scrum board will be reset after each sprint, whereas the Kanban board is persistent.&lt;/p&gt;

&lt;p&gt;After a few Scrum sprints or simply after some time the two boards could look like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wDVtAfFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2lb75ffvc2yi6lpp4x3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wDVtAfFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2lb75ffvc2yi6lpp4x3r.png" alt="Scrum board at end of sprint"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YLyLDT5v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m1lvhn7kr9z30dzma0wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YLyLDT5v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m1lvhn7kr9z30dzma0wx.png" alt="Kanban all tasks done"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In Kanban it is not necessary for every column to have a maximum number of tasks.&lt;/p&gt;

&lt;p&gt;The Kanban board, which contains more columns than the Scrum board, can help to optimize the process steps, because you need to think about them in order to draw the board. It can also be very helpful to tune the number of allowed cards per column, because this can reveal bottlenecks. The Scrum board and the partition of tasks into sprints help to organize tasks and to think about reasonable packages. But remember: it is always possible to combine aspects of different methodologies and tools to find the board, rules and tools that fit your project best.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Agile Tools and Methodologies
&lt;/h2&gt;

&lt;p&gt;Besides Scrum and Kanban there is a huge number of other agile tools and methodologies that can serve as inspiration. The more you read about them, the more you will see that they are based on the same principles (stated in the Agile Manifesto) and that some use the same approaches and methods. Here is a short list of some widespread approaches that you could take a look at.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extreme Programming (&lt;a href="http://www.extremeprogramming.org/"&gt;XP&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://en.wikipedia.org/wiki/Lean_software_development"&gt;Lean&lt;/a&gt; Software Development&lt;/li&gt;
&lt;li&gt;Feature Driven Development (&lt;a href="http://www.agilemodeling.com/essays/fdd.htm"&gt;FDD&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://en.wikiversity.org/wiki/Crystal_Methods"&gt;Crystal&lt;/a&gt; Family&lt;/li&gt;
&lt;li&gt;Test Driven Development (&lt;a href="http://www.agiledata.org/essays/tdd.html"&gt;TDD&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;A Kanban board represents the workflow of a project whereas a Scrum board always consists of the same columns. Another big difference is that it is possible to limit the number of cards in certain columns when using Kanban while Scrum limits the WIP by restricting the number of cards in the &lt;strong&gt;Sprint Backlog&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Besides the two methodologies presented here there are numerous other agile approaches. Some of them are based on Scrum or Kanban. A very widespread approach is a combination of &lt;strong&gt;Extreme Programming&lt;/strong&gt; and &lt;strong&gt;Scrum&lt;/strong&gt;, because the two complement each other very well. It’s up to you to find the methodologies, tools and rules that you want to use to accomplish the aims of your projects. Since there is no &lt;strong&gt;“one size fits all”&lt;/strong&gt; for all projects, you have to experiment and try variations.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to define software requirements</title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Mon, 26 Jul 2021 12:03:07 +0000</pubDate>
      <link>https://forem.com/cloudogu/how-to-define-software-requirements-45g7</link>
      <guid>https://forem.com/cloudogu/how-to-define-software-requirements-45g7</guid>
      <description>&lt;p&gt;In order to prevent misunderstandings like this there are 2 important things you need to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to define requirements clearly and&lt;/li&gt;
&lt;li&gt;how to keep track of changes during the development.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Define Requirements
&lt;/h2&gt;

&lt;p&gt;Depending on the way you work, you need to describe requirements in different ways: in Scrum, for example, you create user stories, while in waterfall projects you write use cases or other forms of requirement documentation. Regardless of formal aspects, you need to ensure that you have all the information required for the implementation. To achieve this, you only need to follow these five steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Identification of stakeholders: To ensure that you get all relevant requirements, you need to make sure that you know all stakeholders.&lt;/li&gt;
&lt;li&gt;Collect requirements: There are different ways you can find out about requirements, e.g. you can use interviews, case scenarios or prototypes. No matter which method you’re using, you should always find answers to these questions:

&lt;ul&gt;
&lt;li&gt;What is the product supposed to do?&lt;/li&gt;
&lt;li&gt;How well should it do it? (e.g. +/- limits, quality, measurable terms)&lt;/li&gt;
&lt;li&gt;Under what conditions? (e.g. environment, states)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Categorization of requirements: To get a good overview of the requirements you should categorize them. The two major categories are functional and non-functional requirements.&lt;/li&gt;
&lt;li&gt;Interpretation and recording of requirements: Only well-defined requirements can be considered adequately in the product. For each requirement you should…

&lt;ul&gt;
&lt;li&gt;… define the requirement in detail.&lt;/li&gt;
&lt;li&gt;… prioritize the requirement.&lt;/li&gt;
&lt;li&gt;… analyze the impact of change.&lt;/li&gt;
&lt;li&gt;… resolve conflicting issues by talking to the stakeholders.&lt;/li&gt;
&lt;li&gt;… analyze the feasibility.&lt;/li&gt;
&lt;li&gt;… specify test cases.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Sign off: Before implementing a requirement you should get the ‘Go’ from the stakeholders.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After you have talked to the stakeholders and identified, defined and prioritized the requirements, you have an evaluated list of aspects that need to be considered in the product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keep Track of Changes
&lt;/h2&gt;

&lt;p&gt;It is one thing to capture all requirements for a project. The other thing is to ensure that all the requirements are met by the product. The Requirements Traceability Matrix can help you to keep track of all the requirements and associated test cases. The matrix links requirements with derived product specifications and test cases. This is what the matrix looks like:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ID&lt;/th&gt;
&lt;th&gt;Priority&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Risks&lt;/th&gt;
&lt;th&gt;Specifications&lt;/th&gt;
&lt;th&gt;Test Cases&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Unique identifier&lt;/td&gt;
&lt;td&gt;Prioritization&lt;/td&gt;
&lt;td&gt;A short description&lt;/td&gt;
&lt;td&gt;Functional or non-functional&lt;/td&gt;
&lt;td&gt;Associated risks&lt;/td&gt;
&lt;td&gt;Link to the detailed description&lt;/td&gt;
&lt;td&gt;List of associated test cases (unit, integration, system, user tests, etc.)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To make sure that a requirement is truly considered in the product, it is necessary that it is described in the product specifications and that it is associated with at least one test case.&lt;/p&gt;

&lt;p&gt;If requirements change you have two different options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Adjust the requirement specification accordingly and document the changes.&lt;/li&gt;
&lt;li&gt;Add the changed requirement as a new item to the list and treat it like a new requirement.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No matter how you deal with changes, the Requirements Traceability Matrix provides an overview of requirements and associated specifications. So if a specification changes, you can easily see which requirements are involved, and vice versa.&lt;/p&gt;
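&lt;p&gt;The rows of such a matrix lend themselves to a simple data structure. The sketch below is a Python illustration, not a prescribed format: the field names follow the table columns above, everything else (class name, sample values) is invented. It encodes the rule from this section that a requirement only counts as covered if it appears in the specifications and has at least one test case.&lt;/p&gt;

```python
# Sketch of one Requirements Traceability Matrix row; field names
# follow the table columns, all sample values are illustrative.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    id: str
    priority: int
    description: str
    type: str                 # "functional" or "non-functional"
    risks: list = field(default_factory=list)
    specification: str = ""   # link to the detailed description
    test_cases: list = field(default_factory=list)

    def is_traceable(self):
        # Covered only if it is described in the specifications
        # and associated with at least one test case.
        return bool(self.specification) and len(self.test_cases) > 0

req = Requirement(
    id="REQ-001",
    priority=1,
    description="Users can reset their password via e-mail",
    type="functional",
    specification="https://example.com/specs/req-001",
    test_cases=["TC-017", "TC-018"],
)
print(req.is_traceable())  # True
```

&lt;p&gt;Iterating over such rows makes it trivial to report every requirement that still lacks a specification link or a test case.&lt;/p&gt;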

&lt;h2&gt;
  
  
  Requirements need to be tracked
&lt;/h2&gt;

&lt;p&gt;Finding out about all the requirements of a product and ensuring their implementation is the key to a happy customer. It is therefore advisable to invest enough time in investigating and defining requirements. In addition, you need to ensure that the final product meets all requirements, which can only be achieved by considering them in the product specifications and by testing those specifications. A Requirements Traceability Matrix helps you ensure exactly that.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>IT compliance in practice – correctly containing and deleting data and projects in B2B software development</title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Fri, 23 Jul 2021 14:22:58 +0000</pubDate>
      <link>https://forem.com/cloudogu/it-compliance-in-practice-correctly-containing-and-deleting-data-and-projects-in-b2b-software-development-2lfm</link>
      <guid>https://forem.com/cloudogu/it-compliance-in-practice-correctly-containing-and-deleting-data-and-projects-in-b2b-software-development-2lfm</guid>
      <description>&lt;p&gt;For many, compliance is an abstract concept that only managers have to deal with. But it’s not. Compliance is everyone’s business.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is compliance and why is it so important?
&lt;/h2&gt;

&lt;p&gt;Briefly, compliance is the observance of laws, rules, standards, contracts and moral values, which can come from both outside and inside a company. The tricky thing about compliance is that non-compliance is not always immediately apparent. However, once it is discovered, things can quickly become unpleasant. Compliance affects not only management, but every employee, since anyone can knowingly or unknowingly violate it in the decisions they make. Examples of this include not using software under the terms of its license or failing to comply with data protection guidelines by not deleting data as required.&lt;/p&gt;

&lt;h2&gt;
  
  
  A special case: deletion or handing over of data
&lt;/h2&gt;

&lt;p&gt;Since compliance is about adhering to regulations of any kind, it is a very complex and individual issue. That’s why this post deals with a very specific topic: the deletion of data when a contract ends.&lt;/p&gt;

&lt;p&gt;In B2B software development, it is customary for contracts to contain a clause on how to hand over or destroy all records and documents related to the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IrU5BeKw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zhd5iwh5g69r4s6jh3jr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IrU5BeKw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zhd5iwh5g69r4s6jh3jr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At first glance, this does not seem to pose a problem. Yet, as is so often the case, the devil is in the details. Depending on how meticulous tidying up at the end of the project is, it can be very time-consuming to actually find and delete all of the information and data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inadequate isolation of projects can be a compliance issue
&lt;/h2&gt;

&lt;p&gt;The clause referred to above means that at the end of a project all systems would have to be searched to find all of the data relating to the project. This ranges from obvious data such as repositories, artifacts, Wiki entries, documents such as conceptual designs or documentation to less obvious data such as tickets in the issue tracker or source code stored on secondary systems. How extensive this search is essentially depends on two factors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The scope and duration of the project&lt;/li&gt;
&lt;li&gt;Isolation of data from different projects&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The scope and duration of projects
&lt;/h3&gt;

&lt;p&gt;The longer and larger a project is, the more data accumulates and the more systems are used, which increases the likelihood that some of the stored data will be forgotten. This means that the “search radius” must be considerably expanded. It also means that there’s a lot more to do.&lt;/p&gt;

&lt;p&gt;A wide dispersion of data can be reduced, for instance, by organizational rules on the storage of data. But rules like these often provide a lot of leeway and are hence only helpful to a limited extent.&lt;/p&gt;

&lt;h3&gt;
  
  
  Isolation of projects
&lt;/h3&gt;

&lt;p&gt;In addition to the size and duration of a project, its isolation from other projects is also a factor influencing what has to be done to identify all of the relevant data. Here is a simple example of different levels of project isolation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low isolation: The data from different projects are all stored on one network drive. There is no prescribed folder structure.&lt;/li&gt;
&lt;li&gt;Medium isolation: All of the data is on the same drive, but each project has its own folder.&lt;/li&gt;
&lt;li&gt;High isolation: Data from different projects are stored on their own network drives with separate hard disks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is immediately apparent how much easier it is to identify the data in a highly contained project. If all of the data for one project is stored on a separate hard disk, it can easily be handed over to the customer or erased. In medium isolation, only a few folders need to be copied or deleted. The work involved in a low isolation situation is something everyone is free to imagine on their own.&lt;/p&gt;

&lt;p&gt;It is also important to keep in mind that data may be present in any system used for the project. This makes it apparent why even small differences in isolation can have a big effect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backups – a story on its own
&lt;/h2&gt;

&lt;p&gt;But that’s not all. Backups of systems are created to prevent data loss during a project. Depending on the level of project isolation, the work at the end of the project can be multiplied again, because here too, the lower the level of isolation, the more complex the search will be. If the backups are also compressed or encrypted, the task becomes even more time-consuming. And as if that were not enough, it must also be remembered that every change made to a backup puts its integrity at risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Separate infrastructure for each project
&lt;/h2&gt;

&lt;p&gt;The good news is that it can also be very easy to abide by the contract: Using the &lt;a href="https://cloudogu.com/en/ecosystem/"&gt;Cloudogu EcoSystem&lt;/a&gt;, a completely independent instance can be used for each project. It contains all of the data such as repositories, issues, documentation, artifacts, etc., making it easy to delete all of the data at the end of the project or hand it over to the client. Backups are also easy to delete, since they are also created separately by project.&lt;/p&gt;

</description>
      <category>compliance</category>
      <category>infrastructure</category>
      <category>privacy</category>
    </item>
    <item>
      <title>More security thanks to micro-learning and gamification – Secure Code Warrior plugin for SCM-Manager </title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Fri, 09 Jul 2021 20:31:48 +0000</pubDate>
      <link>https://forem.com/cloudogu/more-security-thanks-to-micro-learning-and-gamification-secure-code-warrior-plugin-for-scm-manager-f1h</link>
      <guid>https://forem.com/cloudogu/more-security-thanks-to-micro-learning-and-gamification-secure-code-warrior-plugin-for-scm-manager-f1h</guid>
      <description>&lt;p&gt;The regularity of media reports on cyberattacks shows that security is, or should be, a key issue for software development teams these days. Experience also shows that security vulnerabilities are usually not exploited by highly specialized attacks. Rather, many successful attacks exploit well-known security vulnerabilities. For this reason, we are very pleased that the learning platform &lt;strong&gt;Secure Code Warrior&lt;/strong&gt; is now integrated into our version management tool &lt;a href="https://scm-manager.org?mtm_campaign=blog&amp;amp;mtm_kwd=devto&amp;amp;mtm_source=social&amp;amp;mtm_medium=link"&gt;SCM-Manager&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A well-known example of such a vulnerability is the SQL injection, where arbitrary code is injected into database queries, allowing unauthorized information to be read (for more on this, see the &lt;a href="https://en.wikipedia.org/wiki/SQL_injection"&gt;Wikipedia article&lt;/a&gt;). Such attacks are very popular because they can be carried out very easily. That’s why SQL injections have consistently ranked first in the Open Web Application Security Project’s (OWASP) top 10 security risks since 2010.&lt;/p&gt;
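&lt;p&gt;To make the attack concrete, here is a minimal sketch using Python’s built-in sqlite3 module (the table, column and user names are invented for this example): building the query by string concatenation lets user input rewrite the query itself, while a parameterized query treats the input strictly as data.&lt;/p&gt;

```python
# Sketch: string concatenation vs. a parameterized query (sqlite3).
# Table and user names are invented for this illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"

# Vulnerable: the input becomes part of the SQL statement itself,
# so the injected OR clause matches every row in the table.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(leaked)  # [('s3cret',)] - data leaked despite the wrong name

# Safe: the placeholder passes the input as data, not as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] - no user is literally named "nobody' OR '1'='1"
```

&lt;p&gt;The fix is precisely this small, which is why the lack of awareness discussed below, rather than technical difficulty, is usually the problem.&lt;/p&gt;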

&lt;p&gt;These vulnerabilities are actually easy to close. Often, there just seems to be a lack of awareness or of the necessary time to perform appropriate security checks and to design processes in such a way that security aspects are taken into account on an ongoing basis. Awareness can be created either classically through targeted training or through continuous learning, e.g. by means of microlearning or gamification. Secure Code Warrior is a very good example of the latter. By combining Secure Code Warrior with the &lt;strong&gt;version control management&lt;/strong&gt; tool SCM-Manager, security aspects can be integrated into processes easily and in a &lt;strong&gt;time-saving&lt;/strong&gt; manner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning with Secure Code Warrior
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.securecodewarrior.com/"&gt;Secure Code Warrior&lt;/a&gt; platform makes it possible to use microlearning and gamification to gain knowledge about widespread &lt;strong&gt;security vulnerabilities&lt;/strong&gt; and thus close them. The platform offers learning content on almost 150 security topics such as SQL Injection, Cross-Site Scripting (XSS), Memory Corruption or Client Side Injection for all common programming languages such as PHP, JSP, JavaScript, C++, Java Spring, .NET and many more. The content is taught in the form of &lt;strong&gt;videos&lt;/strong&gt; (see example below) and &lt;strong&gt;programming exercises (challenges)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In combination with the plugin for the version control management tool SCM-Manager, the information is integrated directly into the software development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open Source Version Management Tool SCM-Manager
&lt;/h2&gt;

&lt;p&gt;SCM-Manager is an &lt;strong&gt;open source&lt;/strong&gt; version management tool that Cloudogu took over in 2020 (see the official &lt;a href="https://cloudogu.com/en/blog/takeover-scm-manager?mtm_campaign=blog&amp;amp;mtm_kwd=devto&amp;amp;mtm_source=social&amp;amp;mtm_medium=link"&gt;announcement of the acquisition&lt;/a&gt;). In the same year, we released the completely revised version 2 of the tool. SCM-Manager can be operated on-premises and offers, in addition to repository management, a complete review process for changes, the ability to edit files directly in the browser, and many other features.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---JvNNr6s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6xldg4ubrz6fl9mptu7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---JvNNr6s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6xldg4ubrz6fl9mptu7z.png" alt="Screenshot of repositories in SCM-Manager"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The integration of learning content about security vulnerabilities via the plugin for Secure Code Warrior is the latest enhancement of the tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secure Code Warrior Plugin for SCM-Manager
&lt;/h2&gt;

&lt;p&gt;With the free plugin, videos and links to challenges for security vulnerabilities are displayed directly in pull requests. This way, developers directly get all the important information about a security vulnerability. For this purpose, the description of pull requests as well as comments and tasks from reviewers are searched for keywords.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tJ4FyUOn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pf8eusygk8d9j8uh78wq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tJ4FyUOn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pf8eusygk8d9j8uh78wq.png" alt="Video integrated in SCM-Manager"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, the pull request shown in figure 2 contains the keyword “SQL Injection” in the description. Therefore, the corresponding learning content is displayed.&lt;/p&gt;

&lt;p&gt;This integration offers the possibility to use the information from Secure Code Warrior in different ways.&lt;/p&gt;

&lt;h3&gt;
  
  
  SCM-Manager makes the pull request a “security issue” with Secure Code Warrior
&lt;/h3&gt;

&lt;p&gt;When a security vulnerability is found and fixed in the application, the pull request can be used to &lt;strong&gt;educate other team members&lt;/strong&gt; on the topic – simply by performing the review. By mentioning the security vulnerability in the pull request’s description, information about the topic is displayed, which can be used to learn the theory. At the same time, the learned basics can be seen applied in the context of the expert’s changes to your own application. This approach &lt;strong&gt;spreads the knowledge&lt;/strong&gt; of the topic over several people.&lt;/p&gt;

&lt;p&gt;To have information on security topics displayed in pull requests, it is sufficient to mention the topic in the description or title of the pull request.&lt;/p&gt;
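&lt;p&gt;The keyword mechanism can be pictured roughly as follows. The sketch below is not the plugin’s actual implementation; it is a hypothetical Python illustration (the topic list and matching rules are invented) of scanning a pull request’s title and description for known security topics:&lt;/p&gt;

```python
# Hypothetical sketch of keyword matching on a pull request;
# the topic list and matching rules are invented, not the
# plugin's actual implementation.
SECURITY_TOPICS = ["SQL Injection", "Cross-Site Scripting", "XSS"]

def matched_topics(pull_request):
    # Search title and description (case-insensitively) for topics.
    text = (pull_request["title"] + " " + pull_request["description"]).lower()
    return [t for t in SECURITY_TOPICS if t.lower() in text]

pr = {
    "title": "Fix user lookup",
    "description": "Escapes input to prevent a SQL Injection in the search form.",
}
print(matched_topics(pr))  # ['SQL Injection']
```

&lt;p&gt;For every matched topic, the corresponding learning content would then be rendered alongside the pull request.&lt;/p&gt;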

&lt;h3&gt;
  
  
  Security-related comments directly in the SCM-Manager review process
&lt;/h3&gt;

&lt;p&gt;In SCM-Manager, reviewers can provide feedback on pull requests to point out potential security vulnerabilities. All that is required is to &lt;strong&gt;mention a security topic&lt;/strong&gt; in comments or in tasks. The corresponding Secure Code Warrior content is then displayed in an automatically generated comment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ASnTefHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1kgwfn2uomt4we6i4k3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ASnTefHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1kgwfn2uomt4we6i4k3p.png" alt="Video in comment of pull request"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Only “root” comments are searched for keywords, not replies to comments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forum.cloudogu.com/topic/101?mtm_campaign=blog&amp;amp;mtm_kwd=devto&amp;amp;mtm_source=social&amp;amp;mtm_medium=link"&gt;Take a look at the instructions if you want to try out the plugin.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Secure Code Warrior plugin for SCM-Manager integrates information about vulnerabilities directly into the creation and approval process for changes. All that is required is that a person involved in the process mentions the security vulnerability. All the necessary information for implementation is then provided automatically. The advantage of this approach is that knowledge about security vulnerabilities is spread throughout the team without additional effort, and team members can educate themselves through self-study using micro-learning and gamification.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>security</category>
      <category>programming</category>
      <category>git</category>
    </item>
  </channel>
</rss>
