<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alcide</title>
    <description>The latest articles on Forem by Alcide (@alcide).</description>
    <link>https://forem.com/alcide</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2089%2F5b0e9660-0530-4ce1-8c37-cf7365ce5c69.png</url>
      <title>Forem: Alcide</title>
      <link>https://forem.com/alcide</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/alcide"/>
    <language>en</language>
    <item>
      <title>New Kubernetes Node Vulnerability (CVE-2020-8558) bypasses localhost boundary</title>
      <dc:creator>Gadi Naor</dc:creator>
      <pubDate>Wed, 22 Jul 2020 08:11:30 +0000</pubDate>
      <link>https://forem.com/alcide/new-kubernetes-node-vulnerability-cve-2020-8558-bypasses-localhost-boundary-64f</link>
      <guid>https://forem.com/alcide/new-kubernetes-node-vulnerability-cve-2020-8558-bypasses-localhost-boundary-64f</guid>
      <description>&lt;h3&gt;
  
  
  Vulnerability Description and Impact
&lt;/h3&gt;

&lt;p&gt;A security issue was discovered in kube-proxy which &lt;strong&gt;allows adjacent nodes/hosts to reach TCP and UDP services bound to 127.0.0.1 running on the node or in the node's network namespace (host network)&lt;/strong&gt;. This breaks security assumptions made by services listening on localhost.&lt;/p&gt;

&lt;p&gt;This security bug was originally raised in issue &lt;a href="https://github.com/kubernetes/kubernetes/issues/90259"&gt;#90259&lt;/a&gt;, which details how kube-proxy sets net.ipv4.conf.all.route_localnet=1 and thereby causes the system not to reject traffic to localhost which originates on other hosts (martian traffic). On the wire, such traffic appears as packets with an IPv4 destination in the range 127.0.0.0/8 and the layer-2 destination MAC address of a target node; observing such packets may indicate an attack targeting this vulnerability.&lt;/p&gt;
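
&lt;p&gt;The martian-traffic condition described above can be expressed as a small predicate (a sketch using Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module; the function name is illustrative):&lt;/p&gt;

```python
import ipaddress

LOOPBACK_NET = ipaddress.ip_network("127.0.0.0/8")

def is_martian(dst_ip: str, ingress_iface: str) -> bool:
    """A packet destined for 127.0.0.0/8 that arrives on any interface
    other than loopback should never exist on the wire; with
    route_localnet=1 the kernel stops rejecting such packets."""
    return ingress_iface != "lo" and ipaddress.ip_address(dst_ip) in LOOPBACK_NET
```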

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lxkO_ygG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bkmaxt4z8yexz2piymdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lxkO_ygG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bkmaxt4z8yexz2piymdl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, if a cluster administrator runs a TCP service on a node that listens on 127.0.0.1:1234, because of this security bug, that service would be potentially reachable by other hosts on the same LAN as the node, or by containers running on the same node as the service. If the example service on port 1234 required no additional authentication (because it assumed that only other localhost processes could reach it), then it could be vulnerable to attacks that make use of this security bug.&lt;/p&gt;

&lt;p&gt;While many Kubernetes installers explicitly disable the API Server's insecure port, and Kubernetes v1.20 is planned to remove this insecure option, an &lt;strong&gt;API server&lt;/strong&gt; that uses this insecure option and listens on 127.0.0.1:8080 &lt;strong&gt;will accept requests without authentication&lt;/strong&gt;.&lt;br&gt;
To mount such an attack on the API server, an attacker must have access to another system on the same LAN or control of a container running on the master. Managed Kubernetes services such as EKS, AKS, GKE and others should be resilient to attacks on the API server insecure port.&lt;/p&gt;

&lt;h3&gt;
  
  
  Are You Vulnerable?
&lt;/h3&gt;

&lt;p&gt;The vulnerability affects &lt;strong&gt;kubelet&lt;/strong&gt; &amp;amp; &lt;strong&gt;kube-proxy&lt;/strong&gt; which are core Node components:&lt;/p&gt;

&lt;h4&gt;
  
  
  Affected Versions:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;kubelet/kube-proxy v1.18.0-1.18.3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubelet/kube-proxy v1.17.0-1.17.6&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubelet/kube-proxy &amp;lt;=1.16.10&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
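
&lt;p&gt;As a quick sanity check, the affected ranges above can be encoded in a short script (a sketch; it assumes plain &lt;code&gt;major.minor.patch&lt;/code&gt; version strings):&lt;/p&gt;

```python
def parse(version: str):
    # "1.18.2" becomes the comparable tuple (1, 18, 2)
    return tuple(int(x) for x in version.split("."))

def is_affected(version: str) -> bool:
    """Affected: kubelet/kube-proxy 1.18.0-1.18.3, 1.17.0-1.17.6,
    and every release up to and including 1.16.10."""
    v = parse(version)
    if v >= (1, 18, 0):
        return (1, 18, 3) >= v
    if v >= (1, 17, 0):
        return (1, 17, 6) >= v
    return (1, 16, 10) >= v
```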

&lt;p&gt;Or if one or more of the following items are applicable to your environments:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Your cluster nodes run in an environment where untrusted hosts share the same layer 2 domain (i.e. same LAN) as the cluster nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your cluster allows untrusted pods to run containers with CAP_NET_RAW which is enabled by default by Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your nodes (or hostnetwork pods) run any localhost-only services which do not require any further authentication. To list services that are potentially affected, run the following commands on nodes:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;lsof +c 15 -P -n -i4TCP@127.0.0.1 -sTCP:LISTEN&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;lsof +c 15 -P -n -i4UDP@127.0.0.1&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
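
&lt;p&gt;The same listeners that &lt;code&gt;lsof&lt;/code&gt; reports can also be read from &lt;code&gt;/proc/net/tcp&lt;/code&gt;, where addresses are hex-encoded and, on common little-endian machines, appear byte-reversed. A small decoder (a sketch; the sample value is illustrative):&lt;/p&gt;

```python
def decode_proc_addr(field: str):
    """Decode a /proc/net/tcp local_address field such as
    '0100007F:04D2' into a dotted-quad IP and a port number.
    Assumes the little-endian layout typical of x86 hosts, where
    the four address bytes are stored in reverse order."""
    addr_hex, port_hex = field.split(":")
    addr_bytes = bytes.fromhex(addr_hex)[::-1]  # reverse byte order
    ip = ".".join(str(b) for b in addr_bytes)
    return ip, int(port_hex, 16)
```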

&lt;h4&gt;
  
  
  Risk:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Typical Clusters: medium (5.4) CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In clusters where API server insecure port has not been disabled: high (8.8) CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Automatic Detection of adjacent Node attacks with Alcide Runtime
&lt;/h3&gt;

&lt;p&gt;This vulnerability has long been covered by &lt;a href="//alcide.io/platform/microservices-anomaly-detection/"&gt;Alcide Runtime&lt;/a&gt; without requiring any configuration by the user: the detection was part of Alcide Runtime even prior to the vulnerability’s disclosure, so no additional detection capabilities had to be added.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9sDuCjDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/39u6gl0iwvivtyd4bdg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9sDuCjDC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/39u6gl0iwvivtyd4bdg5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The traffic is flagged as spoofed traffic because localhost traffic should never cross network boundaries. Furthermore, if you explicitly define firewall policies using Alcide’s microservices firewall, then pods can’t access other resources in the network unless explicitly allowed. This is what zero-trust networking is all about!&lt;/p&gt;

&lt;h3&gt;
  
  
  Automatic Detection of API Server attacks with Alcide kAudit
&lt;/h3&gt;

&lt;p&gt;Exploit attempts of this vulnerability via the &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#api-server-ports-and-ips"&gt;unsecured port of the API Server&lt;/a&gt; would show up in its audit log as entries from the "system:unsecured" principal, similar to entries from K8s services on the Master node accessing the API Server locally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7zUEzUTs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/oast3f8xjjmyn3ccgwy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7zUEzUTs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/oast3f8xjjmyn3ccgwy1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.alcide.io/kaudit-K8s-forensics/"&gt;Alcide kAudit&lt;/a&gt; analyzes the audit log of the Kubernetes API Server, continuously updates and compares behavioral activity profiles to actual observed activity and automatically detects anomalous access patterns from cluster principals (in this case, "system:unsecured" account) to various resource types, k8s APIs, namespaces, and specific resources. Thus, &lt;strong&gt;Alcide kAudit can automatically alert security teams to suspected attacks that rely on CVE-2020-8558 as they occur&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Furthermore, Alcide kAudit users can configure and customize it to alert on audit entries violating their specific policy. They can, for example, add alerts on API Server activity that reads or modifies sensitive namespaces or resources in the cluster, and use such alerts if they happen as an indicator that an exploit may be in progress and should be investigated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automatic Detection of Vulnerable Clusters with Alcide Advisor
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.alcide.io/kubernetes-advisor"&gt;Alcide Advisor&lt;/a&gt; is a Kubernetes multi-cluster vulnerability scanner that covers rich Kubernetes and Istio security best practices and compliance checks such as Kubernetes vulnerability scanning, hunting misplaced secrets, or excessive secret access, and many more security configuration and compliance checks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7GVQoqhN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yeqdblcmbc54fluqr1re.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7GVQoqhN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yeqdblcmbc54fluqr1re.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  With Alcide Advisor users can:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Identify the vulnerable clusters for this vulnerability (as well as other CVEs)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify and explicitly allow/deny which Pods can run with elevated privileges that enables CAP_NET_RAW&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify and explicitly allow/deny which Pods can run on the host network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On applicable environments, identify Pods that can run on master nodes that can potentially exploit the Kubernetes API Server.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Kubernetes, like any software, has bugs and vulnerabilities. Leveraging Kubernetes as cloud-native application infrastructure requires operators to monitor and secure all the moving parts, whether these are the application workloads or the platform and infrastructure components. CVE-2020-8558 joins other recent vulnerability disclosures (&lt;a href="https://blog.alcide.io/new-kubernetes-control-plane-vulnerability-cve-2020-8555"&gt;CVE-2020-8555&lt;/a&gt; and &lt;a href="https://blog.alcide.io/new-kubernetes-man-in-the-middle-mitm-attack-leverage-ipv6-router-advertisements"&gt;CVE-2020-10749&lt;/a&gt;) and highlights the need for a purpose-built Kubernetes security solution that can drive cluster operators to run workloads, applications, and infrastructure while leveraging the best security practices of the native Kubernetes security controls, as well as security monitoring &amp;amp; prevention.&lt;/p&gt;

&lt;p&gt;Start your &lt;a href="https://get.alcide.io/14-day-trial"&gt;14-day&lt;/a&gt; trial or &lt;a href="https://www.alcide.io/kaudit-K8s-forensics/"&gt;request a demo&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
      <category>cve</category>
      <category>alcide</category>
    </item>
    <item>
      <title>New Kubernetes Control Plane Vulnerability (CVE-2020-8555)</title>
      <dc:creator>Nitzan Niv</dc:creator>
      <pubDate>Mon, 20 Jul 2020 13:40:33 +0000</pubDate>
      <link>https://forem.com/alcide/new-kubernetes-control-plane-vulnerability-cve-2020-8555-47bj</link>
      <guid>https://forem.com/alcide/new-kubernetes-control-plane-vulnerability-cve-2020-8555-47bj</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vq3cPJGc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9wng0j6fkevmuamj004h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vq3cPJGc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9wng0j6fkevmuamj004h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Vulnerability Description and Impact
&lt;/h3&gt;

&lt;p&gt;A security issue was &lt;a href="https://groups.google.com/forum/#!topic/kubernetes-security-announce/kEK27tqqs30"&gt;discovered&lt;/a&gt; in Kubernetes and disclosed on June 1, 2020, as CVE-2020-8555.&lt;br&gt;
The vulnerability enables an attacker to gain access to data from services that are connected to the host network of the cluster’s master, and although the attack is not simple to execute, it can remotely bypass authorization controls and break confidentiality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Are You Vulnerable?
&lt;/h3&gt;

&lt;p&gt;The vulnerability affects kube-controller-manager which is part of Kubernetes control plane:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;V1.18.0, v1.17.0 - v1.17.4, v1.16.0 - v1.16.8, and versions earlier than v1.15.11&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The affected volume types that can be abused as part of the attack execution, as explained below, are: &lt;br&gt;
GlusterFS, Quobyte, StorageOS, ScaleIO.&lt;/p&gt;

&lt;p&gt;The vulnerability is patched in Kubernetes versions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;V1.18.1+, v1.17.5+, v1.16.9+, v1.15.12+&lt;/li&gt;
&lt;/ul&gt;
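
&lt;p&gt;Put differently, a cluster is safe once its control plane reaches the minimum patched release of its minor line. A short sketch of that check (assumes plain &lt;code&gt;major.minor.patch&lt;/code&gt; version strings and follows the patched list above):&lt;/p&gt;

```python
# Minimum patched release per minor line, per the advisory above.
PATCHED_FLOOR = {
    (1, 18): (1, 18, 1),
    (1, 17): (1, 17, 5),
    (1, 16): (1, 16, 9),
    (1, 15): (1, 15, 12),
}

def is_patched(version: str) -> bool:
    v = tuple(int(x) for x in version.split("."))
    floor = PATCHED_FLOOR.get(v[:2])
    if floor is None:
        # Minor lines from 1.19 on shipped after the fix;
        # lines older than 1.15 never received a patch.
        return v >= (1, 19, 0)
    return v >= floor
```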

&lt;h3&gt;
  
  
  Remediation &amp;amp; Mitigation
&lt;/h3&gt;

&lt;p&gt;Prior to upgrading, this vulnerability can be mitigated by adding endpoint protections on the master, restricting usage of the vulnerable volume types, and restricting StorageClass write permissions through RBAC.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technical Breakdown of the Exploit
&lt;/h3&gt;

&lt;p&gt;According to Kubernetes’ GitHub issue, this vulnerability allows certain authorized users to access endpoints within the master's host network, such as link-local or loopback services.&lt;br&gt;
By exploiting this vulnerability, these users can leak arbitrary information (up to 500 bytes per successful malicious request) from such unprotected endpoints.&lt;/p&gt;

&lt;h4&gt;
  
  
  The attack’s steps are:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;An attacker with permissions to do so creates a pod&lt;/li&gt;
&lt;li&gt;The attacker attaches a volume to the pod. This volume can be one of certain built-in volume types (GlusterFS, Quobyte, StorageOS, ScaleIO). An attacker with permissions to create a StorageClass can use this capability to the same effect.&lt;/li&gt;
&lt;li&gt;Through the volume attachment or StorageClass creation, the attacker causes the kube-controller-manager component to issue GET requests, or POST requests without an attacker-controlled request body, from the master's host network.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Automatic Vulnerability and Attack Detection with Alcide kAudit
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Dk9rhvJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/748w89m3o6fjfhfzjo6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Dk9rhvJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/748w89m3o6fjfhfzjo6f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes API Server logs every request it receives in its audit log. Of specific relevance to the detection of attempts to exploit CVE-2020-8555, actions like creation of a new pod, creation of a StorageClass, and sending of any requests to the API server leave traces in the audit log.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.alcide.io/kaudit-K8s-forensics/"&gt;Alcide kAudit&lt;/a&gt; automatically monitors and analyzes these audit logs. It creates and dynamically updates a profile of the normal behavior in the cluster. By comparing in real-time this profile to the audit log, kAudit can detect anomalous behavior that is associated with attempts to attack the k8s infrastructure of a cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Vztu5d83--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3nnegfxuqc6oziuq38c7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Vztu5d83--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3nnegfxuqc6oziuq38c7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;kAudit can detect anomalous behavior related to attempts to exploit CVE-2020-8555 at several points along the attack chain:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Attacker creates a pod&lt;/li&gt;
&lt;li&gt;Attacker creates a StorageClass&lt;/li&gt;
&lt;li&gt;kube-controller-manager sends unusual requests to API-Server&lt;/li&gt;
&lt;li&gt;Unusual increase in unauthorized requests or other irregular status in responses, when an attacker attempts to execute the previous steps.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By combining correlated anomalies to an incident associated with the attacking user, which in this case is the one that creates the pod and StorageClass, Alcide kAudit can alert security teams that an attack was attempted and focus their attention on the relevant users, resources and actions to investigate.&lt;/p&gt;

&lt;p&gt;Like other security tools, kAudit also enables the user to configure rules to filter specific entries in the audit log that are interesting for compliance and security investigation. However, in this case, even if the user happens to create rules that identify specific actions that are part of the attack, it will be difficult and time-consuming for a security expert to link these traces of the attack and to create a holistic understanding of it, as regular cluster activity also often creates pods, creates StorageClass, sends requests from the kube-controller component to the API server, or occasionally sends unauthorized requests.&lt;/p&gt;

&lt;p&gt;On the other hand, the automated machine learning algorithm used by Alcide kAudit will sift through the log and focus the expert’s attention on the security incident and anomalous behavior that stands out against this noisy background.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automatic Detection of Vulnerable Clusters with Alcide Advisor
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.alcide.io/kubernetes-advisor"&gt;Alcide Advisor&lt;/a&gt; is a Kubernetes multi-cluster vulnerability scanner that covers rich Kubernetes and Istio security best practices and compliance checks such as Kubernetes vulnerability scanning, hunting misplaced secrets, or excessive secret access, and many more security configuration and compliance checks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mNTcHR4R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hlh0341bmu6l7j3pmii3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mNTcHR4R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hlh0341bmu6l7j3pmii3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Alcide Advisor users can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Identify the vulnerable clusters for this vulnerability (as well as other CVEs)&lt;/li&gt;
&lt;li&gt;Define StorageClass whitelist and identify violations of this list.&lt;/li&gt;
&lt;li&gt;Identify the components, as well as ServiceAccounts that have RBAC permissions to create pods.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Kubernetes, like any software, has bugs and vulnerabilities. Leveraging Kubernetes as a cloud-native application infrastructure requires operators to monitor and secure all the moving parts, whether these are the application workloads or the platform and infrastructure components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alcide Advisor&lt;/strong&gt; can identify vulnerable clusters and understand the risk surface associated with CVE-2020-8555, as well as mitigate it. With &lt;strong&gt;Alcide kAudit&lt;/strong&gt;, machine-learning driven dynamic profiling is used to detect actual attempts to exploit known weaknesses such as CVE-2020-8555, as well as detecting attacks targeting new and unknown vulnerabilities.&lt;/p&gt;

&lt;p&gt;Try Alcide's security solution with a &lt;a href="https://get.alcide.io/14-day-trial"&gt;14-day trial&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
      <category>cve</category>
      <category>alcide</category>
    </item>
    <item>
      <title>Helm Scan With GitHub Actions &amp; K8s Advisor</title>
      <dc:creator>Gadi Naor</dc:creator>
      <pubDate>Tue, 21 Apr 2020 07:21:32 +0000</pubDate>
      <link>https://forem.com/alcide/helm-scan-with-github-actions-k8s-advisor-2i9i</link>
      <guid>https://forem.com/alcide/helm-scan-with-github-actions-k8s-advisor-2i9i</guid>
      <description>&lt;p&gt;GitHub Actions is a continuous integration (CI) and continuous deployment (CD) service from GitHub, and it powers GitHub's built-in CI. In essence, GitHub Actions helps developers automate software development workflows in the same place they store code and collaborate on pull requests and issues. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uUPdAezL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vyy23hujmycktromp7nm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uUPdAezL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vyy23hujmycktromp7nm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
GitHub Actions enable developers to write individual tasks, called actions, and combine them to create a custom workflow. Workflows are custom automated processes that developers can set up in their repository to build, test, package, release, or deploy any code project on GitHub.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Kubernetes KIND Cluster
&lt;/h3&gt;

&lt;p&gt;Kubernetes IN Docker, &lt;a href="https://kind.sigs.k8s.io/"&gt;KIND&lt;/a&gt;, is a tool to create local clusters for testing Kubernetes using Docker containers. KIND was primarily designed for testing Kubernetes itself but may be used for local development or CI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Wsp8Mzv4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i3802sohm0xkllts0z7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wsp8Mzv4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i3802sohm0xkllts0z7a.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Multi-node clusters and other advanced features may be configured with a config file - the detailed usage information and documentation are &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start"&gt;here&lt;/a&gt;. GitHub Actions has a marketplace for reusable actions, and the folks behind Helm already managed to put together &lt;a href="https://github.com/marketplace/actions/kind-cluster"&gt;Kind Cluster&lt;/a&gt;, a reusable action that can be plugged into GitHub’s automation workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Scanning of Helm Charts with GitHub Actions Workflow
&lt;/h3&gt;

&lt;p&gt;Alcide Advisor, an API-driven Kubernetes security and hygiene scanner, has a wide &lt;a href="https://github.com/alcideio/pipeline"&gt;integration surface&lt;/a&gt; across continuous deployment (CD) platforms. However, GitHub Actions combined with KIND introduces an interesting approach for scanning Helm charts in the continuous integration (CI) stage.&lt;br&gt;
In the example below, a GitHub workflow has 3 sequential jobs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build&lt;/li&gt;
&lt;li&gt;Test&lt;/li&gt;
&lt;li&gt;Advisor Scan&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;strong&gt;Advisor Scan&lt;/strong&gt; Job performs the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download Helm 3&lt;/li&gt;
&lt;li&gt;Launch Kind Cluster using a GitHub Action&lt;/li&gt;
&lt;li&gt;Install a chart (uswitch kiam in this example) into a specific namespace&lt;/li&gt;
&lt;li&gt;Download Alcide Advisor scanner&lt;/li&gt;
&lt;li&gt;Use Alcide Advisor to scan the namespace into which kiam was installed&lt;/li&gt;
&lt;li&gt;Publish the scan report into the pipeline artifacts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bP7jPmhX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d825xt88daolav5493m7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bP7jPmhX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d825xt88daolav5493m7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1M-Z_VDH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/su075v5nrns1royjarcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1M-Z_VDH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/su075v5nrns1royjarcr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fS-HI4-w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4jjdjt7z8tqdnzn3v9y4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fS-HI4-w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4jjdjt7z8tqdnzn3v9y4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Helm is the de-facto tool for collaborating when creating, installing, and managing applications inside of Kubernetes. Rendering Helm charts with their configuration into a cluster that can be scanned by Alcide Advisor opens the door for developers &amp;amp; DevOps to get a handle on the security and hygiene level of new Helm charts as well as of Helm chart changes. To see the full pipeline example go to &lt;a href="https://github.com/alcideio/pipeline"&gt;https://github.com/alcideio/pipeline&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Container Networking - Explained</title>
      <dc:creator>arikalcide</dc:creator>
      <pubDate>Mon, 20 Apr 2020 07:39:25 +0000</pubDate>
      <link>https://forem.com/alcide/container-networking-explained-3gg9</link>
      <guid>https://forem.com/alcide/container-networking-explained-3gg9</guid>
      <description>&lt;p&gt;Container networking is one of the most critical concerns in production environments where scale, security, and availability are required to be as automated and as seamless as possible. In this blog post, I want to focus on the role that container networking plays in enterprises today.&lt;/p&gt;

&lt;p&gt;In the past few years, containers have become the leading technology for implementing microservices applications. It is no longer deniable that containers have changed the way applications are developed and deployed, but no less notable is their impact on how applications are connected to the network. While the majority of the discussion around containers focuses on developer aspects and orchestration, this blog aims to shed some light on container networking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BJRBJGFp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/blko13sxrqs4h511oroa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BJRBJGFp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/blko13sxrqs4h511oroa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/config/containers/container-networking/"&gt;Container networking&lt;/a&gt; is one of the most critical concerns in production environments where scale, security and availability are required to be as automated and as seamless as possible.&lt;/p&gt;

&lt;p&gt;Though they share similarities, there are some major differences between container networking and VM networking. Let’s name a few:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Containers share the same kernel. They can share the same NIC and network namespace with the host (‘host’ mode) OR they can be connected to an internal vNIC with their own network namespace (‘bridge’ mode - most used). VMs, on the other hand, simulate the entire hardware including a vNIC which is connected to the physical NIC.&lt;/li&gt;
&lt;li&gt;Containers are also ephemeral. While VMs stay for long, containers are rapidly changing, rising and disappearing as their underlying application scales.&lt;/li&gt;
&lt;li&gt;There are more containers than VMs. Multiple containers can run on the same host, and more containers mean more NICs and more traffic. These require more resources, including a larger IP address space, more routing decisions, more firewall rules and more sockets in use. This means that efficient hardware is a must.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Over time and with containers becoming ubiquitous, running with multiple containers on multi-host networking has become a real connectivity issue. To address this problem, container projects adopted a model where networking was decoupled from the container runtime. In this model, the container network stack is handled by a ‘plugin’ or ‘driver’ that manages its network interfaces and defines how it connects to the network.&lt;/p&gt;

&lt;p&gt;There are two main standards for container networking configuration on Linux containers: the &lt;a href="https://github.com/containernetworking/cni"&gt;CNI (Container Network Interface)&lt;/a&gt; and the &lt;a href="https://github.com/moby/libnetwork"&gt;CNM (Container Networking Model)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The CNI project was created by CoreOS for writing network plugins. CNM, on the other hand, was created by Docker for the same purpose; each takes a different approach to solving similar problems. Both help build modular networks, with a set of third-party vendors providing extended networking capabilities.&lt;/p&gt;

&lt;p&gt;The basic model is composed of three major components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Network - a group of endpoints that can communicate with each other directly, mostly implemented with Linux bridge.&lt;/li&gt;
&lt;li&gt;Endpoint - a network interface that joins a Sandbox to a Network. Many endpoints can exist in a sandbox, but each endpoint can belong to only one network. Mostly implemented with a virtual Ethernet (“veth”) pair.&lt;/li&gt;
&lt;li&gt;Sandbox - an isolated environment that contains the container’s network stack configuration. A sandbox can contain many endpoints from many networks, mostly implemented with Linux Network Namespace.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Current State-of-the-Art in Container Networking
&lt;/h3&gt;

&lt;p&gt;There are some areas that container networking handles fairly well:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Overlay networks allow you to create and manage private multi-host networks for communication between containers and services, with isolation capabilities for a more secure network.&lt;/li&gt;
&lt;li&gt;Orchestration frameworks such as Kubernetes automate and ease operational container networking tasks.&lt;/li&gt;
&lt;li&gt;Monitoring is available from providers such as LogzIO and Datadog.&lt;/li&gt;
&lt;li&gt;Third-party plugins support moving containerized applications between hosts along with their state and storage.&lt;/li&gt;
&lt;li&gt;Some CNIs support end-to-end encryption, while others provide network policy capabilities for service mesh architectures.&lt;/li&gt;
&lt;/ol&gt;
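To make the last point concrete, with a CNI that enforces Kubernetes NetworkPolicy, restricting a service so that it only accepts traffic from its own application's pods might look like this sketch (the namespace, labels, and port are illustrative assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # illustrative name
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend               # pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend      # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Note that such a policy only has effect if the cluster's CNI plugin actually implements NetworkPolicy enforcement.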

&lt;h3&gt;
  
  
  OK, So What Now? Moving Towards Containerized Applications
&lt;/h3&gt;

&lt;p&gt;Microservices architecture makes a lot of sense when dealing with scalability, and the usage of containers helps preserve this architectural notion. Being a hybrid technology, containers can run (almost) everywhere, and their easier, lighter deployment procedures allow quick duplication of microservices at runtime without having to provision new network resources. Microservices need to communicate with each other and are often required to be accessible to/from the outside world.&lt;/p&gt;

&lt;p&gt;With the help of containers, it is possible to manage the internal communication between microservices by grouping all microservices of the same application under the same network. Moreover, container network isolation provides segmentation capabilities at the level of a microservice, which serves both security and compliance considerations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ensure Your Network is Architected to Handle Containers Effectively
&lt;/h3&gt;

&lt;p&gt;When dealing with container networking, CNI and CNM fall short of meeting enterprise requirements. To meet these requirements, containers need to be agile, fast, and secure.&lt;/p&gt;

&lt;p&gt;The perimeter has &lt;a href="https://www.alcide.io/microservices-anomaly-detection/"&gt;changed&lt;/a&gt;. What was once a monolithic application fully deployed on-premise is now spread and split across multiple cloud providers, whether private or public. The organization gateway no longer deals only with virtual and physical servers, but with multiple applications and microservices hiding behind a NAT, making the jobs of load balancing and security even harder. It is almost impossible to manage firewall rules for every microservice, and security groups are no longer efficient for applying a robust zero-trust security approach.&lt;/p&gt;

&lt;p&gt;Automated processes are among the most crucial functionalities in an efficient, highly available and monitored data center. However, both container runtimes and container networking plugins fall short in addressing these concerns. Auto-scaling needs to be added (by coding it) to the cluster. In addition, running “everywhere” still means that binaries need to run on the architecture they were compiled for, and network policies need to be defined and written for each and every running container. Last but not least is the challenge of managing persistent storage for stateful applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container Challenges For the Network Team
&lt;/h3&gt;

&lt;p&gt;For those coming from legacy networking backgrounds, the adoption of containers can be a real challenge. The options are wide open when it comes to container networking implementation and yet, standardization efforts have started to take place.&lt;/p&gt;

&lt;p&gt;Connectivity, availability and fast response times are the biggest concerns of any network team, and they become even greater when dealing with today’s complex networking stack. Containers’ network behavior differs from what we know in legacy networking: challenges such as maximizing network performance and utilization are more complex with containerized applications. While data usually flows east-west, containers also add &lt;a href="https://www.alcide.io/platform/microservices-firewall/"&gt;north-south traffic&lt;/a&gt;, which may require some adjustments to the network architecture and load balancers. &lt;br&gt;
It is important to keep network capacity neither under-utilized nor overloaded, as either extreme leads to bottlenecks in microservices environments.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>GitOps - A Security Perspective (Part 1)</title>
      <dc:creator>Gadi Naor</dc:creator>
      <pubDate>Mon, 13 Apr 2020 13:03:45 +0000</pubDate>
      <link>https://forem.com/alcide/gitops-a-security-perspective-part-1-16ci</link>
      <guid>https://forem.com/alcide/gitops-a-security-perspective-part-1-16ci</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fp78.f0.n0.cdn.getcloudapp.com%2Fitems%2FyAu2864l%2FImage%25202020-04-13%2520at%25208.29.49%2520AM.png%3Fv%3Da8a200d5ca1a59379349e629f97e44a1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fp78.f0.n0.cdn.getcloudapp.com%2Fitems%2FyAu2864l%2FImage%25202020-04-13%2520at%25208.29.49%2520AM.png%3Fv%3Da8a200d5ca1a59379349e629f97e44a1" alt="https://p78.f0.n0.cdn.getcloudapp.com/items/yAu2864l/Image%202020-04-13%20at%208.29.49%20AM.png?v=a8a200d5ca1a59379349e629f97e44a1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitops.tech/" rel="noopener noreferrer"&gt;GitOps&lt;/a&gt; is a paradigm that puts Git at the heart of building and operating cloud native applications by using Git as the single source of truth and empowers developers to perform what used to fall under IT operations. This post is part a blog post series covering GitOps and Kubernetes security.&lt;/p&gt;

&lt;h1&gt;
  
  
  Kubernetes - A GitOps Companion
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.alcide.io/kubernetes-security" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, as the new application server, leverages a “declarative” approach when it comes to building cloud native application, which means that application configuration is guaranteed by a set of facts instead of by a set of instructions. With application’s declarations versioned in Git, we have a single source of truth, our apps can be easily deployed and rolled back to and from Kubernetes, and when disaster strikes, your cluster’s infrastructure can also be reproduced.&lt;/p&gt;

&lt;p&gt;With Git at the center of the delivery pipelines, developers can make pull requests to accelerate and simplify application deployments and operations tasks to Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fp78.f0.n0.cdn.getcloudapp.com%2Fitems%2FqGudAXDm%2FImage%25202020-04-13%2520at%25208.31.54%2520AM.png%3Fv%3Da5b9b5f0fa77fdc11cfcabdb6f492de6" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fp78.f0.n0.cdn.getcloudapp.com%2Fitems%2FqGudAXDm%2FImage%25202020-04-13%2520at%25208.31.54%2520AM.png%3Fv%3Da5b9b5f0fa77fdc11cfcabdb6f492de6" alt="gitops-flowchart-advisor"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Is GitOps right for you?
&lt;/h1&gt;

&lt;p&gt;The fresh approach that GitOps + Kubernetes brings to the application delivery lifecycle is undeniably different: it increases engineering velocity and simplifies building the CI+CD pipelines themselves. Whether a GitOps ‘Pull’ approach is a better fit than a ‘Push’ approach is really a matter of an organization’s engineering &amp;amp; operational culture, as well as the almost theological questions of whether engineering is accountable for security and what visibility security teams require into this application-server black box.&lt;/p&gt;

&lt;h1&gt;
  
  
  GitOps &amp;amp; Security
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;GitOps changes are synced into the cluster only through the cluster git repository users, so the repository is secured only to the level of those git user accounts. A compromised user account with permissions to push into the cluster git repo can introduce changes that result in a data breach, service disruption or anything in between. GitOps infrastructure must therefore implement additional cluster-side guardrails, for example in the form of whitelisting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GitOps tools like &lt;a href="https://www.weave.works/oss/flux/" rel="noopener noreferrer"&gt;Flux&lt;/a&gt;, &lt;a href="https://argoproj.github.io/argo-cd/" rel="noopener noreferrer"&gt;ArgoCD&lt;/a&gt; and the like practically run with cluster god permissions - and are persistent in the cluster. By comparison, the Kubernetes Dashboard, which is considered a high-risk cluster component, is oftentimes removed from the cluster. &lt;br&gt;
‘Push’-based CD pipelines, such as &lt;a href="https://www.spinnaker.io/" rel="noopener noreferrer"&gt;Spinnaker&lt;/a&gt;, &lt;a href="https://jenkins.io/" rel="noopener noreferrer"&gt;Jenkins&lt;/a&gt; and the like, are external to the cluster, invoked on demand, and introduce automation-driven changes into the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fp78.f0.n0.cdn.getcloudapp.com%2Fitems%2Fp9u7d8Pb%2FImage%25202020-04-13%2520at%25208.34.35%2520AM.png%3Fv%3D6f4dfdfe4ccd49f7fa23619cfd35fe3c" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fp78.f0.n0.cdn.getcloudapp.com%2Fitems%2Fp9u7d8Pb%2FImage%25202020-04-13%2520at%25208.34.35%2520AM.png%3Fv%3D6f4dfdfe4ccd49f7fa23619cfd35fe3c" alt="RBAC GitOps"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example: &lt;a href="https://github.com/fluxcd/flux/blob/master/deploy/flux-account.yaml" rel="noopener noreferrer"&gt;Flux RBAC Permission&lt;/a&gt; - Cluster God * * * * *&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitOps tools like Flux, ArgoCD and the like require cluster-external access, represented as domain names (github.com, bitbucket.org, gitlab.com,..), which means that &lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noopener noreferrer"&gt;Kubernetes native policies&lt;/a&gt; are not suitable for segmenting those highly privileged in-cluster components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Application secrets in a GitOps era require a Kubernetes-external secret provider, for example &lt;a href="https://www.hashicorp.com/products/vault/" rel="noopener noreferrer"&gt;Hashicorp Vault&lt;/a&gt;, &lt;a href="https://aws.amazon.com/kms/" rel="noopener noreferrer"&gt;AWS KMS&lt;/a&gt;, &lt;a href="https://azure.microsoft.com/en-us/services/key-vault/" rel="noopener noreferrer"&gt;Azure Vault&lt;/a&gt; and the like. Alternatively, teams can revert to &lt;a href="https://git-secret.io/" rel="noopener noreferrer"&gt;Git Secrets&lt;/a&gt;, which means committing secrets into git in their encrypted form and decrypting them before application consumption.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
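To make the second point concrete, the "cluster god" permissions referenced above boil down to a wildcard ClusterRole along these lines (a simplified sketch with an illustrative name, not the exact Flux manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gitops-operator        # illustrative name
rules:
  - apiGroups: ['*']           # every API group
    resources: ['*']           # every resource
    verbs: ['*']               # every verb
  - nonResourceURLs: ['*']     # plus all non-resource endpoints
    verbs: ['*']
```

A compromise of the workload holding this role is effectively a compromise of the entire cluster, which is why the segmentation and guardrail concerns above matter.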

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.alcide.io/secured-ci-cd-pipeline" rel="noopener noreferrer"&gt;Continuous integration/continuous development (CI/CD) with the Kubernetes ecosystem&lt;/a&gt; does have a variety of tools to choose from and organizations should use the tools that are best suited for their specific use cases and culture. Glueing all the pieces together is not trivial. Integrating security and consuming security insights by various stakeholders is an equally challenging task to achieve. GitOps simplifies this in some aspects, but complicates in other aspects.&lt;/p&gt;

&lt;p&gt;Stay tuned for Part 2.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Join our upcoming webinar on April 22nd: &lt;a href="https://webinars.devops.com/gitops-best-practices-for-continuous-deployment-and-progressive-security" rel="noopener noreferrer"&gt;GitOps Best Practices for Continuous Deployment and Progressive Security&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>gitops</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Istio Service Mesh in 2020: Envoy In, Control Plane Simplified</title>
      <dc:creator>Alon Berger</dc:creator>
      <pubDate>Mon, 13 Apr 2020 10:42:30 +0000</pubDate>
      <link>https://forem.com/alcide/istio-service-mesh-in-2020-envoy-in-control-plane-simplified-5dgi</link>
      <guid>https://forem.com/alcide/istio-service-mesh-in-2020-envoy-in-control-plane-simplified-5dgi</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4xwwbiyn97xitv3timxe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4xwwbiyn97xitv3timxe.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since 2017, Kubernetes has soared and has played a key role within the cloud-native computing community. With this movement, more and more companies who already embraced microservices realized that a dedicated software layer for managing the service-to-service communication is required. &lt;/p&gt;

&lt;p&gt;Enter the Service Mesh and its leading contender as a preferred control plane manager - Istio, a platform built around the Envoy proxy to manage, control and monitor traffic flow, and to secure services and the connections between them. Check out Istio’s blog for more information and additional features to come.&lt;/p&gt;

&lt;p&gt;According to the &lt;a href="https://www.cncf.io/wp-content/uploads/2020/03/CNCF_Survey_Report.pdf" rel="noopener noreferrer"&gt;CNCF Survey 2019&lt;/a&gt;, Istio is at the top of the chart as the preferred service mesh project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxfcaiequpb2nxxbrha5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxfcaiequpb2nxxbrha5y.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While Istio has clearly made its mark as a powerful service mesh tool, it still comes with relatively complex operational and integration requirements.&lt;/p&gt;

&lt;p&gt;Istio’s roadmap for 2020 is all about supporting companies as they adopt microservices architectures for application development. The main focus of Istio’s latest release is simply making it faster and easier to use.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Should We Expect?
&lt;/h3&gt;

&lt;p&gt;Istio offers a complete solution for orchestrating a network of deployed services with ease. It handles complex operational requirements like load balancing, service-to-service authentication, monitoring, rate limiting and more.&lt;/p&gt;

&lt;p&gt;To achieve that, Istio provides its core features as key capabilities across a network of services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traffic management&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Observability&lt;/li&gt;
&lt;li&gt;Platform support&lt;/li&gt;
&lt;li&gt;Integration and customization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With its latest release, along with some most anticipated improvements, those features are getting buffed as well.&lt;/p&gt;

&lt;p&gt;During 2019, Istio’s build and test infrastructure improved significantly, resulting in higher quality and easier release cycles. A big focus was on improving the user experience, with many additional commands added to allow easier operations and a smoother troubleshooting experience.&lt;/p&gt;

&lt;p&gt;Furthermore, Istio’s team reported exceptional growth in contributors within the product’s community.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mixer Out, Envoy In
&lt;/h3&gt;

&lt;p&gt;Extensibility in Istio used to be provided by Mixer, the component responsible for policy controls and telemetry collection. It acted as an intermediation layer that allowed fine-grained control over all interactions between the mesh and infrastructure backends.&lt;/p&gt;

&lt;p&gt;This entire model has now been migrated directly into the proxies in order to remove additional dependencies, resulting in a substantial reduction in latency and a significant improvement in overall performance. Eventually, Mixer will be released as a separate add-on, as part of the Istio ecosystem.&lt;/p&gt;

&lt;p&gt;The new model replacing Mixer uses Envoy’s extensions, which paves the path to even more capabilities and flexibility. There is already an ongoing implementation of a WebAssembly runtime in Envoy, which will potentially extend platform efficiency. This type of flexibility was much more challenging to achieve with Mixer.&lt;/p&gt;

&lt;p&gt;Another key takeaway from this new model is the ability to avoid using a unique CRD for every integration with Istio.&lt;/p&gt;

&lt;h3&gt;
  
  
  Control Plane Simplified
&lt;/h3&gt;

&lt;p&gt;The desire to have fewer moving parts during deployments drove the Istio team towards &lt;a href="https://istio.io/news/releases/1.5.x/announcing-1.5/#introducing-istiod" rel="noopener noreferrer"&gt;istiod&lt;/a&gt;, a new single binary, which now acts as a single daemon, responsible for the various microservices deployments.&lt;/p&gt;

&lt;p&gt;This binary combines features from known key components such as Pilot, Citadel, Galley and the sidecar injector.&lt;/p&gt;

&lt;p&gt;This approach reduces complexity within domains across the board.&lt;/p&gt;

&lt;p&gt;Installation, ongoing maintenance, and troubleshooting efforts will become much more straightforward while supporting all functionalities from previous releases.&lt;/p&gt;

&lt;p&gt;Additionally, the node-agent’s certificate-distribution functionality has moved to the istio-agent, which already runs in each pod, reducing dependencies even further.&lt;/p&gt;

&lt;p&gt;Below is a “Before and After” of Istio’s high-level architecture.&lt;br&gt;
Can you spot the differences?&lt;/p&gt;

&lt;p&gt;Before:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F801enpquq5mmxnpsgmy3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F801enpquq5mmxnpsgmy3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxkyb62ca0ac39co7v1fh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxkyb62ca0ac39co7v1fh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Securing All Fronts
&lt;/h3&gt;

&lt;p&gt;Another major focus is on buffing up several security fundamentals like reliable workload identity, robust access policies, and comprehensive audit logging. The imperative nature of such requirements is what pushes the team to double down on stabilizing the API for these features.&lt;/p&gt;

&lt;p&gt;Inevitably, network traffic will receive several security reinforcements, including the automated rollout of mutual TLS and leveraging of the Secret Discovery Service, which introduces a safer way of distributing certificates and reduces the risk of their being exposed to other workloads running on the machine.&lt;/p&gt;

&lt;p&gt;These upgrades will trim down both dependencies and requirements for cluster-wide security policies, leading to a much more robust system.&lt;/p&gt;

&lt;p&gt;Here at Alcide, we offer Istio hygiene checks as part of the &lt;a href="https://www.alcide.io/service-mesh-security/" rel="noopener noreferrer"&gt;Alcide Advisor&lt;/a&gt;.&lt;br&gt;
Check out our recent webinar on &lt;a href="https://get.alcide.io/security-for-istio-an-incremental-approach-on-demand-webinar" rel="noopener noreferrer"&gt;Security For Istio - an Incremental Approach&lt;/a&gt; to learn more.&lt;/p&gt;

</description>
      <category>istio</category>
      <category>security</category>
    </item>
    <item>
      <title>Kubernetes RBAC Visualization</title>
      <dc:creator>Gadi Naor</dc:creator>
      <pubDate>Sun, 05 Apr 2020 15:16:19 +0000</pubDate>
      <link>https://forem.com/alcide/kubernetes-rbac-visualization-48nl</link>
      <guid>https://forem.com/alcide/kubernetes-rbac-visualization-48nl</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F99ehgzrjbwide7d20zs6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F99ehgzrjbwide7d20zs6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.&lt;/p&gt;

&lt;p&gt;Permissions are purely additive and there are no “deny” rules.&lt;/p&gt;

&lt;p&gt;A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in. ClusterRole, by contrast, is a non-namespaced resource, and grants access at the cluster level. ClusterRoles have several uses. &lt;br&gt;
You can use a ClusterRole to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Define permissions on namespaced resources and be granted within individual namespaces&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define permissions on namespaced resources and be granted across all namespaces&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define permissions on cluster-scoped resources&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Roles are used to define API access rules for resources within the namespace of the role, and ClusterRole is used to define API access across all cluster namespaces.&lt;/p&gt;
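As a minimal illustration of the distinction, a namespaced Role and a cluster-wide ClusterRole granting read access to Pods might look like this (the names and namespace are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo            # a Role is always namespaced
  name: pod-reader
rules:
  - apiGroups: [""]          # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-global    # no namespace: cluster-scoped
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```

The rules are identical; only the scope at which they can be granted differs.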

&lt;h3&gt;
  
  
  &lt;a href="https://github.com/alcideio/rbac-tool#rbac-tool-viz" rel="noopener noreferrer"&gt;Alcide’s rbac-tool viz&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Alcide’s &lt;a href="https://github.com/alcideio/rbac-tool" rel="noopener noreferrer"&gt;rbac-tool&lt;/a&gt;, an open-source tool from Alcide, introduces a visualization functionality of the relationships between the resources that make your cluster RBAC configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjf0yx2hsnysiwhax1sl6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjf0yx2hsnysiwhax1sl6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above captures the various relationship combinations between resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Roles&lt;/strong&gt; - Define the policy rules that determine which API actions (read/create/update/delete) the subject (user/service) is allowed to perform on resources within the Role’s namespace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ClusterRoles&lt;/strong&gt; - Define the policy rules that determine which API actions (read/create/update/delete) the subject (user/service) is allowed to perform on resources cluster-wide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bindings&lt;/strong&gt; are the Kubernetes RBAC resources that link principals (users or automated services) to roles.&lt;br&gt;
Bindings can point to multiple roles:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RoleBindings&lt;/strong&gt; can point to &lt;strong&gt;ClusterRoles&lt;/strong&gt;, which grants the subject (user/service) the ClusterRole’s permissions, but only within the RoleBinding’s namespace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ClusterRoleBindings&lt;/strong&gt; can point to &lt;strong&gt;ClusterRoles&lt;/strong&gt;, which grants the subject (user/service) cluster-wide access to the resources specified in the rules.&lt;/p&gt;
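A hedged sketch of the two binding types: both reference a ClusterRole, but the RoleBinding confines its effect to a single namespace while the ClusterRoleBinding grants it cluster-wide (all names, subjects, and namespaces are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-in-demo
  namespace: demo            # grant applies only in this namespace
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: demo
roleRef:
  kind: ClusterRole          # reuses a ClusterRole's rule set
  name: pod-reader-global    # illustrative ClusterRole name
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-everywhere # grant applies in every namespace
subjects:
  - kind: ServiceAccount
    name: ops-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: pod-reader-global
  apiGroup: rbac.authorization.k8s.io
```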

&lt;h3&gt;
  
  
  Nginx Ingress Controller RBAC
&lt;/h3&gt;

&lt;p&gt;The following diagram shows the moving parts of the RBAC resources created by an Nginx Ingress Controller.&lt;/p&gt;

&lt;p&gt;You can see that two roles were created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Role that defines the allowed resource access within the namespace&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A ClusterRole that defines the cluster-wide access permissions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note, for example, that the ClusterRole grants the &lt;strong&gt;nginx-ingress&lt;/strong&gt; account permission to&lt;br&gt;
&lt;strong&gt;update&lt;/strong&gt; the &lt;strong&gt;status&lt;/strong&gt; of &lt;strong&gt;ingress&lt;/strong&gt; resources within the &lt;strong&gt;extensions&lt;/strong&gt; and &lt;strong&gt;networking.k8s.io&lt;/strong&gt; API groups.&lt;/p&gt;
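The corresponding rule inside such a ClusterRole would look roughly like this (a sketch based on the description above, not copied from a specific manifest):

```yaml
rules:
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses/status"]   # the status subresource only
    verbs: ["update"]
```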

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F47z4yi807tmy8to0wiex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F47z4yi807tmy8to0wiex.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above visualization was generated by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ rbac-tool viz --include-subjects="nginx-ingress"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Under the hood, rbac-tool connects to the cluster context pointed to by your kubeconfig, lists the various RBAC-related resources, and visualizes them based on the command-line filters.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example: API access for the &lt;code&gt;system:unauthenticated&lt;/code&gt; group on GKE&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjvzq77yel9pu11kj3buk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjvzq77yel9pu11kj3buk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ rbac-tool viz --include-subjects="system:unauthenticated"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example: GCP cloud-provider ServiceAccount permissions on GKE&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fs80yovnd3bok80hquxdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fs80yovnd3bok80hquxdp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ rbac-tool viz --include-subjects="^cloud-provider" --exclude-namespaces=""&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Kubernetes RBAC is a critical component of your Kubernetes deployment, and definitely something cluster operators and builders must master.&lt;br&gt;
The visualization and filtering capabilities of Alcide’s &lt;a href="https://github.com/alcideio/rbac-tool" rel="noopener noreferrer"&gt;rbac-tool&lt;/a&gt; help unfold and simplify Kubernetes RBAC.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes RBAC | Moving from ‘It's Complicated’ to ‘In a Relationship’ </title>
      <dc:creator>Gadi Naor</dc:creator>
      <pubDate>Wed, 01 Apr 2020 11:09:48 +0000</pubDate>
      <link>https://forem.com/alcide/kubernetes-rbac-moving-from-it-s-complicated-to-in-a-relationship-1bbm</link>
      <guid>https://forem.com/alcide/kubernetes-rbac-moving-from-it-s-complicated-to-in-a-relationship-1bbm</guid>
      <description>&lt;p&gt;Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.&lt;/p&gt;

&lt;p&gt;Permissions are purely additive and there are no “deny” rules.&lt;/p&gt;

&lt;p&gt;A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in. ClusterRole, by contrast, is a non-namespaced resource, and grants access at the cluster level. ClusterRoles have several uses. &lt;br&gt;
You can use a ClusterRole to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define permissions on namespaced resources and be granted within individual namespaces&lt;/li&gt;
&lt;li&gt;Define permissions on namespaced resources and be granted across all namespaces&lt;/li&gt;
&lt;li&gt;Define permissions on cluster-scoped resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole.&lt;/p&gt;

&lt;h3&gt;
  
  
  Default Cluster Roles
&lt;/h3&gt;

&lt;p&gt;While Kubernetes RBAC is a complex topic, one would always want to implement RBAC in the cluster. For this purpose Kubernetes offers out-of-the-box default cluster roles that can be used as a starting point. &lt;br&gt;
These are visible in the output of &lt;code&gt;kubectl get clusterrole&lt;/code&gt;, and four cluster roles you can use right away are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cluster-admin&lt;/li&gt;
&lt;li&gt;admin&lt;/li&gt;
&lt;li&gt;edit&lt;/li&gt;
&lt;li&gt;view&lt;/li&gt;
&lt;/ul&gt;
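For example, granting a user read-only access to a single namespace with the built-in view cluster role could look like this sketch (the user name, binding name, and namespace are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: viewer-binding
  namespace: demo         # read-only access limited to this namespace
subjects:
  - kind: User
    name: jane            # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view              # one of the built-in default cluster roles
  apiGroup: rbac.authorization.k8s.io
```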

&lt;p&gt;With these roles, you can start to define who can interact with your cluster and in what way. It is highly recommended to follow the principle of least privilege, and grant additional privileges as necessary for work to proceed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes RBAC Resource Relationship
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dq-mpCvs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fmmsq383f7jm2h5axarr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dq-mpCvs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fmmsq383f7jm2h5axarr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example: Nginx Ingress Controller RBAC Policy&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k4-aYuJY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w2ojo1gvi1rzhonhaxa5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k4-aYuJY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/w2ojo1gvi1rzhonhaxa5.png" alt="Alt Text"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Explicitly Tuning RBAC Policies - ‘It's Complicated’&lt;br&gt;
Components such as Operators or highly privileged controllers may require cluster-wide ‘Read-Only’ access, yet do not necessarily need to read Secrets, for example. Faced with this, a user may take a shortcut and provision an RBAC policy with excessive permissions.&lt;/p&gt;
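&lt;p&gt;The typical shortcut looks like the following sketch: a wildcard read-only ClusterRole that silently includes Secrets along with everything else (the role name is illustrative):&lt;/p&gt;

```yaml
# The over-broad shortcut: wildcard read access to every resource in every
# API group -- including Secrets, which the component may not actually need.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: operator-read-all   # illustrative name
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
```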

&lt;p&gt;The Kubernetes RBAC additive model does not let us express semantics such as:&lt;br&gt;
deny access to specific resource groups,&lt;br&gt;
while allowing access to all other resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gqod3xaD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wp7wliyen2urz86sz9q4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gqod3xaD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wp7wliyen2urz86sz9q4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we take the explicit set of access rules for all cluster resources and “subtract” from that set the resources we would like to deny access to, we achieve the above semantics.&lt;/p&gt;

&lt;p&gt;Let’s see how we can achieve that:&lt;br&gt;
Run kubectl api-resources to get all of the cluster's installed/supported resources and their respective api-groups&lt;br&gt;
Derive the RBAC policy from the above list&lt;br&gt;
Manually tune the policy, removing any resources and/or api-groups that we wish to deny access to.&lt;/p&gt;
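&lt;p&gt;The subtraction step above can be sketched in a few lines of Python. This is not how rbac-tool is implemented; it is a minimal illustration, and the resource list below is a hand-picked sample rather than real kubectl api-resources output:&lt;/p&gt;

```python
# Sketch of the "subtract" approach: start from a full list of
# (apiGroup, resource) pairs -- in a real cluster these would come from
# `kubectl api-resources` -- remove the pairs we want to deny, and emit
# RBAC-style rules for everything that remains.

# Pairs to deny ("" is the core API group).
DENY = {("", "secrets"), ("", "services"), ("networking.k8s.io", "networkpolicies")}

# Illustrative subset of what `kubectl api-resources` might return.
ALL_RESOURCES = [
    ("", "pods"),
    ("", "services"),
    ("", "secrets"),
    ("", "configmaps"),
    ("apps", "deployments"),
    ("networking.k8s.io", "networkpolicies"),
    ("networking.k8s.io", "ingresses"),
]

def build_rules(resources, deny):
    """Drop denied pairs, then group the allowed resources by API group
    into RBAC-style rule dicts."""
    allowed = [r for r in resources if r not in deny]
    by_group = {}
    for group, resource in allowed:
        by_group.setdefault(group, []).append(resource)
    return [
        {"apiGroups": [group],
         "resources": sorted(res),
         "verbs": ["create", "update", "get", "list"]}
        for group, res in sorted(by_group.items())
    ]

for rule in build_rules(ALL_RESOURCES, DENY):
    print(rule)
```

&lt;p&gt;The output is the rules: section of a Role manifest, with the denied resources already subtracted.&lt;/p&gt;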

&lt;p&gt;While this method works, it is manual and takes significant effort.&lt;br&gt;
An easier way to achieve this is with the Alcide rbac-tool.&lt;/p&gt;

&lt;p&gt;rbac-tool - Simplify RBAC Policy Tuning&lt;br&gt;
The Alcide rbac-tool addresses this exact use case.&lt;br&gt;
Example: generate a Role policy that allows create, update, get, list (read/write) on everything except secrets, services and networkpolicies in the core, apps &amp;amp; networking.k8s.io API groups:&lt;br&gt;
&lt;code&gt;rbac-tool gen --generated-type=Role --deny-resources=secrets.,services.,networkpolicies.networking.k8s.io --allowed-verbs=* --allowed-groups=,extensions,apps,networking.k8s.io&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The generated policy is cluster-specific.&lt;br&gt;
For a Kubernetes KIND cluster running v1.16, the generated policy looks as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ca062_ar--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v2a7s88zndwymfrkg4j4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ca062_ar--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v2a7s88zndwymfrkg4j4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
Kubernetes RBAC helps us, as users, define and regulate API access to the Kubernetes cluster. In cases where users wish to achieve even more granular control, Validating Admission Controllers are the Kubernetes construct for that. The Kubernetes ecosystem, and open-source tools such as Alcide’s rbac-tool, help to unfold and simplify Kubernetes RBAC. Get it here: &lt;a href="https://github.com/alcideio/rbac-tool"&gt;https://github.com/alcideio/rbac-tool&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>rbac</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
