<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tayyab J</title>
    <description>The latest articles on Forem by Tayyab J (@tayyabjamadar).</description>
    <link>https://forem.com/tayyabjamadar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F659590%2F979fe7a2-f374-4da1-9c3e-cf942105af97.png</url>
      <title>Forem: Tayyab J</title>
      <link>https://forem.com/tayyabjamadar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tayyabjamadar"/>
    <language>en</language>
    <item>
      <title>How do you Integrate Emissary Ingress with OPA</title>
      <dc:creator>Tayyab J</dc:creator>
      <pubDate>Thu, 28 Apr 2022 08:24:04 +0000</pubDate>
      <link>https://forem.com/infracloud/how-do-you-integrate-emissary-ingress-with-opa-37p7</link>
      <guid>https://forem.com/infracloud/how-do-you-integrate-emissary-ingress-with-opa-37p7</guid>
      <description>&lt;p&gt;API gateways play a vital role while exposing microservices. They are an additional hop in the network that the incoming request must go through in order to communicate with the services. An API gateway does routing, composition, protocol translation, and user policy enforcement after it receives a request from client and then reverse proxies it to the appropriate underlying API. As the API gateways are capable of doing the above-mentioned tasks, they can be also configured to send the incoming client requests to an external third-party authorization (authz) server. The fate of the incoming request then depends upon the response from this external authz server to the gateway. This is exactly where Open Policy Agent (OPA) comes into the picture.&lt;/p&gt;

&lt;p&gt;There are many open source Kubernetes-native API gateways out there, such as Contour, Kong Gateway, Traefik, and Gloo. In this article, we will explore Emissary Ingress.&lt;/p&gt;

&lt;p&gt;Let's dive in and learn more about &lt;a href="https://github.com/emissary-ingress/emissary" rel="noopener noreferrer"&gt;Emissary Ingress&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Emissary Ingress?
&lt;/h2&gt;

&lt;p&gt;Emissary Ingress, formerly known as the Ambassador API Gateway, is an open source Kubernetes-native API gateway and currently a &lt;a href="https://www.cncf.io/projects/emissary-ingress/" rel="noopener noreferrer"&gt;CNCF Incubation Project&lt;/a&gt;. Like many other Kubernetes gateways, Emissary is built to work with Envoy Proxy. It is deployed with a fully stateless architecture and supports multiple plugins, such as traditional SSO authentication protocols (e.g., OAuth, OpenID Connect), rate limiting, logging, and tracing services. Emissary uses its &lt;a href="https://www.getambassador.io/docs/emissary/latest/topics/running/services/ext_authz/" rel="noopener noreferrer"&gt;ExtAuth protocol&lt;/a&gt; in the &lt;a href="https://www.getambassador.io/docs/emissary/latest/topics/running/services/auth-service/" rel="noopener noreferrer"&gt;AuthService&lt;/a&gt; resource to configure authentication and authorization for incoming requests. &lt;code&gt;ExtAuth&lt;/code&gt; supports two protocols: gRPC and plain HTTP. For the gRPC interface, the external service must implement Envoy's &lt;a href="https://github.com/emissary-ingress/emissary/blob/master/api/envoy/service/auth/v2/external_auth.proto" rel="noopener noreferrer"&gt;external_auth.proto&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  OPA
&lt;/h2&gt;

&lt;p&gt;Open Policy Agent is a well-known general-purpose policy engine that has emerged as a policy enforcer across the stack, be it API gateways, service meshes, Kubernetes, microservices, CI/CD, or IaC. OPA decouples decision-making from policy enforcement: whenever your software needs to make a decision about an incoming request, it queries OPA. &lt;a href="https://github.com/open-policy-agent/opa-envoy-plugin" rel="noopener noreferrer"&gt;OPA-Envoy&lt;/a&gt; extends OPA with a gRPC server that implements the Envoy &lt;a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/ext_authz_filter" rel="noopener noreferrer"&gt;External Authorization API&lt;/a&gt;, making it usable as an external authz server for Emissary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Emissary Ingress with OPA
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6qxlbqfc6jjahyavq0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6qxlbqfc6jjahyavq0v.png" alt="Emissary OPA"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The above figure shows the high-level architecture of the Emissary and OPA integration. When an incoming client request reaches Emissary, Emissary sends an authorization request containing an input JSON document to OPA. OPA evaluates this input against the Rego policies provided to it and responds to Emissary: only if the result JSON from OPA has &lt;code&gt;allow&lt;/code&gt; set to &lt;code&gt;true&lt;/code&gt; is the client request routed onward to the API; otherwise Emissary denies the request, and it never reaches the API. Next, we will install Emissary Ingress and integrate it with OPA for external authorization.&lt;/p&gt;
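&lt;p&gt;The decision step can be sketched in a few lines of Python (a conceptual illustration, not OPA itself; the input document below only mimics the general shape of the Envoy ext_authz attributes that OPA receives, and the example rule allowing only GET requests is arbitrary).&lt;/p&gt;

```python
# Illustrative sketch of OPA's role: Emissary forwards request attributes
# as an input JSON document, and the policy's "allow" rule decides the
# fate of the request.

def evaluate_policy(input_doc):
    """Mimic a Rego rule that allows only GET requests."""
    http = input_doc["attributes"]["request"]["http"]
    return http["method"] == "GET"

# Example input, shaped roughly like Envoy ext_authz request attributes.
input_doc = {
    "attributes": {
        "request": {
            "http": {"method": "GET", "path": "/public"}
        }
    }
}

allowed = evaluate_policy(input_doc)
print(allowed)  # True -- Emissary routes the request onward only in this case
```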

&lt;h3&gt;
  
  
  Getting Started
&lt;/h3&gt;

&lt;p&gt;First, we need a running Minikube cluster. If you don't have Minikube, you can install it from &lt;a href="https://minikube.sigs.k8s.io/docs/start/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Emissary Ingress into the Minikube cluster with Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add the Repo:&lt;/span&gt;
helm repo add datawire https://app.getambassador.io
helm repo update

&lt;span class="c"&gt;# Create Namespace and Install:&lt;/span&gt;
kubectl create namespace emissary &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://app.getambassador.io/yaml/emissary/2.2.2/emissary-crds.yaml
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;90s &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;available deployment emissary-apiext &lt;span class="nt"&gt;-n&lt;/span&gt; emissary-system
helm &lt;span class="nb"&gt;install &lt;/span&gt;emissary-ingress &lt;span class="nt"&gt;--namespace&lt;/span&gt; emissary datawire/emissary-ingress &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; emissary &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt; &lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;90s deploy &lt;span class="nt"&gt;-lapp&lt;/span&gt;.kubernetes.io/instance&lt;span class="o"&gt;=&lt;/span&gt;emissary-ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, see the &lt;a href="https://www.getambassador.io/docs/emissary/latest/tutorials/getting-started/" rel="noopener noreferrer"&gt;Emissary Ingress Documentation&lt;/a&gt; to install it through plain Kubernetes YAML manifests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring the routing for demo application
&lt;/h3&gt;

&lt;p&gt;Different gateways have their own set of configurations for exposing a service. In Emissary, routing is configured through Mapping and Listener resources. &lt;/p&gt;

&lt;p&gt;A Mapping resource tells Emissary which service to route an incoming request to. It is highly configurable, much like &lt;code&gt;Ingress&lt;/code&gt;. You can learn more about it on the &lt;a href="https://www.getambassador.io/docs/emissary/latest/topics/using/intro-mappings/" rel="noopener noreferrer"&gt;Introduction to the Mapping resource&lt;/a&gt; page. &lt;br&gt;
We will create a simple &lt;code&gt;Mapping&lt;/code&gt; resource that routes all incoming requests to our demo application's service, &lt;code&gt;demo-svc&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: demo-app-mapping  
spec:
  hostname: "*"
  prefix: /
  service: demo-svc
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Listener resource tells Emissary where on the network to listen for incoming requests. Here we will create a Listener that listens for the &lt;code&gt;HTTP&lt;/code&gt; protocol on port &lt;code&gt;8080&lt;/code&gt; and binds to Hosts in &lt;code&gt;ALL&lt;/code&gt; namespaces. For detailed info, visit the &lt;a href="https://www.getambassador.io/docs/emissary/latest/topics/running/listener/" rel="noopener noreferrer"&gt;Listener Docs&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: demo-app-listener-8080
  namespace: emissary
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install the Demo Application
&lt;/h3&gt;

&lt;p&gt;Install a simple echo server as a demo application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: http-svc
        image: gcr.io/google_containers/echoserver:1.8
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
  labels:
    app: demo-app
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: demo-app
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expose the Emissary service so we can communicate with the demo app at different paths.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube service emissary-ingress &lt;span class="nt"&gt;-n&lt;/span&gt; emissary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: this method of exposing the service may not work for macOS users; they can instead run a busybox pod and configure it to hit the local Emissary endpoint.&lt;/p&gt;

&lt;p&gt;Copy the private URL with target port 80. The URL is the Minikube IP, typically &lt;code&gt;192.168.49.2&lt;/code&gt;, followed by a NodePort, for example &lt;code&gt;http://192.168.49.2:30329&lt;/code&gt;. Export the NodePort value into the &lt;code&gt;$NODEPORT&lt;/code&gt; environment variable and curl the following paths:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://192.168.49.2:&lt;span class="nv"&gt;$NODEPORT&lt;/span&gt;/public
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://192.168.49.2:&lt;span class="nv"&gt;$NODEPORT&lt;/span&gt;/secured
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OPA has not yet been added to the setup, so the above curl requests reach the API directly without any policy enforcement. &lt;/p&gt;

&lt;h3&gt;
  
  
  How to Install and Configure OPA?
&lt;/h3&gt;

&lt;p&gt;OPA will read the policies fed to it via a ConfigMap. Create the following ConfigMap, which contains a policy that allows incoming requests only when they use the &lt;code&gt;GET&lt;/code&gt; method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -n emissary -f  -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-policy
data: 
  policy.rego: |-
    package envoy.authz

    default allow = false

    allow {
       input.attributes.request.http.method == "GET" 
    }
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OPA can be configured as an external authorization server either as an independent Deployment or as a sidecar to the &lt;code&gt;emissary-ingress&lt;/code&gt; pods. Here we will add it as a sidecar. Save the following YAML as &lt;code&gt;opa-patch.yaml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;opa&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openpolicyagent/opa:latest-envoy&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9191&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;run"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--server"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--addr=0.0.0.0:8181"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--set=plugins.envoy_ext_authz_grpc.addr=0.0.0.0:9191"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--set=plugins.envoy_ext_authz_grpc.query=data.envoy.authz.allow"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--set=decision_logs.console=true"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--ignore=.*"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/policy/policy.rego"&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/policy&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-policy&lt;/span&gt;
            &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-policy&lt;/span&gt;
        &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-policy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Patch the emissary-ingress deployment and wait for all the &lt;code&gt;emissary-ingress&lt;/code&gt; pods to restart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch deployment emissary-ingress &lt;span class="nt"&gt;-n&lt;/span&gt; emissary &lt;span class="nt"&gt;--patch-file&lt;/span&gt; opa-patch.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until all the emissary-ingress pods reach the &lt;code&gt;Running&lt;/code&gt; state with the OPA sidecar.&lt;/p&gt;

&lt;p&gt;Create the following &lt;code&gt;AuthService&lt;/code&gt;. AuthService is the resource that configures Emissary to communicate with an external service for authentication (authn) and authorization (authz) of incoming requests. We configure it to communicate with OPA on localhost, since OPA is deployed as a sidecar.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -f -
apiVersion: getambassador.io/v3alpha1
kind: AuthService
metadata:
  name: opa-ext-authservice
  namespace: emissary
  labels:
    product: aes
    app: opa-ext-auth
spec:
  proto: grpc
  auth_service: localhost:9191
  timeout_ms: 5000
  tls: "false"
  allow_request_body: true
  protocol_version: v2
  include_body:
    max_bytes: 8192
    allow_partial: true
  status_on_error:
    code: 504
  failure_mode_allow: false
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try the curl requests again. Since the policy accepts requests made with the &lt;code&gt;GET&lt;/code&gt; method and places no restriction on the path, both requests will get a 200 OK response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-i&lt;/span&gt; http://192.168.49.2:&lt;span class="nv"&gt;$NODEPORT&lt;/span&gt;/public
curl &lt;span class="nt"&gt;-i&lt;/span&gt; http://192.168.49.2:&lt;span class="nv"&gt;$NODEPORT&lt;/span&gt;/private
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's edit the policy to accept incoming requests only at the path &lt;code&gt;/public&lt;/code&gt;; requests to any other path will be denied.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | kubectl apply -n emissary -f  -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-policy
data: 
  policy.rego: |-
    package envoy.authz

    default allow = false

    allow {
       input.attributes.request.http.method == "GET"
       input.attributes.request.http.path == "/public" 
    }
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
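&lt;p&gt;For illustration, here is the updated rule's behavior mirrored in plain Python (a sketch, not how OPA evaluates Rego; note that the two expressions in a Rego rule body must both hold, i.e., they are AND-ed):&lt;/p&gt;

```python
# Hypothetical mirror of the updated Rego policy: allow only GET /public.

def allow(input_doc):
    http = input_doc["attributes"]["request"]["http"]
    return http["method"] == "GET" and http["path"] == "/public"

def request(method, path):
    """Build a minimal ext_authz-shaped input document."""
    return {"attributes": {"request": {"http": {"method": method, "path": path}}}}

print(allow(request("GET", "/public")))   # True  -- 200 OK
print(allow(request("GET", "/private")))  # False -- denied by OPA with 403
```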



&lt;p&gt;Now restart the emissary-ingress deployment for the policy changes to take effect.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout restart deployment emissary-ingress &lt;span class="nt"&gt;-n&lt;/span&gt; emissary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until all the emissary-ingress pods return to the &lt;code&gt;Running&lt;/code&gt; state after the restart.&lt;/p&gt;

&lt;p&gt;Now curl the path &lt;code&gt;/public&lt;/code&gt;; the request will be accepted. A request to &lt;code&gt;/private&lt;/code&gt;, however, will be denied by OPA with a 403 response and will never reach the demo API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-i&lt;/span&gt; http://192.168.49.2:&lt;span class="nv"&gt;$NODEPORT&lt;/span&gt;/public
curl &lt;span class="nt"&gt;-i&lt;/span&gt; http://192.168.49.2:&lt;span class="nv"&gt;$NODEPORT&lt;/span&gt;/private
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Decision-making for incoming client requests to an exposed API can be decoupled and delegated to OPA as an external authorization server in an Emissary Ingress setup. OPA can be added as a plug-and-play policy enforcer to Emissary and to any other gateway supporting the Envoy &lt;a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/ext_authz_filter" rel="noopener noreferrer"&gt;External Authorization API&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;We hope you found this post informative and engaging. Connect with us on &lt;a href="https://twitter.com/infracloudio" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; and &lt;a href="https://www.linkedin.com/company/infracloudio/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; to start a conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  References and further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.cncf.io/blog/2020/03/06/the-difference-between-api-gateways-and-service-mesh/" rel="noopener noreferrer"&gt;API Gateways and Service Mesh&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.nginx.com/learn/api-gateway/" rel="noopener noreferrer"&gt;API Gateways- Nginx&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.openpolicyagent.org/docs/latest/" rel="noopener noreferrer"&gt;OPA&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/open-policy-agent/opa-envoy-plugin" rel="noopener noreferrer"&gt;OPA Envoy-plugin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/emissary-ingress/emissary" rel="noopener noreferrer"&gt;Emissary Ingress Github&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.getambassador.io/docs/emissary/latest/topics/running/services/ext_authz/" rel="noopener noreferrer"&gt;Emissary ExtAuth protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.getambassador.io/docs/emissary/latest/topics/running/services/auth-service/" rel="noopener noreferrer"&gt;AuthService&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>opa</category>
      <category>emissaryingres</category>
      <category>security</category>
      <category>api</category>
    </item>
    <item>
      <title>Prometheus HA with Thanos Sidecar Or Receiver?</title>
      <dc:creator>Tayyab J</dc:creator>
      <pubDate>Thu, 01 Jul 2021 10:58:37 +0000</pubDate>
      <link>https://forem.com/tayyabjamadar/prometheus-ha-with-thanos-sidecar-or-receiver-5dma</link>
      <guid>https://forem.com/tayyabjamadar/prometheus-ha-with-thanos-sidecar-or-receiver-5dma</guid>
      <description>&lt;p&gt;Prometheus has been the flag bearer for monitoring the systems for a long time now. It has proved itself as a go-to solution for monitoring and alerting in Kubernetes systems. Though Prometheus does have some general instructions to achieve high availability within itself, it comes with its own limitations in data retention, historic data retrieval, and multi-tenancy. And this is where Thanos comes into the picture.  In this blog post, we will go through the two different approaches for integrating Thanos with Prometheus in Kubernetes environments and will explore why one should go with a specific approach. Let's get started!&lt;/p&gt;

&lt;p&gt;Along with Thanos, another open source project named Cortex is also a popular alternative solution. An interesting fact: initially, Thanos supported only the sidecar installation, while Cortex preferred the push-based or remote-write approach. But back in 2019, the two projects collaborated, and after learning from and influencing each other (sharing is caring), a Receiver component was added to Thanos, and the Cortex blocks storage was built on top of a few core Thanos components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thanos in general
&lt;/h2&gt;

&lt;p&gt;Thanos supports its integration with Prometheus in two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Sidecar&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Receiver&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both share the following common components in the Thanos stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Querier&lt;/li&gt;
&lt;li&gt;Store&lt;/li&gt;
&lt;li&gt;Compactor&lt;/li&gt;
&lt;li&gt;Ruler&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sidecar and Receiver are different components in the Thanos stack, each with its own way of functioning, but in the end they serve the same purpose. Before comparing the approaches, let's briefly look at how Sidecar and Receiver each work.&lt;/p&gt;

&lt;p&gt;Let’s start with the sidecar.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Thanos Sidecar approach work?
&lt;/h2&gt;

&lt;p&gt;In the Sidecar approach, the Thanos Sidecar component, as the name implies, runs as a &lt;a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar#solution"&gt;sidecar&lt;/a&gt; in each Prometheus server pod, be it vanilla Prometheus or Prometheus managed by the Prometheus Operator. This component is responsible for delivering data from the Prometheus TSDB to object storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3TegmBU4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/75f34im0t1s0lfy4nn2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3TegmBU4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/75f34im0t1s0lfy4nn2c.png" alt="Thanos Sidecar architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the layout above, for high availability, more than one Prometheus instance is provisioned, each with a Sidecar component. The Prometheus instances scrape metrics from the targets independently. By default, the scraped TSDB blocks are stored in the storage provisioned for Prometheus (Persistent Volumes). &lt;/p&gt;

&lt;p&gt;The Sidecar implements Thanos' Store API on top of Prometheus' remote-read API, making it possible to query the time series data in the Prometheus servers from a centralized component named Thanos Querier. Furthermore, the sidecar can be configured to upload TSDB blocks to object storage; blocks are created, and hence uploaded, every two hours. The data stored in the bucket can be queried using the Thanos Store component, which implements the same Store API and must be discovered by the Thanos Querier. &lt;/p&gt;

&lt;p&gt;For detailed information on the sidecar, please refer to our other blog post, &lt;a href="https://www.infracloud.io/blogs/thanos-ha-scalable-prometheus/"&gt;Making Prometheus Highly Available (HA) &amp;amp; Scalable with Thanos&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Thanos Receiver approach work?
&lt;/h2&gt;

&lt;p&gt;The Receiver, unlike the Sidecar, is provisioned as an individual StatefulSet. In this approach, all the other components of the Thanos stack exist and function the same way as in the sidecar approach, but the Receiver replaces the Sidecar component, and the way TSDB blocks are queried and transferred to object storage changes drastically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HNjYekib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dk5rsqc1s59sjyopmv71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HNjYekib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dk5rsqc1s59sjyopmv71.png" alt="Thanos Receiver architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Prometheus Remote Write API is put to use: the Prometheus instances are configured to continuously remote-write to the Receiver. The Receiver is configured to populate the object storage bucket and also has its own retention period. The Querier is configured to query data both on the Receiver and, via the Store, in the storage bucket.&lt;/p&gt;
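&lt;p&gt;On the Prometheus side, this boils down to a &lt;code&gt;remote_write&lt;/code&gt; entry pointing at the Receiver's remote-write endpoint (a sketch; the service name, namespace, and port here are assumptions that depend on your deployment).&lt;/p&gt;

```yaml
# Prometheus configuration fragment (sketch): continuously remote-write
# samples to the Thanos Receiver. Endpoint details are deployment-specific.
remote_write:
  - url: http://thanos-receive.monitoring.svc.cluster.local:19291/api/v1/receive
```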

&lt;p&gt;Integrating the Receiver is a bit trickier than the Sidecar; for more details on setting it up, take a look at the blog post &lt;a href="https://www.infracloud.io/blogs/multi-tenancy-monitoring-thanos-receiver/"&gt;Achieve Multi-tenancy in Monitoring with Prometheus &amp;amp; Thanos Receiver&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Let's compare Sidecar and Receiver
&lt;/h2&gt;

&lt;p&gt;Let's do a 1:1 comparison of Thanos Sidecar and Receiver for achieving Prometheus HA, comparing the two on aspects like high availability, integration with Prometheus, storage, and data acquisition.&lt;/p&gt;

&lt;h3&gt;
  
  
  High Availability
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Sidecar
&lt;/h4&gt;

&lt;p&gt;High availability (HA) is achieved by running a sidecar container alongside each Prometheus replica. Each instance scrapes the targets independently, and its sidecar uploads the blocks to object storage. Prometheus writes a TSDB block every two hours, so if one of two Prometheus replicas goes down, its latest in-progress block is lost. This would normally show up as a void in the graph for that specific Prometheus instance, but since there are two replicas, the void is filled with data from the other instance's block. Thanos Querier takes care of filling these gaps and of deduplication.&lt;/p&gt;
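&lt;p&gt;The gap-filling idea can be sketched as follows (a conceptual illustration, not Thanos' actual deduplication algorithm): the same series is scraped by two replicas, so a void in one replica's samples is covered by the other's.&lt;/p&gt;

```python
# Two replicas scrape the same target; replica "b" went down mid-window,
# so its latest samples (t=300) are missing.
replica_a = {100: 1.0, 200: 2.0, 300: 3.0}
replica_b = {100: 1.0, 200: 2.0}

def merge(a, b):
    """Combine two replicas' samples, preferring a's where both exist."""
    merged = dict(b)
    merged.update(a)
    return dict(sorted(merged.items()))

series = merge(replica_a, replica_b)
print(series)  # {100: 1.0, 200: 2.0, 300: 3.0} -- no void in the result
```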

&lt;h4&gt;
  
  
  Receiver
&lt;/h4&gt;

&lt;p&gt;As with the Sidecar, multiple Prometheus instances are deployed to scrape the same targets, and they are configured to remote-write to the Receiver StatefulSet. Here, not only the Prometheus replicas but also the Receiver replicas play a vital role in HA. Apart from that, the Receiver also supports multi-tenancy. Consider setting a replication factor of 2: this ensures that incoming data is replicated between two Receiver pods. The failure of a single Prometheus instance is covered by the other, since both write remotely to the Receiver, and the failure of a single Receiver pod is compensated for by the other, thanks to the replication factor of two.&lt;/p&gt;

&lt;h3&gt;Integration with Prometheus&lt;/h3&gt;

&lt;h4&gt;Sidecar&lt;/h4&gt;

&lt;p&gt;A simple addition of a sidecar container to the Prometheus pod is all that needs to be done, and all the other Thanos components work along with it. The sidecar optionally uploads a TSDB block to object storage every two hours. Typically, the sidecars are exposed as a service to the Thanos Querier by simply adding their endpoints to the Querier configuration, while data stored in buckets is exposed via the Store component. Thus, integrating the Sidecar is quite easy and suitable for most scenarios.&lt;/p&gt;
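&lt;p&gt;As a rough sketch (the container name, image tag, volume names, and paths are assumptions for illustration), the addition boils down to one extra container in the Prometheus pod spec that shares the TSDB volume:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Extra container in the Prometheus pod spec
- name: thanos-sidecar
  image: quay.io/thanos/thanos:v0.25.2                  # illustrative version
  args:
    - sidecar
    - --tsdb.path=/prometheus
    - --prometheus.url=http://localhost:9090
    - --objstore.config-file=/etc/thanos/objstore.yml   # optional: enables block upload
    - --grpc-address=0.0.0.0:10901                      # endpoint the Querier connects to
  volumeMounts:
    - name: prometheus-data
      mountPath: /prometheus
&lt;/code&gt;&lt;/pre&gt;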

&lt;h4&gt;Receiver&lt;/h4&gt;

&lt;p&gt;This needs configuration changes in the Prometheus instances to remote write their TSDB data to the Receivers, along with deploying an additional Receiver StatefulSet. The Receiver retains the TSDB blocks on local storage for the duration set by the &lt;code&gt;--tsdb.retention&lt;/code&gt; flag. Achieving load balancing and data replication requires running multiple Thanos Receiver instances as part of a hashring, and the hashring must be configured so that there are dedicated Receiver endpoints matching the tenant header of the incoming HTTP requests. Integrating the Receiver is therefore a comparatively complex and tedious task.&lt;/p&gt;
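&lt;p&gt;The Prometheus-side change is a &lt;code&gt;remote_write&lt;/code&gt; section pointing at the Receiver; a hedged sketch (the service name, port, and tenant value are illustrative assumptions) could look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# prometheus.yml
remote_write:
  - url: http://thanos-receive.monitoring.svc:19291/api/v1/receive
    headers:
      THANOS-TENANT: team-a   # tenant header matched against the hashring config
&lt;/code&gt;&lt;/pre&gt;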

&lt;h3&gt;Storage&lt;/h3&gt;

&lt;h4&gt;Sidecar&lt;/h4&gt;

&lt;p&gt;The Sidecar reads from Prometheus' local storage, so no additional local storage (PVs) is required for the TSDB blocks. It also allows considerably reducing the retention time of TSDB blocks in Prometheus' local storage, since it uploads them every two hours while historic data is made durable and queryable via object storage. By default, Prometheus stores data for 15 days; monitoring a complete, heavy production cluster would therefore require a considerable amount of local storage, and local storage is comparatively more expensive than object storage (EBS volumes cost more than S3 buckets).&lt;/p&gt;

&lt;p&gt;Since the Sidecar exports Prometheus metrics to buckets every two hours, it brings Prometheus closer to being stateless. However, the Thanos &lt;a href="https://thanos.io/tip/components/sidecar.md/#sidecar"&gt;docs&lt;/a&gt; recommend that Prometheus retention not be lower than three times the minimum block duration, which works out to 6 hours.&lt;/p&gt;
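&lt;p&gt;Following that recommendation, the relevant Prometheus flags for a Sidecar setup would look roughly like this (a sketch based on the Thanos sidecar docs, not on this post):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;--storage.tsdb.retention.time=6h      # at least 3x the 2h min block duration
--storage.tsdb.min-block-duration=2h
--storage.tsdb.max-block-duration=2h  # equal min/max disables local compaction,
                                      # so the Sidecar uploads clean 2h blocks
&lt;/code&gt;&lt;/pre&gt;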

&lt;h4&gt;Receiver&lt;/h4&gt;

&lt;p&gt;The Receiver, being a StatefulSet, needs to be provisioned with PVs. The amount of local storage required depends on the &lt;code&gt;--receive.replication-factor&lt;/code&gt; and &lt;code&gt;--tsdb.retention&lt;/code&gt; flags and on the number of pod replicas in the StatefulSet. The higher the TSDB retention, the more local storage is utilized. Since data is continuously written to the Receiver, Prometheus retention can be kept at the minimum value. Overall, this setup needs more local storage than the Sidecar.&lt;/p&gt;

&lt;h3&gt;Data acquisition&lt;/h3&gt;

&lt;h4&gt;Sidecar&lt;/h4&gt;

&lt;p&gt;Here, the TSDB blocks are read from the local storage of the Prometheus instance and either served to the Querier for querying or exported to object storage intermittently. The Sidecar works on a pull-based model (the Thanos Querier pulls series out of Prometheus at query time), and the data is not constantly written to any other instance.&lt;/p&gt;

&lt;h4&gt;Receiver&lt;/h4&gt;

&lt;p&gt;The Receiver works on a push-based model: TSDB data is continuously written remotely by the Prometheus instances themselves to the Receiver, bringing Prometheus as close to stateless as it can be. The data is then further uploaded from the Receiver to object storage. Pushing metrics comes with its own pros and cons, which are discussed &lt;a href="https://docs.google.com/document/d/1H47v7WfyKkSLMrR8_iku6u9VB73WrVzBHb2SB6dL9_g/edit#heading=h.2v27snv0lsur"&gt;here&lt;/a&gt;, and is recommended mostly for air-gapped or egress-only environments.&lt;/p&gt;

&lt;h2&gt;Conclusion - Sidecar or Receiver for Prometheus HA?&lt;/h2&gt;

&lt;p&gt;Selecting an approach depends entirely on the environment in which Prometheus HA and multi-tenancy are to be achieved. When Prometheus high availability (HA) needs to be achieved for a single cluster, or when using a Prometheus Operator for specific application monitoring, the Sidecar is a good option due to its ease of operation and lightweight integration. The Sidecar can also be used for multi-tenancy via the layered Thanos Querier approach.&lt;/p&gt;

&lt;p&gt;Whereas when a more centralized view of multiple tenants is required, or in egress-only environments, one can go with the Receiver after considering the limitations of pushing metrics. Achieving a global view of a single tenant via the Receiver is not recommended. When trying to achieve a global view of multiple tenants with different environment limitations, one can go with a hybrid approach that uses both Sidecar and Receiver.&lt;/p&gt;

&lt;p&gt;We hope you found this post informative and engaging. Connect with us over &lt;a href="https://twitter.com/infracloudio"&gt;Twitter&lt;/a&gt; and &lt;a href="https://www.linkedin.com/company/infracloudio/"&gt;LinkedIn&lt;/a&gt; and start a conversation.&lt;/p&gt;

&lt;h2&gt;References and further reading&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://thanos.io/tip/components/receive.md/"&gt;https://thanos.io/tip/components/receive.md/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://thanos.io/tip/components/sidecar.md/"&gt;https://thanos.io/tip/components/sidecar.md/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://thanos.io/tip/operating/multi-tenancy.md/"&gt;https://thanos.io/tip/operating/multi-tenancy.md/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://grafana.com/blog/2020/07/16/how-the-cortex-and-thanos-projects-collaborate-to-make-scaling-prometheus-better-for-all/"&gt;https://grafana.com/blog/2020/07/16/how-the-cortex-and-thanos-projects-collaborate-to-make-scaling-prometheus-better-for-all/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write"&gt;https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>observability</category>
      <category>prometheus</category>
      <category>thanos</category>
      <category>sidecar</category>
    </item>
  </channel>
</rss>
