<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vivekanand Rapaka</title>
    <description>The latest articles on Forem by Vivekanand Rapaka (@vivekanandrapaka).</description>
    <link>https://forem.com/vivekanandrapaka</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F489929%2Fe1496f4f-2af3-4156-ba3d-7ea868f1d663.jpg</url>
      <title>Forem: Vivekanand Rapaka</title>
      <link>https://forem.com/vivekanandrapaka</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vivekanandrapaka"/>
    <language>en</language>
    <item>
      <title>Istio Ingress gateway vs Istio Gateway vs Kubernetes Ingress</title>
      <dc:creator>Vivekanand Rapaka</dc:creator>
      <pubDate>Thu, 29 Dec 2022 23:15:30 +0000</pubDate>
      <link>https://forem.com/vivekanandrapaka/istio-ingress-gateway-vs-istio-gateway-vs-kubernetes-ingress-5hgg</link>
      <guid>https://forem.com/vivekanandrapaka/istio-ingress-gateway-vs-istio-gateway-vs-kubernetes-ingress-5hgg</guid>
      <description>&lt;h3&gt;
  
  
  Objective
&lt;/h3&gt;

&lt;p&gt;Network traffic management is one of the key areas of any kubernetes setup, and it's important to understand how the different components of your cluster behave when handling incoming traffic.&lt;/p&gt;

&lt;p&gt;The main objective of this post is to discuss the components of the Nginx ingress controller and the Istio service mesh, the main differences between them, and the following topics: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Different types of services used in a kubernetes cluster&lt;/li&gt;
&lt;li&gt;What is an ingress controller?&lt;/li&gt;
&lt;li&gt;What is an ingress resource?&lt;/li&gt;
&lt;li&gt;What is an istio service mesh?&lt;/li&gt;
&lt;li&gt;Traffic flow in Nginx Ingress Controller vs Istio service mesh.&lt;/li&gt;
&lt;li&gt;When to use Istio Service Mesh vs Nginx Ingress controller?&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Services in kubernetes - A quick recap
&lt;/h3&gt;

&lt;p&gt;Before diving into the Nginx ingress controller and the Istio service mesh, it's important to understand the concept of services in a native kubernetes setup.&lt;/p&gt;

&lt;p&gt;If you have worked with kubernetes or learned its basics, you will be familiar with the object type called "service".&lt;/p&gt;

&lt;p&gt;For the workloads hosted on your pods to accept traffic, you need some kind of load balancer. All the pods we deploy are ephemeral: when a pod is terminated or killed, it is replaced by another pod with a different IP address, so we cannot communicate with a pod directly. Instead, we use a service object. &lt;/p&gt;

&lt;p&gt;A service is a logical abstraction that exposes the deployed pods hosting your application. Below are the most commonly used service types.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;NodePort&lt;/li&gt;
&lt;li&gt;ClusterIP&lt;/li&gt;
&lt;li&gt;LoadBalancer&lt;/li&gt;
&lt;li&gt;ExternalName&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Depending on the type of service you choose, traffic will be routed accordingly. I'll briefly touch upon what each of these services does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NodePort&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you use this kind of service object, kubernetes allocates a port in the range 30000-32767 on each of the nodes, and you can access the backend pods using a node's IP address followed by that port, as follows: &lt;br&gt;
&lt;code&gt;http://192.168.126.8:30007&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Below is a YAML manifest for a NodePort service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have multiple nodes, each node will have its own IP, and you would need a node's IP address and the port number to reach your workload. This is fine for testing and local development, but not ideal for real-world scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ClusterIP&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;This is one of the most common service types, used to expose your workloads within the cluster. Unlike a NodePort service, you choose the port the service listens on. When you create a ClusterIP service, kubernetes allocates it a virtual IP on the specified port, and any workload in the cluster can reach it through that IP or through the DNS name assigned to the service. If you do not specify the service type (ClusterIP/NodePort/LoadBalancer) when creating a service, kubernetes creates a service of type ClusterIP by default.&lt;/p&gt;

&lt;p&gt;Below is a YAML snippet from kubernetes.io for a service of type ClusterIP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This type of service does not expose your workloads outside the kubernetes cluster, which makes it a good fit for workloads like backend APIs, databases, and batch processing jobs. &lt;br&gt;
Typically, services that do not need to be exposed to the outside world are created as ClusterIP.&lt;/p&gt;
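&lt;p&gt;Inside the cluster, other workloads reach a ClusterIP service through the DNS name kubernetes assigns to it. As a quick sketch (assuming the 'my-service' example above was created in the 'default' namespace):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Short name, resolvable from pods in the same namespace
http://my-service:80

# Fully qualified name, resolvable from any namespace
http://my-service.default.svc.cluster.local:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;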

&lt;p&gt;&lt;strong&gt;Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is one of the most commonly used service types for exposing workloads to the outside world on cloud platforms. When you create a LoadBalancer service, kubernetes provisions a load balancer with a public IP address on the cloud provider hosting your managed cluster, and that IP is reachable from outside the cluster. It is commonly used to expose front-end services. You can also specify the IP address you would like to assign to the service.&lt;/p&gt;

&lt;p&gt;Below is a YAML manifest for a LoadBalancer service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ExternalName&lt;/strong&gt;&lt;br&gt;
A service of type ExternalName points to a DNS name instead of selecting pods with a pod selector, unlike the other service types described above.&lt;/p&gt;

&lt;p&gt;Below is a YAML manifest that shows how to define an ExternalName service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To learn more about these service types in detail, follow this &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is an Ingress Controller and why do we need one?
&lt;/h3&gt;

&lt;p&gt;Now that we have covered the basics of kubernetes services, let's move on to the Nginx ingress controller.&lt;/p&gt;

&lt;p&gt;One of the service types discussed above is LoadBalancer. While it can expose your workloads, it works well only for a handful of them. As your microservices grow, exposing each one externally requires a separate LoadBalancer service, and each of those provisions an additional load balancer and public IP on your cloud provider. You would end up not only managing all of them, but also paying for each public IP the LoadBalancer services provision.&lt;/p&gt;

&lt;p&gt;This is where an ingress controller comes into the picture. &lt;/p&gt;

&lt;p&gt;An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination capabilities for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. &lt;/p&gt;

&lt;p&gt;When you use an ingress controller such as the Nginx ingress controller, you don't need multiple LoadBalancer services to expose your workloads. Installing the Nginx ingress controller creates a single LoadBalancer service, which serves as the inbound IP for all your workloads; the workloads themselves are exposed through ClusterIP services. But how does a single entry point route traffic to multiple backend services? That is achieved with an ingress resource: a kubernetes object that defines how incoming traffic is forwarded to the appropriate backend service, which in turn directs it to the pods.&lt;/p&gt;

&lt;p&gt;I'm going to deploy a sample hello-world application on an AKS cluster, install the Nginx ingress controller, and configure an ingress resource to route the traffic. While I'm not going to cover every detail of this setup, I followed this &lt;a href="https://docs.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli" rel="noopener noreferrer"&gt;link&lt;/a&gt; to install the ingress controller on AKS, and it's pretty straightforward.&lt;/p&gt;

&lt;p&gt;I have created an AKS cluster and deployed a sample application on it in a new namespace called 'ingress-namespace'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqukq36tn4uij4n5i7nfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqukq36tn4uij4n5i7nfz.png" alt="Image description" width="728" height="46"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then installed the Nginx ingress controller, which created a few kubernetes objects in the 'ingress-basic' namespace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg9w084ydbls6uemmjlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdg9w084ydbls6uemmjlk.png" alt="Nginx ingress service" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of them is a LoadBalancer service that accepts the incoming traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an Ingress Resource?
&lt;/h2&gt;

&lt;p&gt;An ingress resource is the resource type that defines the routing to your backend services. There are two routing methods available in ingress: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Host-based routing&lt;/li&gt;
&lt;li&gt;Path-based routing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Using an ingress resource you can map the incoming requests to respective backend services.&lt;/p&gt;

&lt;p&gt;Below is an ingress resource YAML file that shows routing rules based on host name. There are two hosts defined in the resource. If the incoming request is &lt;a href="https://foo.bar.com/bar" rel="noopener noreferrer"&gt;https://foo.bar.com/bar&lt;/a&gt;, where the host name is "foo.bar.com" and the path prefix is "/bar", it goes to "service1"; requests for "*.foo.com" with the prefix "/foo" go to "service2". This type of routing is called host-based routing.&lt;/p&gt;

&lt;p&gt;code snippet credits: Kubernetes.io&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "*.foo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/foo"
        backend:
          service:
            name: service2
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is an ingress resource YAML file that shows how routing happens for incoming requests whose path starts with the prefix '/testpath'. That means an incoming request for &lt;a href="https://myexamplesite.com/testpath" rel="noopener noreferrer"&gt;https://myexamplesite.com/testpath&lt;/a&gt; is evaluated against the rule below and sent to the service called "test" on port 80. This is path-based routing.&lt;/p&gt;

&lt;p&gt;code snippet credits: Kubernetes.io&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Traffic flow when you use Ingress Controller and Ingress resource
&lt;/h2&gt;

&lt;p&gt;Here is a picture of how incoming traffic flows when you use ingress.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgblyktp3smny96wzpnz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgblyktp3smny96wzpnz5.png" alt="Image description" width="683" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image credits: kubernetes.io&lt;/p&gt;

&lt;h2&gt;
  
  
  Traffic flow explained:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;A request originating from the client reaches the ingress-managed load balancer.&lt;/li&gt;
&lt;li&gt;The request is evaluated against the routing rules defined in the ingress resource.&lt;/li&gt;
&lt;li&gt;The request is then sent to the matching service.&lt;/li&gt;
&lt;li&gt;Finally, the request is sent to the actual backend pod.&lt;/li&gt;
&lt;/ol&gt;
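<p>The flow above can be sketched with a minimal pair of objects: a ClusterIP service in front of the application pods, and an ingress rule that routes external requests to it (the names here are hypothetical):<br>
</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  # No type specified, so this defaults to ClusterIP:
  # reachable only from inside the cluster
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello-world-svc
            port:
              number: 80
</code></pre>

</div>

<p>Requests arriving at the ingress controller's load balancer with the '/hello' prefix are forwarded to 'hello-world-svc', which in turn load balances them across the matching pods.</p>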

&lt;p&gt;Now that we have looked at ingress and its options, let's go over a few Istio concepts and how they differ from a traditional ingress controller.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Istio?
&lt;/h2&gt;

&lt;p&gt;Istio is one of the most widely used service mesh tools, providing capabilities like observability, traffic management, and security for microservice workloads hosted on your kubernetes cluster. For more information on Istio, visit &lt;a href="https://istio.io/latest/about/service-mesh/" rel="noopener noreferrer"&gt;https://istio.io/latest/about/service-mesh/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an Istio IngressGateway?
&lt;/h2&gt;

&lt;p&gt;The Istio ingress gateway is the component that operates at the edge of the service mesh and serves as the traffic controller for incoming requests. Interestingly, it is also installed as a 'service' object with a few pods running behind it: the logic for handling traffic lives in the pods backing the ingress gateway, and Istio uses Envoy proxy images to run these pods. In that sense it is similar to a plain Nginx ingress controller. The ingress gateway pod is configured by a Gateway and a VirtualService.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an Istio Gateway? And how is it different from Ingress Controller?
&lt;/h2&gt;

&lt;p&gt;The Istio Gateway is the component analogous to an ingress resource. Just as an ingress resource configures the ingress controller, an Istio Gateway configures the Istio ingress gateway described in the section above. Using this component, we can configure which hosts the gateway accepts traffic for and set up TLS certificates for incoming requests.&lt;/p&gt;

&lt;p&gt;Below is a YAML snippet of the Istio Gateway component. In the selector, it uses the label istio: ingressgateway to bind to the Istio ingress gateway. It also has a 'servers' section, which holds the configuration for the port number and the hosts this gateway accepts traffic on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What is a Virtual Service?
&lt;/h2&gt;

&lt;p&gt;A virtual service is used to configure routing to the backend services. Typically, we configure one virtual service per application, covering its backend services.&lt;/p&gt;

&lt;p&gt;Below is a snippet of the VirtualService component, showing how it is configured to route traffic to a backend 'service' based on the incoming host and URI prefix.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - name: "reviews-v2-routes"
    match:
    - uri:
        prefix: "/wpcatalog"
    - uri:
        prefix: "/consumercatalog"
    rewrite:
      uri: "/newcatalog"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v2
  - name: "reviews-v1-route"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Traffic flow when you use Istio Ingress Gateway with Istio gateway and Virtual Service
&lt;/h2&gt;

&lt;p&gt;The picture below shows how traffic flows in Istio and how the services are configured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0lphd5juqwniyeh2z40.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0lphd5juqwniyeh2z40.jpg" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  When to use Istio service mesh vs Nginx Ingress Controller?
&lt;/h3&gt;

&lt;p&gt;So far we have seen the differences between a traditional Nginx ingress controller and the Istio service mesh. Using a service mesh over the Nginx ingress controller is recommended only if you are looking for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enabling mutual TLS between services&lt;/li&gt;
&lt;li&gt;Observability of your service traffic&lt;/li&gt;
&lt;li&gt;Deployment techniques like blue/green releases, circuit breaking, A/B testing, etc.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use a traditional Nginx ingress controller if you only need to handle incoming traffic and distribute it to backend services, which works well when the number of services is small. As your workloads and services grow, a service mesh tool like Istio becomes essential.&lt;/p&gt;

&lt;p&gt;In this blog post, we covered the differences between a traditional ingress controller and the Istio service mesh, and when to use each of them.&lt;/p&gt;

&lt;p&gt;This brings us to the end of this article.&lt;/p&gt;

&lt;p&gt;Thank you for reading this post and I hope you find it informative.&lt;/p&gt;

&lt;p&gt;Happy Learning!!!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>istio</category>
      <category>servicemesh</category>
    </item>
    <item>
      <title>CMK Encryption for Azure Storage Accounts</title>
      <dc:creator>Vivekanand Rapaka</dc:creator>
      <pubDate>Wed, 26 Jan 2022 12:56:34 +0000</pubDate>
      <link>https://forem.com/vivekanandrapaka/cmk-encryption-for-azure-storage-accounts-38gd</link>
      <guid>https://forem.com/vivekanandrapaka/cmk-encryption-for-azure-storage-accounts-38gd</guid>
      <description>&lt;h2&gt;
  
  
  Purpose of this post
&lt;/h2&gt;

&lt;p&gt;The purpose of this post is to show you what kind of encryption Microsoft uses for encrypting storage accounts by default and how you can use CMK (Customer Managed Keys) to encrypt your storage accounts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Encryption using Microsoft managed keys
&lt;/h2&gt;

&lt;p&gt;By default, if you don't specify the type of encryption for your storage account at creation time, Microsoft uses server-side encryption (SSE) with Microsoft-managed keys to encrypt your data automatically. This applies to any storage account, regardless of its tier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Encryption using Customer managed keys (CMK)
&lt;/h2&gt;

&lt;p&gt;While you can continue to let Microsoft handle the encryption of your data, you can instead use your own keys. This is called CMK-enabled encryption. Here are some of the benefits of using CMK over the default Microsoft-managed keys.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Customers have control over the keys used to encrypt their data.&lt;/li&gt;
&lt;li&gt;Microsoft rotates its keys per its own compliance requirements; with CMK, customers can rotate keys to meet their own security compliance requirements.&lt;/li&gt;
&lt;li&gt;CMK keys are stored in the customer's key vault, giving control over where they can be used.&lt;/li&gt;
&lt;li&gt;The same CMK can be used to encrypt multiple storage accounts.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Implementing CMK for storage accounts
&lt;/h2&gt;

&lt;p&gt;In this section, we'll see how to implement CMK for storage accounts. &lt;/p&gt;

&lt;h4&gt;
  
  
  Examining default encryption for a storage account
&lt;/h4&gt;

&lt;p&gt;Before implementing CMK, let's see how Microsoft encrypts a storage account with Microsoft-managed keys. While creating a storage account, in the 'encryption' section you can specify whether you would like to go with default encryption or customized encryption using CMK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6FYxpA2f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a2bm9eq89rpo5mwb7j8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6FYxpA2f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a2bm9eq89rpo5mwb7j8w.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it's created, you can see the type of encryption used by the storage account, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--55gkmxtA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ygve20eyru3kz2yccdy4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--55gkmxtA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ygve20eyru3kz2yccdy4.png" alt="Image description" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you would like to use CMK, you can do so; however, a new key has to be created, stored in Azure Key Vault, and used for encryption. We'll see that in the next section.&lt;/p&gt;

&lt;h4&gt;
  
  
  Enabling CMK for a storage account
&lt;/h4&gt;

&lt;p&gt;1. Create a new key in an Azure key vault in the same region as the storage account.&lt;br&gt;
2. Click on 'generate/import' under keys, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w0zBzDw4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z6qu8xgel6qv6r1l0ge4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w0zBzDw4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z6qu8xgel6qv6r1l0ge4.png" alt="Image description" width="569" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. Give the key a name and leave everything else at the defaults, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S94aKMKS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzyukwulj9e299je3ezf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S94aKMKS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzyukwulj9e299je3ezf.png" alt="Image description" width="800" height="713"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. Go back to the storage account's encryption section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d7t_n4Kb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhjerbja4v8cqtnx0vuy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d7t_n4Kb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhjerbja4v8cqtnx0vuy.png" alt="Image description" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gd5ZD3QU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zp49h3ny3uraup875pz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gd5ZD3QU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zp49h3ny3uraup875pz7.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After selecting the key, it should appear as follows. Click 'save' to apply the settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cgVY1jgV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/264n80k43nmo30u70745.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cgVY1jgV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/264n80k43nmo30u70745.png" alt="Image description" width="620" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once applied, the portal shows that the account is now using CMK for storage encryption.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JfS86JIJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p1a57gjal5y5liiz1hot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JfS86JIJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p1a57gjal5y5liiz1hot.png" alt="Image description" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As part of applying CMK encryption, a system-assigned managed identity is also created for the storage account, and that identity is granted 'get', 'wrap', and 'unwrap' key permissions on the Azure Key Vault.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cbgxI2Ce--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gnf6vywmf81bdmvh6t5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cbgxI2Ce--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gnf6vywmf81bdmvh6t5t.png" alt="Image description" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, we have seen how to use customer managed keys for storage account encryption.&lt;/p&gt;

&lt;p&gt;This brings us to the end of this blog post. Hope you enjoyed reading it.&lt;/p&gt;

&lt;p&gt;Happy Learning!!!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>storage</category>
      <category>encryption</category>
    </item>
    <item>
      <title>Access Secrets in AKV using Managed identities for AKS</title>
      <dc:creator>Vivekanand Rapaka</dc:creator>
      <pubDate>Mon, 17 May 2021 18:29:15 +0000</pubDate>
      <link>https://forem.com/vivekanandrapaka/access-secrets-from-akv-using-managed-identities-for-aks-91p</link>
      <guid>https://forem.com/vivekanandrapaka/access-secrets-from-akv-using-managed-identities-for-aks-91p</guid>
      <description>&lt;h2&gt;
  
  
  Purpose of this post
&lt;/h2&gt;

&lt;p&gt;The purpose of this post is to show you how to access secrets stored in Azure Key Vault from an AKS cluster.&lt;/p&gt;

&lt;p&gt;In one of my previous blog &lt;a href="https://dev.to/vivekanandrapaka/using-azure-keyvault-for-azure-webapps-with-azure-devops-25lh"&gt;posts&lt;/a&gt;, I showed how to access keys from Key Vault in Azure DevOps, where I configured the release pipeline to fetch the secret from the key vault and substitute it at runtime.&lt;/p&gt;

&lt;p&gt;There are many other ways to access keys in a key vault from the Azure resources we deploy; using managed identities is one of the most secure and easiest ways to keep our app secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Managed Identities?
&lt;/h2&gt;

&lt;p&gt;There are many posts that help you understand what a managed identity is. The Microsoft docs define managed identities as follows:&lt;/p&gt;

&lt;p&gt;"Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications may use the managed identity to obtain Azure AD tokens. For example, an application may use a managed identity to access resources like Azure Key Vault where developers can store credentials in a secure manner or to access storage accounts." &lt;br&gt;
Definition credits: &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview" rel="noopener noreferrer"&gt;Microsoft docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In simple words, any Azure resource that supports Azure AD authentication can have a managed identity. Once we enable managed identity for an Azure resource, a service principal is created in Active Directory on behalf of that resource. &lt;br&gt;
With this, you can grant the Azure resource that has managed identity enabled access to the target Azure resource you want it to reach.&lt;/p&gt;

&lt;p&gt;For example, if you want a webapp to access your key vault, all you need to do is enable managed identity on the webapp and grant access to that managed identity in the access policies of the key vault.&lt;/p&gt;

&lt;p&gt;Without managed identities, in the above mentioned scenario you would need a &lt;a href="https://docs.microsoft.com/en-us/powershell/azure/create-azure-service-principal-azureps?view=azps-5.9.0#:~:text=An%20Azure%20service%20principal%20is,accessed%20and%20at%20which%20level." rel="noopener noreferrer"&gt;service principal&lt;/a&gt; and a client secret to be created for your application (webapp in above scenario), and that service principal has to be granted permission on the target azure resource (key vault in above scenario). You need to configure your webapp to use the client id and secret to make calls to key vault to fetch the secrets. &lt;/p&gt;

&lt;p&gt;You would have to manage the client ID and secret by yourself. In case the service principal credentials are compromised, you need to change the secret and update the application code to consume the new one. This is not only somewhat insecure, but also tedious, since client secrets have to be updated in multiple places.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managed Identities to the rescue
&lt;/h2&gt;

&lt;p&gt;With managed identities you no longer have to create a service principal for your app. When the feature is enabled on an Azure resource, it not only creates an SP for you, but also manages key rotation by itself. You no longer need to keep the client ID and client secret of a service principal in your source code to access the target resource.&lt;/p&gt;

&lt;p&gt;Kindly note that this only removes the burden of maintaining the service principal credentials in your code.&lt;br&gt;
You still need the appropriate libraries and code to access the target resource. For example, if your app is going to access Key Vault and runs on a webapp with managed identity enabled, you no longer need to pass service principal credentials to call the Key Vault API endpoint. You can call the endpoint directly from your webapp, as it has managed identity enabled and that identity is granted permission on the key vault.&lt;/p&gt;

&lt;p&gt;With that, let's dive into the demo.&lt;/p&gt;

&lt;p&gt;Here are the steps we are going to follow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an AKS cluster&lt;/li&gt;
&lt;li&gt;Enable managed identity to AKS cluster&lt;/li&gt;
&lt;li&gt;Create a key vault with a secret in it.&lt;/li&gt;
&lt;li&gt;Enable access to managed identity of AKS via access policies in  key vault.&lt;/li&gt;
&lt;li&gt;Access the secret in the key vault from a Pod in AKS.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We are going to create 2 resources in this demo.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AKS Cluster&lt;/li&gt;
&lt;li&gt;Azure Key Vault&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this demo, I created a sample AKS cluster using the following commands after logging in to Azure from the Azure CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

az group create --name yourresourcegroupname --location uksouth 
az aks create -g yourresourcegroupname -n MyAKS --location uksouth --generate-ssh-keys


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As we are discussing managed identities and not AKS itself, the above should suffice for creating an AKS cluster.&lt;/p&gt;

&lt;p&gt;Once the AKS cluster is created, you should see a new resource group with a name prefixed "MC_"; it holds the underlying resources your AKS cluster needs to function.&lt;/p&gt;

&lt;p&gt;Once it's created, click on the VMSS that was created for your AKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwky39mhveoinhesrtxj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwky39mhveoinhesrtxj.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once in the VMSS blade, click on "Identity" and notice the option for a system-assigned managed identity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3932w6zotykdw5rjdiqe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3932w6zotykdw5rjdiqe.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enable it by clicking "On".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmywfm1fm0z4uzvpkt5pq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmywfm1fm0z4uzvpkt5pq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it's enabled, you should see a new managed identity resource created in the "MC_" resource group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkgbwcd8se5nozwmqt6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkgbwcd8se5nozwmqt6v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, create an Azure Key Vault resource and a secret in it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;az group create --name "yourresourcegroupname" -l "locationofyourresorucegroup"&lt;/code&gt;&lt;/p&gt;
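&lt;p&gt;The Key Vault and the secret themselves can also be created from the CLI. A minimal sketch, where all the names are hypothetical placeholders:&lt;/p&gt;

```shell
# Create the Key Vault in the resource group (names are placeholders)
az keyvault create --name yourkeyvaultname --resource-group yourresourcegroupname --location uksouth

# Store a secret in it
az keyvault secret set --vault-name yourkeyvaultname --name yoursecretname --value yoursecretvalue
```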

&lt;p&gt;I have created below key vault and a secret.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1pxhwr6xc8fnfoqxxnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1pxhwr6xc8fnfoqxxnr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As of now, we have created an AKS cluster, enabled a system-assigned managed identity, and created a Key Vault with a new secret in it.&lt;/p&gt;

&lt;p&gt;Next, we are going to add permission to AKS to access key vault. To do so, go to access policies of Key vault and click on "Add access policy" option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxh1offispoyl2cd1zmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxh1offispoyl2cd1zmq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select "secret management" in the configure from template option.&lt;br&gt;
Note that i have selected "secret management" for the sake of this POC. In real production environment, get, list permissions should be enough.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkkwz34fip85ysqgzmzk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkkwz34fip85ysqgzmzk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the "Select principal" option, click on "none selected" to select one and choose "AKS Service principal" Object ID and "add".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpllheea6tgx35bpixo5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpllheea6tgx35bpixo5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see the access policy added to the list of access policies; click "Save".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycjiun9w43a5ylzu6yto.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycjiun9w43a5ylzu6yto.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
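&lt;p&gt;The same access policy can also be granted from the CLI. A hedged sketch, where the vault name and the object ID of the managed identity are placeholders:&lt;/p&gt;

```shell
# Grant the managed identity Get/List permissions on secrets
az keyvault set-policy --name yourkeyvaultname \
  --object-id 00000000-0000-0000-0000-000000000000 \
  --secret-permissions get list
```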

&lt;p&gt;Once done, connect to the AKS cluster using the below command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;az aks get-credentials --resource-group yourresourcegroupname --name youraksclustername --overwrite-existing&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once connected, spin up an nginx pod using the below command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncs758lrqk2ytkvoaa6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncs758lrqk2ytkvoaa6a.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
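&lt;p&gt;The screenshot above shows the pod being created. A minimal sketch of the commands, assuming the default namespace and the public nginx image:&lt;/p&gt;

```shell
# Start a single nginx pod
kubectl run nginx --image=nginx

# Confirm the pod is up
kubectl get pods
```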

&lt;p&gt;Use the following command to log in to the pod interactively:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl exec -i -t nginx --container nginx -- /bin/bash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To access the secret key in azure key vault, we need to hit the api to obtain the access token as described in this &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad#:~:text=In%20the%20terminal%20window%2C%20using,the%20access%20token%20is%20below.&amp;amp;text=The%20response%20includes%20the%20access%20token%20you%20need%20to%20access%20Resource%20Manager" rel="noopener noreferrer"&gt;document&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g9uomn9lnvgf7mvie0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g9uomn9lnvgf7mvie0i.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
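&lt;p&gt;For reference, the token request in the screenshot goes to the Azure Instance Metadata Service endpoint. A sketch along the lines of the linked document:&lt;/p&gt;

```shell
# From inside the pod, request an access token for Key Vault from the
# instance metadata endpoint (reachable because the pod shares the node's network)
curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&amp;resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true
```

&lt;p&gt;The response is a JSON document whose access_token field is used as the bearer token in the next step.&lt;/p&gt;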

&lt;p&gt;Once the token is obtained, you can access the secret's value in the key vault using the below command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl 'https://&amp;lt;your-key-vault-name&amp;gt;.vault.azure.net/secrets/&amp;lt;your-secret-name&amp;gt;?api-version=2016-10-01' -H "Authorization: Bearer &amp;lt;access-token&amp;gt;"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49a3ye2fffb5cbz4g07u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49a3ye2fffb5cbz4g07u.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This way we can access the values in the key vault from AKS with managed identities enabled.&lt;/p&gt;

&lt;p&gt;In this blog post, we have seen what managed identities are in a nutshell, how to enable managed identity for an AKS cluster, and how to access Key Vault from the cluster with the help of an access policy granted to the cluster's managed identity.&lt;/p&gt;

&lt;p&gt;System Assigned managed identity lives as long as the resource is in Azure. Once the resource is deleted, the corresponding managed identity and its service principal are also deleted from Azure AD.&lt;/p&gt;

&lt;p&gt;We also have what's called a user-assigned identity, which exists even after a resource is deleted and can be assigned to one or more instances of an Azure service. In the case of user-assigned managed identities, the identity is managed separately from the resources that use it, and you are responsible for cleaning it up after use.&lt;/p&gt;

&lt;p&gt;Hope you enjoyed reading this blog post.&lt;/p&gt;

&lt;p&gt;Thanks for reading!!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>azuredevops</category>
    </item>
    <item>
      <title>Using Azure Keyvault for Azure Webapps with Azure DevOps</title>
      <dc:creator>Vivekanand Rapaka</dc:creator>
      <pubDate>Fri, 19 Mar 2021 16:44:28 +0000</pubDate>
      <link>https://forem.com/vivekanandrapaka/using-azure-keyvault-for-azure-webapps-with-azure-devops-25lh</link>
      <guid>https://forem.com/vivekanandrapaka/using-azure-keyvault-for-azure-webapps-with-azure-devops-25lh</guid>
      <description>&lt;h4&gt;
  
  
  Purpose of this post
&lt;/h4&gt;

&lt;p&gt;The purpose of this post is to show you how we can use Azure Key Vault to secure secrets of a webapp and call them from Azure DevOps using Variable groups. This is one of the ways to handle secrets for your deployments. One of the other ways is to use Managed Identities which is more secure. I'll cover that in a different blog post.&lt;/p&gt;

&lt;h4&gt;
  
  
  What are secrets and why is secret management important?
&lt;/h4&gt;

&lt;p&gt;Secrets management is the process of securely and efficiently managing the safe usage of credentials by authorized applications. In a way, secrets management can be seen as an enhanced version of password management. While the scope of managed credentials is larger, the goal is the same: to protect critical assets from unauthorized access.&lt;/p&gt;

&lt;p&gt;For managing sensitive application configuration like DB connection strings, API keys, and other application-related secrets, it is recommended to use Azure Key Vault or another secret management solution. Azure Key Vault is a cloud service for securely storing and accessing secrets like connection strings, account keys, or the passwords for PFX (private key) files. Azure Key Vault can be used with commonly used services like Azure Web Apps, Azure Kubernetes Service, Azure Virtual Machines, and many other Azure services. &lt;/p&gt;

&lt;p&gt;Data like connection strings, API tokens, client IDs, and passwords are considered sensitive information, and handling them poorly may not only lead to security incidents but also compromise your entire system.&lt;/p&gt;

&lt;p&gt;Here are a couple of poorly handled secret management practices.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintaining secrets in the source code repository in a settings or environment config file&lt;/li&gt;
&lt;li&gt;Using the same passwords/keys for all environments&lt;/li&gt;
&lt;li&gt;Sharing secrets across all team members&lt;/li&gt;
&lt;li&gt;Using shared service accounts to connect to the database or a server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Avoiding the above would be the first step for an effective secret management.&lt;/p&gt;

&lt;h4&gt;
  
  
  Using Azure KeyVault for App Services
&lt;/h4&gt;

&lt;p&gt;With Azure DevOps, all sensitive data like connection strings, secrets, API keys, and any other data you categorize as sensitive can be fetched directly from Azure Key Vault, instead of being configured on the pipeline.&lt;/p&gt;

&lt;p&gt;Let’s take an example of configuring DB Connection string for an Azure WebApp using Azure KeyVault.&lt;/p&gt;

&lt;p&gt;Let's create a Key Vault along with a secret in it. Notice that the value is stored as a secret.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikat29rg51piqa2c4ud2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikat29rg51piqa2c4ud2.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly, let's create one more for the UAT DB connection. Once created, the keys are shown as in the below screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gg3jvxoyqh8w4kt1l9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gg3jvxoyqh8w4kt1l9u.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now in Azure DevOps, create a new variable group under the library section of pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83dht1vmcknpxu5clnrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83dht1vmcknpxu5clnrk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Give the variable group a name.&lt;/li&gt;
&lt;li&gt;Make sure to select the options “Allow access to all pipelines” and “Link secrets from Azure KeyVault”.&lt;/li&gt;
&lt;li&gt;Choose the Key Vault name and authorize.&lt;/li&gt;
&lt;li&gt;Click on “Add” and select the secrets to use in the pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Below is the screenshot for reference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferdk2xxr90a5ll8gd5vj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferdk2xxr90a5ll8gd5vj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
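&lt;p&gt;This post uses classic pipelines, but for reference, in a YAML pipeline the same linkage would be declared in the variables section. A sketch, with a placeholder group name:&lt;/p&gt;

```yaml
# azure-pipelines.yml (fragment): link the Key Vault-backed variable group
variables:
  - group: yourvariablegroupname

steps:
  # Linked secrets are referenced with $(SecretName) macro syntax; values are masked in logs
  - script: echo "Connection string is available as $(Dev-DBConnectionString)"
```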

&lt;p&gt;Once done, in the pipeline, go to the variables section, click on ‘Variable groups’, and click on ‘Link variable group’ to choose the variable group that was created.&lt;/p&gt;

&lt;p&gt;In the stages, select the environments and click the link option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tek7wr94yd5nb8hyhp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tek7wr94yd5nb8hyhp1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the next step is to configure the task for applying the DB connection string to the app service.&lt;/p&gt;

&lt;p&gt;Add and configure “Azure App Service Settings” task and in the connection strings settings, configure the JSON value for applying DB Connection string. The value here is $(Dev-DBConnectionString) that is stored in Azure KeyVault. It is picked up by the pipeline during the execution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2qc7wskpj8uu2yypvfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2qc7wskpj8uu2yypvfz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
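&lt;p&gt;For reference, the connection strings field of this task takes a JSON array. A sketch of such a value, where the connection string name and type are assumptions:&lt;/p&gt;

```json
[
  {
    "name": "DefaultConnection",
    "value": "$(Dev-DBConnectionString)",
    "type": "SQLAzure",
    "slotSetting": false
  }
]
```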

&lt;p&gt;Below are the logs of the pipeline execution. They show that the pipeline is able to fetch the value and, it being a sensitive parameter, the DB connection string is hidden in the logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fth5q9s0tv80s7qcj3tx9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fth5q9s0tv80s7qcj3tx9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the webapp, under configuration-&amp;gt;Database connection strings, we will be able to see the actual value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9bx6yeowkpt6xwwlsrb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9bx6yeowkpt6xwwlsrb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we click on the ‘show values’ we can see the value of connection string.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9glabx0w1hm0jmj3z625.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9glabx0w1hm0jmj3z625.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For configuring other application settings which are &lt;strong&gt;NON-SENSITIVE&lt;/strong&gt;, we can use the ‘App Settings’ section of the “Azure App Service Settings” task. Similar to DB connection strings, these can also take values from the key vault.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8zepk17ktcg9au9juwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8zepk17ktcg9au9juwa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During the execution, we can see the application key that is configured in the above setting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzww0mwk9k1w38r6ms7v0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzww0mwk9k1w38r6ms7v0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other way to manage secrets without Key Vault is to use variables with the padlock option to lock the value, as shown in the below screenshots.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj75bmmxrx2gxowwkamv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj75bmmxrx2gxowwkamv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5xec7l0s95bb3oumn7v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5xec7l0s95bb3oumn7v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This way the secret is not visible to anyone, but if you later need to know the value, you have to handle that in other ways; the suggested approach is to implement a solution like Azure Key Vault with the right access policies.&lt;/p&gt;

&lt;p&gt;This brings us to the end of this blog post. We have seen how to use Azure Key Vault for Azure Web Apps with Azure DevOps, and the options available to handle secrets in Azure DevOps using variable groups and variables.&lt;/p&gt;

&lt;p&gt;Hope you enjoyed reading it.  Happy Learning!!&lt;/p&gt;

</description>
      <category>azuredevops</category>
      <category>azurekeyvault</category>
      <category>azure</category>
    </item>
    <item>
      <title>Importing Existing Infrastructure to Terraform</title>
      <dc:creator>Vivekanand Rapaka</dc:creator>
      <pubDate>Sun, 22 Nov 2020 05:38:10 +0000</pubDate>
      <link>https://forem.com/vivekanandrapaka/importing-existing-infrastructure-to-terraform-mn2</link>
      <guid>https://forem.com/vivekanandrapaka/importing-existing-infrastructure-to-terraform-mn2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Cover page image credits to: terraform.io&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Purpose
&lt;/h4&gt;

&lt;p&gt;The purpose of this blog post is to show you how to bring infrastructure that was already deployed in Azure by other means under Terraform's control.&lt;/p&gt;

&lt;h4&gt;
  
  
  Assumptions
&lt;/h4&gt;

&lt;p&gt;I assume that you have basic knowledge of Azure &amp;amp; Terraform.&lt;/p&gt;

&lt;h4&gt;
  
  
  Need for import
&lt;/h4&gt;

&lt;p&gt;Terraform is an IaC tool that lets you manage your infrastructure regardless of where it's hosted (on-premises or cloud) and how it was deployed (manually or by other IaC tools like ARM templates). In the case of Azure, you might run into a scenario where the existing infrastructure is already deployed and your team or organization has decided to use Terraform for managing the infrastructure going forward. In such a scenario, you need to bring the existing infrastructure under Terraform's control.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll see how to achieve this on an existing infrastructure that’s already deployed on Azure. Here are the steps that are involved in importing the deployed infrastructure.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Identify the resources that you want to import to Terraform.&lt;/li&gt;
&lt;li&gt;Write the configuration for the deployed resources.&lt;/li&gt;
&lt;li&gt;Run ‘terraform import’ to import the resources into a state file.&lt;/li&gt;
&lt;li&gt;Run ‘terraform plan’ to review the differences between the Terraform configuration and the actual state.&lt;/li&gt;
&lt;li&gt;Update the configuration to include the missing attributes, then run ‘terraform plan’ again to make sure the configuration includes all the attributes of the deployed infrastructure.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Please note that when I refer to the word configuration, I mean the terraform template file (.tf) that we use to declare our desired state configuration&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s get started.&lt;/p&gt;

&lt;p&gt;I have following resources deployed in my resource group from a different deployment.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;App Service Plan&lt;/li&gt;
&lt;li&gt;WebApp with Staging Slot&lt;/li&gt;
&lt;li&gt;Application Insights&lt;/li&gt;
&lt;li&gt;Storage Account&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Forko26f9suj3cwgacvhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Forko26f9suj3cwgacvhb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to import these resources, we’ll have to create a Terraform configuration file that includes all these components. Let’s start with the App Service Plan. We’ll refer to the Terraform documentation for the resource &lt;a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/app_service_custom_hostname_binding" rel="noopener noreferrer"&gt;azurerm_app_service_plan&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6bxaxwo02irlv8gpcs95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6bxaxwo02irlv8gpcs95.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created a new main.tf file, copied the above block into it, added the azurerm provider on top of it, and updated the values from the existing infrastructure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "azurerm" {
  version = "2.0.0"
  features {}
}

resource "azurerm_app_service_plan" "appserviceplan" {
  name                = "plan-sp-dev"
  location            = "East US"
  resource_group_name = "az-terf-dev-rg"

  sku {
    tier = "Standard"
    size = "S1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Details of the service plan can be obtained from the Overview section of the App Service Plan resource blade.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fovu3tlqesr0upzurj5iy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fovu3tlqesr0upzurj5iy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s initialize terraform with the &lt;code&gt;terraform init&lt;/code&gt; command to install the azurerm provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg1ee8xxwntr0bun9xz0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg1ee8xxwntr0bun9xz0r.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a first step towards importing the resource, we wrote the configuration. The import command for a particular resource can be found in the terraform documentation for that resource.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foexhju48g6lywwcf7xzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foexhju48g6lywwcf7xzg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the command syntax:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform import azurerm_app_service_plan.instance1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Web/serverfarms/instance1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you observe carefully, the terraform import command is followed by the resource type, which is azurerm_app_service_plan, and the resource identifier name, which is instance1. These are the values from the configuration file in terraform’s current working directory.&lt;/p&gt;

&lt;p&gt;So, it’s important that we write the configuration first: the import process depends on it, and terraform import doesn’t create a configuration file by itself.&lt;/p&gt;
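&lt;p&gt;To summarize, the general workflow looks like this (shown with my example names; substitute your own resource type, identifier and Azure resource ID):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Write the resource block in main.tf first
# 2. Initialize the providers
terraform init

# 3. Import the existing Azure resource into terraform state
terraform import azurerm_app_service_plan.appserviceplan &amp;lt;azure-resource-id&amp;gt;

# 4. Verify that configuration and real infrastructure match
terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;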

&lt;p&gt;Adapting the import command we obtained above to our implementation, the command we should use is as follows:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform import azurerm_app_service_plan.appserviceplan /subscriptions/yoursubscriptionidgoeshere/resourceGroups/az-terf-dev-rg/providers/Microsoft.Web/serverfarms/plan-sp-dev&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxsns4macytnkgm6cfs8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxsns4macytnkgm6cfs8d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After running the above command, it shows that the App Service Plan resource import is successful and going forward, this resource will be managed by Terraform.&lt;/p&gt;

&lt;p&gt;We need to make sure that the next apply doesn’t make any breaking modifications. Let’s run &lt;code&gt;terraform plan&lt;/code&gt; to view the differences between the configuration we wrote and the current state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flyrrmnwbsqs3c6uje25i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flyrrmnwbsqs3c6uje25i.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqixzj759oautfg3ab8kj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqixzj759oautfg3ab8kj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the above screenshot, we can see that our configuration now matches the current state of the service plan. This means we can safely run &lt;code&gt;terraform apply&lt;/code&gt; now if we want to.&lt;/p&gt;

&lt;p&gt;Let’s do the same with the other resources as well. We’ll import the app service by adding its configuration to the existing main.tf file.&lt;/p&gt;

&lt;p&gt;I took the configuration block from the terraform documentation and added it, removing the “site_config” section as I don’t have any custom site config for my site.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2826iqkjstkriqvoaeib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2826iqkjstkriqvoaeib.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I need the app_service_plan_id, which can be obtained from the App Service Plan properties.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F07kh58gzonjwzr8l4f5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F07kh58gzonjwzr8l4f5w.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_app_service" "appservice" {
  name                = "app-dev-webapp"
  location            = "East US"
  resource_group_name = "az-terf-dev-rg"
  app_service_plan_id = "/subscriptions/yoursubscriptionidgoeshere/resourceGroups/az-terf-dev-rg/providers/Microsoft.Web/serverFarms/plan-sp-dev"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is what my configuration file looks like so far.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Furlbjw6pa2ijqsysul5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Furlbjw6pa2ijqsysul5l.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s try to import this resource and check. The procedure to find the import command is the same as for the service plan. The syntax is as follows:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform import azurerm_app_service.instance1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Web/sites/instance1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let’s run a terraform import.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgg58178gikthp6pw7549.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgg58178gikthp6pw7549.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s take a minute to examine the .tfstate file that was updated by the terraform import command. Every time the import command runs, it updates the state file with the attributes of the imported resource. We hadn’t mentioned the app insights key information in our configuration, but the terraform state file was updated with all the attributes that were set during the manual deployment of this resource. So we need to add the app insights instrumentation key to our configuration file.&lt;/p&gt;

&lt;p&gt;If we omit this, terraform will still show that there are no changes to be applied; however, if someone changes this key, terraform won’t be able to revert the change.&lt;/p&gt;

&lt;p&gt;So we need to identify all the settings that we want to control via terraform and add them to our configuration file.&lt;/p&gt;
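&lt;p&gt;As a rough sketch (the key value here is a placeholder, not a real key), the instrumentation key can be added to the app service block via app_settings like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_app_service" "appservice" {
  # ... existing arguments from the earlier block ...

  app_settings = {
    "APPINSIGHTS_INSTRUMENTATIONKEY" = "your-instrumentation-key-goes-here"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;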

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhkzucv5ukyqlcgeutl9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhkzucv5ukyqlcgeutl9c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F69p1judhpvefuz5bjkwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F69p1judhpvefuz5bjkwp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if we run &lt;code&gt;terraform plan&lt;/code&gt;, it should show no changes to the existing infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwri26443br09h7jal2ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwri26443br09h7jal2ja.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In case you see any changes in the output of terraform plan, make sure you identify and correct them in your configuration. Once corrected, run &lt;code&gt;terraform plan&lt;/code&gt; again to make sure that there are no new changes to be applied.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn9n58vho0qdvoaiy5kec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn9n58vho0qdvoaiy5kec.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can do the same for the other resources as well. Let’s add the staging slot with a different name to see how the changes show up and how we can take corrective action.&lt;/p&gt;

&lt;p&gt;The name of the staging slot is ‘webappname-staging’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdkhrpeekqmygioworker.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdkhrpeekqmygioworker.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve added the slot configuration and intentionally gave it the wrong name, so we can identify the difference after we run the import and plan commands.&lt;/p&gt;
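&lt;p&gt;For reference, here is a minimal sketch of the slot block I added (with the intentionally wrong name, and a placeholder subscription ID):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_app_service_slot" "staging" {
  name                = "staging-test"
  app_service_name    = "app-dev-webapp"
  location            = "East US"
  resource_group_name = "az-terf-dev-rg"
  app_service_plan_id = "/subscriptions/yoursubscriptionidgoeshere/resourceGroups/az-terf-dev-rg/providers/Microsoft.Web/serverFarms/plan-sp-dev"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;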

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsnj4fobddpho0dsx3uqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsnj4fobddpho0dsx3uqx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the output of the terraform import command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fasn18kddly96wzg6wdwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fasn18kddly96wzg6wdwb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The import command will succeed regardless of the name in the terraform configuration, as we gave the correct name in the terraform import command itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpvy1ag8ay17o1f49z3w5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpvy1ag8ay17o1f49z3w5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, terraform plan shows changes: 1 to add and 1 to destroy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fghjbvgav0eqr7enrwrai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fghjbvgav0eqr7enrwrai.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output clearly shows the change that is forcing the replacement. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fptrr2oa60b5k2f7nkoe2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fptrr2oa60b5k2f7nkoe2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, it’s important that we check the configuration thoroughly before running the plan.&lt;br&gt;
Let’s correct the configuration by changing the name from “staging-test” to “staging” and run &lt;code&gt;terraform plan&lt;/code&gt; again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fag6v459b59z68jzu5u99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fag6v459b59z68jzu5u99.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the same way, we can import the other resources (storage account, app insights).&lt;/p&gt;
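&lt;p&gt;A minimal sketch of the two additional blocks (the names and SKUs here are illustrative; match them to your existing resources):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_application_insights" "appinsights" {
  name                = "your-app-insights-name"
  location            = "East US"
  resource_group_name = "az-terf-dev-rg"
  application_type    = "web"
}

resource "azurerm_storage_account" "storage" {
  name                     = "yourstorageaccountname"
  resource_group_name      = "az-terf-dev-rg"
  location                 = "East US"
  account_tier             = "Standard"
  account_replication_type = "GRS"  # illustrative initial value; the account actually uses LRS
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;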

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhdbzuw2bi4a3b0046f21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fhdbzuw2bi4a3b0046f21.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While the app insights import and plan commands ran as expected, the storage account plan showed a new change.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbmlj2k3eazprvco4yzc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbmlj2k3eazprvco4yzc6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hadn’t set the account replication type to “LRS”, which is the current setting for the storage account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7l1gabvu3n5dmn3oxn5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7l1gabvu3n5dmn3oxn5m.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After I changed the configuration back to “LRS”, terraform plan showed no new changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjksa8wd0shdnv4jj84g2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjksa8wd0shdnv4jj84g2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fja66fyurj7yx9umr138o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fja66fyurj7yx9umr138o.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have seen how to import existing resources by hardcoding the values in a single configuration file. However, the best practice is to maintain a separate variables file and a module-based architecture for all the components you wish to deploy. &lt;/p&gt;

&lt;p&gt;Check out my other &lt;a href="https://dev.to/vivekanandrapaka/terraform-and-azure-devops-120p"&gt;blog post&lt;/a&gt; on how to integrate terraform with DevOps and deploy to various environments with ease using a module-based architecture.&lt;/p&gt;

&lt;p&gt;Here are the key takeaways for importing existing infrastructure into terraform:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write the configuration first for the resources you would like to import into terraform state.&lt;/li&gt;
&lt;li&gt;The terraform import process depends on the configuration we write.&lt;/li&gt;
&lt;li&gt;The import command can be found in the resource documentation, at the bottom of the page.&lt;/li&gt;
&lt;li&gt;Always review the state file to identify the attributes &amp;amp; values that were imported.&lt;/li&gt;
&lt;li&gt;Make the necessary changes to your configuration file based on the state file review.&lt;/li&gt;
&lt;li&gt;Run terraform plan to make sure that there are no breaking changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It’s always a best practice to source control terraform configuration files and store the state file in a remote backend.&lt;/p&gt;

&lt;p&gt;This brings us to the end of this blog post. Hope you enjoyed reading it.&lt;/p&gt;

&lt;p&gt;Happy Learning!!!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Magic with Azure DevOps PowerShell Module</title>
      <dc:creator>Vivekanand Rapaka</dc:creator>
      <pubDate>Thu, 12 Nov 2020 08:51:31 +0000</pubDate>
      <link>https://forem.com/vivekanandrapaka/magic-with-azure-devops-powershell-module-4kjk</link>
      <guid>https://forem.com/vivekanandrapaka/magic-with-azure-devops-powershell-module-4kjk</guid>
      <description>&lt;h4&gt;
  
  
  Purpose of this blog post
&lt;/h4&gt;

&lt;p&gt;To show you how to use the AzureDevOps PowerShell module to perform some basic Azure DevOps tasks from PowerShell.&lt;/p&gt;

&lt;h4&gt;
  
  
  Assumptions
&lt;/h4&gt;

&lt;p&gt;I assume that you have fair knowledge of how to use PowerShell for day-to-day operations and some practical working experience with Azure DevOps.&lt;/p&gt;

&lt;h4&gt;
  
  
  Prerequisites needed
&lt;/h4&gt;

&lt;p&gt;Here are the prerequisites you need if you would like to replicate the steps below in your environment:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; PowerShell (obviously) - It’s suggested that you have the latest and greatest version of PowerShell, but any version from 5.1 onwards works just fine.&lt;/li&gt;
&lt;li&gt; AzureDevOps PowerShell module – I’ll show you how to install this in this blog post.&lt;/li&gt;
&lt;li&gt; Azure DevOps account - with a few build &amp;amp; release pipelines set up. Check out one of my previous blog &lt;a href="https://dev.to/vivekanandrapaka/terraform-and-azure-devops-120p"&gt;posts&lt;/a&gt; on how to set up an end-to-end Azure DevOps pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  AzureDevOps PowerShell Module
&lt;/h4&gt;

&lt;p&gt;The AzureDevOps module is an open-source PowerShell module, first released in 2017. It is owned and maintained by &lt;a href="https://developer.microsoft.com/en-us/advocates/donovan-brown" rel="noopener noreferrer"&gt;Donovan Brown&lt;/a&gt; (Principal DevOps Manager at Microsoft) and &lt;a href="https://www.razorspoint.com/author/razor/" rel="noopener noreferrer"&gt;Sebastian Schütze&lt;/a&gt; (Azure nerd with a focus on DevOps and Azure DevOps), and has seen many releases since. The current version as of this article is 7.1.2.&lt;/p&gt;

&lt;p&gt;If you are an infrastructure engineer or developer, I’m sure that you have used PowerShell in the past, and the capabilities that you get with PowerShell are simply awesome. Besides the native cmdlets in PowerShell’s built-in modules, there are a lot of community-driven/open-source modules available for achieving desired actions.&lt;/p&gt;

&lt;p&gt;One such module is the AzureDevOps PowerShell module.&lt;/p&gt;

&lt;p&gt;You can use the cmdlets in this module to interact with the Azure DevOps REST API across all aspects of Azure DevOps. Microsoft already offers the Azure DevOps CLI, which does the same.&lt;/p&gt;

&lt;p&gt;If you are a PowerShell fan like me, you would look for a module that offers cmdlets for whatever technology you work with.&lt;/p&gt;

&lt;p&gt;To know more about the Azure DevOps PowerShell module, please visit the following &lt;a href="https://www.donovanbrown.com/post/PowerShell-I-would-like-you-to-meet-TFS-and-VSTS" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.powershellgallery.com/packages/VSTeam/7.1.2" rel="noopener noreferrer"&gt;AzureDevOps module on Powershell gallery&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://chocolatey.org/packages/microsoft-vsteam-psmodule" rel="noopener noreferrer"&gt;AzureDevOps module in Choclatey&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ok, Let’s get started.&lt;/p&gt;

&lt;p&gt;First, we’ll take a look at how to install the AzureDevOps module and the capabilities this module offers. It’s pretty straightforward. &lt;/p&gt;

&lt;p&gt;We need a PAT (personal access token) from Azure DevOps to authenticate from the command line. So, we’ll generate that first and then install the PowerShell module.&lt;/p&gt;

&lt;p&gt;1. Log in to your Azure DevOps organization and generate a personal access token for your account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4lm97ji2vtzqpd7t2hm1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4lm97ji2vtzqpd7t2hm1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. Select “New Token”, give it a name, and choose full access.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: Do not share this token with anyone. Anyone who has this token has access to your AzureDevOps account.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1gw3igd12bwm5uk7uh08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1gw3igd12bwm5uk7uh08.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. Copy the token and keep it secure. You will need it later.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;READ the warning message&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fai3bfm5kaptsirau1551.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fai3bfm5kaptsirau1551.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. Run PowerShell as administrator and run the following command:&lt;br&gt;
&lt;code&gt;Install-Module -Name VSTeam&lt;/code&gt;&lt;br&gt;
(run it with the -Force parameter to upgrade to the latest version if the module is already installed)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flg3su8nt4mhlb6i7jl8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flg3su8nt4mhlb6i7jl8n.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that I already had the module installed, so I ran the command with the -Force parameter to upgrade it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo4idja3qy48dds19y3p0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fo4idja3qy48dds19y3p0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5. Assign the personal access token generated in the previous step to a variable for easy reuse.&lt;br&gt;
&lt;code&gt;$PAT="yourtokengoeshere"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw9s2hc24stlz5rxruvd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw9s2hc24stlz5rxruvd7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6. Type the following commands to set up your account:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Import-Module VSTeam&lt;br&gt;
Set-VSTeamAccount -Account https://dev.azure.com/yourorganizationname/ -PersonalAccessToken $PAT&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flramvhwhowa7xqm19j8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flramvhwhowa7xqm19j8b.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7. Let’s list the projects in the Azure DevOps organization.&lt;br&gt;
&lt;code&gt;Get-VSTeamProject&lt;/code&gt;   # Lists all the available projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fifi2mniby4y8ln14tumf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fifi2mniby4y8ln14tumf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8. Let’s assign a project name to the $Project variable and use the cmdlet below to explore the release definitions.&lt;/p&gt;

&lt;p&gt;You can use any of your projects from the previous output.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$Project='Terraform'&lt;/code&gt;  &lt;/p&gt;

&lt;p&gt;&lt;code&gt;Get-VSTeamReleaseDefinition -ProjectName $Project&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2kh2nhahruabb9i2eya0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2kh2nhahruabb9i2eya0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have one CD release pipeline, which is shown in the above output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsvb9uogtqq4fxeqyg49m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsvb9uogtqq4fxeqyg49m.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9. Let’s explore a little more. How about exporting the variables for this release definition?&lt;/p&gt;

&lt;p&gt;Type the following command to get the variables.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Get-VSTeamReleaseDefinition -ProjectName $project -Id 1 | Select -expand Variables | Format-List&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F600b95hsaygbdi4xz6zm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F600b95hsaygbdi4xz6zm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great, we can see the variables in the output. Now let’s check the variables in the release pipeline to verify.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx8yozouy7mym9rph08t9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx8yozouy7mym9rph08t9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looks like we received only the release-scoped variables. &lt;/p&gt;

&lt;p&gt;Let’s see how to obtain the values scoped to ‘Dev’.&lt;/p&gt;

&lt;p&gt;Let’s pipe our previous cmdlet to &lt;code&gt;Get-Member&lt;/code&gt; to see if there is a property for environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F232iux4mwoqbkbu7k7lz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F232iux4mwoqbkbu7k7lz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have an Environments property. Let’s expand on that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Falf57r53ypmscqm500v7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Falf57r53ypmscqm500v7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above output, we can see the variables which are scoped to ‘Dev’.&lt;/p&gt;

&lt;p&gt;10. Now, let’s run the following command to get the variables scoped to a specific environment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Get-VSTeamReleaseDefinition -ProjectName $project -Id 1 | Select -expand environments | Where name -like 'DEV' | Select -Expand Variables | Format-List&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgnitb1587h7mjq54riby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgnitb1587h7mjq54riby.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above output we can see the variables scoped for ‘Dev’ environment.&lt;/p&gt;

&lt;p&gt;Similarly, if you have additional stages, you can provide the environment name in the above command to retrieve the values. This is especially helpful when you want to extract the variables and store them for future reference.&lt;/p&gt;
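&lt;p&gt;The one-liner above can be extended to persist those values. As a small sketch (the output file name here is just an example), you could pipe the expanded variables to JSON:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Export the 'Dev'-scoped variables of release definition 1 to a JSON file.
# Assumes the VSTeam module is loaded and $project is set, as earlier in this post;
# the output path is only an example.
Get-VSTeamReleaseDefinition -ProjectName $project -Id 1 |
    Select-Object -ExpandProperty environments |
    Where-Object name -like 'DEV' |
    Select-Object -ExpandProperty Variables |
    ConvertTo-Json -Depth 5 |
    Out-File -FilePath .\dev-variables.json
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The same pattern works for any other stage; just change the value matched by &lt;code&gt;Where-Object&lt;/code&gt;.&lt;/p&gt;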

&lt;p&gt;This brings us to the end of this blog post. &lt;/p&gt;

&lt;p&gt;There are many other cmdlets that we can use to get a lot done with the help of this module. We can write scripts or a simple one-liner to trigger releases, create reports based on release runs, etc.&lt;/p&gt;

&lt;p&gt;I highly encourage you to go through the various other cmdlets available for use by running &lt;code&gt;Get-Command -Module VSTeam&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fur0y5prxbbgrlt2iz854.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fur0y5prxbbgrlt2iz854.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, here is the GitHub &lt;a href="https://github.com/MethodsAndPractices/vsteam" rel="noopener noreferrer"&gt;link&lt;/a&gt; for the module; it has detailed documentation on how you can contribute to it.&lt;/p&gt;

&lt;p&gt;Thanks for reading this blog post, happy learning!!!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azuredevops</category>
      <category>powershell</category>
    </item>
    <item>
      <title>SonarCloud Quality check and Pre-deployment Gates with Azure DevOps</title>
      <dc:creator>Vivekanand Rapaka</dc:creator>
      <pubDate>Sun, 01 Nov 2020 18:24:06 +0000</pubDate>
      <link>https://forem.com/vivekanandrapaka/sonar-cloud-quality-check-and-pre-deployment-gates-with-azure-devops-2pd7</link>
      <guid>https://forem.com/vivekanandrapaka/sonar-cloud-quality-check-and-pre-deployment-gates-with-azure-devops-2pd7</guid>
      <description>&lt;h4&gt;
  
  
  &lt;strong&gt;Purpose of this blog post&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In my previous blog &lt;a href="https://dev.to/vivek345388/devsecops-with-azure-devops-32ho"&gt;post&lt;/a&gt; on DevSecOps with Azure DevOps, I’ve explained about how we can integrate various security tools in Azure DevOps. SonarCloud is one of them. I’ve also mentioned that we can use SonarCloud quality gates as one of the pre-checks for deployment to any of the stages.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll see how we can add SonarCloud as one of the Pre-deployment Gate checks before we deploy it to any of the environments.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Pre-Deployment Approvals, Gates and Manual Intervention&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;There are three types of checks that can be used to control our release deployment in Azure DevOps.&lt;/p&gt;

&lt;p&gt;Lets quickly recap on what those are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-Deployment Approvals:&lt;/strong&gt; We add someone from our team as an approver for the release to be promoted and deployed to a specific stage. Once the deployment is approved, the release is processed. This is helpful when you would like someone from your team to review the deployment that is about to be performed. A good example: a test engineer needs to approve the deployment to Production only after all the bugs in Pre-Prod are remediated. You can add more than one approver for a release.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-deployment gates:&lt;/strong&gt; We can perform pre-checks to make sure that the defined requirements are met prior to deploying the code into your infrastructure. These are mainly used when you need to connect to external services, obtain a health signal from them, and promote the release only after the desired health check is met. Typically, gates are used in connection with incident management, problem management, change management, monitoring, and external approval systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manual Intervention:&lt;/strong&gt; Sometimes, you may need to introduce manual intervention into a release pipeline. For example, there may be tasks that cannot be accomplished automatically such as confirming network conditions are appropriate, or that specific hardware or software is in place, before you approve a deployment. You can do this by using the Manual Intervention task in your pipeline.&lt;/p&gt;

&lt;p&gt;To know more about these, follow the link below to the Microsoft documentation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/?view=azure-devops" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/?view=azure-devops&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SonarCloud Quality gate check:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We configured SonarCloud analysis in the previous blog post, and here is how it looks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdlq6yvjw6s8wxe5u3j89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdlq6yvjw6s8wxe5u3j89.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to install the SonarCloud extension from the marketplace, configure it in Azure DevOps, and add the above tasks.&lt;/p&gt;

&lt;p&gt;The ‘Publish Quality Gate Result’ task queries the SonarCloud REST API, retrieves the code analysis results, and shows them in the results section.&lt;/p&gt;
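&lt;p&gt;The quality gate status itself lives behind SonarCloud’s web API. As a rough illustration (not necessarily the exact call the task makes; the token and project key below are placeholders), you can query the status yourself:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Query SonarCloud's quality gate status for a project (illustrative sketch;
# $sonarToken and $projectKey are placeholders for your own values).
$sonarToken = 'YOUR_SONARCLOUD_TOKEN'
$projectKey = 'YOUR_PROJECT_KEY'
# SonarCloud accepts the token as the user name in HTTP basic authentication.
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($sonarToken):"))
$response = Invoke-RestMethod -Uri "https://sonarcloud.io/api/qualitygates/project_status?projectKey=$projectKey" -Headers @{ Authorization = "Basic $auth" }
# 'OK' means the quality gate passed; 'ERROR' means it failed.
$response.projectStatus.status
&lt;/code&gt;&lt;/pre&gt;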

&lt;blockquote&gt;
&lt;p&gt;To see how to add and configure SonarCloud, please follow my previous blog post &lt;a href="https://dev.to/vivek345388/devsecops-with-azure-devops-32ho"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once the build is triggered and runs successfully, it shows the results in the extensions tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5kn8eiq0u672jaj9rpv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5kn8eiq0u672jaj9rpv2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, to set up Pre-Deployment Gates, go to your release pipeline and then: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the Pre-deployment actions button&lt;/li&gt;
&lt;li&gt;Click on gates and enable it.&lt;/li&gt;
&lt;li&gt;Select ‘Check SonarCloud Quality Gate status’ and enable it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3j39vrzbxd4iua5wtjg6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3j39vrzbxd4iua5wtjg6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is how it looks once it’s enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjmjtaiva6nxylkbbo98i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjmjtaiva6nxylkbbo98i.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go ahead and create a new release, and you can observe that it’s executing the Pre-Deployment gates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F43olx51aodwsrh991pfa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F43olx51aodwsrh991pfa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s going to wait for the default delay of 5 minutes before evaluating the gates. You can change the default evaluation timeout in the ‘Evaluation options’ part of the Pre-deployment conditions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsj2lt87r2rse9v05ukrr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsj2lt87r2rse9v05ukrr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it’s done, it will start deploying to your stage.&lt;br&gt;
With Pre-Deployment gates, you not only have the option to check the SonarCloud quality gate status, but you can also choose other checks, such as verifying that your Azure Policies are met and compliant, invoking a custom Azure Function you can code, checking for alerts, etc. You can see the other options in the below screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8fez9v5s7hium45arzmz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8fez9v5s7hium45arzmz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, we have learnt what Pre-deployment gates are and how to integrate SonarCloud quality checks into your release pipeline.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed reading this post.&lt;/p&gt;

&lt;p&gt;Happy Learning!!&lt;/p&gt;

</description>
      <category>sonarcloud</category>
      <category>azuredevops</category>
      <category>azure</category>
    </item>
    <item>
      <title>DevSecOps with Azure DevOps</title>
      <dc:creator>Vivekanand Rapaka</dc:creator>
      <pubDate>Wed, 28 Oct 2020 19:07:26 +0000</pubDate>
      <link>https://forem.com/vivekanandrapaka/devsecops-with-azure-devops-32ho</link>
      <guid>https://forem.com/vivekanandrapaka/devsecops-with-azure-devops-32ho</guid>
      <description>&lt;h4&gt;
  
  
  Purpose of this post
&lt;/h4&gt;

&lt;p&gt;The purpose of this blog post is to give you a high-level overview of what DevSecOps is, and to show how security can be integrated into your Azure DevOps pipeline with the help of readily available tasks for some commonly used security scanning tools in build and release pipelines.&lt;/p&gt;

&lt;h4&gt;
  
  
  Continuous Integration, Deployment and Delivery
&lt;/h4&gt;

&lt;p&gt;If you are reading this article, I’m assuming that you have encountered the terms CI and CD by now and have a fair understanding of them.&lt;/p&gt;

&lt;p&gt;Let’s recap on what we mean by each of these terms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous integration&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Continuous integration is the process of automating the build and testing the quality of the code when someone in the team commits code to your source control. This ensures that a particular set of unit tests is run and the build compiles successfully, without any issues. If the build fails, the person committing the code is notified to fix the issues encountered. This is one of the software engineering practices where feedback on newly developed code is provided to developers immediately, through different types of tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous Delivery Vs Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Both Continuous Delivery and Deployment are interesting terms. Continuous Delivery is the ability, once CI is done and the change is integrated into your source code, to deploy the code automatically and seamlessly through the various stages of the pipeline, making sure the code is always production ready. However, in continuous delivery the code is not deployed to production automatically; a manual intervention is required.&lt;/p&gt;

&lt;p&gt;Whereas in continuous deployment, every build or change that is integrated and passes all the quality checks and deployment gates is deployed from the lower environments through to Production automatically, without any human intervention.&lt;/p&gt;

&lt;p&gt;CI &amp;amp; CD helps you deliver the code faster, great!!! &lt;/p&gt;

&lt;p&gt;but how about security?&lt;/p&gt;

&lt;p&gt;DevSecOps is no longer a buzzword (or maybe it still is), but a lot of organizations are shifting gears towards including security in their software development lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is DevSecOps?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Security needs to shift from an afterthought to being evaluated at every step of the process. Securing applications is a continuous process that encompasses secure infrastructure, designing an architecture with layered security, continuous security validation, and monitoring for attacks.&lt;/p&gt;

&lt;p&gt;In simple terms, the key focus of DevSecOps is making sure that the product you are developing is secure right from the time you start coding it, and that security best practices are followed at every stage of your pipeline as an ongoing practice. In other words, security should be treated as a key element from the initial phase of the development cycle, rather than looking at security aspects only at the end, at product sign-off/deployment. This is also called the ‘shift-left’ approach to security: it’s about injecting security into your pipeline at each stage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can we achieve security at various stages of the pipeline?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are multiple stages involved in getting your code deployed to your servers or cloud-hosted solutions, right from developers writing the code through to deployment.&lt;/p&gt;

&lt;p&gt;Let’s now see a few of them and how we can integrate security into our pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-commit Hooks/IDE Plugins:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pre-commit hooks/IDE plugins are usually used to find and remediate issues in the code quickly, even before a developer commits the code to the remote repository. Some of the common issues that can be found or eliminated are credentials exposed in code, such as SQL connection strings, AWS secret keys, Azure storage account keys, API keys, etc. Finding these in the early stage of the development cycle helps prevent accidental damage.&lt;br&gt;
There are multiple tools/plugins available that can be integrated into a developer’s IDE. A developer can still get around these and commit the code bypassing the pre-commit hooks; they are just the first line of defense, not a full-fledged solution for identifying major security vulnerabilities. Some pre-commit hook tools include Git-Secret and Talisman. Some IDE plugins include .NET Security Guard, 42Crunch, etc. You can find more about other tools here: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://owasp.org/www-community/Source_Code_Analysis_Tools" rel="noopener noreferrer"&gt;https://owasp.org/www-community/Source_Code_Analysis_Tools&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secrets Management:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using secrets management across your entire code base is one of the best practices. You may already have a secrets management tool, such as Azure Key Vault, AWS Secrets Manager, or HashiCorp Vault, built into your pipeline for accessing secure credentials. The same secrets management should be used by your entire code base, not just the DevOps pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Software Composition Analysis:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the name indicates, SCA is all about analyzing the software/code to determine the vulnerable open-source components and third-party libraries that your code depends on. In the majority of software projects, only a small portion of the code is written in-house, and the rest is imported from or depends on external libraries. &lt;/p&gt;

&lt;p&gt;SCA focuses not only on determining the vulnerable open-source components, but also shows whether any outdated components are present in your repo &amp;amp; highlights issues with open-source licensing. WhiteSource Bolt is a lightweight tool that scans the code, integrates with Azure DevOps, and shares the vulnerabilities and fixes in a report.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SAST (Static analysis security testing):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While SCA is focused on determining issues related to the open-source/third-party components used in our code, it doesn’t actually analyze the code that is written by us.&lt;br&gt;
That is done by SAST. Some common issues that can be found are SQL injection, cross-site scripting, insecure libraries, etc. Using these tools requires collaboration with security personnel, as the initial reports they generate can be quite intimidating and may contain false positives. Checkmarx is one of the SAST tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DAST (Dynamic Analysis Security Testing):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The key difference between SAST and DAST is that while SAST can find vulnerabilities in the code and the third-party libraries it uses, it doesn’t actually scan the deployed site itself. Some vulnerabilities can’t be determined until the application is deployed into one of the lower environments, such as Pre-Prod, where DAST scans a target site URL. You can run DAST in a passive or aggressive mode; while a passive test runs fairly quickly, aggressive tests take more time.&lt;/p&gt;

&lt;p&gt;In general, a manual pen test/DAST can take a long time. It is done by a pen tester from the security team, and a manual test can’t be done every time you check in or deploy the code, as pen testing itself takes a significant amount of time. &lt;/p&gt;

&lt;p&gt;I have worked on cloud migrations to Azure &amp;amp; AWS, and we would usually raise a request for a DAST/pen test to the security team at the last leg of the migration lifecycle and get a sign-off after all the identified vulnerabilities were fixed. The security team usually takes a week, sometimes more, to complete the report; they run scripts, test data, and try everything to break the application and see if what we migrated is secure enough. Once the vulnerability report is out, we look at the critical &amp;amp; high issues reported and start working on fixing them. Most of the time, the delivery timelines got extended based on the amount of work we had to do to remediate the issues raised. With DAST testing using the ZAP Auto Scanner task in Azure DevOps, we can identify and fix the issues before they become a bottleneck later.&lt;/p&gt;

&lt;p&gt;And security doesn’t just mean DAST/pen testing or code quality alone; the infrastructure you deploy should also be secure. With your environment deployed on Azure, you have Azure Policies/Initiatives that help you govern and put guard rails around your infrastructure by auditing &amp;amp; enforcing the rules you specify. You can enforce policies to make sure that your infrastructure meets your desired state. For example, using Azure Policies, you can enforce the use of managed Azure disks only, ensure storage accounts are not publicly accessible, ensure subnets in a particular VNet don’t allow inbound internet traffic, ensure SQL Server firewalls don’t allow internet traffic, etc. These are just a few of the tasks you can achieve using Azure Policies. We will take a look at how Azure Policies work in another blog post; enabling effective monitoring and alerting is another key aspect.&lt;/p&gt;
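&lt;p&gt;For a feel of what such a rule looks like, here is a rough sketch of the policy rule section of a definition that denies storage accounts allowing public blob access (Azure also ships built-in policies for this; the alias below is one I believe exists, so treat this as illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Storage/storageAccounts" },
      { "field": "Microsoft.Storage/storageAccounts/allowBlobPublicAccess", "equals": "true" }
    ]
  },
  "then": { "effect": "deny" }
}
&lt;/code&gt;&lt;/pre&gt;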

&lt;p&gt;Azure DevOps supports integration of multiple open source and licensed tools for scanning your application as a part of your CI &amp;amp; CD process.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll see how to achieve security in our Azure DevOps pipeline using following tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; WhiteSource Bolt extension for SCA vulnerability scanning&lt;/li&gt;
&lt;li&gt; SonarCloud for code quality testing&lt;/li&gt;
&lt;li&gt; OWASP ZAP Scanner for passive DAST testing&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;&lt;strong&gt;1. WhiteSource Bolt:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Integrating WhiteSource Bolt into your pipeline is pretty straightforward. In this blog post, I’m going to use a repo from my previous blog posts.&lt;/p&gt;

&lt;p&gt;If you would like to follow along, feel free to clone/import it to your Azure DevOps repo and steps are in the previous blog &lt;a href="https://dev.to/vivek345388/terraform-and-azure-devops-120p"&gt;post&lt;/a&gt; too.&lt;/p&gt;

&lt;p&gt;To install WhiteSource Bolt in your Azure DevOps pipeline, search for “WhiteSource Bolt” from Marketplace and install it. You’ll go through a series of steps to get it installed in your organization.&lt;/p&gt;

&lt;p&gt;It’s all straight forward.&lt;/p&gt;

&lt;p&gt;I’m jumping straight ahead to the build pipelines, in which we are going to integrate WhiteSource Bolt.&lt;/p&gt;

&lt;p&gt;Log in to Azure DevOps, click Pipelines -&amp;gt; Build Pipelines, and edit your build pipeline. You can import the complete project and pipelines from my Git repo; the steps are mentioned in my previous blog post, so please refer to the link above.&lt;/p&gt;

&lt;p&gt;Once in the build pipeline, to add the tasks, click on the “+” icon and search for “WhiteSource Bolt” in the Marketplace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3vqb69js2padfyqz4k4e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3vqb69js2padfyqz4k4e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Back in your build pipeline, click “+” and add “WhiteSource Bolt” task&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2czq13wyszb5pgdxxx7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2czq13wyszb5pgdxxx7e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave the default settings; by default, it scans your root directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp546prwedjdbuag28196.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp546prwedjdbuag28196.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save and kick-off a new build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftngkwj5vwx8h6pbs7ytd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftngkwj5vwx8h6pbs7ytd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In your build pipeline, you can see the logs of the task&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3zpiqhtv0syen74iciar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3zpiqhtv0syen74iciar.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In your build pipeline section, you will see that you have a new section for WhiteSource Bolt, you can click on this to view the results after the build pipeline completes the build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgtqfrcl9m1hv85ngwvwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgtqfrcl9m1hv85ngwvwb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also see the results in the build pipeline results and the report tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flb48vlh9av12qs1j0gv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flb48vlh9av12qs1j0gv6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that it not only shows the vulnerabilities, but also shows the fixes for each of them. Note that this has only scanned the third party libraries and open source components in the code but not the deployed code on the target infrastructure.&lt;/p&gt;

&lt;p&gt;This can be achieved via DAST testing in release pipeline using ZAP Auto Scanner. We’ll see that as well in this blog post.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. SonarCloud:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, let us see how to integrate SonarCloud into the Azure DevOps pipeline. Prior to adding the tasks in Azure DevOps, we need to import our Azure DevOps project into SonarCloud.&lt;/p&gt;

&lt;p&gt;You need a SonarCloud account to integrate it into the pipeline. Log in to &lt;a href="https://sonarcloud.io/" rel="noopener noreferrer"&gt;https://sonarcloud.io/&lt;/a&gt; with your Azure DevOps account and choose your organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fi06wgvkepwcoxoz92cge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fi06wgvkepwcoxoz92cge.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select import projects from Azure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1bmrnwcfx2264gbvmkzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1bmrnwcfx2264gbvmkzc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a personal access token in Azure DevOps, copy the token, and paste it somewhere; we’ll need it later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbho6plf2tig903cmb62e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbho6plf2tig903cmb62e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Back in Sonarcloud site, provide the personal access token to import the projects, choose defaults to continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fu89p9rmx62u94tevi2v8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fu89p9rmx62u94tevi2v8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Generate a token in SonarCloud that will be used in Azure DevOps. Once logged in to SonarCloud, go to My Account &amp;gt; Security &amp;gt; Generate Tokens, then copy the token and paste it somewhere; we’ll need it later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgikf3wd5m8f8sprsg7dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgikf3wd5m8f8sprsg7dg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the application project, then click on ‘Administration’ -&amp;gt; ‘Update Key’ to find the key for the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F56mag8vwuv17j5u3nb76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F56mag8vwuv17j5u3nb76.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, back in Azure DevOps, we need to add the SonarCloud tasks. Go to the build pipeline and install the SonarCloud extension from the marketplace. Just like WhiteSource Bolt, search for SonarCloud and install it in your Azure DevOps organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxto3h87ocjbnpfxnzrvy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxto3h87ocjbnpfxnzrvy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unlike WhiteSource Bolt, we need to add three tasks to analyze the code with SonarCloud.&lt;/p&gt;

&lt;p&gt;Note that the project I’m analyzing is a .NET Core project, but the process of adding these steps doesn’t vary much for other technologies.&lt;/p&gt;

&lt;p&gt;Add the ‘Prepare analysis on SonarCloud’ task before the Build task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ft1f9o3ekwpfxearw3zkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ft1f9o3ekwpfxearw3zkf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide the following details for the task:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SonarCloud service endpoint: create a new service endpoint by clicking ‘New’, paste the SonarCloud token generated earlier, provide a service connection name, then save and verify.&lt;/li&gt;
&lt;li&gt;Select the organization.&lt;/li&gt;
&lt;li&gt;Add the project key obtained from SonarCloud earlier.&lt;/li&gt;
&lt;/ol&gt;
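&lt;p&gt;For reference, the same configuration can also be expressed in a YAML pipeline using the SonarCloud extension’s tasks. This is only a minimal sketch: the service connection name, organization, and project key below are placeholders you should replace with your own values.&lt;/p&gt;

```yaml
# Prepare the SonarCloud analysis before the build step.
# 'MySonarCloudConnection', 'my-org' and 'my-project-key' are placeholders.
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: 'MySonarCloudConnection'  # service connection from step 1
    organization: 'my-org'                # SonarCloud organization (step 2)
    scannerMode: 'MSBuild'                # suits a .NET Core build
    projectKey: 'my-project-key'          # project key from SonarCloud (step 3)
    projectName: 'My Project'
```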

&lt;p&gt;The screenshot below shows how to add a new service connection after clicking ‘New’ in step 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2ru369kxa5d4kpaylmxj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2ru369kxa5d4kpaylmxj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the ‘Run Code Analysis’ and ‘Publish Quality Gate Result’ tasks, then save the pipeline and queue a build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcgb6wvi9fty3a2ru04rc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcgb6wvi9fty3a2ru04rc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ‘Publish Quality Gate Result’ task is optional, but it publishes the report link and the quality gate status to the pipeline summary.&lt;/p&gt;
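&lt;p&gt;In YAML form, these two remaining tasks are a short sketch like the following; the polling timeout value is just an assumed example.&lt;/p&gt;

```yaml
# Run the analysis after the build, then publish the quality gate result.
- task: SonarCloudAnalyze@1

- task: SonarCloudPublish@1
  inputs:
    pollingTimeoutSec: '300'  # how long to wait for the quality gate result
```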

&lt;p&gt;Save and queue a build. Once it runs, you should see logs like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8tixwczkzp2leoth8kew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8tixwczkzp2leoth8kew.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the build summary, under the Extensions tab, you can see a link to view the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnsrq4tu6sxongh42a80h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnsrq4tu6sxongh42a80h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above screen, the quality gate status shows as ‘None’. The reason is that in SonarCloud, the initial quality gate status shows as “Not computed” for the project we imported.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsjmcxuipkw8b5sh4d77t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsjmcxuipkw8b5sh4d77t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To fix this, under the Administration tab, choose "Previous Version" and notice the message that changes will take effect after the next analysis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7q7sby7kztsgyz55rgnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7q7sby7kztsgyz55rgnb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, the status in the overview shows “Next scan will generate a Quality Gate”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx972aylv3uoo9r63gqo4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx972aylv3uoo9r63gqo4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Back in Azure DevOps, trigger another build and wait for it to complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyvo5atojkyabm849vmju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyvo5atojkyabm849vmju.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, under the Extensions tab of the build summary, you should see the result status along with a link to view the bugs, vulnerabilities, and more. Click on the "Detailed SonarCloud Report" to view the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc0nbiwomn1vbll2cat8e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc0nbiwomn1vbll2cat8e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fomk2u9is1hehkgdznrqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fomk2u9is1hehkgdznrqs.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr08lmf2rnu6xr8n3hode.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr08lmf2rnu6xr8n3hode.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The beauty of SonarCloud is that you can integrate it into your branch policies for any new pull requests, and also use it as one of the deployment gates for deploying bug-free code to your environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. ZAP Auto Scanner:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One tool to consider for penetration testing is OWASP ZAP. OWASP is a worldwide not-for-profit organization dedicated to helping improve the quality of software. ZAP is a free penetration testing tool suitable for everyone from beginners to professionals. ZAP includes an API and a weekly Docker container image that can be integrated into your deployment process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Definition credits: owasp.org &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With the ZAP scanner, you can run either a passive or an active test. During a passive test, the target site is not manipulated to expose additional vulnerabilities. Passive tests usually run quickly and are a good candidate for the CI process. An active scan, by contrast, simulates many of the techniques that hackers commonly use to attack websites.&lt;/p&gt;

&lt;p&gt;In your release pipeline, click on ‘Add’ to add a new stage after the PreProd stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxflu6x5o0kerjwwhn1xp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxflu6x5o0kerjwwhn1xp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new stage with ‘Empty Job’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fedbhpjw9wh3mmr2torx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fedbhpjw9wh3mmr2torx6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rename it to DAST Testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffnn6g5wgxke14howwggx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffnn6g5wgxke14howwggx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on ‘Add tasks’ and get the ‘ZAP Auto Scanner’ task from the marketplace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjqrozr4537nj0h7sim57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjqrozr4537nj0h7sim57.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once done, add the following tasks one after the other.&lt;/p&gt;

&lt;p&gt;OWASP Zap Scanner:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Leave ‘Aggressive mode’ unchecked.&lt;/li&gt;
&lt;li&gt;Set the failure threshold to 1500 or greater. This ensures the test doesn’t fail if your site reports a larger number of alerts; the default is 50.&lt;/li&gt;
&lt;li&gt;Root URL to begin crawling: provide the URL that the scan should run against.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Word of Caution:&lt;/strong&gt; Don't provide any site URL in the above step that you don't own. Crawling sites that you don't own is considered hacking.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;4. Port: the default is 80. If your site runs on a secure port, provide 443; otherwise, you can leave it at port 80.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9n4vdjl7tt93exgi7u9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9n4vdjl7tt93exgi7u9z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
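&lt;p&gt;As a rough YAML sketch, the scanner step above might look like the following. The task name and input names here follow the CSE-DevOps zap-scanner extension and may differ between versions, so verify them against the marketplace description linked below; the URL is a placeholder for a site you own.&lt;/p&gt;

```yaml
# DAST stage sketch using the ZAP Auto Scanner marketplace task.
# Input names are taken from the CSE-DevOps zap-scanner extension and may
# vary by version -- check the extension's marketplace description.
- task: owaspzap@1
  inputs:
    aggressivemode: false   # leave aggressive mode unchecked
    threshold: '1500'       # failure threshold (the task default is 50)
    url: 'https://my-preprod-site.example.com'  # placeholder: a site you own
    port: '443'             # 443 for HTTPS, otherwise 80
```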

&lt;p&gt;NUnit template task: this installs a template that the ZAP scanner uses to produce a report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj7o41fdc4rvfwmnweky7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj7o41fdc4rvfwmnweky7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The inline script used here is available in the tool’s description on the Azure Marketplace:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=CSE-DevOps.zap-scanner" rel="noopener noreferrer"&gt;https://marketplace.visualstudio.com/items?itemName=CSE-DevOps.zap-scanner&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Generate NUnit type file task: this publishes the test results in XML format to the owaspzap directory under the default working directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1u9dfpdagkr5rylk2xrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1u9dfpdagkr5rylk2xrg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Publish Test Results task: this publishes the test results produced by the previous task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5yyt142nej9ulkwkpx4r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5yyt142nej9ulkwkpx4r.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
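&lt;p&gt;This last step uses the built-in Publish Test Results task. A minimal YAML sketch follows; the results file name is hypothetical and should match whatever the NUnit conversion step actually wrote.&lt;/p&gt;

```yaml
# Publish the converted NUnit-format ZAP results to the release summary.
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'NUnit'
    # hypothetical file name -- use whatever the conversion step produced
    testResultsFiles: 'owaspzap/test-results.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)'
```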

&lt;blockquote&gt;
&lt;p&gt;Make sure that you select ‘ubuntu-18.04’ as the agent pool.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcqdcz28ru81vmnr0wj72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcqdcz28ru81vmnr0wj72.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once everything is done, kick off a release. Make sure that the PreProd stage is deployed and the environment is ready before running the DAST Testing stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv0am5aez49og53um3wj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv0am5aez49og53um3wj7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the release is complete, you should be able to see the results in the Tests tab of the release you created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fefqcaexmy42tehjz9fe7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fefqcaexmy42tehjz9fe7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this, we have seen how to integrate security testing using WhiteSource Bolt, SonarCloud, and the OWASP ZAP scanner at various stages of the build and release pipelines.&lt;/p&gt;

&lt;p&gt;This brings us to the end of this blog post.&lt;/p&gt;

&lt;p&gt;Just like DevOps, DevSecOps requires a cultural shift: it takes collaboration from all departments of an organization to achieve security at every level.&lt;/p&gt;

&lt;p&gt;Hope you enjoyed reading it.  Happy Learning!!&lt;/p&gt;

&lt;p&gt;A couple of references I used while writing this blog post:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=DzX9Vi_UQ8o&amp;amp;t=724s" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=DzX9Vi_UQ8o&amp;amp;t=724s&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/devops/migrate/security-validation-cicd-pipeline?view=azure-devops" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/azure/devops/migrate/security-validation-cicd-pipeline?view=azure-devops&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devsecops</category>
      <category>azuredevops</category>
      <category>security</category>
    </item>
    <item>
      <title>Terraform and Azure DevOps</title>
      <dc:creator>Vivekanand Rapaka</dc:creator>
      <pubDate>Tue, 20 Oct 2020 13:14:26 +0000</pubDate>
      <link>https://forem.com/vivekanandrapaka/terraform-and-azure-devops-120p</link>
      <guid>https://forem.com/vivekanandrapaka/terraform-and-azure-devops-120p</guid>
      <description>&lt;h4&gt;
  
  
  Purpose of this article
&lt;/h4&gt;

&lt;p&gt;The main purpose of this article is to show you how to deploy your infrastructure with Terraform on Azure DevOps and deploy a sample application to multiple environments.&lt;/p&gt;

&lt;p&gt;I've been working with Terraform for a while now, and as part of my learning process, I thought I should write a blog post showing how to work with Terraform on Azure DevOps and deploy an application to multiple environments.&lt;/p&gt;

&lt;p&gt;In this post, we'll spin up our infrastructure on Azure by setting up the build &amp;amp; release pipelines, and we'll also take a look at what each of the tasks in those pipelines does.&lt;/p&gt;

&lt;h4&gt;
  
  
  Things you need to follow along
&lt;/h4&gt;

&lt;p&gt;If you would like to follow along on your own, you will need the following prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Subscription&lt;/li&gt;
&lt;li&gt;Azure DevOps Account&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Assumptions
&lt;/h4&gt;

&lt;p&gt;This blog assumes that you have a fair understanding of Azure, Azure DevOps &amp;amp; Terraform. Initially, we'll go through the required setup, and then I'll discuss each of the pipeline steps in detail.&lt;/p&gt;

&lt;p&gt;OK, let's dive right in.&lt;/p&gt;

&lt;p&gt;As you may already know, Terraform is one of the infrastructure-as-code tools that let us deploy landing zones in cloud environments like Azure, AWS, GCP, and so on.&lt;/p&gt;

&lt;p&gt;Terraform is considered one of the key tools in the DevOps toolset.&lt;/p&gt;

&lt;p&gt;So, we’ll take a look at how we can deploy our landing zone to different environments using Azure DevOps and deploy a sample application to it.&lt;/p&gt;

&lt;p&gt;I’ve taken Microsoft’s demo application PartsUnlimited and added my Terraform code to it.&lt;/p&gt;

&lt;p&gt;The repo also contains the build and release pipeline JSON files, which you can import to follow along and replicate the setup in your own subscription.&lt;/p&gt;

&lt;p&gt;Here are the steps that we’ll perform as part of this implementation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import the code from my &lt;a href="https://github.com/vivek345388/PartsUnlimited.git" rel="noopener noreferrer"&gt;github&lt;/a&gt; repo to Azure DevOps&lt;/li&gt;
&lt;li&gt;Set up the build pipeline&lt;/li&gt;
&lt;li&gt;Set up the release pipeline&lt;/li&gt;
&lt;li&gt;Access the application in Dev&lt;/li&gt;
&lt;li&gt;Deploy the application to PreProd and Prod&lt;/li&gt;
&lt;li&gt;Walk through the Terraform code and the tasks in the build &amp;amp; release pipelines&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Code import from GitHub &amp;amp; Project Setup
&lt;/h4&gt;

&lt;p&gt;Login to &lt;a href="https://dev.azure.com/" rel="noopener noreferrer"&gt;Azure DevOps&lt;/a&gt; and create a new project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fascjawdfmazuocgljdgj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fascjawdfmazuocgljdgj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on ‘Repos’ -&amp;gt; ‘Files’ and import the code by choosing the third option, ‘Import’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdx5txtzwd55p7elj0h51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdx5txtzwd55p7elj0h51.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4djoxrbgrkbk93jk3j1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4djoxrbgrkbk93jk3j1z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy and paste the following URL into ‘Clone URL’: &lt;a href="https://github.com/vivek345388/PartsUnlimited.git" rel="noopener noreferrer"&gt;https://github.com/vivek345388/PartsUnlimited.git&lt;/a&gt;, and click on ‘Import’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcasxir5y48du6gh8cumy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcasxir5y48du6gh8cumy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it’s done, you will see that the code has been imported and you will be able to browse the repo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1mmv0345p0wi0qeph1bk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1mmv0345p0wi0qeph1bk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the repo above:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Infra.Setup folder contains the Terraform files that we will use to deploy our infrastructure.&lt;/li&gt;
&lt;li&gt;The Pipeline.Setup folder contains the build &amp;amp; release pipeline JSON files. Download both JSON files (Build Pipeline &amp;amp; Release Pipeline) to your local folder.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8yydx32qwrk98ik3b7il.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8yydx32qwrk98ik3b7il.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Repeat the same step to download the release pipeline JSON file (ReleasePipeline -&amp;gt; PartsUnlimitedE2E_Release.json) to your local folder.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Build Pipeline Setup
&lt;/h4&gt;

&lt;p&gt;Now, let’s set up the build pipeline. Click on ‘Pipelines’ -&amp;gt; ‘Pipelines’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcrldnokfw55ie3w3nprq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcrldnokfw55ie3w3nprq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on ‘Import Pipeline’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F46ne72fwxxqelej257i9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F46ne72fwxxqelej257i9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on ‘Browse’ and select the downloaded build JSON file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm7k3g5hn3y1vpa9chshx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm7k3g5hn3y1vpa9chshx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9f1udiy27ki8f9gk3w4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9f1udiy27ki8f9gk3w4x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the import is successful, you will see the screen below indicating that some settings need attention.&lt;/p&gt;

&lt;p&gt;For the agent pool, choose ‘Azure Pipelines’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6kbrnnurqfhe2kpwp5th.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6kbrnnurqfhe2kpwp5th.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the agent specification, choose ‘vs2017-win2016’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc83o3mby8u8b7rn4py5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc83o3mby8u8b7rn4py5g.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on ‘Save &amp;amp; queue’ to queue a new build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv3n07fsm98vt9hg70c0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv3n07fsm98vt9hg70c0f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the defaults and click on ‘Save and run’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fayhh7c7lq5dgel44d3h5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fayhh7c7lq5dgel44d3h5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it’s complete, you should be able to see the pipeline run and its results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftn2q63b2numzhwadno2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftn2q63b2numzhwadno2i.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also see the published artifacts in the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fq3vaa49rk518t913u8ux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fq3vaa49rk518t913u8ux.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This completes the build pipeline setup. Let’s also configure the release pipeline.&lt;/p&gt;

&lt;h4&gt;
  
  
  Release Pipeline Configuration
&lt;/h4&gt;

&lt;p&gt;Click on ‘Releases’ and then on ‘New pipeline’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp64y7hjxsk11rzww8at9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp64y7hjxsk11rzww8at9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fejtexdfxjhv4hz4y2d9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fejtexdfxjhv4hz4y2d9a.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Quick Note: At the time of writing this article, there is no option to import an existing pipeline from the new release pipeline page when you don't have any release pipelines yet. Hence, we have to create a new empty pipeline to get to the screen where we can import the downloaded release pipeline JSON file.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Choose ‘empty job’ and click on ‘save’&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foaxc3eq5g2lcvshulfg4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foaxc3eq5g2lcvshulfg4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4ijk7extck7e8rqqjn1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4ijk7extck7e8rqqjn1q.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, come back to the releases page, click on ‘Releases’ one more time, and choose the import pipeline option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1nz0yc0kcpzy0jnlap78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1nz0yc0kcpzy0jnlap78.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the release pipeline JSON file that was downloaded at the beginning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg4ewrio1s1sf4o7a7c7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fg4ewrio1s1sf4o7a7c7h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the pipeline has been imported, it should look like the screenshot below. Click on the ‘Dev’ stage to configure its settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7bejjxuxzx69uucydc9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7bejjxuxzx69uucydc9y.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Quick note: You need to have the following tasks installed from the Azure Marketplace. If you don't have them in your subscription, please get them from the links below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens" rel="noopener noreferrer"&gt;Replace tokens&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;Click on the ‘Azure CLI’ &amp;amp; ‘App Service Deploy’ tasks and choose the subscription to authorize. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Quick Note: I’m not using service principals/connections here to keep it simple for the purpose of this blog post.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj3b60y33mtzlt2d2wpm7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj3b60y33mtzlt2d2wpm7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1p6uvc9vbs0z8tt3kcq7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F1p6uvc9vbs0z8tt3kcq7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repeat the same steps for the rest of the stages, ‘PreProd’ &amp;amp; ‘Prod’. Once you complete all the tasks that need attention, click on ‘Save’ at the top of the screen to save the pipeline.&lt;br&gt;
Here is how the pipeline should look after you complete everything.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzqv3m1bomax3hlej6zfl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzqv3m1bomax3hlej6zfl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you have saved everything, click on ‘Create release’ in the screen above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7cn447vl318qkcybx436.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7cn447vl318qkcybx436.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the ‘Logs’ option to view the logs for each of the tasks. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnpripbtzob69e6ulig7v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnpripbtzob69e6ulig7v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After successful deployment to Dev, it would look like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx2jzu8rg721kvgs95hqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx2jzu8rg721kvgs95hqv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once everything is done, you will see that the code has been deployed successfully to Dev, and you can browse the application by accessing the webapp link.&lt;/p&gt;

&lt;p&gt;Go to your Azure portal and grab your webapp link and access it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fntxs39u2lsgf5tupa7hh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fntxs39u2lsgf5tupa7hh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy6o0o0duawhkjw2d9bof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fy6o0o0duawhkjw2d9bof.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Back in your Azure DevOps release pipeline, since continuous deployment is enabled, the code is deployed to each of the environments one after the other as each deployment succeeds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9ruxmzj2us8qsxyeo81u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9ruxmzj2us8qsxyeo81u.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s take a minute to examine what each of the files in our Infra.Setup folder does. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2f2ty38men8m2x8lpxhv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F2f2ty38men8m2x8lpxhv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve used the concept of modules in Terraform to isolate each of the components we are deploying. This is similar to linked templates in ARM templates. &lt;/p&gt;

&lt;p&gt;Each folder of Terraform configuration files that we author is treated as a module.&lt;/p&gt;

&lt;p&gt;In a simple Terraform configuration with only one root module, we create a flat set of resources and use Terraform's expression syntax to describe the relationships between these resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_app_service_plan" "serviceplan" {
  name                = var.spName
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  sku {
    tier = var.spTier
    size = var.spSKU
  }
}

resource "azurerm_app_service" "webapp" {
  name                = var.webappName
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.serviceplan.id
}




In the above code block, we declare two resources, an App Service and an App Service plan, in a single file, and the App Service references the App Service plan in that same file. While this approach is fine for smaller deployments, as the infrastructure grows these files become challenging to maintain.

When we introduce module blocks, our configuration becomes hierarchical rather than flat: each module contains its own set of resources, and possibly its own child modules, which can potentially create a deep, complex tree of resource configurations.

However, in most cases Terraform strongly recommends keeping the module tree flat, with only one level of child modules, and using a technique similar to the above: expressions that describe the relationships between the modules.




```

module "appServicePlan" {
    source  = "./modules/appServicePlan"
    spName  = var.spName
    region  = var.region
    rgName  = var.rgName
    spTier  = var.spTier
    spSKU   = var.spSKU
}

module "webApp" {
    source         = "./modules/webApp"
    name           = var.webAppName
    rgName         = var.rgName
    location       = var.region
    spId           = module.appServicePlan.SPID
    appinsightskey = module.appInsights.instrumentation_key
}
```



Here you can see that both the App Service plan and the App Service are called as modules from the main.tf file.
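For the `module.appServicePlan.SPID` reference above to work, the appServicePlan module has to expose the plan's id as an output. A minimal sketch (the file name and placement are assumptions; the output name matches what main.tf consumes):

```
# modules/appServicePlan/outputs.tf (file layout assumed)
# Exposes the service plan id so main.tf can wire it into the webApp module
output "SPID" {
  value = azurerm_app_service_plan.serviceplan.id
}
```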

&amp;gt; Definition Credits: Terraform.io

#### Benefits of module-based templates

Modules, like linked templates, yield the following benefits:
1.  You can reuse the individual components for other deployments.
2.  For small to medium solutions, a single template is easier to understand and maintain, since you can see all the resources and values in a single file. For advanced scenarios, modules enable you to break down the solution into targeted components.
3.  You can easily add new resources in a new template and call them via the main template.

Following are the resources that we deployed as a part of this blog post.

1.  App service plan – To host the Webapp
2.  App Service – Webapp to host the application.
3.  Application insights – To enable monitoring.

Its hierarchy looks like this.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/nrwruxhd501rsgfy2byb.png)

In general, when we have a single file for deployments, we pass the variable values in the same file or use a .tfvars file to pass them.

&amp;gt; Variables are the same as parameters in ARM templates

In the above file structure, each individual template, for example webapp.tf, declares the variables it needs. The values have to be passed in when the module is called. Remember that each folder of Terraform files that we create is treated as a module.



```
variable "name" {}
variable "location" {}
variable "rgName" {}
variable "spId" {}
variable "appinsightskey" {}

resource "azurerm_app_service" "webApp" {
  name                = var.name
  location            = var.location
  resource_group_name = var.rgName
  app_service_plan_id = var.spId
  app_settings = {
    "APPINSIGHTS_INSTRUMENTATIONKEY" = var.appinsightskey
  }
}

resource "azurerm_app_service_slot" "webApp" {
  name                = "staging"
  app_service_name    = azurerm_app_service.webApp.name
  location            = azurerm_app_service.webApp.location
  resource_group_name = azurerm_app_service.webApp.resource_group_name
  app_service_plan_id = azurerm_app_service.webApp.app_service_plan_id

  app_settings = {
    "APPINSIGHTS_INSTRUMENTATIONKEY" = var.appinsightskey
  }

}
```
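The webApp module above also consumes `module.appInsights.instrumentation_key`. A minimal sketch of what that appInsights module might contain (the resource layout is an assumption; the variable and output names follow the ones used elsewhere in this post):

```
# modules/appInsights/main.tf (sketch; layout assumed)
variable "appInsightsname" {}
variable "AIlocation" {}
variable "rgName" {}

resource "azurerm_application_insights" "appInsights" {
  name                = var.appInsightsname
  location            = var.AIlocation
  resource_group_name = var.rgName
  application_type    = "web"
}

# Consumed by the webApp module as module.appInsights.instrumentation_key
output "instrumentation_key" {
  value = azurerm_application_insights.appInsights.instrumentation_key
}
```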


Now let's see how the values are passed and how the modules are called from the individual templates.

There are two main files that control the entire deployment.

1.  main.tf     -  Contains the code that calls all the individual resource modules.
2.  main.tfvars -  Contains the variable values consumed by the main.tf file

In the main.tf file, each of the modules is called as follows:



```
module "appServicePlan" {
    source  = "./modules/appServicePlan"
    spName  = var.spName
    region  = var.region
    rgName  = var.rgName
    spTier  = var.spTier
    spSKU   = var.spSKU
}

module "webApp" {
    source         = "./modules/webApp"
    name           = var.webAppName
    rgName         = var.rgName
    location       = var.region
    spId           = module.appServicePlan.SPID
    appinsightskey = module.appInsights.instrumentation_key
}
```



The variables consumed by main.tf are declared in the same file, in the variables section:




```
variable "region" {}
variable "rgName" {}
variable "spName" {}
variable "spTier" {}
variable "spSKU" {}
variable "webAppName" {}
variable "appInsightsname" {}
variable "AIlocation" {}
```


The values for above variables will be passed from main.tfvars file.

We use the same templates for deployment to all the environments. So how does Azure DevOps handle deployments to different environments? 

We keep placeholders of the form `#{placeholdername}#` for each of these values in our main.tfvars file.



```
region = #{region}#               
rgName = #{ResouceGroupName}#       
spName = #{spName}#                     
spSKU =  #{spSKU}#                
spTier = #{spTier}#             
webAppName = #{webAppName}#         
appInsightsname = #{appInsightsname}#   
AIlocation = #{AIlocation}#
```



When we use the same templates for deploying to multiple environments, we use the ‘Replace Tokens’ task in Azure DevOps to substitute the respective values for each environment. This lets us choose different values per environment. 

For example, the value for `#{webAppName}#` will be different per environment:

- `app-dev-webapp` for dev
- `app-ppd-webapp` for preprod
- `app-prd-webapp` for prod

While the main.tfvars file has a placeholder `#{webAppName}#` for this, we declare its per-environment values in the variables section of the release pipeline.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/le554ov6poum36z7wr2j.png)

The 'replace tokens' task has options where we declare the token prefix and suffix used by the placeholders in the files we want to modify. In the target files field, we list the files to be targeted for this replacement; here we gave `**/*.tf` and `**/*.tfvars` as the targets, as these files contain the placeholder content.
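For example, after the 'replace tokens' task runs for the Dev stage, main.tfvars might look like this (the web app name follows the per-environment naming mentioned earlier; the other values are purely illustrative):

```
region          = "eastus"
rgName          = "app-dev-rg"
spName          = "app-dev-plan"
spTier          = "Standard"
spSKU           = "S1"
webAppName      = "app-dev-webapp"
appInsightsname = "app-dev-ai"
AIlocation      = "eastus"
```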

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/zkye3l3pjt1jg0wgu7rw.png)


#### Build Pipeline

The build pipeline is mostly self-explanatory: the first couple of tasks compile the application and publish the code.

Take a look at the 'Publish Artifact: Artifacts' and 'Publish Artifact: Infra.Setup' tasks.

Publish Artifact: Artifacts: publishes the compiled code to Azure Pipelines for consumption by the release pipeline.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/0lacldmqb5102w46hzmx.png)

Publish Artifact: Infra.Setup: publishes the Terraform templates to Azure Pipelines for consumption by the release pipeline. As we don't need to compile them, we can choose them directly from the repo as the path to publish.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ux4yo5iyob2z0h7ve0bl.png)

At the end of the build pipeline, it would publish the artifacts as below:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/kmk5dxi8dorej29sg6uw.png)

These will be consumed in our release pipeline for deployment.

#### Release Pipeline

You can see that the source artifacts are from our build pipeline.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/kwql3dbchgnlvr42wip7.png)

Now let's take a look at each of the release tasks.

1.Create Resource Group and Storage Account: Creates a storage account for storing the .tfstate file, in which Terraform records the state of our deployment.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/taee0wi6sdc567ccctpb.png)

2.Obtain access key and assign to pipeline variable: Retrieves the storage account key and assigns it to an Azure Pipelines variable.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/6vf9ap663kiimmhjb3sw.png)

3.Replace tokens in `**/*.tf` and `**/*.tfvars`: 

Remember that we have kept placeholders so that values can be replaced per environment; this task is responsible for that. The values for each of the placeholders are defined in the variables section of each stage.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ep0gcpj7u2j9pavvg7rm.png)

4.Install Terraform 0.13.4: Installs Terraform on the release agent.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/znrgo65e68g6y722dl92.png)

5.Terraform: init: Initializes the Terraform configuration. We have also specified the resource group and storage account where Terraform should place the .tfstate file.
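Behind the scenes, this corresponds to an azurerm backend declared in the Terraform configuration. With a partial backend configuration, the concrete values can be supplied by the init task at runtime, which keeps the templates environment-agnostic (a sketch, not necessarily the exact setup used here):

```
terraform {
  backend "azurerm" {
    # With a partial configuration, the resource group, storage account,
    # container and state file key are supplied by the 'Terraform: init'
    # task at runtime rather than hard-coded here.
  }
}
```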

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/qc5a0d7ua279686n0b3r.png)

6.Terraform: plan: Runs the Terraform deployment in dry-run mode, showing the changes that would be applied.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/5gjddnn0rm8vfewuf9jg.png)

7.Terraform: apply -auto-approve: Applies the configuration planned in step 6, without prompting for approval.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/rf3ryfg6m4u9toek2txh.png)

8.Retrieve Terraform Outputs: This task retrieves each of the outputs produced once terraform apply is complete so that they can be consumed by the 'App Service Deploy' task. For ARM deployments we have the ARM Outputs task readily available; for Terraform, we need to write a small script to get the outputs.
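For such a script to have something to read, the configuration needs to declare outputs at the root module. A minimal sketch (the output name is hypothetical):

```
# In modules/webApp: expose the deployed app's name (output name hypothetical)
output "appServiceName" {
  value = azurerm_app_service.webApp.name
}

# In main.tf: re-export it so `terraform output` can read it at the root
output "appServiceName" {
  value = module.webApp.appServiceName
}
```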

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/yl5pixffowk28lz3m5bk.png)


9.Azure App Service Deploy: Deploys the application code into the webapp.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/92egtw93q92g31mfbfa0.png)

### Conclusion

This brings us to the end of the blog post.

Hope this helps you learn, practice and deploy your infrastructure using Terraform via Azure DevOps!!

Thanks for reading this blog post &amp;amp; happy learning!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>azuredevops</category>
    </item>
  </channel>
</rss>
