<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dylan Morley</title>
    <description>The latest articles on Forem by Dylan Morley (@dylanmorley).</description>
    <link>https://forem.com/dylanmorley</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F594261%2F0e16ab80-e56a-42c2-8237-e36560bc83a5.png</url>
      <title>Forem: Dylan Morley</title>
      <link>https://forem.com/dylanmorley</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dylanmorley"/>
    <language>en</language>
    <item>
      <title>Azure Service Bus - Replay Messages CLI</title>
      <dc:creator>Dylan Morley</dc:creator>
      <pubDate>Fri, 05 Aug 2022 14:57:00 +0000</pubDate>
      <link>https://forem.com/dylanmorley/azure-service-bus-replay-messages-cli-59a0</link>
      <guid>https://forem.com/dylanmorley/azure-service-bus-replay-messages-cli-59a0</guid>
      <description>&lt;p&gt;When working with an asynchronous messaging product such as Azure Service Bus (ASB), you're going to be working with queues and publish/subscribe scenarios. In either case, there will be times when message processing isn't successful - perhaps required data isn't available, or some dependent system is offline.&lt;/p&gt;

&lt;p&gt;When this happens, you need to handle the failures, and what you do depends on your implementation pattern. Generally, after a certain number of attempts by your compute to process the message, the transport will &lt;a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dead-letter-queues"&gt;dead letter it&lt;/a&gt;. You may be using the dead letter sub-queue of the ASB entity as your strategy for failed attempts, and messages will sit there until you do something about them.&lt;/p&gt;

&lt;p&gt;You could also be sending failures on to a centralised &lt;code&gt;errors&lt;/code&gt; entity - but whatever pattern has been chosen, you'll have some outstanding messages you need to process. At small message volumes, this is achievable with tools such as &lt;a href="https://github.com/paolosalvatori/ServiceBusExplorer"&gt;Service Bus Explorer&lt;/a&gt; and the inbuilt portal tooling, but it soon becomes problematic if you have many thousands of messages to deal with.&lt;/p&gt;

&lt;p&gt;While your systems should be self-healing as much as possible, there are times when you need to intervene and would like to have an automated way to deal with scenarios. &lt;/p&gt;

&lt;p&gt;For example, simply &lt;em&gt;always&lt;/em&gt; replaying messages the moment they dead letter isn't a good strategy - if there's a genuine system problem you're just going to create a request storm of errors. You need to replay &lt;em&gt;when you're confident that processing is going to be successful&lt;/em&gt;, and you want an automated way to do this.&lt;/p&gt;

&lt;h2&gt;
  
  
  CLI support
&lt;/h2&gt;

&lt;p&gt;There are numerous ways you can replay messages - you could write a function app, a logic app, or provision any other type of compute that allows you to execute some code. &lt;/p&gt;

&lt;p&gt;However, replaying messages felt like a very common problem where we could offer a reusable solution. Creating CLI support makes this easy for anyone to consume - as a CLI tool, it's simple to install and cross-platform, which lets people work in the way they want.&lt;/p&gt;

&lt;p&gt;We therefore created &lt;a href="https://www.nuget.org/packages/Asos.ServiceBus.MessageSiphon"&gt;Asos.ServiceBus.MessageSiphon&lt;/a&gt;, which allows you to define a configuration file that represents the message work you want to perform. &lt;/p&gt;

&lt;p&gt;In this example, we're connecting to a source namespace using a SAS key, peeking messages from a topic-subscription and cloning them into another namespace, this time connecting using RBAC.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Logging"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"LogLevel"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;    
    &lt;/span&gt;&lt;span class="nl"&gt;"ReplayMessagesJob"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"JobType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SourceToTarget"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"JobName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Clone-Message-To-Other-Namespace"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"NumberOfConcurrentProcesses"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ServiceBusDetails"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Source"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"ConnectionMode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ConnectionString"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"ConnectionString"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Endpoint=sb://namespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=access-key"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Target"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"ConnectionMode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Rbac"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"FullyQualifiedNamespace"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"namespace1.servicebus.windows.net"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"SiphonWork"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"SiphonMode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Clone"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"SourceConnectionName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Source"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"SourceEntity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"topic-entity"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"SourceSubscriptions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"test-subscription"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; 
                &lt;/span&gt;&lt;span class="nl"&gt;"SourceBatchReceiveSize"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"TargetConnectionName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Target"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"TargetEntity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"copy-of-topic"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are various examples in the README we've published with the package, but usage is always the same: define a configuration file, then execute it via the CLI after installing the tool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;siphon-asb-messages &lt;span class="nt"&gt;-n&lt;/span&gt; D:&lt;span class="se"&gt;\t&lt;/span&gt;emp&lt;span class="se"&gt;\f&lt;/span&gt;ile-with-config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;By using a configuration-file-based CLI tool, we put the power in the hands of the user and allow them to define various configurations - supporting a variety of common scenarios and ways to filter the messages, such as by age or message header.&lt;/p&gt;

&lt;p&gt;The tool can be installed on build agent pipelines and executed on a schedule, or can be installed by any engineer in their development environment. &lt;/p&gt;

&lt;p&gt;This allows you to handle requirements such as network- and RBAC-restricted namespaces. Instead of allowing engineers to connect to namespaces and manipulate data directly, you can define the work as a job that's executed from a build pipeline in a controlled way. An example Azure DevOps pipeline is included in the README.&lt;/p&gt;
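
&lt;p&gt;As a sketch of how a scheduled run might look, a minimal Azure DevOps pipeline could install the tool and execute it against a checked-in configuration file. This assumes the package installs as a dotnet global tool - the cron expression, pool and paths here are illustrative.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical scheduled pipeline - schedule, pool and paths are illustrative
schedules:
- cron: "0 6 * * *"
  displayName: Daily replay
  branches:
    include: [ main ]
  always: true

trigger: none

pool:
  vmImage: ubuntu-latest

steps:
- script: dotnet tool install --global Asos.ServiceBus.MessageSiphon
  displayName: Install the siphon CLI
- script: siphon-asb-messages -n $(Build.SourcesDirectory)/config/replay-job.json
  displayName: Run the configured message work
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;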

&lt;p&gt;Source code will be available on GitHub at &lt;a href="https://github.com/ASOS/asos-servicebus-message-siphon"&gt;https://github.com/ASOS/asos-servicebus-message-siphon&lt;/a&gt; very shortly.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>eventdriven</category>
      <category>tooling</category>
      <category>devops</category>
    </item>
    <item>
      <title>Azure Logic Apps Standard - Part 2 Build Pipelines and Provisioning</title>
      <dc:creator>Dylan Morley</dc:creator>
      <pubDate>Wed, 25 May 2022 12:23:50 +0000</pubDate>
      <link>https://forem.com/dylanmorley/azure-logic-apps-standard-part-2-build-pipelines-and-provisioning-1g80</link>
      <guid>https://forem.com/dylanmorley/azure-logic-apps-standard-part-2-build-pipelines-and-provisioning-1g80</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/dylanmorley/azure-logic-apps-standard-part-1-solution-design-56cf"&gt;Part 1&lt;/a&gt;, we looked at the requirements and the solution design - now let's think about how to build a hello world application to prove out some deployment Pipelines using Azure DevOps.&lt;/p&gt;

&lt;p&gt;I believe that getting your foundations into good shape first, then building upon them, really pays dividends in a new software project - you'll go a bit slower at first whilst you identify and fix issues, but you'll move so much faster as the delivery progresses.&lt;/p&gt;

&lt;p&gt;You achieve this by building the necessary automation and testing your build and deploy process against a few different scenarios until you're happy. &lt;strong&gt;Invest time and get this right at the beginning&lt;/strong&gt;. Don't focus on the application itself - a walking skeleton / hello-world type application will do - and instead focus on the deployment process.&lt;/p&gt;

&lt;p&gt;What we'll aim to have working as part of this is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A simple, HTTP triggered logic application workflow that returns 'Hello World'&lt;/li&gt;
&lt;li&gt;A build pipeline that produces an artifact, ready for deployment&lt;/li&gt;
&lt;li&gt;A provisioning pipeline that will use the artifact, create the infrastructure and deploy the logic application into it&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Workflow Extension Model
&lt;/h2&gt;

&lt;p&gt;Logic App Standard is built as an extension on top of the functions runtime. This means the logic app runtime and all of the built-in connector binaries need to be made available to your application, and you have a couple of choices for how to distribute them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bundled runtime
&lt;/h3&gt;

&lt;p&gt;In this mode, you provide values for the application settings &lt;code&gt;AzureFunctionsJobHost__extensionBundle__id&lt;/code&gt; and &lt;code&gt;AzureFunctionsJobHost__extensionBundle__version&lt;/code&gt;. All the required binaries are acquired by the functions host using these settings - essentially a managed process for you. This results in smaller build-time artifacts, but you &lt;strong&gt;cannot&lt;/strong&gt; use custom connectors - only the built-in connectors included in that version of the bundle are available.&lt;/p&gt;
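
&lt;p&gt;For illustration, those two settings might look like this in a settings file - the bundle id shown is the workflows extension bundle, and the version range is illustrative.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "IsEncrypted": false,
  "Values": {
    "AzureFunctionsJobHost__extensionBundle__id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
    "AzureFunctionsJobHost__extensionBundle__version": "[1.*, 2.0.0)"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;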

&lt;p&gt;In this mode, you could have a simple deployment story - it's just the JSON files that make up the workflows that would need to be deployed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Package reference mode
&lt;/h3&gt;

&lt;p&gt;In this mode, we don't provide those application settings and instead treat the logic app runtime as just another NuGet package. This is available as the package &lt;a href="https://www.nuget.org/packages/Microsoft.Azure.Workflows.WebJobs.Extension/"&gt;Microsoft.Azure.Workflows.WebJobs.Extension&lt;/a&gt;, which contains the logic app capabilities and the built-in connectors. We provide a package reference for the version of the runtime we want to consume, plus any other packages we want in our solution. This is a more traditional model - we package up everything needed to run the application and deploy it.&lt;/p&gt;

&lt;p&gt;If you want to use custom connectors, you &lt;em&gt;must&lt;/em&gt; use package reference mode - and as that's a requirement for us, this choice is made for us. &lt;/p&gt;
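
&lt;p&gt;As a sketch, the runtime then lives in the csproj like any other dependency - the version number here is illustrative, so take the current one from the NuGet listing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;ItemGroup&gt;
  &lt;!-- Version is illustrative - use the latest from nuget.org --&gt;
  &lt;PackageReference Include="Microsoft.Azure.Workflows.WebJobs.Extension" Version="1.2.0" /&gt;
&lt;/ItemGroup&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;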

&lt;h2&gt;
  
  
  Built in and Custom Connectors
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why built in connectors?
&lt;/h3&gt;

&lt;p&gt;Connectors are either built-in or managed, but what's the difference? When you choose a managed connector, you're going out of process and making a call into a Microsoft-provided connector. This runs in its own infrastructure, and may come with certain limitations such as a maximum payload size or number of calls per minute. When working in the consumption model, managed API connections can be a good choice and enable powerful integrations with little effort, but the limitations may impact your ability to perform particularly large workloads or achieve the performance profile you're aiming for.&lt;/p&gt;

&lt;p&gt;With single-tenant logic apps, we can provision our own plans to run our applications, and built-in connectors run in-process on that infrastructure. Therefore, we don't have any rate limits enforced - the only limit is the infrastructure itself. Keeping the call in-process also provides performance benefits not previously available to us.&lt;/p&gt;

&lt;p&gt;While there are a number of built-in connectors (and more on the way), when something isn't currently available, logic apps has an extensibility model that allows you to design your own functionality and make it available via a NuGet package. ASOS open sourced the &lt;a href="https://github.com/ASOS/asos-logicapps-cosmosconnector"&gt;Asos Cosmos Connector&lt;/a&gt;, which shows how you might go about building a connector.&lt;/p&gt;

&lt;p&gt;The output of that is a NuGet package, which we'll need to include in our deployments - that's easy enough, as it's just a csproj package reference. However, in order to get a local design-time experience, the package also needs to be installed locally in your logic app runtime directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Connectors - local experience
&lt;/h3&gt;

&lt;p&gt;When working with a custom connector locally, you'll be using the VS Code IDE to view the workflows at design time, so any custom connectors you create need to be available to the IDE so it understands how to display them and can render the relevant UI for particular actions based on the connector specification. The current version of the extension bundle is installed on your local machine in your user profile, in a sub-directory of azure-functions-core-tools - this contains all the built-in connector code, distributed as DLLs alongside the extension code.&lt;/p&gt;

&lt;p&gt;Therefore, if we want our custom connectors to also be available, we need to register each connector so it's picked up by the extension. If you've packaged up your connector as a NuGet package, you can use this to make it available on the local file system so that VS Code can display the relevant information.&lt;/p&gt;

&lt;p&gt;Here, we demonstrate how to follow a convention-based pattern to distribute and register our custom connectors using a PowerShell script. The script expects the libraries to follow a certain pattern and to be available in a NuGet feed. This allows us to extract the package locally and install the contents into the extensions directory. It's available as a gist - &lt;a href="https://gist.github.com/dylan-asos/309f7dd3a59c43dab60fd645293aad3e"&gt;https://gist.github.com/dylan-asos/309f7dd3a59c43dab60fd645293aad3e&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At runtime, the application also needs access to the package contents in order to execute and display the workflow details in the portal - this is a simple process, achieved by packaging all binaries and deploying via a zip process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other local considerations
&lt;/h3&gt;

&lt;p&gt;When working as part of a team, the barrier to entry should be as low as possible when someone comes to work on a repository. It should be simple to get up and running - everything needed to get the application working should be understood. A new engineer should be able to clone, build, test and run with minimal effort - how simple this is can be a key indicator of the health of the solution.&lt;/p&gt;

&lt;p&gt;This goes for any local settings files as well - it should be easy to get local settings populated, and this might include sensitive values such as connection strings. Including a script with your project that an engineer can run to fully populate their local settings file will save you time and make for a more secure solution, so we'll demonstrate how you can do this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Pipelines
&lt;/h2&gt;

&lt;p&gt;Since we're building a project that consumes NuGet packages, we just need to produce a zip file that we'll deploy to the logic app we provision. The order is:&lt;/p&gt;

&lt;p&gt;1) Get Sources&lt;br&gt;
2) Perform versioning&lt;br&gt;
3) Build projects&lt;br&gt;
4) Run all non-integration tests&lt;br&gt;
5) Publish test results&lt;br&gt;
6) Perform Packaging&lt;br&gt;
7) Publish Artifact&lt;/p&gt;
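
&lt;p&gt;The steps above might translate into pipeline YAML along these lines - task versions, filters and paths are illustrative rather than our actual pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
- checkout: self                          # 1) Get sources
- script: ./build/set-version.sh          # 2) Perform versioning (illustrative script)
- task: DotNetCoreCLI@2                   # 3) Build projects
  inputs: { command: build, projects: '**/*.csproj' }
- task: DotNetCoreCLI@2                   # 4) Run all non-integration tests
  inputs: { command: test, arguments: '--filter Category!=Integration' }
- task: PublishTestResults@2              # 5) Publish test results
- task: ArchiveFiles@2                    # 6) Perform packaging
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)/publish'
    archiveFile: '$(Build.ArtifactStagingDirectory)/logicapp.zip'
- publish: '$(Build.ArtifactStagingDirectory)/logicapp.zip'
  artifact: drop                          # 7) Publish artifact
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;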

&lt;p&gt;We want a reusable set of pipelines that will let us build any component we create, and follow a 'build once, deploy multiple times' approach.&lt;/p&gt;

&lt;p&gt;We now have an &lt;code&gt;artifact&lt;/code&gt; that contains everything we need to deploy and test our logic app.&lt;/p&gt;
&lt;h2&gt;
  
  
  What to provision?
&lt;/h2&gt;

&lt;p&gt;Based on the architecture requirements from Part 1, I see the provisioning requirements in three parts:&lt;/p&gt;

&lt;p&gt;1) The network - vnet, subnets, public IPs and NAT. Both the logic application and the bastion depend on this, and it should be provisioned independently from both of them.&lt;br&gt;
2) The application and its dependencies - this is the &lt;em&gt;main deployment&lt;/em&gt; and contains everything needed for a working application. It can be created and destroyed independently from the network.&lt;br&gt;
3) The bastion and VM, for support purposes.&lt;/p&gt;

&lt;p&gt;In this way, we can think about the provisioning in smaller chunks - this also allows us to destroy parts of the infrastructure independently from each other - e.g. I may choose to delete my infrastructure at night or at weekends in my &lt;strong&gt;dev&lt;/strong&gt; environment.&lt;/p&gt;

&lt;p&gt;Both the application and the bastion VMs depend on the network, but we can disconnect and remove them from the vnet independently, whilst leaving the network in place. This can be helpful, as we may incur costs from our application deployment if it depends on compute and other PaaS services, but the network is generally low cost and dependent on the traffic generated.&lt;/p&gt;
&lt;h2&gt;
  
  
  Provisioning logic apps using Terraform
&lt;/h2&gt;

&lt;p&gt;There are a number of tools you could choose from, but the PaaS nature of the architecture fits nicely with Terraform. We'll follow a &lt;code&gt;modules and blueprints&lt;/code&gt; approach for our solution, which will mean our integration applications can easily reuse what we build.&lt;/p&gt;

&lt;p&gt;When this was initially designed, Terraform didn't support logic apps standard, so an &lt;a href="https://gist.github.com/dylanmorley/21975d7959a688db0f11c627dd76d1d4"&gt;ARM template approach&lt;/a&gt; was the only viable solution. Since then, &lt;a href="https://github.com/hashicorp/terraform-provider-azurerm/pull/13196"&gt;I've created a resource for this in Terraform&lt;/a&gt; so you can follow the native approach - documentation is at &lt;a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/logic_app_standard"&gt;azurerm_logic_app_standard&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_logic_app_standard" "example" {
  name                       = "test-azure-logic-app-standard"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  app_service_plan_id        = azurerm_app_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key

  app_settings = {
    "CONFIG_SETTING" = "example"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our provisioning pipeline should&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Take the build artifact for the workflow&lt;/li&gt;
&lt;li&gt;Run the Terraform stage to create the infrastructure&lt;/li&gt;
&lt;li&gt;Zip Deploy the application into the newly provisioned infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Terraform is just about provisioning the empty shell of the logic app - it doesn't contain any of the workflows themselves, which we'll provide by deploying into the newly provisioned application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Provisioning considerations
&lt;/h2&gt;

&lt;p&gt;As we want to secure our applications using VNET and service endpoints, there are a few settings and other considerations to ensure the application is in a working state and correctly associated with the network subnets. &lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying
&lt;/h2&gt;

&lt;p&gt;We've provisioned our application and now we're ready to deploy - this is the simple bit, as it's no different from an App Service or Azure Functions deployment: we take the zip artifact we produced and use a zip deploy task to upload it to Azure.&lt;/p&gt;
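
&lt;p&gt;As a sketch, the deploy step might use the function app task with zip deploy (Logic App Standard sits on the same app service infrastructure) - the service connection and app names here are illustrative.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- task: AzureFunctionApp@1
  inputs:
    azureSubscription: 'my-service-connection'      # illustrative
    appType: functionApp
    appName: 'my-logic-app-standard'                # illustrative
    package: '$(Pipeline.Workspace)/drop/logicapp.zip'
    deploymentMethod: zipDeploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;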

&lt;h2&gt;
  
  
  Demo project
&lt;/h2&gt;

&lt;p&gt;A demo project that shows how this all fits together will be available on GitHub soon.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>serverless</category>
    </item>
    <item>
      <title>net6 API Prometheus Metrics</title>
      <dc:creator>Dylan Morley</dc:creator>
      <pubDate>Fri, 20 May 2022 10:11:20 +0000</pubDate>
      <link>https://forem.com/dylanmorley/net6-api-prometheus-metrics-2817</link>
      <guid>https://forem.com/dylanmorley/net6-api-prometheus-metrics-2817</guid>
      <description>&lt;p&gt;If you're creating a dotnet API to run in Kubernetes, chances are you'll be wanting to use Prometheus to gather metrics about your application. &lt;/p&gt;

&lt;p&gt;A library that makes this really easy is &lt;a href="https://github.com/prometheus-net/prometheus-net"&gt;prometheus-net&lt;/a&gt; - we can quickly create a minimal API that exposes metrics and demonstrate a few different ways of creating metric data.&lt;/p&gt;

&lt;p&gt;The source for this is on GitHub at &lt;a href="https://github.com/dylanmorley/dotnet-api-prometheus-metrics-reference"&gt;https://github.com/dylanmorley/dotnet-api-prometheus-metrics-reference&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics Configuration
&lt;/h2&gt;

&lt;p&gt;The majority of the work is done in the &lt;a href="https://github.com/dylanmorley/dotnet-api-prometheus-metrics-reference/blob/main/src/PrometheusMetrics.Api.Reference/Program.cs"&gt;Program&lt;/a&gt; startup class; let's look at a few of the important points:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;EventCounterAdapter&lt;/code&gt; - listens for any dotnet &lt;a href="https://docs.microsoft.com/en-us/dotnet/core/diagnostics/event-counters"&gt;event counters&lt;/a&gt; and converts them into Prometheus format at the exporter endpoint&lt;/p&gt;

&lt;p&gt;&lt;code&gt;MeterAdapter&lt;/code&gt; - listens for dotnet metrics created via &lt;a href="https://docs.microsoft.com/en-us/dotnet/core/diagnostics/metrics-instrumentation"&gt;System.Diagnostics.Metrics Meter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app.UseHttpMetrics&lt;/code&gt; - we want the application to expose metrics about each of our endpoints, such as the number of requests and request durations&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app.UseEndpoints&lt;/code&gt; -&amp;gt; &lt;code&gt;MapMetrics&lt;/code&gt; - we'll expose the &lt;code&gt;/metrics&lt;/code&gt; endpoint as part of the application&lt;/p&gt;
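
&lt;p&gt;Putting those pieces together, a minimal &lt;code&gt;Program&lt;/code&gt; might look like the sketch below - the adapter method names can vary between prometheus-net versions, so check the library's README rather than treating this as the repo's exact code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Prometheus;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
var app = builder.Build();

// Bridge dotnet event counters and System.Diagnostics.Metrics meters
// into the Prometheus registry
EventCounterAdapter.StartListening();
MeterAdapter.StartListening();

app.UseRouting();

// Record request counts and durations for every mapped endpoint
app.UseHttpMetrics();

app.UseEndpoints(endpoints =&gt;
{
    endpoints.MapControllers();
    // Expose the Prometheus scrape endpoint at /metrics
    endpoints.MapMetrics();
});

app.Run();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;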

&lt;p&gt;That's pretty much it - not many things to configure, and our app is up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Controller usage
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://github.com/dylanmorley/dotnet-api-prometheus-metrics-reference/blob/main/src/PrometheusMetrics.Api.Reference/Controllers/WeatherForecastController.cs"&gt;weather forecast controller&lt;/a&gt; there are a couple of examples of how metrics can now be created. &lt;/p&gt;

&lt;p&gt;If you execute the &lt;code&gt;WeatherForecast/Get&lt;/code&gt; endpoint, you'll be incrementing a metric created by a &lt;code&gt;System.Diagnostics.Metrics&lt;/code&gt; Meter instance, and if you hit &lt;code&gt;WeatherForecast/PredictStorm&lt;/code&gt; you'll be using a Histogram from the prometheus-net library. &lt;/p&gt;
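
&lt;p&gt;The two styles can be sketched like this - the metric and meter names here are illustrative rather than the repo's actual ones.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.Diagnostics.Metrics;
using Prometheus;

public static class WeatherMetrics
{
    // System.Diagnostics.Metrics counter - surfaced by the MeterAdapter
    private static readonly Meter Meter = new("WeatherForecast");
    public static readonly Counter&lt;long&gt; ForecastsServed =
        Meter.CreateCounter&lt;long&gt;("forecasts_served_total");

    // prometheus-net histogram - registered with the default registry
    public static readonly Histogram StormPredictionDuration =
        Metrics.CreateHistogram("storm_prediction_duration_seconds",
            "Time taken to predict a storm");
}

// In the controller actions:
// WeatherMetrics.ForecastsServed.Add(1);
// using (WeatherMetrics.StormPredictionDuration.NewTimer()) { /* predict */ }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;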

&lt;p&gt;After executing either of these methods, you can check the &lt;code&gt;/metrics&lt;/code&gt; endpoint and will see various event counters, custom counters &amp;amp; http controller endpoint metrics, all converted into Prometheus format and ready for scraping. &lt;/p&gt;

&lt;p&gt;A Dockerfile is included in the repo to demonstrate creating an image that could then be run alongside a Prometheus instance on a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;That's it - a super simple reference solution for getting metrics up and running.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Azure Devops - Secure Reset of Terraform State</title>
      <dc:creator>Dylan Morley</dc:creator>
      <pubDate>Wed, 13 Oct 2021 10:00:08 +0000</pubDate>
      <link>https://forem.com/dylanmorley/azure-devops-secure-reset-of-terraform-state-3p0f</link>
      <guid>https://forem.com/dylanmorley/azure-devops-secure-reset-of-terraform-state-3p0f</guid>
      <description>&lt;p&gt;If you're using Terraform for Azure Infrastructure provisioning, you're likely using the &lt;a href="https://www.terraform.io/docs/language/settings/backends/azurerm.html"&gt;Azure Storage Backend type&lt;/a&gt; for your state file.&lt;/p&gt;

&lt;p&gt;When running a plan and apply, Terraform acquires a lock on the state file to control concurrency (i.e. so that multiple deployments don't interfere with each other), and sometimes, if a pipeline terminates abruptly, you're left with a lock on the state file.&lt;/p&gt;

&lt;p&gt;The next time you run the pipeline, you'll get something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error locking state: Error acquiring the state lock: 
state blob is already locked
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've network restricted our storage accounts and are using a VM Scale Set associated with a subnet for our Azure DevOps build pools. The subnet is allow-listed on the storage account network restrictions, and the containers holding the state file are also RBAC restricted to the SPN associated with the DevOps Service Connection. This means &lt;a href="https://samcogan.com/store-terraform-state-securely-in-azure/"&gt;only our DevOps pipelines can interact with our Terraform state&lt;/a&gt;, and therefore we need an Azure pipeline to perform the unlock for us - so that we can authenticate and execute under the correct security context.&lt;/p&gt;

&lt;p&gt;This is pretty simple - the only requirement is to break the lease on the state file blob, which can be achieved with an AzureCLI task. By using this task type, authentication is performed via the specified Service Connection, which handles token management and allows access to the &lt;a href="https://docs.microsoft.com/en-us/azure/storage/blobs/assign-azure-role-data-access?tabs=azure-cli"&gt;RBAC restricted container&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
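
&lt;p&gt;Stripped of the pipeline wrapping, the underlying CLI call is just the following - the container, blob and account names shown here are illustrative&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az storage blob lease break 
--container-name 'terraform' 
--blob-name 'whatever.statefile.tfstate' 
--account-name 'yourstorageaccount'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;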

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name &lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TerraformStateFile&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform State File (e.g. whatever.statefile.tfstate)&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceConnectionName&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;YourServiceConnectionDefault&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;The name of the service connection that gives access to the storage account for the state file&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StorageAccountName&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;YourStorageDefault&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;The name of the storage account that holds the Terraform state files&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;

&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;none&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AzureCLI@2&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Break&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lease&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;on&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;terraform&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;state"&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BreakLease&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;azureSubscription&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ parameters.ServiceConnectionName }}&lt;/span&gt;
      &lt;span class="na"&gt;scriptType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pscore"&lt;/span&gt;
      &lt;span class="na"&gt;scriptLocation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;inlineScript"&lt;/span&gt;
      &lt;span class="na"&gt;inlineScript&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;az&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;storage&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;blob&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lease&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;break&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--container-name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'terraform'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--blob-name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;parameters.TerraformStateFile&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--account-name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;parameters.StorageAccountName&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}'"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NB: The container name is hardcoded to &lt;code&gt;'terraform'&lt;/code&gt; in this example; you could parameterise it or set whatever default works for you.&lt;/p&gt;

&lt;p&gt;Now, any team members can run the pipeline to reset state, without requiring any direct access to the storage account.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Azure Devops - Managed Identity for Automation Tests</title>
      <dc:creator>Dylan Morley</dc:creator>
      <pubDate>Wed, 06 Oct 2021 17:19:36 +0000</pubDate>
      <link>https://forem.com/dylanmorley/azure-devops-managed-identity-for-automation-tests-3321</link>
      <guid>https://forem.com/dylanmorley/azure-devops-managed-identity-for-automation-tests-3321</guid>
      <description>&lt;p&gt;You're writing some integration tests and as part of doing so you need to check on some Azure Resources - perhaps look in a database, check service bus messages - you need access to some Azure infrastructure to assert that the expected things have happened. When running from Azure DevOps, you can take advantage of a form of Managed Identity and avoid any connection strings in your test code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Devops
&lt;/h2&gt;

&lt;p&gt;If you're writing DevOps pipelines, you can set up service connections in your project which &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops" rel="noopener noreferrer"&gt;give you access to Azure subscriptions&lt;/a&gt;. A service connection to Azure from DevOps is associated with a Service Principal Name (SPN). &lt;/p&gt;

&lt;p&gt;Once you have a connection and SPN, your YAML pipelines can use this to authenticate with Azure when running certain tasks. This enables a sneaky form of identity management that's really useful for integration testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing task
&lt;/h2&gt;

&lt;p&gt;For this to work, your test project needs to be a dotnet project and the tests should be able to execute by using &lt;code&gt;dotnet test&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Within the test project, I'm going to assume you want to do a few things. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Have different configurations depending on the environment you're testing, connecting to different Azure resources per environment. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Inherit the Security context of the Service Connection, so no need to have any connection strings in your test pack. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can execute &lt;code&gt;dotnet test&lt;/code&gt; and wrap it up with an AzureCLI task, passing in the service connection and environment details, like so&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwclph5mqy2rb5vj9k695.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwclph5mqy2rb5vj9k695.png" alt="YAML test task"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By doing this, we get some goodness for free. The AzureCLI task will authenticate with the service connection and obtain a token before running your script, and will tidy up afterwards - token management is handled for us. &lt;/p&gt;

&lt;p&gt;By passing in &lt;code&gt;ASPNETCORE_ENVIRONMENT&lt;/code&gt; we can take decisions in code to &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/environments?view=aspnetcore-5.0" rel="noopener noreferrer"&gt;load up local settings files&lt;/a&gt; and build our configuration values from various sources. You can then have appsettings files for the different environments, allowing you to specify different resources to interact with.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;IConfigurationRoot&lt;/span&gt; &lt;span class="nf"&gt;GetConfiguration&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;outputPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Read the ASPNETCORE_ENVIRONMENT variable we passed in &lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;envName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;EnvironmentData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AspNetCoreEnvironment&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ConfigurationBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;SetBasePath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;outputPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddJsonFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"appsettings.json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddJsonFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;$"appsettings.&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;envName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddEnvironmentVariables&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;OK - so you've authenticated with Azure by using the AzureCLI task - but how can you &lt;strong&gt;use&lt;/strong&gt; the token in your code?&lt;/p&gt;

&lt;h2&gt;
  
  
  Test Pack
&lt;/h2&gt;

&lt;p&gt;This is the nice part &amp;amp; Microsoft have made this really easy for us with the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet" rel="noopener noreferrer"&gt;DefaultAzureCredential&lt;/a&gt; class from the Azure.Identity package. As described, the class will check a number of places in order to try and obtain security context, and because we've authenticated by the AzureCLI task we'll get the token for our Service Connection. Nice!&lt;/p&gt;

&lt;p&gt;In code, it looks like so.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;

&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;AccessToken&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;GetAccessToken&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;dac&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DefaultAzureCredential&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;DefaultAzureCredentialOptions&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;ExcludeSharedTokenCacheCredential&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;dac&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetTokenAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Azure&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;TokenRequestContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="s"&gt;$"https://management.azure.com/.default"&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;NB: I found I had to &lt;a href="https://github.com/Azure/azure-sdk-for-net/issues/17052#issuecomment-729998843" rel="noopener noreferrer"&gt;ExcludeSharedTokenCacheCredential&lt;/a&gt; for consistent results. &lt;/p&gt;

&lt;p&gt;Once you're at this point, you'll find many of the Azure SDKs have entry points that allow you to use the class directly. You can pass an instance of DefaultAzureCredential in to a client class - see &lt;a href="https://yourazurecoach.com/2020/08/13/managed-identity-simplified-with-the-new-azure-net-sdks/" rel="noopener noreferrer"&gt;here for some examples&lt;/a&gt;.&lt;/p&gt;
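
&lt;p&gt;As a sketch, constructing a Service Bus client this way might look like the following - the namespace name is illustrative, and this assumes the Azure.Messaging.ServiceBus package&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var credential = new DefaultAzureCredential(
    new DefaultAzureCredentialOptions { ExcludeSharedTokenCacheCredential = true });

// Authenticates as whatever identity the credential resolves -
// the service connection SPN in a pipeline, your own account locally
await using var client = new ServiceBusClient(
    "your-namespace.servicebus.windows.net", credential);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;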

&lt;p&gt;It's now just a case of making sure the SPN you're using has permissions to the things you want to integration test with. You might need to &lt;a href="https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current" rel="noopener noreferrer"&gt;perform some role assignments&lt;/a&gt; to set these permissions up. &lt;/p&gt;
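
&lt;p&gt;A role assignment for the SPN can be scripted with the CLI - a sketch, where the role and ids shown are illustrative&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az role assignment create 
--assignee-object-id "spn-object-id" 
--assignee-principal-type "ServicePrincipal" 
--role "Azure Service Bus Data Receiver" 
--scope "the/full/resource/id"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;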

&lt;h2&gt;
  
  
  Wrapping it up
&lt;/h2&gt;

&lt;p&gt;By setting up an SPN and service connection, we can execute our tests using the AzureCLI task, which will authenticate with Azure and make a token available that we can then pick up from a C# test pack by using DefaultAzureCredential. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is really powerful&lt;/em&gt; - as it means that when running the tests locally, you can either have the tests run under your security context as your identity, or you can provide environment variables and acquire the context of an SPN. No change of code, and your test pack is completely portable and will happily run in Windows or Linux environments. &lt;/p&gt;

&lt;p&gt;This means we don't have any connection strings in our test solutions &amp;amp; we're using Access tokens and RBAC in Azure, which keeps configuration management simple and takes advantage of a consistent Azure security architecture. &lt;/p&gt;

&lt;p&gt;This is simple to get going and I found it really made our test interactions with Azure easy to manage. &lt;/p&gt;

</description>
      <category>devops</category>
      <category>azure</category>
      <category>csharp</category>
      <category>testing</category>
    </item>
    <item>
      <title>AZ CLI - Deleting Terraform Test Resource Groups</title>
      <dc:creator>Dylan Morley</dc:creator>
      <pubDate>Sat, 02 Oct 2021 10:41:29 +0000</pubDate>
      <link>https://forem.com/dylanmorley/az-cli-deleting-terraform-test-resource-groups-4kkg</link>
      <guid>https://forem.com/dylanmorley/az-cli-deleting-terraform-test-resource-groups-4kkg</guid>
      <description>&lt;p&gt;This is a series of quick posts, tips and tricks when working with the Azure CLI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deleting Multiple Resource Groups
&lt;/h2&gt;

&lt;p&gt;When working with the Terraform Provider for Azure, you may be adding a new feature that requires you to run the test automation packs.&lt;/p&gt;

&lt;p&gt;In cases where there are errors in the Provider as you're developing the feature, you might end up with errors like so&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;===&lt;/span&gt; CONT  TestAccEventGridEventSubscription_deliveryMappings
testcase.go:88: Step 1/2 error: Error running pre-apply refresh: &lt;span class="nb"&gt;exit &lt;/span&gt;status 1

        Error: Argument or block definition required

          on terraform_plugin_test.tf line 47, &lt;span class="k"&gt;in &lt;/span&gt;resource &lt;span class="s2"&gt;"azurerm_eventgrid_event_subscription"&lt;/span&gt; &lt;span class="s2"&gt;"test"&lt;/span&gt;:
          47:     &lt;span class="o"&gt;[&lt;/span&gt;

        An argument or block definition is required here.

    testing_new.go:63: Error retrieving state, there may be dangling resources: &lt;span class="nb"&gt;exit &lt;/span&gt;status 1

&lt;span class="nt"&gt;---&lt;/span&gt; FAIL: TestAccEventGridEventSubscription_deliveryMappings &lt;span class="o"&gt;(&lt;/span&gt;10.18s&lt;span class="o"&gt;)&lt;/span&gt;
FAIL      
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note part of the error description - &lt;em&gt;there may be dangling resources:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Because the test didn't complete, the destroy phase of the test might not have completed, which means you're in a state where there are resources in Azure that have been created by the test code that need tidying up.&lt;/p&gt;

&lt;p&gt;Luckily, we can do this with a bit of PowerShell + AZ CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$resource_groups&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;az group list &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"[? contains(name,'acctest')][].{name:name}"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; tsv&lt;span class="o"&gt;)&lt;/span&gt;

foreach &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$resource_groups&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    Write-Output &lt;span class="s2"&gt;"Deleting resource group &lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    az group delete &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nv"&gt;$group&lt;/span&gt; &lt;span class="nt"&gt;--yes&lt;/span&gt; 
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Assumes you're logged in and in the subscription you want to work with&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform tests create resource groups following an &lt;code&gt;acctest&lt;/code&gt; naming convention, so we find all of those that match, then delete them one by one. &lt;/p&gt;
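
&lt;p&gt;If you prefer bash to PowerShell, the same clean-up can be sketched as a one-liner - this assumes the same az login context and GNU xargs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az group list --query "[?contains(name,'acctest')].name" -o tsv \
  | xargs -r -I {} az group delete -n {} --yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;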

</description>
      <category>azure</category>
      <category>terraform</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AZ CLI - Assign User Role</title>
      <dc:creator>Dylan Morley</dc:creator>
      <pubDate>Wed, 08 Sep 2021 11:03:45 +0000</pubDate>
      <link>https://forem.com/dylanmorley/az-cli-assign-user-role-i1b</link>
      <guid>https://forem.com/dylanmorley/az-cli-assign-user-role-i1b</guid>
      <description>&lt;p&gt;This is a series of quick posts, tips and tricks when working with the Azure CLI. &lt;/p&gt;

&lt;h2&gt;
  
  
  Assigning Roles for RBAC
&lt;/h2&gt;

&lt;p&gt;To take advantage of the built-in roles and the fine-grained RBAC support that many Azure resources offer, you should assign roles to security principals. &lt;/p&gt;

&lt;p&gt;To do so, the principal that will be &lt;em&gt;performing&lt;/em&gt; the assignment must have the relevant permissions. You'll have these if you are an Administrator or Owner of an Azure subscription, but if not you'll explicitly need read &amp;amp; write permissions on &lt;em&gt;roleAssignments&lt;/em&gt;, which look like so&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"permissions": [
    {
        "actions": [
            "Microsoft.Authorization/*/read",
            "Microsoft.Authorization/roleAssignments/read",
            "Microsoft.Authorization/roleAssignments/write",
            "Microsoft.Authorization/roleAssignments/delete"
        ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OK, so you've authenticated as a principal that has the correct permissions and you want to assign a role to another principal - documentation for this is available at &lt;a href="https://docs.microsoft.com/en-us/cli/azure/role/assignment?view=azure-cli-latest"&gt;the az cli role page&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, I noticed a little quirk when trying to assign a role to a &lt;em&gt;user principal&lt;/em&gt;, where the assignee is the object id of a user principal from AAD.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az role assignment create 
--assignee 00000000-0000-0000-0000-000000000000 
--role "Storage Account Key Operator Service Role" 
--scope $id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ForbiddenError: Operation failed with status: 'Forbidden'. Details: 403 Client Error: Forbidden for url: &lt;a href="https://graph.windows.net/%7Bguid%7D/getObjectsByObjectIds?api-version=1.6"&gt;https://graph.windows.net/{guid}/getObjectsByObjectIds?api-version=1.6&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's going on here? I have permissions to assign roles, this command is working for me for other principal types, but for &lt;em&gt;user principals&lt;/em&gt; I'm receiving this error. &lt;/p&gt;

&lt;p&gt;You can always pass the --debug flag to the az cli to see what's going on in a bit more depth; for this command we can see&lt;/p&gt;

&lt;p&gt;msrest.http_logger : {"odata.error":{"code":"Authorization_RequestDenied","message":{"lang":"en","value":"Insufficient privileges to complete the operation."},"requestId":"{guid}","date":"2021-09-08T08:50:40"}}&lt;br&gt;
msrest.exceptions : Operation failed with status: 'Forbidden'. Details: 403 Client Error: Forbidden for url: &lt;a href="https://graph.windows.net/%7Bguid%7D/getObjectsByObjectIds?api-version=1.6"&gt;https://graph.windows.net/{guid}/getObjectsByObjectIds?api-version=1.6&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's happening is that the command is trying to perform a user account lookup, which requires additional privileges - specifically the 'Read directory data' permission on the Azure AD Graph API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1WeVPe5s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/df6fphkgglhu80s1q5m9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1WeVPe5s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/df6fphkgglhu80s1q5m9.png" alt="Read Directory Data"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You &lt;em&gt;could&lt;/em&gt; grant the account that permission and that would solve it, but I'd rather keep to least privilege and avoid that if possible. Luckily, the command provides another way to assign the role, by passing parameters in a slightly different form&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az role assignment create 
--assignee-object-id "guid" 
--assignee-principal-type "User" 
--role "The Role Name" 
--scope "the/full/resource/id" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By using the command in this format, you won't perform a call to the graph API and therefore don't need 'Read directory data' permission - nice!&lt;/p&gt;
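
&lt;p&gt;To sanity check the result, you can list the assignments at that scope - a sketch, noting that some variations of this command may still call the graph API, so results can vary with your permissions&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az role assignment list 
--scope "the/full/resource/id" 
-o table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;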

</description>
      <category>azure</category>
      <category>cli</category>
      <category>aad</category>
    </item>
    <item>
      <title>Azure Logic Apps Standard - Part 1 Solution Design</title>
      <dc:creator>Dylan Morley</dc:creator>
      <pubDate>Fri, 27 Aug 2021 12:19:21 +0000</pubDate>
      <link>https://forem.com/dylanmorley/azure-logic-apps-standard-part-1-solution-design-56cf</link>
      <guid>https://forem.com/dylanmorley/azure-logic-apps-standard-part-1-solution-design-56cf</guid>
      <description>&lt;p&gt;Integration - in a Microservices world, it's one of the hot topics for a business. In order to scale, we've decomposed monoliths into smaller, discrete applications that manage their own data. We're creating event streams and the data we're producing isn't in a single, central database anymore - it's in many different systems, in many different forms. We're also mixing our custom software with SaaS and other applications that give us capabilities without requiring us to build, own and operate all the software we require.&lt;/p&gt;

&lt;p&gt;We therefore need technology that allows us to perform integration, giving us the capability to ingest and move data between boundaries, flowing data from one system to another, changing shape and enriching it. We need to turn disparate and distributed data into something we can understand, bringing information back into Data Lakes where we can analyse and produce joined up reports that can give us a single, coherent view over multiple systems.&lt;/p&gt;

&lt;p&gt;To do this, there are some very common patterns and steps you'll need to perform - get data from a source system, transform it to a new shape, split it into chunks, some conditional statements depending on state - generate payload and send it to a target system.&lt;/p&gt;

&lt;p&gt;Logic Apps &lt;strong&gt;Standard&lt;/strong&gt; is part of the Azure Integration Services suite and allows us to perform complex integrations in Azure. Logic applications are triggered by an event, and will then perform a sequence of actions through to completion - this is considered our workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtq9dit82qmqmfcie2fj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtq9dit82qmqmfcie2fj.png" alt="Azure Logic App Workflows"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the team building the workflow, it's up to you to chain together the actions that perform the work needed - you can build a sequence of decisions and other branches that perform conditional work based on various criteria. You can then orchestrate workflows, having multiple workflows calling each other - a useful approach that allows you to decompose a large business sequence into smaller, easier to understand chunks. &lt;/p&gt;

&lt;p&gt;Logic Apps Standard introduces some important changes, and the runtime is now built as an extension point on top of Azure Functions. This really opens up the hosting possibilities - previously, logic apps were ARM definitions and Azure-only deployments. In Standard, you now have a VS Code extension for designing the flows, can run and test them locally, and can package and deploy them as you would any other piece of software you create. If you containerise, you can host and run wherever you like.&lt;/p&gt;

&lt;p&gt;We're going to look at what might be a &lt;em&gt;typical real world use-case&lt;/em&gt; for a business. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We want to integrate with a third-party API and perform some data retrieval - We need to import a minimum of 100,000 rows of data. This is a once daily batch operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We're going to read API data, transform from XML to JSON, persist some state and raise service bus events. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The third party has a requirement that it must allow-list requests from a static IP. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All PaaS resources must be VNET restricted. We're going to interact with Storage, Service Bus, Key Vault and Cosmos.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We need to do this in &lt;em&gt;approximately&lt;/em&gt; 20 minutes, so will set ourselves an NFR of 100 rps throughput. In order to fan out, we need to consider splitting data into chunks and performing updates and notifications in smaller batches than we initially receive.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's look at what these requirements might translate to as a Physical Infrastructure diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4ysspfuvtnhsmmvhvn0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4ysspfuvtnhsmmvhvn0.png" alt="Logic App Physical Solution"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Network Requirements
&lt;/h2&gt;

&lt;p&gt;To produce a static IP, we need to provision a Public IP address and a NAT Gateway. We then need to route all outbound traffic from our logic application through the VNET that is associated with the NAT gateway. A good article that goes into more detail on this is at &lt;a href="https://notetoself.tech/2020/11/21/azure-functions-with-a-static-outbound-ip-address/" rel="noopener noreferrer"&gt;note to self - azure functions with a static outbound ip&lt;/a&gt; - it's the same process for Logic Apps.&lt;/p&gt;

&lt;p&gt;Since resources are VNET restricted, any support access to the PaaS components needs to be considered - in this example we solve the problem with Azure Bastion and an Azure VM in the same VNET. Engineers would need to connect to the VM via Bastion, before accessing whatever resources they require. Within the VNET, we'll therefore create a number of subnets, for the logic workflow and for management operations. Finally, we can take advantage of Service Endpoints for each of the PaaS resources, which fits nicely for this serverless and 100% Azure implementation. &lt;/p&gt;

&lt;p&gt;The requirement for VNET configuration forces us to take premium versions of certain SKUs - the App Service plan that runs the logic app and Service Bus are prime examples, where VNET support is only available on the premium offering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Applications
&lt;/h2&gt;

&lt;p&gt;Logic App Standard does allow for a low-code approach, but we're going to put some discipline behind this and build a full CI/CD Azure DevOps pipeline. This allows us to produce a build artifact from a CI job that contains everything we need to provision, deploy and test the application, then use the artifact in a multi-stage pipeline. &lt;/p&gt;

&lt;p&gt;To do so, we'll need to generate a zip file containing all the components required to deploy to Azure.&lt;/p&gt;
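&lt;p&gt;As a minimal sketch, the packaging step just needs the workflow folders and configuration files zipped with the configuration at the archive root. The project layout below is illustrative, and Python's stdlib zipfile module is used here as a portable stand-in for the zip utility:&lt;/p&gt;

```shell
# Build a zip-deploy artifact from a (demo) Logic App Standard project.
set -eu

# Illustrative project layout: one workflow folder plus host/connections config
mkdir -p demo-project/MyWorkflow
printf '{"version": "2.0"}\n' > demo-project/host.json
printf '{}\n' > demo-project/connections.json
printf '{"definition": {}}\n' > demo-project/MyWorkflow/workflow.json

# Zip the project *contents* so host.json sits at the archive root,
# which is the shape zip deployment expects
(cd demo-project; python3 -m zipfile -c ../logicapp.zip .)
```

&lt;p&gt;The resulting logicapp.zip is the build artifact that the release stages then provision against and zip deploy.&lt;/p&gt;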

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt6ochkmlib4a1sk2ecb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt6ochkmlib4a1sk2ecb.png" alt="Azure Devops : Provision -&amp;gt; Deploy -&amp;gt; Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This gives us a fully automated means to provision the physical infrastructure, deploy the workflows and all required dependencies, and test that the workflows are in a good state and doing what we want them to.&lt;/p&gt;

&lt;h2&gt;Custom Connectors&lt;/h2&gt;

&lt;p&gt;One of the advantages of Logic Apps Standard is that you can build your own integration connectors and use them within your logic flows, producing a NuGet package that contains the connector code and referencing it from your applications.&lt;/p&gt;

&lt;p&gt;Logic Apps have 'hosted connectors' that you can make use of, which run out-of-process from your logic application. Whilst these are good for a number of use cases, they do come with limitations, such as the number of requests allowed per minute or maximum payload sizes. If you have a high throughput application and you're running it on your own infrastructure, it makes sense to use the 'Built In' connector approach, which allows you to run completely on your own application plan without any imposed limits (other than those of the underlying infrastructure).&lt;/p&gt;

&lt;p&gt;One of the requirements for this article's demo application was to persist some state in Cosmos DB, and at the time of writing there was no built-in connector for Cosmos DB. We therefore wrote our own connector, &lt;a href="https://github.com/ASOS/asos-logicapps-cosmosconnector" rel="noopener noreferrer"&gt;which we've made available as ASOS open source&lt;/a&gt;. As well as being a useful connector, it should demonstrate how to build and test any custom connector you might want to design.&lt;/p&gt;

&lt;h2&gt;Creating The Infrastructure&lt;/h2&gt;

&lt;p&gt;We need to build the infrastructure where we'll deploy and run the logic workflows - if you're working in Azure then you'll have a number of choices for representing physical infrastructure as source code: ARM templates, Azure Bicep, Ansible and Terraform are all options. For this example we're going to use Terraform.&lt;/p&gt;

&lt;p&gt;One issue you may encounter with Terraform is that it uses the Azure SDK for Go - it often takes some time for an Azure API to become available in the Go SDK before being implemented as a Terraform resource (indeed, Day 0 support is one of the strengths of Bicep). This means that certain bleeding-edge features might not be available, and at the time of writing there is no Terraform resource for Logic Apps Standard.&lt;/p&gt;

&lt;p&gt;To solve this problem, we'll wrap an ARM template that creates the logic app in a Terraform module, allowing us to provision the application in a state ready to be deployed to. Mixing ARM with Terraform is an acceptable approach until a native resource is made available. I find the AzureRM provider for Terraform pretty easy to understand and have made a few contributions when a feature hasn't been immediately available to me - it's open source, dive in and build!&lt;/p&gt;
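&lt;p&gt;A sketch of that wrapper, assuming a hypothetical logicapp.arm.json template sitting alongside the module - azurerm_resource_group_template_deployment is the AzureRM resource that executes an embedded ARM template from Terraform:&lt;/p&gt;

```hcl
# Hypothetical module wrapping an ARM template, until a native
# Logic App Standard resource is available in the azurerm provider.
resource "azurerm_resource_group_template_deployment" "logic_app" {
  name                = "logic-app-standard"
  resource_group_name = var.resource_group_name
  deployment_mode     = "Incremental"

  # The ARM template that actually creates the Logic App Standard site
  template_content = file("${path.module}/logicapp.arm.json")

  # Parameters passed through from Terraform variables
  parameters_content = jsonencode({
    appName = { value = var.app_name }
  })
}
```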

&lt;p&gt;Our entire infrastructure provisioning process should be handled by Terraform code - we'll provision the infrastructure, then zip deploy our application into what we've created. This should be a modular, easily repeatable process that we can reuse for any other integrations we care to build.&lt;/p&gt;

&lt;h2&gt;Test Approach&lt;/h2&gt;

&lt;p&gt;An important consideration before we begin - how are we going to test the workflows? One of the advantages of Logic Apps is that they promote a declarative approach that doesn't require writing code; however, that means we're not going to be using unit testing to help design the application. We need to think about what our test boundaries are within the overall logic app. Let's look at some more traditional test boundaries in a Model -&amp;gt; View -&amp;gt; Controller application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9zd3b4p8buzjzr0se9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9zd3b4p8buzjzr0se9b.png" alt="Fig 2 - Different test boundaries within an application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our requirements for the application are to retrieve data from a third party, transform data, persist into Cosmos and transmit events to Service Bus. We could treat the entire integration as a black box and just test at the boundaries - trigger the processing to start and assert that messages appear on Service Bus (&lt;strong&gt;an end to end test&lt;/strong&gt;). While these have value, they slow down your feedback loop, make assertions more difficult, and are more prone to errors. With data storage and Service Bus messaging in the mix, you'll need to consider how to isolate data so that concurrent test executions don't interfere with each other.&lt;/p&gt;
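&lt;p&gt;One way to handle that isolation, sketched below: tag everything a test run publishes with a unique run id, and assert only on messages that carry that id. The payload shape and variable names are illustrative:&lt;/p&gt;

```shell
# Sketch: per-run id so concurrent end-to-end test runs don't interfere.
set -eu

# Unique enough for concurrent pipeline runs (timestamp + process id)
TEST_RUN_ID="e2e-$(date +%s)-$$"

# Illustrative payload a test would send to the workflow trigger; the
# assertion side filters received Service Bus messages on runId and
# ignores traffic produced by other runs
PAYLOAD="{\"runId\": \"$TEST_RUN_ID\", \"orderId\": \"12345\"}"
echo "$PAYLOAD"
```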

&lt;p&gt;Ideally, since the workflows are run as an extension on top of Azure functions, I'd create an instance of the process during the build pipeline, mock out all the dependencies and run the whole thing in memory, black box testing in the build pipeline before a deployment takes place. I want to test my application business logic at build time, not necessarily interactions with the underlying transports. &lt;/p&gt;

&lt;p&gt;Unfortunately, this isn't possible at the time of writing - improved test support is something Microsoft is addressing as the offering matures. Until then, we should consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What parts of the workflows can be unit tested? If I'm performing transformations using Liquid templates, can I unit test just those transformations?&lt;/li&gt;
&lt;li&gt;If I'm using inline Node code - how can I test the JavaScript?&lt;/li&gt;
&lt;li&gt;If my overall logic application is composed of multiple workflows, can I test individual workflows in isolation without requiring the whole end to end piece? Does that offer value?&lt;/li&gt;
&lt;li&gt;Can I test in the build pipeline, or should we deploy then test? &lt;em&gt;Should we do both?!&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In fact, testing the workflows is worth an article by itself - see Part 3, 'Testing Logic Application Workflows'.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;That's it for this article, which is just an introduction to how we might go about designing the solution and some of the moving parts we need to consider that will allow us to build, deploy and test the applications.&lt;/p&gt;

&lt;p&gt;We've turned business requirements into infrastructure and made some technology choices that allow us to build and deploy, so we're ready to start the implementation - which we'll look at next. Check back for the follow-up articles, which will cover these topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part 2 - Build Pipelines and Provisioning &lt;/li&gt;
&lt;li&gt;Part 3 - Testing Logic Application Workflows&lt;/li&gt;
&lt;li&gt;Part 4 - Designing for scale out and throughput&lt;/li&gt;
&lt;li&gt;Part 5 - Operating and Observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, we'll produce the demo application and make that open source, so you can deploy to your own Azure subscriptions.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>serverless</category>
      <category>testing</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
