<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Peter Davis</title>
    <description>The latest articles on Forem by Peter Davis (@panachesoftwaredev).</description>
    <link>https://forem.com/panachesoftwaredev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F475408%2F3fb1ecf2-50c7-4d94-9d2b-d235c3aeb928.png</url>
      <title>Forem: Peter Davis</title>
      <link>https://forem.com/panachesoftwaredev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/panachesoftwaredev"/>
    <language>en</language>
    <item>
      <title>Protect Azure Functions with API Keys using Azure API Management - Part 3</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Wed, 06 Sep 2023 16:49:56 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/protect-azure-functions-with-api-keys-using-azure-api-management-part-3-2pkc</link>
      <guid>https://forem.com/panachesoftwaredev/protect-azure-functions-with-api-keys-using-azure-api-management-part-3-2pkc</guid>
      <description>&lt;p&gt;In Part 1 and Part 2 of this series we created two Azure Functions protected by function keys, published those to Azure and then wrapped those using Azure API Management so we could control access using products and subscription keys. &lt;/p&gt;

&lt;p&gt;In this final part we're going to update our System function so that it can create new subscriptions directly in API Management, which can then be used to call our user functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enable Management API
&lt;/h2&gt;

&lt;p&gt;Open the &lt;strong&gt;Management API&lt;/strong&gt; tab from the left-hand menu of the API Management instance in Azure and set &lt;strong&gt;Enable Management REST API&lt;/strong&gt; to &lt;strong&gt;Yes&lt;/strong&gt;.  Then click the &lt;strong&gt;Generate&lt;/strong&gt; button next to &lt;strong&gt;Access Token&lt;/strong&gt;; we'll use this token to test our calls and make sure everything is working.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QrDiOE7x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vudp622ne4bioojd91l0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QrDiOE7x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vudp622ne4bioojd91l0.png" alt="Management API" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The API Management service exposes a management API that we can use to create subscriptions, along with many other things; you can find more information &lt;a href="https://learn.microsoft.com/en-us/rest/api/apimanagement/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As you can see from the screenshot above, ours is exposed at &lt;a href="https://devtoapiman.management.azure-api.net"&gt;https://devtoapiman.management.azure-api.net&lt;/a&gt;, and using the Access Token we generated above we can make calls to this API.  Let's give it a go and fetch the details of the &lt;strong&gt;DevToUserFunctions&lt;/strong&gt; product we created earlier.&lt;/p&gt;

&lt;p&gt;We will need to make a call to:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://devtoapiman.management.azure-api.net/subscriptions/%7BsubscriptionId%7D/resourceGroups/%7BresourceGroupName%7D/providers/Microsoft.ApiManagement/service/%7BserviceName%7D/products/%7BproductId%7D?api-version=2022-08-01"&gt;https://devtoapiman.management.azure-api.net/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/products/{productId}?api-version=2022-08-01&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;{subscriptionId} can be found on the &lt;strong&gt;Overview&lt;/strong&gt; tab.&lt;/li&gt;
&lt;li&gt;{resourceGroupName} can be found on the &lt;strong&gt;Overview&lt;/strong&gt; tab.&lt;/li&gt;
&lt;li&gt;{serviceName} is the name you gave to your API Management service (devtoapiman in this example).&lt;/li&gt;
&lt;li&gt;{productId} is the product we want to get the details of (DevToUserFunctions in this example).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will also need to add an &lt;strong&gt;Authorization&lt;/strong&gt; header to the call, with its value set to the &lt;strong&gt;Access Token&lt;/strong&gt; you generated above.&lt;/p&gt;
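
&lt;p&gt;Putting this together, the test request has roughly the following shape (the values in braces are placeholders you fill in from your own instance):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET https://devtoapiman.management.azure-api.net/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/devtoapiman/products/DevToUserFunctions?api-version=2022-08-01
Authorization: {access token generated above}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;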

&lt;p&gt;If we provide these details we should get our product's details returned, confirming we have access to the management API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--emdevofE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vtat2f1nhtjebqot09qb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--emdevofE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vtat2f1nhtjebqot09qb.png" alt="Get Product details" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will need to use the &lt;strong&gt;Subscription&lt;/strong&gt; endpoint of this API to programmatically create new subscriptions.  Details for this endpoint can be reviewed &lt;a href="https://learn.microsoft.com/en-us/rest/api/apimanagement/current-ga/subscription/create-or-update?tabs=HTTP"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Update our System API
&lt;/h2&gt;

&lt;p&gt;Head back to Visual Studio and our &lt;strong&gt;devto.apiman.system&lt;/strong&gt; project.  The first thing we want to do is add &lt;a href="https://restsharp.dev/"&gt;RestSharp&lt;/a&gt; as a dependency; for this example, add the following NuGet packages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;RestSharp&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RestSharp.Serializers.SystemTextJson&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now add a new class called &lt;strong&gt;APIManagementSubscription&lt;/strong&gt; to our project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace devto.apiman.system
{
    public class APIManagementSubscription
    {
        public APIManagementSubscription()
        {
            id = string.Empty;
            type = string.Empty;
            name = string.Empty;
            properties = new Properties();
        }

        public string id { get; set; }
        public string type { get; set; }
        public string name { get; set; }
        public Properties properties { get; set; }
    }

    public class Properties
    {
        public Properties()
        {
            scope = string.Empty;
            displayName = string.Empty;
            primaryKey = string.Empty;
            secondaryKey = string.Empty;
        }

        public string scope { get; set; }
        public string displayName { get; set; }
        public bool allowTracing { get; set; }
        public string primaryKey { get; set; }
        public string secondaryKey { get; set; }
    }

    public class APIManagementSubscriptionCreate
    {
        public APIManagementSubscriptionCreate()
        {
            properties = new Properties();
        }

        public Properties properties { get; set; }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will be used to generate the JSON to create our new subscriptions.&lt;/p&gt;

&lt;p&gt;We can let the management API auto-generate our subscription primary and secondary keys, but we're going to provide our own.  In this case we'll just generate a random string, but in reality we might use GUIDs generated in a database or another externally sourced value.  Our random string generator is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private string GenerateAPIKey(int randomLength = 50)
{
    char[] chars = "_abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-=".ToCharArray();

    var randomString = new StringBuilder();

    for (var i = 0; i &amp;lt; randomLength; i++)
    {
        randomString.Append(chars[Random.Shared.Next(chars.Length)]);
    }

    return randomString.ToString();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we're going to generate our payload, based on the documentation &lt;a href="https://learn.microsoft.com/en-us/rest/api/apimanagement/current-ga/subscription/create-or-update?tabs=HTTP"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private APIManagementSubscriptionCreate GenerateSubscriptionCreation(string userId)
{
    var apiManagementSubscriptionCreate = new APIManagementSubscriptionCreate();

    var subscriptionId = "XXXXXXXXXXXXXX";
    var resourceGroup = "DevTo-APIManagement";
    var serviceName = "devtoapiman";
    var productName = "devtouserfunctions";

    apiManagementSubscriptionCreate.properties.displayName = userId;
    apiManagementSubscriptionCreate.properties.scope = $"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.ApiManagement/service/{serviceName}/products/{productName}";
    apiManagementSubscriptionCreate.properties.primaryKey = GenerateAPIKey();
    apiManagementSubscriptionCreate.properties.secondaryKey = GenerateAPIKey();
    apiManagementSubscriptionCreate.properties.allowTracing = true;

    return apiManagementSubscriptionCreate;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we're hardcoding values like the subscription ID and resource group in this example (you should change them to match your own setup), but in a production application we'd read those in from something like environment variables or an Azure Key Vault.&lt;/p&gt;
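
&lt;p&gt;As a minimal sketch of the environment variable approach (the setting names here are purely illustrative), a helper like this could replace the hardcoded values; in Azure, Function App application settings are surfaced as environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative helper: read a required setting from the environment
// and fail fast if it hasn't been configured.
static string GetRequiredSetting(string name)
{
    var value = Environment.GetEnvironmentVariable(name);

    if (string.IsNullOrEmpty(value))
        throw new InvalidOperationException($"Missing setting: {name}");

    return value;
}

// e.g. var subscriptionId = GetRequiredSetting("APIM_SUBSCRIPTION_ID");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;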

&lt;p&gt;We need to generate an &lt;strong&gt;Access Token&lt;/strong&gt; to be able to call the Management API and for that we can use the following code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private string GenerateToken()
{
    var id = "integration";
    var key = "XXXXXXXXXXXXXXXXXXXXXX";
    var expiry = DateTime.UtcNow.AddDays(10);
    using (var encoder = new HMACSHA512(Encoding.UTF8.GetBytes(key)))
    {
        var dataToSign = id + "\n" + expiry.ToString("O", CultureInfo.InvariantCulture);
        var hash = encoder.ComputeHash(Encoding.UTF8.GetBytes(dataToSign));
        var signature = Convert.ToBase64String(hash);
        var encodedToken = string.Format("SharedAccessSignature uid={0}&amp;amp;ex={1:o}&amp;amp;sn={2}", id, expiry, signature);
        return encodedToken;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The 'id' above is the &lt;strong&gt;Identifier&lt;/strong&gt; field from the &lt;strong&gt;Management API&lt;/strong&gt; tab we looked at in Azure earlier.  The 'key' is either the &lt;strong&gt;Primary key&lt;/strong&gt; or &lt;strong&gt;Secondary key&lt;/strong&gt; from the same screen.&lt;/p&gt;

&lt;p&gt;Our final step before we can make our call is generating the API endpoint to use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private string GetManagerEndPoint(string userId)
{
    var subscriptionId = "XXXXXXXXXXXXXX";
    var resourceGroup = "DevTo-APIManagement";
    var serviceName = "devtoapiman";

    return $"subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.ApiManagement/service/{serviceName}/subscriptions/{userId}?api-version=2022-08-01";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can update our function to call the management API to create a subscription and return the result.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Function("GenerateUserAPIKey")]
public async Task&amp;lt;HttpResponseData&amp;gt; Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
{
    var restOptions = new RestClientOptions("https://devtoapiman.management.azure-api.net");

    var restClient = new RestClient(restOptions);

    var managerToken = GenerateToken();

    restClient.AddDefaultHeader("Authorization", managerToken);

    var apiManagementSubscriptionCreate = GenerateSubscriptionCreation("generatedUserSubscription");

    var managerEndpoint = GetManagerEndPoint("generatedUserSubscription");

    var subscriptionResult = await restClient.PutJsonAsync&amp;lt;APIManagementSubscriptionCreate, APIManagementSubscription&amp;gt;(managerEndpoint, apiManagementSubscriptionCreate);

    if (subscriptionResult == null)
    {
        var errorResponse = req.CreateResponse(HttpStatusCode.InternalServerError);
        errorResponse.Headers.Add("Content-Type", "text/plain; charset=utf-8");
        errorResponse.WriteString("Error Generating user subscription");
        return errorResponse; // without this return, the function would fall through and report success
    }

    var validResponse = req.CreateResponse(HttpStatusCode.OK);
    await validResponse.WriteAsJsonAsync(subscriptionResult);
    return validResponse;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only other change we're going to make is to our user function, to display the API key that was passed in.  This demonstrates retrieving the key from the request, which we could then use to look up user data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Function("GetUserData")]
public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
{
    _logger.LogInformation("C# HTTP trigger function processed a request.");

    var response = req.CreateResponse(HttpStatusCode.OK);
    response.Headers.Add("Content-Type", "text/plain; charset=utf-8");

    response.WriteString($"Found the API Key: {GetAPIKey(req)}");

    return response;
}

public string GetAPIKey(HttpRequestData req)
{
    // The lambda parameter is named 'h' so it doesn't shadow the 'req' argument.
    var foundAPIKey = req.Headers.Where(h =&amp;gt; h.Key == "devto-key").FirstOrDefault();

    if (foundAPIKey.Value == null)
        return string.Empty;

    return foundAPIKey.Value.FirstOrDefault(string.Empty);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now re-publish these functions from Visual Studio, but before you test the System call to generate a new subscription you'll need to manually create a subscription to call that API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R126vKIy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2nitn3sq1cdubybcgd8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R126vKIy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2nitn3sq1cdubybcgd8r.png" alt="Create system subscription" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Call the system 'GenerateUserAPIKey' function and check you get a valid response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5FJ4chQ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fmcr7f9nw0pqvktw0wbm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5FJ4chQ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fmcr7f9nw0pqvktw0wbm.png" alt="Generate User Subscription" width="800" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assuming it completed successfully, you can check the subscriptions list in API Management to see the new entry.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RZ_JYViL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhcj590auesz8ntvekmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RZ_JYViL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhcj590auesz8ntvekmf.png" alt="created subscription" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if we call the user function with the created API key we should now be able to retrieve that key from the request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5sOnDo----/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/juo2uhc4kb0vztaem5bx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5sOnDo----/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/juo2uhc4kb0vztaem5bx.png" alt="Get user data" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Hopefully this guide has provided you with all the basic building blocks you need to create new Azure Functions and protect them with API Keys provided via Azure API Management.  You have also seen how you can use the management API to generate those keys so that this can be an automated process in your application on user sign up.&lt;/p&gt;

&lt;p&gt;It's important to note that although we have been using Azure Functions in this example, you could easily use the same functionality in API Management to protect REST APIs created and hosted via different means.&lt;/p&gt;

&lt;p&gt;I hope you found this useful.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OsH2IMsE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee" width="217" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azurefunctions</category>
      <category>dotnet</category>
      <category>api</category>
    </item>
    <item>
      <title>Protect Azure Functions with API Keys using Azure API Management - Part 2</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Wed, 06 Sep 2023 16:49:46 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/protect-azure-functions-with-api-keys-using-azure-api-management-part-2-216i</link>
      <guid>https://forem.com/panachesoftwaredev/protect-azure-functions-with-api-keys-using-azure-api-management-part-2-216i</guid>
      <description>&lt;p&gt;In Part 1 of this series we created two Azure Functions protected by function keys and published those to Azure.  In this part we're going to use &lt;a href="https://azure.microsoft.com/en-gb/products/api-management" rel="noopener noreferrer"&gt;Azure API Management&lt;/a&gt; to wrap those functions and provide access via API Keys.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an API Management service
&lt;/h2&gt;

&lt;p&gt;Head back into Azure and create a new resource, choosing &lt;strong&gt;API Management&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgmreufc53i9sj2tfllx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgmreufc53i9sj2tfllx.png" alt="Create API Management"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide an appropriate resource name for your instance and then choose the &lt;strong&gt;Developer (no SLA)&lt;/strong&gt; pricing tier.  As mentioned in Part 1, the consumption tier has some restrictions, so for a real-world application you would probably want one of the other tiers; more importantly, this example needs access to the management API, which the consumption tier doesn't provide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gvuva4zlpdhvdf014gn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gvuva4zlpdhvdf014gn.png" alt="Create Service"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All other settings can be left at their default values for this example.  The creation process may take a little bit of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Import Functions
&lt;/h2&gt;

&lt;p&gt;Once your API Management resource has been created, open it in Azure, choose &lt;strong&gt;APIs&lt;/strong&gt; from the left menu and then &lt;strong&gt;Function App&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrzwpl8rg8ie9j1zf4oc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrzwpl8rg8ie9j1zf4oc.png" alt="Import Function"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the import wizard that opens you should be able to browse to your Function and select the calls you want to import.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F154c3bwicxquq0vea8wi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F154c3bwicxquq0vea8wi.png" alt="Import APIs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your import for the System function should look similar to this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68w374s997zw7e7ymdj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68w374s997zw7e7ymdj4.png" alt="Import System"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my example I've used the 'Full' option and adjusted the 'API URL suffix' and included a 'Version identifier' to make the URL for this function a bit cleaner.&lt;/p&gt;

&lt;p&gt;Do the same for the User function.&lt;/p&gt;

&lt;p&gt;If you open the v1 User API and click on the &lt;strong&gt;Test&lt;/strong&gt; tab you should be able to see the complete &lt;strong&gt;Request URL&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6yqot48ioepujdcp67v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6yqot48ioepujdcp67v.png" alt="GetUserData Function"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's head back to Postman and try calling this API, but this time &lt;strong&gt;don't&lt;/strong&gt; add the &lt;strong&gt;x-functions-key&lt;/strong&gt; header value.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhqv8reyx5siyc8priv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhqv8reyx5siyc8priv1.png" alt="API Management Postman call"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that we received a slightly different error message this time.  If we head back to the Function and look at the &lt;strong&gt;App keys&lt;/strong&gt; you'll see that API Management has automatically created its own &lt;strong&gt;Host key&lt;/strong&gt;, apim-DevToAPIMan, which it uses to call the function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtns9mguevy7716dnyca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtns9mguevy7716dnyca.png" alt="API Management Host Key"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The API Management URI is now protected by a subscription key that can be generated in the API Management interface, and we're going to use this alongside &lt;strong&gt;Products&lt;/strong&gt; to give us fine-grained control over access to our APIs.&lt;/p&gt;

&lt;p&gt;Add two new products via the left-hand menu option in API Management, one for the system function and one for the user function.  Give them appropriate names, choose the APIs each product should give access to, and tick the 'Published' checkbox along the way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zfrpri48chxs8m6dcmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zfrpri48chxs8m6dcmr.png" alt="Create Product"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Being able to specify which APIs are included in a product allows us to create separate products for different user types, each with access only to the APIs that type requires.&lt;/p&gt;

&lt;p&gt;Once we've created the products, head over to the &lt;strong&gt;Subscriptions&lt;/strong&gt; entry in the left-hand menu and add a new subscription.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7w6cyfspfl2phx0lq9gd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7w6cyfspfl2phx0lq9gd.png" alt="Create Subscription"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll notice in the create subscription screen that we can set the scope to &lt;strong&gt;Product&lt;/strong&gt; and then select the product that this subscription has access to.&lt;/p&gt;

&lt;p&gt;When the subscription is created, two keys are generated for us: a primary and a secondary.  This allows us to, for example, use the primary key internally in our application to call services as a user, while making the secondary key visible to the user so they can call the APIs themselves.  This way we can differentiate between an internal application call and an external user call at runtime, and also revoke and regenerate one or both of the keys as required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lfc7rxiyz0u116nvs3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lfc7rxiyz0u116nvs3g.png" alt="Subscription Keys"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we go ahead and use these keys, head back to our APIs and, in the settings, change the header and query parameter names to something simpler; in this example, 'devto-key'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabcd3b02scl63lj6kocr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabcd3b02scl63lj6kocr.png" alt="Subscription setting"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With 'Subscription required' ticked, if we now provide one of the primary or secondary keys via a header or query parameter called 'devto-key' we should gain access to our API call.&lt;/p&gt;
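
&lt;p&gt;The request now has this shape (the URL path here is illustrative; use the Request URL shown on your API's Test tab):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET https://devtoapiman.azure-api.net/user/v1/GetUserData
devto-key: {primary or secondary subscription key}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;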

&lt;p&gt;Let's give it a test.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxor1x283vzxc8fgggjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxor1x283vzxc8fgggjh.png" alt="User API Management call"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;We've now published Azure Functions that can be called directly if we include a host key, but we won't use that method for access.  Instead, we've created an instance of Azure API Management, imported our functions, and then used products and subscriptions to control access to the APIs.&lt;/p&gt;

&lt;p&gt;In the final step we're going to bring this all together by making changes to our System Azure Function so that it can programmatically create new user subscriptions in our API Management instance. &lt;/p&gt;

&lt;p&gt;Head on over to Part 3 to continue this.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azurefunctions</category>
      <category>dotnet</category>
      <category>api</category>
    </item>
    <item>
      <title>Protect Azure Functions with API Keys using Azure API Management - Part 1</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Wed, 06 Sep 2023 16:49:34 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/protect-azure-functions-with-api-keys-using-azure-api-management-part-1-19pb</link>
      <guid>https://forem.com/panachesoftwaredev/protect-azure-functions-with-api-keys-using-azure-api-management-part-1-19pb</guid>
      <description>&lt;h2&gt;
  
  
  Secure your Azure Functions with API Keys
&lt;/h2&gt;

&lt;p&gt;While building REST APIs using .NET I've generally handled all the authentication tasks within the code I'm writing.  However, as I've transitioned across to using Isolated &lt;a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview?pivots=programming-language-csharp" rel="noopener noreferrer"&gt;Azure Functions&lt;/a&gt; I've started to take advantage of the built-in authentication provided by Azure.  This lets me remove authentication code from my projects and have it handled externally, giving me the flexibility to change how I authenticate without needing to update my code.&lt;/p&gt;

&lt;p&gt;Because my projects have previously used &lt;a href="https://learn.microsoft.com/en-us/azure/active-directory-b2c/overview" rel="noopener noreferrer"&gt;Azure B2C&lt;/a&gt; authentication I'd set up my Azure Functions to use that as an identity provider, which meant I needed to provide a valid token to authenticate.&lt;/p&gt;

&lt;p&gt;I wanted to change this to allow authentication with an API key instead, and by combining Azure Functions with &lt;a href="https://azure.microsoft.com/en-gb/products/api-management/" rel="noopener noreferrer"&gt;Azure API Management&lt;/a&gt; I've been able to set up a simple way to achieve this programmatically.&lt;/p&gt;

&lt;p&gt;In this article I'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating Azure Functions and publishing them to Azure&lt;/li&gt;
&lt;li&gt;Importing Azure Functions into Azure API Management&lt;/li&gt;
&lt;li&gt;Setting up Products and Subscriptions in Azure API Management&lt;/li&gt;
&lt;li&gt;Generating API Keys and creating subscriptions programmatically&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Costs
&lt;/h2&gt;

&lt;p&gt;One thing to note before we dive in is that the use of Azure Functions and API Management does come with a cost.  In this example we'll use the consumption-based tier for our functions, so that cost will be minimal.  For API Management, however, the consumption-based plan limits the number of subscriptions you can create and, more importantly, doesn't provide access to the Management API that we need, so we'll use the Developer tier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create our Azure Functions
&lt;/h2&gt;

&lt;p&gt;We're going to create two very simple Azure Functions for this example: a System function that will handle the creation of new API keys, and a User function that will be protected by those API keys.&lt;/p&gt;

&lt;p&gt;We will protect our System function with an API key that only we know.  In a real-world scenario our application would use this to create new API keys for users, likely at the point of user sign-up, without us needing to do it manually.  Those generated keys can then be used to call our other function, and if we store them somewhere they will also allow us to identify the user making the call.&lt;/p&gt;

&lt;p&gt;Within Azure choose &lt;strong&gt;Create a resource&lt;/strong&gt; and then select &lt;strong&gt;Function App&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyety29oxhrqy43m94ukn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyety29oxhrqy43m94ukn.png" alt="Create Azure Function"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create our first &lt;strong&gt;System&lt;/strong&gt; function using a new resource group and name, choosing to deploy code using the .NET stack, version 7 Isolated, and a region applicable to you.  The operating system will be Linux and we'll use the Consumption hosting option.&lt;/p&gt;
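&lt;p&gt;If you prefer the command line, a Function App with the same settings can be created using the Azure CLI.  This is just a sketch, with placeholder resource, storage and region names that mirror the portal choices above, so substitute your own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a resource group and a storage account for the function app
az group create --name devto-apiman-rg --location uksouth

az storage account create --name devtoapimanstore --resource-group devto-apiman-rg

# Create the system function app: Linux, Consumption plan, .NET 7 Isolated
az functionapp create --name devto-apiman-system --resource-group devto-apiman-rg \
  --storage-account devtoapimanstore --consumption-plan-location uksouth \
  --os-type Linux --runtime dotnet-isolated --runtime-version 7.0 --functions-version 4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;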

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhsw78o7ou5zvi4zkce1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhsw78o7ou5zvi4zkce1.png" alt="Azure Function Setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Review + create&lt;/strong&gt; as all other settings can be left as their defaults.&lt;/p&gt;

&lt;p&gt;Once this is done create a second &lt;strong&gt;User&lt;/strong&gt; function with the same options but a different name.&lt;/p&gt;

&lt;p&gt;After this you should have two functions created in Azure, mine are called &lt;strong&gt;devto-apiman-system&lt;/strong&gt; and &lt;strong&gt;devto-apiman-user&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8hde4t8r15gnbr1k9ii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8hde4t8r15gnbr1k9ii.png" alt="Created Azure Functions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Visual Studio Projects
&lt;/h2&gt;

&lt;p&gt;You can use whatever development tool you want to create your code.  In this example I'm going to use &lt;a href="https://visualstudio.microsoft.com/" rel="noopener noreferrer"&gt;Visual Studio&lt;/a&gt;, whose Community edition is free to use, but you can also use Visual Studio Code or something like Rider from JetBrains.&lt;/p&gt;

&lt;p&gt;In Visual Studio choose to create a new Azure Function and choose a name; I'm going to use &lt;strong&gt;devto.apiman.system&lt;/strong&gt; with a solution called &lt;strong&gt;DevTo APIManagement&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f801xsn38r5x64ez6qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f801xsn38r5x64ez6qd.png" alt="Create Solution"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the next screen make sure you've chosen &lt;strong&gt;.NET 7.0 Isolated&lt;/strong&gt; for the 'Functions worker' and &lt;strong&gt;Http trigger&lt;/strong&gt; for the 'Function'.  Make sure that the 'Authorization level' is set to &lt;strong&gt;Function&lt;/strong&gt;; this means that a key will be required to call the function once it is deployed to Azure.  If we set this to 'Anonymous' we could still wrap it with an API key when calling the function from Azure API Management, but if a user found the direct URI of the function they would be able to call it without a key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed4ly6bxyt8mrirbwgvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed4ly6bxyt8mrirbwgvq.png" alt="Function Settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once created, add another function project to the solution with the same settings for the user function.  You should then have a solution that looks a little like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9pk6rqfrxd482v6dfq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9pk6rqfrxd482v6dfq5.png" alt="Solution Structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We're going to keep this simple for the moment, so change the &lt;strong&gt;Function1.cs&lt;/strong&gt; in your system function so that the function name is something better ("GenerateUserAPIKey" in my case), set the Trigger to be a "post" request only, and tweak the message returned.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

namespace devto.apiman.system
{
    public class Function1
    {
        private readonly ILogger _logger;

        public Function1(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger&amp;lt;Function1&amp;gt;();
        }

        [Function("GenerateUserAPIKey")]
        public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
        {
            _logger.LogInformation("C# HTTP trigger function processed a request.");

            var response = req.CreateResponse(HttpStatusCode.OK);
            response.Headers.Add("Content-Type", "text/plain; charset=utf-8");

            response.WriteString("Generated a user API Key!");

            return response;
        }
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Edit &lt;strong&gt;Function1.cs&lt;/strong&gt; in the user function in a similar way but make it a "get" request.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

namespace devto.apiman.user
{
    public class Function1
    {
        private readonly ILogger _logger;

        public Function1(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger&amp;lt;Function1&amp;gt;();
        }

        [Function("GetUserData")]
        public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
        {
            _logger.LogInformation("C# HTTP trigger function processed a request.");

            var response = req.CreateResponse(HttpStatusCode.OK);
            response.Headers.Add("Content-Type", "text/plain; charset=utf-8");

            response.WriteString("We got some user data!");

            return response;
        }
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Update your solution properties so that both projects are started, and then run the solution to test it out.  You should see the two functions starting, with their URIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frryxgprzgxl6lejzr8xg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frryxgprzgxl6lejzr8xg.png" alt="Running Functions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you use something like Postman to call these APIs you should get an 'OK' response and the appropriate message back.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63iya3qkevtdp2vzm3s5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63iya3qkevtdp2vzm3s5.png" alt="Postman test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that although we've specified &lt;strong&gt;AuthorizationLevel.Function&lt;/strong&gt; for these APIs, we don't need to provide a key to call them.  This is because function authorization is ignored when running locally; it only comes into effect once we upload to Azure.  We could still read a passed-in function key from the request header for testing purposes, but it won't be validated during the call.&lt;/p&gt;

&lt;p&gt;Let's push these functions up to Azure and test them there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy to Azure
&lt;/h2&gt;

&lt;p&gt;Right-click on the system function project and choose &lt;strong&gt;Publish&lt;/strong&gt;.  Work through the wizard, logging into your Azure account, choosing 'Azure Function App (Linux)' and then your system function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg6ce6usk3u353ymwzm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg6ce6usk3u353ymwzm0.png" alt="Publish System Function"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will generate a publishing profile.  Once this has completed choose &lt;strong&gt;Publish&lt;/strong&gt;, and after a minute or so you should receive a 'Publish succeeded' message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkfmbu63myo7cps46ovu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkfmbu63myo7cps46ovu.png" alt="Publish Succeeded"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go ahead and publish our user function in the same way.&lt;/p&gt;

&lt;p&gt;If you happen to receive errors during this publish stage, stopping the function in Azure and then retrying the publish will quite often clear the issue.&lt;/p&gt;

&lt;p&gt;Once published head over to our User function in Azure.  You should see our &lt;strong&gt;GetUserData&lt;/strong&gt; function listed at the bottom of the page (sometimes you might need to restart and refresh the app a few times to see this) and the URI in the top right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xdhvijg4d3fkpqt4yed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xdhvijg4d3fkpqt4yed.png" alt="Published Function"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's go ahead and call the &lt;strong&gt;GetUserData&lt;/strong&gt; function like we did locally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws0011rx30rhcrt2ekjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws0011rx30rhcrt2ekjo.png" alt="GetUserData"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time notice that we get a &lt;strong&gt;401 Unauthorized&lt;/strong&gt; response.  That's because &lt;strong&gt;AuthorizationLevel.Function&lt;/strong&gt; is now in effect and we need a function key to call the API.&lt;/p&gt;

&lt;p&gt;Head over to the &lt;strong&gt;App keys&lt;/strong&gt; tab in Azure and notice that we have some host keys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcrfi89pe8vy4s0opwtl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcrfi89pe8vy4s0opwtl.png" alt="App Keys"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we copy the &lt;strong&gt;default&lt;/strong&gt; key and then add an &lt;strong&gt;x-functions-key&lt;/strong&gt; header containing that key value to our request in Postman, the request should now complete successfully.&lt;/p&gt;
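&lt;p&gt;The equivalent request can also be made with curl.  The URL below is a placeholder for your own function's URI, and YOUR_DEFAULT_KEY stands in for the copied host key:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Call the protected function, passing the host key in the x-functions-key header
curl -H "x-functions-key: YOUR_DEFAULT_KEY" \
  https://devto-apiman-user.azurewebsites.net/api/GetUserData
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;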

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptsiifwiohfzemq5j7sp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptsiifwiohfzemq5j7sp.png" alt="Call function with App key"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Mission complete, right?  We have now published our functions to Azure and protected them with a key....&lt;/p&gt;

&lt;p&gt;....the trouble is, we want to provide our users with their own API keys, rather than a global key that everyone shares, so that we can grant and withdraw access as required.&lt;/p&gt;

&lt;p&gt;While we could manually create new host keys for the functions, we really want a way to manage that easily, programmatically generate our keys, and potentially provide access to specific calls for different users.&lt;/p&gt;

&lt;p&gt;For that, we'll need a better way to manage access and we'll use Azure API Management.&lt;/p&gt;

&lt;p&gt;Head on over to Part 2 to continue this.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>azurefunctions</category>
      <category>dotnet</category>
      <category>api</category>
    </item>
    <item>
      <title>Converting ASP.NET Core user secrets to environment variables at runtime</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Fri, 11 Aug 2023 16:06:15 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/converting-aspnet-core-user-secrets-to-environment-variables-at-runtime-26lf</link>
      <guid>https://forem.com/panachesoftwaredev/converting-aspnet-core-user-secrets-to-environment-variables-at-runtime-26lf</guid>
      <description>&lt;p&gt;I'm currently in the process of building &lt;a href="https://www.panachesports.com"&gt;Panache Sports&lt;/a&gt; with a front end coded as a server side &lt;a href="https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blazor"&gt;Blazor&lt;/a&gt; web application backed by serverless &lt;a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview"&gt;Azure Functions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the front end I'm using the &lt;a href="https://learn.microsoft.com/en-us/aspnet/core/security/app-secrets"&gt;secret manager&lt;/a&gt; to prevent me from checking configuration details into GitHub.&lt;/p&gt;

&lt;p&gt;This works well and is simple to use.  In my project directory I run the following in the .NET CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet user-secrets init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a 'secrets.json' specific to the project on my local machine in this location:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%APPDATA%\Microsoft\UserSecrets\&amp;lt;user_secrets_id&amp;gt;\secrets.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I can then add new secrets using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet user-secrets set "Mysecrets:ApiKey" "12345"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, I can use the 'Manage User Secrets' option when right-clicking on my project in Visual Studio:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nC7dCQJ_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3dmx3n4c7ln4acd0tyw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nC7dCQJ_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3dmx3n4c7ln4acd0tyw9.png" alt="Manage Secrets" width="311" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Which allows me to directly edit the json file.&lt;/p&gt;

&lt;p&gt;In my code I can then simply refer to these via the IConfiguration interface as if they had been placed in 'appsettings.json' within the project.  So, for example, in the 'Program.cs' file of my project I could do the following to get the secret I created above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mySecret = builder.Configuration.GetValue&amp;lt;string&amp;gt;("Mysecrets:ApiKey");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So I can create and access configuration details when running the code locally, knowing that my API Keys and other data won't be checked into GitHub for others to see.&lt;/p&gt;

&lt;p&gt;When I push this project from my local development environment up to an &lt;a href="https://azure.microsoft.com/en-gb/products/app-service/web"&gt;Azure Web App Service&lt;/a&gt; the configuration setup will be stored as environment variables, but Azure will link everything up so that my 'builder.Configuration.GetValue' call still works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Functions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When developing the Azure Functions locally that I use for the backend data access I can use environment variables to get the same type of details.&lt;/p&gt;

&lt;p&gt;The Azure Function project in Visual Studio includes a 'local.settings.json' file, which Git ignores by default, where I can store my configuration details.&lt;/p&gt;

&lt;p&gt;I can then access those values via the 'System.Environment' call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mySecret = Environment.GetEnvironmentVariable("My_Secret_Api_Key");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And again, this works locally and when I publish my Function to Azure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The shared code issue
&lt;/h2&gt;

&lt;p&gt;Now comes the problem I faced.  I have some code in my Azure Function that I would like to re-use in my web application, and in the Azure Function that code uses an 'Environment.GetEnvironmentVariable' call.&lt;/p&gt;

&lt;p&gt;No problem, right?  That call will work quite happily when I push the code to Azure, so the following lookup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mySecret = Environment.GetEnvironmentVariable("My_Secret_Api_Key");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;will return the exact same value as getting it from the configuration via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mySecret = builder.Configuration.GetValue&amp;lt;string&amp;gt;("My_Secret_Api_Key");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So I can reuse the code in both projects without any changes and all will work fine in Azure.&lt;/p&gt;

&lt;p&gt;But what about when running locally?  That's where I now have an issue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mySecret = Environment.GetEnvironmentVariable("My_Secret_Api_Key");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;looks in a different place to 'appsettings.json' or the secret manager.  It looks at the environment variables stored in 'launchSettings.json', which are also available via the launch profiles screen:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--50bY6fY5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzjtk5wd99u8i73uw87u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--50bY6fY5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzjtk5wd99u8i73uw87u.png" alt="Launch Profiles" width="779" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although I can avoid checking these into GitHub as well, the main issue is that I'm now storing some configuration in the secret manager while the rest needs to be stored in the launch settings.&lt;/p&gt;

&lt;p&gt;That's not a major problem, but it is a pain to maintain two areas.  It would be much nicer if I could put everything in the secret manager and not have to change any of my calls.&lt;/p&gt;

&lt;p&gt;Well it turns out I can.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting environment variables via code
&lt;/h2&gt;

&lt;p&gt;Not only can I read environment variables at runtime, but I can also set them at runtime with the following call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Environment.SetEnvironmentVariable("My_Secret_Api_Key", "12345");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I can read my secret from the secret manager and set it as an environment variable at runtime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mySecret = builder.Configuration.GetValue&amp;lt;string&amp;gt;("My_Secret_Api_Key");

Environment.SetEnvironmentVariable("My_Secret_Api_Key", mySecret);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only issue with this is that it is unnecessary when I'm running in Azure, so I can limit it to only setting the environment variable when running in my (local) development environment.&lt;/p&gt;

&lt;p&gt;You can determine the environment in different ways, but in my case I simply do the following in the 'Program.cs' of my web application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var mySecret = builder.Configuration.GetValue&amp;lt;string&amp;gt;("My_Secret_Api_Key");

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    Environment.SetEnvironmentVariable("My_Secret_Api_Key", mySecret);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now when I'm running locally I store all my configuration values in the secret manager and convert them at runtime to environment variables if required.&lt;/p&gt;

&lt;p&gt;When I push the code to Azure everything gets set using the configuration tab of the Azure web app service and the code works without any changes.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OsH2IMsE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee" width="217" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>aspdotnet</category>
      <category>azurefunctions</category>
      <category>blazor</category>
    </item>
    <item>
      <title>Switching from SQL Server to Azure CosmosDB</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Fri, 04 Aug 2023 11:11:48 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/switching-from-sql-server-to-azure-cosmosdb-91j</link>
      <guid>https://forem.com/panachesoftwaredev/switching-from-sql-server-to-azure-cosmosdb-91j</guid>
      <description>&lt;h2&gt;
  
  
  Switching from SQL Server to Cosmos DB
&lt;/h2&gt;

&lt;p&gt;For the past 20 years or so I’ve been using and developing solutions using relational databases, primarily SQL Server, but I’ve also used Oracle and MySQL.&lt;/p&gt;

&lt;p&gt;As I began to plan out development for &lt;a href="https://www.panachesports.com" rel="noopener noreferrer"&gt;Panache Sports&lt;/a&gt; my initial thoughts were to do what I’ve always done and use a relational database to store all my data.  I’d normally look at using MySQL to keep costs down, but with the resources provided by &lt;a href="https://www.microsoft.com/en-us/startups" rel="noopener noreferrer"&gt;Microsoft Startups&lt;/a&gt; the option to use a managed Azure SQL database was also available.&lt;/p&gt;

&lt;p&gt;This would have been the easiest route to go for, but as I looked at the data I’d be storing for Panache Sports, and how I was building it using serverless Azure functions, I wondered if there was another way to do things.&lt;/p&gt;

&lt;p&gt;Enter &lt;a href="https://learn.microsoft.com/en-us/azure/cosmos-db/introduction" rel="noopener noreferrer"&gt;Azure Cosmos DB&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.panachesports.com" rel="noopener noreferrer"&gt;Panache Sports&lt;/a&gt; being designed as a serverless SaaS (Software as a Service) solution, being able to take advantage of a high availability, scalable database like Cosmos DB was very appealing.  But there was a big downside, throwing away 20 years of relational database modelling to move to a NoSQL, unstructured database was going to have a significant learning curve.&lt;/p&gt;

&lt;p&gt;So, if you’ve never looked at Cosmos DB, or have wondered if it might be applicable to your use case, here’s my introduction, including some of the decisions around why I chose to go down this route.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Decisions
&lt;/h2&gt;

&lt;p&gt;Let’s first look at how I would have structured my data using a relational database.  This will be a very high-level, simplified example, but it should cover some of the key concepts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.panachesports.com" rel="noopener noreferrer"&gt;Panache Sports&lt;/a&gt; is designed to store data about sports organisations and the staff and players of those organisations.  So let’s think of a simple example structure for storing that information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foez60ut241iiyvxkhqxi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foez60ut241iiyvxkhqxi.png" alt="Example Panache Sports database structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’ve approached things in the normal relational DB way.  We’ve normalised our data so that we have tables storing specific chunks of information.  We’re using header and detail tables to split the data we store for users and organisations, and tables like Address are shared between users, organisations and players so that data is kept in a single, consistent location.&lt;/p&gt;

&lt;p&gt;For organisations, we’re storing the type of sport (football, baseball, Formula One, etc.), but we link to a separate table via an Id so that we can add new sports types and change names and descriptions without changing anything on our organisation, knowing we’ll always get the latest data.&lt;/p&gt;

&lt;p&gt;It all looks fine and normal; you’d probably make changes, but this is just a simple example.&lt;/p&gt;

&lt;p&gt;So how would we move this to Cosmos DB?&lt;/p&gt;

&lt;h2&gt;
  
  
  Moving our relational structure to Cosmos DB
&lt;/h2&gt;

&lt;p&gt;Cosmos DB, in its NoSQL mode, which is what I’ll be discussing here, stores unstructured data in JSON format.  Rather than creating rigid table structures, you can store any JSON document you want in a Cosmos DB container, even completely differently structured JSON documents in the same container.&lt;/p&gt;

&lt;p&gt;At a very high level Cosmos DB consists of Databases, Containers and Items.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h7ww1hq5zrdmz4d4zip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8h7ww1hq5zrdmz4d4zip.png" alt="Simplified Cosmos DB structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A database contains a group of containers, and each container stores items.&lt;/p&gt;

&lt;p&gt;Containers can have triggers, stored procedures and user-defined functions which allow you to maintain the items they store, and you can also react to item changes via the change feed, for example from a .NET Azure Function.&lt;/p&gt;

&lt;p&gt;Items within a container are JSON documents, and individual documents within the same container can have completely different structures.  However, you must define a partition key for the container, and every item must include that partition key property, even if the rest of the data is different.&lt;/p&gt;

&lt;p&gt;A partition key is simply a property on your JSON items that defines how data is logically grouped within a container, and Cosmos DB will then distribute data across physical partitions automatically.  The partition key is used when writing and updating items, and it can also be used in “where” clauses to provide efficient data retrieval.&lt;/p&gt;
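&lt;p&gt;As a sketch (all property names here are illustrative, not from Panache Sports), an item in a container partitioned on an &lt;code&gt;organisationId&lt;/code&gt; property might look like this, with every item in that container carrying the same property:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "id": "player-001",
  "organisationId": "org-42",
  "type": "player",
  "name": "Jane Smith"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;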

&lt;p&gt;A logical partition can store up to 20GB of data, so making decisions on how you partition your data is important.&lt;/p&gt;

&lt;p&gt;I’ll probably create a separate post that covers containers and partitions but for more background take a look at &lt;a href="https://learn.microsoft.com/en-us/azure/cosmos-db/resource-model" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/cosmos-db/resource-model&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cosmos DB allows for geo-replication of data, so you can make sure that data is always accessible in a physical location close to your users; however, the cost increases significantly with this.&lt;/p&gt;

&lt;p&gt;In Cosmos DB, RUs are what matter.  Rather than basing pricing tiers on storage space or compute, Cosmos DB pricing is based on Request Units (RUs) per second.&lt;/p&gt;

&lt;p&gt;One RU is the cost of a point read of a 1KB item in Cosmos DB.  So if we need to read 20KB of data, that’s going to be around 20 RUs.&lt;/p&gt;

&lt;p&gt;You can get started with the free tier of Cosmos DB, which provides 1000 RUs per second, and if you want a good introduction to RUs check out this Cosmos DB Essentials episode: &lt;a href="https://www.youtube.com/watch?v=3naCwuXhLlk" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=3naCwuXhLlk&lt;/a&gt;&lt;/p&gt;
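&lt;p&gt;To make the arithmetic concrete, here’s a rough back-of-the-envelope sketch.  The ~1 RU per KB rule and the flat query overhead are simplifying assumptions of mine; real RU charges depend on item size, indexing and query shape:&lt;/p&gt;

```python
# Rough RU model: a point read costs ~1 RU per KB (minimum 1 RU),
# and a query carries extra per-request overhead (assumed flat here).
POINT_READ_RU_PER_KB = 1.0
QUERY_OVERHEAD_RU = 2.5  # assumed value, purely illustrative

def point_read_ru(item_kb: float) -> float:
    """Approximate RU cost of reading a single item of item_kb KB."""
    return max(1.0, item_kb * POINT_READ_RU_PER_KB)

def query_ru(result_kb: float) -> float:
    """Approximate RU cost of one query returning result_kb KB."""
    return QUERY_OVERHEAD_RU + result_kb * POINT_READ_RU_PER_KB

# One self-contained 20 KB document vs four separate 5 KB lookups:
one_doc = point_read_ru(20)
four_lookups = sum(query_ru(5) for _ in range(4))
print(one_doc, four_lookups)
```

&lt;p&gt;Under these assumed numbers, fetching one self-contained document is cheaper per read than stitching together several smaller ones, which is the trade-off the rest of this post leans on.&lt;/p&gt;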

&lt;p&gt;Although we could build our Cosmos DB data structure in exactly the same way as we built our relational data structure, we would quickly start hitting issues with our RU charge increasing.&lt;/p&gt;

&lt;p&gt;Microsoft provides a really good example of how you might model data at &lt;a href="https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/model-partition-example" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/model-partition-example&lt;/a&gt; and I suggest having a read through it.&lt;/p&gt;

&lt;p&gt;But let’s take a look at our original data.  If we want to get a complete Organisation record we need to query four separate tables, and this begins to significantly increase our RUs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwsf1g1w9jgxv0ui6wg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwsf1g1w9jgxv0ui6wg0.png" alt="Tables to be modelled in Cosmos DB"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If I split the same data up in Cosmos DB I’d have four items holding parts of the data I need, so when looking up an organisation I’d need to look up and return four objects and then reconstruct the data.  This greatly increases the RUs required to look up a single Organisation record.&lt;/p&gt;

&lt;p&gt;For Cosmos DB, I’m not so much worried about the size of the database, as storage costs keep getting smaller and smaller; instead I want to limit the number of lookups I’m performing.&lt;/p&gt;

&lt;p&gt;So how about this for storing our data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesfmynqjrj4ywgfqknce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesfmynqjrj4ywgfqknce.png" alt="Example Organisation record"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now if we want to get an organisation record we only require one lookup, and because the JSON document is small, its size isn’t really a factor.&lt;/p&gt;
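&lt;p&gt;For illustration, a self-contained organisation document along these lines might look like this (the field names and values are made up for this example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "id": "org-42",
  "name": "Example Racing Team",
  "sportType": { "id": "sport-7", "name": "Formula One" },
  "address": { "line1": "1 Paddock Way", "city": "Silverstone", "country": "UK" },
  "detail": { "founded": 1995, "website": "https://www.example.com" }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;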

&lt;p&gt;If we have an API frontend to our system, then the API call to get an organisation returns the data exactly as it is stored in Cosmos DB; I don’t need to use separate classes/objects for my database and my APIs.&lt;/p&gt;

&lt;p&gt;What’s not to like right?&lt;/p&gt;

&lt;p&gt;….Oh dear, I can hear a collective sigh being let out by all the DB Admins I’ve ever worked with.&lt;/p&gt;

&lt;p&gt;For one, what about the sport type? If “Formula One” were to change its name to “Formula Super One” I’d have to update every record that was using it, instead of a single record.&lt;/p&gt;

&lt;p&gt;If I want to find all the addresses I have in the system, I’ll need to look through every organisation, user and player record rather than query a single table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thinking differently
&lt;/h2&gt;

&lt;p&gt;Because I’m trying to prevent additional lookups wherever possible, I’m going to end up duplicating data across my records.  This is where, in Cosmos DB, we need to throw out everything we’ve previously learnt about normalising our data and instead make decisions based on how frequently the data will be read and updated.&lt;/p&gt;

&lt;p&gt;In my organisation record above, yes, if I have 20 teams in the system and “Formula One” changes its name to “Formula Super One” I’m going to have to change 20 records rather than just 1.  But how often will that happen?  It’s very unlikely that data like that will change, so I will make the decision to take the hit on a large multi-record update at some point in the future, knowing that I am saving on reads and speed of access on a day-to-day basis.&lt;/p&gt;
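&lt;p&gt;The multi-record update is easy to picture with a toy model, treating the container as a plain Python list of dicts (the structures and names here are illustrative only):&lt;/p&gt;

```python
# Toy fan-out update: renaming a sport means touching every
# organisation document that embeds a copy of it.
container = [
    {"id": f"org-{n}", "sportType": {"id": "sport-7", "name": "Formula One"}}
    for n in range(20)
]

def rename_sport(items, sport_id, new_name):
    """Update the embedded sport name on every matching document."""
    updated = 0
    for doc in items:
        if doc["sportType"]["id"] == sport_id:
            doc["sportType"]["name"] = new_name
            updated += 1
    return updated

print(rename_sport(container, "sport-7", "Formula Super One"))  # prints 20
```

&lt;p&gt;Twenty writes instead of one, but only on the rare occasions the shared data actually changes.&lt;/p&gt;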

&lt;p&gt;In Cosmos DB you want to de-normalise your data, storing duplicates of information in records to speed up reads if you know the likelihood of that data changing is small.  We’ll probably be okay with storing duplicate addresses across multiple records because we know that the address of a team’s stadium or head office isn’t likely to change all that often.&lt;/p&gt;

&lt;p&gt;What if I wanted to get a list of my organisations? One way of doing that would be to add a document type and then store a separate list of organisations, something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjcyuruxbfbmhnncgnqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjcyuruxbfbmhnncgnqr.png" alt="Organisation record with 'Type' property added"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here I’ve added a type of “organisation” to my record and I can store another record with a completely different structure in the same Cosmos DB container with a type of “organisationlist”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfmzlxqb6bnp93f3eo7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfmzlxqb6bnp93f3eo7n.png" alt="Organisation list record"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if I want to display a list of organisations I look for a document with the type of “organisationlist” and return a single record with summary data, rather than querying for multiple organisations.  If I then want the full organisation, I’ve got the Id in this list so I can perform a direct lookup to get it.&lt;/p&gt;

&lt;p&gt;Instead of doing a lookup that returns 20 organisation records we will return 1 list record, and if we need more information we can perform a second lookup as required.&lt;/p&gt;
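&lt;p&gt;As a sketch of that access pattern, again modelling the container as a plain Python dict keyed by partition key and id (all names and structures are made up):&lt;/p&gt;

```python
# Toy sketch of the list-document pattern: one "organisationlist"
# document holds summary rows, and each row's id allows a direct
# point read of the full record.
container = {
    ("tenant-1", "org-1"): {
        "id": "org-1", "type": "organisation",
        "name": "Example Racing Team", "sport": "Formula One",
    },
    ("tenant-1", "organisationlist"): {
        "id": "organisationlist", "type": "organisationlist",
        "organisations": [{"id": "org-1", "name": "Example Racing Team"}],
    },
}

def point_read(pk: str, item_id: str) -> dict:
    """One lookup by partition key and id, like a Cosmos DB point read."""
    return container[(pk, item_id)]

# Display the list with a single read...
summaries = point_read("tenant-1", "organisationlist")["organisations"]
# ...then drill into a full record only when needed.
full = point_read("tenant-1", summaries[0]["id"])
print(summaries[0]["name"], full["sport"])
```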

&lt;p&gt;Again, we’re duplicating data between the master record and the list record, and we have to keep these in sync, but we’re doing that with data we know isn’t going to change all that often so we’re prioritising the read operations for speed and cost.&lt;/p&gt;

&lt;p&gt;This de-normalisation goes against everything we try to achieve in a relational database, and how I’ve learnt to structure things over the past 20 years, so it really takes some getting used to.&lt;/p&gt;

&lt;p&gt;At a basic level, we’re structuring our data to prioritise how it’s going to be accessed. If we know it won’t change often but will be read frequently we’ll prioritise the reading of that data and duplicate information across records, whereas if we know that the data will be updated frequently we may decide to normalise it as we would have done in a relational database and prioritise the writes.&lt;/p&gt;

&lt;p&gt;In Cosmos DB the storage cost isn’t really the concern; the RUs are, and so how you structure your data will depend on how you think it will be accessed.&lt;/p&gt;

&lt;p&gt;I’ve worked a lot with large ERP systems that store vast amounts of transactional data, like invoices, purchase orders and wages, with hundreds if not thousands of records moving through the system daily.  For that data a relational database may make sense, where you avoid duplication of data and update small chunks of information rather than large records.  But for Panache Sports we’re dealing with organisation and player/staff data, which is unlikely to change on a day-to-day or even week-to-week basis; as such, we can make use of Cosmos DB to efficiently store and retrieve our information frequently, and only need to worry about updates every now and then.&lt;/p&gt;

&lt;p&gt;I’ll look to cover some further Cosmos DB topics, like how you structure containers and their partition keys, which are a crucial component of making sure you can organise your data appropriately.  For now, I hope this has provided a good introduction to how you need to think differently about structuring your data if you’re thinking about a move from a relational database to Cosmos DB.&lt;/p&gt;

&lt;p&gt;Like everything, Cosmos DB isn’t appropriate for every situation, or all types of data, but it may work for you in specific areas.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cosmosdb</category>
      <category>sqlserver</category>
      <category>azure</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Windows Terminal Themes</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Thu, 16 Jun 2022 12:21:07 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/windows-terminal-themes-3d05</link>
      <guid>https://forem.com/panachesoftwaredev/windows-terminal-themes-3d05</guid>
      <description>&lt;p&gt;A quick guide showing how to change your Windows Terminal from looking like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8ke7acrzjwvp41vkx1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8ke7acrzjwvp41vkx1g.png" alt="Windows Terminal Before"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1gmvb3yi702g4rbs2lv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1gmvb3yi702g4rbs2lv.png" alt="Windows Terminal Theme"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Fonts
&lt;/h2&gt;

&lt;p&gt;Download the &lt;strong&gt;Cascadia Code (aka Caskaydia Cove Nerd Font)&lt;/strong&gt; from &lt;a href="https://github.com/ryanoasis/nerd-fonts/releases/tag/v2.1.0" rel="noopener noreferrer"&gt;Nerd Fonts&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Direct Link: &lt;a href="https://github.com/ryanoasis/nerd-fonts/releases/download/v2.1.0/CascadiaCode.zip" rel="noopener noreferrer"&gt;Caskaydia Cove Nerd Font&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unzip the file and double click on the &lt;code&gt;Caskaydia Cove Nerd Font Complete&lt;/code&gt; file and choose &lt;code&gt;Install&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d15c71ju23e3exjeqf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d15c71ju23e3exjeqf9.png" alt="Install Caskaydia Cove Nerd Font"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install PowerShell Core (Optional)
&lt;/h2&gt;

&lt;p&gt;If you want to use the new cross platform and open source version of PowerShell head over to &lt;a href="https://github.com/PowerShell/PowerShell" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and follow the install instructions applicable to you.  &lt;/p&gt;

&lt;p&gt;In this example we'll download the &lt;strong&gt;Windows (x64) MSI&lt;/strong&gt; (&lt;a href="https://github.com/PowerShell/PowerShell/releases/download/v7.2.4/PowerShell-7.2.4-win-x64.msi" rel="noopener noreferrer"&gt;Direct Link&lt;/a&gt;) and then run the MSI to install.&lt;/p&gt;

&lt;p&gt;Once finished, re-open Windows Terminal and you should see a new &lt;strong&gt;PowerShell&lt;/strong&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxt7bldhws74czfhmzws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgxt7bldhws74czfhmzws.png" alt="Windows Terminal Powershell Core"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also change your Windows Terminal start-up options to set PowerShell Core as your default profile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftturzqhxzrubkh3k48h9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftturzqhxzrubkh3k48h9.png" alt="Windows Terminal Default Profile"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The following steps will need to be performed in both PowerShell versions if you want to match your themes across both.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set default Font
&lt;/h2&gt;

&lt;p&gt;Within Windows Terminal &lt;strong&gt;Settings &amp;gt; PowerShell &amp;gt; Appearance&lt;/strong&gt; change the &lt;code&gt;Font face&lt;/code&gt; to &lt;code&gt;CaskaydiaCove Nerd Font&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft80mp9pl460lzzi97eg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft80mp9pl460lzzi97eg0.png" alt="Windows Terminal Font face"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Oh My Posh
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ohmyposh.dev/" rel="noopener noreferrer"&gt;Oh My Posh&lt;/a&gt; is the package that allows us to theme our terminal window.  The installation process has changed several times since I started using it a couple of years ago but it's now pretty simple.&lt;/p&gt;

&lt;p&gt;Make sure you have the latest version of &lt;strong&gt;App Installer&lt;/strong&gt; from the &lt;strong&gt;Windows Store&lt;/strong&gt; as this will provide you with &lt;code&gt;winget&lt;/code&gt;; it should be installed by default on Windows 11 but may require an update.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.microsoft.com/p/app-installer/9nblggh4nns1#activetab=pivot:overviewtab" rel="noopener noreferrer"&gt;App Installer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In PowerShell run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;winget install JanDeDobbeleer.OhMyPosh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mrt17lvtmnh9871qtsc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3mrt17lvtmnh9871qtsc.png" alt="winget Oh My Posh"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new PowerShell profile if one doesn't exist with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (!(Test-Path -Path $PROFILE )) { New-Item -Type File -Path $PROFILE -Force }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;If you're using the original Windows PowerShell, run the following command as administrator to allow script execution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope LocalMachine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not required for PowerShell Core.&lt;/p&gt;




&lt;p&gt;To install new terminal icons (these change the folder/file icons in terminal) run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Install-Module -Name Terminal-Icons -Repository PSGallery
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit your profile by using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;notepad $PROFILE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following lines, where &lt;code&gt;{username}&lt;/code&gt; is replaced with your Windows username.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oh-my-posh init pwsh --config C:\Users\{username}\AppData\Local\Programs\oh-my-posh\themes\paradox.omp.json | Invoke-Expression
Import-Module -Name Terminal-Icons
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and then restart Windows Terminal and you should see your new theme.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1gmvb3yi702g4rbs2lv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1gmvb3yi702g4rbs2lv.png" alt="Windows Terminal Theme"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Themes
&lt;/h2&gt;

&lt;p&gt;You can see the other themes available to you at &lt;a href="https://ohmyposh.dev/docs/themes" rel="noopener noreferrer"&gt;Oh My Posh Themes&lt;/a&gt;, or by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Get-PoshThemes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then change &lt;code&gt;paradox.omp.json&lt;/code&gt; in your profile file to a different theme, e.g. &lt;code&gt;jandedobbeleer.omp.json&lt;/code&gt;.&lt;/p&gt;
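&lt;p&gt;For example, the &lt;code&gt;oh-my-posh&lt;/code&gt; line in your profile would become the following, where &lt;code&gt;{username}&lt;/code&gt; is again replaced with your Windows username:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oh-my-posh init pwsh --config C:\Users\{username}\AppData\Local\Programs\oh-my-posh\themes\jandedobbeleer.omp.json | Invoke-Expression
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;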

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l763l1aoivm56q83210.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2l763l1aoivm56q83210.png" alt="jandedobbeleer theme"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>windowsterminal</category>
      <category>ohmyposh</category>
      <category>powershell</category>
      <category>windows11</category>
    </item>
    <item>
      <title>Making a local MicroK8s environment available externally (Part 5 - Reverse Tunnels)</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Wed, 15 Jun 2022 11:03:29 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-5-reverse-tunnels-480c</link>
      <guid>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-5-reverse-tunnels-480c</guid>
      <description>&lt;h2&gt;
  
  
  First a warning
&lt;/h2&gt;

&lt;p&gt;If you've been following along with our previous steps you will have created your own local Linux VM running Microk8s, in this step we're going to expose this VM to the public internet.&lt;/p&gt;

&lt;p&gt;Making a hole in your, or anyone else's, network to expose a machine, virtual or not, increases that network's risk of attack.  You are responsible for your security and the risks created by following these steps.&lt;/p&gt;

&lt;p&gt;...On with the show!&lt;/p&gt;

&lt;h2&gt;
  
  
  Reverse Tunnels
&lt;/h2&gt;

&lt;p&gt;Because I'm running my MicroK8s Linux VM at home on my desktop it isn't exposed to the public internet without me needing to do some work.&lt;/p&gt;

&lt;p&gt;I could pay extra for a static IP address, but depending on your ISP that may simply not be an option, or it could be prohibitively expensive.&lt;/p&gt;

&lt;p&gt;As an alternative, we can use reverse tunnels to route network traffic to and from our local machine to the outside world, regardless of whether we have a static IP or not.&lt;/p&gt;

&lt;p&gt;Previously I've used services like &lt;a href="https://ngrok.com/" rel="noopener noreferrer"&gt;ngrok&lt;/a&gt; to perform this, and there is a list of other options &lt;a href="https://github.com/anderspitman/awesome-tunneling" rel="noopener noreferrer"&gt;here&lt;/a&gt;.  But quite often when using a paid service I've quickly hit limitations, like a limited number of addresses or not being able to route traffic across multiple ports.  Because of this I looked for another option, and, like the rest of this series, I wanted to do it myself.&lt;/p&gt;

&lt;p&gt;Enter SSH, which we can use to provide a tunnel from one server to another.&lt;/p&gt;

&lt;p&gt;So far we've made use of free open source software....well, apart from Windows 11 Pro, but I'm guessing that the majority of the audience already had that.  For this stage, though, one of the things we will need is our own server that's already accessible to the outside world, which, in general, is going to cost us something.&lt;/p&gt;

&lt;p&gt;There are a huge number of options here, varying in price and provider, but for this example I'm going to be creating and hosting a VM in &lt;a href="https://azure.microsoft.com/" rel="noopener noreferrer"&gt;Microsoft Azure&lt;/a&gt;.  If you don't have an account with Azure, you can sign up for a free one and receive $200 of credit for 30 days if you just want to test things out; alternatively you can use any other provider like &lt;a href="https://www.digitalocean.com/" rel="noopener noreferrer"&gt;DigitalOcean&lt;/a&gt;, &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt; or &lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt;; the choice is yours.&lt;/p&gt;

&lt;p&gt;Now before we start to worry too much about cost, all we're going to be using this VM for is forwarding data to and from a couple of ports, so we're going to build all this on the cheapest VM we can get away with.&lt;/p&gt;

&lt;p&gt;Last but not least, before we get started, I used the following guide from the brilliant &lt;a href="https://www.jeffgeerling.com/blog/2022/ssh-and-http-raspberry-pi-behind-cg-nat" rel="noopener noreferrer"&gt;Jeff Geerling&lt;/a&gt; when I was trying this out for the first time on a Raspberry Pi cluster I built (maybe that will be the basis for another guide!).  I urge you to check out Jeff's page and &lt;a href="https://www.youtube.com/c/JeffGeerling/" rel="noopener noreferrer"&gt;YouTube channel&lt;/a&gt;, he makes great videos.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Linux VM....again!
&lt;/h2&gt;

&lt;p&gt;This time, though, we're creating a new VM using Azure.  Of course we could have created our first VM in Azure and then wouldn't have needed to go through this step, but hosting a full Linux VM running MicroK8s and our services could be costly; this way we're only paying for a minimal VM.&lt;/p&gt;

&lt;p&gt;With an Azure account, head into the &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;portal&lt;/a&gt; and choose the option to 'Create Resource'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhva4ial5hkkqwow2l9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhva4ial5hkkqwow2l9v.png" alt="Azure Create Resource"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then choose the option to create a virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdib8etm6qc5215xtreb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdib8etm6qc5215xtreb.png" alt="Azure Create VM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following are the settings I used for creating my VM but you may want to adjust these as you see fit.  Pay close attention to the 'Size' as this determines the cost of your VM.  I've chosen the minimum possible for my subscription, but if you want to use this VM for other things you may want to increase the performance here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiudlsek33e1xsg08twv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiudlsek33e1xsg08twv9.png" alt="Azure VM Settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For 'Disks' you may want to change from premium storage to standard HDD which again reduces the cost.&lt;/p&gt;

&lt;p&gt;My 'Network' settings are set as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwwvtokydyato6622w2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwwvtokydyato6622w2o.png" alt="Azure VM Network"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure you have the 'Public IP' set to '(new)', and also make note of the inbound ports that are allowed (also shown on the first screen).  This should default to allowing connections via 'SSH' (port 22), which will let us log in to our VM once it's created.&lt;/p&gt;

&lt;p&gt;Go ahead and create the machine, which should only take a few minutes, and once complete you should be able to see the resource in your dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F487wz9na4uvfo8s6u39p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F487wz9na4uvfo8s6u39p.png" alt="Azure VM Running"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open up the terminal on your &lt;strong&gt;local machine&lt;/strong&gt; and you should be able to SSH to this new VM using the IP address shown.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh {username}@{Azure VM IP}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5hbi07xa4kche6q9erj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5hbi07xa4kche6q9erj.png" alt="Azure VM SSH"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup SSH
&lt;/h2&gt;

&lt;p&gt;The first thing we need to do on our &lt;strong&gt;Azure VM&lt;/strong&gt; is set up SSH correctly.  We need the &lt;code&gt;AllowTCPForwarding&lt;/code&gt; option enabled; it should be set to yes by default, but we can check it with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sshd -T | grep -E 'allowtcpforwarding'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;allowtcpforwarding yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now need to enable &lt;code&gt;GatewayPorts&lt;/code&gt;, which you can do by editing the SSH config file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/ssh/sshd_config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Find the line that reads &lt;code&gt;#GatewayPorts no&lt;/code&gt; and change this to &lt;code&gt;GatewayPorts yes&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4nes2muit2e8ee1gwfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4nes2muit2e8ee1gwfo.png" alt="Azure VM GatewayPorts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exit out of nano &lt;code&gt;CTRL+X, Y, ENTER&lt;/code&gt; and run the following to restart SSH.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now using the following command you can check that both options are enabled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sshd -T | grep -E 'gatewayports|allowtcpforwarding'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5eerhdumfgqfp1audgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5eerhdumfgqfp1audgy.png" alt="Azure VM GatewayPorts and TCP Forwarding"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Back on our &lt;strong&gt;Local Linux VM&lt;/strong&gt; we need to set things up so we can connect directly via SSH to our Azure VM without a password.  This involves creating an SSH key pair.&lt;/p&gt;

&lt;p&gt;Run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t ed25519 -C "{Local VM HostName}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And press &lt;code&gt;Enter&lt;/code&gt; for all the prompts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn00hsbre7oadknz3409.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbn00hsbre7oadknz3409.png" alt="SSH Key Pair"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Get the contents of the created file using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /home/{username}/.ssh/id_ed25519.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32zewjh3gdnbn9yekzxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32zewjh3gdnbn9yekzxr.png" alt="SSH Key"&gt;&lt;/a&gt;&lt;br&gt;
Copy the string returned and then back on your &lt;strong&gt;Azure VM&lt;/strong&gt; edit the &lt;br&gt;
&lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; file and paste the copied string in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nr7m51d2m6cmupch9cv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nr7m51d2m6cmupch9cv.png" alt="SSH Authorized Keys"&gt;&lt;/a&gt;&lt;br&gt;
Exit out of nano &lt;code&gt;CTRL+X, Y, ENTER&lt;/code&gt; and head back to your &lt;strong&gt;Local VM&lt;/strong&gt;.&lt;/p&gt;
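
&lt;p&gt;As an aside, if password logins are still enabled on the Azure VM, &lt;code&gt;ssh-copy-id&lt;/code&gt; can append the key to &lt;code&gt;authorized_keys&lt;/code&gt; for you in one step, instead of copying and pasting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-copy-id -i ~/.ssh/id_ed25519.pub {username}@{Azure VM IP}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;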

&lt;p&gt;You should now be able to SSH to your Azure VM from your Local VM without needing a password, using the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh {username}@{Azure VM IP}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Answer &lt;code&gt;yes&lt;/code&gt; when prompted and you should connect straight to the VM.  You can type &lt;code&gt;exit&lt;/code&gt; to return to your local VM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqzqzqjf37v1dvqjyqwm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqzqzqjf37v1dvqjyqwm.png" alt="SSH Azure VM No Password"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Open Ports
&lt;/h2&gt;

&lt;p&gt;As we saw when we created our Azure VM, apart from port 22 for SSH connections, inbound ports are blocked by default.&lt;/p&gt;

&lt;p&gt;The Panache Legal services we have running on our local VM use ports 30000-30010 so we'll need to open those up.  Choose the Networking menu item of our Azure VM and then click on &lt;code&gt;Add inbound port rule&lt;/code&gt; and create a rule for our ports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41c6rnbims30cscbswtg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41c6rnbims30cscbswtg.png" alt="Azure VM Ports"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup AutoSSH
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;autossh&lt;/code&gt; will allow us to persist our connections.  We could simply create the tunnels manually via the command line using something like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -nNTv -R 0.0.0.0:8080:localhost:80 {username}@{Azure VM IP}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would map our webserver's HTTP port to our Azure VM, but we would need to rerun it after every reboot, or whenever we log out of our VM, to maintain the connection.  With &lt;code&gt;autossh&lt;/code&gt; we can ensure the tunnel is always running, but first we need to install it.&lt;/p&gt;

&lt;p&gt;On our &lt;strong&gt;Local VM&lt;/strong&gt; run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install autossh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the install completes, create an &lt;code&gt;autossh&lt;/code&gt; config file with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/default/autossh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within this new file add the following lines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AUTOSSH_POLL=60
AUTOSSH_FIRST_POLL=30
AUTOSSH_GATETIME=0
AUTOSSH_PORT=22000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Following those lines we need to add the port forwarding configuration.  This is a single line beginning with &lt;code&gt;SSH_OPTIONS="-N&lt;/code&gt;, followed by a &lt;code&gt;-R 0.0.0.0:{port}:localhost:{port}&lt;/code&gt; entry for each of the ports, and ending with the destination &lt;code&gt;{Azure VM Username}@{Azure VM IP}&lt;/code&gt; exactly once at the end; anything placed after the destination would be treated by SSH as a remote command.  So for example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SSH_OPTIONS="-N -R 0.0.0.0:30001:localhost:30001 -R 0.0.0.0:30002:localhost:30002 -R 0.0.0.0:30003:localhost:30003 -R 0.0.0.0:30004:localhost:30004 -R 0.0.0.0:30005:localhost:30005 -R 0.0.0.0:30006:localhost:30006 -R 0.0.0.0:30007:localhost:30007 -R 0.0.0.0:30008:localhost:30008 -R 0.0.0.0:30009:localhost:30009 -R 0.0.0.0:30010:localhost:30010 {Azure VM Username}@{Azure VM IP}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87rflwelwm9j8ozmqpqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87rflwelwm9j8ozmqpqt.png" alt="Auto SSH Config"&gt;&lt;/a&gt;&lt;br&gt;
Exit out of nano &lt;code&gt;CTRL+X, Y, ENTER&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next we need to tell &lt;code&gt;systemd&lt;/code&gt; about &lt;code&gt;autossh&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Create a new file with.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /lib/systemd/system/autossh.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following, replacing the &lt;code&gt;{username}&lt;/code&gt; placeholder with the username you use on your local VM.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=autossh
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User={username}
EnvironmentFile=/etc/default/autossh
ExecStart=/usr/bin/autossh $SSH_OPTIONS
Restart=always
RestartSec=60

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Exit out of nano &lt;code&gt;CTRL+X, Y, ENTER&lt;/code&gt;, then add a symlink for &lt;code&gt;systemd&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ln -s /lib/systemd/system/autossh.service /etc/systemd/system/autossh.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally run the following commands to reload &lt;code&gt;systemd&lt;/code&gt;, start &lt;code&gt;autossh&lt;/code&gt; and enable it on startup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl start autossh
sudo systemctl enable autossh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
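
&lt;p&gt;To confirm the tunnels are up, you can check the service on the &lt;strong&gt;Local VM&lt;/strong&gt; and list the listening ports on the &lt;strong&gt;Azure VM&lt;/strong&gt;; each forwarded port should show as listening.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On the Local VM: the service should show active (running)
systemctl status autossh

# On the Azure VM: the forwarded ports should be listening
ss -tln | grep ':300'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;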



&lt;p&gt;We have one final step.  Assuming you're using the Panache Legal containers, recall that when we created our deployment files in step 4, some of the environment variables referred to our Local VM; this is where you replaced &lt;code&gt;{server-IP}&lt;/code&gt; with the IP address of the local VM.  You now need to change this to the IP address of your Azure VM.  You can leave &lt;code&gt;{db-server-IP}&lt;/code&gt; as it is, because the MySQL connection does not need to be routed across the internet.&lt;/p&gt;
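
&lt;p&gt;A quick way to make that swap across all the deployment files at once is a &lt;code&gt;sed&lt;/code&gt; one-liner.  This is a sketch; it assumes the &lt;code&gt;.yaml&lt;/code&gt; files are in the current directory and uses &lt;code&gt;20.0.0.10&lt;/code&gt; as a made-up stand-in for your Azure VM's public IP.&lt;/p&gt;

```shell
# Replace every {server-IP} placeholder in the deployment files.
# 20.0.0.10 is an example address; substitute your Azure VM's public IP.
sed -i 's/{server-IP}/20.0.0.10/g' *.yaml
```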

&lt;p&gt;Again, if you're using the Panache Legal containers, you can run the &lt;code&gt;DeleteServices.sh&lt;/code&gt; script to remove all the pods, make your changes to the deployment files, and then run the &lt;code&gt;StartServices.sh&lt;/code&gt; script to recreate them.&lt;/p&gt;

&lt;p&gt;If all has gone according to plan you should be able to visit &lt;code&gt;http://{Azure VM IP}:30001&lt;/code&gt; in your browser and, fingers crossed, see the Panache Legal login page and be able to log in to the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foly4zj6qx3loxs5h6dzs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foly4zj6qx3loxs5h6dzs.png" alt="Panache Legal Azure VM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  All Finished :o)
&lt;/h2&gt;

&lt;p&gt;That's it.&lt;/p&gt;

&lt;p&gt;We've created a local Linux VM.&lt;/p&gt;

&lt;p&gt;We've installed MicroK8s and MySQL on the local VM.&lt;/p&gt;

&lt;p&gt;We've installed NGINX and phpMyAdmin.&lt;/p&gt;

&lt;p&gt;We've spun up pods in MicroK8s using deployment files.&lt;/p&gt;

&lt;p&gt;We've created an Azure Linux VM.&lt;/p&gt;

&lt;p&gt;And finally we've configured &lt;code&gt;autossh&lt;/code&gt; and used SSH tunnels to access our local VM over the internet via our Azure VM.&lt;/p&gt;

&lt;p&gt;Hopefully this has given you all the tools you need to build your own environments and expose them to the outside world if you need to.&lt;/p&gt;

&lt;h2&gt;
  
  
  In Closing
&lt;/h2&gt;

&lt;p&gt;Keep a look out for further tutorials and posts that I'll be putting out.&lt;/p&gt;

&lt;p&gt;And please take a look at &lt;a href="https://github.com/PanacheSoftware/PanacheLegalPlatform" rel="noopener noreferrer"&gt;Panache Legal&lt;/a&gt;, it's a fully Open Source application built using .NET, with Blazor and IdentityServer.  It's still in active development, so consider it pre-alpha, but take a look and why not get involved.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>microk8s</category>
      <category>devops</category>
      <category>azure</category>
      <category>ssh</category>
    </item>
    <item>
      <title>Making a local MicroK8s environment available externally (Part 4 - Running Docker Containers)</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Wed, 15 Jun 2022 11:03:21 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-4-running-docker-containers-4hjp</link>
      <guid>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-4-running-docker-containers-4hjp</guid>
      <description>&lt;p&gt;As I mentioned previously, I'm currently building a .Net based Open Source LegalTech platform called &lt;a href="https://github.com/PanacheSoftware/PanacheLegalPlatform"&gt;Panache Legal&lt;/a&gt;.  For development and testing I build docker images for the various microservices which are then hosted on &lt;a href="https://hub.docker.com/u/panachesoftware"&gt;Docker Hub&lt;/a&gt; and while I can use these with Docker Desktop, I'd prefer to get things up and running with Kubernetes so that I can learn more about that environment and also test things like scaling, which is why we're building this MicroK8s system.&lt;/p&gt;

&lt;p&gt;In this example I'm going to be showing how I get Panache Legal up and running, but you may want to choose different containers more suited to your needs, either way, this should still provide the groundwork you need to get going.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment files
&lt;/h2&gt;

&lt;p&gt;To spin up our containers in MicroK8s we could simply jump straight into the command line and deploy our app that way using the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl create ......
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But for Panache Legal we have multiple containers we need to create, all of which have various command line arguments, and this can become a little cumbersome.&lt;/p&gt;

&lt;p&gt;Instead of the command line we'll use deployment files so that we have all the setup in an easy to maintain place.&lt;/p&gt;

&lt;p&gt;For Panache Legal I maintain some basic deployment files in the GitHub repository &lt;a href="https://github.com/PanacheSoftware/PanacheLegalPlatform/tree/main/support%20files/MicroK8s"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are individual deployment files for each of the microservices, along with two other scripts &lt;code&gt;DeleteServices.sh&lt;/code&gt; and &lt;code&gt;StartServices.sh&lt;/code&gt; which provide an easy single script to run which will create or remove all of the services in one go.&lt;/p&gt;

&lt;p&gt;Let's take a look inside these files and see how they are set up.  For reference you can also check out the Kubernetes documentation for &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/"&gt;deployments&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's have a look at the &lt;a href="https://github.com/PanacheSoftware/PanacheLegalPlatform/blob/main/support%20files/MicroK8s/panachesoftware-client-service.yaml"&gt;Client&lt;/a&gt; service, which provides a .NET Web API for storing and maintaining client information.  This Microservice has its own database associated with it and is protected by JWT token authentication, handled by the &lt;a href="https://github.com/PanacheSoftware/PanacheLegalPlatform/blob/main/support%20files/MicroK8s/panachesoftware-identity.yaml"&gt;Identity&lt;/a&gt; service.&lt;/p&gt;

&lt;p&gt;Looking at the first part of the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: panachesoftware-service-client
  labels:
    app: panachesoftware-service-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We specify that this is a &lt;code&gt;Deployment&lt;/code&gt; and we provide a name for the app, &lt;code&gt;panachesoftware-service-client&lt;/code&gt;.  This is the internal name MicroK8s will use for the app and can be set to whatever you need.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;spec&lt;/code&gt; part of the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  replicas: 1
  selector:
    matchLabels:
      app: panachesoftware-service-client
  template:
    metadata:
      labels:
        app: panachesoftware-service-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets out which pods the deployment will manage; in this case we're keeping it simple, so we just reference the name we provided in the &lt;code&gt;metadata&lt;/code&gt; section.&lt;/p&gt;

&lt;p&gt;In addition here we specify the number of replicas.  For the moment I simply use 1 replica, but the microservice based approach to Panache Legal means that certain services, like &lt;code&gt;Identity&lt;/code&gt; and the &lt;code&gt;UI&lt;/code&gt; may be set to use multiple replicas so that more than one instance of the service will run, allowing our application to scale as demand increases.&lt;/p&gt;
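
&lt;p&gt;If you later want to scale one of the services without editing its deployment file, &lt;code&gt;kubectl scale&lt;/code&gt; adjusts the replica count in place, shown here for the client service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl scale deployment panachesoftware-service-client --replicas=3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;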

&lt;p&gt;The next section provides details of the container itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
      containers:
      - env:
          - name: ASPNETCORE_ENVIRONMENT
            value: Development
          - name: ASPNETCORE_URLS
            value: http://+:55005
          - name: ConnectionStrings__MySQL
            value: server={db-server-IP};port=3306;database=PanacheSoftware.Client.K8S;user={db-user};password={db-password};GuidFormat=Char36
          - name: PanacheSoftware__CallMethod__APICallsSecure
            value: "False"
          - name: PanacheSoftware__CallMethod__UICallsSecure
            value: "False"
          - name: PanacheSoftware__CallMethod__UseAPIGateway
            value: "True"
          - name: PanacheSoftware__DBProvider
            value: MySQL
          - name: PanacheSoftware__Secret__ClientServiceSecret
            value: AA04416A-A87B-4D88-956B-27CBFFCC2802
          - name: PanacheSoftware__StartDomain
            value: panachesoftware.com
          - name: PanacheSoftware__Url__IdentityServerURL
            value: http://{server-IP}:30002
          - name: PanacheSoftware__Url__IdentityServerURLSecure
            value: https://{server-IP}:30002
        image: panachesoftware/panachesoftwareserviceclient:latest
        name: panachesoftware-service-client
        ports:
        - containerPort: 55005
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;name, value&lt;/code&gt; pairs shown here all correspond to environment variables expected by the service, so if you were to look at the &lt;a href="https://github.com/PanacheSoftware/PanacheLegalPlatform/blob/main/src/Services/PanacheSoftware.Service.Client/appsettings.json"&gt;appsettings.json&lt;/a&gt; file of the client service .NET project, you will see corresponding settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ConnectionStrings": {
    "MSSQL": "Data Source=localhost;Database=PanacheSoftware.Client;User ID=sa;Password=Passw0rd123!;Connect Timeout=30;Encrypt=False;TrustServerCertificate=False;ApplicationIntent=ReadWrite;MultiSubnetFailover=False",
    "MySQL": "server=raspberrypi;port=3306;database=PanacheSoftware.Client;user=pi;password=Passw0rd123!;GuidFormat=Char36"
  },
  "PanacheSoftware": {
    "DBProvider": "MySQL",
    "StartDomain": "panachesoftware.com",
    "CallMethod": {
      "APICallsSecure": "false",
      "UICallsSecure": "false",
      "UseAPIGateway": "true"
    },
    "Url": {
      "IdentityServerURL": "http://localhost:55002",
      "IdentityServerURLSecure": "https://localhost:44302",
      "APIGatewayURL": "http://localhost:55003",
      "APIGatewayURLSecure": "https://localhost:44303"
    },
    "Secret": {
      "ClientServiceSecret": "1314EF18-40FA-4B16-83DF-B276FF0D92A9"
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only change I've made in the example deployment files is to insert placeholders, like &lt;code&gt;{db-server-IP}&lt;/code&gt; and &lt;code&gt;{db-user}&lt;/code&gt;, which you will need to replace with values applicable to your system.&lt;/p&gt;

&lt;p&gt;We set up MySQL in a previous part, so you should have the relevant username and password for the services that require a database, and since we're using the same machine to host MicroK8s and our MySQL database, &lt;code&gt;{db-server-IP}&lt;/code&gt; and &lt;code&gt;{server-IP}&lt;/code&gt; will be the same.&lt;/p&gt;

&lt;p&gt;If you need to determine the IP address of your VM, you can use the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hostname -I | awk '{print $1}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be aware that, in general, a new IP address will likely be assigned each time your VM starts.  One way to avoid this is to add a DHCP reservation in your router so that the IP address assigned to your VM (usually keyed to its MAC address) always remains the same.&lt;/p&gt;
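
&lt;p&gt;Alternatively you can pin the address on the VM itself.  On an Ubuntu VM using netplan this looks something like the following sketch; the interface name, addresses and file name are assumptions that will vary on your system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/netplan/01-static.yaml (illustrative only)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.1.50/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply the change with &lt;code&gt;sudo netplan apply&lt;/code&gt;.&lt;/p&gt;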

&lt;p&gt;Outside of the environment variables there are a couple of other important items.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;image&lt;/code&gt; specifies the docker image that will be used in this deployment. This is the image name along with the appropriate tag, in this case the client service with the &lt;code&gt;latest&lt;/code&gt; tag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;panachesoftware/panachesoftwareserviceclient:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7Nh6llqn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3in4ihb1cydqvpzbfx1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7Nh6llqn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3in4ihb1cydqvpzbfx1u.png" alt="Docker Hub" width="800" height="504"&gt;&lt;/a&gt;&lt;br&gt;
Alongside this the &lt;code&gt;containerPort&lt;/code&gt; specifies the port number that the service will be running on, in this case &lt;code&gt;55005&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When an app is deployed in MicroK8s it will be made available on the &lt;code&gt;cluster IP&lt;/code&gt;, however we want to make the service available by going to the IP of the VM itself so we add an additional section to the deployment file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
  name: panachesoftware-service-client-service
spec:
  type: NodePort
  selector:
    app: panachesoftware-service-client
  ports:
    - protocol: TCP
      port: 55005
      targetPort: 55005
      nodePort: 30005
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You could have included this &lt;code&gt;service&lt;/code&gt; detail in a separate file, but for our setup it is easier to include it within the deployment file.&lt;/p&gt;

&lt;p&gt;Choosing &lt;code&gt;NodePort&lt;/code&gt; as the type means the app will be exposed via a port on the host, rather than only on the cluster IP.  If you don't specify anything further, it will be assigned a random port within the default '30000-32767' range.&lt;/p&gt;

&lt;p&gt;We need our services to talk to each other, as shown by the references to other parts of the platform in the environment variables, e.g. &lt;code&gt;PanacheSoftware__Url__IdentityServerURL&lt;/code&gt; providing the address of the Identity server, so we need to pin each service to a specific port that will always be used.  This is the final section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ports:
    - protocol: TCP
      port: 55005
      targetPort: 55005
      nodePort: 30005
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we tell MicroK8s to map port '55005' of the service to port '30005' on the host.&lt;/p&gt;
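
&lt;p&gt;Once the deployment has been applied you can confirm the mapping by listing the service; the PORT(S) column should show &lt;code&gt;55005:30005/TCP&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl get service panachesoftware-service-client-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;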

&lt;h2&gt;
  
  
  Performing the deployment
&lt;/h2&gt;

&lt;p&gt;Now that we've gone through the details of the deployment files, go ahead and run through the Panache Legal examples, adjusting the placeholders as appropriate.&lt;/p&gt;

&lt;p&gt;If you edit the files on another machine, maybe the Windows host rather than the Linux VM we created, you can copy them to your Linux VM using a tool like &lt;a href="https://winscp.net/eng/index.php"&gt;WinSCP&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to use the provided scripts to run the deployment files for you then make sure to set those files as executable with the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x DeleteServices.sh
chmod +x StartServices.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One thing to note here: if you have the Linux firewall running, as discussed when we were setting up the MicroK8s dashboard, you may need to allow the ports the services will be exposed on.  MicroK8s edits iptables behind the scenes to allow access to the NodePort ports, 30001-30010 for our services, but browsing to those addresses will still fail until we also enable the 55001-55010 port range in ufw, which is confusing, but works!&lt;/p&gt;

&lt;p&gt;Allow these ports via the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw allow 55001:55010/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, either run the &lt;code&gt;StartServices.sh&lt;/code&gt; script with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./StartServices.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or start your services individually with something like the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl apply -f panachesoftware-client-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're running the &lt;code&gt;StartServices.sh&lt;/code&gt; script, once it completes you should be able to see the 10 pods it creates starting up by using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl get pods -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6vIC579_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1yn0g9mzp4mp2s60z4dl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6vIC579_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1yn0g9mzp4mp2s60z4dl.png" alt="Pod Creation" width="800" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also watch the progress of the pod creation via the dashboard we configured in Step 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5WC5PDIf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ehpirqbs9op45x0zrd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5WC5PDIf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ehpirqbs9op45x0zrd9.png" alt="Pod Creation Dashboard" width="800" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On first creation it will take a bit of time to complete as the various containers need to be downloaded from Docker Hub, but after a while you should see the status of the pods move from &lt;code&gt;ContainerCreating&lt;/code&gt; to &lt;code&gt;Running&lt;/code&gt;.&lt;/p&gt;
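
&lt;p&gt;Rather than re-running the command, you can also ask kubectl to watch the pods and print each status change as it happens (press &lt;code&gt;CTRL+C&lt;/code&gt; to stop watching).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl get pods --watch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
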

&lt;p&gt;Once everything is running let's test it out: on your Windows host you should be able to go to &lt;code&gt;http://{VM-IP-Address}:30001&lt;/code&gt; and see the Panache Legal login screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mwMYoK9a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k26racuu45z1dlkx8288.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mwMYoK9a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k26racuu45z1dlkx8288.png" alt="Panache Legal" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assuming you kept the &lt;code&gt;PanacheSoftware__StartDomain&lt;/code&gt; environment variable as &lt;code&gt;panachesoftware.com&lt;/code&gt;, the system will have started up and created a default user with the details Username: &lt;code&gt;admin@panachesoftware.com&lt;/code&gt;, Password: &lt;code&gt;Passw0rd123!&lt;/code&gt;.  So hit Login and use those credentials to access the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S0y4ht5J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qlqorzar5gceetna6vb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S0y4ht5J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qlqorzar5gceetna6vb1.png" alt="Panache Legal Dashboard" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the left menu, if you go to &lt;code&gt;Settings&lt;/code&gt; &amp;gt; &lt;code&gt;API Access&lt;/code&gt; you should see a list of the individual services that make up the Panache Legal platform, all of which are hosted as separate pods in MicroK8s.  On this screen you can see the links to those services' Swagger definitions, and note that they are all assigned the ports specified in the deployment files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kg1GFAyE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pg57ypbjzgtldv7a4ccn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kg1GFAyE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pg57ypbjzgtldv7a4ccn.png" alt="Panache Legal API Access" width="800" height="670"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you head on over to phpMyAdmin, which we set up in Step 2, you should be able to see that all of the databases associated with our services have been automatically created via Entity Framework.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MeqMGCtB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d0klgkv7ls76lmzwrg5j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MeqMGCtB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d0klgkv7ls76lmzwrg5j.png" alt="phpMyAdmin Panache Legal Databases" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you find that any pods didn't create, perhaps stuck in a crash loop or some other error state, you can check the logs for the service within the 'Pods' area of the dashboard; just click on the three dots on the right-hand side and choose 'Logs'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AQJj3g3Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xg98nm9wml6fegzudxq7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AQJj3g3Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xg98nm9wml6fegzudxq7.png" alt="Dashboard Pods Logs" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In general, most errors relate to a service not being able to connect to its database, so check the usernames, passwords and other access settings we configured in Step 3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;The bulk of the work is now complete.  So far we've:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created a Linux VM&lt;/li&gt;
&lt;li&gt;Setup MicroK8s&lt;/li&gt;
&lt;li&gt;Setup MySQL&lt;/li&gt;
&lt;li&gt;Setup NGINX&lt;/li&gt;
&lt;li&gt;Setup phpMyAdmin&lt;/li&gt;
&lt;li&gt;Spun up some services in MicroK8s&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If all you need to do is host some Docker containers locally then your work is done.  But if you want a little bit more, let's head over to our final step where we're going to make our local environment available to the outside world by using SSH.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OsH2IMsE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee" width="217" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>microk8s</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Making a local MicroK8s environment available externally (Part 3 - NGINX and phpMyAdmin)</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Wed, 15 Jun 2022 11:03:12 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-3-nginx-and-phpmyadmin-5hjb</link>
      <guid>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-3-nginx-and-phpmyadmin-5hjb</guid>
      <description>&lt;p&gt;If you're following along with the steps to get our MicroK8s environment up and running this part isn't strictly necessary, but getting &lt;a href="https://www.phpmyadmin.net/"&gt;phpMyAdmin&lt;/a&gt; up and running can make the administration of our MySQL installation easier.&lt;/p&gt;

&lt;p&gt;As part of this, we'll also look at installing the &lt;a href="https://www.nginx.com/resources/wiki/"&gt;NGINX&lt;/a&gt; webserver and enabling it through the built-in firewall in Ubuntu.&lt;/p&gt;

&lt;p&gt;If you haven't already, &lt;code&gt;SSH&lt;/code&gt; into our VM using Windows Terminal, in my case using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh pete@pl-k8s-vm-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install NGINX
&lt;/h2&gt;

&lt;p&gt;To act as our webserver for hosting phpMyAdmin we're going to install NGINX.&lt;/p&gt;

&lt;p&gt;This should be pretty quick and easy; simply run the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Enable Firewall access
&lt;/h2&gt;

&lt;p&gt;Now that we've installed NGINX, run the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw app list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will provide us with a list of applications registered with the firewall that we can allow access to.  In this instance we'll just enable &lt;code&gt;Nginx HTTP&lt;/code&gt; as we don't require &lt;code&gt;https&lt;/code&gt; access at this point.&lt;/p&gt;
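
&lt;p&gt;On a default Ubuntu install with NGINX present, the output will look something like this (the exact entries can vary).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Available applications:
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
  OpenSSH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
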

&lt;p&gt;...oh, and while we're here we'll also allow OpenSSH access as well.&lt;/p&gt;

&lt;p&gt;Run the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw allow 'Nginx HTTP'
sudo ufw allow OpenSSH 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done we can check the status to make sure we're allowed access through the firewall with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should return something like the following.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a-fTBOzW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ay1j0s63iniepalqnbnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a-fTBOzW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ay1j0s63iniepalqnbnl.png" alt="UFW Status" width="511" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once this is done we can check that the webserver is up and running by launching a web browser on our host machine and simply inputting our VM's hostname, in my case &lt;code&gt;http://pl-k8s-vm-1&lt;/code&gt;, and hopefully you should see the Welcome to nginx page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZT1GnRH_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3byrhr3mdb3a3l8kx86j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZT1GnRH_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3byrhr3mdb3a3l8kx86j.png" alt="Welcome to NGINX" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing PHP
&lt;/h2&gt;

&lt;p&gt;Before we can install &lt;code&gt;phpMyAdmin&lt;/code&gt; we need to go ahead and install &lt;code&gt;PHP&lt;/code&gt; and then setup our &lt;code&gt;NGINX&lt;/code&gt; webserver to serve php pages.&lt;/p&gt;

&lt;p&gt;Run the following command to install PHP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install php-fpm php-mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this we need to tell &lt;code&gt;NGINX&lt;/code&gt; to process our php pages, so run the following to edit the config.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/nginx/sites-enabled/default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within the editor find the line referencing the index files that are processed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ICNcdHhr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9uhfdzdqosvjnj7rv1sc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ICNcdHhr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9uhfdzdqosvjnj7rv1sc.png" alt="NGINX Index Before" width="773" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And add&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;index.php
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to the list:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JGsFqiP8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z6v37scmei396w4rkjfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JGsFqiP8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z6v37scmei396w4rkjfj.png" alt="NGINX Index After" width="714" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the same config file find the reference to &lt;code&gt;PHP scripts to FastCGI server&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rLNJR6z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nijfi4it3vio3u1a167i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rLNJR6z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nijfi4it3vio3u1a167i.png" alt="NGINX PHP Before" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And add the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---SUXNdip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6aiwiww9efulvj9hh25v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---SUXNdip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6aiwiww9efulvj9hh25v.png" alt="NGINX PHP After" width="728" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exit out of nano &lt;code&gt;CTRL+X, Y, ENTER&lt;/code&gt;, and then get &lt;code&gt;NGINX&lt;/code&gt; to reload its config with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl reload nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
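
&lt;p&gt;If the reload fails, or you just want to check your edits before applying them, you can ask NGINX to validate the configuration with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nginx -t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
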



&lt;p&gt;Once this is done let's quickly test if &lt;code&gt;PHP&lt;/code&gt; is being correctly processed by &lt;code&gt;NGINX&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Run the following command to create a demo php file in the root of the webserver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /var/www/html/index.php
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then add the following line of code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php phpinfo(); ?&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pI_eFCSx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ppuy259ej6nas52tyqbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pI_eFCSx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ppuy259ej6nas52tyqbq.png" alt="NGINX Index PHP" width="740" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Exit out of nano &lt;code&gt;CTRL+X, Y, ENTER&lt;/code&gt;, reload the URL we originally used to test our &lt;code&gt;NGINX&lt;/code&gt; install, in my case &lt;code&gt;http://pl-k8s-vm-1/&lt;/code&gt;, and hopefully you should see the PHP Info page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eUPOC2ih--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/404pv7z9cek897ts4aff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eUPOC2ih--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/404pv7z9cek897ts4aff.png" alt="NGINX phpInfo" width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install phpMyAdmin
&lt;/h2&gt;

&lt;p&gt;We're almost there, just &lt;code&gt;phpMyAdmin&lt;/code&gt; left to setup.&lt;/p&gt;

&lt;p&gt;Run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install phpmyadmin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The installation will run and you'll be presented with a wizard with options to choose from.  The first will ask you to select which webserver to use; &lt;code&gt;NGINX&lt;/code&gt; won't be in the list, but we won't let that stop us: choose &lt;code&gt;apache2&lt;/code&gt; instead by pressing &lt;code&gt;SPACE&lt;/code&gt;, &lt;code&gt;TAB&lt;/code&gt; then &lt;code&gt;ENTER&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cRVhPBIn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fsdozomzjo6tyrlc5xj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cRVhPBIn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fsdozomzjo6tyrlc5xj1.png" alt="phpMyAdmin Install 1" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we'll be asked to configure phpMyAdmin:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HH388c_3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/id3wdjb7ppldn61i235h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HH388c_3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/id3wdjb7ppldn61i235h.png" alt="phpMyAdmin Install 2" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Press &lt;code&gt;ENTER&lt;/code&gt; on &lt;code&gt;YES&lt;/code&gt;, and in the next screen set a password of your choice for phpMyAdmin to use to connect to the database server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Password validation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If during the &lt;code&gt;MySQL&lt;/code&gt; installation you chose to enable the 'Validate Password' plugin, you may receive &lt;code&gt;password does not satisfy the current policy requirement&lt;/code&gt; errors when trying to create the phpmyadmin user.  If this is the case, the easiest way I found to fix it was to choose the option to ignore the error, then log into &lt;code&gt;MySQL&lt;/code&gt; with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -p
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UNINSTALL COMPONENT "file://component_validate_password";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and exit out.&lt;/p&gt;

&lt;p&gt;You can then run the following to remove and then re-install &lt;code&gt;phpMyAdmin&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt remove phpmyadmin
sudo apt install phpmyadmin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This time your password should be accepted.  If you then want to re-enable the password validation you can log back into &lt;code&gt;MySQL&lt;/code&gt; and run the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSTALL COMPONENT "file://component_validate_password";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Continuing the phpMyAdmin install&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the install has completed we want to create a symbolic link from the phpmyadmin share to the root of our &lt;code&gt;NGINX&lt;/code&gt; webserver by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ln -s /usr/share/phpmyadmin /var/www/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this we should be able to access &lt;code&gt;phpMyAdmin&lt;/code&gt; by going to the following address: &lt;code&gt;http://pl-k8s-vm-1/phpmyadmin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7upaanRN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8gaubooi6mz4pdj2pdl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7upaanRN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8gaubooi6mz4pdj2pdl2.png" alt="phpMyAdmin Login" width="788" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can then use the account we setup in Step 2, mine was &lt;code&gt;pluser&lt;/code&gt; with a password of &lt;code&gt;5ecurePassw0rd!&lt;/code&gt;, to access our &lt;code&gt;MySQL&lt;/code&gt; installation with &lt;code&gt;phpMyAdmin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A-T06Xrt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5x4ttez1e4dh1zy0naaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A-T06Xrt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5x4ttez1e4dh1zy0naaf.png" alt="phpMyAdmin Home" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;I think we've got everything in place now, so it's about time we got something up and running in our new Kubernetes environment.  Let's head over to Part 4 to get our Docker containers up and running.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OsH2IMsE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee" width="217" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>phpmyadmin</category>
      <category>devops</category>
      <category>microk8s</category>
    </item>
    <item>
      <title>Making a local MicroK8s environment available externally (Part 2 - Installing MicroK8s and MySQL)</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Wed, 15 Jun 2022 11:03:03 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-2-installing-microk8s-and-mysql-43c6</link>
      <guid>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-2-installing-microk8s-and-mysql-43c6</guid>
      <description>&lt;p&gt;In the first part of this series we made sure Hyper-V was up and running and then created our Linux VM running Ubuntu.  We also tweaked that VM so that the resolution was better suited to our system, and then went ahead and got SSH up and running....so that we wouldn't need to log into the desktop anyway. 😉&lt;/p&gt;

&lt;p&gt;So let's go ahead and get on with the main part of this: setting up MicroK8s and MySQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up MicroK8s
&lt;/h2&gt;

&lt;p&gt;First up, there are several versions of Kubernetes available for us to use, &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;K3s&lt;/a&gt;, &lt;a href="https://k3d.io/" rel="noopener noreferrer"&gt;k3d&lt;/a&gt;, &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;Kind&lt;/a&gt;, &lt;a href="https://minikube.sigs.k8s.io/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt; and &lt;a href="https://microk8s.io/" rel="noopener noreferrer"&gt;MicroK8s&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;They all use different methods for hosting, like VMs, Docker or snap, and they all have their pros and cons.  I settled on MicroK8s because it offered all the functionality I needed and setup was quick and easy.  As you investigate these tools you may settle on something else, but MicroK8s will at least offer a good introduction.&lt;/p&gt;

&lt;p&gt;Installation is simple, and as we just need the terminal, let's SSH into our VM via Windows Terminal with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh {username}@{VM name}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we're logged in simply run the following to install MicroK8s.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo snap install microk8s --classic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will take a minute or two, depending on your network speed, but you should see an installed tick once it finishes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Folfcgt583x2e80ll68s8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Folfcgt583x2e80ll68s8.png" alt="MicroK8s Install"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To make sure the install completed okay and everything is up and running we can run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s status --wait-ready
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we do this you'll notice that we don't have permission to access MicroK8s, but luckily the output provides the commands you need to fix this.  In my case, because my username is &lt;code&gt;pete&lt;/code&gt;, I need to run the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -a -G microk8s pete
sudo chown -f -R pete ~/.kube
newgrp microk8s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this we can issue our &lt;code&gt;microk8s status --wait-ready&lt;/code&gt; command again and hopefully we should see that MicroK8s is running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzio4iql3xcpydo6u0qb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzio4iql3xcpydo6u0qb5.png" alt="MicroK8s Running"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If everything is up and running you should see &lt;code&gt;microk8s is running&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next up we'll install a couple of add-ons.  For this simple setup we'll just install DNS, which is often used by other add-ons so is almost always needed, and the dashboard, so we have a nice web-based interface to see what's happening with the cluster.&lt;/p&gt;

&lt;p&gt;Run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s enable dashboard dns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once that's finished we can check what services are running with the following command, and hopefully you can spot the dashboard and dns services in the list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s kubectl get all --all-namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last thing we'll do is check that the dashboard is running by using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microk8s dashboard-proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will return us a token that we can use for login and also the port number that the dashboard is running on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71ayeqk7ydyq6rbzd2l5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71ayeqk7ydyq6rbzd2l5.png" alt="MicroK8s Dashboard Token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see our dashboard is running on port &lt;code&gt;10443&lt;/code&gt;.  Let's check this from our Windows host by opening a browser to &lt;code&gt;https://{VM Name}:10443&lt;/code&gt;.  You'll likely receive a warning about the connection not being private; carry on to the page anyway, choose to log in with a token, paste in the token you were given above, and you should land on the dashboard where you can see the status of your install.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmo0ld9966icfkx3syuxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmo0ld9966icfkx3syuxm.png" alt="MicroK8s Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linux Firewall&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In later parts we will be enabling connections through the Linux firewall.  In preparation for that it will be useful to make sure that access to the dashboard is possible when the firewall is running.&lt;/p&gt;

&lt;p&gt;To check if the firewall is active run the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the status does not come back as &lt;code&gt;active&lt;/code&gt; you can enable the firewall with the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw enable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then allow access to the port the dashboard uses with the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ufw allow 10443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting up MySQL
&lt;/h2&gt;

&lt;p&gt;Panache Legal is designed to run against SQL Server or MySQL (or MariaDB if you're running on something like a Raspberry Pi), so I could just install the free developer edition of SQL Server on Windows, or even SQL Server for Linux, but I'd prefer to keep things open source, so let's go with MySQL.&lt;/p&gt;

&lt;p&gt;First up, let's update all our packages so we're ready for the install.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once that's done, let's perform the install.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install mysql-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This won't take long to run, and once it's finished run the following command to make sure the service is up and running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start mysql.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
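
&lt;p&gt;If you want to double-check that the service came up properly, &lt;code&gt;systemctl&lt;/code&gt; can report its status (press &lt;code&gt;q&lt;/code&gt; to exit the pager); you should see &lt;code&gt;active (running)&lt;/code&gt; in the output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;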



&lt;p&gt;It's not strictly necessary, but best practice is now to run the security script.  By doing this you'll set a new root password and disable certain pre-installed features and configuration that could otherwise be used to gain access to the server.  In general you should answer &lt;code&gt;Y&lt;/code&gt; to all the questions and accept the changes it wants to make.&lt;/p&gt;

&lt;p&gt;Run the script with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql_secure_installation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you receive an error message when trying to change the root password, exit the script and run the following commands before running &lt;code&gt;mysql_secure_installation&lt;/code&gt; again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mysql
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '{some password}';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next time you run &lt;code&gt;mysql_secure_installation&lt;/code&gt; you'll need to enter the password you supplied in &lt;code&gt;{some password}&lt;/code&gt; above when the script starts.&lt;/p&gt;

&lt;p&gt;Once this script is finished we'll create a new user that can be used by our microservices to login and create their databases.&lt;/p&gt;

&lt;p&gt;Log in to MySQL with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql -u root -p
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a new user by issuing the following command, replacing &lt;code&gt;{username}&lt;/code&gt; with a username of your choice and &lt;code&gt;{password}&lt;/code&gt; with a password of your choice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER '{username}'@'%' IDENTIFIED BY '{password}';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll notice in the above that instead of a hostname we provided &lt;code&gt;%&lt;/code&gt; after the username. This will allow us to connect to the MySQL database from an external machine if we want.&lt;/p&gt;

&lt;p&gt;For example, in the above I'm using &lt;code&gt;CREATE USER 'pluser'@'%' IDENTIFIED BY '5ecurePassw0rd!';&lt;/code&gt;, now don't tell anyone my password, that's between you and me!&lt;/p&gt;
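
&lt;p&gt;If you'd like to confirm the user was created with the &lt;code&gt;%&lt;/code&gt; host entry, you can run an optional check against the &lt;code&gt;mysql.user&lt;/code&gt; table while you're still logged in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT user, host FROM mysql.user WHERE user = '{username}';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;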

&lt;p&gt;The Panache Legal microservices we'll be running need to be able to create their own databases, as they use a code-first approach with Entity Framework, so we need to grant appropriate privileges to this new user.  In this instance we'll just grant all, but you may want to be more restrictive in your environment, especially if this is a production environment!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GRANT ALL PRIVILEGES ON *.* TO '{username}'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
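
&lt;p&gt;For reference, once MySQL is accepting external connections (covered below), a .NET client such as the Panache Legal microservices would typically connect with a connection string along these lines.  The &lt;code&gt;{database}&lt;/code&gt; value here is a placeholder like the others; substitute whatever your configuration expects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Server={VM name};Port=3306;Database={database};Uid={username};Pwd={password};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;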



&lt;p&gt;Even though we've configured our user to allow connections from external systems, MySQL itself will by default only accept connections from localhost.  To change this we need to edit the &lt;code&gt;mysqld.cnf&lt;/code&gt; file using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look for the line that reads as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bind-address            = 127.0.0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and change it to the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bind-address            = 0.0.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
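
&lt;p&gt;If you'd rather make this change from the command line than in an editor, a single &lt;code&gt;sed&lt;/code&gt; command will do it; this assumes the default file location on Ubuntu shown above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sed -i 's/^bind-address.*/bind-address            = 0.0.0.0/' /etc/mysql/mysql.conf.d/mysqld.cnf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;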



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupj10osrvavly8psh3q6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupj10osrvavly8psh3q6.png" alt="MySQL Bind Address"&gt;&lt;/a&gt;&lt;br&gt;
Exit out of nano &lt;code&gt;CTRL+X, Y, ENTER&lt;/code&gt;, and then restart MySQL with the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;We've got our Linux VM running and we've installed MicroK8s and MySQL, so now let's go ahead and set up &lt;code&gt;phpMyAdmin&lt;/code&gt;, along with the &lt;code&gt;NGINX&lt;/code&gt; web server, so that we can easily administer our MySQL installation.&lt;/p&gt;

&lt;p&gt;This &lt;code&gt;phpMyAdmin&lt;/code&gt; setup is optional and isn't required to get everything else running, so if you'd rather head straight to getting the containers running, simply skip forward to Part 4.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>microk8s</category>
      <category>opensource</category>
      <category>devops</category>
      <category>mysql</category>
    </item>
    <item>
      <title>Making a local MicroK8s environment available externally (Part 1 - Building a Linux VM)</title>
      <dc:creator>Peter Davis</dc:creator>
      <pubDate>Wed, 15 Jun 2022 11:02:50 +0000</pubDate>
      <link>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-1-building-a-linux-vm-1n3d</link>
      <guid>https://forem.com/panachesoftwaredev/making-a-local-microk8s-environment-available-externally-part-1-building-a-linux-vm-1n3d</guid>
      <description>&lt;p&gt;I'm currently building an Open Source LegalTech platform called &lt;a href="https://github.com/PanacheSoftware/PanacheLegalPlatform" rel="noopener noreferrer"&gt;Panache Legal&lt;/a&gt; which is built using .NET and consists of a number of microservices which are all designed to run in Docker containers.  This all works great locally, but one issue is how to host a version externally that can be used by colleagues for testing and demos.&lt;/p&gt;

&lt;p&gt;Of course that's easy, right?  I've got Docker images uploaded to Docker Hub, so just spin those up as web apps in Azure (or your cloud provider of choice) and the problem is solved.&lt;/p&gt;

&lt;p&gt;But where's the fun in that?  I don't really want to pay for too many hosted services, but most importantly, I want to learn more about Kubernetes and the whole DevOps pipeline.  In short, I'd like to do it myself.&lt;/p&gt;

&lt;p&gt;So here's what we're going to do instead.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Linux VM locally in Hyper-V&lt;/li&gt;
&lt;li&gt;Install MicroK8s and MySQL (including phpMyAdmin)&lt;/li&gt;
&lt;li&gt;Get Panache Legal (or your docker images of choice) up and running on that VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The above gives us a local Kubernetes environment using MicroK8s, but because we want that accessible externally, and because I don't have a static IP from my ISP, we'll have to jump through some additional hoops to expose this to the outside world.  So we'll follow this up and...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Linux VM in Azure&lt;/li&gt;
&lt;li&gt;Set up SSH port forwarding to route traffic between that Azure VM and my local VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, there are many different ways we could have done this: using WSL, using a local Linux workstation, running it all in a VM in Azure, or running MicroK8s on Windows.  Plus there are security considerations around opening a route to our local network.  But we've got to start somewhere, so let's go with the route outlined above and get going; hopefully, even if the whole setup isn't right for you, parts of it might be.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating our Hyper-V Linux VMs
&lt;/h2&gt;

&lt;p&gt;Before we start, go ahead and download a copy of the Ubuntu installation ISO from &lt;a href="https://ubuntu.com/desktop" rel="noopener noreferrer"&gt;Ubuntu desktop&lt;/a&gt;.  Other flavours of Linux are available if you prefer.&lt;/p&gt;

&lt;p&gt;I'm creating this environment in Windows 11 and you'll need the Pro or Enterprise version to use Hyper-V.  If you're looking for a free alternative you could likely use &lt;a href="https://www.virtualbox.org/" rel="noopener noreferrer"&gt;VirtualBox&lt;/a&gt; to achieve the same result.&lt;/p&gt;

&lt;p&gt;Hyper-V won't be enabled by default so you'll need to open &lt;strong&gt;Control Panel &amp;gt; Programs and Features &amp;gt; Turn Windows features on or off&lt;/strong&gt; and make sure &lt;strong&gt;Hyper-V&lt;/strong&gt; is enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jougbrf6lwpel1asncv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4jougbrf6lwpel1asncv.png" alt="Windows features Hyper-V"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open up Hyper-V Manager and the first thing we'll do is use the &lt;strong&gt;Virtual Switch Manager...&lt;/strong&gt; action to create an &lt;strong&gt;External Switch&lt;/strong&gt; to allow our VM to see the outside world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe42bka1m4xth02vfh8p6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe42bka1m4xth02vfh8p6.png" alt="External switch setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once that's done use the &lt;strong&gt;New &amp;gt; Virtual Machine...&lt;/strong&gt; action to create our VM.  Give your VM a name and then you can choose either Generation 1 or Generation 2.  Assign as much memory as you want and then make sure you choose your newly created &lt;strong&gt;External Switch&lt;/strong&gt; for the networking.&lt;/p&gt;

&lt;p&gt;When you get to &lt;strong&gt;Installation Options&lt;/strong&gt; choose &lt;strong&gt;Install an operating system from a bootable image file&lt;/strong&gt; and select the Ubuntu ISO you downloaded earlier and then finish.  &lt;/p&gt;

&lt;p&gt;I'm aware we could have used the Hyper-V &lt;strong&gt;Quick Create...&lt;/strong&gt; option, but again, I like to do all of this myself so I'm familiar with the setup.&lt;/p&gt;

&lt;p&gt;Before we go any further, open the &lt;strong&gt;Settings...&lt;/strong&gt; of your new VM.  If you chose &lt;strong&gt;Generation 2&lt;/strong&gt;, untick &lt;strong&gt;Enable Secure Boot&lt;/strong&gt; so we don't get any issues with the Ubuntu installation.  Also check the Memory and Processor settings; you may want to adjust what's assigned here based on the spec of your machine. &lt;/p&gt;

&lt;p&gt;Now simply use the &lt;strong&gt;Connect...&lt;/strong&gt; option and start up your VM.  You should boot into the install screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j63jtmywr9wjmjx08q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j63jtmywr9wjmjx08q7.png" alt="Ubuntu Install"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the &lt;strong&gt;Install Ubuntu&lt;/strong&gt; option and from there go with the defaults or the options appropriate for you.  When it comes to the &lt;strong&gt;Updates and other software&lt;/strong&gt; screen I chose a minimal installation, as I'm not going to be using the VM for anything other than running MicroK8s and MySQL.&lt;/p&gt;

&lt;p&gt;After the install has finished you'll boot into the desktop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59vclvr96rg8ijpxnw7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59vclvr96rg8ijpxnw7w.png" alt="Ubuntu Desktop"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's likely the automatic software updater will run, so let it do its thing.  Once that's done we're almost ready to install MicroK8s and MySQL; there are just a couple of extra steps that will make our lives easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  SSH Setup
&lt;/h2&gt;

&lt;p&gt;I generally use &lt;a href="https://github.com/microsoft/terminal" rel="noopener noreferrer"&gt;Windows Terminal&lt;/a&gt; (which you can download from the Microsoft Store) to interact with my Linux VMs, but that requires SSH to be running on the VM, and Ubuntu does not come with it pre-installed, so let's get that up and running.&lt;/p&gt;

&lt;p&gt;Open a new terminal window with &lt;code&gt;Ctrl+Alt+T&lt;/code&gt; and use the following commands to install &lt;code&gt;openssh-server&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install openssh-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That should be all that's needed, and we can confirm that we can log in by opening Windows Terminal on our host and using the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh {username}@{VM name}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fin6jt9joucgiossg9unl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fin6jt9joucgiossg9unl.png" alt="Windows Terminal SSH"&gt;&lt;/a&gt;&lt;/p&gt;
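
&lt;p&gt;Optionally, you can set up key-based login so you're not prompted for a password on every connection.  On a Linux or macOS host the following two commands do it; the Windows OpenSSH client doesn't ship &lt;code&gt;ssh-copy-id&lt;/code&gt;, so there you'd append the contents of your public key to &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; on the VM by hand.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t ed25519
ssh-copy-id {username}@{VM name}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;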

&lt;h2&gt;
  
  
  Desktop Resolution
&lt;/h2&gt;

&lt;p&gt;It's likely we'll do everything via windows terminal from this point onwards, but if you want to use the Linux desktop for anything you'll find that by default you can't change the resolution to anything better than the fairly disappointing 1024x768.&lt;/p&gt;

&lt;p&gt;To fix this we need to make a couple of tweaks.  Open a terminal with &lt;code&gt;Ctrl+Alt+T&lt;/code&gt; and then use the nano editor (or vim if you prefer) to edit the grub file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/default/grub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Find the line that begins with &lt;code&gt;GRUB_CMDLINE_LINUX_DEFAULT&lt;/code&gt; and add &lt;code&gt;video=hyperv_fb:1920x1080&lt;/code&gt; to the end.  You can use whatever resolution is suitable for your monitor.&lt;/p&gt;

&lt;p&gt;In addition you'll also want to edit the line that begins with &lt;code&gt;GRUB_CMDLINE_LINUX&lt;/code&gt; and add the same setting there.&lt;/p&gt;

&lt;p&gt;Things should look a little like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a27fwsdrx1p8xu4uc1k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a27fwsdrx1p8xu4uc1k.png" alt="Edit Grub file"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save and exit the editor, &lt;code&gt;Ctrl+x&lt;/code&gt; then &lt;code&gt;Y&lt;/code&gt; then &lt;code&gt;Enter&lt;/code&gt; in nano.&lt;/p&gt;

&lt;p&gt;Then run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo update-grub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reboot your VM and you should find it starts back up with your new resolution set.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;At this point we should have our Linux VM up and running.&lt;/p&gt;

&lt;p&gt;Let's head over to part two of this guide, where we'll go through the installation of MicroK8s and MySQL.&lt;/p&gt;

&lt;p&gt;Pete&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/panachesoftware" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhmy70aj858tvqekv5n7.png" alt="Buy Me A Coffee"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>microk8s</category>
      <category>hyperv</category>
      <category>ubuntu</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
