<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Henk Boelman</title>
    <description>The latest articles on Forem by Henk Boelman (@hboelman).</description>
    <link>https://forem.com/hboelman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F235671%2F294a29ba-2677-4965-9ba7-bd63ebb143c9.jpg</url>
      <title>Forem: Henk Boelman</title>
      <link>https://forem.com/hboelman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hboelman"/>
    <language>en</language>
    <item>
      <title>Highlights for the Global AI Student Conference</title>
      <dc:creator>Henk Boelman</dc:creator>
      <pubDate>Mon, 28 Nov 2022 14:44:09 +0000</pubDate>
      <link>https://forem.com/azure/highlights-for-the-global-ai-student-conference-25dh</link>
      <guid>https://forem.com/azure/highlights-for-the-global-ai-student-conference-25dh</guid>
<description>&lt;p&gt;On December 13th the Global AI Student Conference is taking place. This is the 5th edition of the conference, which highlights student projects and educator discussions. In 12 hours, the conference travels around the world, centrally hosted from the Global AI Studio in the Netherlands. Day one has over 20 sessions and panels covering the world, from China to Mexico. Day two is all about gaining hands-on knowledge of deep learning.&lt;/p&gt;

&lt;h2&gt;
  
  
  My top 3 for Day 1
&lt;/h2&gt;

&lt;p&gt;The first session I’m looking forward to is from three students from China, who are going to share how they used Microsoft Cognitive Services to create their &lt;a href="https://aiconf.education/sessions/ishare-donation-platform/"&gt;iShare Donation Platform&lt;/a&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5AKQ4g-7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s82daapjpuozb9p23av5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5AKQ4g-7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s82daapjpuozb9p23av5.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Second on my list is &lt;a href="https://aiconf.education/sessions/breast-cancer-prediction-using-automated-ml/"&gt;Breast Cancer prediction using automated ML&lt;/a&gt; by Hadil Ben Amor. I’m looking forward to seeing how Automated ML can be combined with Power BI to solve a real problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GCg5ftmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqlrya5eu5x74pzszx5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GCg5ftmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqlrya5eu5x74pzszx5a.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, I’m always interested in how we can take responsibility when creating AI systems. In the session &lt;a href="https://aiconf.education/sessions/towards-responsible-ai/"&gt;Towards Responsible AI&lt;/a&gt;, Luis Beltran is going to share some examples and tips to create responsible AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CYram6Yp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kq3zjjq728667442uiqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CYram6Yp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kq3zjjq728667442uiqv.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can find all the sessions and create a personal agenda on &lt;a href="https://aiconf.education"&gt;https://aiconf.education&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Day 2
&lt;/h2&gt;

&lt;p&gt;The second day is workshop day. During the day you can participate in a free, full Fundamentals of Deep Learning workshop offered by Nvidia. In this workshop, you’ll learn how deep learning works through hands-on exercises in computer vision and natural language processing. You’ll train deep learning models from scratch, learning tools and tricks to achieve highly accurate results. You’ll also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aiconf.education/workshop/"&gt;Register and more information click here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Global AI Community?
&lt;/h2&gt;

&lt;p&gt;The Global AI Community empowers developers who are passionate about AI to share knowledge through events and meetups. We host 3 global AI events across 100 locations that span every corner of the globe. We also provide a variety of resources to members of the community, including a free Meetup Pro account for user groups, event-in-a-box content, Azure passes, and support for booking Microsoft venues. We work closely with product teams at Microsoft to share workshop content about the newest AI products.&lt;br&gt;
&lt;a href="https://globalai.community"&gt;https://globalai.community&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>career</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Power Umbraco with a bit of Azure</title>
      <dc:creator>Henk Boelman</dc:creator>
      <pubDate>Mon, 10 Jan 2022 11:12:21 +0000</pubDate>
      <link>https://forem.com/azure/power-umbraco-with-a-bit-of-azure-h2g</link>
      <guid>https://forem.com/azure/power-umbraco-with-a-bit-of-azure-h2g</guid>
<description>&lt;p&gt;With the release of Umbraco 9 a whole new era has begun, one that really gives you the opportunity to run your Umbraco solution in new ways.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=n9fFgfA5tyw" rel="noopener noreferrer"&gt;Watch my session on Umbraco Together Conference&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog we dive into the Azure building blocks you can use to run Umbraco 9 on Azure. We look at Docker containers, Azure Web Apps, networking and scaling, storage and Azure SQL. Finally, we dive into how you can combine those building blocks to fit your specific scenario.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub7aepmjuwpqu7iuemu4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub7aepmjuwpqu7iuemu4.jpg" alt="Umbraco 9 Solution Design"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 1 - Setup your Azure Resources
&lt;/h2&gt;

&lt;p&gt;To get started we need to create a few Azure Resources. For this we will use the &lt;a href="https://docs.microsoft.com/cli/azure/?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;Azure CLI&lt;/a&gt;, but you can also create them from the &lt;a href="https://portal.azure.com/?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;Azure Portal&lt;/a&gt; or use &lt;a href="https://docs.microsoft.com/azure/azure-resource-manager/templates/overview?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;ARM&lt;/a&gt; or &lt;a href="https://docs.microsoft.com/azure/azure-resource-manager/bicep/overview?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;Bicep&lt;/a&gt; templates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Login to your Azure Subscription
az login

# Create a resource group
az group create \
    --location westeurope \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Part 1.2 - Create an Azure SQL Database
&lt;/h3&gt;

&lt;p&gt;For Umbraco to work we need a SQL database. In the code below we create an Azure SQL database in the S1 service tier, which is enough for development or a low-traffic website. If you run a heavy website, it is recommended to go for the P tier.&lt;/p&gt;

&lt;p&gt;In the example below we first create a SQL server; a SQL server can hold multiple databases, and firewall rules and security are managed at the server level. Second, we create the database for Umbraco, and finally we create a firewall rule that enables connections to the server from any Azure resource. If you need access from your dev environment, you also have to add your own IP address to the firewall.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/azure/azure-sql/?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;Read more about Azure SQL&lt;/a&gt; on Microsoft Docs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the SQL Server
az sql server create \
    --name &amp;lt;SERVER_NAME&amp;gt; \ 
    --admin-user &amp;lt;ADMIN_USER&amp;gt; \
    --admin-password &amp;lt;ADMIN_PASSWORD&amp;gt; \
    --location westeurope \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt;

# Create the database on the server
az sql db create \
    --server &amp;lt;SERVER_NAME&amp;gt; \
    --name &amp;lt;DATABASE_NAME&amp;gt; \
    --service-objective S1 \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt;

# Grant Azure Resources access to server
az sql server firewall-rule create \
    --server &amp;lt;SERVER_NAME&amp;gt; \
    --name AllowAzureServices \
    --start-ip-address 0.0.0.0 \
    --end-ip-address 0.0.0.0 \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt;  

# Show the connection string
az sql db show-connection-string --client ado.net \
    --server &amp;lt;SERVER_NAME&amp;gt; \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt;  

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Part 1.3 - Create a storage account
&lt;/h3&gt;

&lt;p&gt;To store the media from Umbraco we use an Azure Storage Account. This makes sure that images are stored outside of Umbraco and helps with running Umbraco in containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/azure/storage/blobs/?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;Read more about Azure Storage Accounts&lt;/a&gt; on Microsoft Docs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the storage account
az storage account create \
    --name &amp;lt;STORAGE_ACCOUNT_NAME&amp;gt; \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt; \  
    --location westeurope \
    --sku Standard_GRS \
    --encryption-services blob

# Create a public container on the storage account
az storage container create \
    --name &amp;lt;STORAGE_CONTAINER_NAME&amp;gt; \
    --public-access blob \
    --account-name &amp;lt;STORAGE_ACCOUNT_NAME&amp;gt;  \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt;   

# Show the connection string
az storage account show-connection-string \
    --name &amp;lt;STORAGE_ACCOUNT_NAME&amp;gt; \
    --query "connectionString" -o tsv

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Part 1.4 - Add a Content Delivery Network
&lt;/h3&gt;

&lt;p&gt;By default an Azure Blob Storage account is served from a single region. To put images close to the website visitors, we can add a Content Delivery Network (CDN) in front of the storage account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
If we have a storage account called &lt;em&gt;umbraco9.blob.core.windows.net&lt;/em&gt; and a publicly accessible container &lt;em&gt;assets&lt;/em&gt;, we can create a CDN endpoint &lt;em&gt;umbraco9.azureedge.net&lt;/em&gt; that points to hostname &lt;em&gt;umbraco9.blob.core.windows.net&lt;/em&gt; and path &lt;em&gt;/assets/&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Note that the CDN will not work if you don't add the &lt;em&gt;origin-host-header&lt;/em&gt; parameter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/azure/cdn/?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;Read more about Content Delivery Networks&lt;/a&gt; on Microsoft Docs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create A CDN Profile
az cdn profile create \
    --name &amp;lt;CDN_PROFILE_NAME&amp;gt; \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt; \
    --sku Standard_Microsoft

# Create a CDN Endpoint for the Storage Container
az cdn endpoint create \
    --name &amp;lt;ENDPOINT_NAME&amp;gt; \
    --profile-name &amp;lt;CDN_PROFILE_NAME&amp;gt; \
    --origin &amp;lt;STORAGE_ACCOUNT_NAME&amp;gt;.blob.core.windows.net \
    --origin-path "/&amp;lt;STORAGE_CONTAINER_NAME&amp;gt;/" \
    --origin-host-header &amp;lt;STORAGE_ACCOUNT_NAME&amp;gt;.blob.core.windows.net \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Part 2 - Setup Umbraco 9.x
&lt;/h2&gt;

&lt;p&gt;In the previous steps we created the resources we need before we can get started with Umbraco: a database and a storage account.&lt;/p&gt;

&lt;p&gt;Now we can start with setting up Umbraco 9.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://our.umbraco.com/documentation/Fundamentals/Setup/Install/install-umbraco-with-templates" rel="noopener noreferrer"&gt;Read more about installing Umbraco&lt;/a&gt; on Umbraco Docs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Before you continue, create a GitHub repository&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# First create a directory
mkdir source

# Create the dotnet solution and project
dotnet new -i Umbraco.Templates
dotnet new sln -n &amp;lt;SOLUTION_NAME&amp;gt;
dotnet new umbraco -n &amp;lt;PROJECT_NAME&amp;gt; --connection-string "&amp;lt;SQL_CONNECTION_STRING&amp;gt;"
dotnet sln add &amp;lt;PROJECT_NAME&amp;gt;
cd &amp;lt;PROJECT_NAME&amp;gt;

# Add the Umbraco Azure Blob Storage Provider package
dotnet add package Umbraco.StorageProviders.AzureBlob
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
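
&lt;p&gt;For the GitHub repository mentioned above, a minimal sketch using the GitHub CLI could look like this (the repository name is just a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize git in the solution directory and make a first commit
git init
git add .
git commit -m "Initial Umbraco 9 setup"

# Create a private GitHub repository from the current directory and push to it
gh repo create &amp;lt;GITHUB_REPO_NAME&amp;gt; --private --source . --push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;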



&lt;p&gt;Next let's enable the Umbraco.StorageProviders.AzureBlob in Umbraco.&lt;/p&gt;

&lt;p&gt;Add the lines below to the &lt;strong&gt;ConfigureServices&lt;/strong&gt; method in the file &lt;strong&gt;Startup.cs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    .AddAzureBlobMediaFileSystem() 
    .AddCdnMediaUrlProvider()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the line below to the &lt;strong&gt;Configure&lt;/strong&gt; method, after &lt;strong&gt;u.UseWebsite();&lt;/strong&gt;, in the file &lt;strong&gt;Startup.cs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    u.UseAzureBlobMediaFileSystem();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the section &lt;strong&gt;Storage&lt;/strong&gt; to &lt;strong&gt;appsettings.json&lt;/strong&gt; and &lt;strong&gt;appsettings.Development.json&lt;/strong&gt;. &lt;br&gt;
For development:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add &lt;strong&gt;appsettings.Development.json&lt;/strong&gt; to your .gitignore file.&lt;/li&gt;
&lt;li&gt;Add the connection string, container name, and CDN URL to the &lt;strong&gt;appsettings.Development.json&lt;/strong&gt; file.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Umbraco": {
    "Storage": {
      "AzureBlob": {
        "Media": {
          "ConnectionString": "",
          "ContainerName": "",
          "Cdn": {
            "Url": ""
          }
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now everything is ready to launch Umbraco, follow the setup wizard and initialize the Umbraco database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Part 3 - Docker
&lt;/h2&gt;

&lt;p&gt;In this section we are going to package up our Umbraco installation in a container, so that later we can deploy it to Azure.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Don't have Docker on your machine? The easiest way to get it is to &lt;a href="https://www.docker.com/get-started" rel="noopener noreferrer"&gt;install Docker Desktop&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/dotnet/architecture/microservices/container-docker-introduction/?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;Read more about Containers&lt;/a&gt; on Microsoft Docs&lt;/p&gt;

&lt;h3&gt;
  
  
  Part 3.1 - Dockerfile
&lt;/h3&gt;

&lt;p&gt;Create an empty file "Dockerfile" on the same level as the source directory and add the content below. &lt;/p&gt;

&lt;p&gt;Replace &lt;em&gt;UmbracoTogether.Web.dll&lt;/em&gt; with the name of your dll.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /source

# copy the source and restore dependencies
COPY ./source/ /source/
RUN dotnet restore

RUN dotnet build -c Release

# publish app and libraries
RUN dotnet publish -c release -o /app --no-restore

# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
EXPOSE 80
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "UmbracoTogether.Web.dll"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Part 3.2 - Build the container
&lt;/h3&gt;

&lt;p&gt;Next we build the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t umbraco:latest .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Part 3.3 - Run the container locally
&lt;/h3&gt;

&lt;p&gt;To run the container you need to specify the environment variables; for this you can use the -e parameter. The -p parameter maps a port inside the container to a port on the host. In the example below we expose Umbraco on port 80.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 80:80 -t umbraco:latest 
    -e "ConnectionStrings:umbracoDbDSN"="&amp;lt;SQL_CONNECTION_STRING&amp;gt;" 
    -e Umbraco:Storage:AzureBlob:Media:ConnectionString="&amp;lt;STORAGE_CONNECTION_STRING&amp;gt;" 
    -e Umbraco:Storage:AzureBlob:Media:ContainerName="&amp;lt;STORAGE_CONTAINER_NAME&amp;gt;" 
    -e Umbraco:Storage:AzureBlob:Media:Cdn:Url="&amp;lt;CDN_PROFILE_URL&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Part 4 - Run the containers in Azure
&lt;/h2&gt;

&lt;p&gt;In step 3 we created a container. This container is now only available on your dev machine. The next thing we have to do is store it in a central place.&lt;/p&gt;

&lt;h3&gt;
  
  
  Part 4.1 - Create an Azure Container Registry
&lt;/h3&gt;

&lt;p&gt;In Azure you can create an Azure Container Registry, where you can privately store your container image.&lt;/p&gt;

&lt;p&gt;In the sample below we create a registry, enable the admin user with a username and password, and perform a login using the CLI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/azure/container-registry/?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;Read more about Azure Container Registries&lt;/a&gt; on Microsoft Docs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the Azure Container registry
az acr create \
    --name &amp;lt;ACR_NAME&amp;gt; \
    --sku Basic \
    --admin-enabled true \
    --resource-group &amp;lt;RESOURCE_GROUP_NAME&amp;gt;

# Retrieve the credentials
az acr credential show --name &amp;lt;ACR_NAME&amp;gt; --query "passwords[0].value"
az acr credential show --name &amp;lt;ACR_NAME&amp;gt; --query "username"

# Login to the registry
az acr login -n &amp;lt;ACR_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build and tag
docker build . -t &amp;lt;ACR_NAME&amp;gt;.azurecr.io/umbraco:latest

# Push the image to the ACR
docker push &amp;lt;ACR_NAME&amp;gt;.azurecr.io/umbraco:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your image is in your registry and you can try and run it on a different machine.&lt;/p&gt;
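
&lt;p&gt;For example, on a different machine you could log in to the registry, pull the image and run it with the same settings as before (a minimal sketch, using the admin credentials retrieved earlier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Log in to the registry with the admin credentials retrieved earlier
docker login &amp;lt;ACR_NAME&amp;gt;.azurecr.io -u &amp;lt;ACR_USERNAME&amp;gt; -p &amp;lt;ACR_PASSWORD&amp;gt;

# Pull the image from the registry
docker pull &amp;lt;ACR_NAME&amp;gt;.azurecr.io/umbraco:latest

# Run it with the same environment variables as before
docker run -p 80:80 \
    -e "ConnectionStrings:umbracoDbDSN"="&amp;lt;SQL_CONNECTION_STRING&amp;gt;" \
    -e Umbraco:Storage:AzureBlob:Media:ConnectionString="&amp;lt;STORAGE_CONNECTION_STRING&amp;gt;" \
    -e Umbraco:Storage:AzureBlob:Media:ContainerName="&amp;lt;STORAGE_CONTAINER_NAME&amp;gt;" \
    -e Umbraco:Storage:AzureBlob:Media:Cdn:Url="&amp;lt;CDN_PROFILE_URL&amp;gt;" \
    &amp;lt;ACR_NAME&amp;gt;.azurecr.io/umbraco:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;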

&lt;h2&gt;
  
  
  Part 5 - CI &amp;amp; CD to Azure using Github Actions
&lt;/h2&gt;

&lt;p&gt;The final step is to set up a basic CI/CD pipeline in GitHub Actions.&lt;/p&gt;

&lt;p&gt;In this action we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor commits on the main branch&lt;/li&gt;
&lt;li&gt;Build the container&lt;/li&gt;
&lt;li&gt;Push the container to our ACR&lt;/li&gt;
&lt;li&gt;Deploy the container to an Azure Container Instance in West Europe&lt;/li&gt;
&lt;li&gt;Deploy the container to an Azure Container Instance in North America&lt;/li&gt;
&lt;li&gt;Add the ACI's to a Traffic manager profile&lt;/li&gt;
&lt;li&gt;Clean up the old container instances (TODO)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.github.com/actions" rel="noopener noreferrer"&gt;Learn more about GitHub Actions&lt;/a&gt; on GitHub Docs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/azure/traffic-manager/?WT.mc_id=aiml-48648-heboelma" rel="noopener noreferrer"&gt;Learn more about Azure Traffic Manager&lt;/a&gt; on Microsoft Docs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Part 5.1 - Create a service principal
&lt;/h3&gt;

&lt;p&gt;For GitHub Actions to access resources in Azure you need to create a service principal and grant this service principal access to a resource group.&lt;/p&gt;

&lt;p&gt;Use the bash script below to generate a service principal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e

# Set the following
spName="&amp;lt;NAME&amp;gt;"
subName="&amp;lt;AZURE_SUBSCRIPTION_NAME&amp;gt;"
subscriptionId="&amp;lt;AZURE_SUBSCRIPTION_GUID&amp;gt;"
resourceGroup="&amp;lt;RESOURCE_GROUP_NAME&amp;gt;"

# set the subscription
az account set --subscription "$subName" 

# Create a service principal
    echo "Creating service principal..."
    spInfo=$(az ad sp create-for-rbac --name "$spName" \
            --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroup \
            --role contributor  \
            --sdk-auth)

    # save spInfo locally
    echo $spInfo &amp;gt; auth.json        

    if [ $? == 0 ]; then

        echo '========================================================='
        echo 'GitHub secrets for configuring GitHub workflow'
        echo '========================================================='
        echo "AZURE_CREDENTIALS: $spInfo"
        echo '========================================================='
    else
        "An error occurred. Please try again."
         exit 1
    fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Part 5.2 - Add GitHub secrets
&lt;/h3&gt;

&lt;p&gt;Add the contents of &lt;em&gt;auth.json&lt;/em&gt; to the GitHub secret &lt;em&gt;AZURE_CREDENTIALS&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After you have added the content, remove the auth.json file and make sure you leave it out of your version control.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Add the following other secrets to GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.github.com/actions/security-guides/encrypted-secrets" rel="noopener noreferrer"&gt;Read more about secrets in GitHub&lt;/a&gt; on GitHub Docs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ACR_USERNAME
ACR_PASSWORD
SQL_CONSTR
STORAGE_CONN_STRING
STORAGE_CONTAINER
CDN_URL
TRAFFIC_MANAGER_DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
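
&lt;p&gt;You can add these secrets in the repository settings, or, if you prefer the GitHub CLI, with something like the sketch below (the values are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Set the Azure credentials from the auth.json file created earlier
gh secret set AZURE_CREDENTIALS &amp;lt; auth.json

# Set the remaining secrets one by one
gh secret set ACR_USERNAME --body "&amp;lt;ACR_USERNAME&amp;gt;"
gh secret set ACR_PASSWORD --body "&amp;lt;ACR_PASSWORD&amp;gt;"
gh secret set SQL_CONSTR --body "&amp;lt;SQL_CONNECTION_STRING&amp;gt;"
gh secret set STORAGE_CONN_STRING --body "&amp;lt;STORAGE_CONNECTION_STRING&amp;gt;"
gh secret set STORAGE_CONTAINER --body "&amp;lt;STORAGE_CONTAINER_NAME&amp;gt;"
gh secret set CDN_URL --body "&amp;lt;CDN_PROFILE_URL&amp;gt;"
gh secret set TRAFFIC_MANAGER_DNS --body "&amp;lt;TRAFFIC_MANAGER_DNS_NAME&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;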



&lt;h3&gt;
  
  
  Part 5.3 - Create the GitHub Action
&lt;/h3&gt;

&lt;p&gt;Add the content below to the file &lt;strong&gt;.github/workflows/build_and_deploy.yml&lt;/strong&gt; to create the GitHub Action.&lt;/p&gt;

&lt;p&gt;Adjust the variables under &lt;em&gt;env&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
      - main

name: Umbraco 9 Build &amp;amp; Deploy

env:
  # basic  
  resourceGroup: My_Resource_Group
  location: westeurope
  subName: "my-subscription-name"

  # app specific
  acrName: xxxx.azurecr.io

  # aci
  image_name: umbraco

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - uses: azure/docker-login@v1
        with:
          login-server: ${{ env.acrName }}
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWORD }}

      - run: |
          docker build . -t ${{ env.acrName }}/${{ env.image_name }}:${{ github.sha }}
          docker push ${{ env.acrName }}/${{ env.image_name }}:${{ github.sha }}
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    needs: build

    steps:

      - name: 'Login via Azure CLI'
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: 'Deploy to Europe Azure Container Instances'
        uses: 'azure/aci-deploy@v1'
        with:
          resource-group: ${{ env.resourceGroup }}
          dns-name-label: ${{ github.sha }}-eu
          image: ${{ env.acrName }}/${{ env.image_name }}:${{ github.sha }}
          registry-login-server:  ${{ env.acrName }}
          registry-username: ${{ secrets.ACR_USERNAME }}
          registry-password: ${{ secrets.ACR_PASSWORD }}
          name: umbraco9-eu-${{ github.sha }}
          secure-environment-variables: ConnectionStrings__umbracoDbDSN="${{ secrets.SQL_CONSTR }}" Umbraco__Storage__AzureBlob__Media__ConnectionString="${{ secrets.STORAGE_CONN_STRING }}" Umbraco__Storage__AzureBlob__Media__ContainerName="${{secrets.STORAGE_CONTAINER}}" Umbraco__Storage__AzureBlob__Media__Cdn__Url="${{secrets.CDN_URL}}" 
          location: westeurope
          cpu: 1
          memory: 2gb
          ports: 80

      - name: 'Deploy to West US Azure Container Instances'
        uses: 'azure/aci-deploy@v1'
        with:
          resource-group: ${{ env.resourceGroup }}
          dns-name-label: ${{ github.sha }}-us
          image: ${{ env.acrName }}/${{ env.image_name }}:${{ github.sha }}
          registry-login-server:  ${{ env.acrName }}
          registry-username: ${{ secrets.ACR_USERNAME }}
          registry-password: ${{ secrets.ACR_PASSWORD }}
          name: umbraco9-us-${{ github.sha }}
          secure-environment-variables: ConnectionStrings__umbracoDbDSN="${{ secrets.SQL_CONSTR }}" Umbraco__Storage__AzureBlob__Media__ConnectionString="${{ secrets.STORAGE_CONN_STRING }}" Umbraco__Storage__AzureBlob__Media__ContainerName="${{secrets.STORAGE_CONTAINER}}" Umbraco__Storage__AzureBlob__Media__Cdn__Url="${{secrets.CDN_URL}}" 
          location: eastus
          cpu: 1
          memory: 2gb
          ports: 80

      - name: 'Add to Traffic Manager'
        run: |
          az network traffic-manager profile create --name ${{ secrets.TRAFFIC_MANAGER_DNS }} \
            --routing-method Weighted \
            --path "/" \
            --protocol HTTP \
            --unique-dns-name ${{ secrets.TRAFFIC_MANAGER_DNS }} \
            --ttl 10 \
            --port 80 \
            --resource-group ${{ env.resourceGroup }}
          az network traffic-manager endpoint create -g ${{ env.resourceGroup }} \
              -n ${{ github.sha }}-eu \
              --profile-name ${{ secrets.TRAFFIC_MANAGER_DNS }} \
              --type externalEndpoints \
              --weight 1 \
              --target ${{ github.sha }}-eu.westeurope.azurecontainer.io
          az network traffic-manager endpoint create -g ${{ env.resourceGroup }} \
              -n ${{ github.sha }}-us \
              --profile-name ${{ secrets.TRAFFIC_MANAGER_DNS }} \
              --type externalEndpoints \
              --weight 1 \
              --target ${{ github.sha }}-us.eastus.azurecontainer.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have a globally available, load-balanced Umbraco 9 site that can be deployed with a GitHub Action.&lt;/p&gt;
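
&lt;p&gt;As a quick check you could, for example, request the site through the Traffic Manager profile created by the workflow (assuming the DNS name stored in the TRAFFIC_MANAGER_DNS secret):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Request the site through the Traffic Manager endpoint
curl -I http://&amp;lt;TRAFFIC_MANAGER_DNS&amp;gt;.trafficmanager.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;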

</description>
      <category>umbraco</category>
      <category>webdev</category>
      <category>azure</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Tech A11y Summit</title>
      <dc:creator>Henk Boelman</dc:creator>
      <pubDate>Sun, 12 Dec 2021 09:34:58 +0000</pubDate>
      <link>https://forem.com/azure/tech-a11y-summit-3c1a</link>
      <guid>https://forem.com/azure/tech-a11y-summit-3c1a</guid>
<description>&lt;p&gt;&lt;strong&gt;On the 15th of December, from 12:00 CET till 18:00 CET, the Tech A11y Summit is taking place. A 6-hour event all about accessibility in tech, intended to leave you with a lot of practical things you can directly apply in your work or daily life.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During the conference, interpretation into International Sign is provided, as well as automated captions and a transcript.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Register now on the &lt;a href="https://www.techa11y.dev"&gt;Tech A11y Summit Website&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the conference all about?
&lt;/h3&gt;

&lt;p&gt;The conference offers a wide range of topics presented by 16 speakers from all over the world, from 10-minute sessions and deep-dive sessions to interactive panels.&lt;/p&gt;

&lt;h3&gt;
  
  
  Opening and keynotes
&lt;/h3&gt;

&lt;p&gt;The summit starts with a 30-minute documentary created by &lt;a href="https://www.henkboelman.com"&gt;Henk Boelman&lt;/a&gt; about &lt;a href="https://www.techa11y.dev/main-track/inclusive-and-accessible-events/"&gt;inclusive and accessible events&lt;/a&gt;. After the documentary, the hosts of the day, &lt;a href="https://twitter.com/raytalks"&gt;Rayta&lt;/a&gt; and &lt;a href="https://twitter.com/Stacy_Cash"&gt;Stacy&lt;/a&gt;, will open the conference and then it is time for the opening keynote &lt;a href="https://www.techa11y.dev/main-track/inclusive-design-more-than-you-hear/"&gt;Inclusive Design, more than you hear&lt;/a&gt; by &lt;a href="https://twitter.com/marievandries"&gt;Marie Van Driessche&lt;/a&gt;. In this keynote she will dive into the question: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Websites are a very visual medium. You therefore might think that they will work for people who are deaf. But is that true?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Don't miss out on this keynote: tune in at 12:30 CET on the &lt;a href="https://www.techa11y.dev"&gt;Tech A11y Summit Website&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If this keynote is not enough to get you excited for the rest of the day, then &lt;a href="https://twitter.com/DennieDeclercq"&gt;Dennie Declercq&lt;/a&gt;'s talk &lt;a href="https://www.techa11y.dev/main-track/the-scary-truth-about-labels/"&gt;The scary truth about labels&lt;/a&gt; will get you moving to the edge of your chair.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ask your questions during the live panel
&lt;/h3&gt;

&lt;p&gt;We are lucky that Marie and Dennie are able to join us live in the studio for a panel after their talks. In this panel Rayta will dive into the different perspectives of Marie and Dennie about accessibility and inclusion.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4YKc8MW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5i6tk4pcck24zmkv5pj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4YKc8MW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5i6tk4pcck24zmkv5pj.png" alt="An image with all the speaker profile pictures and names" width="880" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deep dive sessions
&lt;/h3&gt;

&lt;p&gt;From the panel we move on to &lt;a href="https://twitter.com/MeganStrant"&gt;Megan Strant&lt;/a&gt;, who is joining us live from Australia with her session &lt;a href="https://www.techa11y.dev/main-track/how-can-you-drive-inclusion-for-hidden-disability/"&gt;How can you drive inclusion for hidden disability&lt;/a&gt;? In her session she will talk about the question: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What can we each do to support people with challenges across neurodiversity and mental health to make the workplace and online technology better on a daily basis?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  All about accessibility in web development
&lt;/h3&gt;

&lt;p&gt;After Megan's session we go to a section of the conference that focuses on web development. &lt;a href="https://twitter.com/bolonio"&gt;Adrián Bolonio&lt;/a&gt; will kick this section off with his session &lt;a href="https://www.techa11y.dev/main-track/testing-web-accessibility/"&gt;Testing Web Accessibility&lt;/a&gt;, followed by an hour of 10-minute lightning sessions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.techa11y.dev/main-track/the-css-side-of-accessibility/"&gt;The CSS Side of Accessibility&lt;/a&gt; by Linda Ikechukwu&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.techa11y.dev/main-track/could-browsers-fix-more-accessibility-problems-automatically/"&gt;Could browsers fix more accessibility problems automatically?&lt;/a&gt; by Hidde de Vries&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.techa11y.dev/main-track/tables-have-their-place-now-let-s-make-them-fit/"&gt;Tables have their place, now let’s make them fit&lt;/a&gt; by Martine Dowden&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.techa11y.dev/main-track/an-inclusive-web-using-preference-media-queries-to-make-your-sites-for-everyone/"&gt;An inclusive web: using preference media queries to make your sites for everyone&lt;/a&gt; Kilian Valkhof&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.techa11y.dev/main-track/mapping-our-journey-to-accessibility-what-we-can-learn-about-accessibility-from-maps/"&gt;Mapping our journey to accessibility: What we can learn about accessibility from maps&lt;/a&gt; by Joe Glombek&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The lightning round is closed by a panel about Accessibility in front-end development, hosted by Stacy Cashmore.&lt;/p&gt;

&lt;p&gt;Next up, after the panel, there are two 30-minute sessions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.techa11y.dev/main-track/frontend-and-backend-accommodate-neurodiversity-in-your-products/"&gt;FronteND and BackeND (Accommodate NeuroDiversity in your products)&lt;/a&gt; by Anna Korinna Németh-Szabó&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.techa11y.dev/main-track/accessibility-insights-catch-accessibility-bugs-early-and-often/"&gt;Accessibility Insights: Catch accessibility bugs early and often&lt;/a&gt; by Jacqueline Gibson and John Wade John Wade&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Closing keynote
&lt;/h3&gt;

&lt;p&gt;We close the summit with a keynote by &lt;a href="https://twitter.com/donasarkar"&gt;Dona Sarkar&lt;/a&gt;, where she will talk about all that you have learned about accessibility during the day and what you can now DO with it. In this closing session, you'll learn 5 actionable things you can do TODAY to bake inclusion into your product from day 1.&lt;/p&gt;

&lt;p&gt;We hope to see you during the summit from 12:00 CET till 18:00 CET.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Register now on: &lt;a href="https://www.techa11y.dev"&gt;Tech A11y&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>webdev</category>
      <category>inclusion</category>
    </item>
    <item>
      <title>4 Sessions not to miss on the Global AI Student Conference</title>
      <dc:creator>Henk Boelman</dc:creator>
      <pubDate>Thu, 15 Apr 2021 09:15:24 +0000</pubDate>
      <link>https://forem.com/azure/4-sessions-not-to-miss-on-the-global-ai-student-conference-42a</link>
      <guid>https://forem.com/azure/4-sessions-not-to-miss-on-the-global-ai-student-conference-42a</guid>
      <description>&lt;p&gt;&lt;strong&gt;On April 24th, the Global AI Student Conference takes place. An 8-hour conference with 16 sessions. 14 sessions are given by our Microsoft Student Ambassadors and there are 2 panels.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Date:  &lt;strong&gt;24 April 2021&lt;/strong&gt;&lt;br&gt;
Time:  &lt;strong&gt;10:00 - 18:00 UTC&lt;/strong&gt;&lt;br&gt;
Registration &amp;amp; full program:  &lt;strong&gt;&lt;a href="https://aiconf.education" rel="noopener noreferrer"&gt;https://aiconf.education&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;During the full duration of the conference, interpretation into International Sign will be provided.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below our Student Speakers have provided some details of what you will learn if you attend their session, as well as some useful links if you want to get started right now with the technologies they will talk about.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg3u0jj9tv9b4r0l6hl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg3u0jj9tv9b4r0l6hl2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Machine Learning In Fluid Mechanics
&lt;/h3&gt;

&lt;p&gt;By &lt;a href="https://aiconf.education/speakers/nigama-vajjula/" rel="noopener noreferrer"&gt;Nigama Vajjula&lt;/a&gt;&lt;br&gt;
Time: 15:00 - 15:30 UTC&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you will learn in this session:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How ML relates to fluid mechanics.&lt;/li&gt;
&lt;li&gt;Related research areas.&lt;/li&gt;
&lt;li&gt;The current state of ML research in fluid mechanics with some case studies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Learn more:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://link.springer.com/article/10.1007/s00162-020-00542-y" rel="noopener noreferrer"&gt;Special issue on machine learning and data-driven methods in fluid dynamics&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey7k3o7ozjftt8u3fidu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey7k3o7ozjftt8u3fidu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create no-code ML Models with Azure Machine Learning and Microsoft Learn
&lt;/h3&gt;

&lt;p&gt;By &lt;a href="https://aiconf.education/speakers/foteini-savvidou/" rel="noopener noreferrer"&gt;Foteini Savvidou&lt;/a&gt;&lt;br&gt;
Time: 12:30-13:00 UTC&lt;/p&gt;

&lt;p&gt;In this session, I will explain the concept of Machine Learning and Regression and show how to build a no-code regression model that predicts the price of an automobile in Azure Machine Learning Designer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you will learn in this session:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an Azure Machine Learning workspace.&lt;/li&gt;
&lt;li&gt;Build and train a regression model in Azure Machine Learning Designer.&lt;/li&gt;
&lt;li&gt;Evaluate and publish that model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Learn more:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/learn/modules/create-regression-model-azure-machine-learning-designer/?WT.mc_id=aiml-16127-cxa" rel="noopener noreferrer"&gt;Self-Paced Learning: Microsoft Learn – Create a Regression Model with Azure Machine Learning designer&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq64fs7uxxy8zxsd2k64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq64fs7uxxy8zxsd2k64.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AI as easy as creating a PowerPoint
&lt;/h3&gt;

&lt;p&gt;By: &lt;a href="https://aiconf.education/speakers/malte-reimann/" rel="noopener noreferrer"&gt;Malte Reimann&lt;/a&gt;&lt;br&gt;
Time: 13:00 - 13:30 UTC&lt;/p&gt;

&lt;p&gt;&lt;em&gt;'AI as easy as creating a PowerPoint' covers how to use image classification with zero math and nearly no computer science expertise needed, by using Lobe.ai.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you will learn in this session:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you know how to use a mouse, keyboard, and your smartphone camera, after this session you are guaranteed to be able to realize your own ideas involving machine learning.&lt;/li&gt;
&lt;li&gt;You will learn image classification without all the complexity of statistics.&lt;/li&gt;
&lt;li&gt;Often companies talk about the benefits of AI in boardrooms. You can be the one to materialize these benefits by starting small and iterating quickly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Learn more:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.lobe.ai" rel="noopener noreferrer"&gt;www.lobe.ai&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For65529hjbb5ap0iftr0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For65529hjbb5ap0iftr0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Health Bot
&lt;/h3&gt;

&lt;p&gt;By: &lt;a href="https://aiconf.education/speakers/christina-pardali/" rel="noopener noreferrer"&gt;Christine Pardali&lt;/a&gt;&lt;br&gt;
Time: 10:30 - 11:00 UTC&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you will learn in this session:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand what a healthcare assistant is
&lt;/li&gt;
&lt;li&gt;Get to know Azure Health Bot Service and the management portal
&lt;/li&gt;
&lt;li&gt;Learn how to build your own customized assistant in 15 minutes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Learn more:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/azure/health-bot/?WT.mc_id=aiml-16127-cxa" rel="noopener noreferrer"&gt;Azure Health Bot&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;If you want to learn more go to &lt;a href="https://aiconf.education" rel="noopener noreferrer"&gt;https://aiconf.education&lt;/a&gt; to view the full program and register.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>azure</category>
      <category>education</category>
      <category>students</category>
    </item>
    <item>
      <title>[Video] Adding Sign Language to your online event</title>
      <dc:creator>Henk Boelman</dc:creator>
      <pubDate>Mon, 15 Feb 2021 12:06:23 +0000</pubDate>
      <link>https://forem.com/azure/adding-sign-language-to-your-online-event-408d</link>
      <guid>https://forem.com/azure/adding-sign-language-to-your-online-event-408d</guid>
<description>&lt;p&gt;After the &lt;a href="https://dev.to/azure/adding-sign-language-interpretation-to-your-online-event-3c6g"&gt;blog post&lt;/a&gt;, Maya and I were invited to the Microsoft Reactor to talk about how you can add sign language to your online event.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/sWMUY87VdA0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Henk Boelman (Cloud Advocate) and Maya de Wit (sign language interpreting consultant) will look into how you can make virtual events more inclusive by adding sign language interpretation. We will explore what sign language interpretation is, share lessons learned, and conclude with some tips on how you can practically add sign language interpretation to your online event.&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>onlinevents</category>
      <category>signlanguage</category>
    </item>
    <item>
      <title>Adding sign language interpretation to your online event</title>
      <dc:creator>Henk Boelman</dc:creator>
      <pubDate>Tue, 24 Nov 2020 09:24:09 +0000</pubDate>
      <link>https://forem.com/azure/adding-sign-language-interpretation-to-your-online-event-3c6g</link>
      <guid>https://forem.com/azure/adding-sign-language-interpretation-to-your-online-event-3c6g</guid>
<description>&lt;p&gt;&lt;strong&gt;The current situation around the world, forcing us to stay at home and re-invent conferences online, offers us the opportunity to make events more accessible for community members who are deaf or hard of hearing. For the past six months I have been involved in co-producing multiple online conferences that offered signed language interpretation. I worked with Maya de Wit, who runs her own &lt;a href="https://www.mayadewit.nl/"&gt;Sign Language Interpreting Consultancy&lt;/a&gt;. Maya assisted me in creating this document.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I was completely unaware of sign languages and interpretation before I started on this journey. In this article I want to share with you the things I have learned and hopefully inspire you to add sign language interpretation to your next online event.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things I learned
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;A person providing sign language interpretation is called a sign language interpreter. Persons who ‘speak’ a sign language are called signers. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Sign language is not just one language
&lt;/h3&gt;

&lt;p&gt;Pick the right language for your audience. I thought sign language was universal, but it turned out that this is not the case. Each country has its own national sign language and sometimes also regional signed languages.&lt;/p&gt;

&lt;p&gt;The first step for your event is to ask yourself who the audience is and where they are located. If your audience is mainly in the United States, you can go with American Sign Language (ASL). If your audience is Dutch, go with NGT (Dutch Sign Language). However, in these times most online events - and especially tech events - reach a worldwide audience, so picking a local sign language excludes a lot of people by default. Luckily there is something called &lt;a href="https://wfdeaf.org/news/resources/faq-international-sign/"&gt;International Sign&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;International Sign (IS) is constructed by combining common elements and lexical signs from different sign languages. IS is used in a variety of different contexts, particularly at international meetings, and informally when travelling and socializing. International Sign is a term used by the World Federation of the Deaf (WFD) and other international organizations. Deaf people typically know only one sign language. Signers from differing countries may use IS spontaneously with each other, with relative success. This communicative success is linked to various factors. First, people who sign in IS have a certain amount of shared contextual knowledge. Secondly, signers may take advantage of shared knowledge of a spoken language, such as English. Thirdly, communication is made easier by the use of iconic signs and visual resources.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  You need more than one interpreter
&lt;/h3&gt;

&lt;p&gt;Interpreting is an exhausting activity – both physically and mentally. Keeping up with the story, context, jargon and references is no easy feat. This is especially so when interpreting a conversation (or: banter), or a Q&amp;amp;A session. You always need at least a team of two interpreters per language combination, and if the event is longer than two hours you will need at least a team of three interpreters. Ask the interpreter consultant to advise you on the recommended number of interpreters needed for your event. More importantly, check if the interpreters you want to hire are accredited: you can ask for their credentials.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do not interact with the interpreters during the live show
&lt;/h3&gt;

&lt;p&gt;The interpreters are at your event to interpret. Be a professional by respecting their work and the deaf viewers and do not comment on their interpretation or specific signs you see.  &lt;/p&gt;

&lt;p&gt;When there are technical difficulties, inform the interpreters that they can pause until further notice. &lt;/p&gt;

&lt;h3&gt;
  
  
  As producer you can help!
&lt;/h3&gt;

&lt;p&gt;As a producer of the show (the person behind the buttons) you can support in various ways. A good thing to know is that if a person is not visible in the video stream and you can only hear their voice, it is more challenging to interpret. Deaf people do not hear who is talking, so the interpreter needs to indicate first that the person not visible on the screen is talking, before they can actually start interpreting what is being said.&lt;br&gt;
Not every platform is suitable for sign language interpretation. For my events we have used different platforms like Streamyard and MS Teams. Check with the interpreter consultant if they can recommend a platform or if your preferred platform is suitable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Size matters
&lt;/h3&gt;

&lt;p&gt;The size of the interpreter's insert should be at least twenty five percent of the total broadcast screen. The insert can be either on the bottom right or the bottom left, although the latest research shows that it is better on the left side of the screen. Place a thin and dark frame around the insert to make it easier on the eyes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What &amp;amp; how?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1 - Preparation
&lt;/h3&gt;

&lt;p&gt;For almost everything good preparation means half the work done. To deliver the best experience possible the interpreting team needs to understand what will happen.&lt;/p&gt;

&lt;h4&gt;
  
  
  Know the content ahead of time
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Outline of the complete event&lt;/strong&gt;
This gives the team a general idea about the structure, duration of sessions, and helps with planning / when to switch to another team member. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The content per session&lt;/strong&gt;
Share the abstract and slides of every session prior to the event. This is very helpful to understand the context and intent of the talk. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;List of names&lt;/strong&gt;
Share with the interpreters the names of presenters, moderators and participants who have an active role in the event, so they will know how to spell the names of everyone correctly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Taking turns
&lt;/h4&gt;

&lt;p&gt;The interpreters need to be able to see each other and interact with one-another during the event to make sure switching turns is as smooth as possible. As the producer you need to know when to switch interpreters. You can do so by agreeing on a cue so that the interpreter can signal when it is time for a switch. In general, the interpreters take turns approximately every fifteen minutes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Test the setup
&lt;/h4&gt;

&lt;p&gt;Always do a test of the broadcasting setup with the interpreters a few days before the event.&lt;/p&gt;

&lt;p&gt;In the test session check the camera setup, the audio for the interpreters (is it clear and not distorted), their lighting (no shadows on face or hands), and the size of the interpreter's insert on the screen. Ask the team to use identical colored backgrounds (preferably blue or grey), and always ask the interpreters for tips to improve the experience for the viewer. &lt;/p&gt;

&lt;p&gt;Practice also taking turns in the test session.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create a back channel
&lt;/h4&gt;

&lt;p&gt;Create a dedicated back channel with the larger production team for your online event. It wouldn’t be an online conference if someone didn’t get disconnected, couldn’t receive an audio feed, etc.&lt;/p&gt;

&lt;h4&gt;
  
  
  Communication
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Include in the speaker invite that sign language will be provided during the stream. &lt;/li&gt;
&lt;li&gt;Use the icon for sign language interpretation on your website / for the sessions or tracks that will have interpretation, helping participants build their schedule accordingly. &lt;/li&gt;
&lt;li&gt;In your event promotion, share that there’ll be interpretation and into which language. Some event platforms allow you to specify what accessibility features you’re offering participants. One of those platforms is &lt;a href="https://confs.tech"&gt;confs.tech&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2 - Run
&lt;/h3&gt;

&lt;p&gt;Ensure that all the presenters in your online event wear a headset or have a dedicated microphone. Good audio is essential for all your participants but also for the interpreters in order to provide a good interpretation.&lt;/p&gt;

&lt;p&gt;The interpreters will inform you what their anticipated schedule is: who will start and when switching can be expected. The interpreters will interpret everything they hear, from communication, comments to obvious sounds.&lt;/p&gt;

&lt;h3&gt;
  
  
  3 - Debrief
&lt;/h3&gt;

&lt;p&gt;Plan a debriefing session with your interpreting team to discuss how you can improve your setup to ensure a successful interpretation. If it is possible, ask your viewers for feedback on the interpretation.&lt;/p&gt;

&lt;p&gt;When publishing your event recordings, clearly state the availability of sign language interpretation so that deaf viewers are able to find the accessible information.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Online conference with IS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://globalai.live/october-sessions-advanced-ai/"&gt;Global AI Community October Sessions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/playlist?list=PLxu2-n2PUPT2EM3Ir1HeaWlEWPYged4hS"&gt;Azure Thursday&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://live.df20.nl/"&gt;DF20 - Virtual Umbraco&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/playlist?list=PLxu2-n2PUPT2KsKkMjOfVC-HD--L-tDzZ"&gt;Virtual Azure Community Day&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Read more
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.mayadewit.nl/news/2020/9/17/signed-language-interpretation-at-your-virtual-meetings"&gt;Signed language interpretation at your virtual meetings by Maya de Wit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/jansche/AccessibleEvents"&gt;Accessible Events by Jan Schenk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.lazerwalker.com/2020/07/20/captions.html"&gt;Your Online Event Should Have Live Captions by Em Lazer-Walker&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>a11y</category>
      <category>onlineevents</category>
      <category>signlanguage</category>
    </item>
    <item>
      <title>Join the Global AI October sessions!</title>
      <dc:creator>Henk Boelman</dc:creator>
      <pubDate>Tue, 22 Sep 2020 14:10:41 +0000</pubDate>
      <link>https://forem.com/hboelman/join-the-global-ai-october-sessions-3dph</link>
      <guid>https://forem.com/hboelman/join-the-global-ai-october-sessions-3dph</guid>
      <description>&lt;p&gt;&lt;strong&gt;The Global AI Community is proud to present the October sessions. A series of 4 events in the month October focusing on the different fields of Artificial Intelligence (AI).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From how to get started in the field of AI to a deep dive into Computer Vision and Natural Language Processing.&lt;br&gt;
Every show is 3 hours long and filled with technical demos and panels presented by industry experts from companies like OpenCV, Google and Microsoft. There is also plenty of opportunity to ask your questions!&lt;/p&gt;

&lt;p&gt;You can now register at: &lt;a href="https://globalai.live/register"&gt;https://globalai.live/register&lt;/a&gt;&lt;br&gt;
&lt;em&gt;If you are quick you might even receive some pre-event swag!&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Episodes:
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Getting started - 8 October 2020
&lt;/h3&gt;

&lt;p&gt;In this episode of the Global AI October sessions we focus on how to get started in the field of AI.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://globalai.live/october-sessions-getting-started"&gt;https://globalai.live/october-sessions-getting-started&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.meetup.com/Global-AI-Channel/events/273427398"&gt;https://www.meetup.com/Global-AI-Channel/events/273427398&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Computer vision - 15 October 2020
&lt;/h3&gt;

&lt;p&gt;In this episode of the Global AI October sessions we focus on what computer vision is, the technical challenge and how to create models. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://globalai.live/october-sessions-computer-vision"&gt;https://globalai.live/october-sessions-computer-vision&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.meetup.com/Global-AI-Channel/events/273427259/"&gt;https://www.meetup.com/Global-AI-Channel/events/273427259/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Natural Language Processing - 22 October 2020
&lt;/h3&gt;

&lt;p&gt;In this episode of the Global AI October sessions we focus on Natural Language Processing with industry experts from around the world.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://globalai.live/october-sessions-natural-language-processing"&gt;https://globalai.live/october-sessions-natural-language-processing&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.meetup.com/Global-AI-Channel/events/273427182/"&gt;https://www.meetup.com/Global-AI-Channel/events/273427182/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Advanced AI - 29 October 2020
&lt;/h3&gt;

&lt;p&gt;In this episode of the Global AI October sessions we dive very deep into the world of advanced AI systems.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://globalai.live/october-sessions-advanced-ai"&gt;https://globalai.live/october-sessions-advanced-ai&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.meetup.com/Global-AI-Channel/events/273427085/"&gt;https://www.meetup.com/Global-AI-Channel/events/273427085/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Hope to see you there!&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>event</category>
      <category>community</category>
    </item>
    <item>
      <title>Online events with Teams NDI and OBS</title>
      <dc:creator>Henk Boelman</dc:creator>
      <pubDate>Thu, 03 Sep 2020 07:03:16 +0000</pubDate>
      <link>https://forem.com/azure/online-events-with-teams-ndi-and-obs-44mn</link>
      <guid>https://forem.com/azure/online-events-with-teams-ndi-and-obs-44mn</guid>
      <description>&lt;h1&gt;
  
  
  Online events with Teams NDI and OBS
&lt;/h1&gt;

&lt;p&gt;A few weeks ago the &lt;a href="https://docs.microsoft.com/en-us/microsoftteams/release-notes/release-notes?WT.mc_id=teamsndi-blog-heboelma#ndi-out-for-teams-meetings" rel="noopener noreferrer"&gt;NDI feature&lt;/a&gt; in Teams became available. This is great news, as it gives event organizers the opportunity to use Teams as a conversation platform and another tool, like OBS or vMix, to manage and brand the output and stream it to a platform of their choosing, like YouTube or Vimeo.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://www.ndi.tv/" rel="noopener noreferrer"&gt;Learn more&lt;/a&gt; about NDI&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Enable NDI in Teams
&lt;/h2&gt;

&lt;p&gt;To make NDI available to users, an administrator has to enable it in a Teams meeting policy and each user has to enable it in the Teams client.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enable NDI in the Meeting Policy
&lt;/h3&gt;

&lt;p&gt;The path below shows how to enable this for everyone in the organization. &lt;em&gt;(Your Teams admin will know how to implement this for a smaller group)&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the Microsoft Teams admin center 
&lt;a href="https://admin.teams.microsoft.com/" rel="noopener noreferrer"&gt;https://admin.teams.microsoft.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to Meetings &amp;gt; Meeting policies&lt;/li&gt;
&lt;li&gt;Click on the 'Global' (Org-wide default)
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fndi-policy.png" alt="NDI Policy"&gt;
&lt;/li&gt;
&lt;li&gt;In the Global Policy switch 'Allow NDI streaming' on.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fndi-policy-2.png" alt="Enable NDI Policy"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-US/microsoftteams/meeting-policies-in-teams?WT.mc_id=teamsndi-blog-heboelma#bkaudioandvideo" rel="noopener noreferrer"&gt;Read more&lt;/a&gt; about Managing meeting policies.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It can take a few hours for the policy change to take effect.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enable NDI in the Teams Client
&lt;/h3&gt;

&lt;p&gt;Once the policy is enabled, every user has to activate NDI in their settings before a call is started. You cannot enable NDI during a call.&lt;/p&gt;

&lt;p&gt;To enable the setting, open the Teams settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the profile image in the top right&lt;/li&gt;
&lt;li&gt;Click on settings
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fsettings.png" alt="Teams Setting"&gt;
&lt;/li&gt;
&lt;li&gt;Open the Permissions settings&lt;/li&gt;
&lt;li&gt;Switch the "Network Device Interface (NDI)" on
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fsettings-2.png" alt="Teams Setting"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tips &amp;amp; Tricks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The video quality improves if nobody shares a screen.&lt;/li&gt;
&lt;li&gt;Set the media bit rate in the meeting policy to 10 Mbps to get better video quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/microsoftteams/use-ndi-in-meetings?WT.mc_id=teamsndi-blog-heboelma" rel="noopener noreferrer"&gt;Read more about NDI in MS Teams on Microsoft Docs&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Get the Teams NDI in OBS
&lt;/h2&gt;

&lt;p&gt;In this part we will go through the minimal steps that are needed to get the Teams NDI output in an OBS Scene.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install OBS with NDI support
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://obsproject.com/" rel="noopener noreferrer"&gt;Download&lt;/a&gt; and install OBS&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/Palakis/obs-ndi/releases" rel="noopener noreferrer"&gt;Download&lt;/a&gt; and install the OBS NDI Plugin&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setup a NDI Source in OBS
&lt;/h3&gt;

&lt;p&gt;In this section we are going to create a scene in OBS with an NDI source from a Teams meeting.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First you have to start a Teams Meeting with at least one guest.&lt;/li&gt;
&lt;li&gt;Create a new NDI Source in OBS

&lt;ul&gt;
&lt;li&gt;Click the + icon under sources&lt;/li&gt;
&lt;li&gt;Select "Create new"&lt;/li&gt;
&lt;li&gt;Enter a name like "Teams Guest"&lt;/li&gt;
&lt;li&gt;Click OK
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fobs-add-source.png" alt="Teams Setting"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Configure the properties

&lt;ul&gt;
&lt;li&gt;Source name: Select the NDI source from MS Teams&lt;/li&gt;
&lt;li&gt;Bandwidth: Select Highest&lt;/li&gt;
&lt;li&gt;Sync: Select Source Timing (this syncs the audio / video)&lt;/li&gt;
&lt;li&gt;Check Allow hardware acceleration (this will use your GPU if available)&lt;/li&gt;
&lt;li&gt;Latency Mode: Select Low (With low there is almost no delay)
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fobs-add-source-2.png" alt="Teams Setting"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Configuring the NDI Source
&lt;/h3&gt;

&lt;p&gt;A feature in Teams is that the video adjusts to the available bandwidth. In OBS this means that the resolution of the NDI source can change during a broadcast. The resolution also scales down while a screen is being shared.&lt;br&gt;
This results in the unwanted behavior that the source keeps getting bigger and smaller. To avoid this, lock the size of the source and let the video scale to the inner bounds of the source.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Right click on the source&lt;/li&gt;
&lt;li&gt;Expand Transform&lt;/li&gt;
&lt;li&gt;Select 'Edit Transform'
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fobs-transform-source.png" alt="Teams Setting"&gt;
&lt;/li&gt;
&lt;li&gt;Change the 'Bounding Box Type' to 'Scale to inner bounds'&lt;/li&gt;
&lt;li&gt;Click close to save
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fobs-transform-box.png" alt="Teams Setting"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can repeat these steps to add more sources for other speakers and a screen share.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fobs-final.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fhnky%2Fblog%2Fmaster%2Fimages%2Fndi%2Fobs-final.png" alt="Teams Setting"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Tips &amp;amp; Tricks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Audio&lt;/strong&gt;
For every person in Teams you get an individual NDI Feed. A good thing to know is that each NDI feed contains:

&lt;ul&gt;
&lt;li&gt;The video of the person (changeable resolution)&lt;/li&gt;
&lt;li&gt;The audio stream of the whole Teams conversation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;So don't forget, in OBS you can always hear everything that is going on in the Teams conversation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Continue learning:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/microsoftteams/use-ndi-in-meetings?WT.mc_id=teamsndi-blog-heboelma" rel="noopener noreferrer"&gt;Read more about NDI in MS Teams on Microsoft Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/MicrosoftTeams/?WT.mc_id=teamsndi-blog-heboelma" rel="noopener noreferrer"&gt;Everything about Teams on Microsoft Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/learn/browse/?WT.mc_id=teamsndi-blog-heboelma&amp;amp;expanded=m365&amp;amp;filter-products=teams&amp;amp;products=office-teams" rel="noopener noreferrer"&gt;Microsoft Teams Learning Paths&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.henkboelman.com/articles/online-meetups-with-obs-and-skype/" rel="noopener noreferrer"&gt;Online meetups with OBS and Skype&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/azure/provisioning-azure-vm-as-a-streamer-machine-with-chocolatey-2pha"&gt;Provisioning Azure VM as a Streamer Machine with Chocolatey&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.maartenballiauw.be/post/2020/04/02/streaming-a-community-event-on-youtube-sharing-the-technologies-and-learnings-from-virtual-azure-community-day.html" rel="noopener noreferrer"&gt;Streaming a Community Event on YouTube&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>teams</category>
      <category>streaming</category>
      <category>azure</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Create your first model with Azure Custom Vision and Python</title>
      <dc:creator>Henk Boelman</dc:creator>
      <pubDate>Mon, 28 Oct 2019 11:51:57 +0000</pubDate>
      <link>https://forem.com/azure/create-your-first-model-with-azure-custom-vision-and-python-3inl</link>
      <guid>https://forem.com/azure/create-your-first-model-with-azure-custom-vision-and-python-3inl</guid>
      <description>&lt;p&gt;Welcome to this first article in the AI for Developer series, in this series of articles I will share tips and tricks around Azure AI with you. My name is Henk Boelman, a Cloud Advocate at Microsoft based in the Netherlands, focusing on AI for developers.&lt;/p&gt;

&lt;p&gt;In this first article I want to share with you how you can create a classification model using the Custom Vision service with the Python SDK.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Python and not the visual interface?
&lt;/h3&gt;

&lt;p&gt;The answer is simple: if you build the training process in code, you can version it, for instance on GitHub. Having your code versioned means you can read back what you have done, work on it in a team and run it again whenever you need to.&lt;/p&gt;

&lt;p&gt;Let’s dive into the code! Before we start, I assume you have &lt;a href="https://www.python.org/downloads/"&gt;Python 3.6&lt;/a&gt; installed. &lt;/p&gt;

&lt;h3&gt;
  
  
  Create resources in Azure
&lt;/h3&gt;

&lt;p&gt;The first thing you need to do is create an Azure Custom Vision service. If you don’t have an &lt;a href="https://azure.microsoft.com/free/?WT.mc_id=AI4DEV01-devto-heboelma"&gt;Azure subscription&lt;/a&gt; you can get $200 credit for the first month. &lt;/p&gt;

&lt;p&gt;You can create an Azure Custom Vision endpoint easily through the portal, but you can also use the &lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?WT.mc_id=AI4DEV01-devto-heboelma"&gt;Azure CLI&lt;/a&gt; for this. If you don’t have the &lt;a href="https://pypi.org/project/azure-cli/?WT.mc_id=AI4DEV01-devto-heboelma"&gt;Azure CLI&lt;/a&gt; installed, you can install it using pip.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install azure-cli
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The first step is to login to your Azure subscription, select the right subscription and create a resource group for the Custom Vision Endpoints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az login
az account set -s &amp;lt;SUBSCRIPTION_ID&amp;gt;
az group create --name CustomVision_Demo-RG --location westeurope
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The Custom Vision Service has 2 types of endpoints. One for training the model and one for running predictions against the model.&lt;/p&gt;

&lt;p&gt;Let’s create the two endpoints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az cognitiveservices account create --name CustomVisionDemo-Prediction --resource-group CustomVision_Demo-RG --kind CustomVision.Prediction --sku S0 --location westeurope --yes
az cognitiveservices account create --name CustomVisionDemo-Training --resource-group CustomVision_Demo-RG --kind CustomVision.Training --sku S0 --location westeurope --yes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can use the Azure CLI to easily get the training key and the prediction key for the endpoints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az cognitiveservices account keys list --name CustomVisionDemo-Training --resource-group CustomVision_Demo-RG
az cognitiveservices account keys list --name CustomVisionDemo-Prediction  --resource-group CustomVision_Demo-RG
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have created the endpoints we can start with training the model. &lt;br&gt;
 &lt;/p&gt;
&lt;h3&gt;
  
  
  It all starts with a question
&lt;/h3&gt;

&lt;p&gt;Every Machine Learning journey starts with a question you want to have answered. For this example, you are going to answer the question: is this a Homer or a Marge Lego figure?&lt;/p&gt;

&lt;p&gt;Now that we know what to ask the model, we can move on to the next requirement: data. Our model is going to be a classification model, meaning it looks at a picture and scores it against the different classes. The output will be something like: I’m 70% confident this is Homer and 1% confident this is Marge. By taking the class with the highest score and setting a minimum threshold for the confidence score, we know what is in the picture.&lt;/p&gt;
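
&lt;p&gt;To make that concrete, here is a small standalone sketch (not using the Custom Vision SDK) of how an application could pick the winning class from such scores with a minimum confidence threshold. The class names and scores are made up for illustration.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical scores, as a classification model could return them
predictions = {"Homer": 0.70, "Marge": 0.01}

CONFIDENCE_THRESHOLD = 0.5  # minimum score we are willing to accept

# Take the class with the highest score
best_class = max(predictions, key=predictions.get)

if predictions[best_class] &amp;gt;= CONFIDENCE_THRESHOLD:
    print(f"This looks like {best_class} ({predictions[best_class]:.0%} confident)")
else:
    print("Not confident enough to decide what is in the picture")
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;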

&lt;p&gt;I have created a dataset for you with 50 pictures of a Homer Simpson Lego figure and 50 pictures of a Marge Simpson Lego figure. I took the photos with a few things in mind: I used a lot of different backgrounds, took the photos from different angles, made sure the only object in the photo was Homer or Marge, and kept the quality of the photos reasonably consistent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.henkboelman.com/media/45lkxosc/ai4dev01-dataset.zip"&gt;Download the dataset here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Train the model
&lt;/h3&gt;

&lt;p&gt;For the training we are going the use the &lt;a href="https://docs.microsoft.com/en-us/python/api/overview/azure/cognitiveservices/customvision?view=azure-python&amp;amp;WT.mc_id=AI4DEV01-devto-heboelma"&gt;Custom Vision Service Python SDK&lt;/a&gt;, you can install this package using pip.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install azure-cognitiveservices-vision-customvision
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Create a new Python file called 'train.py' and start adding code.&lt;/p&gt;

&lt;p&gt;Start with importing the packages needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os  # needed later for walking the training image folders

from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import ImageFileCreateEntry
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, create variables for the Custom Vision endpoint, Custom Vision training key and the location where the training images are stored.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cv_endpoint = "https://westeurope.api.cognitive.microsoft.com"
training_key = "&amp;lt;INSERT TRAINING KEY&amp;gt;"
training_images = "LegoSimpsons/TrainingImages"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To start with the training, we need to create a Training Client. This method takes as input the endpoint and the training key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trainer = CustomVisionTrainingClient(training_key, endpoint= cv_endpoint)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you are ready to create your first project. The project takes a name and domain as input, the name can be anything. The domain is a different story. You can ask for a list of all possible domains and choose the one closest to what you are trying to accomplish. For instance if you are trying to classify food you pick the domain “Food” or “Landmarks” for landmarks. Use the code below to show all domains.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for domain in trainer.get_domains():
  print(domain.id, "\t", domain.name) 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You might notice that some domains have the word “Compact” next to them. If this is the case it means the Azure Custom Vision Service will create a smaller model, which you will be able to export and run locally on your mobile phone or desktop.&lt;/p&gt;
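
&lt;p&gt;If you would rather not hard-code the domain ID used in the next snippet, a small sketch like the one below could look it up from the same domain listing; the returned ID can then be passed to create_project instead of the GUID. The exact domain name, “General (compact)”, is an assumption here, so check it against the output of the listing above.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Look up the compact classification domain by name instead of hard-coding its ID.
# The name "General (compact)" is an assumption; verify it in the domain listing above.
compact_domain = next(d for d in trainer.get_domains()
                      if "general" in d.name.lower() and "compact" in d.name.lower())
print(compact_domain.id, compact_domain.name)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;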

&lt;p&gt;Let’s create a new project with the domain set to “General Compact”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project = trainer.create_project("Lego - Simpsons - v1","0732100f-1a38-4e49-a514-c9b44c697ab5")
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next you need to create tags; these tags are the same as the classes mentioned above. When you have created a few tags, you can tag images with them and upload the images to the Azure Custom Vision Service.&lt;/p&gt;

&lt;p&gt;Our images are sorted per tag/class in a folder. All the photos of Marge are in the folder named 'Marge' and all the images of Homer are in the folder named 'Homer'.&lt;/p&gt;

&lt;p&gt;In the code below we do the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We open the directory containing the folders with training images.&lt;/li&gt;
&lt;li&gt;  Loop through all the directories found in this folder&lt;/li&gt;
&lt;li&gt;  Create a new tag with the folder name&lt;/li&gt;
&lt;li&gt;  Open the folder containing the images &lt;/li&gt;
&lt;li&gt;  Create, for every image in that folder, an ImageFileCreateEntry that contains the filename, file content and the tag.&lt;/li&gt;
&lt;li&gt;  Add this ImageFileEntry to a list.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image_list = []
directories = os.listdir(training_images)

for tagName in directories:
    tag = trainer.create_tag(project.id, tagName)
    images = os.listdir(os.path.join(training_images,tagName))
    for img in images:
        with open(os.path.join(training_images,tagName,img), "rb") as image_contents:
            image_list.append(ImageFileCreateEntry(name=img, contents=image_contents.read(), tag_ids=[tag.id]))  
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you have a list that contains all tagged images. So far no images have been added to the Azure Custom Vision service, only the tags have been created.&lt;/p&gt;

&lt;p&gt;Uploading images happens in batches with a maximum of 64 images per batch. Our dataset contains 100 images, so first we need to split the list into chunks of 64.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def chunks(l, n):
    for i in range(0, len(l), n):
        yield l[i:i + n]
batchedImages = chunks(image_list, 64)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have our images split into batches of 64, we can upload them batch by batch to the Azure Custom Vision Service. &lt;em&gt;Note: This can take a while!&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for batchOfImages in batchedImages:
    upload_result = trainer.create_images_from_files(project.id, images=batchOfImages)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
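
&lt;p&gt;It is also worth checking whether each batch was accepted before you continue. You could extend the upload loop with a check like the sketch below; the attribute names (is_batch_successful and status) are assumptions based on the SDK’s image upload summary model.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for batchOfImages in batchedImages:
    upload_result = trainer.create_images_from_files(project.id, images=batchOfImages)
    # Report any images that were rejected (duplicates, wrong format, etc.)
    if not upload_result.is_batch_successful:
        for image in upload_result.images:
            print("Image upload status:", image.status)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;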



&lt;p&gt;From this point, there are only two steps remaining before you can access the model through an API endpoint. &lt;br&gt;
First you need to train the model and finally you must publish the model, so it is accessible through a prediction API. The training can take a while, so you can create a while loop after the train request that checks the status of the model training every second.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time
iteration = trainer.train_project(project.id)
while (iteration.status != "Completed"):
    iteration = trainer.get_iteration(project.id, iteration.id)
    print ("Training status: " + iteration.status)
    time.sleep(1)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you have reached the final step: publishing the model. It is then available through a prediction API and ready to be consumed from an application.&lt;/p&gt;

&lt;p&gt;Every time you train your model, the result is called an iteration. You often have to retrain your model when you get new data or when you find out that your model behaves differently in the real world than expected.&lt;/p&gt;

&lt;p&gt;The concept of the Custom Vision Service is that you publish an iteration of your model under a specific name. This means you can have multiple versions of your model available for your application to use; for instance, you can A/B test your model very quickly this way.&lt;/p&gt;
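
&lt;p&gt;As a sketch of that idea (using the prediction client that is introduced later in this article), an application could send the same image to two published iteration names and compare the answers. The names 'version-a' and 'version-b' are hypothetical.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical published iteration names; both would have been published beforehand.
for published_name in ["version-a", "version-b"]:
    results = predictor.classify_image_url(
        project.id, published_name,
        url="https://missedprints.com/wp-content/uploads/2014/03/marge-simpson-lego-minifig.jpg")
    # Compare the top prediction of each published version
    best = max(results.predictions, key=lambda p: p.probability)
    print(published_name, best.tag_name, best.probability)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;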

&lt;p&gt;To publish an iteration of your model you call the publish_iteration method; it requires a few parameters.&lt;/p&gt;

&lt;p&gt;The Project ID and Iteration ID are values from the previous steps. You can choose a name under which to publish your model, for instance 'latest' or 'version1'. The last parameter you need is the 'resource identifier' of the resource you want to publish to. This is the resource identifier of the Azure Custom Vision Prediction resource we created at the beginning with our az command.&lt;/p&gt;

&lt;p&gt;You can use this command to retrieve all the details about the Prediction resource you created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az cognitiveservices account show --name CustomVisionDemo-Prediction --resource-group CustomVision_Demo-RG
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can copy the value behind the field ID; it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/subscriptions/&amp;lt;SUBSCRIPTION-ID&amp;gt;/resourceGroups/&amp;lt;RESOURCE_GROUP_NAME&amp;gt;/providers/Microsoft.CognitiveServices/accounts/&amp;lt;RESOURCE_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When you have the resource ID, paste it in the variable below and call the 'publish_iteration' method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;publish_iteration_name = ''  # the name to publish under, for instance 'latest'
resource_identifier = ''     # the Prediction resource ID copied in the previous step
trainer.publish_iteration(project.id, iteration.id, publish_iteration_name, resource_identifier)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you have successfully trained and published your model!&lt;/p&gt;

&lt;p&gt;A small recap of what we have done:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  You created an Azure Resource group containing an Azure Custom Vision service training and prediction endpoint&lt;/li&gt;
&lt;li&gt;  You have created a new Project&lt;/li&gt;
&lt;li&gt;  In that project you have created tags&lt;/li&gt;
&lt;li&gt;  You have uploaded images in batches of 64 and tagged them&lt;/li&gt;
&lt;li&gt;  You have trained an iteration of your model&lt;/li&gt;
&lt;li&gt;  You have published the iteration to a prediction endpoint&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/hnky/AI4DEV01-CustomVision/blob/master/train.py"&gt;View the full code here&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Let’s test the model!
&lt;/h3&gt;

&lt;p&gt;Using the model in an application is as easy as calling an API. You could just do a JSON post to the endpoint, but you can also use the methods in the Custom Vision Python SDK, which makes things a lot easier.&lt;/p&gt;
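
&lt;p&gt;If you are curious what such a raw call looks like, here is a minimal sketch using the requests package. The URL layout (the v3.0 classify-by-URL route), the field names in the request and response, and all placeholder values are assumptions, so copy the exact prediction URL for your published iteration from the Custom Vision portal.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests  # pip install requests

# Placeholder values: take the real ones from the portal or the az commands below.
endpoint = "https://westeurope.api.cognitive.microsoft.com"
project_id = "&amp;lt;PROJECT ID&amp;gt;"
published_name = "&amp;lt;PUBLISH ITERATION NAME&amp;gt;"
prediction_key = "&amp;lt;PREDICTION KEY&amp;gt;"

# Assumed route of the v3.0 prediction API for classifying an image by URL
url = f"{endpoint}/customvision/v3.0/Prediction/{project_id}/classify/iterations/{published_name}/url"
headers = {"Prediction-Key": prediction_key, "Content-Type": "application/json"}
body = {"Url": "https://missedprints.com/wp-content/uploads/2014/03/marge-simpson-lego-minifig.jpg"}

response = requests.post(url, headers=headers, json=body)
for prediction in response.json()["predictions"]:
    print(prediction["tagName"], prediction["probability"])
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;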

&lt;p&gt;Create a new file called 'predict.py'&lt;/p&gt;

&lt;p&gt;Start with importing the dependencies you need to do a prediction.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The next thing you need is the prediction key. This is the key of the resource you published the model to.&lt;br&gt;
You can use this az command to list the keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az cognitiveservices account keys list --name CustomVisionDemo-Prediction --resource-group CustomVision_Demo-RG
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When you have your prediction key, you can create a prediction client. For this client you also need the endpoint. You can run the az command below and copy the URL behind the field “endpoint”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az cognitiveservices account show --name CustomVisionDemo-Prediction --resource-group CustomVision_Demo-RG
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that you have the prediction key and the endpoint, you can create the PredictionClient.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# prediction_key and ENDPOINT hold the values from the two az commands above
predictor = CustomVisionPredictionClient(prediction_key, endpoint=ENDPOINT)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You have multiple options to classify an image: you can send a URL or you can send the binary image to the endpoint. By default the Azure Custom Vision service keeps a history of all the images posted to the endpoint. The images and their predictions can be reviewed in the portal and used to retrain your model. But sometimes you don’t want the images to be kept in history, so it is possible to disable this feature.&lt;/p&gt;
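
&lt;p&gt;For a local image file, or when you do not want an image kept in that history, the SDK has variants of the classify call. Below is a minimal sketch, assuming the classify_image and classify_image_url_with_no_store method names from the SDK reference and a placeholder file path.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Classify a local image file (the path is a placeholder)
with open("test-images/homer-1.jpg", "rb") as image_contents:
    results = predictor.classify_image(
        project.id, publish_iteration_name, image_contents.read())

# Classify by URL without storing the image in the project's history
results = predictor.classify_image_url_with_no_store(
    project.id, publish_iteration_name,
    url="https://missedprints.com/wp-content/uploads/2014/03/marge-simpson-lego-minifig.jpg")
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;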

&lt;p&gt;I have uploaded 2 images you can use for testing, but feel free to use a search engine to find other images of &lt;a href="https://www.bing.com/images/search?q=marge+simpson+lego"&gt;Marge&lt;/a&gt; and &lt;a href="https://www.bing.com/images/search?q=homer+simpson+lego"&gt;Homer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To classify an image using a URL and keep the history you call the 'classify_image_url' method. You give it the project id and iteration name from a few steps above and provide the URL to the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;results = predictor.classify_image_url(project.id,publish_iteration_name,url="https://missedprints.com/wp-content/uploads/2014/03/marge-simpson-lego-minifig.jpg")
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To show the scores for the different classes on the screen, you can use the code below to loop through the results and display the tag name and confidence score for each prediction.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for prediction in results.predictions:
    print("\t" + prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you are all done and have your own classification model running in the cloud! Here is a recap of what you have achieved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We asked a question&lt;/li&gt;
&lt;li&gt;Collected data&lt;/li&gt;
&lt;li&gt;Created an Azure Custom Vision Service endpoint&lt;/li&gt;
&lt;li&gt;Created a new Project&lt;/li&gt;
&lt;li&gt;Tagged and uploaded content&lt;/li&gt;
&lt;li&gt;Trained the model &lt;/li&gt;
&lt;li&gt;Published the iteration so it can be used in an API&lt;/li&gt;
&lt;li&gt;Ran predictions against the model using the API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the rest of this series of articles we will use this model for different solutions! Stay tuned! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/hnky/AI4DEV01-CustomVision/blob/master/predict.py"&gt;View the full code here&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?WT.mc_id=AI4DEV01-devto-heboelma"&gt;How to install the Azure CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account-cli?WT.mc_id=AI4DEV01-devto-heboelma"&gt;Creating cognitive services through the CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/python/api/overview/azure/cognitiveservices/customvision?view=azure-python&amp;amp;WT.mc_id=AI4DEV01-devto-heboelma"&gt;Python SDK&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service?WT.mc_id=AI4DEV01-devto-heboelma"&gt;Custom Vision Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>tutorial</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
