<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Olalekan Oladiran</title>
    <description>The latest articles on Forem by Olalekan Oladiran (@olalekan_oladiran_d74b7a6).</description>
    <link>https://forem.com/olalekan_oladiran_d74b7a6</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1194627%2F0534bbb5-e1d2-4da7-aa79-5556af40e85f.jpeg</url>
      <title>Forem: Olalekan Oladiran</title>
      <link>https://forem.com/olalekan_oladiran_d74b7a6</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/olalekan_oladiran_d74b7a6"/>
    <language>en</language>
    <item>
      <title>Build a Fruit Detection AI with Azure Custom Vision: A Step-by-Step Guide</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Mon, 04 Aug 2025 15:54:37 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/build-a-fruit-detection-ai-with-azure-custom-vision-a-step-by-step-guide-1p2b</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/build-a-fruit-detection-ai-with-azure-custom-vision-a-step-by-step-guide-1p2b</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The Azure AI Custom Vision service enables you to create computer vision models that are trained on your own images. You can use it to train image classification and object detection models, which you can then publish and consume from applications.&lt;/p&gt;

&lt;p&gt;In this exercise, you will use the Custom Vision service to train an object detection model that can detect and locate three classes of fruit (apple, banana, and orange) in an image.&lt;/p&gt;

&lt;h1&gt;
  
  
  Create Custom Vision resources
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open the Azure portal at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;, and sign in using your Azure credentials. Close any welcome messages or tips that are displayed.&lt;/li&gt;
&lt;li&gt;Select Create a resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmywpqayqudyut3ppce92.png" alt=" " width="800" height="145"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the search bar, search for Custom Vision, select Custom Vision, and create the resource with the following settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create options: Both&lt;/li&gt;
&lt;li&gt;Subscription: Your Azure subscription&lt;/li&gt;
&lt;li&gt;Resource group: Create or select a resource group&lt;/li&gt;
&lt;li&gt;Region: Choose any available region&lt;/li&gt;
&lt;li&gt;Name: A valid name for your Custom Vision resource&lt;/li&gt;
&lt;li&gt;Training pricing tier: F0&lt;/li&gt;
&lt;li&gt;Prediction pricing tier: F0
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z9h9lhbjnoeghrvgf8w.png" alt=" " width="800" height="613"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqrn9dpscgccgynpcb4y.png" alt=" " width="800" height="704"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Create the resource and wait for deployment to complete, and then view the deployment details. Note that two Custom Vision resources are provisioned: one for training, and another for prediction.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngchhwnf4k85wbcfcqsx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngchhwnf4k85wbcfcqsx.png" alt=" " width="800" height="921"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yefsctcr99j1t6s1j1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yefsctcr99j1t6s1j1h.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: Each resource has its own endpoint and keys, which are used to manage access from your code. To train a model, your code must use the training resource (with its endpoint and key), and to get predictions from the trained model, your code must use the prediction resource (with its endpoint and key). A minimal code sketch of this separation follows this list.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
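&lt;p&gt;To make the separation concrete, here is a minimal Python sketch of how the two resources are addressed from code. The endpoint and key values are placeholders, not real settings; the SDK package is the same azure-cognitiveservices-vision-customvision library installed later in this exercise.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: training and prediction are separate resources, each with its
# own endpoint and key. The values below are placeholders.
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

# Training resource: used to upload and tag images and to train iterations
training_client = CustomVisionTrainingClient(
    "https://YOUR-TRAINING-RESOURCE.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Training-key": "YOUR-TRAINING-KEY"}))

# Prediction resource (the one ending in -Prediction): used to call a published model
prediction_client = CustomVisionPredictionClient(
    "https://YOUR-PREDICTION-RESOURCE.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Prediction-key": "YOUR-PREDICTION-KEY"}))
&lt;/code&gt;&lt;/pre&gt;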

&lt;p&gt;When the resources have been deployed, go to the resource group to view them. You should see two Custom Vision resources, one with the suffix -Prediction.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdql576iqb6a1p37dv8zi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdql576iqb6a1p37dv8zi.png" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Create a Custom Vision project in the Custom Vision portal
&lt;/h1&gt;

&lt;p&gt;To train an object detection model, you need to create a Custom Vision project based on your training resource. To do this, you’ll use the Custom Vision portal.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open a new browser tab (keeping the Azure portal tab open - you’ll return to it later).&lt;/li&gt;
&lt;li&gt;In the new browser tab, open the Custom Vision portal at &lt;a href="https://customvision.ai" rel="noopener noreferrer"&gt;https://customvision.ai&lt;/a&gt;. If prompted, sign in using your Azure credentials and agree to the terms of service.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qodkccxj4ibp1sir6fa.png" alt=" " width="800" height="460"&gt;
&lt;/li&gt;
&lt;li&gt;Create a new project with the following settings:

&lt;ul&gt;
&lt;li&gt;Name: Detect Fruit&lt;/li&gt;
&lt;li&gt;Description: Object detection for fruit.&lt;/li&gt;
&lt;li&gt;Resource: Your Custom Vision resource&lt;/li&gt;
&lt;li&gt;Project Types: Object Detection&lt;/li&gt;
&lt;li&gt;Domains: General&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Wait for the project to be created and opened in the browser.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwk83lzbsi60booxg8drd.png" alt=" " width="800" height="1013"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Upload and tag images
&lt;/h1&gt;

&lt;p&gt;Now that you have an object detection project, you can upload and tag images to train a model.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Custom Vision portal includes visual tools that you can use to upload images and tag regions within them that contain multiple types of object.&lt;/li&gt;
&lt;li&gt;In a new browser tab, download the training images from

&lt;code&gt;https://github.com/MicrosoftLearning/mslearn-ai-vision/raw/main/Labfiles/object-detection/training-images.zip&lt;/code&gt;

and extract the zip folder to view its contents. This folder contains images of fruit.
&lt;/li&gt;
&lt;li&gt;In the Custom Vision portal, in your object detection project, select Add images and upload all of the images in the extracted folder.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkiwwa07ymwk4sap7f4cm.png" alt=" " width="800" height="360"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71tmti0canp92q0nswfc.png" alt=" " width="800" height="632"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bf29hsxytxcb9pb1i6l.png" alt=" " width="800" height="382"&gt;
&lt;/li&gt;
&lt;li&gt;After the images have been uploaded, select the first one to open it.&lt;/li&gt;
&lt;li&gt;Hold the mouse over any object in the image until an automatically detected region is displayed like the image below. Then select the object, and if necessary resize the region to surround it.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8w7hxa5jp8tn2e5ioag4.png" alt=" " width="800" height="499"&gt;
Alternatively, you can simply drag around the object to create a region.
&lt;/li&gt;
&lt;li&gt;When the region surrounds the object, add a new tag with the appropriate object type (apple, banana, or orange) as shown here:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35l8o3u91kqy92vjh3mq.png" alt=" " width="800" height="499"&gt;
&lt;/li&gt;
&lt;li&gt;Select and tag each other object in the image, resizing the regions and adding new tags as required.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfdrjagbohlg9jpd9y9a.png" alt=" " width="800" height="499"&gt;
&lt;/li&gt;
&lt;li&gt;Use the &amp;gt; link on the right to go to the next image, and tag its objects. Then keep working through the entire image collection, tagging each apple, banana, and orange.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fry4ntriwjbywiasx8nqm.png" alt=" " width="800" height="499"&gt;
&lt;/li&gt;
&lt;li&gt;When you have finished tagging the last image, close the Image Detail editor. On the Training Images page, under Tags, select Tagged to see all of your tagged images:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bsc5fd5lsuvwet92sal.png" alt=" " width="800" height="446"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Use the Custom Vision SDK to upload images
&lt;/h1&gt;

&lt;p&gt;You can use the UI in the Custom Vision portal to tag your images, but many AI development teams use other tools that generate files containing information about tags and object regions in images. In scenarios like this, you can use the Custom Vision training API to upload tagged images to the project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the settings (⚙) icon at the top right of the Training Images page in the Custom Vision portal to view the project settings.&lt;/li&gt;
&lt;li&gt;Under General (on the left), note the Project Id that uniquely identifies this project.&lt;/li&gt;
&lt;li&gt;On the right, under Resources note that the Key and Endpoint are shown. These are the details for the training resource (you can also obtain this information by viewing the resource in the Azure portal).&lt;/li&gt;
&lt;li&gt;Return to the browser tab containing the Azure portal (keeping the Custom Vision portal tab open - you’ll return to it later).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa689vrgotqou8w5pai98.png" alt=" " width="800" height="418"&gt;
&lt;/li&gt;
&lt;li&gt;Open VS Code&lt;/li&gt;
&lt;li&gt;Enter the following command to clone the GitHub repo containing the code files for this exercise:

&lt;code&gt;git clone https://github.com/MicrosoftLearning/mslearn-ai-vision&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" alt=" " width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, use the following command to navigate to the application code files:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;cd mslearn-ai-vision/Labfiles/object-detection/python/train-detector&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finiqv837tsaz7lwo7wnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finiqv837tsaz7lwo7wnz.png" alt=" " width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The folder contains application configuration and code files for your app. It also contains a tagged-images.json file which contains bounding box coordinates for objects in multiple images, and an /images subfolder, which contains the images.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flltzxkc4x6516va0t77o.png" alt=" " width="662" height="1512"&gt;
&lt;/li&gt;
&lt;li&gt;Install the Azure AI Custom Vision SDK package for training and any other required packages by running the following commands:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;pip install -r requirements.txt azure-cognitiveservices-vision-customvision&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferylk01etsz83zobvies.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferylk01etsz83zobvies.png" alt=" " width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the .env file in VS Code and update the configuration values it contains to reflect the Endpoint and an authentication Key for your Custom Vision training resource, and the Project ID for the Custom Vision project you created previously.&lt;/li&gt;
&lt;li&gt;After you’ve replaced the placeholders, use the CTRL+S command to save your changes, and then close the file while keeping the terminal open.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq3okp0lk73m0eaht42a.png" alt=" " width="800" height="340"&gt;
&lt;/li&gt;
&lt;li&gt;Open the tagged-images.json file to see the tagging information for the image files in the /images subfolder.
The JSON defines a list of images, each containing one or more tagged regions. Each tagged region includes a tag name, and the top and left coordinates and the width and height dimensions of the bounding box containing the tagged object.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j1fuxc7e3dkql5emmyh.png" alt=" " width="800" height="654"&gt;
&lt;/li&gt;
&lt;li&gt;Open add-tagged-images.py&lt;/li&gt;
&lt;li&gt;Note the following details in the code file:

&lt;ul&gt;
&lt;li&gt;The namespaces for the Azure AI Custom Vision SDK are imported.&lt;/li&gt;
&lt;li&gt;The Main function retrieves the configuration settings, and uses the key and endpoint to create an authenticated CustomVisionTrainingClient, which is then used with the project ID to create a Project reference to your project.&lt;/li&gt;
&lt;li&gt;The Upload_Images function extracts the tagged region information from the JSON file and uses it to create a batch of images with regions, which it then uploads to the project (a simplified sketch of this pattern appears after the run command below).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjs2f4m1k6l4rtb3i544.png" alt=" " width="800" height="497"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Enter the following command to run the program:
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;python3 add-tagged-images.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
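&lt;p&gt;The actual implementation is in add-tagged-images.py; the following is a simplified sketch of the pattern it uses. The endpoint, key, project ID, and the exact JSON field names shown here are illustrative assumptions - check the repo files for the real values and structure.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simplified sketch of uploading pre-tagged images with bounding-box regions.
# Configuration values and JSON field names are illustrative assumptions.
import json
import os
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry, Region)

training_client = CustomVisionTrainingClient(
    "https://YOUR-TRAINING-RESOURCE.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Training-key": "YOUR-TRAINING-KEY"}))
project = training_client.get_project("YOUR-PROJECT-ID")

# Map the tag names already defined in the project (apple, banana, orange) to their IDs
tags = {tag.name: tag.id for tag in training_client.get_tags(project.id)}

# Build an upload batch from the tagging file (assumed structure shown here)
with open("tagged-images.json") as json_file:
    tagged_images = json.load(json_file)["files"]

images = []
for entry in tagged_images:
    # Region coordinates are proportional (0-1) left/top/width/height values
    regions = [Region(tag_id=tags[region["tag"]],
                      left=region["left"], top=region["top"],
                      width=region["width"], height=region["height"])
               for region in entry["tags"]]
    with open(os.path.join("images", entry["filename"]), "rb") as image_data:
        images.append(ImageFileCreateEntry(name=entry["filename"],
                                           contents=image_data.read(),
                                           regions=regions))

# Upload the whole batch of tagged images to the project in one call
result = training_client.create_images_from_files(
    project.id, ImageFileCreateBatch(images=images))
print("Batch upload successful:", result.is_batch_successful)
&lt;/code&gt;&lt;/pre&gt;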

&lt;ul&gt;
&lt;li&gt;Wait for the program to end.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foeu1n96hpva61egmqv24.png" alt=" " width="800" height="152"&gt;
&lt;/li&gt;
&lt;li&gt;Switch back to the browser tab containing the Custom Vision portal (keeping the Azure portal tab open), and view the Training Images page for your project (refreshing the browser if necessary).&lt;/li&gt;
&lt;li&gt;Verify that some new tagged images have been added to the project.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbepxnx75sbz1mvjb5ey.png" alt=" " width="800" height="455"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Train and test a model
&lt;/h1&gt;

&lt;p&gt;Now that you’ve tagged the images in your project, you’re ready to train a model.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Custom Vision project, click Train (⚙⚙) to train an object detection model using the tagged images. Select the Quick Training option.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6aj1dxux9z1r4w7xz6qn.png" alt=" " width="800" height="314"&gt;
&lt;/li&gt;
&lt;li&gt;Wait for training to complete (it might take ten minutes or so).&lt;/li&gt;
&lt;li&gt;In the Custom Vision portal, when training has finished, review the Precision, Recall, and mAP performance metrics - these measure the prediction accuracy of the object detection model, and should all be high (a brief worked example of precision and recall follows this list).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyii40vi1baw8i1tfure5.png" alt=" " width="800" height="449"&gt;
&lt;/li&gt;
&lt;li&gt;At the top right of the page, click Quick Test, and then in the Image URL box, type &lt;a href="https://aka.ms/test-fruit" rel="noopener noreferrer"&gt;https://aka.ms/test-fruit&lt;/a&gt; and click the quick test image (➔) button.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0rqf1e98giul7on8wod.png" alt=" " width="800" height="298"&gt;
&lt;/li&gt;
&lt;li&gt;View the prediction that is generated.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y4qlqf30f0waqkmipdy.png" alt=" " width="800" height="507"&gt;
&lt;/li&gt;
&lt;li&gt;Close the Quick Test window.&lt;/li&gt;
&lt;/ul&gt;
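&lt;p&gt;If the metric names are unfamiliar, here is a tiny illustrative calculation (the counts are invented for the example, not taken from this project):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative only: precision and recall from hypothetical detection counts
true_positives = 18    # predicted regions that match a tagged fruit
false_positives = 2    # predicted regions with no matching tagged fruit
false_negatives = 4    # tagged fruits the model failed to detect

precision = true_positives / (true_positives + false_positives)   # 0.90
recall = true_positives / (true_positives + false_negatives)      # ~0.82
print(f"Precision: {precision:.2f}, Recall: {recall:.2f}")
# mAP (mean Average Precision) averages, across tags, the precision achieved
# over the full range of recall values.
&lt;/code&gt;&lt;/pre&gt;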

&lt;h1&gt;
  
  
  Use the object detector in a client application
&lt;/h1&gt;

&lt;p&gt;Now you’re ready to publish your trained model and use it in a client application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Custom Vision portal, on the Performance page, click 🗸 Publish to publish the trained model with the following settings:

&lt;ul&gt;
&lt;li&gt;Model name: fruit-detector&lt;/li&gt;
&lt;li&gt;Prediction Resource: The prediction resource you created previously which ends with “-Prediction” (not the training resource).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybj62ed82a5dugr7whdy.png" alt=" " width="800" height="406"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt; At the top left of the Project Settings page, click the Projects Gallery (👁) icon to return to the Custom Vision portal home page, where your project is now listed.&lt;/li&gt;

&lt;li&gt;On the Custom Vision portal home page, at the top right, click the settings (⚙) icon to view the settings for your Custom Vision service. Then, under Resources, find your prediction resource which ends with “-Prediction” (not the training resource) to determine its Key and Endpoint values (you can also obtain this information by viewing the resource in the Azure portal).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqrek8vy8yqtf8r1jw1h.png" alt=" " width="800" height="400"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv34vfl0qz8zlwtru1x5r.png" alt=" " width="800" height="663"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Use the object detector from a client application
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Return to VS Code&lt;/li&gt;
&lt;li&gt;Run the following command to switch to the folder for your client application:

&lt;code&gt;cd ../test-detector&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmipmxf5571jeur1kz25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmipmxf5571jeur1kz25.png" alt=" " width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The folder contains application configuration and code files for your app. It also contains the following produce.jpg image file, which you’ll use to test your model.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkx4zkod821zdfh0jh6i.png" alt=" " width="800" height="328"&gt;
&lt;/li&gt;
&lt;li&gt;Open .env file&lt;/li&gt;
&lt;li&gt;Update the configuration values to reflect the Endpoint and Key for your Custom Vision prediction resource, the Project ID for the object detection project, and the name of your published model (which should be fruit-detector). Save your changes (CTRL+S)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuo2t54fzev7nrswy11gr.png" alt=" " width="800" height="283"&gt;
&lt;/li&gt;
&lt;li&gt;Open test-detector.py&lt;/li&gt;
&lt;li&gt;Review the code, noting the following details:

&lt;ul&gt;
&lt;li&gt;The namespaces for the Azure AI Custom Vision SDK are imported.&lt;/li&gt;
&lt;li&gt;The Main function retrieves the configuration settings, and uses the key and endpoint to create an authenticated CustomVisionPredictionClient.&lt;/li&gt;
&lt;li&gt;The prediction client object is used to get object detection predictions for the produce.jpg image, specifying the project ID and model name in the request. The predicted tagged regions are then drawn on the image, and the result is saved as output.jpg (a simplified sketch of this flow appears after the run command below).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Enter the following command to run the program:
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;python3 test-detector.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
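&lt;p&gt;test-detector.py contains the real implementation; the sketch below shows roughly how the prediction call and the drawing step fit together. The configuration values are placeholders, and the drawing logic (here using Pillow) is a simplified assumption.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simplified sketch of the prediction and drawing flow in test-detector.py.
# Endpoint, key, and project ID are placeholders; see the .env file for real values.
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from PIL import Image, ImageDraw

prediction_client = CustomVisionPredictionClient(
    "https://YOUR-PREDICTION-RESOURCE.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Prediction-key": "YOUR-PREDICTION-KEY"}))

project_id = "YOUR-PROJECT-ID"
model_name = "fruit-detector"
image_file = "produce.jpg"

# Get object detection predictions for the test image
with open(image_file, "rb") as image_data:
    results = prediction_client.detect_image(project_id, model_name, image_data)

# Draw a box for each reasonably confident prediction
image = Image.open(image_file)
draw = ImageDraw.Draw(image)
img_width, img_height = image.size
for prediction in results.predictions:
    if prediction.probability &amp;gt; 0.5:
        box = prediction.bounding_box          # proportional left/top/width/height
        left, top = box.left * img_width, box.top * img_height
        right, bottom = left + box.width * img_width, top + box.height * img_height
        draw.rectangle([left, top, right, bottom], outline="magenta", width=3)
        print(f"{prediction.tag_name} ({prediction.probability:.0%})")

image.save("output.jpg")   # annotated copy of the test image
&lt;/code&gt;&lt;/pre&gt;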

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevgh3u3ovjxs2tccjprs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevgh3u3ovjxs2tccjprs.png" alt=" " width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review the program output, which lists each object detected in the image.&lt;/li&gt;
&lt;li&gt;Note that an image file named output.jpg is generated.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi85jig0yqulyzeve6q92.png" alt=" " width="800" height="426"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ve just transformed raw images into an intelligent fruit-detection system—proving how accessible computer vision has become with Azure Custom Vision. Whether you're building a smart grocery scanner, industrial quality checker, or just exploring AI, the pattern remains the same: tag, train, and deploy.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Project guide link:&lt;/em&gt; &lt;a href="https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/05-custom-vision-object-detection.html" rel="noopener noreferrer"&gt;https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/05-custom-vision-object-detection.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>machinelearning</category>
      <category>azure</category>
    </item>
    <item>
      <title>No ML Expertise Needed: Build a Computer Vision Model in Azure (Fruit Classification Example)</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Wed, 30 Jul 2025 17:13:32 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/no-ml-expertise-needed-build-a-computer-vision-model-in-azure-fruit-classification-example-29jj</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/no-ml-expertise-needed-build-a-computer-vision-model-in-azure-fruit-classification-example-29jj</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The Azure AI Custom Vision service enables you to create computer vision models that are trained on your own images. You can use it to train image classification and object detection models, which you can then publish and consume from applications.&lt;/p&gt;

&lt;p&gt;In this exercise, you will use the Custom Vision service to train an image classification model that can identify three classes of fruit (apple, banana, and orange).&lt;/p&gt;

&lt;h1&gt;
  
  
  Create Custom Vision resources
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open the Azure portal at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;, and sign in using your Azure credentials. Close any welcome messages or tips that are displayed.&lt;/li&gt;
&lt;li&gt;Select Create a resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmywpqayqudyut3ppce92.png" alt=" " width="800" height="145"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the search bar, search for Custom Vision, select Custom Vision, and create the resource with the following settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create options: Both&lt;/li&gt;
&lt;li&gt;Subscription: Your Azure subscription&lt;/li&gt;
&lt;li&gt;Resource group: Create or select a resource group&lt;/li&gt;
&lt;li&gt;Region: Choose any available region&lt;/li&gt;
&lt;li&gt;Name: A valid name for your Custom Vision resource&lt;/li&gt;
&lt;li&gt;Training pricing tier: F0&lt;/li&gt;
&lt;li&gt;Prediction pricing tier: F0
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z9h9lhbjnoeghrvgf8w.png" alt=" " width="800" height="613"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqrn9dpscgccgynpcb4y.png" alt=" " width="800" height="704"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Create the resource and wait for deployment to complete, and then view the deployment details. Note that two Custom Vision resources are provisioned: one for training, and another for prediction.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngchhwnf4k85wbcfcqsx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngchhwnf4k85wbcfcqsx.png" alt=" " width="800" height="921"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yefsctcr99j1t6s1j1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yefsctcr99j1t6s1j1h.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: Each resource has its own endpoint and keys, which are used to manage access from your code. To train an image classification model, your code must use the training resource (with its endpoint and key); and to use the trained model to predict image classes, your code must use the prediction resource (with its endpoint and key).&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;When the resources have been deployed, go to the resource group to view them. You should see two Custom Vision resources, one with the suffix -Prediction.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdql576iqb6a1p37dv8zi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdql576iqb6a1p37dv8zi.png" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Create a Custom Vision project in the Custom Vision portal
&lt;/h1&gt;

&lt;p&gt;To train an image classification model, you need to create a Custom Vision project based on your training resource. To do this, you’ll use the Custom Vision portal.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open a new browser tab (keeping the Azure portal tab open - you’ll return to it later).&lt;/li&gt;
&lt;li&gt;In the new browser tab, open the Custom Vision portal at &lt;a href="https://customvision.ai" rel="noopener noreferrer"&gt;https://customvision.ai&lt;/a&gt;. If prompted, sign in using your Azure credentials and agree to the terms of service.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qodkccxj4ibp1sir6fa.png" alt=" " width="800" height="460"&gt;
&lt;/li&gt;
&lt;li&gt;In the Custom Vision portal, create a new project with the following settings:

&lt;ul&gt;
&lt;li&gt;Name: Classify Fruit&lt;/li&gt;
&lt;li&gt;Description: Image classification for fruit&lt;/li&gt;
&lt;li&gt;Resource: Your Custom Vision resource&lt;/li&gt;
&lt;li&gt;Project Types: Classification&lt;/li&gt;
&lt;li&gt;Classification Types: Multiclass (single tag per image)&lt;/li&gt;
&lt;li&gt;Domains: Food
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8y8e0jo1b86035zesqlf.png" alt=" " width="800" height="593"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wh799edielji3st0n5a.png" alt=" " width="800" height="1086"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Upload and tag images
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;In a new browser tab, download the training images from

&lt;code&gt;https://github.com/MicrosoftLearning/mslearn-ai-vision/raw/main/Labfiles/image-classification/training-images.zip&lt;/code&gt;

and extract the zip folder to view its contents. This folder contains subfolders of apple, banana, and orange images.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvolwstgymno5fvf7963.png" alt=" " width="570" height="1286"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the Custom Vision portal, in your image classification project, click Add images, and select all of the files in the training-images/apple folder you downloaded and extracted previously. Then upload the image files, specifying the tag apple.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45gep65g6rnsjxjhwwcb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45gep65g6rnsjxjhwwcb.png" alt=" " width="800" height="896"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26c5ak60aus6ojy1895m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26c5ak60aus6ojy1895m.png" alt=" " width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the Add Images ([+]) toolbar icon to repeat the previous step to upload the images in the banana folder with the tag banana, and the images in the orange folder with the tag orange.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih4b2g503e6m9soylfl2.png" alt=" " width="800" height="531"&gt;
&lt;/li&gt;
&lt;li&gt;Explore the images you have uploaded in the Custom Vision project - there should be 15 images of each class.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fniw20qrxpclbzsdlutd3.png" alt=" " width="800" height="457"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Train a model
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;In the Custom Vision project, above the images, click Train (⚙⚙) to train a classification model using the tagged images. Select the Quick Training option, and then wait for the training iteration to complete (this may take a minute or so).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfnqgdiyhliq5xeotgmk.png" alt=" " width="800" height="206"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqwpa9nuei6qvxx31qho.png" alt=" " width="800" height="342"&gt;
&lt;/li&gt;
&lt;li&gt;When the model iteration has been trained, review the Precision, Recall, and AP performance metrics - these measure the prediction accuracy of the classification model, and should all be high.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78myjzufrk9e70vedfih.png" alt=" " width="800" height="450"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Test the model
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Above the performance metrics, click Quick Test.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsigwemys3u3zoznlyuf.png" alt=" " width="800" height="450"&gt;
&lt;/li&gt;
&lt;li&gt;In the Image URL box, type &lt;a href="https://aka.ms/test-apple" rel="noopener noreferrer"&gt;https://aka.ms/test-apple&lt;/a&gt; and click the quick test image (➔) button.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbws9xm2g958iskxdccy5.png" alt=" " width="800" height="499"&gt;
&lt;/li&gt;
&lt;li&gt;View the predictions returned by your model - the probability score for apple should be the highest
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo22giklr81gg90lhlq5p.png" alt=" " width="800" height="500"&gt;
&lt;/li&gt;
&lt;li&gt;Try testing the following images:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aka.ms/test-banana" rel="noopener noreferrer"&gt;https://aka.ms/test-banana&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aka.ms/test-orange" rel="noopener noreferrer"&gt;https://aka.ms/test-orange&lt;/a&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyth8va1kfv25vt59fdr.png" alt=" " width="800" height="499"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1s0e20opuuiuezlideg.png" alt=" " width="800" height="498"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Close the Quick Test window.&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  View the project settings
&lt;/h1&gt;

&lt;p&gt;The project you have created has been assigned a unique identifier, which you will need to specify in any code that interacts with it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the settings (⚙) icon at the top right of the Performance page to view the project settings.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjiq5r1w5zg6j3b6eqr39.png" alt=" " width="800" height="131"&gt;
&lt;/li&gt;
&lt;li&gt;Under General (on the left), note the Project Id that uniquely identifies this project.&lt;/li&gt;
&lt;li&gt;On the right, under Resources note that the key and endpoint are shown. These are the details for the training resource (you can also obtain this information by viewing the resource in the Azure portal).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4cxcfrhbvwg4xgrw6c6.png" alt=" " width="800" height="424"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Use the training API
&lt;/h1&gt;

&lt;p&gt;The Custom Vision portal provides a convenient user interface that you can use to upload and tag images, and train models. However, in some scenarios you may want to automate model training by using the Custom Vision training API.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open VS Code&lt;/li&gt;
&lt;li&gt;Enter the following command to clone the GitHub repo containing the code files for this exercise:

&lt;code&gt;git clone https://github.com/MicrosoftLearning/mslearn-ai-vision&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" alt=" " width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, use the following command to navigate to the application code files:

&lt;code&gt;cd mslearn-ai-vision/Labfiles/image-classification/python/train-classifier&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvuh7ypcbgolpgfmg1lr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvuh7ypcbgolpgfmg1lr.png" alt=" " width="800" height="68"&gt;&lt;/a&gt;&lt;br&gt;
The folder contains application configuration and code files for your app. It also contains a /more-training-images subfolder, which contains some image files you’ll use to perform additional training of your model.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the Azure AI Custom Vision SDK package for training and any other required packages by running the following commands:

&lt;code&gt;pip install -r requirements.txt azure-cognitiveservices-vision-customvision&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0zl5to2gpwwkzj54m2p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0zl5to2gpwwkzj54m2p.png" alt=" " width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the .env file in VS Code and update the configuration values it contains to reflect the Endpoint and an authentication Key for your Custom Vision training resource, and the Project ID for the Custom Vision project you created previously.&lt;/li&gt;
&lt;li&gt;After you’ve replaced the placeholders, use the CTRL+S command to save your changes, and then close the file while keeping the terminal open.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46gd3mdu7v1krt10hs7i.png" alt=" " width="800" height="308"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Write code to perform model training
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open train-classifier.py&lt;/li&gt;
&lt;li&gt;Note the following details in the code file:

&lt;ul&gt;
&lt;li&gt;The namespaces for the Azure AI Custom Vision SDK are imported.&lt;/li&gt;
&lt;li&gt;The Main function retrieves the configuration settings, and uses the key and endpoint to create an authenticated CustomVisionTrainingClient, which is then used with the project ID to create a Project reference to your project.&lt;/li&gt;
&lt;li&gt;The Upload_Images function retrieves the tags that are defined in the Custom Vision project and then uploads image files from correspondingly named folders to the project, assigning the appropriate tag ID.&lt;/li&gt;
&lt;li&gt;The Train_Model function creates a new training iteration for the project and waits for training to complete (a simplified sketch of these steps appears after the run command below).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Enter the following command to run the program:

&lt;p&gt;&lt;code&gt;python3 train-classifier.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/li&gt;

&lt;/ul&gt;
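&lt;p&gt;train-classifier.py in the repo is the real implementation; the following is a simplified sketch of the upload-and-train pattern just described. The configuration values and folder layout are placeholder assumptions.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simplified sketch of train-classifier.py's flow: upload images per tag, then train.
# Endpoint, key, and project ID are placeholders; see the .env file for real values.
import os
import time
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry)

training_client = CustomVisionTrainingClient(
    "https://YOUR-TRAINING-RESOURCE.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Training-key": "YOUR-TRAINING-KEY"}))
project = training_client.get_project("YOUR-PROJECT-ID")

# Upload_Images pattern: each tag in the project has a matching folder of images
for tag in training_client.get_tags(project.id):
    folder = os.path.join("more-training-images", tag.name)
    images = []
    for file_name in os.listdir(folder):
        with open(os.path.join(folder, file_name), "rb") as image_data:
            images.append(ImageFileCreateEntry(
                name=file_name, contents=image_data.read(), tag_ids=[tag.id]))
    training_client.create_images_from_files(
        project.id, ImageFileCreateBatch(images=images))

# Train_Model pattern: start a training iteration and poll until it completes
iteration = training_client.train_project(project.id)
while iteration.status != "Completed":
    time.sleep(5)
    iteration = training_client.get_iteration(project.id, iteration.id)
print("Training completed:", iteration.name)
&lt;/code&gt;&lt;/pre&gt;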

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvh2rkod5xkm3jar9yme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvh2rkod5xkm3jar9yme.png" alt=" " width="800" height="879"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait for the program to end. Then return to the browser tab containing the Custom Vision portal, and view the Training Images page for your project (refreshing the browser if necessary).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6b3qistgrng73xrll8an.png" alt=" " width="800" height="449"&gt;
&lt;/li&gt;
&lt;li&gt;Verify that some new tagged images have been added to the project. Then view the Performance page and verify that a new iteration has been created.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg4ous3sn4l85jvz5vkf.png" alt=" " width="800" height="449"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedlpbzxyp0iratz06xme.png" alt=" " width="800" height="467"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Use the image classifier in a client application
&lt;/h1&gt;

&lt;p&gt;Now you’re ready to publish your trained model and use it in a client application.&lt;/p&gt;

&lt;h1&gt;
  
  
  Publish the image classification model
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;In the Custom Vision portal, on the Performance page, click 🗸 Publish to publish the trained model with the following settings:

&lt;ul&gt;
&lt;li&gt;Model name: fruit-classifier&lt;/li&gt;
&lt;li&gt;Prediction Resource: The prediction resource you created previously which ends with “-Prediction” (not the training resource).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo2dj9iqz1xpuus71p2a.png" alt=" " width="800" height="445"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekn11h4rmh5qcm98fof7.png" alt=" " width="800" height="577"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;At the top left of the Project Settings page, click the Projects Gallery (👁) icon to return to the Custom Vision portal home page, where your project is now listed.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8a5ykrg15py7ae9p68k.png" alt=" " width="800" height="165"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hk0pj2da8zunipjvdod.png" alt=" " width="800" height="573"&gt;
&lt;/li&gt;

&lt;li&gt;On the Custom Vision portal home page, at the top right, click the settings (⚙) icon to view the settings for your Custom Vision service. Then, under Resources, find your prediction resource which ends with “-Prediction” (not the training resource) to determine its Key and Endpoint values (you can also obtain this information by viewing the resource in the Azure portal).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5chsnek7sz47h9otr0q.png" alt=" " width="800" height="455"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Use the image classifier from a client application
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Return to VS Code and run the following command to switch to the folder for your client application:

&lt;code&gt;cd ../test-classifier&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbsl6a3n6ck6100iq2mk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbsl6a3n6ck6100iq2mk.png" alt=" " width="800" height="54"&gt;&lt;/a&gt;&lt;br&gt;
The folder contains application configuration and code files for your app. It also contains a /test-images subfolder, which contains some image files you’ll use to test your model.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the Azure AI Custom Vision SDK package for prediction and any other required packages by running the following commands:

&lt;code&gt;pip install -r requirements.txt azure-cognitiveservices-vision-customvision&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvjrvyrfnpiuncdwhxdy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvjrvyrfnpiuncdwhxdy.png" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open .env file in VS Code and update the configuration values to reflect the Endpoint and Key for your Custom Vision prediction resource, the Project ID for the classification project, and the name of your published model (which should be fruit-classifier). Save your changes (CTRL+S) and close the code editor (CTRL+Q).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45vqu2oyck8tig092569.png" alt=" " width="800" height="315"&gt;
&lt;/li&gt;
&lt;li&gt;Open test-classifier.py in VS Code &lt;/li&gt;
&lt;li&gt;Review the code, noting the following details:

&lt;ul&gt;
&lt;li&gt;The namespaces for the Azure AI Custom Vision SDK are imported.&lt;/li&gt;
&lt;li&gt;The Main function retrieves the configuration settings, and uses the key and endpoint to create an authenticated CustomVisionPredictionClient.&lt;/li&gt;
&lt;li&gt;The prediction client object is used to predict a class for each image in the test-images folder, specifying the project ID and model name for each request. Each prediction includes a probability for each possible class, and only predicted tags with a probability greater than 50% are displayed (a simplified sketch of this flow appears after the run command below).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Close the code editor and enter the following command to run the program:

&lt;p&gt;&lt;code&gt;python3 test-classifier.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/li&gt;

&lt;/ul&gt;
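&lt;p&gt;test-classifier.py is the real implementation; this sketch shows roughly how each test image is classified and how predictions are filtered to tags above 50% probability. Configuration values are placeholders.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simplified sketch of test-classifier.py's flow: classify each test image,
# then show only predicted tags with a probability above 50%.
# Endpoint, key, and project ID are placeholders; see the .env file for real values.
import os
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

prediction_client = CustomVisionPredictionClient(
    "https://YOUR-PREDICTION-RESOURCE.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Prediction-key": "YOUR-PREDICTION-KEY"}))

project_id = "YOUR-PROJECT-ID"
model_name = "fruit-classifier"

for image_file in os.listdir("test-images"):
    with open(os.path.join("test-images", image_file), "rb") as image_data:
        results = prediction_client.classify_image(project_id, model_name, image_data)
    # Each prediction carries a probability for one possible class
    for prediction in results.predictions:
        if prediction.probability &amp;gt; 0.5:
            print(f"{image_file}: {prediction.tag_name} ({prediction.probability:.0%})")
&lt;/code&gt;&lt;/pre&gt;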

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomnm9oidmup3qv9zrtte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomnm9oidmup3qv9zrtte.png" alt=" " width="800" height="136"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflnlk2qlfz0wkp7chsp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflnlk2qlfz0wkp7chsp3.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxci88bt5ylfnaimbgjq2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxci88bt5ylfnaimbgjq2.png" alt=" " width="800" height="331"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixpcov6yw56wjc15m1qr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixpcov6yw56wjc15m1qr.png" alt=" " width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ve just trained an AI to see and classify fruit—transforming raw images into intelligent predictions with Azure Custom Vision. This is the magic of computer vision: turning pixels into actionable insights with just a few lines of code and a handful of tagged images.&lt;br&gt;
But why stop at fruit? Imagine applying these same techniques to:&lt;br&gt;
• Quality control in manufacturing (defect detection)&lt;br&gt;
• Retail inventory management (product categorization)&lt;br&gt;
• Medical imaging (preliminary diagnostics)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Project guide link:&lt;/em&gt; &lt;a href="https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/04-image-classification.html" rel="noopener noreferrer"&gt;https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/04-image-classification.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>azure</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Unlocking Facial Recognition with Azure AI: A Step-by-Step Developer Guide</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Tue, 29 Jul 2025 22:40:47 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/unlocking-facial-recognition-with-azure-ai-a-step-by-step-developer-guide-1abc</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/unlocking-facial-recognition-with-azure-ai-a-step-by-step-developer-guide-1abc</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The ability to detect and analyze human faces is a core AI capability. In this exercise, you’ll explore the Face service to work with faces.&lt;/p&gt;

&lt;h1&gt;
  
  
  Provision an Azure AI Face API resource
&lt;/h1&gt;

&lt;p&gt;Open the Azure portal at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;, and sign in using your Azure credentials. Close any welcome messages or tips that are displayed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select Create a resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmywpqayqudyut3ppce92.png" alt=" " width="800" height="145"&gt;
&lt;/li&gt;
&lt;li&gt;In the search bar, search for Face, select Face, and create the resource with the following settings:

&lt;ul&gt;
&lt;li&gt;Subscription: Your Azure subscription&lt;/li&gt;
&lt;li&gt;Resource group: Create or select a resource group&lt;/li&gt;
&lt;li&gt;Region: Choose any available region&lt;/li&gt;
&lt;li&gt;Name: A valid name for your Face resource&lt;/li&gt;
&lt;li&gt;Pricing tier: Free F0&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdauoccs6vdz53bmh9qb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdauoccs6vdz53bmh9qb.png" alt=" " width="800" height="612"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiic4kg5jpuvn582p8kvr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiic4kg5jpuvn582p8kvr.png" alt=" " width="800" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the resource and wait for deployment to complete, and then view the deployment details.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50bb3zrrc4koxav9tpxj.png" alt=" " width="800" height="886"&gt;
&lt;/li&gt;
&lt;li&gt;When the resource has been deployed, go to it and under the Resource management node in the navigation pane, view its Keys and Endpoint page. You will need the endpoint and one of the keys from this page in the next procedure.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc5bzpzwb3243k1sc00d.png" alt=" " width="800" height="398"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyn58m7xkmk4kms0jo44.png" alt=" " width="800" height="468"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Develop a facial analysis app with the Face SDK
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open VS Code&lt;/li&gt;
&lt;li&gt;Enter the following command to clone the GitHub repo containing the code files for this exercise:

&lt;code&gt;git clone https://github.com/MicrosoftLearning/mslearn-ai-vision&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" alt=" " width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, use the following command to navigate to the application code files:

&lt;code&gt;cd mslearn-ai-vision/Labfiles/face/python/face-api&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs53wtzbsyuh0nqhh1gio.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs53wtzbsyuh0nqhh1gio.png" alt=" " width="800" height="92"&gt;&lt;/a&gt;&lt;br&gt;
The folder contains application configuration and code files for your app. It also contains an /images subfolder, which contains some image files for your app to analyze.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the Azure AI Vision Face SDK package and other required packages by running the following command:

&lt;code&gt;pip install -r requirements.txt azure-ai-vision-face==1.0.0b2&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1dqvjvpwlky2vym0rkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1dqvjvpwlky2vym0rkl.png" alt=" " width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the .env file in VS Code and update the configuration values it contains to reflect the endpoint and an authentication key for your Face resource (copied from its Keys and Endpoint page in the Azure portal).&lt;/li&gt;
&lt;li&gt;After you’ve replaced the placeholders, use CTRL+S to save your changes and then CTRL+Q to close the code editor while keeping the command line open.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfuzidrfg4pj9c1cefm0.png" alt=" " width="800" height="256"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to create a Face API client
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open analyze-faces.py in VS Code.&lt;/li&gt;
&lt;li&gt;In the code file, find the comment Import namespaces, and add the following code to import the namespaces you will need to use the Azure AI Vision SDK:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Import namespaces
from azure.ai.vision.face import FaceClient
from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel, FaceAttributeTypeDetection01
from azure.core.credentials import AzureKeyCredential
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the Main function, note that the code to load the configuration settings and determine the image to be analyzed has been provided. Then find the comment Authenticate Face client and add the following code to create and authenticate a FaceClient object:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Authenticate Face client
face_client = FaceClient(
     endpoint=cog_endpoint,
     credential=AzureKeyCredential(cog_key))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Add code to detect and analyze faces
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;In the code file for your application, in the Main function, find the comment Specify facial features to be retrieved and add the following code:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Specify facial features to be retrieved
features = [FaceAttributeTypeDetection01.HEAD_POSE,
             FaceAttributeTypeDetection01.OCCLUSION,
             FaceAttributeTypeDetection01.ACCESSORIES]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkb67asifjufduj1z8b9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkb67asifjufduj1z8b9.png" alt=" " width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Main function, under the code you just added, find the comment Get faces and add the following code to print the facial feature information and call a function that annotates the image with the bounding box for each detected face, based on the face_rectangle property of each face (a minimal sketch of such a helper appears after the screenshot below):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get faces
with open(image_file, mode="rb") as image_data:
     detected_faces = face_client.detect(
         image_content=image_data.read(),
         detection_model=FaceDetectionModel.DETECTION01,
         recognition_model=FaceRecognitionModel.RECOGNITION01,
         return_face_id=False,
         return_face_attributes=features,
     )

face_count = 0
if len(detected_faces) &amp;gt; 0:
     print(len(detected_faces), 'faces detected.')
     for face in detected_faces:

         # Get face properties
         face_count += 1
         print('\nFace number {}'.format(face_count))
         print(' - Head Pose (Yaw): {}'.format(face.face_attributes.head_pose.yaw))
         print(' - Head Pose (Pitch): {}'.format(face.face_attributes.head_pose.pitch))
         print(' - Head Pose (Roll): {}'.format(face.face_attributes.head_pose.roll))
         print(' - Forehead occluded?: {}'.format(face.face_attributes.occlusion["foreheadOccluded"]))
         print(' - Eye occluded?: {}'.format(face.face_attributes.occlusion["eyeOccluded"]))
         print(' - Mouth occluded?: {}'.format(face.face_attributes.occlusion["mouthOccluded"]))
         print(' - Accessories:')
         for accessory in face.face_attributes.accessories:
             print('   - {}'.format(accessory.type))
         # Annotate faces in the image
         annotate_faces(image_file, detected_faces)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp7oo2jub208ncqebp6n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp7oo2jub208ncqebp6n.png" alt=" " width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;
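
&lt;p&gt;The annotate_faces function called above is defined elsewhere in the code file; purely as an illustration, a bounding-box helper of this kind could be sketched like this (assuming Pillow is installed and that each detected face exposes a face_rectangle with left, top, width, and height):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch only; the lab's code file provides its own annotate_faces implementation
from PIL import Image, ImageDraw

def annotate_faces_sketch(image_file, detected_faces, output_file='detected_faces.jpg'):
    # Draw a bounding box around each detected face and save the annotated copy
    image = Image.open(image_file)
    draw = ImageDraw.Draw(image)
    for face in detected_faces:
        r = face.face_rectangle
        draw.rectangle([(r.left, r.top), (r.left + r.width, r.top + r.height)],
                       outline='lightgreen', width=3)
    image.save(output_file)
    print(f'Annotated image saved as {output_file}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;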

&lt;ul&gt;
&lt;li&gt;Examine the code you added to the Main function. It analyzes an image file and detects any faces it contains, including attributes for head pose, occlusion, and the presence of accessories such as glasses. Additionally, a function is called to annotate the original image with a bounding box for each detected face.&lt;/li&gt;
&lt;li&gt;Save your changes (CTRL+S) but keep the code editor open in case you need to fix any typos.&lt;/li&gt;
&lt;li&gt;Resize the panes so you can see more of the console, then enter the following command to run the program with the argument images/face1.jpg:

&lt;code&gt;python3 analyze-faces.py images/face1.jpg&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app runs and analyzes the following image:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryqpumn776k9t88gba35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryqpumn776k9t88gba35.png" alt=" " width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Observe the output, which should include the attributes of each face detected.&lt;/li&gt;
&lt;li&gt;Note that an image file named detected_faces.jpg is also generated. Open detected_faces.jpg:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp69g82o8j8gr2u75keqm.png" alt=" " width="800" height="498"&gt;
&lt;/li&gt;
&lt;li&gt;Run the program again, this time specifying the parameter images/face2.jpg to detect the faces in the following image:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff14qbjlf7dxxsy4h2foq.png" alt=" " width="800" height="475"&gt;
&lt;/li&gt;
&lt;li&gt;Run the program one more time, this time specifying the parameter images/faces.jpg to detect the faces in this image:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo8y7cmvxhrdvl2plaa3r.png" alt=" " width="800" height="622"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ve just harnessed the power of Azure’s Face API to detect and analyze human faces with remarkable precision—from identifying accessories to measuring head pose. This technology opens doors to transformative applications: smarter security systems, personalized retail experiences, and accessible UI design that adapts to users’ expressions and focus.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Project guide link:&lt;/em&gt; &lt;a href="https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/03-face-service.html" rel="noopener noreferrer"&gt;https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/03-face-service.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>ai</category>
      <category>python</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Extract Text Like Magic: Build an OCR App with Azure AI Vision in Python</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Tue, 29 Jul 2025 21:13:52 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/extract-text-like-magic-build-an-ocr-app-with-azure-ai-vision-in-python-59b4</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/extract-text-like-magic-build-an-ocr-app-with-azure-ai-vision-in-python-59b4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Optical character recognition (OCR) is a subset of computer vision that deals with reading text in images and documents. The Azure AI Vision Image Analysis service provides an API for reading text, which you’ll explore in this exercise.&lt;/p&gt;

&lt;h1&gt;
  
  
  Provision an Azure AI Vision resource
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open the Azure portal at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;, and sign in using your Azure credentials. Close any welcome messages or tips that are displayed.&lt;/li&gt;
&lt;li&gt;Select Create a resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmywpqayqudyut3ppce92.png" alt=" " width="800" height="145"&gt;
&lt;/li&gt;
&lt;li&gt;In the search bar, search for Computer Vision, select Computer Vision, and create the resource with the following settings:

&lt;ul&gt;
&lt;li&gt;Subscription: Your Azure subscription&lt;/li&gt;
&lt;li&gt;Resource group: Create or select a resource group&lt;/li&gt;
&lt;li&gt;Region: Choose from East US, West US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, or East Asia*&lt;/li&gt;
&lt;li&gt;Name: A valid name for your Computer Vision resource&lt;/li&gt;
&lt;li&gt;Pricing tier: Free F0
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxunvtrpp8o4fct7252cf.png" alt=" " width="800" height="556"&gt;
*Azure AI Vision 4.0 full feature sets are currently only available in these regions.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Select the required checkboxes and create the resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qh77mpcxdqq56zkktnr.png" alt=" " width="800" height="738"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falxynq55e41n9loi7g00.png" alt=" " width="800" height="886"&gt;
&lt;/li&gt;

&lt;li&gt;Wait for deployment to complete, and then view the deployment details.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1vdf6hi7o1pviic1984.png" alt=" " width="800" height="393"&gt;
&lt;/li&gt;

&lt;li&gt;When the resource has been deployed, go to it and under the Resource management node in the navigation pane, view its Keys and Endpoint page. You will need the endpoint and one of the keys from this page in the next procedure.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foty3kfmcg4l6er2dcvxs.png" alt=" " width="800" height="427"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Develop a text extraction app with the Azure AI Vision SDK
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open VS Code&lt;/li&gt;
&lt;li&gt;Enter the following command to clone the GitHub repo containing the code files for this exercise:

&lt;code&gt;git clone https://github.com/MicrosoftLearning/mslearn-ai-vision&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" alt=" " width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, use the following command to navigate to and view the folder containing the application code files:

&lt;code&gt;cd mslearn-ai-vision/Labfiles/ocr/python/read-text&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4exq8w6c5agxp3fnxid9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4exq8w6c5agxp3fnxid9.png" alt=" " width="800" height="42"&gt;&lt;/a&gt;&lt;br&gt;
The folder contains application configuration and code files for your app. It also contains an /images subfolder, which contains some image files for your app to analyze.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the Azure AI Vision SDK package and other required packages by running the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;pip install -r requirements.txt azure-ai-vision-imageanalysis==1.0.0&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1dqvjvpwlky2vym0rkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1dqvjvpwlky2vym0rkl.png" alt=" " width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the .env file in VS Code and update the configuration values it contains to reflect the endpoint and an authentication key for your Computer Vision resource (copied from its Keys and Endpoint page in the Azure portal); a minimal sketch of how such settings are typically loaded appears after this list.&lt;/li&gt;
&lt;li&gt;After you’ve replaced the placeholders, use CTRL+S to save your changes and then CTRL+Q to close the code editor while keeping the command line open.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfuzidrfg4pj9c1cefm0.png" alt=" " width="800" height="256"&gt;
&lt;/li&gt;
&lt;/ul&gt;
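
&lt;p&gt;The provided code loads these settings for you; as a rough illustration, values like these are typically read from the .env file with python-dotenv (the variable names below are placeholders rather than the exact names used in the lab files):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch of loading settings from a .env file (variable names are placeholders)
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from the .env file into environment variables
ai_endpoint = os.getenv('AI_SERVICE_ENDPOINT')
ai_key = os.getenv('AI_SERVICE_KEY')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;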

&lt;h1&gt;
  
  
  Add code to read text from an image
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open read-text.py in VS Code.&lt;/li&gt;
&lt;li&gt;In the code file, find the comment Import namespaces, and add the following code to import the namespaces you will need to use the Azure AI Vision SDK:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# import namespaces
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02e2b92xzhzyibuyt7rp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02e2b92xzhzyibuyt7rp.png" alt=" " width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Main function, note that the code to load the configuration settings and determine the file to be analyzed has been provided. Then find the comment Authenticate Azure AI Vision client and add the following code to create and authenticate an Azure AI Vision Image Analysis client object:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Authenticate Azure AI Vision client
cv_client = ImageAnalysisClient(
     endpoint=ai_endpoint,
     credential=AzureKeyCredential(ai_key))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the Main function, under the code you just added, find the comment Read text in image and add the following code to use the Image Analysis client to read the text in the image:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Read text in image
with open(image_file, "rb") as f:
     image_data = f.read()
print (f"\nReading text in {image_file}")

result = cv_client.analyze(
     image_data=image_data,
     visual_features=[VisualFeatures.READ])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Find the comment Print the text and add the following code (including the final comment) to print the lines of text that were found and call a function to annotate them in the image, using the bounding_polygon returned for each line of text (a minimal sketch of such a helper appears after the screenshot below):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the text
if result.read is not None:
     print("\nText:")

     for line in result.read.blocks[0].lines:
         print(f" {line.text}")        
     # Annotate the text in the image
     annotate_lines(image_file, result.read)

     # Find individual words in each line

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoaccema6kpu5v37rp73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoaccema6kpu5v37rp73.png" alt=" " width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;
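
&lt;p&gt;The annotate_lines function called above is defined elsewhere in the code file; purely as an illustration, a helper that outlines each line of text using its bounding_polygon might be sketched like this (assuming Pillow is installed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch only; the lab's code file provides its own annotate_lines implementation
from PIL import Image, ImageDraw

def annotate_lines_sketch(image_file, detected_text, output_file='lines.jpg'):
    # Outline each detected line of text using its bounding polygon and save the annotated copy
    image = Image.open(image_file)
    draw = ImageDraw.Draw(image)
    for line in detected_text.blocks[0].lines:
        points = [(point.x, point.y) for point in line.bounding_polygon]
        draw.polygon(points, outline='cyan')
    image.save(output_file)
    print(f'Annotated lines saved as {output_file}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;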

&lt;ul&gt;
&lt;li&gt;Save your changes (CTRL+S) but keep the code editor open in case you need to fix any typos.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resize the panes so you can see more of the console, then enter the following command to run the program:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;python3 read-text.py images/Lincoln.jpg&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The program reads the text in the specified image file (images/Lincoln.jpg), which looks like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrs2jxksfywem1vfcv92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrs2jxksfywem1vfcv92.png" alt=" " width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open lines.jpg&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxbnv8s7oba4fbe81dmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxbnv8s7oba4fbe81dmr.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the program again, this time specifying the parameter images/Business-card.jpg to extract text from the following image:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;python read-text.py images/Business-card.jpg&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qom70s4ymc5l7wb2ech.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qom70s4ymc5l7wb2ech.png" alt=" " width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run the program one more time, this time specifying the parameter images/Note.jpg to extract text from this image:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;python read-text.py images/Note.jpg&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p9yrgk2zivvi4v7himr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p9yrgk2zivvi4v7himr.png" alt=" " width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Add code to return the position of individual words
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Resize the panes so you can see more of the code file. Then find the comment Find individual words in each line and add the following code (being careful to maintain the correct indentation level):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Find individual words in each line
print ("\nIndividual words:")
for line in result.read.blocks[0].lines:
     for word in line.words:
         print(f"  {word.text} (Confidence: {word.confidence:.2f}%)")
# Annotate the words in the image
annotate_words(image_file, result.read)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fho3xhwffszu4v5q891pb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fho3xhwffszu4v5q891pb.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes (CTRL+S). Then, in the command line pane, rerun the program to extract text from images/Lincoln.jpg.&lt;/li&gt;
&lt;li&gt;Observe the output, which should include each individual word in the image and the confidence associated with each prediction.&lt;/li&gt;
&lt;li&gt;In the read-text folder, a words.jpg image has been created. Open words.jpg
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqfqxdv721ggvu0awodr.png" alt=" " width="800" height="624"&gt;
&lt;/li&gt;
&lt;li&gt;Rerun the program for images/Business-card.jpg and images/Note.jpg, viewing the words.jpg file generated for each image.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fteb56kv41qtkivmznubi.png" alt=" " width="800" height="582"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpbxs7khlkq6lyq74j38.png" alt=" " width="800" height="592"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ve just unlocked the power to transform images into actionable text data—whether it’s digitizing documents, processing receipts, or extracting text from photos. With Azure AI Vision, what once required manual effort now takes just a few lines of Python code.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Project guide link:&lt;/em&gt; &lt;a href="https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/02-ocr.html" rel="noopener noreferrer"&gt;https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/02-ocr.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>azure</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Teaching Computers to Understand Images: Hands-On with Azure AI Vision</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Tue, 29 Jul 2025 14:41:46 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/teaching-computers-to-understand-images-hands-on-with-azure-ai-vision-20pf</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/teaching-computers-to-understand-images-hands-on-with-azure-ai-vision-20pf</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Azure AI Vision is an artificial intelligence capability that enables software systems to interpret visual input by analyzing images. In Microsoft Azure, the Vision Azure AI service provides pre-built models for common computer vision tasks, including analysis of images to suggest captions and tags, and detection of common objects and people. You can also use the Azure AI Vision service to remove the background of images or create a foreground matting of images.&lt;/p&gt;

&lt;h1&gt;
  
  
  Provision an Azure AI Vision resource
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open the Azure portal at &lt;a href="https://portal.azure.com" rel="noopener noreferrer"&gt;https://portal.azure.com&lt;/a&gt;, and sign in using your Azure credentials. Close any welcome messages or tips that are displayed.&lt;/li&gt;
&lt;li&gt;Select Create a resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmywpqayqudyut3ppce92.png" alt=" " width="800" height="145"&gt;
&lt;/li&gt;
&lt;li&gt;In the search bar, search for Computer Vision, select Computer Vision, and create the resource with the following settings:

&lt;ul&gt;
&lt;li&gt;Subscription: Your Azure subscription&lt;/li&gt;
&lt;li&gt;Resource group: Create or select a resource group&lt;/li&gt;
&lt;li&gt;Region: Choose from East US, West US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, or East Asia*&lt;/li&gt;
&lt;li&gt;Name: A valid name for your Computer Vision resource&lt;/li&gt;
&lt;li&gt;Pricing tier: Free F0
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxunvtrpp8o4fct7252cf.png" alt=" " width="800" height="556"&gt;
*Azure AI Vision 4.0 full feature sets are currently only available in these regions.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Select the required checkboxes and create the resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qh77mpcxdqq56zkktnr.png" alt=" " width="800" height="738"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falxynq55e41n9loi7g00.png" alt=" " width="800" height="886"&gt;
&lt;/li&gt;

&lt;li&gt;Wait for deployment to complete, and then view the deployment details.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1vdf6hi7o1pviic1984.png" alt=" " width="800" height="393"&gt;
&lt;/li&gt;

&lt;li&gt;When the resource has been deployed, go to it and under the Resource management node in the navigation pane, view its Keys and Endpoint page. You will need the endpoint and one of the keys from this page in the next procedure.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foty3kfmcg4l6er2dcvxs.png" alt=" " width="800" height="427"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Develop an image analysis app with the Azure AI Vision SDK
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open VS Code&lt;/li&gt;
&lt;li&gt;Enter the following command to clone the GitHub repo containing the code files for this exercise:

&lt;code&gt;git clone https://github.com/MicrosoftLearning/mslearn-ai-vision&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzp9dawuwnxmrvg2nnvj0.png" alt=" " width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, use the following command to navigate to and view the folder containing the application code files:

&lt;code&gt;cd mslearn-ai-vision/Labfiles/analyze-images/python/image-analysis&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05u0xjwwdoi60rdilta7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05u0xjwwdoi60rdilta7.png" alt=" " width="800" height="38"&gt;&lt;/a&gt;&lt;br&gt;
The folder contains application configuration and code files for your app. It also contains a /images subfolder, which contains some image files for your app to analyze.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5fiats7z6bt2zovqxuf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5fiats7z6bt2zovqxuf.png" alt=" " width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the Azure AI Vision SDK package and other required packages by running the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt azure-ai-vision-imageanalysis==1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1dqvjvpwlky2vym0rkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1dqvjvpwlky2vym0rkl.png" alt=" " width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the .env file in VS Code and update the configuration values it contains to reflect the endpoint and an authentication key for your Computer Vision resource (copied from its Keys and Endpoint page in the Azure portal).&lt;/li&gt;
&lt;li&gt;After you’ve replaced the placeholders, use CTRL+S to save your changes and then CTRL+Q to close the code editor while keeping the command line open.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfuzidrfg4pj9c1cefm0.png" alt=" " width="800" height="256"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to suggest a caption
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open image-analysis.py in VS Code.&lt;/li&gt;
&lt;li&gt;In the code file, find the comment Import namespaces, and add the following code to import the namespaces you will need to use the Azure AI Vision SDK:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# import namespaces
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ktis7u9bsu0xqng8ox1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ktis7u9bsu0xqng8ox1.png" alt=" " width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Main function, note that the code to load the configuration settings and determine the image file to be analyzed has been provided. Then find the comment Authenticate Azure AI Vision client and add the following code to create and authenticate an Azure AI Vision client object (be sure to maintain the correct indentation levels):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Authenticate Azure AI Vision client
cv_client = ImageAnalysisClient(
     endpoint=ai_endpoint,
     credential=AzureKeyCredential(ai_key))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the Main function, under the code you just added, find the comment Analyze image and add the following code:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Analyze image
with open(image_file, "rb") as f:
     image_data = f.read()
print(f'\nAnalyzing {image_file}\n')

result = cv_client.analyze(
     image_data=image_data,
     visual_features=[
         VisualFeatures.CAPTION,
         VisualFeatures.DENSE_CAPTIONS,
         VisualFeatures.TAGS,
         VisualFeatures.OBJECTS,
         VisualFeatures.PEOPLE],
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F818p1026iwo3c9upc17m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F818p1026iwo3c9upc17m.png" alt=" " width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find the comment Get image captions and add the following code to display image captions and dense captions:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get image captions
if result.caption is not None:
     print("\nCaption:")
     print(" Caption: '{}' (confidence: {:.2f}%)".format(result.caption.text, result.caption.confidence * 100))

if result.dense_captions is not None:
     print("\nDense Captions:")
     for caption in result.dense_captions.list:
         print(" Caption: '{}' (confidence: {:.2f}%)".format(caption.text, caption.confidence * 100))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Save your changes (CTRL+S) and resize the panes so you can clearly see the command line console while keeping the code editor open. Then enter the following command to run the program with the argument images/street.jpg:

&lt;code&gt;python3 image-analysis.py images/street.jpg&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9mmpwsahd57saw4apvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9mmpwsahd57saw4apvw.png" alt=" " width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Observe the output, which should include a suggested caption for the street.jpg image, which looks like this:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnwhi6rpxaw9r2747hq9.png" alt=" " width="800" height="278"&gt;
&lt;/li&gt;
&lt;li&gt;Run the program again, this time with the argument images/building.jpg to see the caption that gets generated for the building.jpg image, which looks like this:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hrnmtjnni31b3u54qjj.png" alt=" " width="800" height="426"&gt;
&lt;/li&gt;
&lt;li&gt;Repeat the previous step to generate a caption for the images/person.jpg file, which looks like this:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqwbk8ixts25o9s5n5is.png" alt=" " width="800" height="416"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to generate suggested tags
&lt;/h1&gt;

&lt;p&gt;It can sometimes be useful to identify relevant tags that provide clues about the contents of an image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In VS Code, find the comment Get image tags and add the following code:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get image tags
if result.tags is not None:
     print("\nTags:")
     for tag in result.tags.list:
         print(" Tag: '{}' (confidence: {:.2f}%)".format(tag.name, tag.confidence * 100))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmzvme1znwwxsf818yjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmzvme1znwwxsf818yjt.png" alt=" " width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes (CTRL+S) and run the program with the argument images/street.jpg, observing that in addition to the image caption, a list of suggested tags is displayed.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsyenymetojmeqh8bhky.png" alt=" " width="800" height="640"&gt;
&lt;/li&gt;
&lt;li&gt;Rerun the program for the images/building.jpg and images/person.jpg files.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5qxxsohy2551yz6ert5.png" alt=" " width="800" height="506"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz5a56yb91s7jyjag608.png" alt=" " width="800" height="554"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to detect and locate objects
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;In the code editor, in the AnalyzeImage function, find the comment Get objects in the image and add the following code to list the objects detected in the image, and call the provided function to annotate an image with the detected objects (a minimal sketch of such a helper appears after the screenshot below):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get objects in the image
if result.objects is not None:
     print("\nObjects in image:")
     for detected_object in result.objects.list:
         # Print object tag and confidence
         print(" {} (confidence: {:.2f}%)".format(detected_object.tags[0].name, detected_object.tags[0].confidence * 100))
     # Annotate objects in the image
     show_objects(image_file, result.objects.list)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj05inwiiuxh0m7p8ifop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj05inwiiuxh0m7p8ifop.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;
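
&lt;p&gt;The show_objects function called above is provided in the code file; as a rough illustration, an annotation helper of this kind could be sketched as follows (assuming Pillow is installed and that each detected object exposes a bounding_box with x, y, width, and height):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch only; the lab's code file provides its own show_objects implementation
from PIL import Image, ImageDraw

def show_objects_sketch(image_file, detected_objects, output_file='objects.jpg'):
    # Draw a bounding box and tag name for each detected object and save the annotated copy
    image = Image.open(image_file)
    draw = ImageDraw.Draw(image)
    for detected_object in detected_objects:
        box = detected_object.bounding_box
        draw.rectangle([(box.x, box.y), (box.x + box.width, box.y + box.height)],
                       outline='cyan', width=3)
        draw.text((box.x, box.y), detected_object.tags[0].name, fill='cyan')
    image.save(output_file)
    print(f'Annotated objects saved as {output_file}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;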

&lt;ul&gt;
&lt;li&gt;Save your changes (CTRL+S) and run the program with the argument images/street.jpg, observing that, in addition to the image caption and suggested tags, a file named objects.jpg is generated.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkc0051siqz8laoayz6d.png" alt=" " width="800" height="585"&gt;
&lt;/li&gt;
&lt;li&gt;Check the objects.jpg file
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frl8yi40rr21yzmkybjd8.png" alt=" " width="800" height="373"&gt;
&lt;/li&gt;
&lt;li&gt;Rerun the program for the images/building.jpg and images/person.jpg files, downloading the generated objects.jpg file after each run.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhuhz07kyveh8z1zgdrv.png" alt=" " width="800" height="549"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcifiy0i60c8fmz8xnacq.png" alt=" " width="800" height="598"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to detect and locate people
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;In the code editor, in the AnalyzeImage function, find the comment Get people in the image and add the following code to list any detected people with a confidence level of 20% or more, and call a provided function to annotate them in an image:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get people in the image
if result.people is not None:
     print("\nPeople in image:")

     for detected_person in result.people.list:
         if detected_person.confidence &amp;gt; 0.2:
             # Print location and confidence of each person detected
             print(" {} (confidence: {:.2f}%)".format(detected_person.bounding_box, detected_person.confidence * 100))
     # Annotate people in the image
     show_people(image_file, result.people.list)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu62vypn6fny819ckr71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu62vypn6fny819ckr71.png" alt=" " width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes (CTRL+S) and run the program with the argument images/street.jpg, observing that, in addition to the image caption, suggested tags, and objects.jpg file, a list of person locations is displayed and a file named people.jpg is generated.&lt;/li&gt;
&lt;li&gt;Open the people.jpg file
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2961a3yd6cme9rmov09.png" alt=" " width="800" height="600"&gt;
&lt;/li&gt;
&lt;li&gt;Rerun the program for the images/building.jpg and images/person.jpg files, downloading the generated people.jpg file after each run.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpb1anto5ngzt4ft5bzl.png" alt=" " width="800" height="608"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuk0qeiq7t2psbrup1m4e.png" alt=" " width="800" height="613"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ve just unlocked the power of computer vision—transforming raw pixels into intelligent insights with Azure AI. From auto-generating captions to detecting objects and people, you now have the tools to build applications that truly see and understand visual data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Project guide link:&lt;/em&gt; &lt;a href="https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/01-analyze-images.html" rel="noopener noreferrer"&gt;https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/01-analyze-images.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>python</category>
      <category>vscode</category>
    </item>
    <item>
      <title>Build a Talking Clock with Azure AI Speech: A Step-by-Step Guide to Speech Synthesis &amp; Recognition</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Sun, 13 Jul 2025 17:52:45 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/build-a-talking-clock-with-azure-ai-speech-a-step-by-step-guide-to-speech-synthesis-recognition-258c</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/build-a-talking-clock-with-azure-ai-speech-a-step-by-step-guide-to-speech-synthesis-recognition-258c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Azure AI Speech is a service that provides speech-related functionality, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A speech-to-text API that enables you to implement speech recognition (converting audible spoken words into text).&lt;/li&gt;
&lt;li&gt;A text-to-speech API that enables you to implement speech synthesis (converting text into audible speech).
In this exercise, you’ll use both of these APIs to implement a speaking clock application; a minimal Python sketch of the two APIs follows this list.&lt;/li&gt;
&lt;/ul&gt;
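
&lt;p&gt;Conceptually, the two APIs look like this in Python (a minimal sketch assuming the azure-cognitiveservices-speech package and placeholder key and region values; the exercise itself uses the C# SDK):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal sketch of speech recognition and synthesis (key and region are placeholders)
import azure.cognitiveservices.speech as speech_sdk

speech_config = speech_sdk.SpeechConfig(subscription='YOUR-SPEECH-KEY', region='YOUR-REGION')

# Speech to text: recognize a single utterance from the default microphone
recognizer = speech_sdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once_async().get()
print('Heard:', result.text)

# Text to speech: speak a response through the default speaker
synthesizer = speech_sdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async('The time is twelve fifteen PM.').get()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;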

&lt;h1&gt;
  
  
  Create an Azure AI Speech resource
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Go to the Azure portal and select Create a resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzrjigyhutczkgns5ayf.png" alt=" " width="800" height="207"&gt;
&lt;/li&gt;
&lt;li&gt;In the top search field, search for Speech service. Select it from the list, then select Create.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbi8bpkuyg7mto6isejg.png" alt=" " width="800" height="658"&gt;
Provision the resource using the following settings:

&lt;ul&gt;
&lt;li&gt;Subscription: Your Azure subscription.&lt;/li&gt;
&lt;li&gt;Resource group: Choose or create a resource group.&lt;/li&gt;
&lt;li&gt;Region: Choose any available region&lt;/li&gt;
&lt;li&gt;Name: Enter a unique name.&lt;/li&gt;
&lt;li&gt;Pricing tier: Select F0 (free), or S (standard) if F is not available.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Select Review + create, then select Create to provision the resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffghqskl0t4nz66qzpqvh.png" alt=" " width="800" height="737"&gt;
&lt;/li&gt;

&lt;li&gt;Wait for deployment to complete, and then go to the deployed resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomj04qxidwmot5vkkowe.png" alt=" " width="800" height="405"&gt;
&lt;/li&gt;

&lt;li&gt;View the Keys and Endpoint page in the Resource Management section. You will need the information on this page later in the exercise.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0mn896oul1ouxitdttm.png" alt=" " width="800" height="431"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prepare and configure the speaking clock app
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open VS Code and, in the integrated terminal, enter the following command to clone the GitHub repo for this exercise:

&lt;code&gt;git clone https://github.com/microsoftlearning/mslearn-ai-language mslearn-ai-language&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgh0g8ipwgakqc4ixacv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgh0g8ipwgakqc4ixacv.png" alt=" " width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, navigate to the folder containing the speaking clock application code files:

&lt;code&gt;cd mslearn-ai-language/Labfiles/07-speech/C-Sharp/speaking-clock&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn63d7o1ym0ig93xvt29w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn63d7o1ym0ig93xvt29w.png" alt=" " width="800" height="43"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the following commands to install the libraries you’ll use:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package Azure.Identity
dotnet add package Azure.AI.Projects --prerelease
dotnet add package Microsoft.CognitiveServices.Speech --version 1.42.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmbtu72q1o3u4h1x14yv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmbtu72q1o3u4h1x14yv.png" alt=" " width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expand the speaking-clock folder and open the appsettings.json file for editing.&lt;/li&gt;
&lt;li&gt;In the configuration file, replace the your_project_api_key and your_project_location placeholders with the API key and location for your Speech resource (copied from the Keys and Endpoint page you viewed earlier); a rough sketch of the file is shown after this list.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficpq9wmuiv9vqgyz8q45.png" alt=" " width="800" height="448"&gt;
&lt;/li&gt;
&lt;li&gt;After you’ve replaced the placeholders, within the code editor, use the CTRL+S command or Right-click &amp;gt; Save to save your changes.&lt;/li&gt;
&lt;/ul&gt;
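
&lt;p&gt;As a rough sketch, the file has this kind of shape before editing. The key names shown here are illustrative only - keep whichever names are already present in the repo’s appsettings.json and simply replace the two placeholder values with your own key and location:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ProjectKey": "your_project_api_key",
  "Location": "your_project_location"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;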

&lt;h1&gt;
  
  
  Add code to use the Azure AI Speech SDK
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open the Program.cs file and, at the top of the code file, under the existing namespace references, find the comment Import namespaces. Then, under this comment, add the following language-specific code to import the namespaces you will need to use the Azure AI Speech SDK:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Import namespaces
using Azure.Identity;
using Azure.AI.Projects;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpear8f4jlzazccnwedu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpear8f4jlzazccnwedu.png" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Main function, under the comment Get config settings, note that the code loads the project key and location you defined in the configuration file.&lt;/li&gt;
&lt;li&gt;Under the comment Configure speech service, add the following code to use the AI Services key and your project’s region to configure your connection to the Azure AI Services Speech endpoint:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Configure speech service
speechConfig = SpeechConfig.FromSubscription(projectKey, location);
Console.WriteLine("Ready to use speech service in " + speechConfig.Region);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0be5ewpaj2qd66vt8lh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0be5ewpaj2qd66vt8lh.png" alt=" " width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes (CTRL+S), but leave the code editor open.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Run the app
&lt;/h1&gt;

&lt;p&gt;So far, the app doesn’t do anything other than connect to your Azure AI Speech service, but it’s useful to run it and check that it works before adding speech functionality.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the command line, enter the following language-specific command to run the speaking clock app:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;dotnet run&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you are using C#, you can ignore any warnings about using the await operator in asynchronous methods - we’ll fix that later. The code should display the region of the speech service resource the application will use. A successful run indicates that the app has connected to your Azure AI Speech resource.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F784n5f4ut192brbbu0uq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F784n5f4ut192brbbu0uq.png" alt=" " width="800" height="141"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;NOTE:&lt;/strong&gt; I encountered this error&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OLAMOBILEs-MacBook-Pro:speaking-clock olamobile$ dotnet run
/Users/olamobile/AI-pract/mslearn-ai-language/Labfiles/07-speech/C-Sharp/speaking-clock/Program.cs(6,13): error CS0234: The type or namespace name 'Identity' does not exist in the namespace 'Azure' (are you missing an assembly reference?)
/Users/olamobile/AI-pract/mslearn-ai-language/Labfiles/07-speech/C-Sharp/speaking-clock/Program.cs(7,16): error CS0234: The type or namespace name 'Projects' does not exist in the namespace 'Azure.AI' (are you missing an assembly reference?)

The build failed. Fix the build errors and run again.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To fix this, I commented out the first two lines under // Import namespaces, as shown in the snippet below.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F085k8tyv68w8gg8i72hj.png" alt=" " width="800" height="465"&gt;
&lt;/li&gt;
&lt;/ul&gt;
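
&lt;p&gt;For reference, the top of Program.cs looked like this after the workaround - the two unused imports are simply commented out, and the Speech SDK namespaces are all the rest of the app actually needs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Import namespaces
// using Azure.Identity;     // commented out to work around the CS0234 build error
// using Azure.AI.Projects;  // commented out to work around the CS0234 build error; not needed here
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;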

&lt;h1&gt;
  
  
  Add code to recognize speech
&lt;/h1&gt;

&lt;p&gt;Now that you have a SpeechConfig for the speech service in your project’s Azure AI Services resource, you can use the Speech-to-text API to recognize speech and transcribe it to text.&lt;br&gt;
In this procedure, the speech input is captured from an audio file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Main function, note that the code uses the TranscribeCommand function to accept spoken input. Then in the TranscribeCommand function, under the comment Configure speech recognition, add the appropriate code below to create a SpeechRecognizer client that can be used to recognize and transcribe speech from an audio file:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Configure speech recognition
string audioFile = "time.wav";
using AudioConfig audioConfig = AudioConfig.FromWavFileInput(audioFile);
using SpeechRecognizer speechRecognizer = new SpeechRecognizer(speechConfig, audioConfig);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;In the TranscribeCommand function, under the comment Process speech input, add the following code to listen for spoken input, being careful not to replace the code at the end of the function that returns the command:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Process speech input
Console.WriteLine("Listening...");
SpeechRecognitionResult speech = await speechRecognizer.RecognizeOnceAsync();
if (speech.Reason == ResultReason.RecognizedSpeech)
{
    command = speech.Text;
    Console.WriteLine(command);
}
else
{
    Console.WriteLine(speech.Reason);
    if (speech.Reason == ResultReason.Canceled)
    {
        var cancellation = CancellationDetails.FromResult(speech);
        Console.WriteLine(cancellation.Reason);
        Console.WriteLine(cancellation.ErrorDetails);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fheq2u0i780kp2w5vz3ed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fheq2u0i780kp2w5vz3ed.png" alt=" " width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes (CTRL+S), and then in the command line below the code editor, enter the following command to run the program:

&lt;code&gt;dotnet run&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhlvb1kv09pczpn1r736.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhlvb1kv09pczpn1r736.png" alt=" " width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Synthesize speech
&lt;/h1&gt;

&lt;p&gt;Your speaking clock application accepts spoken input, but it doesn’t actually speak! Let’s fix that by adding code to synthesize speech.&lt;br&gt;
Once again, due to the hardware limitations of the cloud shell, we’ll direct the synthesized speech output to a file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Main function for your program, note that the code uses the TellTime function to tell the user the current time.&lt;/li&gt;
&lt;li&gt;In the TellTime function, under the comment Configure speech synthesis, add the following code to create a SpeechSynthesizer client that can be used to generate spoken output:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Configure speech synthesis
var outputFile = "output.wav";
speechConfig.SpeechSynthesisVoiceName = "en-GB-RyanNeural";
using var audioConfig = AudioConfig.FromWavFileOutput(outputFile);
using SpeechSynthesizer speechSynthesizer = new SpeechSynthesizer(speechConfig, audioConfig);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;In the TellTime function, under the comment Synthesize spoken output, add the following code to generate spoken output, being careful not to replace the code at the end of the function that prints the response:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Synthesize spoken output
SpeechSynthesisResult speak = await speechSynthesizer.SpeakTextAsync(responseText);
if (speak.Reason != ResultReason.SynthesizingAudioCompleted)
{
    Console.WriteLine(speak.Reason);
}
else
{
    Console.WriteLine("Spoken output saved in " + outputFile);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Save your changes (CTRL+S), and then in the command line below the code editor, enter the following command to run the program:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;dotnet run&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the output from the application, which should indicate that the spoken output was saved in a file.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ggsgvkop096dyefolre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ggsgvkop096dyefolre.png" alt=" " width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you have a media player capable of playing .wav audio files, in the toolbar for the cloud shell pane, use the Upload/Download files button to download the audio file from your app folder, and then play it:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/home/user/mslearn-ai-language/Labfiles/07-speech/C-Sharp/speaking-clock/output.wav
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45lg69q5qqkvxui2oqm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45lg69q5qqkvxui2oqm2.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Use Speech Synthesis Markup Language
&lt;/h1&gt;

&lt;p&gt;Speech Synthesis Markup Language (SSML) enables you to customize the way your speech is synthesized using an XML-based format.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the TellTime function, replace all of the current code under the comment Synthesize spoken output with the following code (leave the code under the comment Print the response):
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Synthesize spoken output
string responseSsml = $@"
    &amp;lt;speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'&amp;gt;
        &amp;lt;voice name='en-GB-LibbyNeural'&amp;gt;
            {responseText}
            &amp;lt;break strength='weak'/&amp;gt;
            Time to end this lab!
        &amp;lt;/voice&amp;gt;
    &amp;lt;/speak&amp;gt;";
SpeechSynthesisResult speak = await speechSynthesizer.SpeakSsmlAsync(responseSsml);
if (speak.Reason != ResultReason.SynthesizingAudioCompleted)
{
    Console.WriteLine(speak.Reason);
}
else
{
     Console.WriteLine("Spoken output saved in " + outputFile);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Save your changes and return to the integrated terminal for the speaking-clock folder, and enter the following command to run the program:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;dotnet run&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the output from the application, which should indicate that the spoken output was saved in a file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once again, if you have a media player capable of playing .wav audio files, in the toolbar for the cloud shell pane, use the Upload/Download files button to download the audio file from your app folder, and then play it:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/home/user/mslearn-ai-language/Labfiles/07-speech/C-Sharp/speaking-clock/output.wav
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ojl69i6besc4eaq6vtg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ojl69i6besc4eaq6vtg.png" alt=" " width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  What if you have a mic and speaker?
&lt;/h1&gt;

&lt;p&gt;In this exercise, you used audio files for the speech input and output. Let’s see how the code can be modified to use audio hardware.&lt;/p&gt;
&lt;h1&gt;
  
  
  Using speech recognition with a microphone
&lt;/h1&gt;

&lt;p&gt;If you have a mic, you can use the following code to capture spoken input for speech recognition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Configure speech recognition
using AudioConfig audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using SpeechRecognizer speechRecognizer = new SpeechRecognizer(speechConfig, audioConfig);
Console.WriteLine("Speak now...");

SpeechRecognitionResult speech = await speechRecognizer.RecognizeOnceAsync();
if (speech.Reason == ResultReason.RecognizedSpeech)
{
    command = speech.Text;
    Console.WriteLine(command);
}
else
{
    Console.WriteLine(speech.Reason);
    if (speech.Reason == ResultReason.Canceled)
    {
        var cancellation = CancellationDetails.FromResult(speech);
        Console.WriteLine(cancellation.Reason);
        Console.WriteLine(cancellation.ErrorDetails);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqs9xe8ho5e9timo3k54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqs9xe8ho5e9timo3k54.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Using speech synthesis with a speaker
&lt;/h1&gt;

&lt;p&gt;If you have a speaker, you can use the following code to synthesize speech.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var now = DateTime.Now;
string responseText = "The time is " + now.Hour.ToString() + ":" + now.Minute.ToString("D2");

// Configure speech synthesis
speechConfig.SpeechSynthesisVoiceName = "en-GB-RyanNeural";
using var audioConfig = AudioConfig.FromDefaultSpeakerOutput();
using SpeechSynthesizer speechSynthesizer = new SpeechSynthesizer(speechConfig, audioConfig);

// Synthesize spoken output
SpeechSynthesisResult speak = await speechSynthesizer.SpeakTextAsync(responseText);
if (speak.Reason != ResultReason.SynthesizingAudioCompleted)
{
    Console.WriteLine(speak.Reason);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhl7va6c8tvcao8y617m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhl7va6c8tvcao8y617m.png" alt=" " width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ve just transformed text into speech and speech into understanding—turning Azure AI’s capabilities into a functional talking clock. But this is just the beginning. Imagine applying these same techniques to build voice assistants, interactive IVR systems, or even accessibility tools that give your applications a voice.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for staying till the end&lt;/em&gt;&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>ai</category>
      <category>cloud</category>
      <category>azure</category>
    </item>
    <item>
      <title>How I Built a Smart Translation App Using Azure AI (And You Can Too)</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Sun, 13 Jul 2025 00:45:42 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/how-i-built-a-smart-translation-app-using-azure-ai-and-you-can-too-4i71</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/how-i-built-a-smart-translation-app-using-azure-ai-and-you-can-too-4i71</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Azure AI Translator is a service that enables you to translate text between languages. In this exercise, you’ll use it to create a simple app that translates input in any supported language to the target language of your choice.&lt;/p&gt;

&lt;h1&gt;
  
  
  Provision an Azure AI Translator resource
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Go to the Azure portal and select Create a resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzrjigyhutczkgns5ayf.png" alt=" " width="800" height="207"&gt;
&lt;/li&gt;
&lt;li&gt;In the search field at the top, search for Translator, then select Translator in the results.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8rgs4r33mq10ew574hj.png" alt=" " width="800" height="653"&gt;
&lt;/li&gt;
&lt;li&gt;Create a resource with the following settings:

&lt;ul&gt;
&lt;li&gt;Subscription: Your Azure subscription&lt;/li&gt;
&lt;li&gt;Resource group: Choose or create a resource group&lt;/li&gt;
&lt;li&gt;Region: Choose any available region&lt;/li&gt;
&lt;li&gt;Name: Enter a unique name&lt;/li&gt;
&lt;li&gt;Pricing tier: Select F0 (free), or S (standard) if F is not available.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Select Review + create, then select Create to provision the resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0i6hw9rtic8woc3tjwpk.png" alt=" " width="800" height="756"&gt;
&lt;/li&gt;

&lt;li&gt;Wait for deployment to complete, and then go to the deployed resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0yj8xeiouezz16uy9eia.png" alt=" " width="800" height="321"&gt;
&lt;/li&gt;

&lt;li&gt;View the Keys and Endpoint page. You will need the information on this page later in the exercise.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44srmlm9aq9db1aym4k7.png" alt=" " width="800" height="495"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prepare to develop an app in Cloud Shell
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Enter the following commands to clone the GitHub repo for this exercise:

&lt;code&gt;git clone https://github.com/microsoftlearning/mslearn-ai-language mslearn-ai-language&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgh0g8ipwgakqc4ixacv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgh0g8ipwgakqc4ixacv.png" alt=" " width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, navigate to the folder containing the application code files:

&lt;code&gt;cd mslearn-ai-language/Labfiles/06-translator-sdk&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Configure your application
&lt;/h1&gt;

&lt;p&gt;Applications for both C# and Python have been provided. Both apps feature the same functionality. We will be using C# for this project. First, you’ll complete some key parts of the application to enable it to use your Azure AI Translator resource.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run the command &lt;code&gt;cd C-Sharp/translate-text&lt;/code&gt; (or the equivalent folder for your language preference). Each folder contains the language-specific code files for an app into which you’re going to integrate Azure AI Translator functionality.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ygear57zusyerw57i4z.png" alt=" " width="800" height="60"&gt;
&lt;/li&gt;
&lt;li&gt;Install the Azure AI Translator SDK package by running the appropriate command for your language preference:

&lt;code&gt;dotnet add package Azure.AI.Translation.Text --version 1.0.0&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr31wm1z2msrl2brq3vbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr31wm1z2msrl2brq3vbf.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expand the translate-text folder and open the appsettings.json file for editing.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72ppuwq7uuo7aob22pye.png" alt=" " width="800" height="412"&gt;
&lt;/li&gt;
&lt;li&gt;Update the configuration values to include the region and a key from the Azure AI Translator resource you created (available on the Keys and Endpoint page for your Azure AI Translator resource in the Azure portal); a rough sketch of the file is shown after this list.&lt;/li&gt;
&lt;li&gt;After you’ve replaced the placeholders, within the code editor, use the CTRL+S command or Right-click &amp;gt; Save to save your changes.&lt;/li&gt;
&lt;/ul&gt;
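
&lt;p&gt;As a rough sketch, the configuration file has this kind of shape. The key names shown here are illustrative only - keep whichever names the repo’s appsettings.json already uses and paste in your own region and key from the Keys and Endpoint page:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "TranslatorRegion": "your_translator_region",
  "TranslatorKey": "your_translator_key"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;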

&lt;h1&gt;
  
  
  Add code to translate text
&lt;/h1&gt;

&lt;p&gt;Now you’re ready to use Azure AI Translator to translate text.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the Program.cs file and, at the top, under the existing namespace references, find the comment Import namespaces. Then, under this comment, add the following language-specific code to import the namespaces you will need to use the Azure AI Translator SDK:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // import namespaces
 using Azure;
 using Azure.AI.Translation.Text;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaeup7sqt5ujw7mev2at.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaeup7sqt5ujw7mev2at.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find the comment Create client using endpoint and key and add the following code:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Create client using endpoint and key
 AzureKeyCredential credential = new(translatorKey);
 TextTranslationClient client = new(credential, translatorRegion);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mamc8mgu2m0h3t9juft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mamc8mgu2m0h3t9juft.png" alt=" " width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find the comment Choose target language and add the following code, which uses the Azure AI Translator service to return a list of supported languages for translation, and prompts the user to select a language code for the target language.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Choose target language
 Response&amp;lt;GetSupportedLanguagesResult&amp;gt; languagesResponse = await client.GetSupportedLanguagesAsync(scope:"translation").ConfigureAwait(false);
 GetSupportedLanguagesResult languages = languagesResponse.Value;
 Console.WriteLine($"{languages.Translation.Count} languages available.\n(See https://learn.microsoft.com/azure/ai-services/translator/language-support#translation)");
 Console.WriteLine("Enter a target language code for translation (for example, 'en'):");
 string targetLanguage = "xx";
 bool languageSupported = false;
 while (!languageSupported)
 {
     targetLanguage = Console.ReadLine();
     if (languages.Translation.ContainsKey(targetLanguage))
     {
         languageSupported = true;
     }
     else
     {
         Console.WriteLine($"{targetLanguage} is not a supported language.");
     }

 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjr4t2fhy2pq8bwgaxcp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjr4t2fhy2pq8bwgaxcp.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find the comment Translate text and add the following code, which repeatedly prompts the user for text to be translated, uses the Azure AI Translator service to translate it to the target language (detecting the source language automatically), and displays the results until the user enters quit.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Translate text
 string inputText = "";
 while (inputText.ToLower() != "quit")
 {
     Console.WriteLine("Enter text to translate ('quit' to exit)");
     inputText = Console.ReadLine();
     if (inputText.ToLower() != "quit")
     {
         Response&amp;lt;IReadOnlyList&amp;lt;TranslatedTextItem&amp;gt;&amp;gt; translationResponse = await client.TranslateAsync(targetLanguage, inputText).ConfigureAwait(false);
         IReadOnlyList&amp;lt;TranslatedTextItem&amp;gt; translations = translationResponse.Value;
         TranslatedTextItem translation = translations[0];
         string sourceLanguage = translation?.DetectedLanguage?.Language;
         Console.WriteLine($"'{inputText}' translated from {sourceLanguage} to {translation?.Translations[0].TargetLanguage} as '{translation?.Translations?[0]?.Text}'.");
     }
 } 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg03lbkok2c8ot75tif6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg03lbkok2c8ot75tif6w.png" alt=" " width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save the changes to your code file and close the code editor.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Test your application
&lt;/h1&gt;

&lt;p&gt;Now your application is ready to test.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enter the following command to run the program (you can maximize the console panel to see more text):&lt;br&gt;
&lt;br&gt;
&lt;code&gt;dotnet run&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When prompted, enter a valid target language code from the list displayed (for example, en for English).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter a phrase to be translated (for example This is a test or C'est un test) and view the results, which should detect the source language and translate the text to the target language.&lt;br&gt;
When you’re done, enter quit. You can run the application again and choose a different target language.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9ne71vkbnxbdygdm8kd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9ne71vkbnxbdygdm8kd.png" alt=" " width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You've just unlocked the power of Azure AI Translator – transforming any text into multiple languages with just a few lines of code. Whether you're building multilingual apps, global customer support, or breaking down language barriers in your projects, this technology opens doors to truly borderless communication.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for staying till the end&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to Train an AI Model to Understand Time, Date &amp; Location Requests</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Tue, 08 Jul 2025 11:59:41 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/how-to-train-an-ai-model-to-understand-time-date-location-requests-1197</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/how-to-train-an-ai-model-to-understand-time-date-location-requests-1197</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;The Azure AI Language service enables you to define a conversational language understanding model that applications can use to interpret natural language input from users, predict the user’s intent (what they want to achieve), and identify any entities to which the intent should be applied.&lt;/p&gt;

&lt;p&gt;For example, a conversational language model for a clock application might be expected to process input such as:&lt;/p&gt;

&lt;p&gt;What is the time in London?&lt;/p&gt;

&lt;p&gt;This kind of input is an example of an utterance (something a user might say or type), for which the desired intent is to get the time in a specific location (an entity); in this case, London.&lt;/p&gt;
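
&lt;p&gt;To make those terms concrete, here is a small illustrative C# sketch (not part of the lab code, with made-up type names and a hypothetical confidence score) of how a prediction for that utterance might be represented:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;
using System.Collections.Generic;

// Illustrative only: the Clock model you build in this exercise returns a
// prediction with essentially this shape.
var example = new IntentPrediction(
    Utterance: "What is the time in London?",      // what the user said or typed
    TopIntent: "GetTime",                          // the intent: what the user wants to achieve
    Confidence: 0.95,                              // hypothetical confidence score
    Entities: new() { new("Location", "London") }  // the entity the intent applies to
);
Console.WriteLine($"{example.TopIntent}: {example.Entities[0].Text}");

// Simple illustrative record types - not part of any SDK.
record IntentPrediction(string Utterance, string TopIntent, double Confidence, List&amp;lt;EntityPrediction&amp;gt; Entities);
record EntityPrediction(string Category, string Text);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;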

&lt;h1&gt;
  
  
  Provision an Azure AI Language resource
&lt;/h1&gt;

&lt;p&gt;To provision the Azure AI Language resource for this exercise, follow the steps in &lt;a href="https://dev.to/olalekan_oladiran_d74b7a6/build-your-first-text-analytics-app-with-azure-ai-in-under-30-minutes-4dm6"&gt;Build Your First Text Analytics App with Azure AI in Under 30 Minutes&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Create a conversational language understanding project
&lt;/h1&gt;

&lt;p&gt;Now that you have created an authoring resource, you can use it to create a conversational language understanding project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In a new browser tab, go to the Language Studio portal at &lt;a href="https://language.cognitive.azure.com/" rel="noopener noreferrer"&gt;https://language.cognitive.azure.com/&lt;/a&gt; and sign in using the Microsoft account associated with your Azure subscription.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte1foy377875pelcp911.png" alt=" " width="800" height="291"&gt;
&lt;/li&gt;
&lt;li&gt;If you’re prompted to choose a Language resource, select the following settings:

&lt;ul&gt;
&lt;li&gt;Azure Directory: The Azure directory containing your subscription.&lt;/li&gt;
&lt;li&gt;Azure subscription: Your Azure subscription.&lt;/li&gt;
&lt;li&gt;Resource type: Language&lt;/li&gt;
&lt;li&gt;Resource name: The Azure AI Language resource you created previously.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqh5x1mgdw9fm0d5v30g.png" alt=" " width="800" height="925"&gt;
If you are not prompted to choose a language resource, it may be because you have multiple Language resources in your subscription; in which case:&lt;/li&gt;
&lt;li&gt;On the bar at the top of the page, select the Settings (⚙) button.&lt;/li&gt;
&lt;li&gt;On the Settings page, view the Resources tab.&lt;/li&gt;
&lt;li&gt;Select the language resource you just created, and click Switch resource.&lt;/li&gt;
&lt;li&gt;At the top of the page, click Language Studio to return to the Language Studio home page.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;At the top of the portal, in the Create new menu, select Conversational language understanding.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3zu5lmfsnc0aq1ozoc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3zu5lmfsnc0aq1ozoc4.png" alt=" " width="800" height="724"&gt;&lt;/a&gt;&lt;br&gt;
In the Create a project dialog box, on the Enter basic information page, enter the following details and then select Next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: Clock&lt;/li&gt;
&lt;li&gt;Utterances primary language: English&lt;/li&gt;
&lt;li&gt;Enable multiple languages in project?: Unselected&lt;/li&gt;
&lt;li&gt;Description: Natural language clock
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfdni534jpcpb4cdy7eg.png" alt=" " width="800" height="623"&gt;
On the Review and finish page, select Create.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffiin656n38rbtva3xlzd.png" alt=" " width="800" height="625"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Create intents
&lt;/h1&gt;

&lt;p&gt;The first thing we’ll do in the new project is to define some intents. The model will ultimately predict which of these intents a user is requesting when submitting a natural language utterance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the Schema definition page, on the Intents tab, select ＋ Add to add a new intent named GetTime.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh194vqq4vysxqibsd7g4.png" alt=" " width="800" height="252"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6bsf6yewmunpoiqcqas.png" alt=" " width="800" height="424"&gt;
&lt;/li&gt;
&lt;li&gt;Verify that the GetTime intent is listed (along with the default None intent). Then add the following additional intents:

&lt;ul&gt;
&lt;li&gt;GetDay&lt;/li&gt;
&lt;li&gt;GetDate
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsluxyuy2tk7yznca4bh5.png" alt=" " width="800" height="397"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Select the new GetTime intent and enter the utterance what is the time?. This adds the utterance as sample input for the intent.
Add the following additional utterances for the GetTime intent:

&lt;ul&gt;
&lt;li&gt;what's the time?&lt;/li&gt;
&lt;li&gt;what time is it?&lt;/li&gt;
&lt;li&gt;tell me the time
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cw5ngmdgcigq9mh48k8.png" alt=" " width="800" height="616"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Select the GetDay intent and add the following utterances as example input for that intent:

&lt;ul&gt;
&lt;li&gt;what day is it?&lt;/li&gt;
&lt;li&gt;what's the day?&lt;/li&gt;
&lt;li&gt;what is the day today?&lt;/li&gt;
&lt;li&gt;what day of the week is it?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Select the GetDate intent and add the following utterances for it:

&lt;ul&gt;
&lt;li&gt;what date is it?&lt;/li&gt;
&lt;li&gt;what's the date?&lt;/li&gt;
&lt;li&gt;what is the date today?&lt;/li&gt;
&lt;li&gt;what's today's date?
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzcg76xfmrdkssnlyqf1.png" alt=" " width="800" height="589"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzwjaknb087dp5w6ks5w.png" alt=" " width="800" height="425"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Train and test the model
&lt;/h1&gt;

&lt;p&gt;Now that you’ve added some intents, let’s train the language model and see if it can correctly predict them from user input.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the pane on the left, select Training jobs. Then select + Start a training job.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffu44zp2192uf0gsuwqnz.png" alt=" " width="800" height="308"&gt;
&lt;/li&gt;
&lt;li&gt;On the Start a training job dialog, select the option to train a new model and name it Clock. Select Standard training mode and the default Data splitting options.&lt;/li&gt;
&lt;li&gt;To begin the process of training your model, select Train.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekv9iu1hshdy4ru50iz4.png" alt=" " width="800" height="446"&gt;
&lt;/li&gt;
&lt;li&gt;When training is complete (which may take several minutes), the job Status will change to Training succeeded.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbndqcdjxekxweix5qof.png" alt=" " width="800" height="184"&gt;
&lt;/li&gt;
&lt;li&gt;Select the Model performance page, and then select the Clock model. Review the overall and per-intent evaluation metrics (precision, recall, and F1 score) and the confusion matrix generated by the evaluation that was performed when training (note that due to the small number of sample utterances, not all intents may be included in the results). A small worked example of how these metrics are calculated appears after this list.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foreph6dnwfarersp5ylf.png" alt=" " width="800" height="266"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foelmhub56fbfynhv37z0.png" alt=" " width="800" height="384"&gt;
&lt;/li&gt;
&lt;li&gt;Go to the Deploying a model page, then select Add deployment.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejc724bra3bwlppy0f0e.png" alt=" " width="800" height="340"&gt;
&lt;/li&gt;
&lt;li&gt;On the Add deployment dialog, select Create a new deployment name, and then enter production.&lt;/li&gt;
&lt;li&gt;Select the Clock model in the Model field then select Deploy. The deployment may take some time.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwx4at4ykq0x8qe8mbiw.png" alt=" " width="800" height="621"&gt;
&lt;/li&gt;
&lt;li&gt;When the model has been deployed, select the Testing deployments page, then select the production deployment in the Deployment name field.&lt;/li&gt;
&lt;li&gt;Enter the following text in the empty textbox, and then select Run the test: what's the time now?
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzysxtotroz9pyk97qv7g.png" alt=" " width="800" height="431"&gt;
&lt;/li&gt;
&lt;li&gt;Review the result that is returned, noting that it includes the predicted intent (which should be GetTime) and a confidence score that indicates the probability the model calculated for the predicted intent. The JSON tab shows the comparative confidence for each potential intent (the one with the highest confidence score is the predicted intent).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeb3s1ravbq0067pxlbt.png" alt=" " width="800" height="570"&gt;
&lt;/li&gt;
&lt;li&gt;Clear the text box, and then run another test with the following text: tell me the time&lt;/li&gt;
&lt;li&gt;Again, review the predicted intent and confidence score.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq41joj6buqs11ep91o51.png" alt=" " width="800" height="542"&gt;
&lt;/li&gt;
&lt;li&gt;Try the following text: what's the day today?&lt;/li&gt;
&lt;li&gt;Hopefully the model predicts the GetDay intent.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdcx8bwb86y6zqh35coc.png" alt=" " width="800" height="546"&gt;
&lt;/li&gt;
&lt;/ul&gt;
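
&lt;p&gt;As a quick refresher on the metrics shown on the Model performance page, here is a small worked example with made-up counts (the real numbers come from the evaluation performed on your own utterances):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;

// Illustrative counts for a single intent - the values here are made up.
double truePositives = 8, falsePositives = 1, falseNegatives = 2;

double precision = truePositives / (truePositives + falsePositives); // 8 / 9  ≈ 0.889
double recall    = truePositives / (truePositives + falseNegatives); // 8 / 10 = 0.800
double f1        = 2 * precision * recall / (precision + recall);    // ≈ 0.842 (harmonic mean)

Console.WriteLine($"Precision {precision:F3}, Recall {recall:F3}, F1 {f1:F3}");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;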
&lt;h1&gt;
  
  
  Add entities
&lt;/h1&gt;

&lt;p&gt;So far you’ve defined some simple utterances that map to intents. Most real applications include more complex utterances from which specific data entities must be extracted to get more context for the intent.&lt;br&gt;
The most common kind of entity is a learned entity, in which the model learns to identify entity values based on examples.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Language Studio, return to the Schema definition page and then on the Entities tab, select ＋ Add to add a new entity.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2tblukf5hqthxoozows7.png" alt=" " width="800" height="438"&gt;
&lt;/li&gt;
&lt;li&gt;In the Add an entity dialog box, enter the entity name Location and ensure that the Learned tab is selected. Then select Add entity.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffutqn8ideuisatjeskfm.png" alt=" " width="800" height="825"&gt;
&lt;/li&gt;
&lt;li&gt;After the Location entity has been created, return to the Data labeling page.&lt;/li&gt;
&lt;li&gt;Select the GetTime intent and enter the following new example utterance: what time is it in London?
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsai7wzt0dwikdwyuyacz.png" alt=" " width="800" height="439"&gt;
&lt;/li&gt;
&lt;li&gt;When the utterance has been added, select the word London, and in the drop-down list that appears, select Location to indicate that “London” is an example of a location.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zk4apd8wbvns3jjp483.png" alt=" " width="800" height="420"&gt;
&lt;/li&gt;
&lt;li&gt;Add another example utterance for the GetTime intent: Tell me the time in Paris?&lt;/li&gt;
&lt;li&gt;When the utterance has been added, select the word Paris, and map it to the Location entity.&lt;/li&gt;
&lt;li&gt;Add another example utterance for the GetTime intent: what's the time in New York?&lt;/li&gt;
&lt;li&gt;When the utterance has been added, select the words New York, and map them to the Location entity.&lt;/li&gt;
&lt;li&gt;Select Save changes to save the new utterances.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3k0kikn8t8y5hkspcr1s.png" alt=" " width="800" height="438"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Add a list entity
&lt;/h1&gt;

&lt;p&gt;In some cases, valid values for an entity can be restricted to a list of specific terms and synonyms; which can help the app identify instances of the entity in utterances.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Language Studio, return to the Schema definition page and then on the Entities tab, select ＋ Add to add a new entity.&lt;/li&gt;
&lt;li&gt;In the Add an entity dialog box, enter the entity name Weekday and select the List entity tab. Then select Add entity.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwx5za589uf39xgkcwnt9.png" alt=" " width="800" height="582"&gt;
&lt;/li&gt;
&lt;li&gt;On the page for the Weekday entity, in the Learned section, ensure Not required is selected. Then, in the List section, select ＋ Add new list. Then enter the following values and synonyms and select Save:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;List key&lt;/th&gt;
&lt;th&gt;Synonyms&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Sunday&lt;/td&gt;
&lt;td&gt;Sun&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monday&lt;/td&gt;
&lt;td&gt;Mon&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tuesday&lt;/td&gt;
&lt;td&gt;Tue, Tues&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wednesday&lt;/td&gt;
&lt;td&gt;Wed, Weds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Thursday&lt;/td&gt;
&lt;td&gt;Thur, Thurs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Friday&lt;/td&gt;
&lt;td&gt;Fri&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Saturday&lt;/td&gt;
&lt;td&gt;Sat&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v14x8yqmkv8uovlxrhk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v14x8yqmkv8uovlxrhk.png" alt=" " width="800" height="351"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09aweovoza5acvh2e2bm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09aweovoza5acvh2e2bm.png" alt=" " width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After adding and saving the list values, return to the Data labeling page.&lt;/li&gt;
&lt;li&gt;Select the GetDate intent and enter the following new example utterance: what date was it on Saturday?&lt;/li&gt;
&lt;li&gt;When the utterance has been added, select the word Saturday, and in the drop-down list that appears, select Weekday.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv41tnknfm8szy72spf2p.png" alt=" " width="800" height="436"&gt;
&lt;/li&gt;
&lt;li&gt;Add another example utterance for the GetDate intent: what date will it be on Friday?&lt;/li&gt;
&lt;li&gt;When the utterance has been added, map Friday to the Weekday entity. &lt;/li&gt;
&lt;li&gt;Add another example utterance for the GetDate intent: what will the date be on Thurs?&lt;/li&gt;
&lt;li&gt;When the utterance has been added, map Thurs to the Weekday entity. Then select Save changes to save the new utterances.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluuh304r1fvt58lyzk9f.png" alt=" " width="800" height="483"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Add a prebuilt entity
&lt;/h1&gt;

&lt;p&gt;The Azure AI Language service provides a set of prebuilt entities that are commonly used in conversational applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Language Studio, return to the Schema definition page and then on the Entities tab, select ＋ Add to add a new entity.&lt;/li&gt;
&lt;li&gt;In the Add an entity dialog box, enter the entity name Date and select the Prebuilt entity tab. Then select Add entity.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft79kqqvr6qzgmaacgyze.png" alt=" " width="800" height="586"&gt;
&lt;/li&gt;
&lt;li&gt;On the page for the Date entity, in the Learned section, ensure Not required is selected. Then, in the Prebuilt section, select ＋ Add new prebuilt.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1itk0xz1402ib0van1gl.png" alt=" " width="800" height="329"&gt;
&lt;/li&gt;
&lt;li&gt;In the Select prebuilt list, select DateTime and then select Save.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruxb9daydle77lufhwy6.png" alt=" " width="800" height="226"&gt;
&lt;/li&gt;
&lt;li&gt;After adding the prebuilt entity, return to the Data labeling page.&lt;/li&gt;
&lt;li&gt;Select the GetDay intent and enter the following new example utterance: what day was 01/01/1901?&lt;/li&gt;
&lt;li&gt;When the utterance has been added, select 01/01/1901, and in the drop-down list that appears, select Date.&lt;/li&gt;
&lt;li&gt;Add another example utterance for the GetDay intent: what day will it be on Dec 31st 2099?&lt;/li&gt;
&lt;li&gt;When the utterance has been added, map Dec 31st 2099 to the Date entity.&lt;/li&gt;
&lt;li&gt;Select Save changes to save the new utterances.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwuf1jc8rfvvbp6jaz3l.png" alt=" " width="800" height="440"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Retrain the model
&lt;/h1&gt;

&lt;p&gt;Now that you’ve modified the schema, you need to retrain and retest the model.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the Training jobs page, select Start a training job.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wdf1mkx8qizyzefoat4.png" alt=" " width="800" height="219"&gt;
&lt;/li&gt;
&lt;li&gt;On the Start a training job dialog, select Overwrite an existing model and specify the Clock model. Select Train to train the model. If prompted, confirm you want to overwrite the existing model.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih5r2neh9duqmflw77wz.png" alt=" " width="800" height="517"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5vv50ag667h53i0u90t.png" alt=" " width="800" height="253"&gt;
&lt;/li&gt;
&lt;li&gt;When training is complete, the job Status will update to Training succeeded.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05nawtjdm43aql9v6ply.png" alt=" " width="800" height="240"&gt;
&lt;/li&gt;
&lt;li&gt;Select the Model performance page and then select the Clock model. Review the evaluation metrics (precision, recall, and F1 score) and the confusion matrix generated by the evaluation that was performed when training (note that due to the small number of sample utterances, not all intents may be included in the results).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnmfcam20gvblu683xoe.png" alt=" " width="800" height="344"&gt;
&lt;/li&gt;
&lt;li&gt;On the Deploying a model page, select Add deployment.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiybsv7ey56ydqjcauba8.png" alt=" " width="800" height="364"&gt;
&lt;/li&gt;
&lt;li&gt;On the Add deployment dialog, select Override an existing deployment name, and then select production.&lt;/li&gt;
&lt;li&gt;Select the Clock model in the Model field and then select Deploy to deploy it. This may take some time.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvaewghs3m11bn53rnlsa.png" alt=" " width="800" height="659"&gt;
&lt;/li&gt;
&lt;li&gt;When the model is deployed, on the Testing deployments page, select the production deployment under the Deployment name field, and then test it with the following text: what's the time in Edinburgh?&lt;/li&gt;
&lt;li&gt;Review the result that is returned, which should hopefully predict the GetTime intent and a Location entity with the text value “Edinburgh”.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxki6utzteyic28op1lw1.png" alt=" " width="800" height="543"&gt;
&lt;/li&gt;
&lt;li&gt;Try testing the following utterances:

&lt;ul&gt;
&lt;li&gt;what time is it in Tokyo?&lt;/li&gt;
&lt;li&gt;what date is it on Friday?&lt;/li&gt;
&lt;li&gt;what's the date on Weds?&lt;/li&gt;
&lt;li&gt;what day was 01/01/2020?&lt;/li&gt;
&lt;li&gt;what day will Mar 7th 2030 be?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Use the model from a client app
&lt;/h1&gt;

&lt;p&gt;In a real project, you’d iteratively refine intents and entities, retrain, and retest until you’re satisfied with the model’s predictive performance. Once you’re satisfied, you can use the model in a client app by calling its REST interface or a runtime-specific SDK.&lt;/p&gt;
&lt;h1&gt;
  
  
  Prepare to develop an app in VS Code
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open VS Code and enter the following command to clone the GitHub repo for this exercise:

&lt;code&gt;git clone https://github.com/microsoftlearning/mslearn-ai-language mslearn-ai-language&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hrhwtvgptcw5dykhxfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hrhwtvgptcw5dykhxfs.png" alt=" " width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, navigate to the folder containing the application code files:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;cd mslearn-ai-language/Labfiles/03-language&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlhg5e9wrpe2kfjwjqph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlhg5e9wrpe2kfjwjqph.png" alt=" " width="800" height="73"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Configure your application
&lt;/h1&gt;

&lt;p&gt;Applications for both C# and Python have been provided, and both apps feature the same functionality. We will be using C# for this project. First, you’ll complete some key parts of the application to enable it to use your Azure AI Language resource.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run the command cd C-Sharp/clock-client. This folder contains the language-specific files for an app into which you’re going to integrate Azure AI Language conversational language understanding functionality.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnusy283hvqh5hkkwc20o.png" alt=" " width="800" height="61"&gt;
&lt;/li&gt;
&lt;li&gt;Install the Azure AI Language conversational language understanding SDK package by running the appropriate command for your language preference:

&lt;code&gt;dotnet add package Azure.AI.Language.Conversations --version 1.1.0&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp23v244w3j2a8fi2r1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp23v244w3j2a8fi2r1o.png" alt=" " width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open appsettings.json file.&lt;/li&gt;
&lt;li&gt;Update the configuration values to include the endpoint and a key from the Azure Language resource you created (available on the Keys and Endpoint page for your Azure AI Language resource in the Azure portal).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk97gp4821n4vn9yjovfh.png" alt=" " width="800" height="304"&gt;
&lt;/li&gt;
&lt;li&gt; After you’ve replaced the placeholders, within the code editor, use the CTRL+S command or Right-click &amp;gt; Save to save your changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to the application
&lt;/h1&gt;

&lt;p&gt;Now you’re ready to add the code necessary to import the required SDK libraries, establish an authenticated connection to your deployed project, and submit questions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note that the clock-client folder contains a code file for the client application: Program.cs&lt;/li&gt;
&lt;li&gt;Open the code file and at the top, under the existing namespace references, find the comment Import namespaces. Then, under this comment, add the following language-specific code to import the namespaces you will need to use the conversational language understanding SDK:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // import namespaces
 using Azure;
 using Azure.AI.Language.Conversations;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cgla4rsmbxbtlrmf87r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cgla4rsmbxbtlrmf87r.png" alt=" " width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Main function, note that code to load the prediction endpoint and key from the configuration file has already been provided. Then find the comment Create a client for the Language service model and add the following code to create a prediction client for your Language Service app:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Create a client for the Language service model
 Uri endpoint = new Uri(predictionEndpoint);
 AzureKeyCredential credential = new AzureKeyCredential(predictionKey);

 ConversationAnalysisClient client = new ConversationAnalysisClient(endpoint, credential);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrs3y3146x0jp2bomxak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrs3y3146x0jp2bomxak.png" alt=" " width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note that the code in the Main function prompts for user input until the user enters “quit”. Within this loop, find the comment Call the Language service model to get intent and entities and add the following code:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Call the Language service model to get intent and entities
 var projectName = "Clock";
 var deploymentName = "production";
 var data = new
 {
     analysisInput = new
     {
         conversationItem = new
         {
             text = userText,
             id = "1",
             participantId = "1",
         }
     },
     parameters = new
     {
         projectName,
         deploymentName,
         // Use Utf16CodeUnit for strings in .NET.
         stringIndexType = "Utf16CodeUnit",
     },
     kind = "Conversation",
 };
 // Send request
 Response response = await client.AnalyzeConversationAsync(RequestContent.Create(data));
 dynamic conversationalTaskResult = response.Content.ToDynamicFromJson(JsonPropertyNames.CamelCase);
 dynamic conversationPrediction = conversationalTaskResult.Result.Prediction;   
 var options = new JsonSerializerOptions { WriteIndented = true };
 Console.WriteLine(JsonSerializer.Serialize(conversationalTaskResult, options));
 Console.WriteLine("--------------------\n");
 Console.WriteLine(userText);
 var topIntent = "";
 if (conversationPrediction.Intents[0].ConfidenceScore &amp;gt; 0.5)
 {
     topIntent = conversationPrediction.TopIntent;
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls35i7pnjmfcabzmf7ek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls35i7pnjmfcabzmf7ek.png" alt=" " width="800" height="515"&gt;&lt;/a&gt;&lt;br&gt;
The call to the Language service model returns a prediction/result, which includes the top (most likely) intent as well as any entities that were detected in the input utterance. Your client application must now use that prediction to determine and perform the appropriate action.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find the comment Apply the appropriate action, and add the following code, which checks for intents supported by the application (GetTime, GetDate, and GetDay) and determines if any relevant entities have been detected, before calling an existing function to produce an appropriate response.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Apply the appropriate action
 switch (topIntent)
 {
     case "GetTime":
         var location = "local";           
         // Check for a location entity
         foreach (dynamic entity in conversationPrediction.Entities)
         {
             if (entity.Category == "Location")
             {
                 //Console.WriteLine($"Location Confidence: {entity.ConfidenceScore}");
                 location = entity.Text;
             }
         }
         // Get the time for the specified location
         string timeResponse = GetTime(location);
         Console.WriteLine(timeResponse);
         break;
     case "GetDay":
         var date = DateTime.Today.ToShortDateString();            
         // Check for a Date entity
         foreach (dynamic entity in conversationPrediction.Entities)
         {
             if (entity.Category == "Date")
             {
                 //Console.WriteLine($"Location Confidence: {entity.ConfidenceScore}");
                 date = entity.Text;
             }
         }            
         // Get the day for the specified date
         string dayResponse = GetDay(date);
         Console.WriteLine(dayResponse);
         break;
     case "GetDate":
         var day = DateTime.Today.DayOfWeek.ToString();
         // Check for entities            
         // Check for a Weekday entity
         foreach (dynamic entity in conversationPrediction.Entities)
         {
             if (entity.Category == "Weekday")
             {
                 //Console.WriteLine($"Location Confidence: {entity.ConfidenceScore}");
                 day = entity.Text;
             }
         }          
         // Get the date for the specified day
         string dateResponse = GetDate(day);
         Console.WriteLine(dateResponse);
         break;
     default:
         // Some other intent (for example, "None") was predicted
         Console.WriteLine("Try asking me for the time, the day, or the date.");
         break;
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxm10kgdldezri6buiqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxm10kgdldezri6buiqu.png" alt=" " width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes and close the code editor, then enter the following command to run the program (you can maximize the console panel to see more text): dotnet run&lt;/li&gt;
&lt;li&gt;When prompted, enter utterances to test the application. For example, try:

&lt;ul&gt;
&lt;li&gt;Hello&lt;/li&gt;
&lt;li&gt;What time is it?&lt;/li&gt;
&lt;li&gt;What’s the time in London?&lt;/li&gt;
&lt;li&gt;What’s the date?&lt;/li&gt;
&lt;li&gt;What date is Sunday?&lt;/li&gt;
&lt;li&gt;What day is it?&lt;/li&gt;
&lt;li&gt;What day is 01/01/2025?
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fisflm5fd19jbtzjdqk1v.png" alt=" " width="800" height="842"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;When you have finished testing, enter quit.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;You’ve just transformed raw text into an intelligent conversational experience—teaching Azure AI to understand time queries, extract locations, and even handle date math. This is the power of modern NLP: turning ambiguous human phrases into precise actions with a trained language model.&lt;/p&gt;

&lt;p&gt;But don’t stop at clocks! Imagine applying these techniques to customer support bots, smart home controls, or workflow automation. The patterns are the same; only the intents change.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for staying till the end&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>vscode</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Build a Smarter FAQ Bot with Azure AI: A Step-by-Step Guide to Question Answering</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Sun, 06 Jul 2025 12:02:14 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/build-a-smarter-faq-bot-with-azure-ai-a-step-by-step-guide-to-question-answering-bfe</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/build-a-smarter-faq-bot-with-azure-ai-a-step-by-step-guide-to-question-answering-bfe</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;One of the most common conversational scenarios is providing support through a knowledge base of frequently asked questions (FAQs). Many organizations publish FAQs as documents or web pages, which works well for a small set of question and answer pairs, but large documents can be difficult and time-consuming to search.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure AI Language&lt;/strong&gt; includes a question answering capability that enables you to create a knowledge base of question and answer pairs that can be queried using natural language input. It is most commonly used as a resource that a bot can use to look up answers to questions submitted by users.&lt;/p&gt;

&lt;h1&gt;
  
  
  Provision an Azure AI Language resource
&lt;/h1&gt;

&lt;p&gt;Click &lt;a href="https://dev.to/olalekan_oladiran_d74b7a6/build-your-first-text-analytics-app-with-azure-ai-in-under-30-minutes-4dm6"&gt;Build Your First Text Analytics App with Azure AI in Under 30 Minutes&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Create a question answering project
&lt;/h1&gt;

&lt;p&gt;To create a knowledge base for question answering in your Azure AI Language resource, you can use the Language Studio portal to create a question answering project. In this case, you’ll create a knowledge base containing questions and answers about Microsoft Learn.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In a new browser tab, go to the Language Studio portal at &lt;a href="https://language.cognitive.azure.com/" rel="noopener noreferrer"&gt;https://language.cognitive.azure.com/&lt;/a&gt; and sign in using the Microsoft account associated with your Azure subscription.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fte1foy377875pelcp911.png" alt="Image description" width="800" height="291"&gt;
&lt;/li&gt;
&lt;li&gt;If you’re prompted to choose a Language resource, select the following settings:

&lt;ul&gt;
&lt;li&gt;Azure Directory: The Azure directory containing your subscription.&lt;/li&gt;
&lt;li&gt;Azure subscription: Your Azure subscription.&lt;/li&gt;
&lt;li&gt;Resource type: Language&lt;/li&gt;
&lt;li&gt;Resource name: The Azure AI Language resource you created previously.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqh5x1mgdw9fm0d5v30g.png" alt="Image description" width="800" height="925"&gt;
If you are not prompted to choose a language resource, it may be because you have multiple Language resources in your subscription; in which case:&lt;/li&gt;
&lt;li&gt;On the bar at the top of the page, select the Settings (⚙) button.&lt;/li&gt;
&lt;li&gt;On the Settings page, view the Resources tab.&lt;/li&gt;
&lt;li&gt;Select the language resource you just created, and click Switch resource.&lt;/li&gt;
&lt;li&gt;At the top of the page, click Language Studio to return to the Language Studio home page.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Click create new and select Custom question answering.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7m8bjq7hlrmloj2wyi7.png" alt="Image description" width="800" height="631"&gt;
&lt;/li&gt;

&lt;li&gt;In the Create a project wizard, on the Choose language setting page, select the option to Select the language for all projects, and select English as the language. Then select Next.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febvvjfuur9hvdzzbpfc0.png" alt="Image description" width="800" height="620"&gt;
&lt;/li&gt;

&lt;li&gt;On the Enter basic information page, enter the following details:

&lt;ul&gt;
&lt;li&gt;Name: LearnFAQ&lt;/li&gt;
&lt;li&gt;Description: FAQ for Microsoft Learn&lt;/li&gt;
&lt;li&gt;Default answer when no answer is returned: Sorry, I don't understand the question
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazgfii5oa4uiv667geae.png" alt="Image description" width="800" height="625"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Select Next.&lt;/li&gt;

&lt;li&gt;On the Review and finish page, select Create project.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdv479kt2fqgzokmeed7g.png" alt="Image description" width="800" height="620"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add sources to the knowledge base
&lt;/h1&gt;

&lt;p&gt;You can create a knowledge base from scratch, but it’s common to start by importing questions and answers from an existing FAQ page or document. In this case, you’ll import data from an existing FAQ web page for Microsoft Learn, and you’ll also import some pre-defined “chit chat” questions and answers to support common conversational exchanges.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the Manage sources page for your question answering project, in the + Add source list, select URLs. Then in the Add URLs dialog box, select + Add url and set the following name and URL before you select Add all to add it to the knowledge base:

&lt;ul&gt;
&lt;li&gt;Name: Learn FAQ Page&lt;/li&gt;
&lt;li&gt;URL: &lt;a href="https://docs.microsoft.com/en-us/learn/support/faq" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/learn/support/faq&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ownt6ukinem9gtaj1ue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ownt6ukinem9gtaj1ue.png" alt="Image description" width="800" height="468"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w4okujog0cf8t0esu73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w4okujog0cf8t0esu73.png" alt="Image description" width="800" height="348"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfseq4v58hp91ad3qyth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfseq4v58hp91ad3qyth.png" alt="Image description" width="800" height="345"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pguk2ezrkcv1c59pm6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pguk2ezrkcv1c59pm6x.png" alt="Image description" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the Manage sources page for your question answering project, in the + Add source list, select Chitchat. Then, in the Add chit chat dialog box, select Friendly and select Add chit chat.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfv6a2p1hgf7gcsj4a8m.png" alt="Image description" width="800" height="224"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtc4iisw6xtv51m3kowt.png" alt="Image description" width="800" height="504"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Edit the knowledge base
&lt;/h1&gt;

&lt;p&gt;Your knowledge base has been populated with question and answer pairs from the Microsoft Learn FAQ, supplemented with a set of conversational chit-chat question and answer pairs. You can extend the knowledge base by adding additional question and answer pairs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In your LearnFAQ project in Language Studio, select the Edit knowledge base page to see the existing question and answer pairs (if some tips are displayed, read them and choose Got it to dismiss them, or select Skip all)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fudfywdxdyhb9f38c2pjl.png" alt="Image description" width="800" height="356"&gt;
&lt;/li&gt;
&lt;li&gt;In the knowledge base, on the Question answer pairs tab, select ＋, and create a new question answer pair with the following settings:

&lt;ul&gt;
&lt;li&gt;Source: &lt;a href="https://docs.microsoft.com/en-us/learn/support/faq" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/learn/support/faq&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Question: What are Microsoft credentials?&lt;/li&gt;
&lt;li&gt;Answer: Microsoft credentials enable you to validate and prove your skills with Microsoft technologies.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Select Done.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pupbrjjeuvegckx627o.png" alt="Image description" width="800" height="789"&gt;
&lt;/li&gt;
&lt;li&gt;In the page for the What are Microsoft credentials? question that is created, expand Alternate questions. Then add the alternate question How can I demonstrate my Microsoft technology skills?.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsd3w4qp8dc6wu3evyrj3.png" alt="Image description" width="800" height="518"&gt;
&lt;/li&gt;
&lt;li&gt;Under the answer you entered for the credentials question, expand Follow-up prompts and add the following follow-up prompt:

&lt;ul&gt;
&lt;li&gt;Text displayed in the prompt to the user: Learn more about credentials.&lt;/li&gt;
&lt;li&gt;Select the Create link to new pair tab, and enter this text: You can learn more about credentials on the &lt;a href="https://docs.microsoft.com/learn/credentials/" rel="noopener noreferrer"&gt;Microsoft credentials page&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select Show in contextual flow only. This option ensures that the answer is only ever returned in the context of a follow-up question from the original credentials question.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Select Add prompt.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbh8fwfa9eqguihrb06a.png" alt="Image description" width="800" height="596"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6slydjyspbno2k2anfr0.png" alt="Image description" width="800" height="629"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Train and test the knowledge base
&lt;/h1&gt;

&lt;p&gt;Now that you have a knowledge base, you can test it in Language Studio.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save the changes to your knowledge base by selecting the Save button under the Question answer pairs tab on the left.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbii8ixzybwuw3qunscm.png" alt="Image description" width="800" height="336"&gt;
&lt;/li&gt;
&lt;li&gt;After the changes have been saved, select the Test button to open the test pane.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2trvpwp0z62vnsw0kqyw.png" alt="Image description" width="800" height="137"&gt;
&lt;/li&gt;
&lt;li&gt;In the test pane, at the top, deselect Include short answer response (if it is not already deselected). Then at the bottom enter the message Hello. A suitable response should be returned.&lt;/li&gt;
&lt;li&gt;In the test pane, at the bottom enter the message What is Microsoft Learn?. An appropriate response from the FAQ should be returned.&lt;/li&gt;
&lt;li&gt;Enter the message Thanks! An appropriate chit-chat response should be returned.&lt;/li&gt;
&lt;li&gt;Enter the message Tell me about Microsoft credentials. The answer you created should be returned along with a follow-up prompt link.&lt;/li&gt;
&lt;li&gt;Select the Learn more about credentials follow-up link. The follow-up answer with a link to the Microsoft credentials page should be returned.&lt;/li&gt;
&lt;li&gt;When you’re done testing the knowledge base, close the test pane.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcjgpm3e40o9dajfe3vgq.png" alt="Image description" width="800" height="1538"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Deploy the knowledge base
&lt;/h1&gt;

&lt;p&gt;The knowledge base provides a back-end service that client applications can use to answer questions. Now you are ready to publish your knowledge base and access its REST interface from a client.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the LearnFAQ project in Language Studio, select the Deploy knowledge base page from the navigation menu on the left.&lt;/li&gt;
&lt;li&gt;At the top of the page, select Deploy. Then select Deploy to confirm you want to deploy the knowledge base.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwtuswnxwzj2rwppy68n.png" alt="Image description" width="800" height="354"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3um5ob1hu7jitgnqi62i.png" alt="Image description" width="800" height="237"&gt;
&lt;/li&gt;
&lt;li&gt; When deployment is complete, select Get prediction URL to view the REST endpoint for your knowledge base and note that the sample request includes parameters for:

&lt;ul&gt;
&lt;li&gt;projectName: The name of your project (which should be LearnFAQ)&lt;/li&gt;
&lt;li&gt;deploymentName: The name of your deployment (which should be production)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhqbliatxu0tllnmwiks.png" alt="Image description" width="800" height="401"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wzrzucrw4yyfrsvxnsj.png" alt="Image description" width="800" height="764"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Close the prediction URL dialog box.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Prepare to develop an app in VS Code
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open VS Code and enter the following command to clone the GitHub repo for this exercise:

&lt;code&gt;git clone https://github.com/microsoftlearning/mslearn-ai-language mslearn-ai-language&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hrhwtvgptcw5dykhxfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hrhwtvgptcw5dykhxfs.png" alt="Image description" width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, navigate to the folder containing the application code files:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;cd mslearn-ai-language/Labfiles/02-qna&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt0px1tmzyxh1h8lcwkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt0px1tmzyxh1h8lcwkw.png" alt="Image description" width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Configure your application
&lt;/h1&gt;

&lt;p&gt;Applications for both C# and Python have been provided, and both apps feature the same functionality. We will be using C# for this project. First, you’ll complete some key parts of the application to enable it to use your Azure AI Language resource.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run the command cd C-Sharp/qna-app. This folder contains the language-specific files for an app into which you’re going to integrate Azure AI Language question answering functionality. Then install the Azure AI Language question answering SDK package by running the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;dotnet add package Azure.AI.Language.QuestionAnswering&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a3cp6opjaa26kizat7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a3cp6opjaa26kizat7h.png" alt="Image description" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open appsettings.json file.&lt;/li&gt;
&lt;li&gt;In the code file, update the configuration values it contains to reflect the endpoint and an authentication key for the Azure Language resource you created (available on the Keys and Endpoint page for your Azure AI Language resource in the Azure portal). The project name and deployment name for your deployed knowledge base should also be in this file.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxidcbw0wbcqxgewqytq.png" alt="Image description" width="800" height="354"&gt;
&lt;/li&gt;
&lt;li&gt; After you’ve replaced the placeholders, within the code editor, use the CTRL+S command or Right-click &amp;gt; Save to save your changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to the application
&lt;/h1&gt;

&lt;p&gt;Now you’re ready to add the code necessary to import the required SDK libraries, establish an authenticated connection to your deployed project, and submit questions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note that the qna-app folder contains a code file for the client application: Program.cs&lt;/li&gt;
&lt;li&gt;Open the code file and at the top, under the existing namespace references, find the comment Import namespaces. Then, under this comment, add the following language-specific code to import the namespaces you will need to use the question answering SDK:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // import namespaces
 using Azure;
 using Azure.AI.Language.QuestionAnswering;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswpfn1t35po6kmxulmni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswpfn1t35po6kmxulmni.png" alt="Image description" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Main function, note that code to load the Azure AI Language service endpoint and key from the configuration file has already been provided. Then find the comment Create client using endpoint and key, and add the following code to create a client for the question answering API:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create client using endpoint and key
AzureKeyCredential credentials = new AzureKeyCredential(aiSvcKey);
Uri endpoint = new Uri(aiSvcEndpoint);
QuestionAnsweringClient aiClient = new QuestionAnsweringClient(endpoint, credentials);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpni3nglzzjbfnrr0c5xv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpni3nglzzjbfnrr0c5xv.png" alt="Image description" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Main function, find the comment Submit a question and display the answer, and add the following code to repeatedly read questions from the command line, submit them to the service, and display details of the answers:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Submit a question and display the answer
 string user_question = "";
 while (true)
     {
         Console.Write("Question: ");
         user_question = Console.ReadLine();
         if (user_question.ToLower() == "quit")
             break;
         QuestionAnsweringProject project = new QuestionAnsweringProject(projectName, deploymentName);
         Response&amp;lt;AnswersResult&amp;gt; response = aiClient.GetAnswers(user_question, project);
         foreach (KnowledgeBaseAnswer answer in response.Value.Answers)
         {
             Console.WriteLine(answer.Answer);
             Console.WriteLine($"Confidence: {answer.Confidence:P2}");
             Console.WriteLine($"Source: {answer.Source}");
             Console.WriteLine();
         }
     }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqy6pzk13kpor649xk97.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqy6pzk13kpor649xk97.png" alt="Image description" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Save your changes, then enter the following command to run the program:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;dotnet run&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When prompted, enter a question to be submitted to your question answering project; for example What is a learning path?.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the answer that is returned.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gpt4cre7c1g47l6mjhn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gpt4cre7c1g47l6mjhn.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ask more questions. When you’re done, enter quit.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ve just transformed static FAQ documents into an intelligent, AI-powered question-answering system with Azure AI Language. By leveraging natural language understanding, your knowledge base can now deliver instant, accurate responses—freeing up time for more complex support tasks and improving user experiences.&lt;/p&gt;

&lt;p&gt;But don’t stop here! Experiment with adding more data sources, fine-tuning answers, or integrating your knowledge base into a chatbot for seamless customer interactions. The future of automated support starts with tools like these, and you’re already ahead of the curve.&lt;/p&gt;

&lt;p&gt;Ready to take the next step? Dive deeper into Azure AI documentation or try connecting your Q&amp;amp;A system to Microsoft Bot Framework. Happy building!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for staying till the end&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vscode</category>
      <category>programming</category>
      <category>csharp</category>
    </item>
    <item>
      <title>Build Your First Text Analytics App with Azure AI in Under 30 Minutes</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Sat, 05 Jul 2025 13:37:05 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/build-your-first-text-analytics-app-with-azure-ai-in-under-30-minutes-4dm6</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/build-your-first-text-analytics-app-with-azure-ai-in-under-30-minutes-4dm6</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Azure Language supports analysis of text, including language detection, sentiment analysis, key phrase extraction, and entity recognition.&lt;/p&gt;

&lt;h1&gt;
  
  
  Requirements
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Microsoft Azure subscription&lt;/li&gt;
&lt;li&gt;VS code&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Provision an Azure AI Language resource
&lt;/h1&gt;

&lt;p&gt;If you don’t already have one in your subscription, you’ll need to provision an Azure AI Language service resource in your Azure subscription.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to your Azure portal and click create a resource
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzrjigyhutczkgns5ayf.png" alt="Image description" width="800" height="207"&gt;
&lt;/li&gt;
&lt;li&gt;Search for Language service and select Create under Language service.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpv6k0zjjnwpbop0xuzxz.png" alt="Image description" width="800" height="324"&gt;
&lt;/li&gt;
&lt;li&gt;Select Continue to create your resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmclveq6jfcj8wb2dhxjq.png" alt="Image description" width="800" height="617"&gt;
&lt;/li&gt;
&lt;li&gt;Provision your resource using the following settings:

&lt;ul&gt;
&lt;li&gt;Subscription: Your Azure subscription.&lt;/li&gt;
&lt;li&gt;Resource group: Choose or create a resource group.&lt;/li&gt;
&lt;li&gt;Region: Choose any available region&lt;/li&gt;
&lt;li&gt;Name: Enter a unique name.&lt;/li&gt;
&lt;li&gt;Pricing tier: Select F0 (free), or S (standard) if F is not available.&lt;/li&gt;
&lt;li&gt;Responsible AI Notice: Agree.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt51i3tlo7fboac4i2to.png" alt="Image description" width="800" height="705"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Select Review+create
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99qla6c78acqng8x57qv.png" alt="Image description" width="800" height="546"&gt;
&lt;/li&gt;

&lt;li&gt;Wait for validation to complete and click create.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2egxzwf78h7wqgdaebyh.png" alt="Image description" width="800" height="897"&gt;
&lt;/li&gt;

&lt;li&gt;Wait for deployment to complete, and then go to the deployed resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6shek548r859lqtk1a5e.png" alt="Image description" width="800" height="318"&gt;
&lt;/li&gt;

&lt;li&gt;View the Keys and Endpoint page in the Resource Management section. You will need the information on this page later in the exercise.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9mw1qgqfx7ep4nulnsy.png" alt="Image description" width="800" height="536"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Clone the repository for this course
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open VS Code and log in to your Azure subscription by running the command:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;az login&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow the prompts to sign in to your Azure account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the following command to clone the GitHub repo for this exercise:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;git clone https://github.com/microsoftlearning/mslearn-ai-language mslearn-ai-language&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgh0g8ipwgakqc4ixacv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgh0g8ipwgakqc4ixacv.png" alt="Image description" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the repo has been cloned, navigate to the folder containing the application code files:

&lt;code&gt;cd mslearn-ai-language/Labfiles/01-analyze-text&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7romqfou0z8q8sxd4npz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7romqfou0z8q8sxd4npz.png" alt="Image description" width="800" height="56"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Configure your application
&lt;/h1&gt;

&lt;p&gt;Applications for both C# and Python have been provided, as well as some sample review text files you’ll use to test the analysis. Both apps feature the same functionality. We will be using C# for this project. First, you’ll complete some key parts of the application to enable it to use your Azure AI Language resource.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run the command cd C-Sharp/text-analysis. This folder contains the language-specific files for an app into which you’re going to integrate Azure AI Language text analytics functionality.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyalg8hbpx23vl45bcz2p.png" alt="Image description" width="800" height="74"&gt;
&lt;/li&gt;
&lt;li&gt;Install the Azure AI Language Text Analytics SDK package by running the appropriate command for your language preference:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;For C#: 
dotnet add package Azure.AI.TextAnalytics --version 5.3.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpde0b5e4e1w7rqpnx2k9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpde0b5e4e1w7rqpnx2k9.png" alt="Image description" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edit the appsettings.json file by expanding the text-analysis folder.&lt;/li&gt;
&lt;li&gt;Update the configuration values to include the endpoint and a key from the Azure Language resource you created (available on the Keys and Endpoint page for your Azure AI Language resource in the Azure portal)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxeafalbs5lwrgdmfuo1z.png" alt="Image description" width="800" height="333"&gt;
&lt;/li&gt;
&lt;li&gt;After you’ve replaced the placeholders, within the code editor, use the CTRL+S command or Right-click &amp;gt; Save to save your changes&lt;/li&gt;
&lt;li&gt;Open the Program.cs file and, at the top, under the existing namespace references, find the comment Import namespaces. Then, under this comment, add the following code to import the namespaces you will need to use the Text Analytics SDK:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // import namespaces
 using Azure;
 using Azure.AI.TextAnalytics;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4btwk7znnnemrx2j00p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4btwk7znnnemrx2j00p.png" alt="Image description" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;
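&lt;p&gt;For reference, the provided code in the Main function reads those values with the .NET configuration APIs. The block below is a minimal sketch of what that loading code might look like; the setting names AIServicesEndpoint and AIServicesKey are assumptions, so match them to the keys that actually appear in your appsettings.json file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Sketch: load the endpoint and key from appsettings.json
 // (setting names are assumptions; use the keys in your own appsettings.json)
 using Microsoft.Extensions.Configuration;

 IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
 IConfigurationRoot configuration = builder.Build();
 string aiSvcEndpoint = configuration["AIServicesEndpoint"];
 string aiSvcKey = configuration["AIServicesKey"];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;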

&lt;ul&gt;
&lt;li&gt;In the Main function, note that code to load the Azure AI Language service endpoint and key from the configuration file has already been provided. Then find the comment Create client using endpoint and key, and add the following code to create a client for the Text Analysis API:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Create client using endpoint and key
 AzureKeyCredential credentials = new AzureKeyCredential(aiSvcKey);
 Uri endpoint = new Uri(aiSvcEndpoint);
 TextAnalyticsClient aiClient = new TextAnalyticsClient(endpoint, credentials);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2j233u5orsupeqdwekv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2j233u5orsupeqdwekv.png" alt="Image description" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Save your changes, then enter the following command to run the program:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;dotnet run&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Observe the output; the code should run without error, displaying the contents of each review text file in the reviews folder. The application successfully creates a client for the Text Analytics API but doesn’t make use of it yet. We’ll fix that in the next section.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kt0p2qrcor3xxtt98y4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kt0p2qrcor3xxtt98y4.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to detect language
&lt;/h1&gt;

&lt;p&gt;Now that you have created a client for the API, let’s use it to detect the language in which each review is written.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With Program.cs still open, find the comment Get language in the Main function. Then, under this comment, add the code necessary to detect the language in each review document:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Get language
 DetectedLanguage detectedLanguage = aiClient.DetectLanguage(text);
 Console.WriteLine($"\nLanguage: {detectedLanguage.Name}");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb3hhcjfdv7czxbms4be.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsb3hhcjfdv7czxbms4be.png" alt="Image description" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes and re-run the program.&lt;/li&gt;
&lt;li&gt;Observe the output, noting that this time the language for each review is identified.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5bg26ly1pulc48vwp35.png" alt="Image description" width="800" height="606"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to evaluate sentiment
&lt;/h1&gt;

&lt;p&gt;Sentiment analysis is a technique that classifies text as positive or negative (or possibly neutral or mixed). It’s commonly used to analyze social media posts, product reviews, and other items where the sentiment of the text may provide useful insights.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Still in Program.cs, find the comment Get sentiment. Then, under this comment, add the code necessary to detect the sentiment of each review document:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Get sentiment
 DocumentSentiment sentimentAnalysis = aiClient.AnalyzeSentiment(text);
 Console.WriteLine($"\nSentiment: {sentimentAnalysis.Sentiment}");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5pr38z1xj05ln1k9g2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5pr38z1xj05ln1k9g2a.png" alt="Image description" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes and re-run the program.&lt;/li&gt;
&lt;li&gt;Observe the output, noting that the sentiment of the reviews is detected.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jzadxe5oodvi7bbck6f.png" alt="Image description" width="800" height="606"&gt;
&lt;/li&gt;
&lt;/ul&gt;
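<p></p>

&lt;p&gt;The code above prints only the overall document-level sentiment. The DocumentSentiment result returned by AnalyzeSentiment also exposes confidence scores and per-sentence sentiment, which can be useful when a review mixes praise and complaints. Here’s a minimal sketch of how you might surface that extra detail (an optional extension, not part of the lab steps):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Optional: show per-sentence sentiment and confidence scores
 foreach (SentenceSentiment sentence in sentimentAnalysis.Sentences)
 {
     Console.WriteLine($"\t\"{sentence.Text}\"");
     Console.WriteLine($"\t  {sentence.Sentiment} " +
         $"(positive {sentence.ConfidenceScores.Positive:F2}, " +
         $"neutral {sentence.ConfidenceScores.Neutral:F2}, " +
         $"negative {sentence.ConfidenceScores.Negative:F2})");
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;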

&lt;h1&gt;
  
  
  Add code to identify key phrases
&lt;/h1&gt;

&lt;p&gt;It can be useful to identify key phrases in a body of text to help determine the main topics that it discusses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Still in Program.cs, find the comment Get key phrases. Then, under this comment, add the code necessary to detect the key phrases of each review document:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Get key phrases
 KeyPhraseCollection phrases = aiClient.ExtractKeyPhrases(text);
 if (phrases.Count &amp;gt; 0)
 {
     Console.WriteLine("\nKey Phrases:");
     foreach(string phrase in phrases)
     {
         Console.WriteLine($"\t{phrase}");
     }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famvhliw52o9h2l0cnl0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famvhliw52o9h2l0cnl0v.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes and re-run the program.&lt;/li&gt;
&lt;li&gt;Observe the output, noting that each document contains key phrases that give some insights into what the review is about.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vcdf8zng6kfvfmz6cu7.png" alt="Image description" width="800" height="602"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Add code to extract entities
&lt;/h1&gt;

&lt;p&gt;Often, documents or other bodies of text mention people, places, time periods, or other entities. The Text Analytics API can detect multiple categories (and subcategories) of entity in your text.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Still in Program.cs, find the comment Get entities. Then, under this comment, add the code necessary to identify entities that are mentioned in each review:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Get entities
 CategorizedEntityCollection entities = aiClient.RecognizeEntities(text);
 if (entities.Count &amp;gt; 0)
 {
     Console.WriteLine("\nEntities:");
     foreach(CategorizedEntity entity in entities)
     {
         Console.WriteLine($"\t{entity.Text} ({entity.Category})");
     }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xf8s14evzocrvf2y897.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xf8s14evzocrvf2y897.png" alt="Image description" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes and re-run the program.&lt;/li&gt;
&lt;li&gt;Observe the output, noting the entities that have been detected in the text.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8v6u9mw27f5mhs24oknq.png" alt="Image description" width="800" height="927"&gt;
&lt;/li&gt;
&lt;/ul&gt;
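<p></p>

&lt;p&gt;Each CategorizedEntity also carries a confidence score and, for some categories, a subcategory (for example, a DateTime entity may have a Date subcategory). If you only want high-confidence results, you could filter the output; a hedged sketch follows (the 0.75 threshold is purely illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Optional: include subcategory and confidence, skipping low-confidence entities
 foreach (CategorizedEntity entity in entities)
 {
     if (entity.ConfidenceScore &amp;gt;= 0.75)   // illustrative threshold
     {
         string subCategory = string.IsNullOrEmpty(entity.SubCategory) ? "" : $" / {entity.SubCategory}";
         Console.WriteLine($"\t{entity.Text} ({entity.Category}{subCategory}, {entity.ConfidenceScore:F2})");
     }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;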

&lt;h1&gt;
  
  
  Add code to extract linked entities
&lt;/h1&gt;

&lt;p&gt;In addition to categorized entities, the Text Analytics API can detect entities for which there are known links to data sources, such as Wikipedia.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Still in Program.cs, find the comment Get linked entities. Then, under this comment, add the code necessary to identify linked entities that are mentioned in each review:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; // Get linked entities
 LinkedEntityCollection linkedEntities = aiClient.RecognizeLinkedEntities(text);
 if (linkedEntities.Count &amp;gt; 0)
 {
     Console.WriteLine("\nLinks:");
     foreach(LinkedEntity linkedEntity in linkedEntities)
     {
         Console.WriteLine($"\t{linkedEntity.Name} ({linkedEntity.Url})");
     }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonblzx5kwjjqmpnrwzal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonblzx5kwjjqmpnrwzal.png" alt="Image description" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save your changes and re-run the program.&lt;/li&gt;
&lt;li&gt;Observe the output, noting the linked entities that are identified.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqpelyksfclx4og7y4gm.png" alt="Image description" width="800" height="754"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Azure AI Language opens up a world of possibilities for analyzing text—whether you're assessing customer sentiment, extracting key insights, or identifying important entities. With just a few lines of code, you can integrate powerful NLP capabilities into your applications and unlock deeper understanding from unstructured data.&lt;/p&gt;

&lt;p&gt;Now that you've seen how easy it is to get started, why not experiment further? Try analyzing your own datasets, fine-tuning results, or even combining these features with other Azure AI services to build even smarter solutions.&lt;/p&gt;

&lt;p&gt;The future of intelligent applications starts here—happy coding!&lt;br&gt;
credits: &lt;a href="https://microsoftlearning.github.io/mslearn-ai-language/Instructions/Labs/01-analyze-text.html" rel="noopener noreferrer"&gt;Analyze text with Azure AI Language&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks for staying till the end&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>vscode</category>
      <category>coding</category>
    </item>
    <item>
      <title>From Local to Cloud: Building &amp; Deploying C# Azure Functions in VS Code</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Sun, 08 Jun 2025 16:07:55 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/from-local-to-cloud-building-deploying-c-azure-functions-in-vs-code-3gfh</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/from-local-to-cloud-building-deploying-c-azure-functions-in-vs-code-3gfh</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Azure Functions let you run event-driven code without managing infrastructure—perfect for APIs, file processing, and automation. In this step-by-step guide, you’ll learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Develop locally in C# with VS Code&lt;/li&gt;
&lt;li&gt;Test thoroughly using the Azure Functions Core Tools emulator&lt;/li&gt;
&lt;li&gt;Deploy seamlessly to the cloud with built-in CI/CD&lt;/li&gt;
&lt;li&gt;Validate execution in both local and Azure environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether you’re building microservices or backend APIs, this tutorial equips you with a production-ready workflow using .NET 8’s isolated process model.&lt;/p&gt;

&lt;h1&gt;
  
  
  Requirements
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;An Azure subscription. If you don't already have one, you can &lt;a href="https://azure.microsoft.com/" rel="noopener noreferrer"&gt;sign up for one.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;Visual Studio Code&lt;/a&gt; on one of the &lt;a href="https://code.visualstudio.com/docs/supporting/requirements#_platforms" rel="noopener noreferrer"&gt;supported platforms.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dotnet.microsoft.com/en-us/download/dotnet/8.0" rel="noopener noreferrer"&gt;.NET 8&lt;/a&gt; is the target framework.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csdevkit" rel="noopener noreferrer"&gt;C# Dev Kit&lt;/a&gt; for Visual Studio Code.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions" rel="noopener noreferrer"&gt;Azure Functions extension&lt;/a&gt; for Visual Studio Code.&lt;/li&gt;
&lt;li&gt;Azure Functions Core Tools version 4.x. One way to install Core Tools is shown after this list; visit &lt;a href="https://github.com/Azure/azure-functions-core-tools?tab=readme-ov-file#installing" rel="noopener noreferrer"&gt;Azure Functions Core Tools&lt;/a&gt; on GitHub for installation instructions for your platform.&lt;/li&gt;
&lt;/ul&gt;
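<p></p>

&lt;p&gt;For example, on a machine with Node.js installed, Core Tools version 4.x can be installed with npm; this is just one of several supported options (see the GitHub link above for platform-specific installers):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g azure-functions-core-tools@4 --unsafe-perm true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;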

&lt;h1&gt;
  
  
  Setting Up Your Local Project
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; Create a foundation for your serverless function before deploying to Azure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Visual Studio Code, press F1 to open the command palette and search for and run the command Azure Functions: Create New Project....
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphcbjtz43getbmmgplb4.png" alt="Image description" width="800" height="172"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8s88exjrtj3n1fpo9nd.png" alt="Image description" width="800" height="214"&gt;
&lt;/li&gt;
&lt;li&gt;Select the directory location for your project workspace and choose Select. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuaf4ctnhdmyarym38et.png" alt="Image description" width="800" height="121"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzk73civld8o2586prw1n.png" alt="Image description" width="800" height="720"&gt;
&lt;/li&gt;
&lt;li&gt;Provide the following information at the prompts:

&lt;ul&gt;
&lt;li&gt;Select the folder that will contain your function project  Select Browse... to select a folder for your app.&lt;/li&gt;
&lt;li&gt;Select a language   Select C#.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fucl3ix870qv4krv9ci.png" alt="Image description" width="800" height="333"&gt;
&lt;/li&gt;
&lt;li&gt;Select a .NET runtime   Select .NET 8.0 Isolated
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyxzyp34z7s5p76qbed7.png" alt="Image description" width="800" height="264"&gt;
&lt;/li&gt;
&lt;li&gt;Select a template for your project’s first function     Select HTTP trigger. (Depending on your VS Code settings, you might need to use the Change template filter option to see the full list of templates.)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdb7sphal9lwyhtipthrc.png" alt="Image description" width="800" height="465"&gt;
&lt;/li&gt;
&lt;li&gt;Provide a function name     Enter HttpExample.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcb3zq0xj7qir97udtah5.png" alt="Image description" width="800" height="148"&gt;
&lt;/li&gt;
&lt;li&gt;Provide a namespace     Enter My.Function.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnqukwo5mca2u2oiyffk.png" alt="Image description" width="800" height="143"&gt;
&lt;/li&gt;
&lt;li&gt;Authorization level     Select Anonymous, which enables anyone to call your function endpoint.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1rke9gogxwa5f4uy5f7.png" alt="Image description" width="800" height="183"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m13hdwwswoshzwaejd9.png" alt="Image description" width="800" height="188"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. A sketch of what the generated HttpExample function typically looks like follows this list.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ni847shnciiyenbkk99.png" alt="Image description" width="800" height="342"&gt;
&lt;/li&gt;

&lt;/ul&gt;
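<p></p>

&lt;p&gt;For orientation, an HTTP-triggered function in the .NET isolated worker model looks roughly like the sketch below. The exact template contents vary by Core Tools version, so treat this as an approximation rather than the file VS Code generates verbatim:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch of an HTTP-triggered function in the .NET isolated worker model
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

namespace My.Function
{
    public class HttpExample
    {
        private readonly ILogger _logger;

        public HttpExample(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger("HttpExample");
        }

        [Function("HttpExample")]
        public HttpResponseData Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req)
        {
            _logger.LogInformation("C# HTTP trigger function processed a request.");

            // Build a simple plain-text response
            HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
            response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
            response.WriteString("Welcome to Azure Functions!");
            return response;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;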

&lt;h1&gt;
  
  
  Local Testing
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Make sure the terminal is open in Visual Studio Code. You can open the terminal by selecting Terminal and then New Terminal in the menu bar.&lt;/li&gt;
&lt;li&gt;Switch to the Run and Debug view and click Start Debugging.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56gq3kvbhz3lu7im7a12.png" alt="Image description" width="800" height="610"&gt;
&lt;/li&gt;
&lt;li&gt;Output from Core Tools is displayed in the Terminal panel, where your app starts. You can see the URL endpoint of your HTTP-triggered function running locally.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lq06xgi4jrfmk7ni3i5.png" alt="Image description" width="800" height="359"&gt;
&lt;/li&gt;
&lt;li&gt;With Core Tools running, go to the Azure: Functions area. Under Functions, expand Local Project &amp;gt; Functions. Right-click the HttpExample function and select Execute Function Now....
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x4yk2z37vc5gt9kag0f.png" alt="Image description" width="800" height="1207"&gt;
&lt;/li&gt;
&lt;li&gt;In Enter request body, type the request message body value of { "name": "Azure" }. Press Enter to send this request message to your function. When the function executes locally and returns a response, a notification is raised in Visual Studio Code. (A curl alternative for testing from the terminal is sketched after this list.)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjl81yws6vbcekhmyyum.png" alt="Image description" width="800" height="101"&gt;
&lt;/li&gt;
&lt;li&gt;Select the notification bell icon to view the notification. Information about the function execution is shown in the Terminal panel.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj18yh0i8g3qdy4mhqht.png" alt="Image description" width="800" height="196"&gt;
&lt;/li&gt;
&lt;li&gt;Press Ctrl+C to stop Core Tools and disconnect the debugger. After verifying that the function runs correctly on your local computer, it’s time to use Visual Studio Code to publish the project directly to Azure.&lt;/li&gt;
&lt;/ul&gt;
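<p></p>

&lt;p&gt;If you’d rather test from a terminal instead of the Execute Function Now command, you can call the locally running endpoint directly. Assuming Core Tools is listening on its default port 7071 and the default api route prefix, a request might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST http://localhost:7071/api/HttpExample -H "Content-Type: application/json" -d "{ \"name\": \"Azure\" }"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;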

&lt;h1&gt;
  
  
  Deploy and execute the function in Azure
&lt;/h1&gt;

&lt;p&gt;In this section you create an Azure Function App resource and deploy the function to the resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sign in to Azure
&lt;/h3&gt;

&lt;p&gt;Before you can publish your app, you must sign in to Azure. If you’re already signed in, skip to the next section.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you aren't already signed in, choose the Azure icon in the Activity bar, then in the Azure: Functions area, choose Sign in to Azure....
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb8bhhua1nbk2g0ial17.png" alt="Image description" width="390" height="215"&gt;
&lt;/li&gt;
&lt;li&gt;When prompted in the browser, choose your Azure account and sign in using your Azure account credentials.&lt;/li&gt;
&lt;li&gt;After successfully signing in, you can close the new browser window. The subscriptions that belong to your Azure account are displayed in the Side bar.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create resources in Azure
&lt;/h3&gt;

&lt;p&gt;In this section, you create the Azure resources you need to deploy your local function app.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose the Azure icon in the Activity bar, then in the Resources area select the Create resource... button.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide the following information at the prompts:&lt;br&gt;
Select a resource to create     Select Create Function App in Azure...&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj30mmw320rmu7jdjo69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foj30mmw320rmu7jdjo69.png" alt="Image description" width="800" height="190"&gt;&lt;/a&gt;&lt;br&gt;
Select subscription     Select the subscription to use. You won't see this if you only have one subscription.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrxvcmcneqsifyiersc9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrxvcmcneqsifyiersc9.png" alt="Image description" width="800" height="101"&gt;&lt;/a&gt;&lt;br&gt;
Enter a globally unique name for the function app   Type a name that is valid in a URL path, for example myfunctionappola. The name you type is validated to make sure that it's unique.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzr4a9143duonndv2owon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzr4a9143duonndv2owon.png" alt="Image description" width="800" height="92"&gt;&lt;/a&gt;&lt;br&gt;
Select a location for new resources     For better performance, select a region near you.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnoqr9yjz66p44eimb9tw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnoqr9yjz66p44eimb9tw.png" alt="Image description" width="800" height="319"&gt;&lt;/a&gt;&lt;br&gt;
Select a runtime stack  Select .NET 8.0 Isolated.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fze5nqfv2sv4umy7eq3yq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fze5nqfv2sv4umy7eq3yq.png" alt="Image description" width="800" height="119"&gt;&lt;/a&gt;&lt;br&gt;
Select resource authentication type     Select Secrets&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnkncrrkk84fw6264hvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnkncrrkk84fw6264hvm.png" alt="Image description" width="800" height="136"&gt;&lt;/a&gt;&lt;br&gt;
The extension shows the status of individual resources as they're being created in the AZURE area of the terminal window.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6jo08q6hnbwo7oni8cu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6jo08q6hnbwo7oni8cu.png" alt="Image description" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;When completed, the following Azure resources are created in your subscription, using names based on your function app name:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A resource group, which is a logical container for related resources.&lt;/li&gt;
&lt;li&gt;A standard Azure Storage account, which maintains state and other information about your projects.&lt;/li&gt;
&lt;li&gt;A Flex consumption plan, which defines the underlying host for your serverless function app.&lt;/li&gt;
&lt;li&gt;A function app, which provides the environment for executing your function code. A function app lets you group functions as a logical unit for easier management, deployment, and sharing of resources within the same hosting plan.&lt;/li&gt;
&lt;li&gt;An Application Insights instance connected to the function app, which tracks usage of your serverless function.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Deploy the project to Azure
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;In the command palette, search for and run the command Azure Functions: Deploy to Function App.... (A Core Tools command-line alternative is sketched after this list.)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6h5eh1eexpfm3xpktut.png" alt="Image description" width="800" height="277"&gt;
&lt;/li&gt;
&lt;li&gt;Select the subscription you used when creating the resources.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9prkqgi8nrpb7pl0uyeb.png" alt="Image description" width="800" height="105"&gt;
&lt;/li&gt;
&lt;li&gt;Select the function app you created. When prompted about overwriting previous deployments, select Deploy to deploy your function code to the new function app resource.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jhxixcu5zq62ywdlchl.png" alt="Image description" width="800" height="89"&gt;
&lt;/li&gt;
&lt;li&gt;After deployment completes, select View Output to view the details of the deployment results. If you miss the notification, select the notification bell icon in the lower right corner to see it again.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhc8y1vdnyhcqerqyf8s6.png" alt="Image description" width="554" height="676"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqbx9n2ix0z6o39bue73x.png" alt="Image description" width="800" height="112"&gt;
&lt;/li&gt;
&lt;/ul&gt;
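<p></p>

&lt;p&gt;As an alternative to the VS Code command, the same project can be published from a terminal with Azure Functions Core Tools. A minimal sketch, using the example app name from earlier (replace it with your own function app name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func azure functionapp publish myfunctionappola
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;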

&lt;h1&gt;
  
  
  Run the function in Azure
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Back in the Resources area in the side bar, expand your subscription, your new function app, and Functions. Right-click the HttpExample function and choose Execute Function Now....
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1kpnnbyf58pqcfi867l.png" alt="Image description" width="800" height="550"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydrtultvcm0m7aun9avy.png" alt="Image description" width="800" height="96"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0469nlk6hpr6eyg0olpb.png" alt="Image description" width="800" height="88"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspllpii1jiyphyxwgkhs.png" alt="Image description" width="800" height="88"&gt;
&lt;/li&gt;
&lt;li&gt;In Enter request body you see the request message body value of { "name": "Azure" }. Press Enter to send this request message to your function.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9ahimgu82j9cc1mquep.png" alt="Image description" width="800" height="66"&gt;
&lt;/li&gt;
&lt;li&gt;When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code. Select the notification bell icon to view the notification.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfm204delmr2pocndxgl.png" alt="Image description" width="800" height="259"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Thanks for staying till the end&lt;/em&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cicd</category>
      <category>vscode</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>How to Use Deployment Slots in Azure App Service for Zero-Downtime Updates</title>
      <dc:creator>Olalekan Oladiran</dc:creator>
      <pubDate>Fri, 06 Jun 2025 11:21:48 +0000</pubDate>
      <link>https://forem.com/olalekan_oladiran_d74b7a6/how-to-use-deployment-slots-in-azure-app-service-for-zero-downtime-updates-17i1</link>
      <guid>https://forem.com/olalekan_oladiran_d74b7a6/how-to-use-deployment-slots-in-azure-app-service-for-zero-downtime-updates-17i1</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Deployment slots in Azure App Service enable zero-downtime updates and risk-free testing by providing isolated staging environments that mirror your production app. In this guide, you’ll learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create deployment slots (and upgrade your App Service plan when needed)&lt;/li&gt;
&lt;li&gt;Deploy changes to a staging slot for validation&lt;/li&gt;
&lt;li&gt;Swap slots seamlessly to promote tested code to production&lt;/li&gt;
&lt;li&gt;Verify the swap while maintaining rollback capability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Perfect for CI/CD pipelines or manual deployments, this approach minimizes downtime and ensures smoother releases. Let’s dive in!&lt;/p&gt;

&lt;h1&gt;
  
  
  Requirements
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;You must have completed &lt;a href="https://dev.to/olalekan_oladiran_d74b7a6/getting-started-with-azure-app-service-deploy-a-web-app-in-minutes-4jp3"&gt;Getting Started with Azure App Service: Deploy a Web App in Minutes&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Configure Deployment Slot
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open your Azure App Service and select Deployment slots under the Deployment section.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohfm46tbm3h6jwkp523d.png" alt="Image description" width="800" height="408"&gt;
&lt;/li&gt;
&lt;li&gt;Since deployment slots are not supported on the Shared service plan, you will be asked to upgrade to Standard or Premium.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3qvjawq1btax2sczhle.png" alt="Image description" width="800" height="319"&gt;
&lt;/li&gt;
&lt;li&gt;Click Upgrade, choose Standard S1, and click Select.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nad9gxgbq79ub4mwkey.png" alt="Image description" width="800" height="597"&gt;
&lt;/li&gt;
&lt;li&gt;Click Upgrade
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20kvi21mbal1mljvftbn.png" alt="Image description" width="800" height="254"&gt;
&lt;/li&gt;
&lt;li&gt;Go back to Deployment slots and click Add Slot.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9orwfjjcaaacim1tdg8.png" alt="Image description" width="800" height="263"&gt;
&lt;/li&gt;
&lt;li&gt;Choose a name for your slot, which will form part of the slot’s unique URL. Then click Add.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59gzwdq0wrsetk6xpgbj.png" alt="Image description" width="800" height="1087"&gt;
&lt;/li&gt;
&lt;li&gt;Confirm that the slot is created
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzasl51xpiw0r875nw6wb.png" alt="Image description" width="800" height="232"&gt;
&lt;/li&gt;
&lt;li&gt;Open the slot by clicking on it. Click the Default Domain of the staging slot to check if it is running.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r8hf9ujpaamjzq5o58j.png" alt="Image description" width="800" height="342"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rzpng2piyydwe1fy7c9.png" alt="Image description" width="800" height="630"&gt;
&lt;/li&gt;
&lt;li&gt;Next, you’ll push your development code to this staging slot and then swap it with production after testing.&lt;/li&gt;
&lt;li&gt;Head over to VS Code and make some changes to the index.html file. I added version 2.0 to distinguish it from the production slot.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzrtw6xdw860m1p620cb.png" alt="Image description" width="800" height="302"&gt;
&lt;/li&gt;
&lt;li&gt;Next, deploy it to the slot: click View and select Command Palette. Repeat the steps used to deploy the production slot.&lt;/li&gt;
&lt;li&gt;Search for and select Deploy to Slot.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0b4be7hxwl9nz3ssj9d.png" alt="Image description" width="800" height="212"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5sgm2w7tuw750pw9c0o.png" alt="Image description" width="800" height="117"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kia328wmm9fzvu453m2.png" alt="Image description" width="800" height="109"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpa87z61jbwelxnkbiie9.png" alt="Image description" width="800" height="101"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5szwqxhfspv9q6gtyu8.png" alt="Image description" width="800" height="106"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7ewgn5buewu2o3w03ew.png" alt="Image description" width="680" height="528"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckzfrss8i0sc2ydnapet.png" alt="Image description" width="800" height="115"&gt;
&lt;/li&gt;
&lt;li&gt;After the deployment succeeds, go back to the browser and refresh the page.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7d6qmox08r24zeba66w.png" alt="Image description" width="800" height="332"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Swap deployment Slot
&lt;/h1&gt;

&lt;p&gt;This allows a seamless transition between the production and staging environments. (An Azure CLI equivalent is sketched after the steps below.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go back to your production slot and click Swap.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7udm7e4l9kd5tnhbu64.png" alt="Image description" width="800" height="225"&gt;
&lt;/li&gt;
&lt;li&gt;Leave the default settings, click Start Swap, and wait for the swap to finish.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihj1im2mryqbij4lpur2.png" alt="Image description" width="800" height="1051"&gt;
&lt;/li&gt;
&lt;li&gt;To check that the swap worked, go to your production Default Domain; it should now display the app with version 2.0.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxmjtm8sgxm5stp4ukr3.png" alt="Image description" width="800" height="334"&gt; &lt;/li&gt;
&lt;li&gt;The staging slot will now display the app without version 2.0.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta409lhvaw7rc40ugoe3.png" alt="Image description" width="800" height="381"&gt;
&lt;/li&gt;
&lt;/ul&gt;
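<p></p>

&lt;p&gt;If you later want to automate this as part of a CI/CD pipeline, the same swap can be performed with the Azure CLI. A minimal sketch, assuming your slot is named staging (substitute your own resource group, app name, and slot name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az webapp deployment slot swap --resource-group MyResourceGroup --name MyWebApp --slot staging --target-slot production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;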

&lt;p&gt;&lt;em&gt;Thanks for staying till the end&lt;/em&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>kubernetes</category>
      <category>programming</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
