<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Uroš Miletić</title>
    <description>The latest articles on Forem by Uroš Miletić (@uveta).</description>
    <link>https://forem.com/uveta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F263739%2Fbd1338bd-227e-42ab-9ec5-ef2fede08f61.jpg</url>
      <title>Forem: Uroš Miletić</title>
      <link>https://forem.com/uveta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/uveta"/>
    <language>en</language>
    <item>
      <title>Fix Semantic Kernel logging when using Dependency Injection</title>
      <dc:creator>Uroš Miletić</dc:creator>
      <pubDate>Mon, 19 May 2025 14:35:55 +0000</pubDate>
      <link>https://forem.com/uveta/fix-semantic-kernel-logging-when-using-dependency-injection-4df2</link>
      <guid>https://forem.com/uveta/fix-semantic-kernel-logging-when-using-dependency-injection-4df2</guid>
<description>&lt;p&gt;Integrating Semantic Kernel into ASP.NET applications is straightforward, thanks to built-in support for Dependency Injection (DI). This makes well-known features such as configuration, logging, and hosting trivial to use for AI agent development. However, while debugging an agent issue, I discovered that all logging information from the Semantic Kernel was missing! This article explains the issue's root cause and how to fix it.&lt;/p&gt;

&lt;h2&gt;The Original Setup&lt;/h2&gt;

&lt;p&gt;Here is a simplified version of the code I used to set up and use the Semantic Kernel in my ASP.NET application. Note how the &lt;code&gt;IChatCompletionService&lt;/code&gt; and &lt;code&gt;Kernel&lt;/code&gt; services are registered within the DI container.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
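&lt;p&gt;The embedded gist did not survive syndication, so here is a hedged reconstruction of the setup just described. The deployment name, endpoint, and configuration keys are illustrative placeholders, not the original values.&lt;/p&gt;

```csharp
// Illustrative sketch: register IChatCompletionService and Kernel in the DI container.
// Deployment name, endpoint, and configuration keys are hypothetical.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSingleton&lt;IChatCompletionService>(_ =>
    new AzureOpenAIChatCompletionService(
        deploymentName: "gpt-4o",
        endpoint: "https://my-resource.openai.azure.com/",
        apiKey: builder.Configuration["AzureOpenAI:ApiKey"]!));

// Kernel resolves its services (including IChatCompletionService) from the provider.
builder.Services.AddTransient(sp => new Kernel(sp));

builder.Services.AddTransient&lt;Agent>();
```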


&lt;p&gt;We use the &lt;code&gt;Agent&lt;/code&gt; class to complete a chat based on the user's prompt. All dependencies are injected into the &lt;code&gt;Agent&lt;/code&gt; constructor automatically.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
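&lt;p&gt;Again, the original gist is missing from this feed; the following is a minimal sketch of such an &lt;code&gt;Agent&lt;/code&gt;, assuming constructor-injected dependencies as described. Method and member names are hypothetical.&lt;/p&gt;

```csharp
// Minimal sketch of the Agent; method and member names are hypothetical.
public class Agent
{
    private readonly Kernel _kernel;
    private readonly ILogger&lt;Agent> _logger;

    public Agent(Kernel kernel, ILogger&lt;Agent> logger)
    {
        _kernel = kernel;
        _logger = logger;
    }

    public async Task&lt;string> CompleteChatAsync(string prompt)
    {
        _logger.LogInformation("Completing chat for prompt of length {Length}", prompt.Length);

        // Resolve the chat completion service registered in the DI container.
        var chat = _kernel.GetRequiredService&lt;IChatCompletionService>();
        var history = new ChatHistory();
        history.AddUserMessage(prompt);

        var reply = await chat.GetChatMessageContentAsync(history, kernel: _kernel);
        return reply.Content ?? string.Empty;
    }
}
```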


&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;After detecting an unrelated issue in request processing, I turned to good old console logs to find the root cause. To my surprise, I noticed no logging information was generated by the Kernel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4n8213ckfjl89wc28p2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4n8213ckfjl89wc28p2.png" alt="No Logging"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At first, I thought it was a configuration issue. But no matter how I configured the logging or what scope I set, the Kernel information was missing from the console. I even tried setting the logging level to &lt;code&gt;Debug&lt;/code&gt; or &lt;code&gt;Verbose&lt;/code&gt;, but nothing worked.&lt;/p&gt;

&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;It turned out that the issue was not in the Kernel itself, but in the way the underlying &lt;code&gt;AzureOpenAIChatCompletionService&lt;/code&gt; was created. Even though the &lt;code&gt;_kernel&lt;/code&gt; instance was used to call the &lt;code&gt;GetChatMessageContentAsync()&lt;/code&gt; method of &lt;code&gt;IChatCompletionService&lt;/code&gt;, the logging provider registered in the DI container was not used. The solution was to provide it when registering the &lt;code&gt;AzureOpenAIChatCompletionService&lt;/code&gt; instance.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
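&lt;p&gt;The original gist is not available here, so this is a hedged sketch of the fixed registration. The key change is resolving &lt;code&gt;ILoggerFactory&lt;/code&gt; from the service provider and passing it to the connector's constructor; the deployment details remain placeholders.&lt;/p&gt;

```csharp
// The fix (sketch): pass the DI-registered ILoggerFactory to the connector,
// so the service logs through the application's logging pipeline.
builder.Services.AddSingleton&lt;IChatCompletionService>(sp =>
    new AzureOpenAIChatCompletionService(
        deploymentName: "gpt-4o",
        endpoint: "https://my-resource.openai.azure.com/",
        apiKey: builder.Configuration["AzureOpenAI:ApiKey"]!,
        loggerFactory: sp.GetRequiredService&lt;ILoggerFactory>()));
```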


&lt;p&gt;And voilà! Now we can see the debug information in the console!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl601rr10vvnby1stifx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl601rr10vvnby1stifx8.png" alt="Debug Logging"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Even though the Semantic Kernel is designed to work with DI, that does not mean other parts of the .NET AI stack are. Fortunately, in this case, the solution was simple: just make sure to pass the logger factory to the &lt;code&gt;IChatCompletionService&lt;/code&gt; implementation.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>azure</category>
      <category>dotnet</category>
      <category>csharp</category>
    </item>
    <item>
      <title>Deploy and use DeepSeek R1 with Azure and .NET</title>
      <dc:creator>Uroš Miletić</dc:creator>
      <pubDate>Mon, 17 Feb 2025 07:42:37 +0000</pubDate>
      <link>https://forem.com/uveta/deploy-and-use-deepseek-r1-with-azure-and-net-1fh6</link>
      <guid>https://forem.com/uveta/deploy-and-use-deepseek-r1-with-azure-and-net-1fh6</guid>
<description>&lt;h2&gt;Table of Contents&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="#intro"&gt;Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#deploy"&gt;Deploy DeepSeek R1 on Azure AI Foundry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#dotnet"&gt;Consume from .NET&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#semantic-kernel"&gt;Consume from Semantic Kernel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#conclusion"&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a id="intro"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;DeepSeek models have taken the technological world by surprise, demonstrating that cutting-edge AI development is no longer confined to a certain valley made of silicon, but has become a global phenomenon. Although Microsoft has traditionally partnered with OpenAI, users of its technologies still have reasons to be optimistic. The Azure cloud platform recently announced support for the DeepSeek R1 model through its Azure AI Foundry service. Currently in public preview, the model can be run in serverless mode free of charge. This article will guide you through deploying the R1 model and integrating it with .NET applications.&lt;/p&gt;

&lt;p&gt;&lt;a id="deploy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Deploy DeepSeek R1 on Azure AI Foundry&lt;/h2&gt;

&lt;p&gt;Deploying the DeepSeek model on Azure is straightforward, even for those new to Azure AI Foundry (formerly Azure AI Studio).&lt;/p&gt;

&lt;p&gt;Start by creating a new hub, which serves as a container for your AI applications and models. This can be done via AI Foundry &lt;a href="https://ai.azure.com/managementCenter/allResources" rel="noopener noreferrer"&gt;Management Center&lt;/a&gt;. Note that the region you select for your hub will impact model availability. As of February 2025, the DeepSeek R1 model is available in East US, East US 2, West US, West US 3, South Central US, and North Central US regions only.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkehvs5fef5fufl384wl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkehvs5fef5fufl384wl3.png" alt="Creating Azure AI Foundry hub" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, create a new project. In the &lt;a href="https://ai.azure.com/managementCenter/allResources" rel="noopener noreferrer"&gt;Management Center&lt;/a&gt;, select the hub you created and click the "New project" button. Provide a name for your project and click "Create". Your project will be ready in a few seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy46nbxqkcw3vvkys7ere.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy46nbxqkcw3vvkys7ere.png" alt="Creating Azure AI Foundry project" width="644" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once your hub and project are ready, you can deploy the DeepSeek R1 model. Navigate to the &lt;a href="https://ai.azure.com/explore/models" rel="noopener noreferrer"&gt;Model catalog&lt;/a&gt; tab within your project, search for the "DeepSeek R1" model, and click "Deploy". Provide a region-unique name for your model and optionally apply content filters. Click "Deploy" again to start provisioning the model, which may take a few minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwkotehmy8lxlmqjwoob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwkotehmy8lxlmqjwoob.png" alt="Deploying DeepSeek R1 model" width="642" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After deployment finishes, you will find the model in the &lt;a href="https://ai.azure.com/build/deployments/model" rel="noopener noreferrer"&gt;Models + endpoints&lt;/a&gt; tab of your project. Select the deployment name to access detailed information, including the endpoint URL and API key, which are necessary for programmatic consumption.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa60396ypwfmerh1sv8my.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa60396ypwfmerh1sv8my.png" alt="DeepSeek R1 deployment details" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the chat playground available in the &lt;a href="https://ai.azure.com/playgrounds" rel="noopener noreferrer"&gt;Playgrounds&lt;/a&gt; tab of your project to ensure the deployment is functioning correctly. Make sure to select DeepSeek R1 deployment before starting the conversation. This step helps verify that the model will work seamlessly when integrated programmatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6f6z5oxgm71becaapuwj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6f6z5oxgm71becaapuwj.png" alt="Chat playground" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a id="dotnet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Consume from .NET&lt;/h2&gt;

&lt;p&gt;Models deployed via Azure AI Foundry can be accessed from any programming language that supports HTTP requests. For .NET, Azure provides an SDK through the &lt;a href="https://www.nuget.org/packages/Azure.AI.Inference" rel="noopener noreferrer"&gt;Azure AI Inference&lt;/a&gt; library. To consume the model, create a chat client using the deployment endpoint URL and API key, and then run a chat completion.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
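&lt;p&gt;The embedded gist is missing from this feed, so here is a hedged sketch of a chat completion using the &lt;code&gt;Azure.AI.Inference&lt;/code&gt; client. The endpoint, API key, and model name are placeholders you must replace with your deployment's values.&lt;/p&gt;

```csharp
// Sketch: call the deployed DeepSeek R1 model via Azure.AI.Inference.
// Endpoint, key, and model name are placeholders.
using Azure;
using Azure.AI.Inference;

var client = new ChatCompletionsClient(
    new Uri("https://YOUR-ENDPOINT.models.ai.azure.com"),
    new AzureKeyCredential("YOUR-API-KEY"));

var options = new ChatCompletionsOptions
{
    Messages = { new ChatRequestUserMessage("Why is the sky blue?") },
    Model = "DeepSeek-R1"
};

Response&lt;ChatCompletions> response = await client.CompleteAsync(options);
Console.WriteLine(response.Value.Content);
```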


&lt;p&gt;&lt;a id="semantic-kernel"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Consume from Semantic Kernel&lt;/h2&gt;

&lt;p&gt;For more complex applications built on Semantic Kernel, consuming models deployed in Azure AI Foundry is just as straightforward. Use the &lt;a href="https://www.nuget.org/packages/Microsoft.SemanticKernel.Connectors.AzureAIInference" rel="noopener noreferrer"&gt;Microsoft.SemanticKernel.Connectors.AzureAIInference&lt;/a&gt; connector library. Register the AI Inference connector with the deployment name, endpoint URL, and API key while building the kernel. Once configured, build the kernel and use the &lt;code&gt;IChatCompletionService&lt;/code&gt; service to run a chat completion.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
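&lt;p&gt;As above, the gist embed did not survive syndication; this is a minimal sketch, assuming the connector's &lt;code&gt;AddAzureAIInferenceChatCompletion&lt;/code&gt; extension with placeholder credentials.&lt;/p&gt;

```csharp
// Sketch: register the AI Inference connector and run a chat completion.
// Endpoint, key, and model id are placeholders.
var builder = Kernel.CreateBuilder();
builder.AddAzureAIInferenceChatCompletion(
    modelId: "DeepSeek-R1",
    apiKey: "YOUR-API-KEY",
    endpoint: new Uri("https://YOUR-ENDPOINT.models.ai.azure.com"));
var kernel = builder.Build();

var chat = kernel.GetRequiredService&lt;IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage("Why is the sky blue?");

var reply = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);
```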


&lt;p&gt;&lt;a id="conclusion"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Complete .NET and Semantic Kernel chat samples are available on &lt;a href="https://github.com/uveta/demo-azure-deepseek" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Make sure you add the deployment name, endpoint URL, and API key where indicated in the code to run the applications without issues.&lt;/p&gt;

&lt;p&gt;Keep in mind that the DeepSeek R1 model on Azure is still in preview and is subject to throttling and rate limiting. While it may take from a couple of seconds up to a few minutes to receive a meaningful response, the service is currently free, allowing for extensive experimentation.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>deepseek</category>
      <category>dotnet</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
