<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: El Bruno</title>
    <description>The latest articles on Forem by El Bruno (@elbruno).</description>
    <link>https://forem.com/elbruno</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F217%2F69e88d75-7d13-4f1a-bf55-211171ed6ffe.jpg</url>
      <title>Forem: El Bruno</title>
      <link>https://forem.com/elbruno</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/elbruno"/>
    <language>en</language>
    <item>
      <title>AI Agents Built My Shopping Cart (And I Just Watched) 🤖</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Mon, 07 Jul 2025 13:05:00 +0000</pubDate>
      <link>https://forem.com/elbruno/ai-agents-built-my-shopping-cart-and-i-just-watched-44kn</link>
      <guid>https://forem.com/elbruno/ai-agents-built-my-shopping-cart-and-i-just-watched-44kn</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;FYI: This was written with some AI help 😂&lt;/p&gt;

&lt;p&gt;Okay, so picture this: I’m sitting here with my morning coffee, and I decide to add a shopping cart feature to the &lt;a href="http://aka.ms/eshoplite/repo" rel="noopener noreferrer"&gt;eShopLite demo scenarios&lt;/a&gt;. But instead of cracking my knuckles and diving into code like usual… I just watched AI agents do ALL the work. 🤯&lt;/p&gt;

&lt;p&gt;I’m not kidding. In 8 minutes, I recorded the entire journey from “hey, we need a shopping cart” to a complete, production-ready e-commerce feature. And honestly? It blew my mind.&lt;/p&gt;

&lt;p&gt;TL;DR: here is the video&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup: eShopLite Needs a Shopping Cart
&lt;/h2&gt;

&lt;p&gt;So I was playing around with &lt;a href="http://aka.ms/eshoplite/repo" rel="noopener noreferrer"&gt;eShopLite&lt;/a&gt; – our sample e-commerce app that’s built with .NET Aspire, Blazor, and has this cool Semantic search feature using Azure AI Foundry models.&lt;/p&gt;

&lt;p&gt;Pretty neat stuff, but it was missing something obvious: you couldn’t actually &lt;em&gt;buy&lt;/em&gt; anything. 🛒 &lt;/p&gt;

&lt;p&gt;The old me would’ve fired up Visual Studio and started coding. But 2025 me? I decided to see what happens when you let AI agents handle the entire thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Round 1: Claude Sonnet Gets the Requirements Right
&lt;/h2&gt;

&lt;p&gt;First up was  &lt;strong&gt;Claude Sonnet 4&lt;/strong&gt;  in VS Code. I basically said “Hey, look at this codebase and write me a proper Product Requirements Document for a shopping cart feature.”&lt;/p&gt;

&lt;p&gt;What came back was… honestly incredible. This wasn’t some half-baked bullet point list. We’re talking about a &lt;a href="https://gist.github.com/elbruno/3e079eab10f552ab5ad91695fb6701de" rel="noopener noreferrer"&gt;full 12-section PRD&lt;/a&gt; with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📊 &lt;strong&gt;Executive summary with actual success metrics&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;⚡ &lt;strong&gt;Functional requirements&lt;/strong&gt;  that covered everything from cart management to checkout flows&lt;/li&gt;
&lt;li&gt;🏗 &lt;strong&gt;Technical architecture&lt;/strong&gt;  with real code examples (not pseudo-code!)&lt;/li&gt;
&lt;li&gt;📅 &lt;strong&gt;4-phase implementation plan&lt;/strong&gt;  broken down by weeks&lt;/li&gt;
&lt;li&gt;🧪 &lt;strong&gt;Testing strategy&lt;/strong&gt;  covering unit tests, integration tests, the works&lt;/li&gt;
&lt;li&gt;⚠ &lt;strong&gt;Risk assessment&lt;/strong&gt;  with actual mitigation strategies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check out some of the entity models it generated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CartItem&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;ProductId&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Product&lt;/span&gt; &lt;span class="n"&gt;Product&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;Quantity&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;decimal&lt;/span&gt; &lt;span class="n"&gt;UnitPrice&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;decimal&lt;/span&gt; &lt;span class="n"&gt;TotalPrice&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Quantity&lt;/span&gt; &lt;span class="p"&gt;*&lt;/span&gt; &lt;span class="n"&gt;UnitPrice&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Cart&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;SessionId&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;CartItem&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Items&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;DateTime&lt;/span&gt; &lt;span class="n"&gt;CreatedAt&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;DateTime&lt;/span&gt; &lt;span class="n"&gt;UpdatedAt&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;decimal&lt;/span&gt; &lt;span class="n"&gt;TotalAmount&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TotalPrice&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;TotalItems&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Quantity&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
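&lt;p&gt;As a quick illustration (a hypothetical usage sketch, not part of the PRD), here is how those computed totals roll up:&lt;/p&gt;

```csharp
using System;
using System.Linq;

// Hypothetical usage of the generated entities. The classes are repeated below
// in minimal form (arrays instead of List) so the snippet compiles standalone.
var cart = new Cart
{
    SessionId = "demo-session",
    Items =
    [
        new CartItem { ProductId = 1, Quantity = 2, UnitPrice = 9.99m },
        new CartItem { ProductId = 2, Quantity = 1, UnitPrice = 24.50m },
    ],
};

Console.WriteLine(cart.TotalItems);   // 3
Console.WriteLine(cart.TotalAmount);  // 2*9.99 + 24.50 = 44.48

public class CartItem
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
    public decimal TotalPrice => Quantity * UnitPrice;
}

public class Cart
{
    public string SessionId { get; set; } = "";
    public CartItem[] Items { get; set; } = [];
    public int TotalItems => Items.Sum(i => i.Quantity);
    public decimal TotalAmount => Items.Sum(i => i.TotalPrice);
}
```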



&lt;h2&gt;
  
  
  Round 2: GitHub Copilot Becomes Project Manager
&lt;/h2&gt;

&lt;p&gt;Next, I told  &lt;strong&gt;GitHub Copilot&lt;/strong&gt;  to take all that PRD goodness and create a proper GitHub issue in the &lt;a href="https://github.com/Azure-Samples/eShopLite" rel="noopener noreferrer"&gt;Azure-Samples/eShopLite repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And wow, it didn’t just slap together some basic “add shopping cart pls” issue. It created &lt;a href="https://github.com/Azure-Samples/eShopLite/issues/31" rel="noopener noreferrer"&gt;Issue #31&lt;/a&gt; with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🎯 Complete technical specifications&lt;/li&gt;
&lt;li&gt;📋 Implementation phases with clear deliverables&lt;/li&gt;
&lt;li&gt;🏷 Proper labels (enhancement, feature-request, shopping-cart, e-commerce)&lt;/li&gt;
&lt;li&gt;✅ Detailed acceptance criteria&lt;/li&gt;
&lt;li&gt;📚 Links to all the relevant docs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Round 3: GitHub Copilot Coding Agent Goes Full Beast Mode
&lt;/h2&gt;

&lt;p&gt;Instead of coding it myself, I assigned the issue to  &lt;strong&gt;GitHub Copilot Coding Agent&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I literally just typed:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Add Issue #31 to &lt;a class="mentioned-user" href="https://dev.to/copilot"&gt;@copilot&lt;/a&gt;”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then I sat back and watched the magic happen. This AI agent:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;🔍 &lt;strong&gt;Analyzed the entire codebase&lt;/strong&gt;  to understand how everything worked&lt;/li&gt;
&lt;li&gt;🌿 &lt;strong&gt;Created a new branch&lt;/strong&gt;  (copilot/fix-31) like a good developer&lt;/li&gt;
&lt;li&gt;🏗 &lt;strong&gt;Built multiple projects&lt;/strong&gt;  – CartEntities, service interfaces, Blazor components&lt;/li&gt;
&lt;li&gt;💻 &lt;strong&gt;Implemented EVERYTHING&lt;/strong&gt;  with proper error handling and validation&lt;/li&gt;
&lt;li&gt;📝 &lt;strong&gt;Created a pull request&lt;/strong&gt;  with detailed docs explaining all the changes&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Plot Twist: I Became the AI Whisperer 🎭
&lt;/h2&gt;

&lt;p&gt;Now, don’t get me wrong – I wasn’t just sitting there doing nothing. My role shifted to something way more interesting. In each iteration, once the GitHub Copilot Coding Agent generated some code, I stepped in to play these roles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🎯 &lt;strong&gt;Requirements&lt;/strong&gt; : Making sure the AI “got” what we actually needed. &lt;/li&gt;
&lt;li&gt;👀 &lt;strong&gt;Code Reviewer&lt;/strong&gt; : Spotting edge cases, suggesting improvements, and yes, flagging a few errors!&lt;/li&gt;
&lt;li&gt;🎨 &lt;strong&gt;UX Advocate&lt;/strong&gt; : Ensuring the user experience didn’t suck&lt;/li&gt;
&lt;li&gt;🛡 &lt;strong&gt;Quality Guardian&lt;/strong&gt; : Making sure this was actually production-ready&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The conversation with the Copilot agent was surprisingly natural. When I said “hey, the checkout validation could be better,” it immediately understood and enhanced the implementation. When I mentioned mobile responsiveness, boom – updated CSS on the spot.&lt;/p&gt;

&lt;p&gt;It felt less like programming and more like directing a really smart intern who happens to code at superhuman speed. 🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Got Built
&lt;/h2&gt;

&lt;p&gt;The final result? A complete shopping cart system that would make any e-commerce developer proud:&lt;/p&gt;

&lt;h3&gt;
  
  
  🛠 Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;.NET Aspire&lt;/strong&gt; : Microsoft’s cloud-native platform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blazor Server&lt;/strong&gt; : For those sweet real-time UI updates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session Storage&lt;/strong&gt; : Using ProtectedSessionStorage (secure guest checkout)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bootstrap + Font Awesome&lt;/strong&gt; : Because we’re not savages&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✨ Features That Actually Work
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;🛒 &lt;strong&gt;Add to Cart&lt;/strong&gt; : With quantity selection and visual feedback&lt;/li&gt;
&lt;li&gt;📝 &lt;strong&gt;Cart Management&lt;/strong&gt; : View, update, remove items, clear everything&lt;/li&gt;
&lt;li&gt;💳 &lt;strong&gt;Checkout Process&lt;/strong&gt; : Multi-step form with progress indicators&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🏆 Code Quality
&lt;/h3&gt;

&lt;p&gt;This wasn’t some weekend hackathon code. The AI generated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Comprehensive error handling with proper logging&lt;/li&gt;
&lt;li&gt;Client and server-side validation&lt;/li&gt;
&lt;li&gt;WCAG 2.1 AA accessibility compliance&lt;/li&gt;
&lt;li&gt;Performance optimizations for concurrent users&lt;/li&gt;
&lt;li&gt;Security best practices throughout&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final &lt;a href="https://github.com/Azure-Samples/eShopLite/pull/32" rel="noopener noreferrer"&gt;Pull Request #32&lt;/a&gt; was a thing of beauty:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📁 &lt;strong&gt;8 new project files&lt;/strong&gt;  with complete cart functionality&lt;/li&gt;
&lt;li&gt;🎨 &lt;strong&gt;15+ Blazor components&lt;/strong&gt;  for the UI&lt;/li&gt;
&lt;li&gt;⚙ &lt;strong&gt;Complete service layer&lt;/strong&gt;  with interfaces and implementations&lt;/li&gt;
&lt;li&gt;💄 &lt;strong&gt;Responsive styling&lt;/strong&gt;  that actually looks good&lt;/li&gt;
&lt;li&gt;📖 &lt;strong&gt;Documentation&lt;/strong&gt;  explaining everything&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Done in almost an hour by AI agents. &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjprlucltldm4uxcweewc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjprlucltldm4uxcweewc.png" alt="🤯" width="72" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Us Developers
&lt;/h2&gt;

&lt;p&gt;Okay, so this experiment kind of broke my brain a little. Here’s what I think is happening:&lt;/p&gt;

&lt;h3&gt;
  
  
  🎭 We’re Becoming AI Orchestrators
&lt;/h3&gt;

&lt;p&gt;Instead of writing every line of code, we’re becoming conductors of AI-powered development orchestras. The skill isn’t knowing every API by heart – it’s knowing which AI agent to use and how to coordinate them.&lt;/p&gt;

&lt;h3&gt;
  
  
  📋 Requirements Are Everything Now
&lt;/h3&gt;

&lt;p&gt;With AI handling the implementation, the quality of your requirements becomes the main bottleneck. Garbage in, garbage out – but good requirements in, amazing code out.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎨 Creativity Goes to User Experience
&lt;/h3&gt;

&lt;p&gt;While AI handles the technical heavy lifting, we get to focus on the fun stuff – user experience, business logic, creative problem-solving.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔍 Code Review Gets Strategic
&lt;/h3&gt;

&lt;p&gt;Instead of hunting for syntax errors, we’re validating architecture, ensuring business requirements are met, and checking that the AI understood the bigger picture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Want to Try This Yourself?
&lt;/h2&gt;

&lt;p&gt;If you’re fired up to experiment with AI-driven development (and you should be!), here’s how to get started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;🛠 &lt;strong&gt;Get the tools&lt;/strong&gt; : VS Code + &lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;GitHub Copilot extension&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🐣 &lt;strong&gt;Start small&lt;/strong&gt; : Try simple features first, build up your AI-whispering skills&lt;/li&gt;
&lt;li&gt;📝 &lt;strong&gt;Write clear requirements&lt;/strong&gt; : Seriously, this is like 80% of the battle now&lt;/li&gt;
&lt;li&gt;🔄 &lt;strong&gt;Embrace iteration&lt;/strong&gt; : Use AI feedback loops to refine implementations&lt;/li&gt;
&lt;li&gt;👀 &lt;strong&gt;Learn to review AI code&lt;/strong&gt; : Develop your “AI code quality radar”&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Look, I’ve been coding for years, and this felt like a glimpse into a completely different future. We’re not just getting AI assistance anymore – we’re orchestrating AI agents to build entire solutions while we focus on the strategic stuff.&lt;/p&gt;

&lt;p&gt;The shopping cart that would normally take a long time to properly plan, implement, and test? AI agents knocked it out in minutes. And it wasn’t some demo-quality code – this was (almost) production-ready, well-architected, properly tested software.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com" rel="noopener noreferrer"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;More info in &lt;a href="https://beacons.ai/elbruno" rel="noopener noreferrer"&gt;https://beacons.ai/elbruno&lt;/a&gt;&lt;/p&gt;







&lt;h3&gt;
  
  
  🔗 Useful Links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Azure-Samples/eShopLite" rel="noopener noreferrer"&gt;eShopLite Sample Application&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Azure-Samples/eShopLite/pull/32" rel="noopener noreferrer"&gt;Shopping Cart Implementation PR&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Azure-Samples/eShopLite/blob/main/scenarios/01-SemanticSearch/Shopping_Cart_PRD.md" rel="noopener noreferrer"&gt;Complete PRD Document&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/copilot" rel="noopener noreferrer"&gt;GitHub Copilot Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/dotnet/aspire/" rel="noopener noreferrer"&gt;.NET Aspire Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>englishpost</category>
      <category>artificialintelligen</category>
      <category>blazor</category>
      <category>codesample</category>
    </item>
    <item>
      <title>🧠✨ Testing GPT-4o’s Image Generation – From C# with ❤️ and Microsoft.Extensions.AI</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Thu, 27 Mar 2025 15:41:03 +0000</pubDate>
      <link>https://forem.com/elbruno/testing-gpt-4os-image-generation-from-c-with-and-microsoftextensionsai-1c9i</link>
      <guid>https://forem.com/elbruno/testing-gpt-4os-image-generation-from-c-with-and-microsoftextensionsai-1c9i</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;OpenAI just dropped some exciting news:&lt;br&gt;&lt;br&gt;
 👉 &lt;strong&gt;GPT-4o can now generate images directly from prompts.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Announced here: &lt;a href="https://openai.com/index/introducing-4o-image-generation/" rel="noopener noreferrer"&gt;Introducing GPT-4o Image Generation&lt;/a&gt;, this new feature lets you go from &lt;em&gt;words&lt;/em&gt; to &lt;em&gt;stunning visuals&lt;/em&gt; – including photorealistic scenes, illustrations, logos, and more.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The fun part? You can now generate images just by chatting with GPT-4o.&lt;/p&gt;

&lt;p&gt;The challenging part? If you’re a developer trying this via the OpenAI API… it’s not quite ready yet 😅&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  🎯 My Test: “Make Me a Cute Sticker”
&lt;/h2&gt;

&lt;p&gt;So, naturally, I tried this in &lt;strong&gt;ChatGPT&lt;/strong&gt; — and it worked beautifully.&lt;br&gt;&lt;br&gt;
I just uploaded a photo of my cat and added this prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“Make me a cute minimalist sticker based on the provided image. Use a thick white border and transparent background.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Boom 🐾 — instant sticker! Minimalist lines, clear cat expression, perfect for printing or slapping on your laptop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46faetxmislongkkuwnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46faetxmislongkkuwnj.png" width="800" height="916"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  🔧 Now Let’s Try in C# (Spoiler: it’s not ready… yet)
&lt;/h2&gt;

&lt;p&gt;Being a .NET fanboy and AI tinkerer, I fired up a quick console app using the awesome new &lt;a href="https://learn.microsoft.com/en-us/dotnet/communitytoolkit/microsoft-extensions-ai/overview" rel="noopener noreferrer"&gt;&lt;code&gt;Microsoft.Extensions.AI&lt;/code&gt;&lt;/a&gt; library — designed to unify and simplify AI model calls from .NET.&lt;/p&gt;

&lt;p&gt;Here’s my code using the &lt;code&gt;OpenAIClient&lt;/code&gt; with GPT-4o:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
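&lt;p&gt;For reference, here’s a minimal sketch of that kind of call (an illustrative reconstruction, not the original gist), assuming the &lt;code&gt;Microsoft.Extensions.AI&lt;/code&gt; and &lt;code&gt;Microsoft.Extensions.AI.OpenAI&lt;/code&gt; packages; the model id and file name are placeholders:&lt;/p&gt;

```csharp
// Illustrative sketch, not the original gist. Assumes the Microsoft.Extensions.AI
// and Microsoft.Extensions.AI.OpenAI NuGet packages and an OPENAI_API_KEY env var.
using Microsoft.Extensions.AI;

IChatClient client = new OpenAI.Chat.ChatClient(
        model: "gpt-4o",
        apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
    .AsIChatClient();

// attach the source photo as binary content alongside the text prompt
byte[] imageBytes = await File.ReadAllBytesAsync("cat.png");
var message = new ChatMessage(ChatRole.User,
[
    new TextContent("Make me a cute minimalist sticker based on the provided image. " +
                    "Use a thick white border and transparent background."),
    new DataContent(imageBytes, "image/png"),
]);

ChatResponse response = await client.GetResponseAsync([message]);
Console.WriteLine(response.Text);
```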





&lt;h2&gt;
  
  
  📉 The Actual Output
&lt;/h2&gt;

&lt;p&gt;Here’s what I got back from GPT-4o via API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Prompt: make me a cute minimalist sticker based on the provided image...

Response:
To create a cute minimalist sticker from the provided image, follow these steps:
1. Crop and Simplify...
2. Add a thick white border...
3. Use a transparent background...

You can use tools like Adobe Illustrator or Canva.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Basically — the model understood the task, but instead of returning a new image, it gave instructions 📄.&lt;/p&gt;




&lt;h2&gt;
  
  
  ❗ Why the Disconnect?
&lt;/h2&gt;

&lt;p&gt;The image generation capability &lt;em&gt;is live in ChatGPT&lt;/em&gt;, but &lt;strong&gt;not yet exposed via the OpenAI API&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
As of now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GetResponseAsync()&lt;/code&gt; from &lt;code&gt;Microsoft.Extensions.AI&lt;/code&gt; supports image inputs ✅&lt;/li&gt;
&lt;li&gt;But image generation as an &lt;strong&gt;output&lt;/strong&gt; is &lt;strong&gt;not yet&lt;/strong&gt; supported ❌&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So developers: sit tight, it’s coming.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GPT-4o’s new image generation is 🔥 — in ChatGPT.&lt;/li&gt;
&lt;li&gt;If you’re building with .NET and &lt;code&gt;Microsoft.Extensions.AI&lt;/code&gt;, you’re already in a great spot to tap into these APIs as soon as image outputs are supported.&lt;/li&gt;
&lt;li&gt;Until then, your code can analyze and interpret images with GPT-4o, but it can’t yet generate them.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  👀 What’s Next?
&lt;/h2&gt;

&lt;p&gt;I’m keeping this code snippet ready for when OpenAI opens up the feature.&lt;br&gt;&lt;br&gt;
And I’m already thinking about using this in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sticker generators 🐱&lt;/li&gt;
&lt;li&gt;Avatar creators 🧙&lt;/li&gt;
&lt;li&gt;Meme bots 🤖&lt;/li&gt;
&lt;li&gt;Or even something weird, like “What if your plant could draw?”&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Let me know if you want to explore this together — or if you’re building cool stuff with GPT-4o and .NET.&lt;/p&gt;

&lt;p&gt;Until then, happy coding!&lt;/p&gt;

&lt;p&gt;— Bruno 💬 🐾&lt;/p&gt;

</description>
      <category>englishpost</category>
      <category>ai</category>
      <category>artificialintelligen</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>Using Large Language Models with .NET: Generating Image Alt Text Automatically</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Tue, 07 Jan 2025 14:00:00 +0000</pubDate>
      <link>https://forem.com/elbruno/using-large-language-models-with-net-generating-image-alt-text-automatically-5e02</link>
      <guid>https://forem.com/elbruno/using-large-language-models-with-net-generating-image-alt-text-automatically-5e02</guid>
      <description>&lt;p&gt;Hi! The following post is generated using some super cool LLMs, and hey it’s also about LLMs!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiojqnr3gz5hoc4pt3fxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiojqnr3gz5hoc4pt3fxo.png" alt="the image shows the output of the application when running and analyzing an image. the output is a message with the alt-text of the image and the locations of the files processed" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this blog post, we’ll explore a practical use case for LLMs: generating alt text for images automatically, ensuring your applications are more inclusive and accessible.&lt;/p&gt;

&lt;p&gt;By the end of this post, you’ll learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leverage Llama 3.2 Vision for image analysis.&lt;/li&gt;
&lt;li&gt;Implement the solution in a .NET project.&lt;/li&gt;
&lt;li&gt;Test the application using a sample GitHub repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the main repo: &lt;a href="https://github.com/elbruno/Image-AltText-Generator" rel="noopener noreferrer"&gt;https://github.com/elbruno/Image-AltText-Generator&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Let’s dive in!&lt;/p&gt;


&lt;h2&gt;
  
  
  &lt;a href="https://github.com/elbruno/Image-AltText-Generator" rel="noopener noreferrer"&gt;Main Demo Repo&lt;/a&gt;
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What Does the Application Do?
&lt;/h3&gt;

&lt;p&gt;The application generates descriptive alt text for images using gpt-4o-mini (online) or Llama3.2-Vision (locally). Alt text is essential for accessibility, enabling visually impaired users to understand the content of an image. This solution uses AI to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload and analyze an image.&lt;/li&gt;
&lt;li&gt;Generate an accurate, human-readable description of the image content.&lt;/li&gt;
&lt;li&gt;Provide the description as alt text that can be used in web applications or reports.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Technology Stack
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://dotnet.microsoft.com/en-us/download/dotnet/9.0" rel="noopener noreferrer"&gt;.NET&lt;/a&gt;&lt;/strong&gt;: main desktop app.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;ollama&lt;/strong&gt;&lt;/a&gt;: to run LLMs locally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://ollama.com/library/llama3.2-vision" rel="noopener noreferrer"&gt;Llama 3.2 Vision&lt;/a&gt;&lt;/strong&gt;: Visual language model for understanding and describing images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://platform.openai.com/" rel="noopener noreferrer"&gt;OpenAI APIs&lt;/a&gt;&lt;/strong&gt;: Integration with advanced LLMs to enhance text generation capabilities.&lt;/li&gt;
&lt;/ol&gt;


&lt;h2&gt;
  
  
  How to Run the Test Using the Sample Code
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before running the application, ensure you have the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;.NET SDK&lt;/strong&gt; : Version 9.0 or higher.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git&lt;/strong&gt; : To clone the sample repository.&lt;/li&gt;
&lt;li&gt;To analyze the image, choose one of two options:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI API Key&lt;/strong&gt; : For text generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama&lt;/strong&gt; : To run the Llama 3.2 Vision model.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Step-by-Step Guide
&lt;/h3&gt;
&lt;h4&gt;
  
  
  1. Clone the Repository
&lt;/h4&gt;

&lt;p&gt;Start by cloning the sample repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
git clone https://github.com/elbruno/Image-AltText-Generator.git
cd Image-AltText-Generator

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. For Local Use, Set Up the Llama 3.2 Vision Model
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Install Ollama: follow the instructions on the &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama website&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Download the Llama3.2-vision model: after installing Ollama, pull the &lt;a href="https://ollama.com/library/llama3.2-vision" rel="noopener noreferrer"&gt;Llama3.2-vision model&lt;/a&gt; using the Ollama CLI.&lt;/li&gt;
&lt;/ul&gt;
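&lt;p&gt;With Ollama installed, pulling the model from the CLI is a single command (setup sketch):&lt;/p&gt;

```shell
# download the llama3.2-vision model locally
ollama pull llama3.2-vision

# verify it shows up in the local model list
ollama list
```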

&lt;h4&gt;
  
  
  3. Run the Application
&lt;/h4&gt;

&lt;p&gt;Launch the .NET application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
dotnet run

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4. Test the Application
&lt;/h4&gt;

&lt;p&gt;Copy an image to the clipboard and run the application. The application will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analyze the image using a Vision model.&lt;/li&gt;
&lt;li&gt;Generate alt text.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, analyzing a picture of a cat might return:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“A fluffy orange cat sitting on a wooden bench surrounded by green plants.”&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Integrating AI models like Llama 3.2 Vision into .NET applications unlocks powerful, real-world use cases. By automating tasks like generating alt text, developers can enhance user experiences and build more inclusive applications effortlessly.&lt;/p&gt;

&lt;p&gt;This tutorial demonstrates how easy it is to use LLMs for image understanding and description in .NET. With tools like Ollama, OpenAI APIs, and accessible GitHub repositories, you’re just a few steps away from embedding AI capabilities into your projects.&lt;/p&gt;

&lt;p&gt;So, what are you waiting for? Clone the repo, test the application, and start building your next AI-powered solution!&lt;/p&gt;

&lt;p&gt;Happy coding! &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6npxlgkg7tflsm2otu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6npxlgkg7tflsm2otu5.png" alt="🚀" width="72" height="72"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/elbruno/Image-AltText-Generator" rel="noopener noreferrer"&gt;GitHub Repository: Image AltText Generator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ollama.com/library/llama3.2-vision" rel="noopener noreferrer"&gt;Llama 3.2 Vision Model&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/" rel="noopener noreferrer"&gt;OpenAI APIs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com" rel="noopener noreferrer"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;More info in &lt;a href="https://beacons.ai/elbruno" rel="noopener noreferrer"&gt;https://beacons.ai/elbruno&lt;/a&gt;&lt;/p&gt;




</description>
      <category>englishpost</category>
      <category>codesample</category>
      <category>llama32vision</category>
    </item>
    <item>
      <title>CPU vs GPU: Which Wins for Running LLMs Locally?</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Thu, 02 Jan 2025 15:06:48 +0000</pubDate>
      <link>https://forem.com/elbruno/cpu-vs-gpu-which-wins-for-running-llms-locally-3iig</link>
      <guid>https://forem.com/elbruno/cpu-vs-gpu-which-wins-for-running-llms-locally-3iig</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Running large language models (LLMs) locally has become increasingly accessible, thanks to advancements in hardware and model optimization. For .NET programmers, understanding the performance differences between CPUs and GPUs is crucial to selecting the best setup for their use case.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll explore these differences by benchmarking the Llama 3.2 Vision model in a locally hosted environment, with Ollama running in Docker.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Watch the Video Tutorial&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before diving in, check out the &lt;a href="https://www.youtube.com/watch?v=3JpoISL_Fx0" rel="noopener noreferrer"&gt;video tutorial&lt;/a&gt; for a quick overview of the process and key concepts covered in this blog post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=3JpoISL_Fx0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ukm6c4jv13rrwbxyxei.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Goal of the Comparison
&lt;/h2&gt;

&lt;p&gt;The goal of this exercise is to evaluate:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Execution Time&lt;/strong&gt;: How fast can the model process queries on CPU vs. GPU?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Utilization&lt;/strong&gt;: How do hardware resources (memory, power) compare between the two setups?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Suitability&lt;/strong&gt;: Which setup is better for different programming tasks?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By the end, you’ll have a clear understanding of the trade-offs and be equipped to choose the most appropriate setup for your projects.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Run the Test Using the Sample Code
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Setup&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Start by cloning the &lt;a href="https://github.com/elbruno/Ollama-llama3.2-vision-Benchmark" rel="noopener noreferrer"&gt;sample GitHub repository&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
git clone https://github.com/elbruno/Ollama-llama3.2-vision-Benchmark
cd Ollama-llama3.2-vision-Benchmark

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure you have the necessary dependencies installed:&lt;/p&gt;

&lt;p&gt;For .NET: install the &lt;a href="https://dotnet.microsoft.com/download" rel="noopener noreferrer"&gt;.NET SDK&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Model Download&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Install &lt;strong&gt;Docker&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Run two Ollama containers, one with GPU access and one CPU-only:

&lt;ul&gt;
&lt;li&gt;GPU, on port 11434 (default): &lt;code&gt;docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;CPU, on port 11435: &lt;code&gt;docker run -d -v ollamacpu:/root/.ollama -p 11435:11434 --name ollamacpu ollama/ollama&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;In each container, pull the llama3.2-vision model by running &lt;code&gt;ollama run llama3.2-vision&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You will then have Docker running two instances of Ollama, similar to this image:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczl4lpku7r4sbqqd0p2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczl4lpku7r4sbqqd0p2g.png" alt="docker running 2 instances of ollama" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Run Code&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For benchmarking, we are using BenchmarkDotNet for .NET.&lt;/p&gt;

&lt;p&gt;Open the &lt;strong&gt;OllamaBenchmark.sln&lt;/strong&gt; and run the solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Results Analysis&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The .NET solution will output detailed performance metrics, including execution time and resource usage. Compare these metrics to identify the strengths of each hardware setup.&lt;/p&gt;
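&lt;p&gt;To make sense of the numbers, each run can be reduced to a simple speedup ratio. A minimal sketch (in Python for brevity; the timings below are made-up placeholders, not results from this benchmark):&lt;/p&gt;

```python
def summarize_run(cpu_ms: float, gpu_ms: float) -> dict:
    # A speedup above 1 means the GPU completed the same workload faster.
    return {
        "cpu_ms": cpu_ms,
        "gpu_ms": gpu_ms,
        "gpu_speedup": round(cpu_ms / gpu_ms, 2),
    }

# Placeholder timings for illustration only.
result = summarize_run(cpu_ms=12500.0, gpu_ms=1800.0)
print(result)
```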




&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: GPUs consistently outperform CPUs in execution time for LLMs, especially for larger models like Llama 3.2 Vision.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Efficiency&lt;/strong&gt;: While GPUs are faster, they consume more power. CPUs, on the other hand, are more energy-efficient but slower.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CPU&lt;/strong&gt;: Best for lightweight, cost-sensitive tasks or development environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU&lt;/strong&gt;: Ideal for production workloads requiring high throughput or real-time inference.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Running benchmarks is a straightforward way to determine the best hardware for your specific needs. By following the steps outlined here, you can confidently experiment with LLMs and optimize your local environment for maximum efficiency.&lt;/p&gt;

&lt;p&gt;For a detailed walkthrough, check out the &lt;a href="https://www.youtube.com/watch?v=3JpoISL_Fx0" rel="noopener noreferrer"&gt;video tutorial&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com" rel="noopener noreferrer"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;More info in &lt;a href="https://beacons.ai/elbruno" rel="noopener noreferrer"&gt;https://beacons.ai/elbruno&lt;/a&gt;&lt;/p&gt;




</description>
      <category>englishpost</category>
      <category>ai</category>
      <category>artificialintelligen</category>
      <category>codesample</category>
    </item>
    <item>
      <title>Code Sample: Integrating Azure OpenAI Search with #SemanticKernel in .NET</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Fri, 14 Jun 2024 13:59:42 +0000</pubDate>
      <link>https://forem.com/azure/code-sample-integrating-azure-openai-search-with-semantickernel-in-net-223o</link>
      <guid>https://forem.com/azure/code-sample-integrating-azure-openai-search-with-semantickernel-in-net-223o</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;Today I’ll expand a little on the scenario described in this Semantic Kernel blog post: “&lt;a href="https://devblogs.microsoft.com/semantic-kernel/azure-openai-on-your-data-with-semantic-kernel?WT.mc_id=academic-00000-brunocapuano"&gt;Azure OpenAI On Your Data with Semantic Kernel&lt;/a&gt;“.&lt;/p&gt;

&lt;p&gt;The code below uses a GPT-4o model to power the chat and is also connected to Azure AI Search using Semantic Kernel (SK). While running this demo, you will notice mentions of [doc1], [doc2], and so on. Extending the original SK blog post, this sample shows the details of each of the mentioned documents at the bottom.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7CiNf_lL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/06/image-2.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7CiNf_lL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/06/image-2.png%3Fw%3D1024" alt="Console output with the demo working and showing the related docs" width="800" height="740"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A similar question in Azure AI Studio will also show the references to the source documents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u6O1h4xA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/06/image-3.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u6O1h4xA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/06/image-3.png%3Fw%3D1024" alt="Azure AI Studio showing the query in the chat showing the related docs" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Semantic Kernel Blog Post
&lt;/h2&gt;

&lt;p&gt;The SK team explored how to leverage Azure OpenAI Service in conjunction with the Semantic Kernel to enhance AI solutions. By combining these tools, you can harness the capabilities of large language models to work effectively with data, using Azure AI Search capabilities. The post covered the integration process, highlighted the benefits, and provided a high-level overview of the architecture.&lt;/p&gt;

&lt;p&gt;The post showcases the importance of context-aware responses and how the Semantic Kernel can manage state and memory to deliver more accurate and relevant results. This integration between SK and Azure AI Search empowers developers to build applications that &lt;strong&gt;&lt;em&gt;understand and respond to user queries in a more human-like manner&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The blog post provides a code sample showcasing the integration steps. To run the scenario, you’ll need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload your data files to Azure Blob Storage.&lt;/li&gt;
&lt;li&gt;Vectorize and index the data in Azure AI Search.&lt;/li&gt;
&lt;li&gt;Connect the Azure OpenAI service with Azure AI Search.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more in-depth guidance, be sure to check out the full post &lt;a href="https://devblogs.microsoft.com/semantic-kernel/azure-openai-on-your-data-with-semantic-kernel/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Sample
&lt;/h2&gt;

&lt;p&gt;And now it’s time to show how we can access the details of the response from an SK call when the response includes information from Azure AI Search.&lt;/p&gt;

&lt;p&gt;Let’s take a look at the following program.cs to understand its structure and functionality.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The sample program is a showcase of how to utilize Azure OpenAI and Semantic Kernel to create a chat application capable of generating suggestions based on user queries. &lt;/li&gt;
&lt;li&gt;The program starts by importing necessary namespaces, ensuring access to Azure OpenAI, configuration management, and Semantic Kernel functionalities. &lt;/li&gt;
&lt;li&gt;Next, the program uses a configuration builder to securely load Azure OpenAI keys from user secrets. &lt;/li&gt;
&lt;li&gt;The core of the program lies in setting up a chat completion service with Semantic Kernel. This service is configured to use Azure OpenAI for generating chat responses, utilizing the previously loaded API keys and endpoints.&lt;/li&gt;
&lt;li&gt;To handle the conversation, the program creates a sample chat history. This history includes both system and user messages, forming the basis for the chat completion service to generate responses.&lt;/li&gt;
&lt;li&gt;An Azure Search extension is configured to enrich the chat responses with relevant information. This extension uses an Azure Search index to pull in data, enhancing the chat service’s ability to provide informative and contextually relevant responses.&lt;/li&gt;
&lt;li&gt;Finally, the program runs the chat prompt, using the chat history and the Azure Search extension configuration to generate a response. &lt;/li&gt;
&lt;li&gt;This response is then printed to the console. Additionally, if the response includes citations from the Azure Search extension, these are also processed and printed, showcasing the integration’s ability to provide detailed and informative answers.
&lt;/li&gt;
&lt;/ul&gt;
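&lt;p&gt;For illustration, extracting those inline citation markers from a reply is straightforward. A rough Python sketch (the helper name and sample reply are hypothetical, not from the post’s C# code):&lt;/p&gt;

```python
import re

def extract_citation_ids(answer: str) -> list:
    # Grounded replies cite retrieved passages inline as [doc1], [doc2], ...
    ids = re.findall(r"\[doc(\d+)\]", answer)
    return sorted(set(ids), key=int)

reply = "Shipping takes 5 days [doc1]; see the returns policy [doc2] and [doc1]."
print(extract_citation_ids(reply))  # → ['1', '2']
```

Each extracted id can then be matched against the citation details returned alongside the response.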


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>englishpost</category>
      <category>azureaisearch</category>
      <category>codesample</category>
    </item>
    <item>
      <title>Sample Code using the new OpenAI library for .NET</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Fri, 07 Jun 2024 10:30:04 +0000</pubDate>
      <link>https://forem.com/azure/sample-code-using-the-new-openai-library-for-net-4cbh</link>
      <guid>https://forem.com/azure/sample-code-using-the-new-openai-library-for-net-4cbh</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;The new OpenAI SDK for .NET was officially announced, so today let’s review the announcement and show some sample code on how to use it.&lt;/p&gt;

&lt;p&gt;This post covers the following scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sample Chat demo&lt;/li&gt;
&lt;li&gt;Sample Audio-to-Text demo&lt;/li&gt;
&lt;li&gt;Sample Image analysis demo&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a sample output for the chat demo! And yes, with a funny system message and questions about France!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L7ZDQIKq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/06/image.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L7ZDQIKq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/06/image.png%3Fw%3D1024" alt="sample output for the chat scenario" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Code Repository: &lt;a href="https://github.com/elbruno/gpt4o-labs-csharp?WT.mc_id=academic-00000-brunocapuano"&gt;GPT-4o Labs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://devblogs.microsoft.com/dotnet/openai-dotnet-library/?WT.mc_id=academic-000000-brunocapuano"&gt;OpenAI SDK for .NET&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Microsoft Build 2024 has unveiled new AI investments for .NET developers, including the first beta release of the official OpenAI .NET library, version 2.0.0-beta.1. This library facilitates smooth integration with OpenAI and Azure OpenAI, complementing existing libraries for Python and TypeScript/JavaScript.&lt;/p&gt;

&lt;p&gt;Developed on GitHub, the .NET library will stay current with OpenAI’s latest features, with ongoing work to refine it based on community feedback. The release acknowledges Roger Pincombe’s pioneering work on the initial OpenAI .NET package and encourages continued innovation from community library developers. Participation and collaboration within the community are highly encouraged as the project progresses.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; Part of the content of this post was generated by Microsoft Copilot, an AI assistant.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario 1: Chat
&lt;/h2&gt;

&lt;p&gt;Time to share the “Hello World” of using GPT models. The following sample is a console application that interacts with the OpenAI API to generate chat responses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It retrieves an API key from user secrets and specifies the model to be used for the chat. &lt;/li&gt;
&lt;li&gt;It initializes a ChatClient with the model and API key. &lt;/li&gt;
&lt;li&gt;The system message and user question are defined and added to a list of chat messages. &lt;/li&gt;
&lt;li&gt;The chat is then completed using the ChatClient and the response from the chat is retrieved. &lt;/li&gt;
&lt;li&gt;Finally, the system prompt, user question, and chat response are displayed in the console.&lt;/li&gt;
&lt;/ul&gt;
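&lt;p&gt;The steps above can be sketched as follows. The post’s sample is C#; this is a rough Python equivalent of the same message shape (the helper name and model id are illustrative):&lt;/p&gt;

```python
def build_chat_messages(system_prompt: str, user_question: str) -> list:
    # The chat API takes an ordered list of role/content message dicts.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = build_chat_messages(
    "You are a useful assistant that replies using a funny style.",
    "What is the capital of France?",
)
# A client would then POST {"model": "gpt-4o", "messages": messages}
# to the chat-completions endpoint and read the first choice's content.
```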


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This is the sample output for this program:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;System Prompt: You are a useful assitant that replies using a funny style.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;User Question: What is the capital of France?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Response: Well, let me put on my fancy beret and sip some imaginary café au lait while I tell ya – the capital of France is the one and only Paris! Yes, indeed, the city of love, croissants, and an Eiffel Tower that’s basically the world’s largest toothpick! So, if you’re planning to hunt for baguettes or have a romantic escapade, Paris is where the magic happens!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario 2: Audio
&lt;/h2&gt;

&lt;p&gt;The SDK also allows us to work with Whisper and audio files.&lt;/p&gt;

&lt;p&gt;The following code is a C# console application that uses the OpenAI API to transcribe audio files.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It sets up the configuration to retrieve the OpenAI API key and specifies the model to be used for transcription. &lt;/li&gt;
&lt;li&gt;It creates an AudioClient instance with the model and API key.&lt;/li&gt;
&lt;li&gt;The audio file to be transcribed is specified, and transcription options are set, including the response format and granularities for timestamps. &lt;/li&gt;
&lt;li&gt;The audio file is then transcribed, and the transcription text is printed to the console. &lt;/li&gt;
&lt;li&gt;Additionally, the start and end times for each word and segment in the transcription are also printed to the console.&lt;/li&gt;
&lt;/ul&gt;
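&lt;p&gt;As an illustration of the timestamp handling, here is a small Python sketch that reproduces the “word : start - end” layout shown in the sample output below (the helper is mine, not part of the SDK):&lt;/p&gt;

```python
def format_word_timestamps(words) -> list:
    # Each entry is (word, start_ms, end_ms); right-align words like the sample output.
    return [f"{word:>18} : {start} - {end}" for word, start, end in words]

lines = format_word_timestamps([("Estás", 700, 1420), ("escuchando", 1420, 2060)])
for line in lines:
    print(line)
```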


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This is a sample output for this program:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Transcription:
Estás escuchando No Tiene Nombre, un podcast sobre tecnología que es bastante probable que sea escrito por una inteligencia artificial, por ejemplo, con chat GPT. El host, autor, editor y encargado de los efectos visuales del podcast es Bruno Capuano. Puedes contactarlo en Twitter en arroba elbruno o en las redes sociales también buscando por elbruno. Mi nombre es Elena, de Italia YOS invito al episodio de hoy. Subtítulos por la comunidad de Amara.org

Words:
            Estás : 700 - 1420
       escuchando : 1420 - 2060
               No : 2060 - 2420
            Tiene : 2420 - 2580
           Nombre : 2580 - 3000
               un : 3340 - 3500
          podcast : 3500 - 3780
            sobre : 3780 - 4100
       tecnología : 4100 - 4720
              que : 4720 - 4960
               es : 4960 - 5120
         bastante : 5120 - 5560
         probable : 5560 - 6020
              que : 6020 - 6260
              sea : 6260 - 6480
          escrito : 6480 - 6820
              por : 6820 - 7100
              una : 7100 - 7460
     inteligencia : 7460 - 7820
       artificial : 7820 - 8380
              por : 8820 - 9020
          ejemplo : 9020 - 9300
              con : 9680 - 9880
             chat : 9880 - 10060
              GPT : 10060 - 11340
               El : 11360 - 11760
             host : 11760 - 12100
            autor : 12660 - 12660
           editor : 13140 - 13140
                y : 13140 - 13340
        encargado : 13340 - 13740
               de : 13740 - 13920
              los : 13920 - 14300
          efectos : 14300 - 14420
         visuales : 14420 - 14860
              del : 14860 - 15020
          podcast : 15020 - 15400
               es : 15400 - 15760
            Bruno : 15760 - 15920
          Capuano : 15920 - 16400
           Puedes : 17640 - 17700
      contactarlo : 17700 - 18320
               en : 18320 - 18500
          Twitter : 18500 - 18740
               en : 18740 - 18920
           arroba : 18920 - 19160
          elbruno : 19160 - 19540
                o : 19540 - 19960
               en : 19960 - 20080
              las : 20080 - 20360
            redes : 20360 - 20360
         sociales : 20360 - 20800
          también : 20800 - 21180
         buscando : 21180 - 21560
              por : 21560 - 21960
          elbruno : 21960 - 22340
               Mi : 23480 - 23560
           nombre : 23560 - 23840
               es : 23840 - 24280
            Elena : 24280 - 24440
               de : 24780 - 24940
           Italia : 24940 - 25360
              YOS : 25360 - 26180
           invito : 26180 - 26660
               al : 26660 - 26880
         episodio : 26880 - 27340
               de : 27340 - 27720
              hoy : 27720 - 27720
       Subtítulos : 30000 - 31540
              por : 31540 - 31540
               la : 31540 - 31540
        comunidad : 31540 - 31540
               de : 31540 - 31540
            Amara : 31540 - 31540
              org : 31540 - 31540

Segments:
      Estás escuchando No Tiene Nombre, un podcast sobre tecnología que es bastante probable : 700 - 6480
                 que sea escrito por una inteligencia artificial, por ejemplo, con chat GPT. : 6480 - 12660
    El host, autor, editor y encargado de los efectos visuales del podcast es Bruno Capuano. : 13140 - 18500
   Puedes contactarlo en Twitter en arroba elbruno o en las redes sociales también buscando por elbruno. : 18500 - 26180
                                Mi nombre es Elena, de Italia YOS invito al episodio de hoy. : 26180 - 27720
                                                    Subtítulos por la comunidad de Amara.org : 30000 - 31540

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;h2&gt;
  
  
  Scenario 3: Vision with GPT-4o
&lt;/h2&gt;

&lt;p&gt;The final scenario shows how to use vision capabilities with this new SDK. The following project is a C# console application that uses the OpenAI API to interact with an AI assistant.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It starts by reading the OpenAI API key from the user secrets and initializing the OpenAI client with this key. &lt;/li&gt;
&lt;li&gt;It uploads an image file named foggyday.png from the imgs directory to the OpenAI server for the purpose of vision tasks.&lt;/li&gt;
&lt;li&gt;After the image is uploaded, it creates an AI assistant with a specific instruction to reply in a funny style. &lt;/li&gt;
&lt;li&gt;It starts a new thread with the assistant, sending an initial message asking the assistant to describe the uploaded image.&lt;/li&gt;
&lt;li&gt;The program then enters a loop, listening for updates from the assistant. &lt;/li&gt;
&lt;li&gt;When a new message is received from the assistant, it is printed to the console. &lt;/li&gt;
&lt;li&gt;The program continues to listen for updates until it is manually stopped.&lt;/li&gt;
&lt;/ul&gt;
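&lt;p&gt;Conceptually, that update loop just filters completed assistant messages out of a stream of thread events. A much-simplified Python sketch of the idea (plain dicts stand in here for the SDK’s streaming update types):&lt;/p&gt;

```python
def assistant_texts(updates) -> list:
    # Keep only non-empty messages authored by the assistant.
    return [u["text"] for u in updates if u.get("role") == "assistant" and u.get("text")]

stream = [
    {"role": "user", "text": "Describe this image."},
    {"role": "assistant", "text": ""},            # still generating
    {"role": "assistant", "text": "Well, well, well!"},
]
print(assistant_texts(stream))  # → ['Well, well, well!']
```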


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;This is a sample output for this program, and below is the image used to test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
--- Run started! ---
Well, well, well! What do we have here? Is that paradise? A spa? Or just someone's backyard on a dreamy, misty morning? There's a pristine blue pool just begging for a cannonball, surrounded by neat paving stones that scream, "We mean business!" Looming over the scene is a terrific tree that looks like it's auditioning for the role of "Majestic Sentinel."

In the distance, the fog is putting up a great show, hiding whatever secrets lie beyond this yard-maybe a herd of unicorns or a neighborhood of lawn gnomes planning their next move. The whole scene feels like it's on the set of the next great mystery movie... or maybe just an ad for pool cleaning services. Either way, sign me up for a dip!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IORYaC0o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/06/image-1.png%3Fw%3D343" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IORYaC0o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/06/image-1.png%3Fw%3D343" alt="picture of a backyard in a foggy day" width="343" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The new OpenAI SDK for .NET is a great addition. I’ll keep testing and sharing samples for each new scenario supported by the SDK!&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>englishpost</category>
      <category>codesample</category>
      <category>openai</category>
    </item>
    <item>
      <title>Powering Up #NET Apps with #Phi-3 and #SemanticKernel</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Fri, 31 May 2024 13:49:48 +0000</pubDate>
      <link>https://forem.com/azure/powering-up-net-apps-with-phi-3-and-semantickernel-203d</link>
      <guid>https://forem.com/azure/powering-up-net-apps-with-phi-3-and-semantickernel-203d</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;h1&gt;
  
  
  Introducing the Phi-3 Small Language Model
&lt;/h1&gt;

&lt;p&gt;Phi-3 is an amazing Small Language Model. And hey, it’s also an easy one to use in C#. I already wrote about how to use it with Ollama; now it’s time to try the ONNX version.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to Phi-3 Small Language Model
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/"&gt;Phi-3 Small Language Model (SLM)&lt;/a&gt; represents a significant leap forward in the field of artificial intelligence. Developed by Microsoft, the Phi-3 family of models is a collection of the most capable and cost-effective SLMs available today. These models have been meticulously crafted to outperform other models of similar or even larger sizes across various benchmarks, including language understanding, reasoning, coding, and mathematical tasks.&lt;/p&gt;

&lt;p&gt;Phi-3 models are not only remarkable for their performance but also for their efficiency and adaptability. They are designed to operate across a wide range of hardware, from traditional computing devices to edge devices like mobile phones and IoT devices. This makes Phi-3 particularly suitable for developers looking to integrate advanced AI capabilities into applications that require strong reasoning, limited compute resources, and low latency.&lt;/p&gt;

&lt;h2&gt;
  
  
  C# Phi3-Labs GitHub Repository
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/elbruno/phi3-labs"&gt;The GitHub repository in question serves as a practical guide for developers looking to harness the power of the Phi-3 SLM within their applications&lt;/a&gt;. It provides a set of sample projects to demonstrate how to use Phi-3 and C#.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LabsPhi301&lt;/strong&gt;. A sample project that uses a local Phi-3 model to answer a question. The project loads a local ONNX Phi-3 model using the Microsoft.ML.OnnxRuntime libraries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LabsPhi302&lt;/strong&gt;. A sample project that implements a console chat using Semantic Kernel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LabsPhi303 (Coming soon!)&lt;/strong&gt;. A sample project that uses Phi-3 Vision to analyze images.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The repository contains detailed instructions for setting up the development environment, cloning the necessary models from Hugging Face, and installing the Phi-3 model. &lt;/p&gt;

&lt;p&gt;You can learn more about &lt;a href="https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx"&gt;Phi-3 in Hugging Face&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A final running application will look like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvfgyqoq3chtn4uvbo50.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvfgyqoq3chtn4uvbo50.gif" alt="Chat demo" width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>englishpost</category>
      <category>codesample</category>
      <category>github</category>
    </item>
    <item>
      <title>#SemanticKernel and GPT-4o: Image analysis labs in C#</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Tue, 14 May 2024 13:21:26 +0000</pubDate>
      <link>https://forem.com/azure/semantickernel-and-gpt-4o-image-analysis-labs-in-c-3lee</link>
      <guid>https://forem.com/azure/semantickernel-and-gpt-4o-image-analysis-labs-in-c-3lee</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to GPT-4o
&lt;/h2&gt;

&lt;p&gt;GPT-4o, developed by OpenAI, represents a significant leap forward in AI technology. &lt;a href="https://openai.com/index/hello-gpt-4o/"&gt;Dubbed “omni” for its all-encompassing capabilities, GPT-4o is a multimodal model that can process and generate text, audio, and images&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;It’s designed to facilitate more natural human-computer interactions, responding to audio inputs in as little as 232 milliseconds. This model is not only faster but also 50% cheaper to use in the API, making it a cost-effective solution for developers and businesses alike.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sample GitHub Repository
&lt;/h2&gt;

&lt;p&gt;I decided to test some of the new features, and created this repository with some samples using Semantic Kernel and the new GPT-4o model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/elbruno/gpt4ol-sk-csharp/"&gt;https://github.com/elbruno/gpt4ol-sk-csharp/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Repository Content
&lt;/h3&gt;

&lt;p&gt;The repo describes the basic steps to set up and use the GPT-4o model with .NET, and includes sample code and links to further reading materials.&lt;/p&gt;

&lt;p&gt;The initial demo shows how to use GPT-4o to analyze the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DhTgtIe1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/04/rpi-run-neofetch.jpg%3Fw%3D750" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DhTgtIe1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/04/rpi-run-neofetch.jpg%3Fw%3D750" alt="rpi shell console showing the rapsberry pi information with the tool neofetch" width="750" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
The image appears to be a screenshot of a terminal window running on a Raspberry Pi device. The user has executed the `neofetch` command with `sudo`, and the terminal displayed system information. Additionally, the `ollama list` command was executed, showing a list of local models.

Here's the breakdown of the terminal output:

### System Information (Neofetch Output)
- **OS:** Debian GNU/Linux 12 (bookworm) aarch64
- **Host:** Raspberry Pi 5 Model B Rev 1.0
- **Kernel:** 6.6.20+rpt-rpi-2712
- **Uptime:** 3 mins
- **Packages:** 694 (dpkg)
- **Shell:** bash 5.2.15
- **CPU:** 4 cores @ 2.400GHz
- **Memory:** 640MiB / 8052MiB

### Models List (Ollama List Command Output)
- **llama3:latest**
- **ID:** 71a106a91016
- **Size:** 4.7 GB
- **Modified:** 4 days ago
- **phi3:latest**
- **ID:** a2c89ceaed85
- **Size:** 2.3 GB
- **Modified:** 3 days ago

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
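&lt;p&gt;Under the hood, this kind of demo boils down to a standard chat completions request that pairs a text prompt with an image URL. Here is a minimal sketch of such a request body; the endpoint, API key, and image URL are illustrative placeholders, not the repository’s actual code:&lt;/p&gt;

```shell
# Build a GPT-4o chat-completions request body that sends a text prompt
# together with an image URL (all values below are illustrative).
cat > payload.json <<'EOF'
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "Describe the content of this image" },
        { "type": "image_url",
          "image_url": { "url": "https://example.com/rpi-neofetch.jpg" } }
      ]
    }
  ]
}
EOF

# Then POST it to the chat completions endpoint (needs a real API key):
# curl https://api.openai.com/v1/chat/completions \
#   -H "Authorization: Bearer $OPENAI_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d @payload.json
```

&lt;p&gt;Semantic Kernel builds an equivalent request for you when you pass image content to the chat completion service.&lt;/p&gt;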



&lt;p&gt;I’ll keep updating the samples with more scenarios.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>englishpost</category>
      <category>codesample</category>
      <category>gpt4o</category>
    </item>
    <item>
      <title>Demo – API Manifest Plugins for Semantic Kernel</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Wed, 08 May 2024 13:00:00 +0000</pubDate>
      <link>https://forem.com/azure/demo-api-manifest-plugins-for-semantic-kernel-2bpa</link>
      <guid>https://forem.com/azure/demo-api-manifest-plugins-for-semantic-kernel-2bpa</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;Today, it’s time to share some insights into the world of API Manifest Plugins for Semantic Kernel: a cool way to work with OpenAI’s Large Language Models (LLMs).&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to API Manifest Plugins for Semantic Kernel
&lt;/h2&gt;

&lt;p&gt;Semantic Kernel is at the forefront of AI development, allowing developers to import plugins from OpenAPI documents. However, importing entire OpenAPI documents for large APIs can be inefficient. &lt;a href="https://devblogs.microsoft.com/semantic-kernel/introducing-api-manifest-plugins-for-semantic-kernel-2/"&gt;To tackle this, Microsoft introduced API Manifest Plugins for Semantic Kernel&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;An  &lt;strong&gt;API Manifest&lt;/strong&gt;  is a document that outlines an application’s API dependencies, including links to API descriptions, the requests made, and their authorization requirements. &lt;a href="https://devblogs.microsoft.com/semantic-kernel/introducing-api-manifest-plugins-for-semantic-kernel-2/?WT.mc_id=academic-00000-brunocapuano"&gt;These plugins enable Semantic Kernel functions to be generated from API Manifest files, which can then be called by LLMs in various scenarios&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slicing large API descriptions into task-specific parts.&lt;/li&gt;
&lt;li&gt;Packaging multiple API dependencies into a single plugin.&lt;/li&gt;
&lt;li&gt;Defining authorization requirements for API calls.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach not only streamlines the process but also enhances security and efficiency when integrating third-party APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Importance of Function Calling while working with OpenAI LLMs
&lt;/h2&gt;

&lt;p&gt;Function calling is a game-changer for developers working with OpenAI LLMs. It allows the models to intelligently output JSON objects containing arguments to call one or many functions, based on the input provided (see the &lt;a href="https://platform.openai.com/docs/guides/function-calling"&gt;OpenAI function calling guide&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;This feature is crucial for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating assistants that answer questions by calling external APIs.&lt;/li&gt;
&lt;li&gt;Converting natural language into API calls.&lt;/li&gt;
&lt;li&gt;Extracting structured data from text.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Supported by the latest models, function calling ensures that developers can get structured data back from the model more reliably.&lt;/p&gt;
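&lt;p&gt;To make the request shape concrete, here is a sketch of a function calling payload: the tools array declares a function with a JSON Schema for its arguments, and the model can answer with a tool call naming that function and filling in the arguments. The weather function below is a hypothetical example, not part of the sample repository:&lt;/p&gt;

```shell
# A minimal function-calling request body (the model and function are
# illustrative; the model may reply with a tool call for
# get_current_weather instead of a plain text answer).
cat > request.json <<'EOF'
{
  "model": "gpt-4o",
  "messages": [
    { "role": "user", "content": "What is the weather in Toronto?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    }
  ]
}
EOF
```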

&lt;h2&gt;
  
  
  Reference to the GitHub Repository
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/elbruno/sk-API-Manifest-Plugins"&gt;My GitHub repository “sk-API-Manifest-Plugins” is a sample entry for developers looking to leverage API Manifest Plugins with Semantic Kernel&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In this scenario, we will assume that there are public APIs with information about superheroes, and a pet database with information about pets and their owner information.

There is a fictitious Superhero Pet Day, so the owner of a Pet Store will trigger a plan to find which clients have pets with superhero names.

The plan will detect these pets and send an email congratulating them for the Superhero Pet Day with information about their pet name super hero.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Once you have cloned the repository, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dotnet.microsoft.com/download/dotnet/8.0"&gt;.NET 8&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://visualstudio.microsoft.com/"&gt;Visual Studio 2022&lt;/a&gt; or &lt;a href="https://code.visualstudio.com/"&gt;Visual Studio Code&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Access to OpenAI APIs or &lt;a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/overview?WT.mc_id=academic-00000-brunocapuano"&gt;Azure OpenAI Services&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Components
&lt;/h3&gt;

&lt;p&gt;The repository includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two sample API projects: PetStoreAPI and SuperHeroAPI.&lt;/li&gt;
&lt;li&gt;A Semantic Kernel test project with a plan to solve the scenario. The project uses API Manifest to embed the plugins from the two APIs.&lt;/li&gt;
&lt;li&gt;Bonus: the SK test also uses a native plugin that emulates sending the email.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Running the Demo
&lt;/h3&gt;

&lt;p&gt;To run the demo:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repository from GitHub.&lt;/li&gt;
&lt;li&gt;Install the necessary dependencies listed in the repository.&lt;/li&gt;
&lt;li&gt;Follow the step-by-step guide to set up the environment and execute the demo.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is the start page of the console demo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mqmBeKeg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/05/10appstart.png%3Fw%3D970" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mqmBeKeg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/05/10appstart.png%3Fw%3D970" alt="start page with the console start app" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the responses from the SK Planner:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--60TdPJb---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/05/20responses.png%3Fw%3D971" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--60TdPJb---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/05/20responses.png%3Fw%3D971" width="800" height="844"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last step also shows a table with all the details and steps that the planner performed to solve the scenario.&lt;/p&gt;

&lt;p&gt;This hands-on experience will give you a practical understanding of how API Manifest Plugins work within Semantic Kernel.&lt;/p&gt;




&lt;p&gt;For more detailed information and to explore the full capabilities of API Manifest Plugins and function calling with OpenAI LLMs, visit the official &lt;a href="https://devblogs.microsoft.com/semantic-kernel/introducing-api-manifest-plugins-for-semantic-kernel-2/"&gt;Semantic Kernel blog&lt;/a&gt; and the &lt;a href="https://platform.openai.com/docs/guides/function-calling"&gt;OpenAI documentation&lt;/a&gt;. Embrace the future of AI with these powerful tools at your disposal.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;




</description>
      <category>englishpost</category>
      <category>azureopenai</category>
      <category>codesample</category>
    </item>
    <item>
      <title>#SemanticKernel: Local LLMs Unleashed on #RaspberryPi 5</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Wed, 01 May 2024 12:00:00 +0000</pubDate>
      <link>https://forem.com/azure/semantickernel-local-llms-unleashed-on-raspberrypi-5-1m2g</link>
      <guid>https://forem.com/azure/semantickernel-local-llms-unleashed-on-raspberrypi-5-1m2g</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;Welcome to the exciting world of local Large Language Models (LLMs) where we’re pushing the boundaries of what’s possible with AI.&lt;/p&gt;

&lt;p&gt;Today let’s talk about a cool topic: running models locally, especially on devices like the Raspberry Pi 5. Let’s dive into the future of AI, right in our own backyards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ollama and using Open Source LLMs
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cheatsheet.md/llm-leaderboard/ollama.en"&gt;OLLAMA stands out as a platform that simplifies the process of running open-source LLMs locally on your machine&lt;/a&gt;. It bundles model weights, configuration, and data into a single package, making it accessible for developers and AI enthusiasts alike. The key benefits of using Ollama include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt; : Easy setup process without the need for deep machine learning knowledge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-Effectiveness&lt;/strong&gt; : Eliminates cloud costs, making it wallet-friendly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy&lt;/strong&gt; : Ensures data processing happens on your local machine, enhancing user privacy.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cheatsheet.md/llm-leaderboard/ollama.en"&gt;&lt;strong&gt;Versatility&lt;/strong&gt; : Suitable for various applications beyond Python, including web development&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using Local LLMs like Llama3 or Phi-3
&lt;/h2&gt;

&lt;p&gt;Local LLMs like Llama3 and Phi-3 represent a significant shift towards more efficient and compact AI models. &lt;a href="https://cheatsheet.md/llm-leaderboard/ollama.en"&gt;Llama3 delivers high-quality outputs with a comparatively small parameter count&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cheatsheet.md/llm-leaderboard/ollama.en"&gt;Phi-3, developed by Microsoft, uses advanced training techniques like quantization to maximize efficiency, making it ideal for deployment on a wide range of devices&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The use of local LLMs offers several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Latency&lt;/strong&gt; : Local models eliminate network latency associated with cloud-based solutions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Privacy&lt;/strong&gt; : Data remains on your local device, offering a secure environment for sensitive information.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cheatsheet.md/llm-leaderboard/ollama.en"&gt;&lt;strong&gt;Customization&lt;/strong&gt; : Local models allow for greater flexibility to tweak and optimize the models as per your needs&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Set Up a Local Ollama Inference Server on a Raspberry Pi 5
&lt;/h2&gt;

&lt;p&gt;I have already written my own version of the first-time setup for a Raspberry Pi a couple of times (&lt;a href="https://dev.to/elbruno/raspberrypi-1st-setup-no-monitor-wifi-auto-connect-ssh-rename-update-docker-rust-and-more-update-2023-jan-04-1gp7-temp-slug-9533435"&gt;link&lt;/a&gt;). Once the device is ready, setting up Ollama on a Raspberry Pi 5 (or older) is a straightforward process. Here’s a quick guide to get you started:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cheatsheet.md/llm-leaderboard/ollama.en"&gt;&lt;strong&gt;Installation&lt;/strong&gt; : Use the official Ollama installation script to install it on your Raspberry Pi OS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The main command is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
curl -fsSL https://ollama.com/install.sh | sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://cheatsheet.md/llm-leaderboard/ollama.en"&gt;&lt;strong&gt;Running Models&lt;/strong&gt; : After installation, you can run various LLMs like tinyllama, phi, and llava, depending on your RAM capacity&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For example, to install and run Llama 3, we can use the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
ollama run llama3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once ollama is installed and a model is downloaded, the console should look similar to this one:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ELLIsAly--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/04/image-2.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ELLIsAly--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/04/image-2.png%3Fw%3D1024" alt="log view of the installation of ollama and the run of llama 3 model" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cheatsheet.md/llm-leaderboard/ollama.en"&gt;For a detailed step-by-step guide, including setting up Docker and accessing the Ollama WebUI, check out the resources available on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Tip: to check the real-time journal of the ollama service, we can run this command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
journalctl -u ollama -f

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; by default, the ollama server only accepts local calls. To enable access from other machines, follow these steps:&lt;/p&gt;

&lt;p&gt;– Edit the systemd service by calling this command.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo systemctl edit ollama.service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;– This will open an editor.&lt;/p&gt;

&lt;p&gt;– Add an Environment line under the [Service] section:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
[Service]
Environment="OLLAMA_HOST=0.0.0.0"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;– Save and exit.&lt;/p&gt;

&lt;p&gt;– Reload systemd and restart Ollama: run &lt;code&gt;sudo systemctl daemon-reload&lt;/code&gt; and then &lt;code&gt;sudo systemctl restart ollama&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;More information in the &lt;a href="https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server"&gt;ollama FAQ&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to Use Semantic Kernel to Call a Chat Generation from a Remote Server
&lt;/h2&gt;

&lt;p&gt;Let’s switch and write some code. This is a “Hello World” sample using Semantic Kernel and Azure OpenAI Services.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;You can learn more about these AI samples at &lt;a href="https://aka.ms/dotnet-ai"&gt;https://aka.ms/dotnet-ai&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now, to use a remote LLM like Llama 3 running on a Raspberry Pi, we can add a service to the Builder that uses the OpenAI API specification. In the next sample, this change is on line 35:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This does the trick, with just a single line changed!&lt;/p&gt;
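&lt;p&gt;Since ollama exposes an OpenAI-compatible endpoint on port 11434, you can also sanity-check the Raspberry Pi server with plain curl before wiring it into Semantic Kernel. The host name and model below are illustrative placeholders:&lt;/p&gt;

```shell
# Request body for ollama's OpenAI-compatible chat endpoint
# (the model name is illustrative; use one you have pulled).
cat > chat.json <<'EOF'
{
  "model": "llama3",
  "messages": [ { "role": "user", "content": "Say hello" } ]
}
EOF

# Then, against the Raspberry Pi (host is illustrative):
# curl http://raspberrypi.local:11434/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d @chat.json
```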

&lt;p&gt;There is also the question of performance: adding a Stopwatch gives us a sense of the time elapsed for the call. For this simple call, the response takes around 30-50 seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VcCL9ARp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/04/image-3.png%3Fw%3D990" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VcCL9ARp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/04/image-3.png%3Fw%3D990" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not bad at all for a small device!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The advent of local LLMs like Ollama is revolutionizing the way we approach AI, offering unprecedented opportunities for innovation and privacy. Whether you’re a seasoned developer or just starting out, the potential of local AI is immense and waiting for you to explore.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This blog post was generated using information from various online resources, including cheatsheet.md, anakin.ai, and techcommunity.microsoft.com, to provide a comprehensive guide on local LLMs and Ollama.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;




</description>
      <category>englishpost</category>
      <category>codesample</category>
      <category>llama3</category>
    </item>
    <item>
      <title>#RaspberryPi – 1st setup no monitor 📺: Wifi 📶 auto connect, SSH, rename, update, docker 🐳, rust and more! Update 2023-Jan-04</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Tue, 30 Apr 2024 12:00:00 +0000</pubDate>
      <link>https://forem.com/elbruno/raspberrypi-1st-setup-no-monitor-wifi-auto-connect-ssh-rename-update-docker-rust-and-more-update-2023-jan-04-3dai</link>
      <guid>https://forem.com/elbruno/raspberrypi-1st-setup-no-monitor-wifi-auto-connect-ssh-rename-update-docker-rust-and-more-update-2023-jan-04-3dai</guid>
      <description>&lt;h2&gt;
  
  
  &lt;a href="https://dev.to/elbruno/raspberrypi-1st-setup-no-monitor-wifi-auto-connect-ssh-rename-update-docker-rust-and-more-update-2023-jan-04-31oi"&gt;Latest version 2024-Apr-30 here&lt;/a&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Content
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create SD card using Raspberry Pi Imager&lt;/li&gt;
&lt;li&gt;Configure Wireless connection (if needed)&lt;/li&gt;
&lt;li&gt;Enable SSH (if needed)&lt;/li&gt;
&lt;li&gt;Find the IP address in your network&lt;/li&gt;
&lt;li&gt;Access via SSH&lt;/li&gt;
&lt;li&gt;Change Password (if needed)&lt;/li&gt;
&lt;li&gt;Rename the device (if needed)&lt;/li&gt;
&lt;li&gt;Expand FileSystem&lt;/li&gt;
&lt;li&gt;Update the device&lt;/li&gt;
&lt;li&gt;Install neofetch&lt;/li&gt;
&lt;li&gt;Install Docker&lt;/li&gt;
&lt;li&gt;Setup SSH password-less access to the Raspberry Pi&lt;/li&gt;
&lt;li&gt;Setup password-less access to remote work with docker 🐳&lt;/li&gt;
&lt;li&gt;Run Docker 🐳 commands without sudo&lt;/li&gt;
&lt;li&gt;Install Rust&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;Let’s start by installing the latest Raspberry Pi OS image on an SD card. The next steps focus on how to access and control your device remotely, so you may want to follow them in order.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yZkYnQht--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspberry-pi-images-install-so.png%3Fw%3D716" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yZkYnQht--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspberry-pi-images-install-so.png%3Fw%3D716" alt="raspberry Pi Images install SO" width="716" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This tutorial and these tips work for Raspberry Pi 3, 4 and Zero.&lt;/p&gt;

&lt;p&gt;Version 1.6 of the Raspberry Pi Imager includes a new feature that allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define HostName&lt;/li&gt;
&lt;li&gt;Enable SSH&lt;/li&gt;
&lt;li&gt;Configure Wifi&lt;/li&gt;
&lt;li&gt;Set locale&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this directly from the tool, by pressing the [Config] button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BWslLfJY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2022/03/raspberry-pi-imager-config.jpg%3Fw%3D697" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BWslLfJY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2022/03/raspberry-pi-imager-config.jpg%3Fw%3D697" alt="raspberry pi imager config" width="697" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; Opening the settings with this button is new in the latest Raspberry Pi Imager version.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We can define these settings on the tool screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I53_zpZF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2021/03/raspberry-pi-imager-advanced-options.png%3Fw%3D348" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I53_zpZF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2021/03/raspberry-pi-imager-advanced-options.png%3Fw%3D348" width="348" height="695"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ll leave the old-school standard methods below just as a reference.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Wireless connection (if needed)
&lt;/h2&gt;

&lt;p&gt;In the SD Card, you need to create a file named [&lt;strong&gt;wpa_supplicant.conf&lt;/strong&gt;] in the root of the SD card with the following information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
country=ca
update_config=1
ctrl_interface=/var/run/wpa_supplicant

network={
 scan_ssid=1
 ssid="Your WiFi SSID"
 psk="Your amazing password"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The file content is very straightforward to understand. The main values to complete are [ssid] and [psk].&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Once you put the SD card in the device and start the device, it will automatically connect to the configured WiFi.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Enable SSH (if needed)
&lt;/h2&gt;

&lt;p&gt;If you also want to enable SSH, you need to create a blank file named [ssh] to the main partition.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Once you put the SD card in the device and start the device, it will automatically enable the SSH service.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, you need to create and copy 2 files to the root of your SD card:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;wpa_supplicant.conf&lt;/li&gt;
&lt;li&gt;ssh&lt;/li&gt;
&lt;/ul&gt;
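&lt;p&gt;A quick way to create both files from your computer is shown below. The mount point is an illustrative placeholder; replace it with the path where your SD card’s boot partition is actually mounted, and fill in your own WiFi values:&lt;/p&gt;

```shell
# Illustrative mount point; replace with your SD card's boot partition.
SDCARD=./sdcard
mkdir -p "$SDCARD"

# WiFi configuration (fill in your own ssid and psk)
cat > "$SDCARD/wpa_supplicant.conf" <<'EOF'
country=ca
update_config=1
ctrl_interface=/var/run/wpa_supplicant

network={
 scan_ssid=1
 ssid="Your WiFi SSID"
 psk="Your amazing password"
}
EOF

# An empty file named ssh enables the SSH service on first boot
touch "$SDCARD/ssh"
```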

&lt;h2&gt;
  
  
  Find the IP address in your network
&lt;/h2&gt;

&lt;p&gt;And that’s it, your Raspberry Pi will be connected to the WiFi with SSH enabled. At this point we can use a tool like &lt;strong&gt;&lt;em&gt;AngryIp&lt;/em&gt;&lt;/strong&gt; (see references) to detect the new device on the network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pa_aPi39--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/01/angry-ip-find-raspberry-pi.png%3Fw%3D543" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pa_aPi39--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/01/angry-ip-find-raspberry-pi.png%3Fw%3D543" width="543" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My new device IP is: 192.168.1.246&lt;/p&gt;

&lt;p&gt;I’m trying to avoid Java updates (or even installing Java), so lately I use a mobile app, Fing; after a scan, the results are nicer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I-PucOYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2021/01/find-detected-devices.png%3Fw%3D469" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I-PucOYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2021/01/find-detected-devices.png%3Fw%3D469" width="469" height="844"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Access via SSH
&lt;/h2&gt;

&lt;p&gt;I used to like PuTTY to connect to my device, but over the past months I’ve been using Windows Terminal and PowerShell. To access the device, I need to execute the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
ssh user@deviceaddress

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and my data is&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;user: pi&lt;/li&gt;
&lt;li&gt;ip: 192.168.1.246&lt;/li&gt;
&lt;li&gt;password: raspberry&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gws9pqEi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/01/access-to-raspberry-pi-via-ssh-with-windows-terminal.png%3Fw%3D914" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gws9pqEi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/01/access-to-raspberry-pi-via-ssh-with-windows-terminal.png%3Fw%3D914" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now start working with your Raspberry Pi !&lt;/p&gt;

&lt;p&gt;If you get a SSH error message similar to this one&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
---
Host key verification failed.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You need to run this command to fix the host key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# ssh-keygen -R &amp;lt;host&amp;gt;
ssh-keygen -R 192.168.1.247

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;Important:&lt;/strong&gt; the default password is raspberry; please follow the next step!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Change Password (if needed)
&lt;/h2&gt;

&lt;p&gt;The default password for the device is “raspberry”, and as usual, it’s recommended to change it. To do this, in the SSH terminal, let’s open the device configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo raspi-config

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will open the configuration for the device.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ryhS08lr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspi-config-main-menu.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ryhS08lr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspi-config-main-menu.png%3Fw%3D1024" alt="raspi config main menu" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Option number 1 will allow us to change the password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4t9zjObM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspi-config-change-password.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4t9zjObM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspi-config-change-password.png%3Fw%3D1024" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Rename the device (if needed)
&lt;/h2&gt;

&lt;p&gt;In the same section we can change the Host Name.&lt;/p&gt;

&lt;p&gt;And define the new name for the Raspberry Pi device.&lt;/p&gt;

&lt;h2&gt;
  
  
  Expand FileSystem
&lt;/h2&gt;

&lt;p&gt;Another important option in the configuration is to expand the SD card file system. In the same configuration screen, select&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;6. Advanced Options&lt;/li&gt;
&lt;li&gt;Expand File System&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pMaKUMvh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspi-config-advanced-options.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pMaKUMvh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspi-config-advanced-options.png%3Fw%3D1024" alt="raspi config advanced options" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to reboot; after the reboot, the file system should have been expanded to include all available space on your micro-SD card. Reboot with the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo reboot

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Update the device
&lt;/h2&gt;

&lt;p&gt;There are two ways to update the device: using commands or using raspi-config.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the raspi-config main menu, option 8 will launch the update commands.&lt;/li&gt;
&lt;li&gt;If you prefer to manually type an update command, this one works for me:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo -- sh -c 'apt-get update; apt-get upgrade -y; apt-get dist-upgrade -y; apt-get autoremove -y; apt-get autoclean -y'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y85NXZNi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspberry-pi-updates-completed.png%3Fw%3D791" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y85NXZNi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2020/12/raspberry-pi-updates-completed.png%3Fw%3D791" alt="updates complete" width="791" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Neofetch
&lt;/h2&gt;

&lt;p&gt;Neofetch is a shell script that requires Bash 3.2+ to run. By default, it displays ASCII art of your distro’s logo. Neofetch supports almost 150 different operating systems, from Linux to Windows.&lt;/p&gt;

&lt;p&gt;Installation is super easy, just run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo apt install neofetch

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then run the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
neofetch

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will be similar to this one:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vhEldY38--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/04/rpi-run-neofetch.jpg%3Fw%3D996" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vhEldY38--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2024/04/rpi-run-neofetch.jpg%3Fw%3D996" alt="raspberry pi 5 running neofecth" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Docker
&lt;/h2&gt;

&lt;p&gt;The installation steps are available in the official Docker documentation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
curl -fsSL https://get.docker.com -o get-docker.sh

sudo sh get-docker.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then, run a simple check of the Docker version with the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup ssh password-less access to the Raspberry Pi
&lt;/h2&gt;

&lt;p&gt;The main resource is the official one from the Raspberry Pi org: &lt;a href="https://www.raspberrypi.org/documentation/remote-access/ssh/passwordless.md"&gt;Passwordless SSH access&lt;/a&gt;. Here is a quick summary for Windows 11 users. First, let’s run this command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
ls ~/.ssh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I already had keys on my computer, so I found id_rsa and id_rsa.pub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1vBVmcaK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2021/08/raspberry-pi-ssh-keys.png%3Fw%3D645" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1vBVmcaK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2021/08/raspberry-pi-ssh-keys.png%3Fw%3D645" alt="raspberry pi ssh keys" width="645" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s copy them to the device with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
cat ~/.ssh/id_rsa.pub | ssh &amp;lt;USERNAME&amp;gt;@&amp;lt;IP-ADDRESS&amp;gt; 'mkdir -p ~/.ssh &amp;amp;&amp;amp; cat &amp;gt;&amp;gt; ~/.ssh/authorized_keys'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it! Now we can SSH into the Raspberry Pi without a password.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; keep in mind that this is not a very secure practice.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup SSH password-less access to remote work with docker 🐳
&lt;/h2&gt;

&lt;p&gt;Once SSH password-less access is working, we can easily configure a TCP-enabled Docker daemon.&lt;/p&gt;

&lt;p&gt;These are the steps I followed. First, let’s edit &lt;em&gt;docker.service&lt;/em&gt; with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo nano /lib/systemd/system/docker.service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And add [-H tcp://0.0.0.0:2375] to the [ExecStart] line:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ut5g_O0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2021/08/raspberry-pi-ssh-password-less-docker-1.png%3Fw%3D976" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ut5g_O0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2021/08/raspberry-pi-ssh-password-less-docker-1.png%3Fw%3D976" alt="raspberry pi ssh password less docker" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And restart the device, or just the needed services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;run &lt;em&gt;sudo systemctl daemon-reload&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;run &lt;em&gt;sudo systemctl restart docker&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
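&lt;p&gt;Once the daemon listens on TCP, the Docker CLI on another machine can target the Raspberry Pi through the DOCKER_HOST variable. A minimal sketch ("raspberrypi.local" is a placeholder; use your device’s host name or IP, and remember port 2375 is unencrypted):&lt;/p&gt;

```shell
# Point the local docker CLI at the Pi's TCP endpoint.
# "raspberrypi.local" is a placeholder hostname.
export DOCKER_HOST="tcp://raspberrypi.local:2375"
echo "$DOCKER_HOST"
# From now on, plain commands such as `docker ps` or `docker version`
# in this shell run against the remote daemon on the Pi.
```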

&lt;h2&gt;
  
  
  Run Docker 🐳 commands without sudo
&lt;/h2&gt;

&lt;p&gt;Now let’s configure Docker so we can run commands without sudo. Just run these commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo groupadd docker
sudo gpasswd -a $USER docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And also restart the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo service docker restart

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
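&lt;p&gt;Note that the new group membership only applies to new login sessions. A quick check, assuming the steps above were already run:&lt;/p&gt;

```shell
# Group changes take effect on the next login.
if id -nG | grep -qw docker; then
  docker version   # should now work without sudo
else
  echo "log out and back in (or run: newgrp docker) to pick up the new group"
fi
```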



&lt;h2&gt;
  
  
  Install Rust
&lt;/h2&gt;

&lt;p&gt;To install Rust, run the following in your terminal, then follow the on-screen instructions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are the options; I’ll go with the default one:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0d843aSr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2023/01/image-10.png%3Fw%3D942" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0d843aSr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2023/01/image-10.png%3Fw%3D942" alt="rust install options" width="800" height="870"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a couple of minutes, we have Rust installed!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_1CaZoei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2023/01/image-11.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_1CaZoei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2023/01/image-11.png%3Fw%3D1024" alt="rust installed" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I closed my terminal and opened it again, and then tested the Rust and Cargo versions with the commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
rustc --version
cargo --version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6lzGMO56--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2023/01/image-12.png%3Fw%3D1006" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6lzGMO56--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2023/01/image-12.png%3Fw%3D1006" alt="displaying rust and cargo version in the terminal" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can even create a new app using cargo, edit the main file and run it!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OllSRs6d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2023/01/image-13.png%3Fw%3D1024" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OllSRs6d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://brunocapuano.files.wordpress.com/2023/01/image-13.png%3Fw%3D1024" alt="I can even create a new app using cargo, edit the main file and run it!" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rust is working fine!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And that’s it! We have our device updated and running the latest software versions, and we didn’t use a monitor! I’ll update this post frequently to keep it relevant with my personal best practices.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  My posts on Raspberry Pi ⚡🐲⚡
&lt;/h2&gt;

&lt;h6&gt;
  
  
  Dev posts for Raspberry Pi
&lt;/h6&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://elbruno.com/2020/01/04/howto-grant-permissions-to-a-folder-after-git-clone-to-perform-dotnet-restore-on-a-raspberrypi-dotnetcore/"&gt;How to grant permissions to a folder after git clone, to perform dotnet restore on a Raspberry Pi&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://elbruno.com/2019/12/30/raspberypi-how-to-install-net-core-3-1-in-a-raspberry-pi-4/"&gt;How to install .Net Core 3.1 in a Raspberry Pi 4&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2019/10/01/vscode-installing-visual-studio-code-code-in-a-raspberrypi-run-as-root-fix-black-screen-updated/"&gt;Installing Visual Studio Code in a Raspberry Pi 4, run as root, fix black screen&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://elbruno.com/2019/08/27/raspberrypi-how-to-install-dotnetcore-in-a-raspberrypi4-and-test-with-helloworld-of-course/"&gt;How to install .Net Core in a Raspberry Pi 4 and test with Hello World&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://elbruno.com/2019/08/28/vscode-build-and-run-c-dotnetcore-projects-in-raspberrypi/"&gt;Build and Run C# NetCore projects in a Raspberry Pi 4 with Visual Studio Code&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://elbruno.com/2019/09/02/vscode-lets-do-some-git-dev-in-raspberrypi-github-and-azure-devops/"&gt;Let’s do some Git dev in Raspberry Pi 4 (GitHub and Azure DevOps!)&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://elbruno.com/2020/01/23/raspberrypi-install-opencv/"&gt;Install OpenCV&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2020/01/29/raspberrypi-install-virtual-environments/"&gt;Install Python 🐍 Virtual Environments in Raspberry Pi&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2021/02/01/raspberrypi-setup-ssh-passwordless-access-to-remote-work-with-docker-%f0%9f%90%b3/"&gt;Setup SSH passwordless access to remote work with Docker 🐳&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2021/02/02/raspberrypi-manage-docker-%f0%9f%90%b3-as-a-non-root-user/"&gt;Manage Docker 🐳&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2021/02/03/raspberrypi-build-docker-%f0%9f%90%b3-images-from-visual-studio-code-remotely-using-a-raspberry-pi-azureiot/"&gt;Build Docker 🐳 images from Visual Studio Code remotely using a Raspberry Pi&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h6&gt;
  
  
  Tools and Apps for Raspberry Pi
&lt;/h6&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2019/09/23/raspberrypi-where-is-my-task-manager-lets-try-htop/"&gt;Where is my Task Manager in RaspberryPi? Let’s try htop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2019/09/30/raspberrypi-multimonitor-support-in-raspberrypi4-rocks/"&gt;Multi-monitor 📺 in Raspberry Pi 4 rocks !&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2019/10/02/raspberrypi-double-commander-on-raspberrypi4-because-files-are-important/"&gt;Double Commander on RaspberryPi4, because files are important&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2020/01/24/raspberrypi-how-to-install-docker-updated-2020-01-24/"&gt;How to install Docker 🐳 in a Raspberry Pi 4&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://elbruno.com/2019/08/20/vscode-installing-visual-studio-code-in-a-raspberrypi-a-couple-of-lessons-learned-code/"&gt;Installing Visual Studio Code in a Raspberry Pi&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2019/10/01/vscode-installing-visual-studio-code-code-in-a-raspberrypi-run-as-root-fix-black-screen-updated/"&gt;Installing Visual Studio Code in a Raspberry Pi, run as root, fix black screen (Updated)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://elbruno.com/2019/10/01/raspberrypi-6-commands-to-install-opencv-for-python-in-raspberrypi4/"&gt;6 commands to install OpenCV for Python 🐍 in a Raspberry Pi 4&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;h6&gt;
  
  
  Setup the device
&lt;/h6&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2021/06/07/raspberrypi-1st-setup-no-monitor-%f0%9f%93%ba-wifi-%f0%9f%93%b6-auto-connect-ssh-rename-update-docker-%f0%9f%90%b3-and-more-update-2021-june-07/"&gt;1st Setup without monitor 📺: auto connect to WiFi 📶, enable SSH, update and more&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2020/01/21/raspberrypi-setup-without-monitor-enable-vnc/"&gt;Setup without monitor: enable VNC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2019/08/22/raspberrypi-how-to-enable-auto-start-with-hdmi-safe-mode/"&gt;How to enable auto start with HDMI safe mode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2019/07/30/raspberrypi-running-a-python-script-in-a-python-virtual-environment-on-reboot-startup/"&gt;Running a Python 🐍 script in a Python Virtual Environment on reboot / startup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2021/02/12/raspberrypi-update-or-setup-wifi-configuration-in-ubuntu/"&gt;Setup Wifi on Ubuntu&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h6&gt;
  
  
  Hardware
&lt;/h6&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2020/01/06/raspberrypi-ice-tower-the-best-cooler-for-your-device-from-70c-to-40c-in-a-60-minutes-process-running-at-100-in-all-4-cores"&gt;Ice Tower, the best cooler 🧊 for your device! From 70C to &amp;lt;40C in a 60 minutes process running at 100% in all 4 cores&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elbruno.com/2020/01/28/raspberrypi-performance-differences-in-facerecognition-using-openvino-code-with-code/"&gt;Intel Neural Stick 2, Performance differences in Face Recognition using OpenVino&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>englishpost</category>
      <category>configuration</category>
      <category>docker</category>
    </item>
    <item>
      <title>🦙 Harnessing Local AI: Unleashing the Power of .NET Smart Components and Llama2</title>
      <dc:creator>El Bruno</dc:creator>
      <pubDate>Tue, 09 Apr 2024 12:00:00 +0000</pubDate>
      <link>https://forem.com/azure/harnessing-local-ai-unleashing-the-power-of-net-smart-components-and-llama2-11ba</link>
      <guid>https://forem.com/azure/harnessing-local-ai-unleashing-the-power-of-net-smart-components-and-llama2-11ba</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;.NET Smart Components are an amazing example of how to use AI to enhance the user experience in something as popular as a combo box.&lt;/p&gt;

&lt;p&gt;.NET Smart Components also support local LLMs, so in this post I’ll show how to configure these components to use a local Llama 2 inference server. The following image shows the Smart TextArea doing completions against a local server; on the right we can see the server journal with the incoming HTTP requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8neHGbvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://elbruno.com/wp-content/uploads/2024/04/2024-04-04-net-blazor-smart-components-local-ollama-blog.gif%2520%257C%2520width%3D640x480" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8neHGbvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://elbruno.com/wp-content/uploads/2024/04/2024-04-04-net-blazor-smart-components-local-ollama-blog.gif%2520%257C%2520width%3D640x480" alt="Live sample of smart componentes with ollama" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to .NET Smart Components
&lt;/h2&gt;

&lt;p&gt;.NET Smart Components are a groundbreaking addition to the .NET ecosystem, offering AI-powered UI controls that seamlessly integrate into your applications. &lt;a href="https://dev.to/dotnetblogger/introducing-net-smart-components-ai-powered-ui-controls-16d7-temp-slug-2419622"&gt;These components are designed to enhance user productivity by providing intelligent features such as  &lt;strong&gt;Smart Paste&lt;/strong&gt; ,  &lt;strong&gt;Smart TextArea&lt;/strong&gt; , and  &lt;strong&gt;Smart ComboBox&lt;/strong&gt;&lt;/a&gt;&lt;a href="https://dev.to/dotnetblogger/introducing-net-smart-components-ai-powered-ui-controls-16d7-temp-slug-2419622"&gt;&lt;sup&gt;1&lt;/sup&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smart Paste&lt;/strong&gt;  simplifies data entry by automatically filling out forms using data from the user’s clipboard.  &lt;strong&gt;Smart TextArea&lt;/strong&gt;  enhances the traditional textarea by providing autocomplete capabilities for sentences, URLs, and more. Lastly,  &lt;strong&gt;Smart ComboBox&lt;/strong&gt;  improves the traditional combo box by offering suggestions based on semantic matching&lt;sup&gt;.&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/dotnetblogger/introducing-net-smart-components-ai-powered-ui-controls-16d7-temp-slug-2419622"&gt;These components are currently available for Blazor, MVC, and Razor Pages with .NET 6 and later, and they represent an experiment in integrating AI directly into user interfaces&lt;/a&gt;&lt;a href="https://dev.to/dotnetblogger/introducing-net-smart-components-ai-powered-ui-controls-16d7-temp-slug-2419622"&gt;&lt;sup&gt;1&lt;/sup&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Importance of Local LLMs like Llama2
&lt;/h2&gt;

&lt;p&gt;Local Large Language Models (LLMs) like Llama2 offer significant advantages, particularly in terms of data privacy and security. For example: &lt;a href="https://thenewstack.io/how-to-set-up-and-run-a-local-llm-with-ollama-and-llama-2/"&gt;running LLMs locally allows organizations to process sensitive data without exposing it to external servers, ensuring compliance with data protection regulations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Llama2 is an open-source model that provides robust performance across various tasks, including common-sense reasoning, mathematical abilities, and general knowledge. It supports a context length of 4096 tokens, which is double that of its predecessor, Llama1. This makes Llama2 an ideal choice for organizations looking to leverage AI while maintaining control over their data and infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to run .NET Smart Components with a Local Ollama Inference Server
&lt;/h2&gt;

&lt;p&gt;In previous posts, I shared how to run a local Ollama Inference Server in Ubuntu (&lt;a href="https://dev.to/azure/semantickernel-chat-service-demo-running-llama2-llm-locally-in-ubuntu-med"&gt;blog&lt;/a&gt;). Luckily, you can now also do this on Windows.&lt;/p&gt;
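&lt;p&gt;As a quick sanity check before wiring up the components, you can confirm the server answers locally (assuming the ollama CLI from ollama.com is installed; the model pull downloads a few GB of weights):&lt;/p&gt;

```shell
# Download the model and list what the server has available
# (skipped gracefully when the ollama CLI is not installed).
OLLAMA_ENDPOINT="http://localhost:11434"
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama2                     # fetch the llama2 weights
  curl -s "$OLLAMA_ENDPOINT/api/tags"    # JSON list of local models
fi
```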

&lt;p&gt;And once you clone the main Smart Component repository, you only need to add a small change to run the samples locally.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the file [RepoSharedConfig.json]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EddLU6y3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/04/image-1.png%3Fw%3D1021" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EddLU6y3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://elbruno.com/wp-content/uploads/2024/04/image-1.png%3Fw%3D1021" alt="sample vscode editing json file" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the following configuration to use the local ollama model
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"SmartComponents"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;

   &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;local&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;demo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ollama&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;self-hosted&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"SelfHosted"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"DeploymentName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"llama2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:11434"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it! Now you can run either the Blazor or the MVC demos and they will use the local Ollama server to run the completions!&lt;/p&gt;

&lt;p&gt;And hey, let’s keep an eye on the Smart Components; they are going to provide an amazing new user experience powered by AI!&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;Greetings&lt;/p&gt;

&lt;p&gt;El Bruno&lt;/p&gt;

&lt;p&gt;More posts in my blog &lt;a href="https://www.elbruno.com"&gt;ElBruno.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;More info in &lt;a href="https://beacons.ai/elbruno"&gt;https://beacons.ai/elbruno&lt;/a&gt;&lt;/p&gt;




</description>
      <category>englishpost</category>
      <category>net</category>
      <category>netsmartcomponents</category>
    </item>
  </channel>
</rss>
