<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: FrankPohl</title>
    <description>The latest articles on Forem by FrankPohl (@fp).</description>
    <link>https://forem.com/fp</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F813353%2F66077fdf-efb7-497b-a225-4252881e02f9.png</url>
      <title>Forem: FrankPohl</title>
      <link>https://forem.com/fp</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/fp"/>
    <language>en</language>
    <item>
      <title>Open Source Tools for .NET MAUI</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Wed, 23 Oct 2024 09:07:23 +0000</pubDate>
      <link>https://forem.com/fp/open-source-tools-for-net-maui-2g1g</link>
      <guid>https://forem.com/fp/open-source-tools-for-net-maui-2g1g</guid>
      <description>&lt;p&gt;Attention .NET MAUI devs,&lt;br&gt;
Syncfusion has published 14 of its controls as open source.&lt;br&gt;
They announced it in a blog post here:&lt;br&gt;
[&lt;a href="https://www.syncfusion.com/blogs/post/syncfusion-open-source-net-maui-controls-cross-platform" rel="noopener noreferrer"&gt;https://www.syncfusion.com/blogs/post/syncfusion-open-source-net-maui-controls-cross-platform&lt;/a&gt;].&lt;br&gt;
I use their controls in my apps and am very happy with the quality, and especially with their support.&lt;br&gt;
Frank&lt;/p&gt;

</description>
      <category>programming</category>
      <category>maui</category>
      <category>net</category>
    </item>
    <item>
      <title>MS AppCenter shutting down in 2025</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Mon, 18 Mar 2024 07:55:59 +0000</pubDate>
      <link>https://forem.com/fp/ms-appcenter-shutting-down-in-2025-40nl</link>
      <guid>https://forem.com/fp/ms-appcenter-shutting-down-in-2025-40nl</guid>
      <description>&lt;p&gt;I was shocked this morning when I read this announcement in AppCenter.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj8q48neix46232dtfz6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj8q48neix46232dtfz6.png" alt="AppCenter Announcement" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm using the logging service heavily in all my apps on all platforms to log errors and app usage.&lt;/p&gt;

&lt;p&gt;They propose switching to an Azure service, but I do not yet know how well that will work or what it will cost.&lt;/p&gt;

&lt;p&gt;Leave a comment if you use AppCenter too, and share how you plan to replace it if you like.&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>development</category>
      <category>microsoft</category>
      <category>appcenter</category>
    </item>
    <item>
      <title>Published my first .NET MAUI app</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Sun, 12 Mar 2023 15:22:35 +0000</pubDate>
      <link>https://forem.com/fp/published-my-first-net-maui-app-4jkn</link>
      <guid>https://forem.com/fp/published-my-first-net-maui-app-4jkn</guid>
      <description>&lt;p&gt;Last week I published my first .NET MAUI app named Health Data Diary in the Microsoft and the Google Play Store.&lt;/p&gt;

&lt;p&gt;Developing and publishing this app was a tough ride and not as smooth as I expected.&lt;br&gt;
.NET MAUI is the successor of Xamarin, and Xamarin and its base framework have been out of support for some time now. Therefore, I thought the generally available release of .NET MAUI would offer a mature product for developing multi-platform solutions.&lt;/p&gt;

&lt;p&gt;But there were a lot of unexpected problems along the way. Just to name a few:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Apps crash without an error message in a Release build but not in Debug mode, for reasons as trivial as a missing permission or a style that is used but never declared.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Controls behave differently or have different bugs on each platform. For example, the DatePicker and TimePicker have platform-specific bugs, and localization of these controls is more or less non-existent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need closer insight into the project file than in the past, because some changes can only be made in the file and not through the UI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Adding unit tests is not straightforward. It is necessary to edit the project file or to link the files to be tested into the unit-testing project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The documentation is often not up to date, and a lot of info is missing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
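
&lt;p&gt;To illustrate the unit-testing point: one community-documented workaround is to add a plain framework target to the MAUI app project and make the Exe output type conditional, so that an ordinary test project can reference the app. This is only a sketch; the exact target list is illustrative and depends on your project:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;!-- In the MAUI app's .csproj (illustrative target list) --&amp;gt;
&amp;lt;TargetFrameworks&amp;gt;net7.0;net7.0-android;net7.0-ios;net7.0-maccatalyst&amp;lt;/TargetFrameworks&amp;gt;
&amp;lt;!-- Produce an Exe only for the platform targets, so a test project can reference the plain net7.0 build --&amp;gt;
&amp;lt;OutputType Condition="'$(TargetFramework)' != 'net7.0'"&amp;gt;Exe&amp;lt;/OutputType&amp;gt;&lt;/code&gt;&lt;/pre&gt;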

&lt;p&gt;Maybe I should have taken a closer look at the more than 1,000 open issues .NET MAUI has on GitHub before I started my project. &lt;/p&gt;

&lt;p&gt;I spent much more time than usual, and much more than I expected, finding workarounds and tracking down the reasons for strange behavior. &lt;/p&gt;

&lt;p&gt;Do you have experience with .NET MAUI too? Leave a comment and share it, or let me know whether you agree or disagree.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>mobile</category>
      <category>tooling</category>
      <category>maui</category>
    </item>
    <item>
      <title>.NET MAUI for production code?</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Mon, 07 Nov 2022 21:22:57 +0000</pubDate>
      <link>https://forem.com/fp/net-maui-for-production-code-238b</link>
      <guid>https://forem.com/fp/net-maui-for-production-code-238b</guid>
      <description>&lt;p&gt;I've been working on a new mobile app for some time now. I wanted to implement it in .NET MAUI, which has been in General Availability status since the end of May and is positioned as the successor to Xamarin.&lt;br&gt;
During development, disillusionment quickly set in because I was affected by one of the more than 1200 open GitHub issues that are flagged as bugs. I hadn't checked GitHub beforehand, so I was really astonished at how many bugs are reported and still unsolved. Some of them are already flagged for .NET 8. &lt;br&gt;
Besides these bugs there are several hundred other issues. &lt;br&gt;
These numbers are, of course, across all supported platforms.&lt;/p&gt;

&lt;p&gt;For some controls, like the CollectionView, properties or methods are still documented as "To be added".&lt;/p&gt;

&lt;p&gt;Using third-party tools does not make one’s life easier either, because many of their .NET MAUI controls are still in preview.&lt;/p&gt;

&lt;p&gt;To use unit tests on Windows you have to make manual modifications to the project files of the app under test, which is also not well done.&lt;/p&gt;

&lt;p&gt;All in all, .NET MAUI doesn't look like a system I want to use productively. I'd love to read your opinion about .NET MAUI in the comments.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>maui</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Health Assistant</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Fri, 08 Apr 2022 17:21:19 +0000</pubDate>
      <link>https://forem.com/fp/health-assistant-2636</link>
      <guid>https://forem.com/fp/health-assistant-2636</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;With the submitted app &lt;strong&gt;Health Assistant&lt;/strong&gt; users can enter vital signs with their voice and store them easily in electronic form. &lt;br&gt;
The program can gather these body parameters: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Blood Pressure&lt;/li&gt;
&lt;li&gt;Glucose&lt;/li&gt;
&lt;li&gt;Heart Rate&lt;/li&gt;
&lt;li&gt;Temperature &lt;/li&gt;
&lt;li&gt;Weight&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data entry works through a kind of chat that asks the user for missing information and gives feedback.&lt;/p&gt;

&lt;p&gt;After you start the app, start the recognition and just say&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;My weight was 86 kg today at 8 O'clock.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the intent and the necessary parameters were recognized, the app asks for a confirmation and stores the record. If not all the necessary data was given directly, the program asks for it.&lt;/p&gt;
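
&lt;p&gt;The "ask for what is missing" behavior is a classic slot-filling loop. A minimal sketch of the idea, using hypothetical slot names that are not taken from the app's actual code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Hypothetical slot-filling sketch; names are assumptions.
string[] requiredSlots = { "parameter", "value", "time" };

string NextQuestion(System.Collections.Hashtable recognized)
{
    foreach (string slot in requiredSlots)
    {
        if (!recognized.ContainsKey(slot))
            return "Please tell me the " + slot + ".";
    }
    return null; // everything present: confirm, then store the record
}&lt;/code&gt;&lt;/pre&gt;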

&lt;p&gt;It is also possible to control the app by voice. For example, to display an overview of the stored data, just say &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Show weight&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The image shows an example of such a conversation to record data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5V7uiyoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ijl6zdkz3v6hnpzi62x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5V7uiyoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ijl6zdkz3v6hnpzi62x.png" alt="Example of the UI" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Submission Category: Accessibility Advocates
&lt;/h3&gt;

&lt;p&gt;I chose this category because the app should help people with disabilities, or with no computer experience, to store important health data in an electronic record so that they can share it easily with physicians.&lt;/p&gt;
&lt;h3&gt;
  
  
  Link to Code on GitHub
&lt;/h3&gt;

&lt;p&gt;The code for this sample app is shared in this Git repository. Here you can also find more technical details.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--566lAguM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/FrankPohl"&gt;
        FrankPohl
      &lt;/a&gt; / &lt;a href="https://github.com/FrankPohl/HealthAssistant"&gt;
        HealthAssistant
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
Content&lt;/h1&gt;
&lt;p&gt;This repository contains a MAUI app that was developed in a hackathon aimed at solutions that benefit from Deepgram's speech recognition service.
The user can enter health data like heart rate, temperature, or weight in a kind of chat. The user's utterances are translated into text by the Deepgram service, and the program extracts the relevant info from this text and stores it. In case some info is missing, the app asks the user to provide it.&lt;/p&gt;
&lt;h2&gt;
General Info&lt;/h2&gt;
&lt;p&gt;The app is a prototype and was developed in a short timeframe. Therefore you should not expect ready-to-roll-out code with outstanding code quality.&lt;/p&gt;
&lt;p&gt;Nevertheless, the app works and can be used to test the analysis approach and the behavior of the Deepgram API.
If you want to see it in action, check out this &lt;a href="https://youtu.be/3R08QHkPRLo" rel="nofollow"&gt;video on YouTube&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
Technical details&lt;/h2&gt;
&lt;h3&gt;
General&lt;/h3&gt;
&lt;p&gt;…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/FrankPohl/HealthAssistant"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  License
&lt;/h3&gt;

&lt;p&gt;The code is licensed under the Apache-2 permissive license. Please find the &lt;a href="https://github.com/FrankPohl/HealthAssistant/blob/main/LICENSE"&gt;license file in the repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;p&gt;The app in the repository can only be built with the preview version of Visual Studio 2022.&lt;br&gt;
Check out the video in my YouTube channel to see the app in action.&lt;br&gt;
&lt;a href="https://youtu.be/3R08QHkPRLo"&gt;Dev-Deepgram-Hackathon Demo&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Improvements
&lt;/h3&gt;

&lt;p&gt;For sure, there is always room for improvement, especially with such a tight schedule and a lot of new technology to learn.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The results would be even better if a specialized Deepgram model were used.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I'm not sure whether the approach of processing/analyzing the text recognized by the Deepgram service is sufficient. Usually people would build some kind of AI for that, but my educated guess is that this is not necessary; ELIZA also worked without any AI. With this approach, the latency of analysis plus Deepgram recognition delivers a fluent interaction for the user. More sophisticated analyses would take more time and might disturb the flow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Clearly, not all possible input is covered by the analysis, and which utterances users actually produce still has to be examined. Unfortunately (at least from this point of view), the recognized text is not available in the Deepgram dashboard of the service.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>hackwithdg</category>
      <category>speechrecognition</category>
      <category>maui</category>
    </item>
    <item>
      <title>Dental Care Professional Assistant</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Fri, 08 Apr 2022 13:59:58 +0000</pubDate>
      <link>https://forem.com/fp/dental-care-professional-assistant-2np</link>
      <guid>https://forem.com/fp/dental-care-professional-assistant-2np</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;I've been looking into speech recognition for a while now, because I think using speech can be a real game changer in a lot of use cases. This is true for all app users, not only for disabled people, because there are many situations where you do not have your hands free, the use of keyboard and mouse is just cumbersome, or you can simply talk faster than you type.&lt;br&gt;
I selected this category because the use case cannot be solved with a standalone app, but it can act as an add-on to existing electronic health record systems. Because of the business area, further restrictions and regulations may apply to such software.&lt;br&gt;
A prototype might have been an option but wasn't feasible in the given time frame besides my other hackathon activities ;-).&lt;/p&gt;

&lt;h3&gt;
  
  
  My Deepgram Use-Case
&lt;/h3&gt;

&lt;p&gt;The use case is from the medical area.&lt;br&gt;
It helps dental care professionals fill out the periodontal chart. This chart is filled out whenever the periodic check of your teeth is done. &lt;br&gt;
To fill out the chart, the dental care professional evaluates the teeth and records these observations: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mobility&lt;/li&gt;
&lt;li&gt;Is it an Implant&lt;/li&gt;
&lt;li&gt;Furcation&lt;/li&gt;
&lt;li&gt;Gingival Margin&lt;/li&gt;
&lt;li&gt;Probing Depth&lt;/li&gt;
&lt;li&gt;Bleeding when probing&lt;/li&gt;
&lt;li&gt;Has Plaque&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is done for the buccal and lingual parts of every single tooth. As you see, the gathered data is simple: just an identifier for the tooth (which is a number), whether it is lingual or buccal, and the observation with a boolean or numeric value.&lt;/p&gt;
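
&lt;p&gt;A sketch of how such an observation could be modeled; the type and member names are illustrative assumptions, not taken from any existing system:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Hypothetical record shape for one charted observation.
enum Side { Buccal, Lingual }

class ToothObservation
{
    public int ToothNumber;     // tooth identifier, e.g. 36
    public Side Side;           // buccal or lingual
    public string Observation;  // e.g. "ProbingDepth" or "Bleeding"
    public double Value;        // numeric value; 0/1 for boolean findings
}&lt;/code&gt;&lt;/pre&gt;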

&lt;p&gt;Filling out a form with this is no rocket science, but it cannot be done by the dental care professional, because he has no hands free to enter it. He must dictate it to an assistant, or record the dictation and enter the data afterwards. &lt;br&gt;
Both solutions double the amount of work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dive into Details
&lt;/h3&gt;

&lt;p&gt;There are several electronic dental record systems on the market, and in one part of such a system the tooth status is tracked.&lt;br&gt;
Usually there is a form with a representation of the teeth that is used to enter the values directly, which is fine and fast if you are working in a standard desktop setting.&lt;br&gt;
My proposal is to equip the dental professional with a headset (a headset filters environmental noise much better than any other microphone) and a speech recognition solution based on Deepgram running on a PC with a big screen.&lt;br&gt;
During the examination, the professional must utter these four things for every tooth:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the tooth number&lt;/li&gt;
&lt;li&gt;the side &lt;/li&gt;
&lt;li&gt;the observation&lt;/li&gt;
&lt;li&gt;the value for the observation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If these utterances are processed directly and visualized in an image of a set of teeth, the user gets direct feedback that the input was understood and can be sure the correct diagnosis is saved. Being sure that everything was understood correctly is a challenge with speech recognition, and good user feedback is often key to success. &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Why is Deepgram suitable for this? I think there are several reasons. Here are the main ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the latency of the recognition service is short, so there should be no disruptions from the user having to wait for the system before the next input&lt;/li&gt;
&lt;li&gt;a model for the use case can be defined easily and will improve the recognition rate &lt;/li&gt;
&lt;li&gt;the Deepgram service has decent pricing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the main reasons that make me think this use case can be implemented with Deepgram's recognition service in a reliable and well-accepted way.&lt;br&gt;
But due to the aforementioned restrictions, this is not a task for a single developer.&lt;/p&gt;

</description>
      <category>hackwithdg</category>
    </item>
    <item>
      <title>Syncfusion Community License Offer</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Thu, 07 Apr 2022 07:38:29 +0000</pubDate>
      <link>https://forem.com/fp/syncfusion-community-license-offer-4m5j</link>
      <guid>https://forem.com/fp/syncfusion-community-license-offer-4m5j</guid>
      <description>&lt;p&gt;I want to share information about a nice offer from Syncfusion for .NET, JavaScript, iOS, Android, and Xamarin developers. &lt;br&gt;
They offer a free community license for small companies. &lt;br&gt;
You can check it out on their &lt;a href="https://www.syncfusion.com/products/communitylicense"&gt;website&lt;/a&gt;.&lt;br&gt;
I'm using their Xamarin.Forms and .NET controls and can confirm that they are as powerful as the tools from their competitors. &lt;/p&gt;

</description>
      <category>news</category>
      <category>pr</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Sample to connect Mic input stream directly to Deepgram .NET SDK</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Sun, 03 Apr 2022 11:11:29 +0000</pubDate>
      <link>https://forem.com/fp/sample-code-with-net-sdk-3026</link>
      <guid>https://forem.com/fp/sample-code-with-net-sdk-3026</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;My submission for the category Wacky Wildcards is a console app for the Windows platform. It is an example of how to use the newly introduced .NET SDK. &lt;br&gt;
Most applications that use speech, be it a command &amp;amp; control application or a speech recognition app, must use the audio input directly with the smallest possible latency to make the interaction with the user smooth.&lt;br&gt;
The latency of the service is sufficient for these scenarios if the input stream is sent directly to the SDK and from there to the service. Saving the input to a file and processing that file with the service would not work.&lt;/p&gt;
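
&lt;p&gt;The core idea can be sketched in a few lines. The NAudio part below is the library's real capture API; the final send call is only a placeholder, since the exact live-streaming API of the Deepgram .NET SDK is an assumption here (see the repository for the working code):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch: forward mic audio to the recognition service with minimal latency.
using NAudio.Wave;

var waveIn = new WaveInEvent();
waveIn.WaveFormat = new WaveFormat(16000, 16, 1); // 16 kHz, 16-bit, mono
waveIn.DataAvailable += OnDataAvailable;
waveIn.StartRecording();

void OnDataAvailable(object sender, WaveInEventArgs e)
{
    // Send each raw buffer immediately instead of writing it to a file.
    SendToDeepgram(e.Buffer, e.BytesRecorded); // placeholder for the SDK call
}&lt;/code&gt;&lt;/pre&gt;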
&lt;h3&gt;
  
  
  Submission Category: Wacky Wildcards
&lt;/h3&gt;

&lt;p&gt;I decided to publish this under Wacky Wildcards because it doesn't solve a problem or use case directly, but it helps to implement solutions that have to send the audio input directly to the Deepgram service. &lt;/p&gt;
&lt;h3&gt;
  
  
  Link to Code on GitHub
&lt;/h3&gt;

&lt;p&gt;The code for this sample app is shared in this Git repository. Here you can also find more technical details.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--566lAguM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/FrankPohl"&gt;
        FrankPohl
      &lt;/a&gt; / &lt;a href="https://github.com/FrankPohl/DeepGram.NETSample"&gt;
        DeepGram.NETSample
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
Content&lt;/h1&gt;
&lt;p&gt;The console app connects the audio input of the PC directly with the Deepgram speech recognition service.
In general a .NET Core console app can run on any platform, but not in this case, because the code to retrieve the audio input is platform-specific.
In the sample I'm using NAudio, which is only available on the Windows platform.&lt;/p&gt;
&lt;h1&gt;
Setting everything up&lt;/h1&gt;
&lt;p&gt;To use this recognition service you have to set up a Deepgram account here: &lt;a href="https://deepgram.com/" rel="nofollow"&gt;https://deepgram.com/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;After you have created an account, you can create an API key to be used with the Deepgram SDK.
The generated secret key is used in the source code (you'll find one there, but it no longer works).&lt;/p&gt;
&lt;p&gt;The program uses the NuGet packages for NAudio and the Deepgram SDK:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;install-package Deepgram
install-package NAudio&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Deepgram package is only available as a .NET Core package.
Therefore you…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/FrankPohl/DeepGram.NETSample"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  License
&lt;/h3&gt;

&lt;p&gt;The code is licensed under the Apache-2 permissive license. Please find the &lt;a href="https://github.com/FrankPohl/DeepGram.NETSample/blob/main/LICENSE"&gt;license file in the repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;p&gt;This sample targets the Windows platform, but on Android or iOS you will face the same problems. &lt;br&gt;
The .NET SDK from Deepgram can be used in MAUI apps on those platforms too.&lt;/p&gt;

</description>
      <category>hackwithdg</category>
    </item>
    <item>
      <title>Health Mate - Collect body parameters with speech</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Thu, 03 Mar 2022 09:29:57 +0000</pubDate>
      <link>https://forem.com/fp/health-mate-collect-body-parameters-with-speech-4824</link>
      <guid>https://forem.com/fp/health-mate-collect-body-parameters-with-speech-4824</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;My submission is a client app that uses Azure Cognitive Services to provide a natural-language interface for collecting various health parameters.&lt;br&gt;
I created this app to help people who have difficulty entering data via a keyboard, because they have an impairment or because they find it easier to communicate with the computer via a speech interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission Category: AI Aces
&lt;/h3&gt;

&lt;p&gt;My submission is in the category AI Aces because the core of the app is the natural-language processing of the speech input in Azure Cognitive Services. A LUIS app transforms speech into text, analyses this text, and turns it into intents and entities.&lt;br&gt;
These intents and entities are evaluated in the client app and translated into actions and behavior of the app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Code on GitHub
&lt;/h3&gt;

&lt;p&gt;The GitHub repository mainly contains the Xamarin.Forms app and the definition of the LUIS app. &lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/FrankPohl" rel="noopener noreferrer"&gt;
        FrankPohl
      &lt;/a&gt; / &lt;a href="https://github.com/FrankPohl/healthmate" rel="noopener noreferrer"&gt;
        healthmate
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Using an Azure Bot and LUIS to input blood pressure, pulse and blood sugar
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Health Mate&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;This is a contribution to the Azure hackathon on DEV. See this post for more details &lt;a href="https://dev.to/devteam/hack-the-microsoft-azure-trial-on-dev-2ne5" rel="nofollow"&gt;https://dev.to/devteam/hack-the-microsoft-azure-trial-on-dev-2ne5&lt;/a&gt;.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Why Health Mate&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;Health Mate was created with people in mind who are not familiar with apps on the phone or desktop.
To help these people collect important health information like blood pressure or pulse on a regular basis, Health Mate uses speech as the interface instead of a keyboard.
What was measured, with which value, and when is captured in a natural-language dialog between the computer and the user.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Using the app&lt;/h2&gt;

&lt;/div&gt;

&lt;p&gt;The app can collect data for temperature, blood pressure, pulse, and glucose.
After the app is started, it is ready for input and prompts the user to say what was measured.
An example dialog may look like this:&lt;/p&gt;

&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;pre class="notranslate"&gt;&lt;code&gt;The system welcomes the user and asks for his input
    User says:&lt;/code&gt;&lt;/pre&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/FrankPohl/healthmate" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
The code on GitHub contains everything necessary to compile and run the app with Visual Studio 2022. The keys needed to connect to the LUIS app are part of the repository, too. They shouldn't be, but this was the easiest way to make my submission testable.&lt;/p&gt;

&lt;p&gt;The MIT license can be found in the LICENSE file and is part of the repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;p&gt;I have published a video with a demo on YouTube.&lt;br&gt;
&lt;a href="https://youtu.be/PqGQ7eDsDRo" rel="noopener noreferrer"&gt;DEV Hackathon- Healthmate Demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The readme.md file in Github contains more details about the implementation.&lt;/p&gt;

&lt;p&gt;There were no other developers involved in this work.&lt;/p&gt;

</description>
      <category>azuretrialhack</category>
      <category>cognitiveservices</category>
      <category>xamarinforms</category>
    </item>
    <item>
      <title>My way to show my anger about the war in Ukraine</title>
      <dc:creator>FrankPohl</dc:creator>
      <pubDate>Tue, 01 Mar 2022 16:51:17 +0000</pubDate>
      <link>https://forem.com/fp/my-way-to-show-my-anger-about-the-war-in-ukraine-10a0</link>
      <guid>https://forem.com/fp/my-way-to-show-my-anger-about-the-war-in-ukraine-10a0</guid>
      <description>&lt;p&gt;Today I started my boycott of Russia and Belarus.&lt;br&gt;
I removed my app Darts, as well as Speech4Excel and Speech4Excel Pro, from the Google Play Store and the Microsoft Store of Russia and Belarus. &lt;/p&gt;

&lt;p&gt;These are not the most important apps in the world, but maybe this can serve as an example for other developers to express their anger about this war.&lt;/p&gt;

&lt;p&gt;Please spread the word, and let us dry out the app stores of Russia and Belarus.&lt;/p&gt;

&lt;p&gt;Frank&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
