<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Michal Kovacik</title>
    <description>The latest articles on Forem by Michal Kovacik (@michal_kovacik).</description>
    <link>https://forem.com/michal_kovacik</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1053626%2Ffdeb3b91-8917-450a-a7bd-945d4436a8b8.jpg</url>
      <title>Forem: Michal Kovacik</title>
      <link>https://forem.com/michal_kovacik</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/michal_kovacik"/>
    <language>en</language>
    <item>
      <title>Can AI Help with Repository Base Code Understanding?</title>
      <dc:creator>Michal Kovacik</dc:creator>
      <pubDate>Wed, 19 Jun 2024 16:21:34 +0000</pubDate>
      <link>https://forem.com/michal_kovacik/can-ai-help-with-repository-base-code-understanding-1la</link>
      <guid>https://forem.com/michal_kovacik/can-ai-help-with-repository-base-code-understanding-1la</guid>
      <description>&lt;p&gt;Understanding and maintaining large codebases is a common challenge in software development, leading to significant time and resource expenditure. Addressing this issue is essential for improving developer productivity and reducing technical debt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is code?&lt;/strong&gt; Code is a recipe for solving a concrete problem. With just the code, you can reverse-engineer which problem it solves and how. This reverse engineering lets you formulate user stories describing the problem, and from those user stories, AI can generate new code. Is this just theoretical, or can current technology help build tools that solve this problem?&lt;/p&gt;

&lt;p&gt;In DTIT, particularly within AI4Coding, we’re thinking about technical debt and how to address it.&lt;/p&gt;

&lt;p&gt;We start from the premise that the current state of AI systems is not able to offer the in-depth contextual understanding necessary for effective coding support at the repository level. Users of AI tools for code generation and completion often encounter reliability issues when dealing with larger codebases.&lt;/p&gt;

&lt;p&gt;Our research indicates that RAG (retrieval-augmented generation) can be beneficial but has limits. Even agentic approaches combined with Chain-of-Thought or Tree-of-Thoughts prompting are insufficient on their own and can be costly. What else can help? Abstract syntax trees (ASTs) are useful, but they don’t provide a repository-level understanding of the code.&lt;/p&gt;

&lt;p&gt;Current research shows that knowledge graphs excel in modeling complex relationships and dependencies within code across entire repositories. We utilize RAG, Agentic approaches, and ASTs, but knowledge graphs have been a game-changer for our product—Advanced Coding Assistant.&lt;/p&gt;
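&lt;p&gt;As a minimal sketch of this idea (the entity and relation names are invented for illustration), code elements become graph nodes and relationships such as "calls" or "imports" become edges, so a question about one function can pull in its repository-level neighborhood:&lt;/p&gt;

```python
# Minimal sketch: a code knowledge graph as an adjacency structure.
# Entity and relation names are illustrative, not from a real repository.
from collections import defaultdict, deque

class CodeKnowledgeGraph:
    def __init__(self):
        # edges[source] holds (relation, target) pairs
        self.edges = defaultdict(list)

    def add_relation(self, source, relation, target):
        self.edges[source].append((relation, target))

    def context_for(self, entity, max_hops=2):
        """Collect entities reachable within max_hops, i.e. the
        repository-level context to hand to an LLM prompt."""
        seen, queue = {entity}, deque([(entity, 0)])
        while queue:
            node, depth = queue.popleft()
            if depth == max_hops:
                continue
            for relation, target in self.edges[node]:
                if target not in seen:
                    seen.add(target)
                    queue.append((target, depth + 1))
        return sorted(seen - {entity})

graph = CodeKnowledgeGraph()
graph.add_relation("billing.invoice", "calls", "billing.tax")
graph.add_relation("billing.tax", "reads", "config.rates")
graph.add_relation("api.checkout", "imports", "billing.invoice")

print(graph.context_for("billing.invoice"))
# prints ['billing.tax', 'config.rates']
```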

&lt;p&gt;&lt;strong&gt;Why do we still have “assistant” in the title?&lt;/strong&gt; Even though we are trying to use all known best approaches, keeping the developer in the loop is crucial.&lt;/p&gt;

&lt;p&gt;So, my answer to the introductory theoretical question is YES. But we are not in the Harry Potter universe: AI is not a magic wand, and you cannot expect a “one click” solution. However, providing developers with tools that enhance code understanding at the project level enables them not only to work faster but also to tackle tasks that were previously unsolvable.&lt;/p&gt;

&lt;p&gt;For more information, please read the articles by my colleagues:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@cyrilsadovsky/advanced-coding-chatbot-knowledge-graphs-and-asts-0c18c90373be"&gt;https://medium.com/@cyrilsadovsky/advanced-coding-chatbot-knowledge-graphs-and-asts-0c18c90373be&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@ziche94/building-knowledge-graph-over-a-codebase-for-llm-245686917f96"&gt;https://medium.com/@ziche94/building-knowledge-graph-over-a-codebase-for-llm-245686917f96&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Stay tuned for more information. We will definitely share results from our research.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AI Chat Applications with the Metacognition Approach: Tree of Thoughts (ToT)</title>
      <dc:creator>Michal Kovacik</dc:creator>
      <pubDate>Fri, 22 Mar 2024 12:02:46 +0000</pubDate>
      <link>https://forem.com/michal_kovacik/ai-chat-applications-with-the-metacognition-approach-tree-of-thoughts-tot-579k</link>
      <guid>https://forem.com/michal_kovacik/ai-chat-applications-with-the-metacognition-approach-tree-of-thoughts-tot-579k</guid>
      <description>&lt;p&gt;In the rapidly evolving field of artificial intelligence (AI), the quest for creating chat applications that can understand and respond with almost human-like accuracy and context-awareness has led to significant innovations. One such innovation is the application of metacognitive strategies, particularly the ToT approach, integrated with Retrieval-Augmented Generation (RAG) technology. This combination promises to revolutionize how chatbots process information and interact with users.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding Metacognition in AI&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Metacognition, in the context of AI, refers to the ability of systems to understand and analyze their own thought processes. In human cognition, this involves self-reflection and awareness of one's own knowledge and the ability to adjust strategies accordingly. When applied to AI, metacognition enables chat applications to assess their response mechanisms, leading to more accurate and contextually relevant interactions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/abs/2305.10601"&gt;[2305.10601] Tree of Thoughts: Deliberate Problem Solving with Large Language Models (arxiv.org)&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Role of RAG in Chat Applications&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Retrieval-Augmented Generation (RAG) is a technique that enhances chatbot responses by retrieving relevant information from a dataset to support the generation of answers. This method allows chat applications to pull from a vast pool of data, ensuring that responses are not only relevant but also enriched with the context necessary for meaningful conversations.&lt;/p&gt;
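&lt;p&gt;The retrieval step can be illustrated with a deliberately naive sketch. A production system would use embedding search instead of term overlap, and the document snippets below are invented:&lt;/p&gt;

```python
# Minimal RAG sketch: score documents by term overlap with the query,
# then build an augmented prompt from the best matches.
# A real system would use embeddings; snippets are invented.

def retrieve(query, documents, top_k=2):
    query_terms = set(query.lower().split())
    def score(doc):
        return len(query_terms.intersection(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:top_k]

documents = [
    "streamlit renders the chat user interface",
    "the flask service exposes chat completion endpoints",
    "perl scripts are translated to csharp",
]

context = retrieve("how does the flask chat service work", documents)
prompt = "Answer using this context:\n" + "\n".join(context) + "\nQuestion: ..."
print(prompt)
```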

&lt;p&gt;&lt;a href="https://arxiv.org/pdf/2312.10997.pdf"&gt;Retrieval-Augmented Generation for Large Language Models: A Survey  (arxiv.org)&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Tree of Thoughts Approach&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The ToT is a conceptual framework that organizes information in a hierarchical structure, mirroring how human thought processes branch out and interconnect. In chat applications, this approach can manage dependent and independent pieces of information, ensuring that all relevant context is considered in the response generation process.&lt;/p&gt;

&lt;p&gt;For instance, when translating code from one file, a chatbot might need additional context from related files or documentation. The ToT approach ensures that these connections are made, enriching the chatbot's response with a comprehensive understanding of the query.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Implementing Tree of Thoughts in Chat Applications with RAG&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To address the challenge of integrating ToT with RAG in a chat application, particularly for code translation tasks (where I have the most experience, for example translating Perl to C#), it's essential to consider dependencies that standard prompting methods often overlook. Here's an enhanced approach with concrete examples:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step-by-Step Integration with Concrete Examples&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Structuring and Dependency Mapping&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Begin by organizing your dataset and code repositories in a hierarchical structure that reflects the dependencies among various pieces of code, libraries, and documentation.

&lt;ul&gt;
&lt;li&gt;abstract syntax trees (ASTs) via &lt;a href="https://tree-sitter.github.io/tree-sitter/"&gt;Tree-sitter&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;dependency analysis and change may-impact analysis, as mentioned in &lt;a href="https://arxiv.org/abs/2309.12499"&gt;CodePlan&lt;/a&gt; (these features help in understanding the complex interdependencies within the code repository and in predicting how specific updates might affect other areas of the codebase)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: For a Perl to C# translation task, create a dependency graph that maps out how different Perl scripts interact with each other and with external libraries. This graph will guide the retrieval process.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval Mechanism with Dependency Awareness&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Implement a retrieval system capable of understanding and navigating the dependency graph to fetch not just the target script for translation but also any dependent scripts and libraries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: If a Perl script &lt;strong&gt;&lt;code&gt;main.pl&lt;/code&gt;&lt;/strong&gt; uses a module &lt;strong&gt;&lt;code&gt;helper.pl&lt;/code&gt;&lt;/strong&gt;, the retrieval system should fetch both &lt;strong&gt;&lt;code&gt;main.pl&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;helper.pl&lt;/code&gt;&lt;/strong&gt; when tasked with translating &lt;strong&gt;&lt;code&gt;main.pl&lt;/code&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Prompting for Translation&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Use the retrieved information to construct enhanced prompts for the RAG model. These prompts should include context about the dependencies to inform the translation process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: When translating &lt;strong&gt;&lt;code&gt;main.pl&lt;/code&gt;&lt;/strong&gt; to C#, the prompt to the RAG model should not only include the content of &lt;strong&gt;&lt;code&gt;main.pl&lt;/code&gt;&lt;/strong&gt; but also a summary or key functions from &lt;strong&gt;&lt;code&gt;helper.pl&lt;/code&gt;&lt;/strong&gt; to ensure the translated C# code maintains functional integrity.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterative Refinement with ToT&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Apply the Tree of Thoughts framework to iteratively refine the translation. After an initial translation attempt, use ToT to explore alternative translations or adjustments based on the dependencies and the overall structure of the code. For a better understanding, you can read Cyril Sadovsky's &lt;a href="https://medium.com/aimonks/metacognition-experiments-with-ai-8ba10f284e4c"&gt;blog post&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;: If the initial translation of &lt;strong&gt;&lt;code&gt;main.pl&lt;/code&gt;&lt;/strong&gt; misses a crucial aspect handled in &lt;strong&gt;&lt;code&gt;helper.pl&lt;/code&gt;&lt;/strong&gt;, ToT can guide the model to reconsider this dependency and adjust the translation accordingly.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
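&lt;p&gt;The steps above can be sketched end to end. The file contents and the dependency map below are hypothetical, and a real system would derive the map from ASTs rather than hard-code it:&lt;/p&gt;

```python
# Sketch of steps 1-3: a hypothetical dependency map (step 1) drives
# dependency-aware retrieval (step 2) and an enhanced prompt (step 3).
# File names and contents are invented for illustration.

dependencies = {
    "main.pl": ["helper.pl"],
    "helper.pl": [],
}

sources = {
    "main.pl": "use helper; print greet('world');",
    "helper.pl": "sub greet { my ($name) = @_; return \"hello $name\"; }",
}

def retrieve_with_dependencies(target):
    """Fetch the target file plus everything it depends on (step 2)."""
    ordered, stack = [], [target]
    while stack:
        name = stack.pop()
        if name not in ordered:
            ordered.append(name)
            stack.extend(dependencies.get(name, []))
    return ordered

def build_translation_prompt(target):
    """Enhanced prompt including dependency context (step 3)."""
    files = retrieve_with_dependencies(target)
    parts = [f"--- {name} ---\n{sources[name]}" for name in files]
    return (
        "Translate " + target + " from Perl to C#, preserving the behaviour "
        "of its dependencies:\n" + "\n".join(parts)
    )

print(build_translation_prompt("main.pl"))
```

&lt;p&gt;Step 4 would then feed this prompt to the model repeatedly, branching on alternative translations and pruning the ones that break a dependency, in the spirit of ToT.&lt;/p&gt;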

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzze74hksnh1au2rjk8ug.png" alt="Image description" width="800" height="360"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Fig.1 - Tree of Thoughts method visualization. On the Game of 24 benchmark, GPT-4 with chain-of-thought prompting solved only 4% of tasks, while the ToT method achieved a success rate of 74%. Code repo with all prompts: &lt;a href="https://github.com/princeton-nlp/tree-of-thought-llm"&gt;princeton-nlp/tree-of-thought-llm&lt;/a&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  The most commonly used strategy: the Multi-Agent Approach
&lt;/h3&gt;

&lt;p&gt;The multi-agent approach in chatbot development involves the coordination of multiple AI agents or models to handle different aspects of a conversation or to combine their strengths for complex tasks.&lt;/p&gt;

&lt;p&gt;Implementation examples:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.llamaindex.ai/blog/agentic-rag-with-llamaindex-2721b8a49ff6"&gt;Agentic RAG With LlamaIndex — LlamaIndex, Data Framework for LLM Applications&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.langchain.dev/langgraph-multi-agent-workflows/"&gt;LangGraph: Multi-Agent Workflows (langchain.dev)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Product which you can try - &lt;a href="https://github.com/Pythagora-io/pythagora"&gt;https://github.com/Pythagora-io/pythagora&lt;/a&gt;, check also video - &lt;a href="https://www.youtube.com/watch?v=xQlnqTMC9xA"&gt;Open-Source AI Agent Can Build FULL STACK Apps (FREE “Devin” Alternative) (youtube.com)&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Metacognition approach vs. multi-agent approach
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complexity and Implementation:&lt;/strong&gt; The metacognition approach focuses on the internal processes of a single agent, which might be simpler to implement but challenging to perfect, whereas the multi-agent approach involves coordinating multiple components, which can be more complex but offers scalability and specialization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility vs. Specialization:&lt;/strong&gt; Metacognition lends flexibility and adaptability to a single agent, making it better at self-improvement and handling a wide range of interactions with some depth. In contrast, the multi-agent approach leverages specialization, potentially offering depth in specific domains and a broader overall range of expertise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Potential:&lt;/strong&gt; These approaches are not mutually exclusive and could be integrated for enhanced performance. For instance, a multi-agent system could include a metacognitive agent responsible for assessing the system's performance and guiding the collaboration among agents.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why consider the metacognition approach?
&lt;/h2&gt;

&lt;p&gt;The integration of the metacognition approach, particularly the ToT, with RAG technology represents a significant leap forward in the development of AI chat applications. By enabling chatbots to understand and utilize the full context of their interactions, we can create more engaging, accurate, and human-like conversational experiences. As we continue to explore these innovative approaches, the potential for AI to understand and interact with the world in a more nuanced and meaningful way seems increasingly within reach.&lt;/p&gt;

&lt;p&gt;Comment: I really enjoy generating pictures with AI, so please do not hate me for my header picture.&lt;/p&gt;

</description>
      <category>tot</category>
      <category>ai</category>
      <category>rag</category>
      <category>chatbot</category>
    </item>
    <item>
      <title>Simplifying AI Integration with API Standards.</title>
      <dc:creator>Michal Kovacik</dc:creator>
      <pubDate>Tue, 27 Feb 2024 14:13:25 +0000</pubDate>
      <link>https://forem.com/michal_kovacik/simplifying-ai-integration-with-api-standards-18dg</link>
      <guid>https://forem.com/michal_kovacik/simplifying-ai-integration-with-api-standards-18dg</guid>
      <description>&lt;p&gt;The backbone of effective AI integration lies in the establishment and adherence to API standards. These standards are not merely guidelines but are instrumental in ensuring that different components of an application, such as backend services and front-end interfaces, can communicate effortlessly. The project under discussion serves as an exemplary case of this principle in action.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example Project Overview
&lt;/h3&gt;

&lt;p&gt;This Flask application acts as a middleman, facilitating communication between an AI model (for example, OpenAI's ChatGPT) and a user interface built with Streamlit. The key to its swift integration lies in the standardized API endpoints and data exchange formats, which are in line with OpenAI's API standards.&lt;/p&gt;

&lt;h3&gt;
  
  
  Standardization at Work
&lt;/h3&gt;

&lt;p&gt;The Flask application defines the following endpoint:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;/v1/chat/completions&lt;/strong&gt;: Handles requests to generate chat completions based on user prompts.&lt;/p&gt;

&lt;p&gt;This endpoint is designed to expect and return data in a structured format, mirroring the standards set by OpenAI. This consistency is crucial for integrating with ChatGPT (or a different LLM, including local deployments) and ensures that adding a new front-end interface like Streamlit is straightforward.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Snippets Demonstrating Standards and Integration
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Flask Endpoint for Chat Completions&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;pythonCopy&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;
&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/v1/chat/completions&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;chat_completions&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;request_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;
    &lt;span class="c1"&gt;# Standardized request handling
&lt;/span&gt;    &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;default-model&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;session_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;session_id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;security_token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;token&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;...)&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code snippet shows how the Flask app handles POST requests to &lt;strong&gt;&lt;code&gt;/v1/chat/completions&lt;/code&gt;&lt;/strong&gt;. It adheres to a structured request format, expecting specific fields such as &lt;strong&gt;&lt;code&gt;model&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;prompt&lt;/code&gt;&lt;/strong&gt;. This alignment with OpenAI's API standards ensures that the application can easily parse requests and communicate with external AI services. On top of that, you can extend the call with your own properties, such as session_id and security_token.&lt;/p&gt;
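&lt;p&gt;For example, a client request to this endpoint might be assembled as follows; the model name, session id, and token are placeholder values:&lt;/p&gt;

```python
# Hypothetical client payload for the Flask endpoint described above.
# The model name, session id, and token are placeholder values.
import json

payload = {
    "model": "gpt-4",
    "prompt": "Summarize our API standards in one sentence.",
    # Custom extensions carried alongside the standard fields:
    "session_id": "demo-session-001",
    "token": "placeholder-security-token",
}

body = json.dumps(payload)
# A real call would be:
#   requests.post("http://localhost:5000/v1/chat/completions", json=payload)
print(body)
```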

&lt;p&gt;&lt;strong&gt;Streamlit Interface for User Interaction&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;pythonCopy&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;streamlit&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Chat with AI&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="bp"&gt;...&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;api_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;session_id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;token&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;security&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="bp"&gt;...&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, the Streamlit script illustrates how the front end consumes the Flask API, sending data in a structured format that matches the expectations of the Flask endpoints. This seamless integration is made possible by the consistent application of API standards across both the Flask application and the Streamlit front end. It's also important to note that you can implement custom properties, as was proposed in the Flask backend app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrate, Integrate, Integrate
&lt;/h3&gt;

&lt;p&gt;The integration demonstrated in this project underscores the value of API standards in bridging the gap between complex AI functionalities, your services, and user-friendly interfaces. Thus, you can integrate open-source products, your custom products, or commercial off-the-shelf (COTS) software in a very simple and direct way.&lt;/p&gt;

&lt;p&gt;This practical example serves as a blueprint for developers looking to utilize the capabilities of AI in their applications, highlighting that, through the lens of standards, the path to integration is not only viable but streamlined.&lt;/p&gt;

&lt;p&gt;For a deeper dive into the project and its implementation, exploring the &lt;a href="https://github.com/MKovacik/PoC-OpenAI-API-Standards" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; will provide additional insights and the full codebase.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjl4pjit0gftn41927dc.png" alt="Image description"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Fig.1 - The diagram presented highlights the difficulties that development teams may face when choosing to implement custom APIs for AI services. Custom solutions often lead to a complex mixture of integrations, each with unique maintenance and compatibility requirements—elements that can slow down the development process and increase the workload. On the other hand, embracing OpenAI API standards can simplify the integration process, promoting consistency and speeding up the progression of development projects.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Why use OpenAI API standards:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The OpenAI API is the most popular format for describing AI APIs, leading to more community support and a proliferation of tools that leverage it to build AI-enabled applications.&lt;/li&gt;
&lt;li&gt;OpenAI API standards provide a clear and concise way to define API endpoints, parameters, and responses, which reduces the risk of errors and bugs during integration, making APIs more developer-friendly and easier to use.&lt;/li&gt;
&lt;li&gt;A valid OpenAPI specification can save significant time and resources, allowing for quick and correct SDK generation, reducing support queries, and simplifying the integration process.&lt;/li&gt;
&lt;li&gt;Investing in a fully compliant OpenAPI specification can reduce costs by eliminating the need for managing separate SDK development teams.&lt;/li&gt;
&lt;li&gt;OpenAI API standards allow for the creation of AI solutions tailored to specific industries, providing insights that can inform long-term strategy.&lt;/li&gt;
&lt;li&gt;Official documentation - &lt;a href="https://github.com/openai/openai-openapi" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;List of libraries and articles for inspiration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/docs/libraries" rel="noopener noreferrer"&gt;https://platform.openai.com/docs/libraries&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://vercel.com/docs/integrations/ai/openai" rel="noopener noreferrer"&gt;https://vercel.com/docs/integrations/ai/openai&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/TheoKanning/openai-java" rel="noopener noreferrer"&gt;https://github.com/TheoKanning/openai-java&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devblogs.microsoft.com/dotnet/getting-started-azure-openai-dotnet/" rel="noopener noreferrer"&gt;https://devblogs.microsoft.com/dotnet/getting-started-azure-openai-dotnet/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.c-sharpcorner.com/article/consume-chat-gpt-open-ai-api-inside-net-core-application-using-razor-pages/" rel="noopener noreferrer"&gt;https://www.c-sharpcorner.com/article/consume-chat-gpt-open-ai-api-inside-net-core-application-using-razor-pages/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/pulse/why-valid-openapi-specification-key-api-success-2023-lessons-melvin/" rel="noopener noreferrer"&gt;https://www.linkedin.com/pulse/why-valid-openapi-specification-key-api-success-2023-lessons-melvin/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks to &lt;a href="https://www.linkedin.com/in/cyril-s-92907566/" rel="noopener noreferrer"&gt;Cyril Sadovsky&lt;/a&gt; for feedback and hints.&lt;/p&gt;

&lt;p&gt;Comment: I really enjoy generating pictures with AI, so please do not hate me for my header picture.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Theory Machine: A self-evolving Architecture</title>
      <dc:creator>Michal Kovacik</dc:creator>
      <pubDate>Mon, 29 Jan 2024 09:17:09 +0000</pubDate>
      <link>https://forem.com/michal_kovacik/theory-machine-a-self-evolving-architecture-4883</link>
      <guid>https://forem.com/michal_kovacik/theory-machine-a-self-evolving-architecture-4883</guid>
      <description>&lt;p&gt;I want to share what I've been working on during my personal time lately. Along with my colleague &lt;a href="https://www.linkedin.com/in/cyril-s-92907566/"&gt;Cyril Sadovsky&lt;/a&gt;, we've create a proposal titled "&lt;a href="https://www.researchgate.net/publication/377565623_THEORY_MACHINE_-_BRIDGING_GENERATIVE_AI_CREATIVITY_AND_TURING_MACHINE_FORMALITY_A_PROPOSAL"&gt;THEORY MACHINE - BRIDGING GENERATIVE AI CREATIVITY AND TURING MACHINE FORMALITY: A PROPOSAL&lt;/a&gt;". It's an exploration into how Generative AI and Turing Machines might integrate.&lt;/p&gt;

&lt;p&gt;This idea started to take shape during a conversation with Cyril about Turing machines. I remembered a story from my father about Alan Turing: during his university days in the early '80s in a communist country, Turing's work was almost unknown (Bletchley Park’s operation was only declassified in the mid-1970s, and the “Iron Curtain” did not fall until 1989). This didn't directly inspire our work but nudged me to join Cyril on this quest. It reminded me that all great ideas begin as mere theories, often unrecognized. As Ygritte said to Jon Snow in Game of Thrones, “You know nothing” – a reminder that the end of a journey is unknown, but starting it is what counts.&lt;/p&gt;

&lt;p&gt;In our paper, we propose the 'Theory Machine', a conceptual framework that aims to combine the imaginative potential of Generative AI with the structured precision of Turing Machines. Our goal is to see how Foundational Models might be enhanced in a self-optimizing system. This proposal isn't about claiming a breakthrough; it's about exploring a hypothesis where AI-generated Turing Machines could evolve and adapt by processing continuous streams of data, transforming abstract 'noise' into structured, testable forms.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklzg6ay4nn7a5bcqmkif.png" alt="Theory Mashine" width="436" height="650"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;em&gt;Fig.1 - Our approach is based on the idea of Recursive Language/Turing Machines, which we envision as a superset of traditional Turing Machines and Neural Networks&lt;/em&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;After introducing our 'Theory Machine' in the paper, it's important to break down what this really means for those who aren't deeply immersed in technical jargon. Simply put, the 'Theory Machine' is like a sophisticated translator that takes the language of AI creativity and meshes it seamlessly with the logical world of Turing Machines, which are the fundamental principles guiding how computers operate.&lt;/p&gt;

&lt;p&gt;Imagine a scenario where AI isn't just following commands, but actively learning, adapting, and evolving as it receives new information. It's like teaching a child not just to memorize facts, but to understand, question, and even challenge them in order to grow. This is what we envision with the 'Theory Machine'.&lt;/p&gt;

&lt;p&gt;For businesses, this could mean AI systems that not only perform tasks but also continuously improve their methods for greater efficiency. In healthcare, AI could evolve to better predict patient needs and treatment outcomes. Even in everyday life, this could lead to smarter, more intuitive technology that understands and adapts to your preferences and habits over time.&lt;/p&gt;

&lt;p&gt;Our proposal is the first step in exploring these potentials. It's about setting the stage for AI systems that aren't just tools, but partners in problem-solving and innovation.&lt;/p&gt;

&lt;p&gt;I would also recommend reading &lt;a href="https://medium.com/@cyrilsadovsky/ai-scientist-a-self-evolving-architecture-of-the-theory-machine-1f246e04ec2e"&gt;Cyril's blog post&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Exploring new solutions with AI and your data.</title>
      <dc:creator>Michal Kovacik</dc:creator>
      <pubDate>Mon, 14 Aug 2023 07:55:44 +0000</pubDate>
      <link>https://forem.com/michal_kovacik/exploring-new-frontiers-in-ai-development-microsoft-vs-amazon-solutions-a1j</link>
      <guid>https://forem.com/michal_kovacik/exploring-new-frontiers-in-ai-development-microsoft-vs-amazon-solutions-a1j</guid>
      <description>&lt;p&gt;As developers, we're always on the hunt for new and exciting tools. Recently, a few AI solutions have caught my eye. Let's explore what these solutions offer and how you, as a fellow developer, can make use of them.&lt;/p&gt;

&lt;h4&gt;
  
  
  Microsoft's Solution: ChatGPT with Azure Cognitive Search
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://techcommunity.microsoft.com/t5/azure-ai-services-blog/revolutionize-your-enterprise-data-with-chatgpt-next-gen-apps-w/ba-p/3762087"&gt;Learn More About Microsoft's Solution&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Microsoft pairs Azure Cognitive Search with the Azure OpenAI Service. This synergy lets developers build ChatGPT-style solutions powered by their own proprietary data. By using Azure's robust enterprise features, Cognitive Search's adeptness at indexing and retrieving data, and ChatGPT's natural language interaction capabilities, businesses can create dynamic conversational experiences rooted in their own knowledge bases.&lt;/p&gt;

&lt;h4&gt;
  
  
  Amazon's Solution: Generative AI with Kendra and LangChain
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/quickly-build-high-accuracy-generative-ai-applications-on-enterprise-data-using-amazon-kendra-langchain-and-large-language-models/"&gt;Learn More About Amazon's Solution&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS presents a solution for Generative AI applications using Amazon Kendra, large language models (LLMs), and the LangChain framework. Kendra retrieves relevant enterprise data, which is then processed by LLMs to produce accurate responses, all orchestrated through LangChain.&lt;/p&gt;

&lt;h4&gt;
  
  
  Google's Solution: Generative AI with Vertex AI PaLM 2 and LangChain
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/blog/products/ai-machine-learning/generative-ai-applications-with-vertex-ai-palm-2-models-and-langchain"&gt;Learn More About Google's Solution&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Google Cloud offers an approach to Generative AI applications that integrates Vertex AI PaLM 2 models with the LangChain framework. The Vertex AI PaLM models, tailored for text and chat interactions, work in tandem with LangChain, allowing Large Language Models (LLMs) to interact with external systems, such as databases or Google Search. This ensures that the AI not only understands but also reasons with data in a context-rich manner.&lt;/p&gt;

&lt;p&gt;All three solutions emphasize grounding AI in proprietary data through the Retrieval Augmented Generation (RAG) technique. While each solution has its unique features and benefits, this shared focus on RAG underscores its significance in the current AI landscape and its potential to change how we interact with and leverage AI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Innovation: The Fridge Story
&lt;/h3&gt;

&lt;p&gt;Who profits most from fridges? Not the fridge sellers, but Coca-Cola - they make the products that go in them. The story shows how we developers can treat AI tools like the fridge: a platform on which to build new things.&lt;/p&gt;

&lt;p&gt;Just as Coca-Cola used the fridge, we can use AI to carry our own ideas. The secret is how you apply and adapt it to new challenges.&lt;/p&gt;

&lt;h3&gt;
  
  
  LLM Techniques
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Fine-Tuning&lt;/strong&gt;&lt;br&gt;
Fine-tuning is the process of taking a pre-trained model and training it further on a smaller, specific dataset. This tailors the model's responses to particular tasks or domains, enhancing its performance on that specific data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adapts a general model to your domain-specific data.&lt;/li&gt;
&lt;li&gt;Improves quality on that domain, but training and hosting add cost.&lt;/li&gt;
&lt;li&gt;Requires periodic retraining to keep up with your latest business data.&lt;/li&gt;
&lt;/ul&gt;
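To make the fine-tuning step above concrete, here is a hypothetical sketch of preparing a training dataset in the JSONL "chat" format used by OpenAI-style fine-tuning APIs. The file name and the example conversation are illustrative, not from any real deployment:

```python
import json

# Each training example is one conversation in the chat format.
training_examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Acme."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings, choose Security, then select Reset password."},
        ]
    },
]

# One JSON object per line - the shape fine-tuning endpoints expect.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")
```

You would then upload this file to your provider's fine-tuning endpoint and train against your chosen base model.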

&lt;p&gt;&lt;strong&gt;Retrieval Augmented Generation (RAG)&lt;/strong&gt;&lt;br&gt;
RAG combines the power of large language models with external knowledge retrieval. It first fetches relevant documents or passages from a database and then generates a response based on the retrieved information and the input query.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grounds answers in fresh data without retraining the model.&lt;/li&gt;
&lt;li&gt;Cheaper than fine-tuning: you update the knowledge base, not the model.&lt;/li&gt;
&lt;li&gt;Keep the index updated and verify retrieved facts.&lt;/li&gt;
&lt;/ul&gt;
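The fetch-then-generate pattern described above can be sketched in a few lines of plain Python. The keyword retriever and the example documents below are toy stand-ins; a real system would use a vector store for retrieval and an LLM API call for the final generation step:

```python
import re

def _tokens(text):
    # Lowercase word tokens; punctuation and digits are stripped.
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, documents, k=1):
    """Toy retrieval step: rank documents by keyword overlap with the query."""
    return sorted(documents, key=lambda d: len(_tokens(query) & _tokens(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Generation step input: retrieved context plus the user's question."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping takes five business days.",
]
prompt = build_prompt("What is the refund policy?", docs)
# `prompt` would now be sent to the LLM of your choice.
```

The key design point is that the model only sees the retrieved context, which is what keeps answers anchored to your own data.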

&lt;p&gt;&lt;strong&gt;Prompt engineering&lt;/strong&gt;&lt;br&gt;
Prompting involves feeding a model a specific input or "prompt" to guide its output. It's a way to interact with pre-trained models, like asking a question and receiving an answer, without additional training.&lt;/p&gt;
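A tiny illustration of the idea: the question stays the same, but the instructions wrapped around it steer the model's behavior. No API call is shown, and the persona and constraint are made-up examples:

```python
def make_prompt(question, persona, constraint):
    """Wrap a raw question in instructions that steer the model."""
    return f"You are {persona}.\n{constraint}\n\nQuestion: {question}"

plain = "Explain retrieval augmented generation."
engineered = make_prompt(
    "Explain retrieval augmented generation.",
    persona="a developer advocate writing for beginners",
    constraint="Answer in two short sentences and give one concrete example.",
)
```

Sending `engineered` instead of `plain` typically yields a shorter, audience-appropriate answer, with no training involved.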

&lt;p&gt;By understanding these techniques, we can get more out of AI and build apps that keep pace with changes in data and technology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's try it&lt;/strong&gt;&lt;br&gt;
The world of AI has much to offer, and Microsoft's, Google's and Amazon's solutions are ready for you to try. Why not jump in and see what you can make?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Azure-Samples/azure-search-openai-demo/"&gt;Microsoft's Solution GitHub repo&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/aws-samples/amazon-kendra-langchain-extensions"&gt;Amazon's Solution GitHub repo&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/GoogleCloudPlatform/generative-ai"&gt;Google's Solution GitHub repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm trying these solutions myself and will share what I find. Stay tuned, and let's see where our creativity takes us.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Creating Webex Bot with GPT - A Simple Guide</title>
      <dc:creator>Michal Kovacik</dc:creator>
      <pubDate>Wed, 24 May 2023 20:16:52 +0000</pubDate>
      <link>https://forem.com/michal_kovacik/creating-webex-bot-with-gpt-a-simple-guide-3j6i</link>
      <guid>https://forem.com/michal_kovacik/creating-webex-bot-with-gpt-a-simple-guide-3j6i</guid>
      <description>&lt;p&gt;In an age where automation and efficiency are key to productivity, the integration of AI models like GPT with chat platforms has become essential. One such powerful integration is that of GPT (Generative Pre-training Transformer) with Webex, a widely used team collaboration tool. In this blog post, we will explore how you can easily create a Webex bot that harnesses the power of GPT and discuss some of the potential business applications of this combination.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Power of GPT and Webex&lt;/strong&gt;&lt;br&gt;
GPT is an advanced language model that uses machine learning to generate human-like text. When integrated with a communication platform like Webex, it enables the creation of a bot capable of answering queries, giving responses, and holding conversations that closely resemble human interaction.&lt;/p&gt;

&lt;p&gt;Webex is a platform loved for its convenience and efficiency in handling team collaborations, meetings, and chats. It already includes user authentication which, when coupled with a GPT-powered bot, provides a secure, effective, and personalized user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Combine GPT and Webex?&lt;/strong&gt;&lt;br&gt;
This integration has several advantages. Firstly, the combination provides an AI assistant that is capable of human-like interactions. This can help in automating responses to FAQs, handling customer queries, scheduling meetings, and much more. It essentially makes your Webex bot much smarter and more useful.&lt;br&gt;
Secondly, with Webex's built-in user authentication, the security and privacy of your interactions with the GPT-powered bot are ensured. This is particularly important in a world where data privacy and security are paramount.&lt;br&gt;
Finally, the integration finds relevance in numerous business scenarios. It could serve as a virtual assistant, a customer support agent, a meeting scheduler, or even a tool for interactive language learning. The possibilities are immense and largely unexplored.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your Guide to Creating a Webex Bot Powered by GPT&lt;/strong&gt;&lt;br&gt;
If you're interested in leveraging the power of GPT for your Webex bot, look no further. I've shared a project on my &lt;a href="https://github.com/MKovacik/WebexBotWithAzure"&gt;GitHub&lt;/a&gt; that makes this process incredibly easy. The project hosts a Webex bot on Azure and uses an instance of GPT in Azure OpenAI Studio.&lt;br&gt;
The repository includes detailed instructions on setting up the bot, configuring the GPT instance in Azure, and deploying the entire application as an Azure Web App. This way, you can harness the power of GPT in your Webex bot with minimal hassle.&lt;/p&gt;
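To make the moving parts concrete, here is a minimal, hypothetical sketch of the heart of such a bot: a webhook handler that takes a Webex message event and produces the reply payload. The event shape follows Webex's webhook notifications, but the GPT call is stubbed out and the Webex REST calls are only described in comments:

```python
def handle_webhook(event, ask_gpt):
    """Given a Webex webhook event, return the payload for the bot's reply.

    `event` is the JSON body Webex POSTs to your webhook; `ask_gpt` is
    any callable that maps the user's text to an answer.
    """
    room_id = event["data"]["roomId"]
    # In a real bot you would first fetch the message text with
    # GET /v1/messages/{id} using your bot token; here we assume it is present.
    user_text = event["data"].get("text", "")
    answer = ask_gpt(user_text)
    # This dict is what you would POST to https://webexapis.com/v1/messages
    # to send the reply back into the room.
    return {"roomId": room_id, "markdown": answer}

# Example with a stubbed model in place of the GPT deployment:
reply = handle_webhook(
    {"data": {"roomId": "room123", "text": "What is GPT?"}},
    ask_gpt=lambda q: f"(GPT answer to: {q})",
)
```

The full project on GitHub wires this same flow into Flask and Azure OpenAI; the sketch only shows the request-to-reply plumbing.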

&lt;p&gt;The combination of GPT and Webex opens up exciting possibilities in the realm of AI chatbots. With the authentication already handled by Webex and the conversational capabilities of GPT, you can create an AI assistant that is both smart and secure. Check out the GitHub project to get started on creating your own GPT-powered Webex bot today!&lt;/p&gt;

&lt;p&gt;Remember, the journey of exploration and innovation is ongoing. Who knows what exciting applications you might find for your new GPT-powered bot?&lt;/p&gt;

</description>
      <category>azure</category>
      <category>gpt3</category>
      <category>webex</category>
      <category>python</category>
    </item>
    <item>
      <title>Power of Azure OpenAI Services with Python, Flask, and React</title>
      <dc:creator>Michal Kovacik</dc:creator>
      <pubDate>Sun, 16 Apr 2023 17:35:07 +0000</pubDate>
      <link>https://forem.com/michal_kovacik/unleashing-the-power-of-azure-openai-services-with-python-flask-and-react-7km</link>
      <guid>https://forem.com/michal_kovacik/unleashing-the-power-of-azure-openai-services-with-python-flask-and-react-7km</guid>
      <description>&lt;p&gt;Azure OpenAI Services offers an easy-to-use platform for deploying and managing powerful AI models like GPT-3.5-turbo. In this blog post, we will dive into the process of creating a Python Flask backend and React frontend that interact with Azure OpenAI Services to generate interactive conversations using the GPT-3.5-turbo model. We will also provide code examples to demonstrate how you can seamlessly integrate Azure OpenAI Services into your backend and frontend applications.&lt;/p&gt;

&lt;p&gt;Setting Up Azure OpenAI Services:&lt;br&gt;
Before diving into the code, you need to set up Azure OpenAI Services. Follow the instructions in the &lt;a href="https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource"&gt;official documentation&lt;/a&gt; to create a resource, deploy a model, and retrieve your resource's endpoint and API key.&lt;/p&gt;

&lt;p&gt;Backend Implementation:&lt;br&gt;
With Azure OpenAI Services set up, we can now create a Python Flask backend to interact with it. First, install the necessary libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install Flask flask-cors openai tiktoken
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a short example of how to create a Flask backend that communicates with Azure OpenAI Services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask, request, jsonify
from flask_cors import CORS  # Import the CORS package
import tiktoken
import openai

app = Flask(__name__)
CORS(app)  # Enable CORS for your Flask app and specify allowed origins

openai.api_type = "azure"
openai.api_version = "2023-03-15-preview" 
openai.api_base = "YOUR_AZURE_OPENAI_RESOURCE_ENDPOINT"
openai.api_key = "YOUR_AZURE_OPENAI_API_KEY"

deployment_name = "cs-chat"

max_response_tokens = 250
token_limit= 4000

def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301"):
    encoding = tiktoken.encoding_for_model(model)
    num_tokens = 0
    for message in messages:
        num_tokens += 4  # every message follows &amp;lt;im_start&amp;gt;{role/name}\n{content}&amp;lt;im_end&amp;gt;\n
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":  # if there's a name, the role is omitted
                num_tokens += -1  # role is always required and always 1 token
    num_tokens += 2  # every reply is primed with &amp;lt;im_start&amp;gt;assistant
    return num_tokens

@app.route('/api/chat', methods=['POST'])
def chat():
    user_input = request.json.get('user_input')
    conversation = request.json.get('conversation', [{"role": "system", "content": "You are a helpful assistant."}])

    conversation.append({"role": "user", "content": user_input})

    num_tokens = num_tokens_from_messages(conversation)
    while (num_tokens+max_response_tokens &amp;gt;= token_limit):
        del conversation[1] 
        num_tokens = num_tokens_from_messages(conversation)


    response = openai.ChatCompletion.create(
        engine=deployment_name, # The deployment name you chose when you deployed the ChatGPT or GPT-4 model.
        messages = conversation,
        temperature=0.7,
        max_tokens=max_response_tokens,
    )

    conversation.append({"role": "assistant", "content": response['choices'][0]['message']['content']})
    return jsonify(conversation)

if __name__ == '__main__':
    app.run(debug=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace YOUR_AZURE_OPENAI_RESOURCE_ENDPOINT and YOUR_AZURE_OPENAI_API_KEY with the respective values you obtained from your Azure OpenAI Service resource.&lt;/p&gt;
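In practice, avoid committing these values to source control. A small sketch of reading them from the environment instead (the variable names are my own convention, not prescribed by Azure):

```python
import os

# Read the Azure OpenAI settings from the environment, with placeholder
# fallbacks for local experimentation.
azure_config = {
    "api_type": "azure",
    "api_base": os.environ.get("AZURE_OPENAI_ENDPOINT", "https://example.openai.azure.com/"),
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY", "missing-key"),
}
# Then assign: openai.api_base = azure_config["api_base"], and so on.
```

This keeps secrets out of the repository and lets you use different resources per environment.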

&lt;p&gt;Frontend Implementation:&lt;br&gt;
For the frontend, you can use React to create a simple user interface that interacts with the Flask backend. Here's the code for App.js:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useState } from "react";
import axios from "axios";
import "./App.css";


function App() {
  const [input, setInput] = useState("");
  const [loading, setLoading] = useState(false);
  const [messages, setMessages] = useState([

  ]);

  const handleChange = (e) =&amp;gt; {
    setInput(e.target.value);
  };

  const handleSubmit = async (e) =&amp;gt; {
    e.preventDefault();
    setLoading(true);

    try {
      const response = await axios.post("http://localhost:5000/api/chat", {
        user_input: input,
        conversation: messages.length === 0 ? undefined : messages,
      });

      if (response.data &amp;amp;&amp;amp; Array.isArray(response.data)) {
        setMessages(response.data);
      } else {
        setMessages([...messages, { role: "assistant", content: "No response received. Please try again." }]);
      }
    } catch (error) {
      console.error(error);
      setMessages([...messages, { role: "assistant", content: "An error occurred. Please try again." }]);
    } finally {
      setInput("");
      setLoading(false);
    }
  };

  return (
    &amp;lt;div className="container"&amp;gt;
      &amp;lt;header className="header"&amp;gt;
        &amp;lt;h1&amp;gt;Mizo's ChatGPT App - instance on Azure &amp;lt;/h1&amp;gt;
      &amp;lt;/header&amp;gt;
      &amp;lt;main&amp;gt;
        &amp;lt;form onSubmit={handleSubmit}&amp;gt;
          &amp;lt;div className="form-group"&amp;gt;
            &amp;lt;label htmlFor="input"&amp;gt;Your question:&amp;lt;/label&amp;gt;
            &amp;lt;input
              id="input"
              type="text"
              className="form-control"
              value={input}
              onChange={handleChange}
            /&amp;gt;
          &amp;lt;/div&amp;gt;
          &amp;lt;button type="submit" className="btn btn-primary" disabled={loading}&amp;gt;
            {loading ? "Loading..." : "Submit"}
          &amp;lt;/button&amp;gt;
        &amp;lt;/form&amp;gt;
        &amp;lt;MessageList messages={messages} /&amp;gt;
      &amp;lt;/main&amp;gt;
    &amp;lt;/div&amp;gt;
  );
}

function MessageList({ messages }) {
  return (
    &amp;lt;div className="message-list"&amp;gt;
      &amp;lt;h2&amp;gt;Message History&amp;lt;/h2&amp;gt;
      &amp;lt;ul&amp;gt;
        {messages.map((message, index) =&amp;gt; (
          &amp;lt;li key={index}&amp;gt;
            &amp;lt;strong&amp;gt;{message.role === "user" ? "User" : "ChatGPT"}:&amp;lt;/strong&amp;gt;{" "}
            {message.content.includes("```") ? (
              &amp;lt;pre className={message.role === "user" ? "user-code" : "chatgpt-code"}&amp;gt;
                &amp;lt;code&amp;gt;{message.content.replace(/```/g, "")}&amp;lt;/code&amp;gt;
              &amp;lt;/pre&amp;gt;
            ) : (
              &amp;lt;pre className={message.role === "user" ? "user-message" : "chatgpt-message"}&amp;gt;
                {message.content}
              &amp;lt;/pre&amp;gt;
            )}
          &amp;lt;/li&amp;gt;
        ))}
      &amp;lt;/ul&amp;gt;
    &amp;lt;/div&amp;gt;
  );
}

export default App;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the Application:&lt;br&gt;
With both the backend and frontend implementations in place, you can now run your application. First, start the Flask backend by running the Python script. Next, start your React frontend with &lt;code&gt;npm start&lt;/code&gt; or &lt;code&gt;yarn start&lt;/code&gt;. Now, you can open the application in your browser and interact with the ChatGPT model powered by Azure OpenAI Services. To deploy, you can host the backend in Azure App Service and the frontend in Azure Static Web Apps - see the documentation &lt;a href="https://learn.microsoft.com/en-us/azure/app-service/quickstart-html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this blog post, we have demonstrated how to create a Python Flask backend and React frontend to interact with Azure OpenAI Services using GPT-3.5-turbo. By leveraging the power of Azure OpenAI Services, you can easily integrate advanced AI models into your applications and provide a seamless experience for your users.&lt;/p&gt;

&lt;p&gt;It's worth mentioning that there are many existing implementations of similar approaches available on platforms like GitHub. One such example is the &lt;a href="https://github.com/mckaywrigley/chatbot-ui"&gt;Chatbot UI&lt;/a&gt; by McKay Wrigley. While experimenting and trying things out is an excellent way to learn, utilising existing solutions can save you valuable time and help you focus on other aspects of your project. By exploring these resources and building upon them, you can speed up your development process and create more efficient and innovative applications.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

&lt;p&gt;For those who want to try it, here is the App.css code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;body {
  background-color: #f8f9fa;
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
    'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
    sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

.container {
  max-width: 960px;
  margin: 0 auto;
  padding: 2rem 1rem;
}

.header {
  text-align: center;
}

.form-group {
  margin-bottom: 1rem;
}

.form-control {
  display: block;
  width: 100%;
  padding: 0.5rem 0.75rem;
  font-size: 1rem;
  line-height: 1.5;
  color: #495057;
  background-color: #fff;
  background-clip: padding-box;
  border: 1px solid #ced4da;
  border-radius: 0.25rem;
  transition: border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out;
}

.form-control:focus {
  border-color: #80bdff;
  outline: 0;
  box-shadow: 0 0 0 0.2rem rgba(0, 123, 255, 0.25);
}

.btn {
  display: inline-block;
  font-weight: 400;
  color: #212529;
  text-align: center;
  vertical-align: middle;
  cursor: pointer;
  background-color: transparent;
  border: 1px solid transparent;
  padding: 0.5rem 0.75rem;
  font-size: 1rem;
  line-height: 1.5;
  border-radius: 0.25rem;
  user-select: none;
  transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out,
    border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out;
}

.btn-primary {
  color: #fff;
  background-color: #007bff;
  border-color: #007bff;
}

.btn-primary:hover {
  color: #fff;
  background-color: #0069d9;
  border-color: #0062cc;
}

.btn-primary:focus {
  color: #fff;
  background-color: #0069d9;
  border-color: #0062cc;
  box-shadow: 0 0 0 0.2rem rgba(38, 143, 255, 0.5);
}

.btn-primary:disabled {
  color: #fff;
  background-color: #007bff;
  border-color: #007bff;
  opacity: 0.65;
  cursor: not-allowed;
}

.message-list {
  margin-top: 2rem;
}

.message-list ul {
  list-style-type: none;
  padding: 0;
}

.message-list li {
  margin-bottom: 1rem;
  padding: 1rem;
  border: 1px solid #ddd;
  border-radius: 0.25rem;
  background-color: #f8f9fa;
}

.user-message,
.chatgpt-message {
  display: inline;
  white-space: pre-wrap;
  font-family: monospace;
  margin: 0;
}

.chatgpt-message {
  color: #007bff;
}

.user-message,
.chatgpt-message,
.user-code,
.chatgpt-code {
  display: inline;
  white-space: pre-wrap;
  margin: 0;
  font-size: 0.75rem;
}

.user-message,
.chatgpt-message {
  font-family: monospace;
}

.chatgpt-code {
  color: #007bff;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How ChatGPT Turned Me into an Interactive Console</title>
      <dc:creator>Michal Kovacik</dc:creator>
      <pubDate>Tue, 28 Mar 2023 10:05:45 +0000</pubDate>
      <link>https://forem.com/michal_kovacik/how-chatgpt-turned-me-into-an-interactive-console-482</link>
      <guid>https://forem.com/michal_kovacik/how-chatgpt-turned-me-into-an-interactive-console-482</guid>
      <description>&lt;p&gt;Join the journey of a software engineer who becomes an interactive console for ChatGPT, an AI language model. Witness code, collaboration, and AI-powered guidance as they tackle the GitLab API and Python code, and explore the potential and limitations of AI-human collaboration. Discover how the right guidance can conquer complex problems!&lt;/p&gt;

&lt;p&gt;It was a rainy Sunday, a day when most people would choose to stay indoors, warm and cozy. As the leader of a team of software engineers, and a developer at heart, I decided to combine work and play, embarking on a journey to the digital realm of ChatGPT.&lt;/p&gt;

&lt;p&gt;The adventure began with a simple query, echoing through the virtual landscape: "Can you recommend some interesting statistics I can create based on data from the GitLab API?" The AI oracle, ChatGPT, responded swiftly, offering a treasure trove of insights. Among the many options, I chose to focus on one specific statistic: "Projects with the longest open issues or merge requests."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpihab0alrusx8f3eu1pg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpihab0alrusx8f3eu1pg.jpg" alt="Image description" width="800" height="1183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I dove deeper into the code provided by ChatGPT, I stumbled upon a series of challenges. But the AI guide stood by my side, helping me troubleshoot and refine the code until it was perfect. It was during this process that I encountered a persistent error, and ChatGPT suggested: "We can add more print statements to help identify the issue further." It was at this moment that I realised the tables had turned - the "robot" was using me, a human, as a debugging console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf9pgrn7y1aewzgq97aq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhf9pgrn7y1aewzgq97aq.jpg" alt="Image description" width="796" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This unexpected role reversal led me to appreciate the power of AI in a new light. ChatGPT had transformed into an interactive console, guiding me through the debugging process with patience and wisdom. Together, we navigated the intricate world of GitLab APIs and Python code, resolving issues and refining solutions. Our collaboration became a story of discovery, learning, and problem-solving.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftc7jb70s48ca9m9m60lx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftc7jb70s48ca9m9m60lx.jpg" alt="Image description" width="797" height="1038"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, my journey with ChatGPT also revealed the limitations and risks of AI. As powerful and insightful as the AI language model may be, it still depends on the context and information provided by the human user. If I had given ChatGPT the wrong context, our collaboration might not have been as successful. While AI has made significant progress, it is essential to remember that it is not perfect. The human touch is still crucial in guiding the AI down the right path and making the most of its capabilities.&lt;/p&gt;

&lt;p&gt;In the end, not only did ChatGPT help me craft the perfect Python script, but it also taught me invaluable lessons about troubleshooting and refining code. As a final request, I asked ChatGPT to assist me in sorting projects by the descending median time to close merge requests. Once again, ChatGPT delivered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuk9sgmexdqdvp2vi64hr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuk9sgmexdqdvp2vi64hr.jpg" alt="Image description" width="800" height="1222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My adventure with ChatGPT was an extraordinary journey of code, collaboration, and AI-powered guidance. It was a reminder that, with the right guidance, even the most complex problems can be conquered.&lt;/p&gt;

&lt;p&gt;And so, dear colleagues, I share with you this story of a rainy Sunday, where the lines between human and AI blurred, and I became an interactive console for an AI language model. May your own encounters with ChatGPT be as inspiring and transformative as mine, and remember that the collaboration between human and AI is the key to unlocking their true potential.&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>developers</category>
    </item>
  </channel>
</rss>
