<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: RoboCoach</title>
    <description>The latest articles on Forem by RoboCoach (@robocoach).</description>
    <link>https://forem.com/robocoach</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1143264%2F4db8e99a-ff20-4794-bf20-fea63018f792.png</url>
      <title>Forem: RoboCoach</title>
      <link>https://forem.com/robocoach</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/robocoach"/>
    <language>en</language>
    <item>
      <title>ROScribe release v0.0.4: training ROScribe with ROS Index to create an AI agent expert in robotics</title>
      <dc:creator>RoboCoach</dc:creator>
      <pubDate>Tue, 28 Nov 2023 23:01:59 +0000</pubDate>
      <link>https://forem.com/robocoach/roscribe-release-v004-training-roscribe-with-ros-index-to-create-an-ai-agent-expert-in-robotics-1p1f</link>
      <guid>https://forem.com/robocoach/roscribe-release-v004-training-roscribe-with-ros-index-to-create-an-ai-agent-expert-in-robotics-1p1f</guid>
      <description>&lt;p&gt;I am pleased to announce that we made a new release of &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; that adds a major feature, one that is very helpful in robot integration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Training ROScribe on ROS index&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We trained &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; on all open-source repositories and ROS packages listed on ROS Index. Under the hood, we load the documents and metadata associated with every repository listed on ROS Index into a vector database and use the RAG (retrieval-augmented generation) technique to access them. Using this method, we essentially teach the LLM (GPT-3.5 in our default setting) everything on ROS Index, making it an AI agent expert in robotics. &lt;br&gt;
ROScribe is trained on both ROS versions (ROS 1 &amp;amp; ROS 2) and all distributions.&lt;/p&gt;
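
&lt;p&gt;To illustrate the idea (not ROScribe&#8217;s actual implementation), here is a minimal, hypothetical sketch of retrieval over package descriptions: a toy bag-of-words similarity stands in for the embedding model and vector database, and the package entries are made up for the example.&lt;/p&gt;

```python
# Toy sketch of the RAG idea: index package descriptions, retrieve the best
# match for a question, and fold it into an LLM prompt. The real pipeline uses
# an embedding model and a vector database; this uses bag-of-words similarity.
from collections import Counter
import math

PACKAGES = {  # illustrative entries, not real ROS Index metadata
    "slam_toolbox": "2D SLAM mapping and localization for ROS 2",
    "nav2": "navigation stack with path planning for ROS 2",
    "grid_map": "universal grid map library for mobile robotic mapping",
}

def similarity(a, b):
    # cosine similarity over word counts
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question):
    # return the package whose description best matches the question
    return max(PACKAGES, key=lambda p: similarity(question, PACKAGES[p]))

def build_prompt(question):
    # augment the question with the retrieved context before calling the LLM
    pkg = retrieve(question)
    return f"Context: {pkg}: {PACKAGES[pkg]}. Question: {question}"

print(retrieve("I need a grid mapping solution for my mobile robot"))
```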

&lt;p&gt;&lt;strong&gt;Use ROScribe as a robotics expert&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With this release you can use &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; as your personal robotics consultant. You can ask it any technical question within the robotics domain and have it show you the options available on ROS Index for building your robot. You can ask it to show you examples and demos of a particular solution, or to help you install and run any of the ROS packages available on ROS Index.&lt;br&gt;
Here is a &lt;a href="https://www.youtube.com/watch?v=3b5FyZvlkxI"&gt;demo&lt;/a&gt; in which ROScribe helps a robotics engineer find a multilayer grid-mapping solution and shows how to install it. &lt;/p&gt;

&lt;p&gt;To run ROScribe with this specific feature, use roscribe-rag in your command line. &lt;/p&gt;

&lt;p&gt;You can find more info on our &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;GitHub&lt;/a&gt; and its wiki page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New in this release&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here is what's new in this release.&lt;/p&gt;

&lt;p&gt;Knowledge extraction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scripts for automatic extraction of ROS package documentation given your choice of ROS version&lt;/li&gt;
&lt;li&gt;A vector database built over ROS Index&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Retrieval augmented generation (RAG) capabilities for ROScribe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now ROScribe has access to the most recent open-source ROS repositories that can be found on ROS Index&lt;/li&gt;
&lt;li&gt;ROScribe can be called as an AI agent that assists you with finding the relevant ROS packages for your project&lt;/li&gt;
&lt;li&gt;Use roscribe-rag to run the RAG agent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also created a wiki page for documentation, to keep the README file short.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About ROScribe &amp;amp; future roadmap&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; is our AI-native robot integration solution. You can describe your robotic project in natural language and ROScribe generates the entire robot software for you (within the ROS framework). &lt;br&gt;
As of now, the entire code is generated by the LLM, meaning that the RAG feature (explained above) is currently a stand-alone feature and isn't fully integrated into the main solution. We are working on a fully-integrated solution that retrieves the human-written (open source) ROS packages whenever possible (from ROS index or elsewhere), and only generates code when there is no better code available. This feature will be part of our next release.&lt;br&gt;
We also plan to give ROScribe a web-based GUI. &lt;/p&gt;

&lt;p&gt;Please check out our &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;GitHub&lt;/a&gt; and let us know what you think.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>robotics</category>
      <category>openai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>GPT-Synthesizer release v0.0.4: web-based GUI with Streamlit</title>
      <dc:creator>RoboCoach</dc:creator>
      <pubDate>Mon, 30 Oct 2023 23:34:24 +0000</pubDate>
      <link>https://forem.com/robocoach/gpt-synthesizer-release-v004-web-based-gui-with-streamlit-284b</link>
      <guid>https://forem.com/robocoach/gpt-synthesizer-release-v004-web-based-gui-with-streamlit-284b</guid>
      <description>&lt;p&gt;I am pleased to announce that we released v0.0.4 of &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;GPT-Synthesizer&lt;/a&gt; a few days ago. This release has a lot of quality-of-life improvements, as requested by some users.&lt;br&gt;
The main update is that we now have a web-based GUI using &lt;a href="https://streamlit.io/"&gt;Streamlit&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Release Notes v0.0.4&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://streamlit.io/"&gt;Streamlit&lt;/a&gt; user interface:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user can now choose the GPT model via the UI.&lt;/li&gt;
&lt;li&gt;Generated code base is shown in the UI.&lt;/li&gt;
&lt;li&gt;Quality of life improvements for interaction with GPT-Synthesizer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More bug fixes in the code generation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QJlLZGNU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4620qewdowc1aavugwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QJlLZGNU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4620qewdowc1aavugwx.png" alt="Web-based GUI of GPT-Synthesizer" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to run the Streamlit version:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;GPT Synthesizer&lt;/a&gt; by typing gpt-synthesizer-streamlit in the terminal.&lt;/li&gt;
&lt;li&gt;Input your OpenAI API key in the sidebar.&lt;/li&gt;
&lt;li&gt;Select the model you wish to use in the sidebar.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Demo: &lt;a href="https://www.youtube.com/watch?v=y0_PpXPWeV8"&gt;GPT-Synthesizer Release v0.0.4 streamlit demo: calculator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How GPT-Synthesizer works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The design philosophy of &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;GPT Synthesizer&lt;/a&gt; is rooted in the core belief that a single prompt is not enough to build a complete codebase for complex software. This is mainly because, even with powerful LLMs, there are still many crucial details in the design specification that cannot be effectively captured in a single prompt. Attempting to include every bit of detail in a single prompt, if not impossible, would reduce the efficiency of the LLM engine. Powered by &lt;a href="https://python.langchain.com/docs/get_started/introduction"&gt;LangChain&lt;/a&gt;, GPT Synthesizer captures the design specification, step by step, through an AI-directed dialogue that explores the design space with the user.&lt;/p&gt;

&lt;p&gt;GPT Synthesizer interprets the initial prompt as a high-level description of a programming task. Then, through a process, which we named “prompt synthesis”, GPT Synthesizer compiles the initial prompt into multiple program components that the user might need for implementation. This step essentially turns 'unknown unknowns' into 'known unknowns', which can be very helpful for novice programmers who want to understand the overall flow of their desired implementation. Next, GPT Synthesizer and the user collaboratively find out the design details that will be used in the implementation of each program component.&lt;/p&gt;
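
&lt;p&gt;The flow described above can be sketched roughly as follows. This is a hypothetical illustration, not GPT-Synthesizer&#8217;s actual code; ask_llm stands in for the LangChain-driven LLM call and is stubbed so the control flow runs on its own.&lt;/p&gt;

```python
# Rough sketch of "prompt synthesis": compile the initial prompt into program
# components, then explore each component's spec through a question loop.
def ask_llm(prompt):
    # stub: the real tool queries an LLM via LangChain
    if "List the software components" in prompt:
        return ["parser", "calculator_core", "cli_interface"]
    return "DONE"   # the model signals that the spec is clear

def synthesize(initial_prompt, answer):
    # step 1: turn the high-level task into candidate components
    components = ask_llm(f"List the software components for: {initial_prompt}")
    specs = {}
    for comp in components:
        # step 2: collaboratively resolve the design details of each component
        history = []
        while True:
            question = ask_llm(f"Next question about {comp}, given {history}")
            if question == "DONE":
                break
            history.append((question, answer(question)))
        specs[comp] = history
    return specs

result = synthesize("a command-line calculator", answer=input)
print(sorted(result))
```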

&lt;p&gt;For a deep dive into how &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;GPT-synthesizer&lt;/a&gt; works, read this &lt;a href="https://dev.to/robocoach/looking-inside-gpt-synthesizer-and-the-idea-of-llm-based-code-generation-41d9"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About GPT-Synthesizer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;GPT-Synthesizer&lt;/a&gt; is a free open-source tool, under MIT license, that can help with your software design and code generation for personal or commercial use. We made GPT-Synthesizer open source hoping that it would benefit others who are interested in this domain. We encourage all of you to check out this tool, and give us your feedback here, or by filing issues on our GitHub. We plan to keep maintaining and updating this tool, and we welcome all of you to participate in this open source project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About RoboCoach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are a small early-stage startup company based in San Diego, California. We are exploring the applications of LLMs in software generation as well as some other domains. &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;GPT-synthesizer&lt;/a&gt; is our general-purpose code generator. We have another open source product for special-purpose code generation in the robotics domain, called &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt;. You can learn more about these tools on our &lt;a href="https://github.com/RoboCoachTechnologies"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>gpt3</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Looking inside ROScribe and the idea of LLM-based robotic platform</title>
      <dc:creator>RoboCoach</dc:creator>
      <pubDate>Mon, 23 Oct 2023 23:31:47 +0000</pubDate>
      <link>https://forem.com/robocoach/looking-inside-roscribe-and-the-idea-of-llm-based-robotic-platform-3i12</link>
      <guid>https://forem.com/robocoach/looking-inside-roscribe-and-the-idea-of-llm-based-robotic-platform-3i12</guid>
      <description>&lt;p&gt;&lt;strong&gt;ROScribe&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; is an open source tool that uses LLMs for software generation in robotics within the ROS (Robot Operating System) framework. ROScribe supports both ROS 1 and ROS 2 with Python implementations. In this post, I want to dive deep into how ROScribe works under the hood and explain some of the high-level ideas behind this project. Further, I want to discuss how LLM-based code generation in a specific domain compares to code generation in a generic domain. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Robot Operating System (ROS)&lt;/strong&gt;&lt;br&gt;
ROS is not an operating system; it is an open-source software framework for robot software development. It provides hardware abstraction, device drivers, libraries of commonly-used functionalities, visualizers, message-passing, package management, and more [1, 2].&lt;br&gt;
ROS was designed to be open source, so that users could choose the configuration of tools and libraries to get their software stacks to fit their robot and application area [1]. There is very little which is core to ROS, beyond the general structure within which programs must exist and communicate. In one sense, ROS is the underlying plumbing behind processes and message passing [1]. However, in reality, ROS is not only that plumbing, but also a rich and mature set of tools, a wide-ranging set of robot-agnostic abilities provided by packages, and a greater ecosystem of additions to ROS [1]. &lt;br&gt;
There are two versions of ROS available today (i.e. ROS 1 and ROS 2), each containing several distributions (e.g. ROS Noetic, ROS 2 Humble, ROS 2 Iron). A comprehensive collection of repositories and packages across different ROS versions and distributions can be found on ROS Index [3].  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ROS graph&lt;/strong&gt;&lt;br&gt;
The ROS graph is a graph representation of all processes involved in the robot and the communication among them. It is a directed &lt;a href="https://en.wikipedia.org/wiki/Bipartite_graph"&gt;bipartite graph&lt;/a&gt; (a.k.a. bigraph), meaning that it has two disjoint sets of vertices: ROS nodes and ROS topics. ROS nodes abstract hardware components, such as sensors, actuators, controllers, and processors, that run ROS-based processes. ROS topics are the information communicated by the ROS nodes. Figure 1 shows an example of a ROS graph. A ROS node can publish to a topic (when the node is the source of the data) or subscribe to it (when the node is the destination of the data). The edges of the ROS graph represent the publish and subscribe actions. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8knrw--z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ltoo9xfhvwjivmy20vi5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8knrw--z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ltoo9xfhvwjivmy20vi5.png" alt="Figure 1. An example of a ROS graph; ovals represent the ROS nodes (i.e. the processes) and rectangles represent the ROS topics (i.e. the data); the direction of the edges indicate the flow of data." width="800" height="271"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 1. An example of a ROS graph; ovals represent the ROS nodes (i.e. the processes) and rectangles represent the ROS topics (i.e. the data); the direction of the edges indicate the flow of data.&lt;/em&gt;&lt;/p&gt;
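
&lt;p&gt;As a concrete (and entirely hypothetical) illustration of this bipartite structure, the sketch below keeps one vertex set for nodes and one for topics, with publish/subscribe pairs as the directed edges; all names are made up for the example.&lt;/p&gt;

```python
# A ROS graph as a directed bipartite structure: nodes and topics are disjoint
# vertex sets; the edges are (node, topic) publish or subscribe pairs.
ros_graph = {
    "nodes": {"camera_driver", "object_detector", "planner"},
    "topics": {"/image_raw", "/detections"},
    "publishes": {("camera_driver", "/image_raw"),
                  ("object_detector", "/detections")},
    "subscribes": {("object_detector", "/image_raw"),
                   ("planner", "/detections")},
}

def downstream(node):
    # follow the data flow one hop: topics this node publishes, and who
    # subscribes to them
    consumers = set()
    for pub, topic in ros_graph["publishes"]:
        if pub == node:
            for sub, t in ros_graph["subscribes"]:
                if t == topic:
                    consumers.add(sub)
    return consumers

print(downstream("camera_driver"))   # object_detector consumes /image_raw
```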

&lt;p&gt;&lt;strong&gt;Adopting ROS&lt;/strong&gt;&lt;br&gt;
ROS is the most common framework for robot software; it is growing in popularity in the robotics industry and finding its way into other industries such as IoT (the Internet of Things). However, if you are new to ROS, whether you are a robotics enthusiast, a college student, or a professional engineer, you might find it intimidating at first and hesitate to adopt ROS for your robotic project. This skill barrier sometimes forces beginner roboticists to give up on ROS altogether and opt for non-standard alternatives. &lt;br&gt;
&lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; is designed to help you overcome the skill barrier so that you can adopt ROS for your robotic project. ROScribe creates the ROS graph for your project, implements all ROS nodes, and takes care of the ROS-specific installation and launch files. If you are working on a new robotic solution (e.g. a new mapping algorithm or a new clustering solution), you can focus on your own component and let ROScribe take care of everything else that needs to be connected to your component to have a functioning robot. In other words, you can use ROScribe to create a blueprint for your software: ROScribe gives you complete software that you can use as an initial draft for your project; then, you can replace parts of the generated code (e.g. one of the ROS nodes) with the specific code that you have manually developed for your component of interest. &lt;br&gt;
We believe ROScribe helps beginners to better learn ROS and adopt it for their projects, while helping advanced users to quickly create a comprehensive blueprint of the robot software for their project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using LLMs to generate robot software&lt;/strong&gt;&lt;br&gt;
LLMs (Large Language Models) belong to a class of AI specialized in processing human languages. This raises the question: how well can an AI made for human language perform when dealing with programming languages such as Python and C? In [4], we discussed how LLMs fare in code generation for general-purpose software. It is notable that the power of the LLM lies mainly in understanding the software spec (which is written in human language) rather than in code generation itself. Please refer to [4] for a discussion of how LLMs can be used to generate general-purpose software, and how the computer programming paradigm could evolve in the presence of LLMs.&lt;br&gt;
In &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt;, we are not dealing with software generation in general, but rather with software generation in the special case of robotics. More specifically, we use LLMs to generate robot software within the ROS framework. This limits the scope of the software design and narrows down the LLM’s attention to a smaller task, resulting in a lower likelihood of hallucination and a higher-quality outcome. Like humans, LLMs have a limited attention span; having a smaller design space, when dealing with the more specific task of robot software, yields a better outcome than dealing with the broader design space of generic software design. Prompt engineering techniques, fine-tuning, and priming can further help limit the design space. In the next section, we will explain how ROScribe breaks the given task down into smaller pieces to implement the software through divide and conquer. &lt;br&gt;
To improve the quality of the generated code and increase the efficiency of the LLM, other methods such as RAG (retrieval-augmented generation) can be used. Having a well-structured scope for designing robot software in ROS makes it easier to effectively integrate RAG into ROScribe and further improve the quality of the generated robot software. The current version of ROScribe (v0.0.3) doesn’t use RAG, so we save the RAG discussion for future articles. In the current version of ROScribe, the entire robot software is generated by the LLM.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How ROScribe works&lt;/strong&gt;&lt;br&gt;
The process of robot software generation in &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; can be explained in three steps: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;ROS graph synthesis&lt;/li&gt;
&lt;li&gt;ROS node synthesis&lt;/li&gt;
&lt;li&gt;ROS-specific installation scripts generation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;ROS graph synthesis&lt;/strong&gt; &lt;br&gt;
First, ROScribe captures a high-level description of the robotic task from the user in the initial prompt, and asks the user to specify which version of ROS to use (i.e. ROS 1 or ROS 2). &lt;br&gt;
Then, ROScribe asks a series of high-level questions about the overall design of the system. The questioning process continues until ROScribe gathers enough information for generating the ROS graph. The process of generating follow-up questions is illustrated in Figure 2. ROScribe uses the chat history of all previous questions and answers, as well as the first prompt about the task and ROS version, to generate a prompt that will be fed to the LLM to create the follow-up question. Sometimes the follow-up question is accompanied by a suggested answer, or viable options, to help guide the user to the right direction.&lt;br&gt;
Our intention here is to keep the human in the driver’s seat and let them make all decisions on how the robotic system should be designed. ROScribe uses the LLM as a robotics expert who knows what questions to ask to get the necessary information from the human designer.&lt;br&gt;
When there is no further question, the ROS graph can be generated. ROScribe uses the entire chat history, the initial prompt and ROS version, and some internal formatting instructions to generate a prompt and feed it to LLM to create the ROS graph (Figure 3). ROScribe drafts a visual representation of the ROS graph in RQT-style, and allows the user to modify it by adding or removing ROS nodes. &lt;/p&gt;
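
&lt;p&gt;The prompt assembly for this step can be sketched as below; this is an illustrative reconstruction, not ROScribe&#8217;s actual code, and the stubbed ask_llm stands in for the real LLM call (its return value here is invented for the example).&lt;/p&gt;

```python
# Sketch of ROS graph generation: fold the task, ROS version, chat history, and
# formatting instructions into one prompt, then ask the LLM for the graph.
def ask_llm(prompt):
    # stub: the real call would derive the graph from the prompt
    return {"nodes": ["lidar_driver", "mapper"], "topics": ["/scan"]}

def generate_ros_graph(task, ros_version, chat_history):
    lines = [f"Task: {task}", f"ROS version: {ros_version}"]
    for q, a in chat_history:            # all previous questions and answers
        lines.append(f"Q: {q} A: {a}")
    lines.append("Format the output as ROS nodes and ROS topics.")
    return ask_llm("\n".join(lines))

history = [("Which sensor?", "a 2D lidar"), ("Indoor or outdoor?", "indoor")]
print(generate_ros_graph("mapping robot", "ROS 2", history)["nodes"])
```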

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FPlmr2T8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ucqcr1w8sf1fovzae5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FPlmr2T8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ucqcr1w8sf1fovzae5g.png" alt="Figure 2. ROS graph synthesis; generating follow-up questions" width="800" height="384"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 2. ROS graph synthesis; generating follow-up questions&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ouaCRtFL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p705jv3t39xby2crveen.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ouaCRtFL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p705jv3t39xby2crveen.png" alt="Figure 3. ROS graph generation" width="800" height="365"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 3. ROS graph generation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ROS node synthesis&lt;/strong&gt; &lt;br&gt;
For every ROS node identified and finalized in the previous step, ROScribe captures the spec from the user; only when the spec is completely clear does it implement that node. The task of capturing the spec involves an elaborate process of prompt engineering that we call prompt synthesis. This is the heart of ROScribe, where the LLM’s strong suit, processing conversations and generating questions in natural language, is put to use. &lt;br&gt;
Figure 4 shows the process of prompt synthesis in which ROScribe uses a summary of the chat history plus the top-level information about the task, the ROS version, and the ROS graph to generate a prompt that will be fed to the LLM to create a follow-up question. This process will continue in a loop until the spec is clear and the user has provided the necessary details about the design.&lt;br&gt;
Finally, when the spec is clear, and all the design details are resolved, ROScribe generates the code for that ROS node and moves on to the next node. Figure 5 illustrates the ROS node generation step.&lt;br&gt;
Throughout the entire process of code generation, ROScribe keeps the user involved. The idea is to walk the user across the design space and shed light on different subjects that need clarification, and let the user make the decision on each subject. Ultimately, it is not the tool that invents the software; it is the user utilizing the tool who is in charge of the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MRolzAt2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qro6e3s03rh5ur659jm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MRolzAt2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qro6e3s03rh5ur659jm.png" alt="Figure 4. ROS node specification using prompt synthesis" width="800" height="359"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 4. ROS node specification using prompt synthesis&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vpCmNT_u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6ejnvyufpcppj1z3pmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vpCmNT_u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6ejnvyufpcppj1z3pmh.png" alt="Figure 5. ROS node generation" width="800" height="407"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 5. ROS node generation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ROS-specific installation scripts generation&lt;/strong&gt;&lt;br&gt;
At the final step of the software generation process, ROScribe creates the files needed for installation and launch of the generated ROS packages; the generated files include package.xml, CMakeLists.txt, and launchfile.launch (Figure 6).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---BPTAi-x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z0dm3hsgbchbuox0l8my.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---BPTAi-x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z0dm3hsgbchbuox0l8my.png" alt="Figure 6. ROS-specific installation files" width="800" height="444"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 6. ROS-specific installation files&lt;/em&gt;&lt;/p&gt;
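
&lt;p&gt;As an illustration of this templating step (a simplified sketch, not ROScribe&#8217;s actual generator), a minimal CMakeLists.txt for a ROS 1 Python package could be emitted like this; the package, dependency, and script names are placeholders.&lt;/p&gt;

```python
# Emit a minimal catkin CMakeLists.txt from a template, given the package name
# and its dependencies. package.xml and the launch file follow the same pattern.
def make_cmakelists(pkg, deps):
    dep_list = " ".join(deps)
    return (
        "cmake_minimum_required(VERSION 3.0.2)\n"
        f"project({pkg})\n"
        f"find_package(catkin REQUIRED COMPONENTS {dep_list})\n"
        "catkin_package()\n"
        "catkin_install_python(PROGRAMS scripts/node.py\n"
        "  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})\n"
    )

print(make_cmakelists("my_robot", ["rospy", "std_msgs"]))
```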

&lt;p&gt;&lt;strong&gt;Lessons we learned from ROScribe&lt;/strong&gt;&lt;br&gt;
The following remarks summarize the lessons we learned from development of &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The strength of LLM-based software generation tools is in capturing the spec, and the spec cannot be captured efficiently in a single prompt. &lt;/li&gt;
&lt;li&gt;Good prompt engineering is key to capturing design details from the user, and the LLM’s output is only as good as its prompts. &lt;/li&gt;
&lt;li&gt;The human should remain in the driver’s seat and control the design process. &lt;/li&gt;
&lt;li&gt;The LLM performs better when dealing with software generation in a special domain, as opposed to a general domain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;About ROScribe&lt;/strong&gt;&lt;br&gt;
We made &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; open source hoping that it would benefit robotics students and engineers who want to speed up the process of robot software generation. We encourage all of you to check out this tool and give us feedback here, or by filing issues on our GitHub. If you like ROScribe, or the ideas behind it, please star our &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;repository&lt;/a&gt; to give it more recognition and let others know about it. We plan to keep maintaining and updating this tool, and we welcome all of you to participate in this open source project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About RoboCoach&lt;/strong&gt;&lt;br&gt;
We are a small early-stage startup company based in San Diego, California. We are exploring the applications of LLMs in software generation in general, and in robot software generation specifically. You can learn more about our products on our &lt;a href="https://github.com/RoboCoachTechnologies"&gt;GitHub&lt;/a&gt; [5, 6].&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;br&gt;
[1] &lt;a href="https://en.wikipedia.org/wiki/Robot_Operating_System"&gt;https://en.wikipedia.org/wiki/Robot_Operating_System&lt;/a&gt;&lt;br&gt;
[2] &lt;a href="http://wiki.ros.org/Documentation"&gt;http://wiki.ros.org/Documentation&lt;/a&gt;&lt;br&gt;
[3] &lt;a href="https://index.ros.org/"&gt;https://index.ros.org/&lt;/a&gt;&lt;br&gt;
[4] &lt;a href="https://dev.to/robocoach/looking-inside-gpt-synthesizer-and-the-idea-of-llm-based-code-generation-41d9"&gt;https://dev.to/robocoach/looking-inside-gpt-synthesizer-and-the-idea-of-llm-based-code-generation-41d9&lt;/a&gt;&lt;br&gt;
[5] &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;https://github.com/RoboCoachTechnologies/GPT-Synthesizer&lt;/a&gt; &lt;br&gt;
[6] &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;https://github.com/RoboCoachTechnologies/ROScribe&lt;/a&gt; &lt;/p&gt;

</description>
      <category>robotics</category>
      <category>opensource</category>
      <category>ros</category>
      <category>ai</category>
    </item>
    <item>
      <title>Looking inside GPT-Synthesizer and the idea of LLM-based code generation</title>
      <dc:creator>RoboCoach</dc:creator>
      <pubDate>Wed, 04 Oct 2023 22:52:36 +0000</pubDate>
      <link>https://forem.com/robocoach/looking-inside-gpt-synthesizer-and-the-idea-of-llm-based-code-generation-41d9</link>
      <guid>https://forem.com/robocoach/looking-inside-gpt-synthesizer-and-the-idea-of-llm-based-code-generation-41d9</guid>
      <description>&lt;p&gt;&lt;strong&gt;GPT-Synthesizer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer" rel="noopener noreferrer"&gt;GPT-Synthesizer&lt;/a&gt; is an open source tool that uses GPT for software generation. In this post, instead of talking about releases and features, I want to dive deep into how GPT-synthesizer works under the hood and explain some of the high-level ideas behind this project. Further, I want to discuss the strengths and weaknesses of LLM-based code generation tools, and speculate on how they will evolve in the future. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Are LLMs good for code generation?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nowadays everybody is using LLMs (Large Language Models) for everything, and for good reason; they are the shiny new technology and they are extremely powerful tools. We are all excited to explore where and how we can use them, but that doesn’t mean they are the best tools for the job in each and every case. LLMs are made for interaction through human language, and that’s where they really shine. Take ChatGPT as an example, where both the inputs and outputs are in human language. In code generation, on the other hand, the generated code isn’t in natural language. It’s in Python, C, or another programming language, with well-defined syntax and rigid semantics. All programming languages were made for human programmers to describe their intent to the machine in a clear and deterministically-interpretable format. &lt;/p&gt;

&lt;p&gt;Since software isn’t written in human language, why should we use LLMs for software generation? To answer this, we should recognize that there are two sides to software generation: (1) the input: capturing the spec, (2) the output: generating the code.&lt;/p&gt;

&lt;p&gt;The generated code isn’t in human language, but the input spec is. LLMs aren’t the best tools for code generation, but they are amazing at understanding intent. That’s where they shine, and that’s where the focus of their application should be. In GPT-Synthesizer the main focus is on understanding what exactly the user wants to do. The code generation itself is the smaller piece of the puzzle, and isn’t the main focus. &lt;br&gt;
This doesn’t mean that LLMs are necessarily bad at code generation. LLMs such as GPT-4 are so powerful that they can do a decent job of it. By throwing so much raw power at the problem, LLMs can basically solve it by brute force. However, code generation is not the strength of LLMs or LLM-based software generation tools. Their strength is in communicating through the medium of natural language to capture the spec. This is where the focus of any LLM-based software generator should be, and this is where we put our thoughts and efforts when we made GPT-Synthesizer. So let’s take a deeper look into how GPT-Synthesizer actually works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How GPT-Synthesizer works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The process of software generation in &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer" rel="noopener noreferrer"&gt;GPT-synthesizer&lt;/a&gt; can be explained in three steps: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Component synthesis&lt;/li&gt;
&lt;li&gt;Component specification &amp;amp; generation&lt;/li&gt;
&lt;li&gt;Top-level generation&lt;/li&gt;
&lt;/ol&gt;
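&lt;p&gt;To make these steps concrete, the overall flow can be sketched in Python as below; every function and component name is a hypothetical illustration of the shape of the pipeline, not GPT-Synthesizer’s actual API:&lt;/p&gt;

```python
# A toy sketch of the three-step flow, assuming hypothetical helper names;
# this is not GPT-Synthesizer's real API, just an illustration of the shape.

def component_synthesis(task):
    """Step 1: break the user's task into a draft list of components.
    In the real tool an LLM proposes this list; here it is hard-coded."""
    return ["input_handler", "game_engine", "renderer"]

def finalize_components(components, to_add=(), to_remove=()):
    """The user confirms the list, adding or removing components."""
    kept = [c for c in components if c not in to_remove]
    return kept + list(to_add)

def specify_and_generate(component):
    """Step 2: capture the component's spec via dialogue, then emit its code."""
    return "# code for " + component

def top_level_generation(components):
    """Step 3: emit a main() that wires the finalized components together."""
    calls = "\n".join("    " + c + "()" for c in components)
    return "def main():\n" + calls

components = finalize_components(component_synthesis("a snake game"),
                                 to_remove=("renderer",))
sources = [specify_and_generate(c) for c in components]
print(top_level_generation(components))
```

&lt;p&gt;Step 2 is where nearly all of the real work happens, as described next.&lt;/p&gt;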

&lt;p&gt;&lt;strong&gt;Component synthesis:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;First, GPT-Synthesizer reads the programming task provided by the user in the initial prompt and breaks it into software components that need to be implemented. We call this step component synthesis. Then, GPT-Synthesizer shows the user the compiled list of components along with their descriptions, and asks the user to finalize the list by adding or removing components. The idea here is to keep the user in the driver’s seat by asking for their confirmation. &lt;/p&gt;

&lt;p&gt;Ultimately, it is not the tool that invents the software; it is the user utilizing the tool who is in charge of the project. Figure 1 shows how GPT-Synthesizer identifies a list of components during component synthesis. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 1. Component synthesis&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feh7ko0h97wrwm1e5936h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feh7ko0h97wrwm1e5936h.png" alt="Figure 1. Component synthesis"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Component specification &amp;amp; generation:&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;For every component identified and finalized in the previous step, GPT-Synthesizer captures the intent from the user; only when the intent is completely clear does it implement that component. The task of capturing the intent involves an elaborate process of prompt engineering that we call prompt synthesis. This is the heart of GPT-Synthesizer, where the LLM’s strong suit is used in processing conversations and generating questions, all in natural language.&lt;/p&gt;

&lt;p&gt;Figure 2 shows the process of prompt synthesis in which GPT-synthesizer uses a summary of the chat history plus the top-level information about the task, the output language, and the software component to generate a prompt that will be fed to the LLM to create a follow-up question. This process will continue in a loop until the spec is clear and the user has provided the necessary details about the design.&lt;/p&gt;
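&lt;p&gt;The loop described above can be sketched as follows; in the real tool the LLM does the summarizing and the “spec is clear” check, and every name here is a hypothetical stand-in:&lt;/p&gt;

```python
# A sketch of the prompt-synthesis loop: summarize the chat history, ask the
# LLM for one follow-up question, and stop when the spec is clear. All names
# and the prompt wording are hypothetical stand-ins.

def synthesize_prompt(task, language, component, history_summary):
    return (
        "Task: " + task + "\nLanguage: " + language +
        "\nComponent: " + component +
        "\nConversation so far: " + history_summary + "\n"
        "Ask ONE follow-up question that resolves an open design detail, "
        "or reply DONE if the spec is fully clear."
    )

def capture_component_spec(task, language, component, ask_llm, ask_user):
    history = []
    while True:
        summary = " | ".join(history[-3:])  # crude stand-in for summarization
        question = ask_llm(synthesize_prompt(task, language, component, summary))
        if question.strip() == "DONE":
            return history
        answer = ask_user(question)
        history.append("Q: " + question + " A: " + answer)

# Scripted stand-ins for the LLM and the user:
replies = iter(["Should the board size be fixed?", "DONE"])
spec = capture_component_spec(
    "snake game", "python", "game_engine",
    ask_llm=lambda prompt: next(replies),
    ask_user=lambda q: "Yes, 20x20",
)
print(spec)
```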

&lt;p&gt;The idea here is not just to keep the human in the loop, but to keep them in the driver’s seat. We want the user to make decisions on the details of the design. We made GPT-Synthesizer as a programming assistant tool that can be used in the early stages of software design to create a draft (a blueprint) of the software project. GPT-Synthesizer explores the design space and identifies the unknowns; it holds the user’s hand as it walks through the design space, sheds light on the design unknowns, brings them to the user’s attention, provides suggestions on those details, and asks the user for clarification and confirmation on design details.&lt;/p&gt;

&lt;p&gt;For a less-experienced user, who wants to write software but doesn’t know where to start, or what goes into writing such software, GPT-Synthesizer can act like a coach: someone who turns the unknown unknowns into known unknowns. &lt;/p&gt;

&lt;p&gt;Finally, when the component spec is clear, and all the design details are resolved, GPT-synthesizer generates the code for that component. Figure 3 illustrates the component generation step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 2. Component specification using prompt synthesis&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbecmzvj7grsuff8apyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbecmzvj7grsuff8apyw.png" alt="Figure 2. Component specification"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 3. Component generation&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1loawc6k3lqmyb0hywk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1loawc6k3lqmyb0hywk.png" alt="Figure 3. Component generation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top-level generation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the end, GPT-Synthesizer creates the top/main function which acts as the entry point for the software. As of now, this step is only supported for Python.&lt;/p&gt;
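&lt;p&gt;A toy version of this step might look like the following; the emitted template and function names are assumptions for illustration, not the tool’s actual emitter:&lt;/p&gt;

```python
# A toy illustration of top-level generation: stitch the finalized
# components into a Python entry point. The template is an assumption.

def generate_top_level(components):
    imports = "\n".join("from " + c + " import run as " + c + "_run"
                        for c in components)
    calls = "\n".join("    " + c + "_run()" for c in components)
    return (imports + "\n\n\ndef main():\n" + calls +
            "\n\n\nif __name__ == '__main__':\n    main()\n")

print(generate_top_level(["input_handler", "game_engine"]))
```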

&lt;p&gt;By now, you can see that the heart of GPT-synthesizer is not the code generation, but rather the component synthesis and prompt synthesis; GPT-synthesizer’s strength is in capturing the specification through a conversation in natural language where the LLMs are at their best.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lessons we learned from GPT-synthesizer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following remarks summarize the lessons we learned from the development of GPT-Synthesizer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The strength of LLM-based software generation tools lies in capturing the spec, and the spec cannot be captured efficiently in a single prompt. &lt;/li&gt;
&lt;li&gt;The human should remain in the driver’s seat and control the design process. &lt;/li&gt;
&lt;li&gt;Good prompt engineering is key to capturing design details from the user, and the LLM’s output is only as good as its prompts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, I would like to step aside from GPT-synthesizer for a bit, and speculate on what I think is the future for programming languages in the presence of LLMs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The future of programming languages&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Programming languages are relics of a past in which machines couldn’t understand human language with its complex, irregular, and ambiguous structures. That has changed now. For the first time in computing history, computers can understand us just the way we speak, and there is no need for us to speak to them in their language. &lt;/p&gt;

&lt;p&gt;So what happens to programming languages then? Will they vanish completely? I believe it will take years, maybe even decades, for programming languages to gradually phase out and be replaced by human language. It’s a matter of the quality of the generated code, the power efficiency of the LLM tools, and the legacy of existing software written in programming languages. Eventually these matters will sort themselves out, natural language will become the only interface between humans and machines, and programming languages will remain only as intermediate formats inside the tools.&lt;/p&gt;

&lt;p&gt;When computers first came out, we had to talk to them in 0s and 1s, which were then replaced by assembly language. Later, we took one step further from the machine language and described our intent in higher-level languages like C, Pascal, etc., and relied on compilers to translate our intent into the machine language.&lt;/p&gt;

&lt;p&gt;For some time, if you wanted your software to run efficiently, you had to manually modify the compiler-generated assembly code, or skip the compiler altogether and write your assembly by hand. Over time, as compilers got better, smarter, and more optimized, the generated assembly got better and better. At the same time, with transistor scaling as well as innovations in computer architecture, processors became more powerful; therefore the inefficiency of the auto-generated assembly became less of an issue. Meanwhile, advancements in chip design and manufacturing technologies improved the capacity and speed of both on-chip and off-chip memories, allowing programmers to be more lenient with the size of the generated assembly. Eventually, the combination of these advancements shifted the balance from having the most optimized hand-written assembly code to saving development time and effort by trusting compilers.&lt;/p&gt;

&lt;p&gt;With the success of programming languages and compilers, we took more steps away from machine language, and used even higher-abstraction-level languages like Python or MATLAB to communicate with machines. Now, with the advent of LLMs, we are taking one last step and switching completely to our own language to interface with machines.&lt;/p&gt;

&lt;p&gt;I expect the same scenario to play out with trusting LLMs for our code generation. Over time, LLMs will become more powerful, more efficient, and better integrated with current ecosystems to generate better software. At the same time, the processing power as well as the data capacity of cloud services will grow, and communication speeds will improve, driving down the cost per unit and allowing more forgiveness on the efficiency of the LLM process and the quality of the generated code. It could take several years, but I believe we will gradually take our hands off the programming languages and trust language models to handle them. &lt;/p&gt;

&lt;p&gt;I don’t expect programming languages to vanish completely. I think they will exist as an intermediate format, the same way that assembly language exists today. I would also predict that there will be a lot of consolidation in that space, and only a few languages will survive this transition. Traditional compilers and other legacy software can coexist behind the scenes and work under the LLMs’ command.&lt;/p&gt;

&lt;p&gt;It is somewhat easier to think of LLMs not as AI programs, but rather as human experts who can understand our requirements in human language and utilize other tools, such as legacy software (e.g., compilers, synthesizers, converters, traditional AI tools), to get the job done.&lt;/p&gt;

&lt;p&gt;These are my opinions and speculations regarding the future of LLMs. I am curious to learn about your thoughts on this matter. Please feel free to comment on that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About GPT-Synthesizer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We made &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer" rel="noopener noreferrer"&gt;GPT-Synthesizer&lt;/a&gt; open source hoping that it would benefit others who are interested in this domain. We encourage all of you to check out this tool, and give us your feedback here, or by filing issues on our GitHub. If you like GPT-Synthesizer or the ideas behind it, please star our repository to give it more recognition. We plan to keep maintaining and updating this tool, and we welcome all of you to participate in this open source project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About RoboCoach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are a small early-stage startup company based in San Diego, California. We are exploring the applications of LLMs in software generation as well as some other domains. &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer" rel="noopener noreferrer"&gt;GPT-Synthesizer&lt;/a&gt; is our general-purpose code generator. We have another open source product for special-purpose code generation in the robotics domain, called &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe" rel="noopener noreferrer"&gt;ROScribe&lt;/a&gt;. You can learn more about these tools on our GitHub.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>ROScribe release v0.0.3: Supporting ROS2</title>
      <dc:creator>RoboCoach</dc:creator>
      <pubDate>Wed, 27 Sep 2023 19:28:18 +0000</pubDate>
      <link>https://forem.com/robocoach/roscribe-release-v003-supporting-ros2-3hf6</link>
      <guid>https://forem.com/robocoach/roscribe-release-v003-supporting-ros2-3hf6</guid>
      <description>&lt;p&gt;We are pleased to announce that we have released a new version of ROScribe that supports ROS2 as well as ROS1.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ROScribe&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; is an open source project that uses human language interface to capture the details of your robotic project and creates the entire ROS packages for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ROScribe motivates you to learn ROS&lt;/strong&gt;&lt;br&gt;
Learning ROS might feel intimidating for robotics enthusiasts, college students, or professional engineers who are using it for the first time. Sometimes this skill barrier forces them to give up on ROS altogether and opt for non-standard options. We believe ROScribe helps students better learn ROS and encourages them to adopt it for their projects.&lt;br&gt;
ROScribe eliminates the skill barrier for beginners, and saves time and hassle for skilled engineers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using LLM to generate ROS&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; combines the power and flexibility of large language models (LLMs) with prompt tuning techniques to capture the details of your robotic design and to automatically create an entire ROS package for your project. As of now, ROScribe supports both ROS1 and ROS2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keeping human in the loop&lt;/strong&gt;&lt;br&gt;
Inspired by &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;GPT-Synthesizer&lt;/a&gt;, the design philosophy of &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; is rooted in the core belief that a single prompt is not enough to capture the details of a complex design. Attempting to include every bit of detail in a single prompt, if not impossible, would reduce the efficiency of the LLM engine. Powered by LangChain, ROScribe captures the design specification, step by step, through an AI-directed interview that explores the design space with the user in a top-down approach. We believe that keeping the human in the loop is crucial for creating a high-quality output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code generation and visualization&lt;/strong&gt;&lt;br&gt;
After capturing the design specification, ROScribe helps you with the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating a list of ROS nodes and topics, based on your application and deployment (e.g. simulation vs. real-world)&lt;/li&gt;
&lt;li&gt;Visualizing your project in an RQT-style graph&lt;/li&gt;
&lt;li&gt;Generating code for each ROS node&lt;/li&gt;
&lt;li&gt;Writing launch file and installation scripts&lt;/li&gt;
&lt;/ol&gt;
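&lt;p&gt;As a small illustration of steps 1 and 2, the node/topic graph can be thought of as a mapping from nodes to the topics they publish and subscribe to, from which the RQT-style edges follow directly. The node and topic names below are TurtleSim-inspired examples, not ROScribe output:&lt;/p&gt;

```python
# A toy sketch of the node/topic graph from step 1 and the RQT-style edges
# of step 2. Node and topic names are illustrative (TurtleSim-like).

nodes = {
    "turtlesim": {
        "publishes": ["/turtle1/pose"],
        "subscribes": ["/turtle1/cmd_vel"],
    },
    "turtle_controller": {
        "publishes": ["/turtle1/cmd_vel"],
        "subscribes": ["/turtle1/pose"],
    },
}

def rqt_edges(graph):
    """Derive (publisher, topic, subscriber) edges for an RQT-style view."""
    result = []
    for pub, info in graph.items():
        for topic in info["publishes"]:
            for sub, other in graph.items():
                if topic in other["subscribes"]:
                    result.append((pub, topic, sub))
    return result

print(rqt_edges(nodes))
```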

&lt;p&gt;&lt;strong&gt;Source code and demo&lt;/strong&gt; &lt;br&gt;
For further details on how to install and use ROScribe, please refer to our GitHub and watch our demo:&lt;br&gt;
&lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe open source repository&lt;/a&gt; &lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=H2QaeelkReU"&gt;TurtleSim demo&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version v0.0.3 release notes&lt;/strong&gt;&lt;br&gt;
ROS2 integration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ROScribe now supports both ROS1 and ROS2.&lt;/li&gt;
&lt;li&gt;Code generation for ROS2 uses rclpy instead of rospy.&lt;/li&gt;
&lt;li&gt;Installation scripts for ROS2 use setup.py and setup.cfg instead of CMakeLists.txt.&lt;/li&gt;
&lt;/ul&gt;
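&lt;p&gt;To illustrate the rclpy/rospy difference, the generated node boilerplate might differ roughly as in this sketch; the templates are illustrative assumptions, not ROScribe’s actual output:&lt;/p&gt;

```python
# A hedged sketch of how the ROS1/ROS2 split might appear in the generated
# node boilerplate (rospy vs. rclpy). Templates are illustrative only.

ROS1_TEMPLATE = """import rospy

def main():
    rospy.init_node("{name}")
    rospy.spin()
"""

ROS2_TEMPLATE = """import rclpy
from rclpy.node import Node

def main():
    rclpy.init()
    node = Node("{name}")
    rclpy.spin(node)
"""

def generate_node(name, ros_version):
    """Pick the boilerplate matching the target ROS version."""
    template = ROS2_TEMPLATE if ros_version == 2 else ROS1_TEMPLATE
    return template.format(name=name)

print(generate_node("pose_logger", ros_version=2))
```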

&lt;p&gt;&lt;strong&gt;Roadmap&lt;/strong&gt;&lt;br&gt;
ROScribe supports both ROS1 and ROS2 with Python code generation. We plan to support the following features in the upcoming releases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;C++ code generation&lt;/li&gt;
&lt;li&gt;ROS1 to ROS2 automated codebase migration&lt;/li&gt;
&lt;li&gt;ROS-Industrial support&lt;/li&gt;
&lt;li&gt;Verification of an already existing codebase&lt;/li&gt;
&lt;li&gt;Graphical user interface&lt;/li&gt;
&lt;li&gt;Enabling and integrating other robotic tools&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Call for contributions&lt;/strong&gt;&lt;br&gt;
ROScribe is free and open source software. We encourage all of you to try it out and let us know what you think. We have a lot of plans for this project and we intend to support and maintain it regularly. We welcome all robotics enthusiasts to contribute to ROScribe. With each release, we will announce the list of new contributors.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>robotics</category>
      <category>opensource</category>
      <category>openai</category>
    </item>
    <item>
      <title>ROScribe</title>
      <dc:creator>RoboCoach</dc:creator>
      <pubDate>Wed, 13 Sep 2023 23:27:30 +0000</pubDate>
      <link>https://forem.com/robocoach/roscribe-1h2k</link>
      <guid>https://forem.com/robocoach/roscribe-1h2k</guid>
      <description>&lt;p&gt;&lt;strong&gt;Create ROS packages using LLMs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Learning ROS (Robot Operating System) may prove challenging for robotics enthusiasts, college students, or professional engineers who are using it for the first time. Sometimes this skill barrier forces them to give up on ROS altogether and opt for non-standard options. &lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; eliminates the skill barrier for beginners, and saves time and hassle for skilled engineers. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe&lt;/a&gt; combines the power and flexibility of large language models (LLMs) with prompt tuning techniques to capture the details of your robotic design and to automatically create an entire ROS package for your project.&lt;/p&gt;

&lt;p&gt;Inspired by &lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;GPT Synthesizer&lt;/a&gt;, ROScribe builds an entire ROS package through a series of specification steps that identify the package elements in a top-down approach. In particular, ROScribe helps you with the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating a list of ROS nodes and topics, based on your application and deployment (e.g. simulation vs. real-world)&lt;/li&gt;
&lt;li&gt;Visualizing your project in an RQT-style graph&lt;/li&gt;
&lt;li&gt;Generating code for each ROS node&lt;/li&gt;
&lt;li&gt;Writing launch file and installation scripts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are new to ROS, ROScribe will be your robot(ics) mentor 🤖️&lt;/p&gt;

&lt;p&gt;If you are a seasoned ROS user, ROScribe can help with creating a blueprint for your ROS package 📦️&lt;/p&gt;

&lt;p&gt;For further details on how to install and use ROScribe, please refer to our GitHub and watch our demo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/RoboCoachTechnologies/ROScribe"&gt;ROScribe open source repository&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=H2QaeelkReU"&gt;TurtleSim demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Roadmap&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ROScribe v0.0.2 only supports ROS1 with Python code generation. We plan to add the following features in the upcoming releases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;ROS2 &amp;amp; ROS-Industrial support&lt;/li&gt;
&lt;li&gt;C++ code generation&lt;/li&gt;
&lt;li&gt;ROS1 to ROS2 automated codebase migration&lt;/li&gt;
&lt;li&gt;Verification of an already existing codebase&lt;/li&gt;
&lt;li&gt;Graphical user interface&lt;/li&gt;
&lt;li&gt;Enabling and integrating other robotic tools&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Call for contributors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ROScribe is free and open source software. We encourage all of you to try it out and let us know what you think. We have a lot of plans for this project and we intend to support and maintain it regularly. We welcome all robotics enthusiasts to contribute to ROScribe. With each release, we will announce the list of new contributors.&lt;/p&gt;

</description>
      <category>gpt3</category>
      <category>robotics</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>GPT-Synthesizer</title>
      <dc:creator>RoboCoach</dc:creator>
      <pubDate>Mon, 21 Aug 2023 20:14:06 +0000</pubDate>
      <link>https://forem.com/robocoach/gpt-synthesizer-713</link>
      <guid>https://forem.com/robocoach/gpt-synthesizer-713</guid>
      <description>&lt;p&gt;GPT-Synthesizer is a free open-source tool, under the MIT license, that can help with your software design and code generation for personal or commercial use:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/RoboCoachTechnologies/GPT-Synthesizer"&gt;https://github.com/RoboCoachTechnologies/GPT-Synthesizer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GPT Synthesizer is designed for software generation using Large Language Models (LLMs), including, among others, GPT, BERT, LLaMA, and Chinchilla. As of now, GPT-3.5 is its default LLM, because it provides a reasonable balance of capability and cost for generating a large and complex codebase.&lt;/p&gt;

&lt;p&gt;If you intend to write a rather small and straightforward software program, you can directly set your prompt in GPT (or another LLM for that matter), and get your desired output without needing GPT Synthesizer or any other third-party tool. However, if your intended software task is complex, or if you have no idea where to start and how to describe it, GPT Synthesizer can be your best friend. GPT Synthesizer walks you through the problem statement and explores the design space with you through a carefully moderated interview process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What makes GPT Synthesizer unique?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The design philosophy of GPT Synthesizer is rooted in the core, and rather contrarian, belief that a single prompt is not enough to build a complete codebase for complex software. This is mainly because, even in the presence of powerful LLMs, there are still many crucial details in the design specification which cannot be effectively captured in a single prompt. Attempting to include every bit of detail in a single prompt, if not impossible, would reduce the efficiency of the LLM engine. Powered by LangChain, GPT Synthesizer captures the design specification, step by step, through an AI-directed dialogue that explores the design space with the user.&lt;/p&gt;

&lt;p&gt;GPT Synthesizer interprets the initial prompt as a high-level description of a programming task. Then, through a process, which we name “prompt synthesis”, GPT Synthesizer compiles the initial prompt into multiple program components that the user might need for implementation. This step essentially turns 'unknown unknowns' into 'known unknowns', which can be very helpful for novice programmers who want to understand an overall flow of their desired implementation. Next, GPT Synthesizer and the user collaboratively find out the design details that will be used in the implementation of each program component.&lt;/p&gt;

&lt;p&gt;Different users might prefer different levels of interactivity depending on their unique skill set, their level of expertise, as well as the complexity of the task at hand. GPT Synthesizer distinguishes itself from other LLM-based code generation tools by finding the right balance between user participation and AI autonomy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Demos:&lt;/strong&gt;&lt;br&gt;
GPT Synthesizer is easy to use. It provides you with an intuitive AI assistant in your command-line interface. Watch these demos to see how GPT Synthesizer works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=zFJDQOtIFGA"&gt;GPT-Synthesizer Release v0.0.2 demo: a snake game&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=_JdmzpXLyE0"&gt;GPT-Synthesizer Release v0.0.3 demo: a tic-tac-toe game&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gpt3</category>
      <category>ai</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
