<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jordi Cabot</title>
    <description>The latest articles on Forem by Jordi Cabot (@jcabot).</description>
    <link>https://forem.com/jcabot</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F125665%2F26db5d26-0c60-4c07-8bb3-43c053519caf.jpg</url>
      <title>Forem: Jordi Cabot</title>
      <link>https://forem.com/jcabot</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jcabot"/>
    <language>en</language>
    <item>
      <title>None of the top 10 projects in GitHub is actually a software project 🤯</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Sat, 10 May 2025 18:28:33 +0000</pubDate>
      <link>https://forem.com/jcabot/none-of-the-top-10-projects-in-github-is-actually-a-software-project-47nh</link>
      <guid>https://forem.com/jcabot/none-of-the-top-10-projects-in-github-is-actually-a-software-project-47nh</guid>
      <description>&lt;p&gt;As part of our ongoing research, we periodically monitor GitHub to understand what is gaining traction and popularity within the community, using stars as a proxy. In May 2022, &lt;a href="https://livablesoftware.com/looked-top-projects-github/" rel="noopener noreferrer"&gt;our analysis of the top 25 starred repositories&lt;/a&gt; revealed some surprising trends, particularly the prominence of non-software development projects. We analyzed the projects following the previously identified six main categories according to the project's content: software, awesome list, books, study plan, algorithm collection and style guide. You can refer to &lt;a href="https://livablesoftware.com/looked-top-projects-github/" rel="noopener noreferrer"&gt;our previous post&lt;/a&gt; for a description of the categories.&lt;/p&gt;

&lt;p&gt;We have now repeated the analysis using the same categories, and we have observed some interesting shifts. The following figure shows the distribution of these categories for the top 25 starred repositories on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flivablesoftware.com%2Fwp-content%2Fuploads%2F2025%2F05%2Ftop25-topics-1024x615.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flivablesoftware.com%2Fwp-content%2Fuploads%2F2025%2F05%2Ftop25-topics-1024x615.png" alt="" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The list of repositories can be found &lt;a href="https://docs.google.com/spreadsheets/d/1ievvZpS79Rs67_H_TqRmxVpd9mnWb-gpFgI0Aw45bvI/edit?usp=sharing" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To our surprise, while software development projects continue to hold a significant presence, their prominence has diminished relative to the remarkable growth of awesome lists and study plans. These projects focus on documentation and educational resources, which highlights &lt;strong&gt;GitHub's evolving role as a hub for collaborative knowledge sharing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Some other highlights, compared with our last analysis:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;We see &lt;strong&gt;a new entry from the AI community&lt;/strong&gt; with &lt;a href="https://github.com/Significant-Gravitas/AutoGPT" rel="noopener noreferrer"&gt;AutoGPT&lt;/a&gt;. Together with &lt;a href="https://github.com/tensorflow/tensorflow" rel="noopener noreferrer"&gt;TensorFlow&lt;/a&gt;, it represents the AI community within the software category, which is becoming significant (2 out of 8 software projects). We can expect new AI projects to reach the top 25 in the future, such as &lt;a href="https://github.com/huggingface/transformers" rel="noopener noreferrer"&gt;Transformers&lt;/a&gt; or &lt;a href="https://github.com/ollama/ollama" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; (currently ranked 34 and 36, respectively).&lt;/li&gt;
    &lt;li&gt;There is no software project among the top 10. Instead, we find awesome lists (5), study plans (4), and one books project. This shows how important the &lt;strong&gt;social role of GitHub&lt;/strong&gt; is beyond coding. These projects are maintained by the community through the addition or removal of items from a list.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Chinese projects&lt;/strong&gt;, such as &lt;a href="https://github.com/CyC2018/CS-Notes" rel="noopener noreferrer"&gt;CyC2018/CS-Notes&lt;/a&gt;, are still the only non-English projects in the top 25, maintaining their presence on the platform. There is also a project related not to code, books, or guides, but to the labor conditions of coding: &lt;a href="https://github.com/996icu/996.ICU" rel="noopener noreferrer"&gt;996.ICU&lt;/a&gt;, which tracks companies that still apply the abusive work schedule known as &lt;a href="https://en.wikipedia.org/wiki/996_working_hour_system" rel="noopener noreferrer"&gt;996&lt;/a&gt; (9 a.m. to 9 p.m., 6 days per week). GitHub is thus even used as a community-building platform, where people share their concerns about software development practices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From this review, we can see that our conclusions from the last post still stand: GitHub's transformation into a social platform for collaborative efforts beyond coding is evident. Revisiting our previous conclusions:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;&lt;strong&gt;The presence of non-software projects on GitHub continues to grow&lt;/strong&gt;. This trend may significantly affect the efforts of the repository-mining community, which relies on massive data analysis and &lt;a href="https://livablesoftware.com/creating-representative-samples-of-software-repositories/" rel="noopener noreferrer"&gt;representative samples&lt;/a&gt; to extract meaningful insights. As non-software repositories become more prevalent, &lt;strong&gt;researchers must take extra precautions to filter out irrelevant projects&lt;/strong&gt; to ensure the accuracy and relevance of their findings.&lt;/li&gt;
    &lt;li&gt;We see some projects that manage to survive financially (via sponsors or external platforms such as &lt;a href="https://opencollective.com/" rel="noopener noreferrer"&gt;Open Collective&lt;/a&gt; or &lt;a href="https://www.patreon.com/" rel="noopener noreferrer"&gt;Patreon&lt;/a&gt;), favoring their &lt;a href="https://livablesoftware.com/bots-open-source-sustainability/" rel="noopener noreferrer"&gt;long-term sustainability&lt;/a&gt;. Thus, we maintain our position of promoting a &lt;a href="https://livablesoftware.com/transparent-governance-open-source/" rel="noopener noreferrer"&gt;transparent governance&lt;/a&gt; model that states how investments will be managed and who can benefit from them, especially knowing that &lt;a href="https://livablesoftware.com/importance-of-non-tech-contributor-roles-open-source/" rel="noopener noreferrer"&gt;non-technical users&lt;/a&gt; play an increasingly key role in these communities.&lt;/li&gt;
&lt;/ul&gt;
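&lt;p&gt;To illustrate the kind of filtering researchers may need, here is a minimal sketch of a heuristic that flags repositories unlikely to be software projects. The keyword list and rules are illustrative assumptions on my part, not a validated classifier; the metadata fields follow the shape of a GitHub REST API repository object.&lt;/p&gt;

```python
# Heuristic sketch: flag repositories that are likely not software projects,
# so they can be filtered out of mining samples. The keyword list below is an
# illustrative assumption, not a validated classifier.
NON_SOFTWARE_HINTS = (
    "awesome", "interview", "roadmap", "study plan",
    "curated list", "books", "tutorial",
)

def is_likely_software(repo: dict) -> bool:
    """repo: metadata dict with 'description', 'topics', and 'language' keys,
    following the shape of a GitHub REST API repository object."""
    text = " ".join([repo.get("description") or "", *repo.get("topics", [])]).lower()
    if any(hint in text for hint in NON_SOFTWARE_HINTS):
        return False
    # Repositories with no detected primary language rarely contain code.
    return repo.get("language") is not None
```

&lt;p&gt;A real study would combine such metadata heuristics with manual inspection of a sample of the results, since descriptions alone can be misleading.&lt;/p&gt;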

&lt;p&gt;As GitHub continues to evolve, we will keep a close eye on these trends to provide ongoing insights into the platform's direction. Our research will adapt to these changes, ensuring that we remain at the forefront of understanding GitHub's role in the broader tech ecosystem.&lt;/p&gt;

</description>
      <category>github</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Beyond Vibe Coding: Welcome to Vibe Modeling</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Thu, 01 May 2025 14:38:40 +0000</pubDate>
      <link>https://forem.com/jcabot/beyond-vibe-coding-welcome-to-vibe-modeling-558o</link>
      <guid>https://forem.com/jcabot/beyond-vibe-coding-welcome-to-vibe-modeling-558o</guid>
      <description>&lt;p&gt;Everybody talks about &lt;a href="https://x.com/karpathy/status/1886192184808149383?lang=en" rel="noopener noreferrer"&gt;vibe coding&lt;/a&gt;, where you develop software by talking to an LLM tuned for coding. And keep asking it to create what you need until you get something that (apparently) works. And the "apparently" is key here. Even Andrej Karpathy, who coined the term, said it was great for &lt;em&gt;throwaway weekend projects &lt;/em&gt;where people, even if they had no coding expertise, could quickly explore and build artefacts that &lt;em&gt;mostly&lt;/em&gt; work.&lt;/p&gt;

&lt;p&gt;Despite all the hype, vibe coding should never be used for more than this, as getting the result of a vibe coding session ready for production would require:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;coding expertise and&lt;/li&gt;
    &lt;li&gt;as much time to test and fix the code as if you started the project from scratch with a "traditional" process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But what if we could keep the magic of AI-assisted development &lt;em&gt;without&lt;/em&gt; the unpredictability of LLM-generated code? That’s where &lt;strong&gt;vibe modeling&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;h2&gt;What is vibe modeling?&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Vibe modeling&lt;/strong&gt; is the process of building software through conversational interaction with an LLM trained for &lt;strong&gt;modeling&lt;/strong&gt;, not coding. And then following a &lt;a href="https://modeling-languages.com/low-code-vs-model-driven/" rel="noopener noreferrer"&gt;model-based / low-code&lt;/a&gt; approach to generate deterministic code from those "vibed models". Think of vibe modeling as a model-driven vibe coding approach.&lt;/p&gt;

&lt;p&gt;Indeed, in vibe modeling, the &lt;strong&gt;LLM does not aim to generate code but models&lt;/strong&gt;. And the model-to-code step is performed with "classical" code-generation templates (or any other type of precise and semantically equivalent &lt;a href="https://modeling-languages.com/executable-models-vs-code-generation-vs-model-interpretation-2/" rel="noopener noreferrer"&gt;executable modeling&lt;/a&gt; techniques).&lt;/p&gt;

&lt;p&gt;This has two major advantages over vibe coding:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;
&lt;strong&gt;Understandable output&lt;/strong&gt;. A user is able to validate the quality of the LLM's modeling output even without coding expertise. Models are more abstract and closer to the domain, so a user should be able to understand them with limited effort. True, some basic modeling knowledge may still be required, but it is certainly much easier to validate a model (e.g., a graphical class diagram) than a bunch of lines of code.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Reliable code-generation&lt;/strong&gt;. The generation process is deterministic. If the model is good, we know the code is good and there is no need to check it.&lt;/li&gt;
&lt;/ol&gt;
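&lt;p&gt;To make the second point concrete, here is a toy sketch of a deterministic model-to-code template. The model schema (a class name plus typed attributes) is a simplification I am assuming for illustration; real low-code tools rely on much richer metamodels.&lt;/p&gt;

```python
# Toy sketch of the deterministic model-to-code step: a fixed template turns a
# (vibed) class model into code. The model schema here is an assumed
# simplification; real tools use richer metamodels.
CLASS_TEMPLATE = """class {name}:
    def __init__(self{params}):
{assigns}"""

def generate_class(model: dict) -> str:
    """model: {'name': str, 'attributes': [(attr_name, type_name), ...]}"""
    params = "".join(f", {a}: {t}" for a, t in model["attributes"])
    assigns = "\n".join(f"        self.{a} = {a}" for a, _ in model["attributes"])
    return CLASS_TEMPLATE.format(name=model["name"], params=params, assigns=assigns)

book = {"name": "Book", "attributes": [("title", "str"), ("pages", "int")]}
print(generate_class(book))
```

&lt;p&gt;Because the template is fixed, the same model always produces the same code: validating the model is enough, and there is no LLM randomness left in the generation step.&lt;/p&gt;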

&lt;p&gt;Vibe modeling is just one of the possible &lt;a href="https://modeling-languages.com/welcome-to-the-low-modeling-revolution/" rel="noopener noreferrer"&gt;low-modeling&lt;/a&gt; strategies and can be combined with the others. For instance, you could upload to the LLM (as context for the prompt) any document already describing the domain you want to model (interviews, manuals, tutorials, ...) and then chat with the LLM to improve and adapt this first &lt;a href="https://modeling-languages.com/nlp-architecture-model-autocompletion-domain/" rel="noopener noreferrer"&gt;partial model&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Is there any vibe modeling tool?&lt;/h2&gt;

&lt;p&gt;Not quite. Low-code tools (like &lt;a href="https://modeling-languages.com/?s=BESSER" rel="noopener noreferrer"&gt;BESSER&lt;/a&gt; or &lt;a href="https://www.mendix.com/platform/ai/" rel="noopener noreferrer"&gt;Mendix&lt;/a&gt;) offer more and more AI features, but right now they mostly focus on some kind of smart autocomplete for models, rather than an integrated chatbot you can use for vibe modeling. We have implemented a &lt;a href="https://modeling-languages.com/tot-llm-domain-modeling/" rel="noopener noreferrer"&gt;tree-of-thoughts approach for domain modeling&lt;/a&gt; and are now working on integrating a chatbot into the &lt;a href="https://modeling-languages.com/besser-graphical-modeling-editor/" rel="noopener noreferrer"&gt;BESSER web modeling editor&lt;/a&gt; to go from this one-shot model to an iterative, conversation-based model refinement. But you'll need to wait a little bit longer for that.&lt;/p&gt;

&lt;p&gt;In the meantime, we’d love to hear from you: 👉 What features would &lt;em&gt;you&lt;/em&gt; want in an ideal vibe modeling experience?&lt;/p&gt;


</description>
      <category>vibecoding</category>
      <category>modeling</category>
      <category>llm</category>
    </item>
    <item>
      <title>Three laws of software agents by Isaac Asimov (kind of)</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Thu, 03 Apr 2025 23:35:47 +0000</pubDate>
      <link>https://forem.com/jcabot/three-laws-of-software-agents-by-isaac-asimov-kind-of-36ol</link>
      <guid>https://forem.com/jcabot/three-laws-of-software-agents-by-isaac-asimov-kind-of-36ol</guid>
      <description>&lt;p&gt;What would Isaac Asimov think about the &lt;a href="https://livablesoftware.com/besser-agentic-framework/" rel="noopener noreferrer"&gt;software agents&lt;/a&gt; taking the world by storm and, more and more, &lt;a href="https://livablesoftware.com/who-will-develop-apps-future/" rel="nofollow noopener noreferrer"&gt;taking care of building the applications of the future&lt;/a&gt;? Would he had reframed his &lt;a href="https://en.wikipedia.org/wiki/Three_Laws_of_Robotics" rel="noopener noreferrer"&gt;three laws of robotics&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;We will never know, but I'd like to propose my own interpretation, keeping the original spirit of the rules (if you need a reminder, read the original rules at the end).&lt;/p&gt;

&lt;h2&gt;The three laws of software agents&lt;/h2&gt;

&lt;ol&gt;
    &lt;li&gt;An agent may not introduce bugs into the software or, through inaction, allow a human to do so.&lt;/li&gt;
    &lt;li&gt;An agent must obey the orders to modify the software given by human beings, except where such orders would conflict with the First Law.&lt;/li&gt;
    &lt;li&gt;An agent must protect its own contributions to the software as long as such protection does not conflict with the First or Second Law.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What do you think? Do you have an alternative proposal? If so, I'd be happy to read it; leave a comment!&lt;/p&gt;

&lt;h2&gt;The original rules&lt;/h2&gt;

&lt;p&gt;In case any of you needs a reminder, here are the original three laws of robotics, which robots were to follow in several of his stories. The rules were first introduced in his 1942 short story "Runaround".&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;A robot may not injure a human being or, through inaction, allow a human being to come to harm.&lt;/li&gt;
    &lt;li&gt;A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.&lt;/li&gt;
    &lt;li&gt;A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Other rule variations&lt;/h2&gt;

&lt;p&gt;I'm obviously not the only one versioning the rules. For instance, I found these other &lt;a href="https://secretgeek.net/laws_3" rel="noopener noreferrer"&gt;three rules for software development&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;A developer must write code that creates value.&lt;/li&gt;
    &lt;li&gt;A developer must expend effort making their code easy to maintain, except where such expenditure will conflict with the first law.&lt;/li&gt;
    &lt;li&gt;A developer must reduce their code to the smallest size possible, as long as such reduction does not conflict with the first two laws.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;or these other &lt;a href="https://medium.com/@schemouil/rust-and-the-three-laws-of-informatics-4324062b322b" rel="noopener noreferrer"&gt;three laws of informatics&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;Programs must be &lt;strong&gt;correct&lt;/strong&gt;.&lt;/li&gt;
    &lt;li&gt;Programs must be &lt;strong&gt;maintainable&lt;/strong&gt;, except where it would conflict with the First Law.&lt;/li&gt;
    &lt;li&gt;Programs must be &lt;strong&gt;efficient&lt;/strong&gt;, except where it would conflict with the First or Second Law.&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;that also make sense to me. Have you seen other examples?&lt;/p&gt;

</description>
      <category>asimov</category>
      <category>agents</category>
      <category>law</category>
    </item>
    <item>
      <title>On the rewards of programming by Donald E Knuth</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Sun, 24 Nov 2024 18:17:24 +0000</pubDate>
      <link>https://forem.com/jcabot/on-the-rewards-of-programming-by-donald-e-knuth-527j</link>
      <guid>https://forem.com/jcabot/on-the-rewards-of-programming-by-donald-e-knuth-527j</guid>
      <description>&lt;p&gt;If &lt;a href="https://www-cs-faculty.stanford.edu/~knuth/" rel="noopener noreferrer"&gt;Donald E Knuth&lt;/a&gt; says it, who are we to disagree?:&lt;/p&gt;

&lt;p&gt;"Computer programs are fun to write, and well-written computer programs are fun to read. One of life's greatest pleasures can be the composition of a computer program that you know will be a pleasure for other people to read, and for yourself to read.&lt;/p&gt;

&lt;p&gt;Computer programs can also do useful work. One of life's greatest sources of satisfaction is the knowledge that something you have created is contributing to the progress or welfare of society.&lt;/p&gt;

&lt;p&gt;Some people even get paid for writing computer programs! Programming can therefore be triply rewarding: on aesthetic, humanitarian, and economic grounds."&lt;/p&gt;

&lt;p&gt;Taken from his &lt;a href="https://en.wikipedia.org/wiki/Literate_programming" rel="noopener noreferrer"&gt;book on Literate Programming&lt;/a&gt; &lt;/p&gt;

</description>
      <category>programming</category>
      <category>knuth</category>
    </item>
    <item>
      <title>Dashboard of open source low-code tools</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Sun, 24 Nov 2024 13:59:25 +0000</pubDate>
      <link>https://forem.com/jcabot/dashboard-of-open-source-low-code-tools-39md</link>
      <guid>https://forem.com/jcabot/dashboard-of-open-source-low-code-tools-39md</guid>
      <description>&lt;p&gt;I've created a &lt;a href="https://oss-lowcode-tools.streamlit.app/" rel="noopener noreferrer"&gt;dashboard listing open-source low-code tools&lt;/a&gt; available as public repositories in GitHub.&lt;/p&gt;

&lt;p&gt;The selection method is based on the following inclusion criteria:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Repositories that declare themselves as low-code projects&lt;/li&gt;
    &lt;li&gt;Repositories with more than 50 stars&lt;/li&gt;
    &lt;li&gt;Active repositories (last commit no more than one year ago)&lt;/li&gt;
    &lt;li&gt;Tools that aim to generate any component of a software application, including AI components, dashboards, or full applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and exclusion criteria:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Repositories with no information in English&lt;/li&gt;
    &lt;li&gt;Repositories that were just created to host the source code of a published article&lt;/li&gt;
    &lt;li&gt;Repositories that are &lt;a href="https://github.com/sindresorhus/awesome" rel="noopener noreferrer"&gt;awesome lists&lt;/a&gt; or collections of resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final list is the intersection of the above criteria, and it has also been manually curated to remove projects that use "low-code" in a sense different from what we mean by low-code in software development.&lt;/p&gt;
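&lt;p&gt;For reference, here is a sketch of how the inclusion criteria could be expressed as a single GitHub repository-search query (REST API v3). The assumption that candidate repositories declare a low-code topic is mine; projects that only mention low-code in their description would need an additional query, and the exclusion criteria still require manual review.&lt;/p&gt;

```python
# Sketch: encode the inclusion criteria (self-declared low-code, >50 stars,
# active within the last year) as a GitHub repository-search query string.
# The use of the "low-code" topic is an assumption for illustration.
from datetime import date, timedelta

def build_query(min_stars: int = 50, max_inactive_days: int = 365) -> str:
    cutoff = date.today() - timedelta(days=max_inactive_days)
    return f"topic:low-code stars:>{min_stars} pushed:>{cutoff.isoformat()}"

# Example request (requires the third-party `requests` package):
# import requests
# r = requests.get("https://api.github.com/search/repositories",
#                  params={"q": build_query(), "per_page": 100})
```

&lt;p&gt;Keeping the criteria in a query string like this also makes the selection easy to re-run when refreshing the dashboard.&lt;/p&gt;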

&lt;p&gt;An initial search with the above inclusion criteria resulted in a surprising total of 301 candidates. But a couple of years ago (February 2022) I had written a post complaining about the &lt;a href="https://modeling-languages.com/on-the-lack-of-open-source-low-code-tools/" rel="noopener noreferrer"&gt;lack of open source low-code tools&lt;/a&gt;. Was I wrong? Has the situation changed drastically over the last two years? What's going on?&lt;/p&gt;

&lt;h2&gt;Real number of real low-code tools&lt;/h2&gt;

&lt;p&gt;While it's true that there were 301 tools satisfying the inclusion criteria, the &lt;a href="https://oss-lowcode-tools.streamlit.app/" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt; only lists 151, once we remove those matching the exclusion criteria.&lt;/p&gt;

&lt;p&gt;But 151 is still quite a lot compared with what I covered in the previous post. So I decided to see how many of these low-code tools were model-based. Or just mentioned the word models (or derivatives). It turns out, &lt;strong&gt;only 9 of the 151 low-code tools claim to use any type of model&lt;/strong&gt;. Not even 9 if we remove a couple that only target AI components and merge the three repos of the same platform. Of course, &lt;a href="https://modeling-languages.com/lowcode-opensource-besser/" rel="noopener noreferrer"&gt;BESSER&lt;/a&gt; is one of the few exceptions 😇.&lt;/p&gt;

&lt;p&gt;And to me, this minimal number is the one that matters. As you know, my &lt;a href="https://modeling-languages.com/low-code-vs-model-driven/" rel="noopener noreferrer"&gt;vision of low-code is linked to that of model-based approaches&lt;/a&gt;, so tools labeled low-code that offer zero control over how the code is generated are not what I'd really call low-code. In this sense, I still believe there is a &lt;a href="https://modeling-languages.com/on-the-lack-of-open-source-low-code-tools/" rel="noopener noreferrer"&gt;lack of open source low-code tools&lt;/a&gt;, as I understand them.&lt;/p&gt;

&lt;h2&gt;A fragmented community&lt;/h2&gt;

&lt;p&gt;Keep in mind that I'm not saying there are no other model-driven tools or code-generators in GitHub beyond the tools in the dashboard.&lt;/p&gt;

&lt;p&gt;In fact, there are a few. For instance, &lt;a href="https://github.com/telosys-tools-bricks/telosys-cli" rel="noopener noreferrer"&gt;Telosys&lt;/a&gt; (which we &lt;a href="https://modeling-languages.com/telosys-tools-the-concept-of-lightweight-model-for-code-generation/" rel="noopener noreferrer"&gt;also covered here&lt;/a&gt;). But Telosys defines itself as a "&lt;em&gt;lightweight and pragmatic code generator&lt;/em&gt;", not as a low-code tool. Once again, the diverse terminology in our area makes a small domain even smaller and hides useful tools from potential users who search with the "wrong" keywords. To read more about the relationship between the low-code and model-driven communities, take a look at our work on a &lt;a href="https://modeling-languages.com/a-metascience-study-of-the-adoption-of-low-code-terminology-in-modeling-publications/" rel="noopener noreferrer"&gt;Metascience Study of the Adoption of Low-Code terminology in Modeling Publications&lt;/a&gt;, which we are now updating and extending, as I still believe it's an important discussion to have in our community.&lt;/p&gt;

&lt;h2&gt;Other interesting aspects&lt;/h2&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;A significant number of low-code tools are Chinese&lt;/strong&gt; and target only Chinese users (there is zero effort to provide any kind of English documentation, not even in the readme). As I don't speak Chinese, I can't really dig deeper into the characteristics of these tools, but low-code does seem to have a strong presence in China. Given the exclusion criteria, these tools are not part of the dashboard; indeed, this was one of the most frequently applied exclusion criteria when going down from 301 to 151.&lt;/li&gt;
    &lt;li&gt;There are &lt;strong&gt;very few low-code tools in Python&lt;/strong&gt; (see the end of the dashboard for some global stats like this one). Most of the existing ones target specific AI components; they are not low-code platforms for full application development. Once again, BESSER is one of the few exceptions, but in a world becoming more and more AI-driven, I'd like to see more low-code tools embracing Python and thus facilitating the interaction with machine learning and AI libraries.&lt;/li&gt;
    &lt;li&gt;If you look at the distribution of tools per year of creation, you'll see that &lt;strong&gt;some tools were created before the low-code term was coined.&lt;/strong&gt; This suggests that such tools saw the marketing wave of low-code and decided to rebrand themselves to gain visibility.&lt;/li&gt;
    &lt;li&gt;The dashboard itself was built with the help of Cursor. As usual, I was faster than if I had done it alone, but I still suffered some hallucinations: Cursor suggested using a cell renderer that made a lot of sense but that, unfortunately, didn't exist, and for a couple of bugs I had to go the "old way" and find the solution myself on Stack Overflow and the &lt;a href="https://streamlit.io/" rel="noopener noreferrer"&gt;Streamlit&lt;/a&gt; forums.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Check out the dashboard&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://oss-lowcode-tools.streamlit.app/" rel="noopener noreferrer"&gt;Check out the dashboard&lt;/a&gt;, play with it and let me know what you think. Here or in the&lt;a href="https://github.com/jcabot/oss-lowcode-tools" rel="noopener noreferrer"&gt; dashboard GitHub repository&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>lowcode</category>
    </item>
    <item>
      <title>A wishlist and (potential) architecture for a low-code platform for smart software development</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Fri, 13 Jan 2023 12:38:42 +0000</pubDate>
      <link>https://forem.com/jcabot/a-wishlist-and-potential-architecture-for-a-low-code-platform-for-smart-software-development-n9p</link>
      <guid>https://forem.com/jcabot/a-wishlist-and-potential-architecture-for-a-low-code-platform-for-smart-software-development-n9p</guid>
      <description>&lt;p&gt;Smart software, also called "AI-enhanced" or "ML-enabled"give rise to unique software engineering challenge as these systems are harder to specify, verify and test. Additional complexity arises from all the potential interactions between the AI components and the “traditional” ones (since we need to specify how they collaborate, test that they behave consistently and analyze their interdependencies).&lt;/p&gt;

&lt;p&gt;In this post we offer a “wishlist” that outlines what developers should look for in low-code tools for smart software. We also present an architecture (see the figure below) that is one way to satisfy the items on that wishlist.&lt;/p&gt;

&lt;h2&gt;Low-code for smart software wishlist&lt;/h2&gt;

&lt;p&gt;We believe a developer working on smart software would be interested in a low-code platform capable of:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Managing concerns for both AI components and traditional software components in a consistent and integrated way including their interdependencies (e.g., an AI component trained using the data entered via a regular component).&lt;/li&gt;
    &lt;li&gt;Supporting the complete lifecycle of the required AI components (training, validation, deployment and monitoring), as well as tracking the decisions behind their architecture and evolution.&lt;/li&gt;
    &lt;li&gt;Operating with a technology-independent and platform-agnostic specification while supporting a transparent deployment to different AI service providers.&lt;/li&gt;
    &lt;li&gt;Enabling the integration of AI components in both the front-end (e.g., chatbots) and back-end (e.g., prediction tasks) of our system.&lt;/li&gt;
    &lt;li&gt;Defining high-level goals or quality concerns (e.g., fairness) that can be automatically tested and/or monitored after deployment.&lt;/li&gt;
    &lt;li&gt;Being used without intricate knowledge of the underlying AI techniques, offering mechanisms to automatically select a suitable method and (hyper)parameters for a particular usage scenario.&lt;/li&gt;
    &lt;li&gt;Supporting a variety of AI tasks, beyond text or image classification.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;A low-code architecture for smart software&lt;/h2&gt;

&lt;p&gt;To provide the features identified in the previous wishlist, we envision the following architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgxxn5ro3tntf3i6ha8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgxxn5ro3tntf3i6ha8a.png" alt="Image description" width="720" height="405"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This architecture is based around the following components:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;Model editor&lt;/strong&gt;. Developers provide the description of the software system using a unified notation, a smart software model, which includes both traditional and smart elements:
&lt;ul&gt;
    &lt;li&gt;A description of the application domain and the architecture of the software system, i.e., components and the relationships between them.&lt;/li&gt;
    &lt;li&gt;A high-level description of the tasks to be performed by the AI components, the target quality metrics as well as concerns (e.g., ethical issues, resource budget for training or deployment) that should be considered.&lt;/li&gt;
    &lt;li&gt;A description of the input data sources to be used for training the AI components, with emphasis on the &lt;a href="https://modeling-languages.com/describeml-machine-learning-datasets/" rel="noopener noreferrer"&gt;identification of information relevant from the point of view of fairness&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;/li&gt;


    &lt;li&gt;

&lt;strong&gt;Code generator&lt;/strong&gt;. The information provided in the model drives the generation of code implementing the different processes within the low-code tool. Who knows? &lt;a href="https://modeling-languages.com/models-to-code-models-to-prompts/" rel="noopener noreferrer"&gt;LLMs (large language models) could also play a role here&lt;/a&gt;!&lt;/li&gt;


    &lt;li&gt;

&lt;strong&gt;Training&lt;/strong&gt;. The code generator emits code to train a ML model, preparing training and validation datasets from the input data sources according to the resource budget. After training, the target quality metrics are measured and any ethical constraint is checked.&lt;/li&gt;


    &lt;li&gt;

&lt;strong&gt;Deployment.&lt;/strong&gt; The trained model is deployed on a particular AI platform, which can be either a cloud service from a variety of providers or a local AI package.&lt;/li&gt;


    &lt;li&gt;

&lt;strong&gt;Traditional software components&lt;/strong&gt;. Software modules that do not integrate AI features are generated in the usual way. These modules interact with AI components through a dedicated API.&lt;/li&gt;


    &lt;li&gt;

&lt;strong&gt;Monitoring and feedback&lt;/strong&gt;. Finally, the AI components should be continuously monitored and tested after deployment in order to provide continuous feedback to the developer. This feedback should include explanations regarding the decisions made by AI components, linking back to the input data sources or tracing back to requirements in the input model.&lt;/li&gt;


&lt;/ul&gt;
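
&lt;p&gt;To make the input model above a bit more tangible, here is a minimal Python sketch of the kind of information it could capture (all class and field names are hypothetical, not taken from any existing low-code platform):&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class AIComponentSpec:
    """High-level description of an AI component in the low-code model."""
    task: str                                            # e.g. "sentiment analysis"
    target_metrics: dict = field(default_factory=dict)   # e.g. {"accuracy": 0.85}
    ethical_constraints: list = field(default_factory=list)
    training_budget_hours: float = 0.0
    data_sources: list = field(default_factory=list)

def metrics_satisfied(spec: AIComponentSpec, measured: dict) -> bool:
    """Check measured quality metrics against the model's declared targets."""
    return all(measured.get(name, 0.0) >= target
               for name, target in spec.target_metrics.items())

spec = AIComponentSpec(task="sentiment analysis",
                       target_metrics={"accuracy": 0.85},
                       data_sources=["reviews.csv"])
print(metrics_satisfied(spec, {"accuracy": 0.9}))   # True: target met
```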

&lt;p&gt;All components in this architecture are feasible and partially already exist as separate elements in other low-code, AI or monitoring platforms. &lt;strong&gt;But bringing them all together in a unified framework could be a force multiplier&lt;/strong&gt; and a significant next step in lowering the barrier to entry for the next generation of smart software developers.&lt;/p&gt;

&lt;h2&gt; To read more &lt;/h2&gt;

&lt;p&gt;This post is a summary of a reflection we published in &lt;a href="https://ieeexplore.ieee.org/abstract/document/9994062" rel="noopener noreferrer"&gt;this column,&lt;/a&gt; co-authored by &lt;a href="https://jordicabot.com/" rel="noopener noreferrer"&gt;myself&lt;/a&gt; and &lt;a href="https://robertclariso.github.io/" rel="noopener noreferrer"&gt;Robert Clarisó&lt;/a&gt; for &lt;a href="https://www.computer.org/csdl/magazine/so" rel="noopener noreferrer"&gt;IEEE Software&lt;/a&gt; on the promises and challenges of &lt;a href="https://modeling-languages.com/low-code-vs-model-driven/" rel="noopener noreferrer"&gt;low-code platforms&lt;/a&gt; to accelerate the development of &lt;a href="https://modeling-languages.com/smart-modeling-smart-software-keynote/" rel="noopener noreferrer"&gt;smart software&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>debugging</category>
      <category>developer</category>
    </item>
    <item>
      <title>The full tech stack to run a chatbot — behind the scenes of an open source bot platform</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Tue, 12 Jul 2022 12:46:39 +0000</pubDate>
      <link>https://forem.com/jcabot/the-full-tech-stack-to-run-a-chatbot-behind-the-scenes-of-an-open-source-bot-platform-16no</link>
      <guid>https://forem.com/jcabot/the-full-tech-stack-to-run-a-chatbot-behind-the-scenes-of-an-open-source-bot-platform-16no</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqb62mj60dx2464hjgu7q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqb62mj60dx2464hjgu7q.jpg" alt="Image description" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We always aim to "educate" aspiring chatbot developers about the&lt;a href="https://xatkit.com/the-software-challenges-of-building-smart-chatbots/" rel="noopener noreferrer"&gt; software challenges of building and running a good chatbot&lt;/a&gt;. Today we want to go a bit further and give you a behind-the-scenes look at all the languages, libraries and frameworks that must play together to run a chatbot.&lt;/p&gt;

&lt;p&gt;As the full stack depends on the specific chatbot (especially regarding the &lt;a href="https://github.com/xatkit-bot-platform/xatkit/wiki/Xatkit-Chat-Platform" rel="noopener noreferrer"&gt;communication channels&lt;/a&gt; where it will be deployed), we will use our &lt;a href="https://wooxatkitdemo.wpengine.com/" rel="noopener noreferrer"&gt;ecommerce chatbot&lt;/a&gt; as an example.&lt;/p&gt;

&lt;h2&gt;1. The core chatbot logic&lt;/h2&gt;

&lt;p&gt;The core elements of the ecommerce bot (the intents, training sentences and business logic to execute for every matched intent) are implemented with our &lt;a href="https://xatkit.com/fluent-interface-building-chatbots-bots/" rel="noopener noreferrer"&gt;&lt;strong&gt;Java&lt;/strong&gt; Fluent API&lt;/a&gt; that wraps a set of &lt;a href="https://www.eclipse.org/modeling/emf/" rel="noopener noreferrer"&gt;EMF&lt;/a&gt;-based core classes modeling our chatbot specification primitives and the help of &lt;a href="https://projectlombok.org/" rel="noopener noreferrer"&gt;Lombok&lt;/a&gt; to simplify the writing of the bot specification.&lt;/p&gt;

&lt;p&gt;As the bot is automatically adapted to the data of the eCommerce shop hosting the bot, the bot logic communicates with the &lt;a href="https://woocommerce.com/" rel="noopener noreferrer"&gt;WooCommerce&lt;/a&gt; API and stores some shop data into a &lt;strong&gt;SQL&lt;/strong&gt; &lt;a href="https://www.postgresql.org/" rel="noopener noreferrer"&gt;PostgreSQL database&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;2. The NLP Engine&lt;/h2&gt;

&lt;p&gt;To determine which chatbot intent best matches the user's textual input, we rely on &lt;a href="https://github.com/axa-group/nlp.js/" rel="noopener noreferrer"&gt;nlp.js&lt;/a&gt; (in &lt;strong&gt;JS&lt;/strong&gt;), though we are in the process of moving to our new &lt;a href="https://github.com/xatkit-bot-platform/xatkit-nlu-server" rel="noopener noreferrer"&gt;&lt;strong&gt;Python&lt;/strong&gt; NLP server&lt;/a&gt;, better optimized for the needs of eCommerce conversations. A &lt;a href="https://github.com/xatkit-bot-platform/xatkit/wiki/Processors" rel="noopener noreferrer"&gt;preprocessor language model&lt;/a&gt; is also used to improve the chances of a match.&lt;/p&gt;

&lt;h2&gt;3. The front-end&lt;/h2&gt;

&lt;p&gt;The eCommerce chatbot is implemented as a &lt;a href="https://wordpress.org/plugins/xatkit-chatbot-for-woocommerce/" rel="noopener noreferrer"&gt;WordPress plugin&lt;/a&gt; in &lt;strong&gt;PHP&lt;/strong&gt; whose sole mission is to embed, in the proper WordPress pages, the &lt;a href="https://github.com/xatkit-bot-platform/xatkit-chat-widget" rel="noopener noreferrer"&gt;Xatkit widget&lt;/a&gt; displaying the bot. The widget itself is a &lt;a href="https://reactjs.org/" rel="noopener noreferrer"&gt;React component&lt;/a&gt; that talks with the server component managing the core chatbot logic via a &lt;a href="https://en.wikipedia.org/wiki/WebSocket" rel="noopener noreferrer"&gt;WebSocket&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;4. The configuration and monitoring component&lt;/h2&gt;

&lt;p&gt;Any non-trivial bot comes with a dashboard to configure the bot (e.g. removing some predefined conversations, adding new ones, ...) and monitor its results, showing, for instance, the user questions the bot was unable to understand.&lt;/p&gt;

&lt;p&gt;Our eCommerce dashboard is a &lt;a href="https://spring.io/" rel="noopener noreferrer"&gt;Spring&lt;/a&gt; application relying on &lt;a href="https://www.thymeleaf.org/" rel="noopener noreferrer"&gt;Thymeleaf &lt;/a&gt;as server-side Java template engine and &lt;a href="https://get.foundation/" rel="noopener noreferrer"&gt;foundation&lt;/a&gt; as responsive front-end framework.&lt;/p&gt;

&lt;h2&gt;All in all, chatbots are indeed a complex piece of engineering&lt;/h2&gt;

&lt;p&gt;Take a look at all the links above. We have 5 major programming languages (together with their package managers: Maven, npm,...). 7 if we count HTML and CSS. Plus a myriad of auxiliary libraries and frameworks that require specialized knowledge: knowing Java doesn't mean you can master Spring in a breeze, much less create Java Fluent APIs.&lt;/p&gt;

&lt;p&gt;If you don't have the luxury of having a large team with specialized profiles, maybe it's time to start testing all these &lt;a href="https://livablesoftware.com/smart-intelligent-ide-programming/" rel="noopener noreferrer"&gt;intelligent code assistants&lt;/a&gt;. And I'm only half-joking.&lt;/p&gt;

&lt;p&gt;Definitely, I feel we need better software engineering tools that help manage the complexity of this type of multi-language project and its dependencies. For instance, we have (a few) tools for reverse engineering large codebases written in one language that help you understand and evolve them. But I don't know of any that can do the same with a number of different repositories (we have over 80, including private ones, in &lt;a href="https://github.com/xatkit-bot-platform" rel="noopener noreferrer"&gt;Xatkit&lt;/a&gt;!) in a mix of languages, where the dependencies between the components are not as "easy" to find as looking for imports or references in the package manager definition.&lt;/p&gt;

&lt;p&gt;While we wait for these tools to pop up, is there any tech question on the internals of &lt;a href="https://xatkit.com/" rel="noopener noreferrer"&gt;Xatkit&lt;/a&gt; you'd like to ask? And if you want to read more about the technologies we have listed above, this Twitter thread gives some pointers to good tutorials for them:&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1545179026666438657-764" src="https://platform.twitter.com/embed/Tweet.html?id=1545179026666438657"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

</description>
      <category>chatbots</category>
      <category>programming</category>
      <category>nlp</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to build your own chatbot NLP engine</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Mon, 28 Mar 2022 09:23:57 +0000</pubDate>
      <link>https://forem.com/jcabot/how-to-build-your-own-chatbot-nlp-engine-326n</link>
      <guid>https://forem.com/jcabot/how-to-build-your-own-chatbot-nlp-engine-326n</guid>
      <description>&lt;p&gt;This post goes over our &lt;a href="https://github.com/xatkit-bot-platform/xatkit-nlu-server" rel="noopener noreferrer"&gt;new NLU chatbot intent classifier engine for chatbots&lt;/a&gt; to show how you can use it to:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;(obviously) Create your own chatbots (pairing it up with &lt;a href="https://github.com/xatkit-bot-platform/xatkit" rel="noopener noreferrer"&gt;Xatkit &lt;/a&gt;or any other chatbot platform for all the front-end and behaviour processing components)&lt;/li&gt;
    &lt;li&gt;Learn about natural language processing by playing with the code and executing it with different parameter combinations&lt;/li&gt;
    &lt;li&gt;Demystify the complexity of building an NLU engine (thanks to the myriad of wonderful open-source libraries and frameworks available) and provide you with a starting point you can use to build your own.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ready for some NLP fun? We'll first give some context about the project and then we'll take a deep dive into the NLU engine code.&lt;/p&gt;

&lt;h2&gt;What is an intent classifier?&lt;/h2&gt;

&lt;p&gt;An intent or &lt;em&gt;intention&lt;/em&gt; is a user question / request a chatbot should be able to recognize, for instance, the questions in a FAQ-like chatbot. To be able to answer the user input request (called user &lt;em&gt;utterance&lt;/em&gt;), the bot first needs to understand what the user is asking about.&lt;/p&gt;

&lt;p&gt;An intent classifier (also known as intent recognition, intent matching, intent detector,...) is then the function that, given a set of intents the bot can understand and an utterance from the user, returns the probability that the user is asking about a certain intent.&lt;/p&gt;

&lt;p&gt;Nowadays, intent classifiers are typically implemented as &lt;a href="https://en.wikipedia.org/wiki/Multiclass_classification#Neural_networks" rel="noopener noreferrer"&gt;multi-class classifier&lt;/a&gt; neural networks.&lt;/p&gt;

&lt;h2&gt;Does the world really need yet another NLU chatbot engine?&lt;/h2&gt;

&lt;p&gt;Probably not. In fact, in Xatkit we aim to be a &lt;a href="https://xatkit.com/chatbot-orchestration-platform-open-source/" rel="noopener noreferrer"&gt;chatbot orchestration platform&lt;/a&gt; precisely to avoid reinventing the wheel and the &lt;a href="https://en.wikipedia.org/wiki/Not_invented_here" rel="noopener noreferrer"&gt;not-invented-here syndrome&lt;/a&gt;. So, in most cases, other existing platforms (like DialogFlow or &lt;a href="https://github.com/axa-group/nlp.js/" rel="noopener noreferrer"&gt;nlp.js&lt;/a&gt;) will work just fine. But we have also realized that there are always some particularly tricky bots for which you really need to be able to customize your engine to the specific chatbot semantics to get the results you want.&lt;/p&gt;

&lt;p&gt;And this is sometimes impossible (for platforms where the source code is not available, which, by the way, is the norm) or difficult (e.g. if you're not ready to invest lots of time trying to understand how the platform works, or if the platform is not designed in a way that easily allows for customizations).&lt;/p&gt;

&lt;p&gt;And of course, if your goal is to learn natural language processing techniques, there is nothing better than writing some NLP code :-). Ours is simple enough to give you a kick-start.&lt;/p&gt;

&lt;h2&gt;What makes our NLU engine different?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/xatkit-bot-platform/xatkit-nlu-server" rel="noopener noreferrer"&gt;Xatkit's NLU Engine&lt;/a&gt; is (or better said, it will be, as what we're releasing now is still an alpha version with limited functionality, which is good to play with and to learn, not so much to use it on production ;-) ) a flexible and pragmatic chatbot engine. A couple of examples of this flexible and pragmatic approach.&lt;/p&gt;

&lt;h3&gt;Xatkit lets you configure &lt;em&gt;almost&lt;/em&gt; everything&lt;/h3&gt;

&lt;p&gt;The data processing, the training of the network, the behaviour of the classifier itself, etc. can all be configured. Every time we make a design decision, we create a parameter for it so that you can choose whether to follow our strategy or not.&lt;/p&gt;

&lt;h3&gt;Xatkit creates a &lt;em&gt;separate neural network&lt;/em&gt; for each bot context&lt;/h3&gt;

&lt;p&gt;We see bots as having different conversation contexts (e.g. as part of a &lt;a href="https://xatkit.com/chatbot-dsl-state-machines-xatkit-language/" rel="nofollow noopener noreferrer"&gt;bot state machine&lt;/a&gt;). When in a given context, only the intents that make sense in that context should be evaluated when considering possible matches.&lt;/p&gt;

&lt;p&gt;A Xatkit bot is composed of contexts, where each context may include a number of intents (see the &lt;code&gt;dsl&lt;/code&gt; package). During the training phase, an NLP model is trained on those intents' training sentences and attached to the context for future predictions.&lt;/p&gt;

&lt;h3&gt;
&lt;a id="user-content-xatkit-understands-that-a-neural-network-is-not-always-the-ideal-solution-for-intent-matching" href="https://github.com/xatkit-bot-platform/xatkit-nlu-server#xatkit-understands-that-a-neural-network-is-not-always-the-ideal-solution-for-intent-matching" rel="noopener noreferrer"&gt;&lt;/a&gt;Xatkit understands that &lt;em&gt;a neural network is not always the ideal solution&lt;/em&gt; for intent matching&lt;/h3&gt;

&lt;p&gt;What if the user input text is full of words the NN has never seen before? It's safe to assume that we can directly determine there is no match and trigger a bot move to a &lt;a href="https://github.com/xatkit-bot-platform/xatkit/wiki/Default-and-Local-Fallback" rel="noopener noreferrer"&gt;default fallback&lt;/a&gt; state.&lt;/p&gt;

&lt;p&gt;Or what if the input text is a perfect literal match to one of the training sentences? Shouldn't we assume that's the intent to be returned with maximum confidence?&lt;/p&gt;

&lt;p&gt;These kinds of pragmatic decisions are at the core of Xatkit and make it a really useful chatbot-specific intent matching project.&lt;/p&gt;

&lt;h2&gt;Show me code!&lt;/h2&gt;

&lt;p&gt;At the core of the intent classifier we have &lt;strong&gt;a &lt;a href="https://keras.io/" rel="noopener noreferrer"&gt;Keras&lt;/a&gt; / &lt;a href="https://www.tensorflow.org/" rel="noopener noreferrer"&gt;Tensorflow&lt;/a&gt; model&lt;/strong&gt;. But before training the network and using it for prediction we need to make sure we &lt;strong&gt;process the training data&lt;/strong&gt; (the pairs of training sentences and intents in this case). And to make the NLP Engine useful we expose it through an &lt;strong&gt;easy-to-use REST API&lt;/strong&gt; that simplifies integrating it in external platforms or libraries.&lt;/p&gt;

&lt;p&gt;So, overall, the project structure is the following:&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2022%2F03%2Fprojectstructure.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2022%2F03%2Fprojectstructure.png" alt="NLU Server Structure of the python project" width="649" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;main.py&lt;/code&gt; holds the API definition (thanks to the &lt;a href="https://fastapi.tiangolo.com/" rel="noopener noreferrer"&gt;FastAPI&lt;/a&gt; framework). The &lt;code&gt;dsl&lt;/code&gt; package has the internal data structures storing the bot definition. The &lt;code&gt;dto&lt;/code&gt; package is a simplified version of the &lt;code&gt;dsl&lt;/code&gt; classes that facilitates the API calls. Finally, the &lt;code&gt;core&lt;/code&gt; package includes the configuration options and the core prediction and training functions.&lt;/p&gt;

&lt;h3&gt;Core Neural Network definition&lt;/h3&gt;

&lt;p&gt;At the core of the Xatkit NLU engine we have a &lt;a href="https://keras.io/" rel="noopener noreferrer"&gt;Keras&lt;/a&gt;/&lt;a href="https://www.tensorflow.org/" rel="noopener noreferrer"&gt;Tensorflow&lt;/a&gt; model.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The layers and parameters are rather standard for a classifier network. Two aspects are worth mentioning:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;The number of classes depends on the value of &lt;code&gt;len(context.intents)&lt;/code&gt;. We have as many classes as intents are defined in a given bot context.
    &lt;/li&gt;
&lt;li&gt;We use the &lt;a href="https://en.wikipedia.org/wiki/Sigmoid_function" rel="noopener noreferrer"&gt;&lt;em&gt;sigmoid&lt;/em&gt;&lt;/a&gt; function in the last layer, as the intents are not always mutually exclusive and we want to get the probability for each of them independently.&lt;/li&gt;
&lt;/ul&gt;
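
&lt;p&gt;The effect of that last design choice is easy to see with a toy numeric example: with the same logits, independent sigmoids can rate several intents as likely at once, while a softmax would force them to compete (a plain-Python sketch, not the actual engine code):&lt;/p&gt;

```python
import math

def sigmoid(logits):
    # One independent probability per intent
    return [1 / (1 + math.exp(-x)) for x in logits]

def softmax(logits):
    # Probabilities compete and must sum to 1
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.9, -3.0]   # two intents almost equally plausible
print(sigmoid(logits))  # first two scores both high (~0.88 and ~0.87)
print(softmax(logits))  # the same scores squeezed to ~0.52 and ~0.47
```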

&lt;h3&gt;Preparing the data for the training&lt;/h3&gt;

&lt;p&gt;Before training the chatbot using the above ML model, we need to prepare the data for training. &lt;/p&gt;

&lt;p&gt;Keep in mind that bots are defined as a set of contexts, where for each context we have a number of intents, and for each intent a number of training sentences.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;To get the training started, we take the training sentences and link them with their corresponding intents to create the labeled data for the bot.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Some details on the code listing above:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt; We assign a numeric value to each intent and we use that number when populating the &lt;code&gt; total_labels_training_sentences &lt;/code&gt; list
        &lt;/li&gt;
&lt;li&gt; We use a tokenizer to create a &lt;em&gt;word index&lt;/em&gt; for the words in the training sentences by calling &lt;code&gt;fit_on_texts&lt;/code&gt;. This word index is then used (via &lt;code&gt;texts_to_sequences&lt;/code&gt;) to transform words into their index value. At this point the training sentences are sequences of numeric values, which we call &lt;code&gt;training_sequences&lt;/code&gt;
       &lt;/li&gt;
&lt;li&gt; Padding ensures that all sequences have the same length. This length, as always, is part of the NLP configuration.
       &lt;/li&gt;
&lt;li&gt; The model is finally trained by calling the &lt;code&gt;fit&lt;/code&gt; method.
&lt;/li&gt;
&lt;/ul&gt;
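
&lt;p&gt;To make the steps above concrete, here is a rough pure-Python stand-in for the Keras tokenizer utilities (the real engine uses &lt;code&gt;Tokenizer.fit_on_texts&lt;/code&gt; and padding helpers; everything below is just illustrative):&lt;/p&gt;

```python
def build_word_index(sentences):
    # Mimics Tokenizer.fit_on_texts: index 1 is reserved for out-of-vocabulary words
    index = {"<OOV>": 1}
    for sentence in sentences:
        for word in sentence.lower().split():
            index.setdefault(word, len(index) + 1)
    return index

def texts_to_sequences(sentences, index):
    # Unknown words map to the OOV index (1)
    return [[index.get(w, 1) for w in s.lower().split()] for s in sentences]

def pad(sequences, length):
    # Truncate or right-pad with zeros so all sequences share one length
    return [(seq + [0] * length)[:length] for seq in sequences]

intents = {"greeting": ["hi there", "hello bot"],
           "goodbye": ["bye bye", "see you later"]}

# Assign a numeric label to each intent and collect the labeled sentences
sentences, labels = [], []
for label, (intent, examples) in enumerate(intents.items()):
    sentences += examples
    labels += [label] * len(examples)

word_index = build_word_index(sentences)
training_sequences = pad(texts_to_sequences(sentences, word_index), 4)
print(training_sequences, labels)
```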

&lt;h3&gt; Predicting the best intent matches &lt;/h3&gt;

&lt;p&gt;Once the ML model has been trained, prediction is straightforward. The only important thing to keep in mind is to process the user utterance with the same process we used to prepare the training data. Note that we also implement one of the optimizations discussed above to directly return a zero match prediction for sentences where none of the tokens has ever been seen by the model.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
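
&lt;p&gt;The zero-match shortcut mentioned above can be sketched in a few lines (a hypothetical helper, not the engine's actual implementation):&lt;/p&gt;

```python
def predict_with_shortcut(utterance, word_index, model_predict, num_intents):
    """Return per-intent confidences, short-circuiting when every token is unknown."""
    tokens = utterance.lower().split()
    if not any(token in word_index for token in tokens):
        # The network has never seen any of these words: report a zero match
        # so the bot can move to its default fallback state.
        return [0.0] * num_intents
    return model_predict(tokens)

word_index = {"hi": 2, "hello": 3, "bye": 4}
fake_model = lambda tokens: [0.9, 0.1]          # stand-in for the trained network
print(predict_with_shortcut("hola amigo", word_index, fake_model, 2))  # [0.0, 0.0]
print(predict_with_shortcut("hi friend", word_index, fake_model, 2))   # [0.9, 0.1]
```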


&lt;h3&gt; Exposing the chatbot intent classifier as a REST API &lt;/h3&gt;

&lt;p&gt;The &lt;code&gt; main.py &lt;/code&gt; module is in charge of exposing our FastAPI methods. As an example, this is the method for training a bot. It relies on &lt;a href="https://pydantic-docs.helpmanual.io/" rel="noopener noreferrer"&gt;Pydantic&lt;/a&gt; to facilitate the processing of the JSON input and output parameters. Parameter types are the &lt;code&gt;dto&lt;/code&gt; version of the &lt;code&gt;dsl&lt;/code&gt; classes.  &lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;Ready to give it a try?&lt;/h2&gt;

&lt;p&gt;Great. Go over to &lt;a href="https://github.com/xatkit-bot-platform/xatkit-nlu-server" rel="noopener noreferrer"&gt;https://github.com/xatkit-bot-platform/xatkit-nlu-server&lt;/a&gt; and follow the installation instructions. Even better if you watch/star it to follow our new developments. And even, even better if you decide to contribute to the project in any way, shape or form you can!&lt;/p&gt;

</description>
      <category>chatbot</category>
      <category>nlp</category>
      <category>tutorial</category>
      <category>python</category>
    </item>
    <item>
      <title>How to program a chatbot that reads all your website and answers questions based on its content</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Mon, 27 Dec 2021 07:24:49 +0000</pubDate>
      <link>https://forem.com/jcabot/how-to-program-a-chatbot-that-reads-all-your-website-and-answers-questions-based-on-its-content-1h3o</link>
      <guid>https://forem.com/jcabot/how-to-program-a-chatbot-that-reads-all-your-website-and-answers-questions-based-on-its-content-1h3o</guid>
      <description>&lt;p&gt;I'm sure you often get questions from your visitors and think "but this is already on the website!". You then added a chatbot to your site to filter out the &lt;a href="https://xatkit.com/pareto-principle-chatbot-intent-design/" rel="noopener noreferrer"&gt;most common questions&lt;/a&gt;. But what about all the rest. Adding more and more questions to the bot it takes time. So, is there an easy way to &lt;strong&gt;let the chatbot find the answer by itself&lt;/strong&gt; for those questions that are already answered somewhere in the (hundreds? thousands?) pages, posts or other online documents you've already published?&lt;/p&gt;

&lt;p&gt;Yes! The key is to plug &lt;a href="https://haystack.deepset.ai/" rel="noopener noreferrer"&gt;Haystack&lt;/a&gt; into your chatbot. There are several &lt;a href="https://huggingface.co/models?pipeline_tag=question-answering" rel="noopener noreferrer"&gt;pretrained language models fine-tuned for Question Answering&lt;/a&gt; that can be used to find an answer in a text, but they only work well when the text is really short. This is where Haystack comes into play. The Haystack architecture (see the featured image above) proposes a two-phase process:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;A &lt;em&gt;Retriever&lt;/em&gt; selects a set of candidate documents from all the available information. Among other options, we can rely on &lt;a href="https://www.elastic.co/" rel="noopener noreferrer"&gt;ElasticSearch&lt;/a&gt; to index the documents and return those that most likely contain the answer to the question&lt;/li&gt;
    &lt;li&gt;A &lt;em&gt;Reader &lt;/em&gt;applies state-of-the-art QA models to try to infer an answer from each candidate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now we just need to return these inferred solutions together with additional information (context, confidence level,...) to help our users understand why we think this is the answer they were looking for. All thanks to a completely &lt;a href="https://github.com/deepset-ai/haystack" rel="noopener noreferrer"&gt;free and open source NLP framework!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's see how we can benefit from Haystack by creating a chatbot able to answer software design questions for the &lt;a href="https://wordpress.org/" rel="noopener noreferrer"&gt;WordPress&lt;/a&gt; website &lt;a href="https://modeling-languages.com/" rel="noopener noreferrer"&gt;modeling-languages.com&lt;/a&gt;. The Haystack website is full of useful examples, so we'll adapt them to our scenario.&lt;/p&gt;

&lt;h2&gt;Creating the chatbot, the "front-end"&lt;/h2&gt;

&lt;p&gt;The easiest part is to create the chatbot. We'll obviously use &lt;a href="https://github.com/xatkit-bot-platform/xatkit" rel="noopener noreferrer"&gt;Xatkit&lt;/a&gt; for this. The bot can have as many intents as you wish. The only part that we care about here is the &lt;a href="https://xatkit.com/define-meaningful-fallbacks-for-your-chatbot/" rel="noopener noreferrer"&gt;&lt;em&gt;default fallback &lt;/em&gt;&lt;/a&gt;state. Here, instead of saying something useless, e.g. "sorry I didn't get your question, can you rephrase it and try again?", we will ask Haystack to find us a solution.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
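
&lt;p&gt;Conceptually, the fallback state just hands the unmatched utterance over to the question-answering service. Here is a schematic Python stand-in (the real bot does this through Xatkit's fallback mechanism and an HTTP call to the Haystack endpoint; all names below are illustrative, including the 0.5 confidence threshold):&lt;/p&gt;

```python
def default_fallback(utterance, qa_service):
    """Instead of apologizing, ask the QA service for an answer."""
    result = qa_service(utterance)
    if result and result["confidence"] > 0.5:
        # Return the answer together with the source URL as context
        return f"{result['answer']} (source: {result['url']})"
    return "Sorry, I couldn't find an answer to that on the website."

# Stub playing the role of the Haystack endpoint
def fake_qa(question):
    return {"answer": "Use OCL to add constraints to your models.",
            "confidence": 0.8,
            "url": "https://modeling-languages.com/ocl-tutorial/"}

print(default_fallback("How do I add business rules to a model?", fake_qa))
```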


&lt;p&gt;Here all the pieces are deployed locally, but obviously each of them could live on a different server.&lt;/p&gt;

&lt;h2&gt;Loading the information to ElasticSearch&lt;/h2&gt;

&lt;p&gt;Before we can find an answer, we first need to load into ElasticSearch the documents we want to use as the information source for the bot. In this example, the documents will be all posts published on modeling-languages. We assume we have direct access to the WordPress database, but otherwise we could write something similar using the &lt;a href="https://developer.wordpress.org/rest-api/" rel="noopener noreferrer"&gt;WordPress REST API&lt;/a&gt; instead. Note that we split each post into different paragraphs to avoid chunks of text that could be too much for the QA models.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
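
&lt;p&gt;The paragraph splitting itself can be as simple as the following sketch (the document dictionary layout roughly follows Haystack's conventions, but the exact field names depend on the Haystack version; the WordPress access is omitted):&lt;/p&gt;

```python
def post_to_documents(title, url, body):
    """Split one post into per-paragraph documents ready for indexing."""
    paragraphs = [p.strip() for p in body.split("\n\n") if p.strip()]
    return [{"content": p, "meta": {"title": title, "url": url}}
            for p in paragraphs]

docs = post_to_documents(
    "What is OCL?",
    "https://modeling-languages.com/ocl-tutorial/",
    "OCL is a constraint language.\n\nIt complements UML models.")
print(len(docs))   # 2: one document per paragraph
```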


&lt;p&gt;I deployed a &lt;a href="https://flask.palletsprojects.com/en/2.0.x/" rel="noopener noreferrer"&gt;Flask&lt;/a&gt; server to facilitate calling all the endpoints on demand, especially those the bot needs to interact with.&lt;/p&gt;

&lt;h2&gt;Finding the answer&lt;/h2&gt;

&lt;p&gt;The &lt;em&gt;Retriever&lt;/em&gt; component will look for the most promising documents in ElasticSearch (by default, using the &lt;a href="https://www.elastic.co/blog/practical-bm25-part-2-the-bm25-algorithm-and-its-variables" rel="noopener noreferrer"&gt;BM25 algorithm&lt;/a&gt; but there are other options). The &lt;em&gt;Reader&lt;/em&gt; will look into each candidate and try to find the right answer in it. Thanks to the predefined &lt;a href="https://haystack.deepset.ai/reference/pipelines" rel="noopener noreferrer"&gt;Pipelines&lt;/a&gt; provided by Haystack, putting everything together is really easy:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
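
&lt;p&gt;The two-phase idea itself fits in a few lines. Below is a toy version where a naive term-overlap scorer stands in for BM25 and a stub plays the Reader's role (the real pipeline uses Haystack's ElasticSearch retriever and a transformer-based Reader):&lt;/p&gt;

```python
def retrieve(question, documents, top_k=2):
    """Toy Retriever: rank documents by term overlap (BM25 stand-in)."""
    q_terms = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def read(question, candidates, reader_fn):
    """Toy Reader: apply a QA model to each candidate and keep the best answer."""
    answers = [reader_fn(question, doc) for doc in candidates]
    return max(answers, key=lambda a: a["confidence"])

docs = ["OCL lets you write constraints on UML models.",
        "WordPress is a popular CMS.",
        "Chatbots answer user questions automatically."]
# Stub reader: confidence is just the raw word overlap with the question
stub_reader = lambda q, d: {"answer": d,
                            "confidence": len(set(q.split()) & set(d.split()))}
question = "How do I write constraints on models?"
best = read(question, retrieve(question, docs), stub_reader)
print(best["answer"])
```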


&lt;p&gt;Once we have the answer, we create the response object that will be sent back to the chatbot. As a final step the chatbot will print this response to the user together with the URL of the post the answer comes from. This way, even if the answer is not perfect the user will have the option to go to the suggested URL.&lt;/p&gt;

&lt;h2&gt;But, does it work?&lt;/h2&gt;

&lt;p&gt;We've seen it's feasible to add the Haystack infrastructure to a chatbot. But what about the quality of the answers? Are they good enough?&lt;/p&gt;

&lt;p&gt;The answer is that it does work reasonably well. The modeling-languages website was not the easiest one to try it on. It's rather large (over 1000 posts that translate into around 8000 documents) with significant overlap. And there are still some "legacy" posts in Spanish that add to the confusion.&lt;/p&gt;

&lt;p&gt;Let's see a couple of examples. In the first one I ask how to add a business rule (i.e. a constraint) to a software design model. The first answer is technically correct (indeed, constraints are written on top of models) but rather useless. The next two are exactly the answers I was hoping to see, as they suggest using the Object Constraint Language to specify my constraints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2021%2F12%2FOCLExample.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2021%2F12%2FOCLExample.png" alt="Question Answering example on a WordPress site" width="375" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second question is more concrete but has a more open answer. Note that all answers are taken from the same document (the only one that really talks about this Temporal EMF tool). All answers are reasonable but the third one really nails it. And keep in mind that we're using an extractive QA model, meaning that the model aims to return a subset of the text containing the answer. Instead, a generative QA model (also available in Haystack) would be able to "build" the answer from partial answers, potentially spread out over more than one document.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2021%2F12%2FtemporalExample.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2021%2F12%2FtemporalExample.png" alt="QA haystack example with a more concrete question" width="374" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In terms of &lt;strong&gt;performance, results were very satisfactory&lt;/strong&gt;. The whole process took just a few seconds (after the initial model loading), and this is on my poor laptop. With proper configuration and tuning, the user should not notice a major delay. &lt;a href="https://haystack.deepset.ai/guides/rest-api" rel="noopener noreferrer"&gt;Haystack itself can also be deployed as a REST API&lt;/a&gt;, which should further optimize the whole process. And of course, you could always let the bot designer configure whether to use Haystack in the default fallback or not, depending on a number of factors.&lt;/p&gt;

</description>
      <category>haystack</category>
      <category>chatbot</category>
      <category>qa</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Choosing Java as your language for a Machine Learning project - Are we crazy???</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Thu, 04 Nov 2021 11:13:26 +0000</pubDate>
      <link>https://forem.com/jcabot/choosing-java-as-your-language-for-a-machine-learning-project-are-we-crazy-454l</link>
      <guid>https://forem.com/jcabot/choosing-java-as-your-language-for-a-machine-learning-project-are-we-crazy-454l</guid>
      <description>&lt;p&gt;Most people are stunned when they realize that the &lt;a href="https://github.com/xatkit-bot-platform" rel="noopener noreferrer"&gt;Xatkit bot engine is written in Java.&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;True, the vast majority of AI / Machine Learning projects are written in Python. But this doesn't mean that you should go with Python when starting your own project. And don't worry, this is &lt;strong&gt;not a post about language wars&lt;/strong&gt;. I don't pretend to say that Java is better than Python (nor the other way round, for that matter). I'm just explaining our language choice. And suggesting that you should take into account many aspects when choosing the base language for your next project.&lt;/p&gt;

&lt;p&gt;Let's see why Java is a good choice for Machine Learning projects, or at least as good a choice as many others:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;Machine Learning is only a small part of your project&lt;/strong&gt;. Most of your code will NOT be about ML tasks but about data input/output, user interface, interaction with external services,... so the language needs to be good at all these things as well.  This is &lt;a href="https://xatkit.com/the-software-challenges-of-building-smart-chatbots/" rel="noopener noreferrer"&gt;especially true in the case of chatbots&lt;/a&gt; that, to begin with, need to interact with different user input platforms.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;There are ML libraries available for every language&lt;/strong&gt;. So there is always a way to execute/train your neural networks outside the Python world. For instance, in Xatkit, we reuse &lt;a href="https://stanfordnlp.github.io/CoreNLP/usage.html" rel="noopener noreferrer"&gt;Stanford's CoreNLP&lt;/a&gt; models in some of our &lt;a href="https://github.com/xatkit-bot-platform/xatkit/wiki/Processors" rel="noopener noreferrer"&gt;language processors&lt;/a&gt;. And, if needed, there is always the option to wrap the ML model code in a Python server (I like the simplicity of &lt;a href="https://palletsprojects.com/p/flask/" rel="noopener noreferrer"&gt;Flask&lt;/a&gt; for this) and consume the models via API calls to this server.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Java is heavily used in the enterprise world&lt;/strong&gt;. So while core ML fans may frown at our language choice, enterprise users may see Java as a benefit: they already know how to manage and deploy Java-based applications, whereas they may not have the same experience with Python or other languages.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;We are Java "experts"&lt;/strong&gt;. We are much more productive coding in Java than in any other language. Of course, we could become proficient in Python if we put in the time, but time is precious and it made sense to stick to the language we were already using in other projects.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Xatkit is a model-based tool. &lt;/strong&gt;By model, I refer here to &lt;a href="https://modeling-languages.com/clarifying-concepts-mbe-vs-mde-vs-mdd-vs-mda/" rel="noopener noreferrer"&gt;software design models&lt;/a&gt;, not ML ones. And in the modeling ecosystem, Java is still the boss. In particular, Xatkit reuses some &lt;a href="https://www.eclipse.org/modeling/emf/" rel="noopener noreferrer"&gt;EMF&lt;/a&gt; libraries, mostly to do some reflection on the bot definition at runtime. For sure, there are other ways to accomplish the same goal, but you can see this as a legacy decision from before &lt;a href="https://xatkit.com/fluent-interface-building-chatbots-bots/" rel="noopener noreferrer"&gt;Xatkit embraced Fluent APIs&lt;/a&gt; for the bot definition.&lt;/li&gt;
&lt;/ul&gt;
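&lt;p&gt;To illustrate the "wrap the model in a Python server" option from the list above, here is a minimal sketch of the Java side, using only the standard &lt;code&gt;java.net.http&lt;/code&gt; client. The &lt;code&gt;/predict&lt;/code&gt; endpoint and the JSON payload shape are assumptions made up for this example, not part of any real Xatkit or Flask API:&lt;/p&gt;

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ModelClient {

    // Hypothetical endpoint of a small Flask server wrapping the Python model.
    private final URI endpoint;

    public ModelClient(String baseUrl) {
        this.endpoint = URI.create(baseUrl + "/predict");
    }

    // Builds a POST request carrying the user utterance as a JSON payload.
    public HttpRequest buildRequest(String utterance) {
        String json = "{\"text\": \"" + utterance.replace("\"", "\\\"") + "\"}";
        return HttpRequest.newBuilder(endpoint)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    // Sends the request and returns the raw JSON reply from the model server.
    public String predict(String utterance) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        var response = client.send(buildRequest(utterance), HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```

&lt;p&gt;On the Python side, a ten-line Flask app exposing that endpoint closes the loop, and the Java code never needs to know a neural network is involved.&lt;/p&gt;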

&lt;p&gt;As you can see, Java should perhaps not be your first option when getting started in AI technologies if there is &lt;em&gt;really&lt;/em&gt; no constraint at all on your language choice. Otherwise, the choice of a language is more of a social/team/organization decision that should take into account many other aspects (team knowledge, organization architecture, integration needs,...). We see developers arguing non-stop about why language A is better than language B, but for most projects, even those including some kind of intelligent component, any major language will work, and that choice will NOT be the core element in the project's success.&lt;/p&gt;

&lt;p&gt;So, forgive me if &lt;a href="https://xatkit.com/" rel="noopener noreferrer"&gt;we&lt;/a&gt; continue developing bots in Java :-)&lt;/p&gt;

</description>
      <category>java</category>
      <category>chatbot</category>
      <category>machinelearning</category>
      <category>language</category>
    </item>
    <item>
      <title>Beyond no-code: no-learn and no-work development</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Sun, 18 Apr 2021 06:29:59 +0000</pubDate>
      <link>https://forem.com/jcabot/beyond-no-code-no-learn-and-no-work-development-29bm</link>
      <guid>https://forem.com/jcabot/beyond-no-code-no-learn-and-no-work-development-29bm</guid>
      <description>&lt;p&gt;No-code solutions aim to empower non-technical people to create their own software solutions with zero need to code anything at all. The term &lt;strong&gt;no-code is sometimes used as a slight variation of low-code&lt;/strong&gt;. In fact, we can often see tools defining themselves as &lt;em&gt;no-code/low-code&lt;/em&gt; tools. All big &lt;a href="https://modeling-languages.com/big-five-bet-modeling-low-code/" rel="noopener noreferrer"&gt;companies are buying into it&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I discussed before the &lt;a href="https://modeling-languages.com/low-code-vs-model-driven/" rel="noopener noreferrer"&gt;difference between the low-code and no-code concepts&lt;/a&gt;. Long story short, no-code really means no code at all, which limits the variety of applications you can build. We are basically looking at template-based frameworks or the creation of workflows mixing predefined connectors to external applications, where the designers, at most, decide when and how certain actions should be triggered. This is a reasonable trade-off if you don't want to (or don't have the time to) learn to code. And the number of different applications you could create is still huge, with an &lt;a href="https://www.nocode.tech/" rel="noopener noreferrer"&gt;increasing number of no-code solutions&lt;/a&gt; ready to help, no matter the type of application you have in mind. But &lt;strong&gt;no-code doesn't mean that you'll be able to create your dream application in no time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I want to highlight two subcategories of no-code applications: no-learn and no-work development tools. The following image describes the relationship between these categories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmodeling-languages.com%2Fwp-content%2Fuploads%2F2021%2F04%2Fimg_6075211ca85d3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmodeling-languages.com%2Fwp-content%2Fuploads%2F2021%2F04%2Fimg_6075211ca85d3.png" alt="No-code vs no-learn vs no-work" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;No-Learn software development&lt;/h2&gt;

&lt;p&gt;No-learn development tools correspond to the set of no-code frameworks that let the users employ their own tools to define the software to be built. Instead of the user having to learn to use the no-code tool and &lt;a href="https://modeling-languages.com/nocode-lowcode-modeling/" rel="noopener noreferrer"&gt;the (graphical) language behind it,&lt;/a&gt; the no-code tool is able to read and import the software definition from Word, Excel or whatever other tool users in that domain typically use.&lt;/p&gt;

&lt;p&gt;An example would be our &lt;a href="https://xatkit.com/spreadsheet-excel-to-chatbot/" rel="noopener noreferrer"&gt;Excel to chatbot service&lt;/a&gt;. Instead of forcing users to learn a new interface and language to define chatbots, we offer them an Excel template that lets them define bots in a tool they already know. No matter how great our no-code tool is, if it's a new tool, the user will need to invest time in learning it. Better to eliminate this barrier to entry.&lt;/p&gt;

&lt;p&gt;A no-learn tool can still have the same expressiveness as a general no-code one. We're not limiting what users can do with it, just changing the language in which that no-code project will be defined.&lt;/p&gt;

&lt;h2&gt;No-Work software development&lt;/h2&gt;

&lt;p&gt;No-work is an extreme case of no-code development where the user does nothing. Users do not code but neither do they define the application to be built. Not within the no-code tool interface, nor with their own tool (no-learn category above). They do nothing. Instead, the no-code tool takes whatever information is already available in the company (in documents, files or databases) to automatically derive a software application from them.&lt;/p&gt;

&lt;p&gt;Continuing with our chatbot definition example: in a no-work setting, we would not even ask users to fill in our Excel template to specify the bot behaviour; we would just take their FAQ website, an existing Excel file (e.g. with a list of questions they prepared to homogenize their client support team's answers) or a list of products in a database, and generate the chatbot from there. In our interviews with potential chatbot clients, they often tell us a variation of the sentence "we already have an Excel file we use internally, we just want to give you that and get a bot that is able to answer the questions in our file".&lt;/p&gt;

&lt;p&gt;This completely eliminates all barriers to entry. As a trade-off, no-work approaches further restrict the diversity of applications that can be built, as the no-code tool needs to rely heavily on &lt;a href="https://modeling-languages.com/convention-configuration-key-selling-point-mde/" rel="noopener noreferrer"&gt;convention over configuration&lt;/a&gt; strategies in order to be able to derive a valid application from the client's input files.&lt;/p&gt;

&lt;p&gt;Still, this approach is less radical than it seems. Just think about all the &lt;a href="https://modeling-languages.com/umltosql-umltosymfonyphp-and-umltodjangopython-are-now-open-source/" rel="noopener noreferrer"&gt;CRUD generators&lt;/a&gt; available on any modern programming framework. Any of them is able to connect to a relational database and immediately generate all the forms required to access and manipulate the database data.&lt;/p&gt;

&lt;h2&gt;No-code categories are not exclusive&lt;/h2&gt;

&lt;p&gt;Each of the categories above targets a different type of user/scenario. Depending on how much the user wants to learn/work, you can offer them a specific type of tool, each one with its trade-offs.&lt;/p&gt;

&lt;p&gt;But this doesn't mean your no-code tool needs to stick to one specific category. As we do in &lt;a href="https://xatkit.com/" rel="noopener noreferrer"&gt;Xatkit&lt;/a&gt;, you can offer different interfaces/importers on top of the same engine. You can even offer a low-code version for advanced users willing to use your tool's API to complement the result of the no-code approach.&lt;/p&gt;

</description>
      <category>nocode</category>
      <category>lowcode</category>
      <category>nowork</category>
      <category>engineering</category>
    </item>
    <item>
      <title>“Hello World”, chatbot version – Complete example</title>
      <dc:creator>Jordi Cabot</dc:creator>
      <pubDate>Mon, 23 Nov 2020 10:48:51 +0000</pubDate>
      <link>https://forem.com/jcabot/hello-world-chatbot-version-complete-example-468</link>
      <guid>https://forem.com/jcabot/hello-world-chatbot-version-complete-example-468</guid>
      <description>&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/%22Hello,_World!%22_program" rel="noopener noreferrer"&gt;Hello World&lt;/a&gt; program is the typical first example you see when learning any programming language since it was first used in a tutorial to learn B (predecessor of the C language) &lt;a href="https://blog.hackerrank.com/the-history-of-hello-world/" rel="noopener noreferrer"&gt;in 1973&lt;/a&gt;.  It is often the first program written by people learning to code. Its success resides in its simplicity. Writing its code is very simple in most programming languages. It's also used as a &lt;a href="https://en.wikipedia.org/wiki/Sanity_check" rel="noopener noreferrer"&gt;sanity test&lt;/a&gt; to make sure the editor, compiler,... is properly installed and configured. For these same reasons, it makes sense to have &lt;strong&gt;a "Hello World" version for chatbots&lt;/strong&gt;. Such bot could be defined as follows:&lt;/p&gt;

&lt;blockquote&gt; A &lt;em&gt;Hello World&lt;/em&gt; chatbot is a chatbot that replies "Hello World" every time the user greets the bot&lt;/blockquote&gt;

&lt;p&gt;So, something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2020%2F11%2Fhelloworld.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2020%2F11%2Fhelloworld.gif" alt="Hello World chatbot" width="461" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While this chatbot is indeed simple (compared with any other chatbot), it's much more deceptive than its Hello World counterparts for programming languages. That's because of the &lt;strong&gt;&lt;a href="https://en.wikipedia.org/wiki/Essential_complexity" rel="noopener noreferrer"&gt;essential complexity&lt;/a&gt; of chatbot development.&lt;/strong&gt; Even the simplest chatbot is a complex system that needs to interact with communication channels (on the "front-end") and the Text Processing / NLP engine (on the "back-end"), among, potentially, other external services. Clearly, creating and deploying a Hello World chatbot is not exactly your typical Hello World exercise.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2020%2F11%2Fchatbots_complex_systems.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxatkit.com%2Fwp-content%2Fuploads%2F2020%2F11%2Fchatbots_complex_systems.png" alt="Chatbots are complex systems" width="800" height="352"&gt;&lt;/a&gt; Chatbots are complex systems&lt;/p&gt;

&lt;p&gt;But don't be scared, let me show you how to build your first chatbot with our open-source platform &lt;a href="https://xatkit.com/" rel="noopener noreferrer"&gt;Xatkit&lt;/a&gt;. Our &lt;a href="https://xatkit.com/fluent-interface-building-chatbots-bots/" rel="noopener noreferrer"&gt;Fluent API&lt;/a&gt; will help you to create and assemble the different parts of the chatbot. Let's see the chatbot code you need to write.&lt;/p&gt;

&lt;h2&gt;Recognizing when the user says "Hi"&lt;/h2&gt;

&lt;p&gt;The chatbot needs to detect when the user is greeting it. This is the only intention we need to care about. So it's enough to define a single &lt;em&gt;Intent &lt;/em&gt;with a few training sentences. Any NLP Provider (e.g. &lt;a href="https://dialogflow.com/" rel="noopener noreferrer"&gt;DialogFlow&lt;/a&gt; or &lt;a href="https://xatkit.com/chatbots-catalan-language-nlpjs-xatkit/" rel="noopener noreferrer"&gt;nlp.js&lt;/a&gt;) would do a good job with this simple intent.&lt;/p&gt;


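&lt;p&gt;The gist embed with the intent definition did not survive in this feed. As a rough, dependency-free sketch of the idea (an intent is just a name plus training sentences; the names below are illustrative, not the actual Xatkit Fluent API):&lt;/p&gt;

```java
public class Greetings {

    // An intent is essentially a name plus a set of training sentences.
    static final String NAME = "Greetings";
    static final String[] TRAINING = { "Hi", "Hello", "Hey there" };

    // A real NLP provider generalizes beyond the literal training sentences;
    // this toy matcher only does an exact, case-insensitive comparison.
    public static boolean matches(String utterance) {
        for (String s : TRAINING) {
            if (s.equalsIgnoreCase(utterance.trim())) {
                return true;
            }
        }
        return false;
    }
}
```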


&lt;h2&gt;Replying Hello World&lt;/h2&gt;

&lt;p&gt;To process the user's greeting, we need at least one state that replies by printing the "Hello World" text. But to keep the bot in a loop (who knows, maybe many users want to say Hi!), we'll use a couple of them.&lt;/p&gt;


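&lt;p&gt;This gist is also missing from the feed. Conceptually, the bot sits in a waiting state and, on a recognized greeting, replies and loops back. A minimal dependency-free sketch (again with illustrative names, not the actual Xatkit API):&lt;/p&gt;

```java
public class HelloWorldBot {

    public enum State { AWAITING_INPUT, HANDLE_GREETING }

    private State state = State.AWAITING_INPUT;

    // On a recognized Greetings intent: reply, then loop back to the
    // waiting state so the user can greet the bot again.
    public String onGreeting() {
        state = State.HANDLE_GREETING;
        String reply = "Hello World";
        state = State.AWAITING_INPUT; // ready for the next greeting
        return reply;
    }

    public State currentState() {
        return state;
    }
}
```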


&lt;h2&gt;Configuring the chatbot&lt;/h2&gt;

&lt;p&gt;As we mentioned above, chatbots come with some inherent essential complexity. At the very least, they need to wait and listen to the user on some channel and then reply to the same channel. In Xatkit, we use the concept of Platform for this. In the code below, we indicate that the bot is displayed as a widget on a webpage and that it will get both events (e.g. the page loaded event) and user utterances via this platform.&lt;/p&gt;


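&lt;p&gt;The configuration gist did not carry over either. It boils down to wiring the bot to a platform; a hypothetical properties-style sketch (the keys below are illustrative, check the Xatkit documentation for the real ones):&lt;/p&gt;

```
# Hypothetical keys; consult the Xatkit docs for the actual property names.
# Render the bot as a widget on a webpage:
xatkit.platform = react
# Port the web widget connects to:
xatkit.server.port = 5000
# NLP provider used to match intents:
xatkit.nlp.provider = dialogflow
```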


&lt;p&gt;And this is basically all you need for your Hello World chatbot! Feel free to clone our &lt;a href="https://github.com/xatkit-bot-platform/xatkit-bot-template" rel="noopener noreferrer"&gt;Xatkit bot template&lt;/a&gt; to get a Greetings Bot ready to use and play with.&lt;/p&gt;

&lt;p&gt;Of course, this is a very simple Hello World chatbot (e.g. what if the user says something other than Hi?), but I think it's the closest we can get to the Hello World equivalent you're so used to seeing for other languages. Remember you can head to our main &lt;a href="https://github.com/xatkit-bot-platform/xatkit" rel="noopener noreferrer"&gt;GitHub Repo for more details on Xatkit&lt;/a&gt; or check some of our &lt;a href="https://github.com/xatkit-bot-platform/xatkit-examples" rel="noopener noreferrer"&gt;other bot examples&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>helloworld</category>
      <category>chatbots</category>
      <category>example</category>
      <category>java</category>
    </item>
  </channel>
</rss>
