<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: eylonmiz</title>
    <description>The latest articles on Forem by eylonmiz (@eylonmiz).</description>
    <link>https://forem.com/eylonmiz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F893319%2Fd3fe8acf-0327-4195-a20c-3257132791d2.jpg</url>
      <title>Forem: eylonmiz</title>
      <link>https://forem.com/eylonmiz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/eylonmiz"/>
    <language>en</language>
    <item>
      <title>The 5 Pillars for taking LLM to production</title>
      <dc:creator>eylonmiz</dc:creator>
      <pubDate>Fri, 01 Sep 2023 11:59:13 +0000</pubDate>
      <link>https://forem.com/pezzo/the-5-pillars-for-taking-llm-to-production-1olg</link>
      <guid>https://forem.com/pezzo/the-5-pillars-for-taking-llm-to-production-1olg</guid>
      <description>&lt;p&gt;LLMs (Large Language Models) have tremendous potential to enable new types of AI applications. The truth is, turning simple prototypes into robust, production-ready applications is quite challenging.&lt;/p&gt;

&lt;p&gt;We've been supporting dozens of companies in bringing applications to production, and we're excited to share our learnings with you.&lt;/p&gt;

&lt;h2&gt;The Pillars&lt;/h2&gt;

&lt;p&gt;When building LLM applications for production use, certain capabilities rise above the rest in importance.&lt;/p&gt;

&lt;h3&gt;Prompt Engineering&lt;/h3&gt;

&lt;p&gt;Carefully crafted prompts are key to achieving reliable performance from LLMs. Think about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where and how to store your prompts for quick iteration.&lt;/li&gt;
&lt;li&gt;Ability to experiment with prompts (A/B testing, user segmentation).&lt;/li&gt;
&lt;li&gt;Collaboration. Stakeholders can contribute immensely to prompt engineering.&lt;/li&gt;
&lt;/ul&gt;
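
&lt;p&gt;As a minimal sketch of the "quick iteration" point, here is a hypothetical in-memory prompt store with versioning; the class and method names are illustrative, not any real product's API:&lt;/p&gt;

```python
# Minimal in-memory prompt store with versioning (illustrative sketch).
class PromptStore:
    def __init__(self):
        self._versions = {}  # name -> list of template strings

    def publish(self, name, template):
        self._versions.setdefault(name, []).append(template)
        return len(self._versions[name])  # 1-based version number

    def get(self, name, version=None):
        history = self._versions[name]
        if version is None:
            return history[-1]  # latest published version
        return history[version - 1]

store = PromptStore()
store.publish("summarize", "Summarize the following text: {text}")
v2 = store.publish("summarize", "Summarize in one sentence: {text}")
prompt = store.get("summarize").format(text="LLMs in production")
```

&lt;p&gt;A real store would live behind an API so prompts can change without a redeploy, but the shape of the problem is the same.&lt;/p&gt;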

&lt;h3&gt;Observability&lt;/h3&gt;

&lt;p&gt;We've all been there - we designed prompts that worked well with test data, then went live and disaster struck. Poor handling of LLMs leads to long waiting times, inappropriate responses, missing context, and more. That translates to a bad user experience, which can hurt your brand and increase churn.&lt;/p&gt;

&lt;p&gt;It's important to carefully think about the observability and monitoring aspects of your LLM operations, and have the ability to quickly identify issues and troubleshoot them. Think about tracing, the ability to track an entire conversation, replay it and improve it over time. Consider anomaly detection as well as emerging trends.&lt;/p&gt;
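
&lt;p&gt;The tracing idea above can be sketched as a thin wrapper that records every call; the model here is a stub so the example runs offline:&lt;/p&gt;

```python
import time, uuid

# Wrap an LLM call so every request is traced: latency, prompt, response,
# and a conversation id that lets you replay a whole session later.
def traced_call(model_fn, conversation_id, prompt, log):
    start = time.monotonic()
    response = model_fn(prompt)
    log.append({
        "conversation_id": conversation_id,
        "prompt": prompt,
        "response": response,
        "latency_ms": (time.monotonic() - start) * 1000,
    })
    return response

# Stub model so the sketch runs without an API key.
def fake_model(prompt):
    return f"echo: {prompt}"

log = []
cid = str(uuid.uuid4())
traced_call(fake_model, cid, "Hello", log)
traced_call(fake_model, cid, "Follow-up", log)
```

&lt;p&gt;Grouping log entries by conversation id is what makes replay and anomaly detection possible later.&lt;/p&gt;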

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt;&lt;br&gt;
  It's important to know "what good looks like". Having the ability to mark good (e.g. converting) LLM responses versus bad (e.g. churn) will really pay off in the long run.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Cost Control&lt;/h3&gt;

&lt;p&gt;LLM API costs can quickly spiral out of control. It's important to be prepared and budget accordingly. That said, we've seen cases where a trivial parameter change increased costs by 25% overnight.&lt;/p&gt;

&lt;p&gt;Granular tracking of API usage and billing helps identify expensive calls. With detailed visibility into LLM costs, you can set custom budgets and alerts to proactively manage spend. By analyzing logs and performance data, expensive queries using excessive tokens can be identified and reworked to be more efficient. With rigorous cost management tools, LLM costs can be predictable and optimized.&lt;/p&gt;
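
&lt;p&gt;A minimal sketch of per-call cost tracking follows; the per-1K-token prices are illustrative placeholders, so check your provider's current price sheet:&lt;/p&gt;

```python
# Per-call cost tracking (sketch). Prices below are illustrative
# placeholders, not current vendor pricing.
PRICE_PER_1K_TOKENS = {
    "gpt-4": {"input": 0.03, "output": 0.06},
    "gpt-3.5-turbo": {"input": 0.0015, "output": 0.002},
}

def call_cost(model, input_tokens, output_tokens):
    p = PRICE_PER_1K_TOKENS[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1000

def total_spend(usage_log):
    return sum(call_cost(u["model"], u["in"], u["out"]) for u in usage_log)

usage = [
    {"model": "gpt-4", "in": 500, "out": 200},
    {"model": "gpt-3.5-turbo", "in": 500, "out": 200},
]
```

&lt;p&gt;Once every call is priced this way, budgets and alerts become simple threshold checks over the usage log.&lt;/p&gt;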

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt;&lt;br&gt;
You often find yourself chaining multiple calls. Think about the models in use. Do you really need GPT-4 for everything? If you can, reserve GPT-4 for scoring, labeling, and classification, where the output is short. This will save you plenty of money. When you need to generate long responses, GPT-3.5-Turbo may be more appropriate from a cost perspective.&lt;/p&gt;
&lt;/blockquote&gt;
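
&lt;p&gt;The tip above amounts to a simple routing rule; the model names and task labels here are examples, not recommendations for every workload:&lt;/p&gt;

```python
# Route calls by task type: short scoring/labeling/classification tasks
# can justify an expensive model, long generations go to a cheaper one.
EXPENSIVE_MODEL = "gpt-4"
CHEAP_MODEL = "gpt-3.5-turbo"
SHORT_OUTPUT_TASKS = {"score", "label", "classify"}

def pick_model(task_type):
    if task_type in SHORT_OUTPUT_TASKS:
        return EXPENSIVE_MODEL
    return CHEAP_MODEL
```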

&lt;h3&gt;Evaluation&lt;/h3&gt;

&lt;p&gt;Rigorous evaluation using datasets and metrics is key for reliability when building LLM applications. With a centralized dataset store, relevant datasets can be easily logged from real application queries and used to frequently evaluate production models. Built-in integration with open source evaluation libraries makes it simple to assess critical metrics like accuracy, response consistency, and more.&lt;/p&gt;

&lt;p&gt;Evaluation frameworks help you efficiently validate new prompts, chains, and workflows before deploying them to production. Ongoing evaluation using real user data helps identify areas for improvement and ensures optimal performance over time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt;&lt;br&gt;
Evaluation doesn't have to be complicated. You can sample X% of your LLM responses and run them through another, simple scoring prompt. Over time, this will give you valuable data.&lt;/p&gt;
&lt;/blockquote&gt;
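
&lt;p&gt;Here is a hedged sketch of that sampling approach; the grader is a stand-in for a real scoring prompt, so the example runs offline:&lt;/p&gt;

```python
import random

# Sample a fraction of responses and score them with a second, simple
# grading step (stubbed here; a real grader would call an LLM).
def grade(response):
    # Placeholder heuristic standing in for a scoring prompt.
    return 1.0 if "refund" in response else 0.5

def sample_and_score(responses, fraction, seed=42):
    rng = random.Random(seed)
    k = max(1, int(len(responses) * fraction))
    sample = rng.sample(responses, k)
    return sum(grade(r) for r in sample) / k

responses = ["we issued a refund", "please wait", "refund processed", "hello"]
avg = sample_and_score(responses, 0.5)
```

&lt;p&gt;Even a crude average like this, tracked over time, shows whether a prompt change helped or hurt.&lt;/p&gt;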

&lt;h3&gt;Training&lt;/h3&gt;

&lt;p&gt;There's a limit to what you can do with off-the-shelf models. If using LLMs becomes an important aspect of your operations, you'll likely resort to fine-tuning at some point.&lt;/p&gt;

&lt;p&gt;With integrated data pipelines, real user queries can be efficiently logged and processed into clean training datasets. These datasets power ongoing learning: models can be fine-tuned to better handle terminology and scenarios unique to your business use case.&lt;/p&gt;

&lt;p&gt;Invest in tooling to generate datasets and fine-tune models early to ensure LLMs deliver maximum value by keeping them closely aligned with evolving business needs.&lt;/p&gt;
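
&lt;p&gt;A minimal sketch of turning logged query/response pairs into a training file; the record shape follows common chat fine-tuning formats, but verify the exact schema against your provider's documentation:&lt;/p&gt;

```python
import json

# Turn logged (query, good_response) pairs into chat-style JSONL training
# examples. The shape shown matches common chat fine-tuning formats;
# check your provider's docs for the exact schema.
def to_training_jsonl(pairs, system_prompt):
    lines = []
    for query, response in pairs:
        example = {"messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": query},
            {"role": "assistant", "content": response},
        ]}
        lines.append(json.dumps(example))
    return "\n".join(lines)

pairs = [("What is your refund policy?",
          "Refunds are available within 30 days.")]
jsonl = to_training_jsonl(pairs, "You are a helpful support agent.")
```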

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt;&lt;br&gt;
Apart from yielding better results, fine-tuning can dramatically reduce costs. For example, you can train the gpt-3.5-turbo model on data produced by GPT-4 or other capable (and expensive) models.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Additional Considerations&lt;/h2&gt;

&lt;p&gt;Besides the pillars mentioned above, there are a few more concepts you need to consider when building production-grade, LLM-powered applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt; Depending on your application, it might be crucial to optimize for fast response times and minimal latency. Make sure to design your prompt chains for maximum throughput.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Model Support:&lt;/strong&gt; If you use multiple LLMs such as GPT-3.5, GPT-4, Claude, or LLaMA-2, consider how you consume them. Adopting a unified, abstracted way to consume various models will make your application more maintainable as you scale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Feedback:&lt;/strong&gt; Understanding how real users interact with your LLMs is invaluable for guiding improvements. Make sure to capture real usage data and feedback so you can improve the user experience over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Readiness:&lt;/strong&gt; Depending on your target market, enterprise-grade capabilities might be important. Think about fine-grained access controls and permissions, predictability and reliability SLAs, data security, privacy, and compliance assurance, automated testing and validation frameworks to ensure reliability, and more.&lt;/li&gt;
&lt;/ul&gt;
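
&lt;p&gt;The multi-model point can be sketched as a small abstraction layer; the provider classes below are stand-ins, not real client libraries:&lt;/p&gt;

```python
# A unified interface over multiple providers (sketch): application code
# calls complete() and never imports a vendor SDK directly. The provider
# classes here are stand-ins for real client libraries.
class Provider:
    def complete(self, prompt):
        raise NotImplementedError

class OpenAIProvider(Provider):
    def complete(self, prompt):
        return f"[openai] {prompt}"

class ClaudeProvider(Provider):
    def complete(self, prompt):
        return f"[claude] {prompt}"

PROVIDERS = {"gpt-4": OpenAIProvider(), "claude": ClaudeProvider()}

def complete(model, prompt):
    return PROVIDERS[model].complete(prompt)
```

&lt;p&gt;Swapping or adding a model then means registering one more provider, not touching application code.&lt;/p&gt;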

&lt;h2&gt;LLMOps Platform - Give Pezzo a Try!&lt;/h2&gt;

&lt;p&gt;Pezzo is the open-source (Apache 2.0) LLMOps platform. It addresses prompt management, versioning, instant delivery, A/B testing, fine-tuning, observability, monitoring, evaluation, collaboration and more.&lt;/p&gt;

&lt;p&gt;Regardless of where you’re at in your LLM adoption journey, consider using Pezzo. It takes exactly one minute to integrate, and it delivers value immediately.&lt;/p&gt;

&lt;p&gt;If you’d like to learn more about Pezzo:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/pezzolabs/pezzo"&gt;Check out the Pezzo GitHub repository&lt;/a&gt; and consider giving us a star! ⭐️&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pezzo.ai"&gt;Check out our website&lt;/a&gt; and try Pezzo Cloud&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.pezzo.ai"&gt;Read the Pezzo Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>llm</category>
      <category>webdev</category>
      <category>opensource</category>
      <category>openai</category>
    </item>
    <item>
      <title>Pezzo.ai is out of Beta! - The open-source, developer-first LLMOps platform</title>
      <dc:creator>eylonmiz</dc:creator>
      <pubDate>Thu, 17 Aug 2023 17:00:51 +0000</pubDate>
      <link>https://forem.com/eylonmiz/pezzoai-is-out-of-beta-the-open-source-developer-first-llmops-platform-14i2</link>
      <guid>https://forem.com/eylonmiz/pezzoai-is-out-of-beta-the-open-source-developer-first-llmops-platform-14i2</guid>
      <description>&lt;p&gt;Pezzo.ai is out of Beta! - The open-source, developer-first LLMOps platform.&lt;/p&gt;

&lt;p&gt;Struggling to add that sprinkle of LLM magic to your product? Pezzo is here to demystify the process.&lt;/p&gt;

&lt;p&gt;Through building various LLM-driven products (such as autonomous agents, chatbots, content generators), I've experienced firsthand the gap between a basic proof of concept and a top-tier, production-grade product. And here's my insight: Crafting outstanding AI applications is not merely a data science issue; it's an engineering challenge.&lt;/p&gt;

&lt;p&gt;Imagine what a small, talented team of full-stack developers can achieve with an OpenAI API key and expertly composed LLM prompts. That's where Pezzo comes in!&lt;/p&gt;

&lt;p&gt;✨ What is Pezzo?&lt;br&gt;
Pezzo is an open-source, developer-first LLMOps platform. Here, your product teams can create, manage, monitor, evaluate, and optimize prompts effortlessly.&lt;br&gt;
With Pezzo, AI delivery at scale becomes affordable and convenient, with security, low latency, reliability, and visibility built in. Even a single line of code can save you big. It's time to adopt AI smoothly with your team.&lt;/p&gt;

&lt;p&gt;🌎 Pezzo's Website: &lt;a href="https://pezzo.cc/3QHEyRi"&gt;https://pezzo.cc/3QHEyRi&lt;/a&gt;&lt;br&gt;
👩‍💻 Pezzo on GitHub: &lt;a href="https://pezzo.cc/3qIWNv3"&gt;https://pezzo.cc/3qIWNv3&lt;/a&gt;&lt;br&gt;
🌐 Docs: &lt;a href="https://pezzo.cc/3qDCQWD"&gt;https://pezzo.cc/3qDCQWD&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Support Pezzo:&lt;br&gt;
🌟 Star us on GitHub to express your support!&lt;br&gt;
💬 Signup and let us know what you think ❤️&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Introducing ReactAgent: The open-source React.js Autonomous LLM Agent</title>
      <dc:creator>eylonmiz</dc:creator>
      <pubDate>Mon, 15 May 2023 08:33:46 +0000</pubDate>
      <link>https://forem.com/eylonmiz/introducing-reactagent-the-open-source-reactjs-autonomous-llm-agent-4o81</link>
      <guid>https://forem.com/eylonmiz/introducing-reactagent-the-open-source-reactjs-autonomous-llm-agent-4o81</guid>
      <description>&lt;p&gt;Hello, fellow developers!&lt;/p&gt;

&lt;p&gt;I've been working on an exciting project that I'm sure will help many of you save a lot of precious time: ReactAgent, an experimental autonomous agent that uses the GPT-4 language model to generate and compose React components from user stories. Built with technologies like React, TailwindCSS, TypeScript, Radix UI, shadcn/ui, and the OpenAI API, this project aims to revolutionize how we write and understand code.&lt;/p&gt;

&lt;p&gt;
    &lt;a href="https://reactagent.io"&gt;Website&lt;/a&gt;
    ·
    &lt;a href="https://www.loom.com/share/658adb2869174e81a39a0a2cdcfec4eb"&gt;Watch Demo&lt;/a&gt;
    ·
    &lt;a href="https://github.com/eylonmiz/react-agent"&gt;Github Repo&lt;/a&gt;
    ·
    &lt;a href="https://docs.reactagent.io"&gt;Docs&lt;/a&gt;
    ·
    &lt;a href="https://discord.gg/57JjYNKe"&gt;Discord&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;🚀 Features&lt;/h2&gt;

&lt;p&gt;ReactAgent comes with a variety of features that make coding in React more intuitive and efficient:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generate React Components from user stories:&lt;/strong&gt; No more manually coding every single component. Just provide a user story, and ReactAgent will do the rest.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compose React Components from existing components:&lt;/strong&gt; Leverage your existing components to create new ones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use a local design system to generate React Components:&lt;/strong&gt; ReactAgent can tap into your local design system to generate components that align with your style guide.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use React, TailwindCSS, TypeScript, Radix UI, shadcn/ui:&lt;/strong&gt; ReactAgent is built with popular, modern technologies to ensure compatibility and efficiency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built with Atomic Design Principles:&lt;/strong&gt; We've incorporated atomic design principles into ReactAgent to enhance the cohesiveness and consistency of your designs.&lt;/li&gt;
&lt;/ul&gt;
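
&lt;p&gt;To illustrate the core idea (this is not ReactAgent's actual implementation), a generation prompt might combine the user story with the local design system like so:&lt;/p&gt;

```python
# Illustrative sketch only: combine a user story with local design-system
# components to build a generation prompt for the model.
def build_generation_prompt(user_story, design_system_components):
    component_list = ", ".join(design_system_components)
    return (
        "You are a React engineer. Using TypeScript, TailwindCSS, and these "
        f"existing design-system components: {component_list}. "
        f"Generate a React component implementing this user story: {user_story}"
    )

prompt = build_generation_prompt(
    "As a user, I can reset my password from the login screen",
    ["Button", "Input", "Card"],
)
```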

&lt;h2&gt;📦 Next Steps&lt;/h2&gt;

&lt;p&gt;We're continually working to improve ReactAgent and add more features. Here are some of the things we're planning for the future:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Edit existing components:&lt;/strong&gt; Make changes to your existing components directly within ReactAgent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Components after generating:&lt;/strong&gt; Immediately test your new components to ensure they're working correctly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wireframe image to skeleton code:&lt;/strong&gt; Convert your design wireframes into code skeletons.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote design system to generate React Components:&lt;/strong&gt; Use a remote design system to generate components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use of external libraries:&lt;/strong&gt; Integrate external libraries into your components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Component logic control (state, props, context, effects, API calls, etc.):&lt;/strong&gt; More control over your component logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We're excited to see where ReactAgent goes from here, and we hope you are too! Remember, this is an open-source project, and we welcome all kinds of contributions. Whether you're a developer who wants to code, a designer who can help with the UI, or just a user who wants to share feedback and ideas, we'd love to hear from you.&lt;/p&gt;

&lt;p&gt;Let's build the future of web development together!&lt;/p&gt;

&lt;p&gt;Happy coding,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/eylonmiz"&gt;Eylon&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>opensource</category>
      <category>openai</category>
      <category>react</category>
    </item>
    <item>
      <title>open-source React GPT LLM Agent</title>
      <dc:creator>eylonmiz</dc:creator>
      <pubDate>Fri, 12 May 2023 09:19:26 +0000</pubDate>
      <link>https://forem.com/eylonmiz/open-source-react-gpt-llm-agent-9da</link>
      <guid>https://forem.com/eylonmiz/open-source-react-gpt-llm-agent-9da</guid>
<description>&lt;p&gt;Hey everyone, for the past couple of months I've been working on an experiment to make GPT-4 much more useful for web development with React: writing production code that fits any repository, without copy-pasting from ChatGPT or relying on small, out-of-context auto-complete snippets from Copilot.&lt;/p&gt;

&lt;p&gt;The agent takes a user story as text, then generates and composes multiple React components to build the relevant screens, using TypeScript, TailwindCSS, and Radix UI.&lt;/p&gt;

&lt;p&gt;It's still experimental, but the results are very interesting, and I'd love to get your feedback!&lt;br&gt;
It is completely open source, and we're looking for contributors!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/tutim-io/react-gpt"&gt;Github Repo&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.loom.com/share/658adb2869174e81a39a0a2cdcfec4eb"&gt;Demo Video&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/tutim-io/react-gpt/pull/1/files"&gt;Output Code Example&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>react</category>
      <category>opensource</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Introducing tutim.io - A Headless, API-first, and Developer-led User Flow Platform</title>
      <dc:creator>eylonmiz</dc:creator>
      <pubDate>Wed, 12 Apr 2023 17:00:24 +0000</pubDate>
      <link>https://forem.com/eylonmiz/introducing-tutimio-a-headless-api-first-and-developer-led-user-flow-platform-295n</link>
      <guid>https://forem.com/eylonmiz/introducing-tutimio-a-headless-api-first-and-developer-led-user-flow-platform-295n</guid>
      <description>&lt;p&gt;Hey Dev Community! 🎉&lt;/p&gt;

&lt;p&gt;As a developer who has built and maintained a few hundred forms at Rapyd, a $10B+ fintech company, I've personally experienced the pain and inefficiencies of traditional form building and user flow creation. That's why I've been working on tutim.io - to provide a better alternative for developers and product teams alike.&lt;/p&gt;

&lt;p&gt;I would love to hear your feedback on &lt;a href="https://tutim.io/"&gt;tutim.io&lt;/a&gt;, a headless, API-first, and developer-led user flow platform that aims to revolutionize how product teams create and optimize their user flows.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;The Problem: From Simple Forms to Complex Challenges&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yvl23HzZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhl0hgaejacolvh4d0yt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yvl23HzZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhl0hgaejacolvh4d0yt.png" alt="Complex Wizard Creation" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Flexport's onboarding wizard plan &lt;a href="https://medium.com/flexport-ux/designing-the-new-operating-system-for-global-trade-at-flexport-ce84b7052032"&gt;(source)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you start building a web app, you might begin with just one or two simple forms. As your app grows, so do the number and complexity of the forms. Soon, you may find yourself maintaining, upgrading, and fixing bugs in over 100 complex forms, and a significant part of your job becomes dedicated to managing these forms.&lt;/p&gt;

&lt;p&gt;That's where tutim.io comes in. Our goal is to simplify and streamline the process of creating, optimizing, and maintaining user flows so that you can focus on what truly matters - building amazing products.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Why Headless?&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Headless architecture is the future of UI development. By decoupling the frontend presentation layer from the backend, we've made it possible to build seamless, high-converting user experiences without sacrificing development time or resources.&lt;/p&gt;

&lt;p&gt;Being headless means that our platform is flexible and can integrate with your existing tech stack seamlessly. The result? A truly developer-first product that empowers non-technical team members to contribute to product development.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;How We're Different from Typeform and Other No-Code Form Builders&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Unlike traditional form builders, tutim.io is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Headless&lt;/strong&gt; - We don't impose any design constraints. Use your own design system and create unique user experiences that align with your brand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API-first&lt;/strong&gt; - Our cloud backend comes with REST API support, making integration with your existing tech stack a breeze.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer-led&lt;/strong&gt; - We built tutim.io with developers in mind, addressing their needs and pain points throughout the entire development process.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;&lt;strong&gt;How We're Different from Form Libraries (e.g. RHF)&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Unlike traditional form libraries like react-hook-form or formik, tutim.io offers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Non-technical team member empowerment&lt;/strong&gt; - Our platform enables non-technical team members to create and optimize user flows, fostering collaboration and reducing the development workload.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-step support&lt;/strong&gt; - Easily create multi-step user flows and wizard forms to guide users through complex processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich features&lt;/strong&gt; - Access a wide array of powerful features designed to simplify user flow creation and optimization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote schemas&lt;/strong&gt; - Benefit from remote schemas that allow for real-time changes and easy management.&lt;/li&gt;
&lt;/ol&gt;
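
&lt;p&gt;To make the remote-schema idea concrete, here is a hypothetical schema shape (not tutim.io's actual format) driving a multi-step flow:&lt;/p&gt;

```python
# Hypothetical shape of a remote, multi-step form schema (not tutim.io's
# actual format): the UI renders whatever the schema says, so changing
# the schema changes the flow without a redeploy.
ONBOARDING_SCHEMA = {
    "steps": [
        {"title": "Account", "fields": [
            {"key": "email", "type": "email", "required": True},
            {"key": "password", "type": "password", "required": True},
        ]},
        {"title": "Company", "fields": [
            {"key": "company_name", "type": "text", "required": False},
        ]},
    ],
}

def validate_step(schema, step_index, answers):
    # Collect required fields that are missing from the submitted answers.
    missing = []
    for field in schema["steps"][step_index]["fields"]:
        if field["required"] and not answers.get(field["key"]):
            missing.append(field["key"])
    return missing
```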

&lt;h2&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Drag-and-drop user flow creation&lt;/li&gt;
&lt;li&gt;Generate production-grade user flows and wizard forms from text&lt;/li&gt;
&lt;li&gt;Use your own design system with our headless wizards&lt;/li&gt;
&lt;li&gt;Integrate with your existing tech stack&lt;/li&gt;
&lt;li&gt;Cloud-based or self-hosted options&lt;/li&gt;
&lt;li&gt;REST API support&lt;/li&gt;
&lt;li&gt;Conversion Rate Optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;&lt;strong&gt;The Benefits&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;By using tutim.io, product teams can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save 50%-90% of development time&lt;/li&gt;
&lt;li&gt;Increase conversion rates with optimized user flows&lt;/li&gt;
&lt;li&gt;Enable non-technical team members to contribute to product development&lt;/li&gt;
&lt;li&gt;Seamlessly integrate with their existing tech stack&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;&lt;strong&gt;What's Next&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Our journey is only beginning. We're continuously improving tutim.io and adding new features based on your feedback. We're also excited to explore integrations with popular tools and platforms to make tutim.io even more powerful, so please let us know what you need.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Let's Talk!&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;We'd love to hear your thoughts on tutim.io, whether you're a developer, product manager, or simply interested in improving user flows. Have you faced similar challenges in your projects? What would you like to see in our platform?&lt;/p&gt;

&lt;p&gt;Feel free to leave a comment here or join our discord channel at &lt;a href="https://discord.tutim.io/"&gt;https://discord.tutim.io/&lt;/a&gt;. Let's make user flow creation and optimization a breeze for everyone! 🚀&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>frontend</category>
      <category>react</category>
    </item>
  </channel>
</rss>
