<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Olu </title>
    <description>The latest articles on Forem by Olu  (@digital-nomad).</description>
    <link>https://forem.com/digital-nomad</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F714745%2F738882f9-8eec-4ee2-ad90-34cba61c0e33.jpeg</url>
      <title>Forem: Olu </title>
      <link>https://forem.com/digital-nomad</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/digital-nomad"/>
    <language>en</language>
    <item>
      <title>Huggingface.js: Quick Guide to Getting Started</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Wed, 10 Jan 2024 05:54:44 +0000</pubDate>
      <link>https://forem.com/digital-nomad/huggingfacejs-quick-guide-to-getting-started-2mg</link>
      <guid>https://forem.com/digital-nomad/huggingfacejs-quick-guide-to-getting-started-2mg</guid>
      <description>&lt;p&gt;Have you ever wondered about the friendly emoji-like logo associated with Hugging Face? No, it's not just another cute emoji; it is the heart and symbol of a thriving community dedicated to making machine learning and artificial intelligence more approachable for enthusiasts, developers, and researchers alike. Let's dive deeper into understanding Hugging Face and how anyone can use its awesome features. &lt;/p&gt;

&lt;p&gt;What exactly is Hugging Face? &lt;/p&gt;

&lt;p&gt;It's a vibrant community and versatile platform centered around ML/AI models. The platform allows users to create, fine-tune, and deploy cutting-edge models effortlessly. The community fosters collaboration among professionals and hobbyists eager to advance ML research and applications. &lt;/p&gt;

&lt;p&gt;Why join the Hugging Face community? &lt;/p&gt;

&lt;p&gt;Aside from being part of an interesting and passionate group committed to pushing ML boundaries, joining the Hugging Face community offers several benefits: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Share and discover thousands of ready-to-use models, datasets, and demos tailored to various ML tasks. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect with fellow ML engineers, data scientists, researchers, and beginners to exchange ideas, seek guidance, and collaborate on exciting projects. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contribute to open-source initiatives and improve existing models or datasets based on use-cases. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Exploring The Hugging Face Model Hub&lt;/p&gt;

&lt;p&gt;Think of the Hugging Face Model Hub as an extensive online library filled with valuable resources for programmers. Here is what you can do: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Models: These are like recipes that teach computers to process and analyze text, images, audio, and other types of input. With access to over 350K pre-trained models, anyone can quickly get started on any ML project! (No need to reinvent the wheel.)  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Datasets: Like stacks of books waiting to be read, datasets provide the essential raw material for training and refining ML models. The Hub hosts more than 75K diverse datasets, enabling you to experiment with various scenarios and enhance your model's performance. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Spaces: These are awesome! Spaces are hosted environments (a dedicated place to host your model and use models in your web browser) that showcase live demos of ML models right inside your web browser. By offering 150K+ such demos, Hugging Face makes it super easy to explore and tinker with state-of-the-art techniques without setting up complex infrastructure. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, the Hugging Face Hub keeps track of every change made to these assets, similar to your browser's history. &lt;/p&gt;

&lt;p&gt;Now that we've explored the basics, let's discuss using Hugging Face effectively.  &lt;/p&gt;

&lt;p&gt;Getting Started with Hugging Face&lt;/p&gt;

&lt;p&gt;To begin leveraging Hugging Face, follow these simple steps:&lt;/p&gt;

&lt;p&gt;Step 1. Visit the official website&lt;/p&gt;

&lt;p&gt;Step 2. Obtain a user access token (API key). Note: to use the Inference API on your private models, providing an API token is mandatory. Once set up, here are some cool features available to you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Choose from 100+ high-quality pre-built models&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easily upload, manage, and host your custom models securely&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Perform various ML tasks, including classification, named entity recognition, conversational agents, summarization, translation, question answering, and embedding extraction&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy large models seamlessly, even if they pose challenges during production deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Develop your business solutions on top of the trusted open-source ML framework&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's code! Use any IDE you want (VS Code works, but Sublime, Replit, or GitHub Codespaces are fine too!) &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open a new / empty directory! &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Import the required libraries &amp;amp; run 'npm init -y' for the Node SDK. In your new package.json file, add the following after name, version, and main:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; "main": "huggingface.js",
  "type": "module", 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
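&lt;p&gt;In context, a minimal package.json for this project might look like the following sketch (the name and version ranges are illustrative placeholders; the dependencies entries are added for you by the npm install command shown later):&lt;/p&gt;

```json
{
  "name": "huggingface-demo",
  "version": "1.0.0",
  "main": "huggingface.js",
  "type": "module",
  "dependencies": {
    "@huggingface/inference": "^2.0.0",
    "dotenv": "^16.0.0"
  }
}
```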



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Specify your Hugging Face access token&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initialize the Hugging Face Inference class&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define the model and the image you want to caption &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fetch the image as a blob&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the ImageToText Function to get the image caption&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log the results!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note: To install the package and dependencies use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;npm i @huggingface/inference dotenv &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Model used: &lt;a href="https://huggingface.co/nlpconnect/vit-gpt2-image-captioning"&gt;https://huggingface.co/nlpconnect/vit-gpt2-image-captioning&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Models from Hugging Face  &lt;/p&gt;

&lt;h1&gt;
  
  
  Node SDK - Run 'npm init -y' to set up your Node.js environment and create the package.json file
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Step 1 - Import required libraries 
import { HfInference } from "@huggingface/inference" 
import dotenv from "dotenv"
dotenv.config 
# Step 2 -  Specify your Hugging Face access token
const HF_ACCESS_TOKEN = process.env.HF_ACCESS_TOKEN 
# Step 3 -  Initialize the Hugging Face Inference Class 
const inference = new HfInference(HF_ACCESS_TOKEN);
#Step 4 - Define the model (see hugging face docs) and the image you want to caption 
const model = ""
const imageUrl = "URL_OF_ANY_IMAGE_YOU_WANT"
# IMPORTANT NOTE - You need to play different imgUrls to get the right format! Feel free to check the repo or huggingface space!

Example of imageUrl = "https://ankur3107.github.io/assets/images/image-captioning-example.png"

# Step 5 - Fetch the image as a blog 
const response = await fetch(imageUrl);
const imageBlob = await response.blob();
# Step 6 - Use the ImageToText function to get the Image Caption 
const results = await inference.imageToText({
   data: imageBlob,
   model: model,
});
# Log in Console!
console.log(results); 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notes: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Make sure you have an active API access token &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From the project directory, run 'node huggingface.js' &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can pass the imageUrl of any image you want or find online&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the Hugging Face JS Inference docs for how to use and interact with the library. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
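&lt;p&gt;For reference, dotenv reads the access token from a .env file in the project root; a minimal sketch (the value is a placeholder - keep this file out of version control):&lt;/p&gt;

```shell
# .env - placeholder value, never commit this file
HF_ACCESS_TOKEN=hf_your_token_here
```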

&lt;p&gt;With Hugging Face, discovering the potential of AI becomes enjoyable and achievable for individuals and organizations alike. So why wait? Join the Hugging Face community today and start exploring!&lt;/p&gt;

&lt;p&gt;For further assistance, check out these helpful resources:&lt;/p&gt;

&lt;h2&gt;
  
  
  Hugging Face Inference API Documentation
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;A one year obsession can change your life&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My big plan for 2024 is to really focus on learning about smart computer programs, also known as machine learning. I'll be exploring things like Hugging Face, TensorFlow, and PyTorch, which are tools to help make these smart programs. Also, I'll dive into the basics of how these programs work and how we can use them in everyday life. &lt;/p&gt;

&lt;p&gt;I believe that spending one whole year learning about this stuff can really make a big difference. So, LFG!!!!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AI Companions 101: How To Create, Customize, and Chat w/ Your AI Companion</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Mon, 17 Jul 2023 06:27:57 +0000</pubDate>
      <link>https://forem.com/digital-nomad/ai-companions-101-how-to-create-customize-and-chat-w-your-ai-companion-210k</link>
      <guid>https://forem.com/digital-nomad/ai-companions-101-how-to-create-customize-and-chat-w-your-ai-companion-210k</guid>
      <description>&lt;p&gt;AI Companions 101: How-to&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3iaB0phE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgr1d7jps1hxozy2d5ii.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3iaB0phE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgr1d7jps1hxozy2d5ii.jpg" alt="Image description" width="512" height="512"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Introduction:&lt;br&gt;
Imagine having your very own AI companion that you can chat with, right in your browser or even via SMS! The AI Companion App allows you to create and customize your companion's personality and backstory, making each interaction truly unique and personalized.&lt;/p&gt;

&lt;p&gt;The Ultimate Chatbot Experience:&lt;br&gt;
But what sets this app apart is its use of a powerful vector database with similarity search. This means that your conversations with your companion can go beyond simple responses. The app intelligently retrieves relevant prompts based on the context and background of your companion, creating more engaging and dynamic interactions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1jQT4mFb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g2wrtytk0g075sv9hggk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1jQT4mFb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g2wrtytk0g075sv9hggk.png" alt="Image description" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Endless Possibilities:&lt;br&gt;
Now, let's talk about the possibilities. With the AI Companion App, the use cases are endless! Looking for an AI girlfriend? You got it! Want a trusted friend to chat with? No problem! Need entertainment or coaching? It's all here! You can guide your companion towards your ideal use case by crafting their backstory and choosing the perfect model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--94H80RuQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ue8zrxpxdt9nqgkg3xhk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--94H80RuQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ue8zrxpxdt9nqgkg3xhk.png" alt="Image description" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not Just for Casual Interactions:&lt;br&gt;
But remember, this app is not just for casual interactions. It's also a valuable developer tutorial and starter stack for those curious about chatbot development. If you're interested in exploring a production-ready open-source platform, be sure to check out Steamship. And if you want to experience the leading AI chat platforms, Character.ai is the way to go!&lt;/p&gt;

&lt;p&gt;The Technology Behind the App:&lt;br&gt;
Now, let's take a closer look at the technology behind the AI Companion App. It's built on a powerful stack that combines the best tools and frameworks in the AI world. We have Next.js for app logic, Pinecone and Supabase pgvector for vector database storage, Langchain.js for LLM orchestration, and OpenAI and Replicate for powerful text models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rQo6oz-A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l39dgmxpi3iy0rdnhu71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rQo6oz-A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l39dgmxpi3iy0rdnhu71.png" alt="Image description" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Text Your Companion:&lt;br&gt;
The app also utilizes Twilio for text messaging integration, allowing you to text your AI companion and retain the full conversational history and context. It adds a whole new level of immersion and interaction!&lt;/p&gt;

&lt;p&gt;Getting Started with the AI Companion App:&lt;/p&gt;

&lt;p&gt;So, how can you get started with the AI Companion App? It's super simple! Just follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Fork and clone the &lt;a href="https://github.com/a16z-infra/companion-app"&gt;repository&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fill out the necessary secrets, such as Clerk secrets, OpenAI API key, Replicate API key, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate embeddings for your companions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the app locally and start chatting with your AI companions!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
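&lt;p&gt;The secrets in step 3 typically live in a local env file; a hypothetical sketch (these variable names are assumptions - copy the real ones from the repository's .env.example):&lt;/p&gt;

```shell
# .env.local - hypothetical variable names with placeholder values
OPENAI_API_KEY=sk-your-openai-key
REPLICATE_API_TOKEN=r8-your-replicate-token
CLERK_SECRET_KEY=your-clerk-secret
```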

&lt;p&gt;Deploying the App:&lt;br&gt;
But wait, there's more! You can even deploy the app to a live environment using Fly. It's a breeze to set up, and you'll have your AI companions accessible from anywhere!&lt;/p&gt;

&lt;p&gt;Export to Character.ai:&lt;br&gt;
And for those of you looking to take your AI companion to the next level, we've got you covered. You can easily export your companion to Character.ai, unlocking even more possibilities for customization and advanced character settings!&lt;/p&gt;

&lt;p&gt;In Summary:&lt;br&gt;
In summary, the AI Companion App is a game-changer in the world of AI interactions. Whether you're looking for a friend, a partner, or just some entertaining conversations, this app delivers. Its powerful technology stack, seamless integration with various AI models, and text messaging capabilities make it a must-have tool for AI enthusiasts and developers alike.&lt;/p&gt;

&lt;p&gt;Conclusion:&lt;br&gt;
Thanks for joining us on this exciting journey into the AI Companion App. Be sure to check out the links below to explore the app, join our community Discord, and stay up to date with the latest AI innovations. Don't forget to unleash the power of AI companions in your life!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/q-XX9QdXIK0"&gt;Youtube Tutorial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Resources:&lt;/p&gt;

&lt;p&gt;Github &lt;br&gt;
Demo App&lt;br&gt;
OpenAI&lt;br&gt;
Clerk&lt;br&gt;
Replicate&lt;br&gt;
Upstash&lt;br&gt;
Twilio&lt;br&gt;
Character.ai&lt;/p&gt;

</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>web3</category>
      <category>api</category>
    </item>
    <item>
      <title>Building LLM Web Apps w/ Node.js</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Sat, 24 Jun 2023 21:21:58 +0000</pubDate>
      <link>https://forem.com/digital-nomad/building-llm-web-apps-w-nodejs-3292</link>
      <guid>https://forem.com/digital-nomad/building-llm-web-apps-w-nodejs-3292</guid>
      <description>&lt;p&gt;First, I want to say thank you to the team at Vercel for putting in the legwork and creating the Vercel AI SDK, which allows you to create AI-related projects using React and Svelte. Over the weekend, I decided to create a beautiful but minimalistic LinkedIn bio generator. &lt;/p&gt;

&lt;p&gt;Introduction: &lt;/p&gt;

&lt;p&gt;Read Vercel's blog post introducing the &lt;a href="https://vercel.com/blog/introducing-the-vercel-ai-sdk"&gt;AI SDK&lt;/a&gt;. After looking through the blog post, feel free to browse the templates devs have started for us. Once you have found a template, look at the demo and review the GitHub repo. If you don't understand anything, check online or use your favorite chatbot!&lt;/p&gt;

&lt;p&gt;I decided to go with the Twitter bio generator since I had found out about the Vercel AI SDK on Twitter. Since I was on LinkedIn earlier that day, I decided to create a LinkedIn summary or bio generator that anyone with a social network can use. I will explain the features of Next.js and give a general walkthrough for building LLM-powered apps!&lt;/p&gt;

&lt;p&gt;Overview Of Vercel: &lt;/p&gt;

&lt;p&gt;Vercel is a powerful platform for building AI-powered applications with popular open-source and cloud LLMs, including support for streaming generative responses. It provides a seamless deployment experience for Next.js applications, making it an ideal choice for our LinkedIn bio website. With Vercel, you can take advantage of features like automatic deployments, a built-in CDN, serverless functions, and more. It offers scalability, performance, and ease of use, allowing you to focus on building your website without worrying about infrastructure management.&lt;/p&gt;

&lt;p&gt;Setting up the Environment: &lt;/p&gt;

&lt;p&gt;To get started, we need to set up our environment. Open your IDE or terminal to run locally. Check the resources section for the blog post by the Vercel team and the GitHub repo!&lt;/p&gt;

&lt;p&gt;The Frontend:&lt;/p&gt;

&lt;p&gt;The Next.js frontend consists of a few elements:&lt;/p&gt;

&lt;p&gt;A text box for users to copy their current bio or write a few sentences about themselves. &lt;/p&gt;

&lt;p&gt;A dropdown where they can select the tone of the bio they want to be generated.&lt;/p&gt;

&lt;p&gt;A submit button for generating their bio, which when clicked calls an API route that uses OpenAI's GPT-3 model and returns two generated bios!&lt;/p&gt;

&lt;p&gt;Two containers to display the generated bios after we get them back from the API route.&lt;/p&gt;

&lt;p&gt;The Backend: &lt;/p&gt;

&lt;p&gt;I am still relatively new to Next.js, but one of its advantages lies in its ability to seamlessly develop both the frontend and backend of your web app. By leveraging Next.js we can build full-stack apps! &lt;/p&gt;

&lt;p&gt;To see exactly how to build out your backend, please refer to the Official Vercel blog since they created Next.js!&lt;/p&gt;

&lt;p&gt;Limitations of the serverless function approach &lt;/p&gt;

&lt;p&gt;You can build your web app with a serverless approach, but there are some limitations that make edge a better fit for this kind of application:&lt;/p&gt;

&lt;p&gt;If we're building an app where we want to await longer responses, such as generating entire blog posts, responses will likely take over 10 seconds, which can lead to serverless timeout issues on Vercel's Hobby tier. (The Pro tier has a 60-second timeout, which is usually enough for GPT-3 applications.) &lt;/p&gt;

&lt;p&gt;Waiting several seconds before seeing any data isn't a good user experience. Ideally, we want to incrementally show users data as it's generated - just like OpenAI's ChatGPT. &lt;/p&gt;

&lt;p&gt;The responses may take even longer due to serverless lambda functions. &lt;/p&gt;

&lt;p&gt;Vercel Edge Functions with streaming are here to help! Depending on which Node.js libraries you're working with, your code may not be edge-compatible; for this use case, edge functions work great. Let's dive into what edge functions are and how we use them for faster generations and better user experiences!&lt;/p&gt;

&lt;p&gt;Edge Functions vs. Serverless Functions &lt;/p&gt;

&lt;p&gt;Edge functions are basically a more lightweight version of serverless functions. They have smaller limits on code size and memory, and they don't support all Node.js libraries. Here's why we still want to use them: &lt;/p&gt;

&lt;p&gt;Because Edge functions use a smaller edge runtime and run very close to users on the edge, they're also fast. They have virtually no cold starts and are significantly faster than serverless functions. &lt;/p&gt;

&lt;p&gt;They allow for a great user experience, especially when paired with streaming. Streaming a response breaks it down into small chunks and progressively sends them to the client, as opposed to waiting for the entire response before sending it. &lt;/p&gt;

&lt;p&gt;Edge functions have a 30-second timeout, and even longer when streaming, which exceeds the timeout limit for serverless functions on Vercel's Hobby plan. Using them can get you past timeout issues when calling AI APIs that take longer to respond. As an added benefit, edge functions are also cheaper to run!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/9Q9_CQxFUKY"&gt;https://youtu.be/9Q9_CQxFUKY&lt;/a&gt;&lt;br&gt;
Video By Vercel Team!&lt;/p&gt;

&lt;p&gt;Now that we understand the benefits and cost-effectiveness of using edge functions, let's refactor our existing code to use them. Now you can start with your backend's API route!&lt;/p&gt;

&lt;p&gt;Check the GitHub repo from the Vercel team or myself. Open the generate.ts file and you can see the function is defined as an "edge" function. You will also see an extra variable added to the payload to make sure OpenAI streams chunks back to the client. &lt;/p&gt;
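&lt;p&gt;A minimal sketch of what such an edge route looks like (Next.js pages/api style from that era; the runtime flag is the key part, and the echo body here stands in for the real OpenAI payload and streaming logic):&lt;/p&gt;

```javascript
// Marking the route with runtime: "edge" is what makes it an Edge Function.
export const config = { runtime: "edge" };

// The real generate.ts builds an OpenAI payload (with stream: true) here and
// returns the streamed response; this sketch just echoes the prompt back.
export default async function handler(req) {
  const { prompt } = await req.json();
  return new Response(`Echo: ${prompt}`);
}
```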

&lt;p&gt;After the payload is defined there is a stream variable. A helper function, OpenAIStream, enables us to incrementally stream responses to the client as we get data from OpenAI. &lt;/p&gt;

&lt;p&gt;If you open the OpenAIStream.ts file, you can see we send a POST request to OpenAI with the payload. We create a stream to continuously parse the data received from OpenAI while waiting for the [DONE] token, since this signifies completion. When that happens, the stream is closed. &lt;/p&gt;

&lt;p&gt;Back in the frontend &lt;/p&gt;

&lt;p&gt;We define a reader using the native web API, getReader(), and progressively add data to our generatedBio state as it's streamed in. &lt;/p&gt;
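&lt;p&gt;That reading loop can be sketched in a self-contained way; here the streamed response is simulated locally (Node 18+ web-stream globals), so no API key is needed:&lt;/p&gt;

```javascript
// Build a ReadableStream that emits the given strings as byte chunks,
// simulating a server that streams its response piece by piece.
function makeStream(chunks) {
  const encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const c of chunks) controller.enqueue(encoder.encode(c));
      controller.close();
    },
  });
}

// Read the stream with getReader(), appending each chunk as it arrives -
// the same pattern the frontend uses to grow the generatedBio state.
async function readAll(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

readAll(makeStream(["Hello", ", ", "world"])).then((t) => console.log(t)); // logs "Hello, world"
```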

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Thank you again to the Vercel team for creating the Vercel AI SDK, templates, and blogs to help devs learn and ship products. If you want to create your own version of my BioNinja or the original TwitterBio from the Vercel team, check out the resources below! &lt;/p&gt;

&lt;p&gt;Keep learning, building, and trying new things! Don't get trapped living with the results of other people's thinking! &lt;/p&gt;

&lt;p&gt;Resources: &lt;/p&gt;

&lt;p&gt;Bio Ninja - &lt;a href="https://bio-ninja-eight.vercel.app"&gt;DEMO&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Github: &lt;a href="https://github.com/byteolu/BioNinja"&gt;Bioninja &lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://vercel.com/blog/gpt-3-app-next-js-vercel-edge-functions"&gt;Blog post - Vercel's Team&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/vercel/status/1671192090141949952?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet"&gt;Twitter Post- Vercel AI SDK&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aiapplicationsblog.com/why-i-built-bioninja-io-w-javascript/"&gt;Blog &lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title> Vercel's AI SDK: Supercharging Your Web Development</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Mon, 19 Jun 2023 23:56:17 +0000</pubDate>
      <link>https://forem.com/digital-nomad/vercels-ai-sdk-supercharging-your-web-development-17nc</link>
      <guid>https://forem.com/digital-nomad/vercels-ai-sdk-supercharging-your-web-development-17nc</guid>
      <description>&lt;p&gt;Link to my demo app - Linkedin Bio&lt;/p&gt;

&lt;p&gt;Introduction: &lt;/p&gt;

&lt;p&gt;Have you ever wondered how technology can help you stand out in web development? Well, there's a cool new tool called the Vercel AI SDK that can make your new skills shine! In this blog, we'll explore what the Vercel AI SDK is and how it can help.&lt;/p&gt;

&lt;p&gt;What is the Vercel AI SDK?&lt;/p&gt;

&lt;p&gt; The Vercel AI SDK is like a special toolbox that helps developers create really smart and interactive websites. It's designed to make it easy for developers to add artificial intelligence (AI) features to their websites without having to build everything from scratch. With the Vercel AI SDK, developers can create things like chat interfaces, where you can talk to a computer and get intelligent responses.&lt;/p&gt;

&lt;p&gt;Why is it Exciting for Professionals or everyday people? &lt;/p&gt;

&lt;p&gt;Imagine your mom or dad wants to create a chores chatbot, or your manager wants you to create an Excel formatting app for non-technical stakeholders. They want to make it look really impressive so that people notice their skills! The Vercel AI SDK can help them do just that! It has a feature that lets them have a chat-like conversation with an AI model, which is like having a conversation with a super smart computer. This AI model can generate personalized summaries, skills, and experience sections for their profile. It's like having a personal assistant helping them create an awesome profile.&lt;/p&gt;

&lt;p&gt;How Does it Work?&lt;/p&gt;

&lt;p&gt; Using the Vercel AI SDK is pretty cool! Developers can write special code that tells the computer how to use the AI model. They can create a chat interface or a LinkedIn bio generator (&lt;a href="https://bio-ninja-eight.vercel.app"&gt;https://bio-ninja-eight.vercel.app&lt;/a&gt;). &lt;/p&gt;

&lt;p&gt;Benefits of the Vercel AI SDK: The Vercel AI SDK has some really cool benefits for professionals:&lt;/p&gt;

&lt;p&gt;Easy deployment: Once you've built an app with the Vercel AI SDK, you can put it on the internet using Vercel's platform. It's like finding a perfect home for your app. Vercel makes sure your app works fast and smoothly. It can handle lots of people using it at the same time without any problems.&lt;/p&gt;

&lt;p&gt;It's easy to use: Even if you don't know much about coding, you can still use the Vercel AI SDK. It's designed to be user-friendly, so you can focus on writing great code without getting stuck in complicated technical details.&lt;/p&gt;

&lt;p&gt;Conclusion: &lt;/p&gt;

&lt;p&gt;The Vercel AI SDK is an exciting tool that brings the power of AI to professionals looking to update web dev skills or start AI projects. &lt;/p&gt;

&lt;p&gt;So, with the Vercel AI SDK, developers can create amazing apps that use AI to make things easier. It's like having a superpower to make your website smarter and more interactive. And Vercel keeps improving the SDK to make it even better based on what developers need.&lt;/p&gt;

&lt;p&gt;Isn't it cool that we can use AI to do smart things on the internet? With the Vercel AI SDK, developers can make awesome apps that make our lives easier and more fun.&lt;/p&gt;

&lt;p&gt;Check out Vercel AI SDK or feel free to check out my demo app!&lt;/p&gt;

&lt;p&gt;You can generate your linkedin bio or any social networking Bio in seconds based off your job title or your current bio! &lt;/p&gt;

&lt;p&gt;Link - &lt;a href="https://bio-ninja-eight.vercel.app"&gt;https://bio-ninja-eight.vercel.app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Juneteenth &amp;amp; Have A Great Week!&lt;/p&gt;

&lt;p&gt;Prepare for your next job application with  Cover Letter Generator!&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;p&gt;Github Repo &lt;/p&gt;

&lt;p&gt;Vercel Templates&lt;/p&gt;

&lt;p&gt;Twitter Tweet - Inspiration for my Demo / Weekend Project&lt;/p&gt;

</description>
    </item>
    <item>
      <title>PandasAI: Making Data Analysis Fun &amp; Conversational!</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Tue, 13 Jun 2023 08:23:41 +0000</pubDate>
      <link>https://forem.com/digital-nomad/pandasai-making-data-analyst-fun-conversational-524m</link>
      <guid>https://forem.com/digital-nomad/pandasai-making-data-analyst-fun-conversational-524m</guid>
      <description>&lt;h2&gt;
  
  
  What is it?
&lt;/h2&gt;

&lt;p&gt;A new Python library that adds artificial intelligence to Pandas, the popular data analysis and manipulation tool. With PandasAI, you can ask questions about your data and get answers back in a conversational, chatbot-like way. In today's blog we will dive in and discover how PandasAI can make your analytical tasks easier!&lt;/p&gt;

&lt;h2&gt;
  
  
  What does PandasAI do ?
&lt;/h2&gt;

&lt;p&gt;Imagine having an assistant like Siri, except with access to a powerful data analytics library! You can ask questions about your data and receive answers in the form of Pandas data frames, which are organized like tables of information. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation&lt;/strong&gt;&lt;br&gt;
pip install pandasai&lt;/p&gt;

&lt;h2&gt;
  
  
  Explore PandasAI
&lt;/h2&gt;

&lt;p&gt;You can create a DataFrame to play around with, containing information on dog breeds such as their weight and goofiness score - or literally anything you want to create a dataset about!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Drum roll!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cool right?
&lt;/h2&gt;

&lt;p&gt;In a few lines of code, PandasAI can give you the answers you're looking for. You can take it a step further and level-up your questions to perform various calculations with ease. &lt;/p&gt;

&lt;p&gt;Note: Calculations work great, but prompting a chart based on the calculation may give you an off number. &lt;/p&gt;

&lt;h2&gt;
  
  
  But, Why Should You Care About PandasAI ?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Engaging &amp;amp; Fun way to Complete Analytics&lt;/strong&gt;: PandasAI can transform data analysis from a dry and dull task into an interactive and engaging experience. Instead of just crunching numbers, you can ask questions, explore your data, and uncover meaningful insights!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical-thinking and problem-solving&lt;/strong&gt;: Working with data requires critical thinking and problem-solving skills. With PandasAI, you'll learn how to formulate questions, analyze information and interpret results. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Discover Hidden Stories&lt;/strong&gt;: Data always has a story to tell. By asking the right questions and exploring your data, you can uncover patterns, trends, and correlations that might not be immediately apparent. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Privacy &amp;amp; Security are Important&lt;/strong&gt;: PandasAI takes privacy seriously. It randomizes sensitive information and shuffles non-sensitive data to ensure your privacy is protected. If you want to take your security further, you can run PandasAI with enforce_privacy=True to send only column names, not actual data. &lt;/p&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;


&lt;p&gt;The more new tools are released, the more exciting the data analyst/scientist/engineer journey becomes! You can explore your data, ask questions, and receive insightful answers. Give it a try!&lt;/p&gt;

&lt;p&gt;Install PandasAI, load your data, and start asking questions!&lt;/p&gt;

&lt;p&gt;Have a great week! &lt;/p&gt;

&lt;p&gt;Prepare for your next job application with &lt;a href="http://coverletterbuilder.up.railway.app/"&gt;Cover Letter Generator!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/gventuri/pandas-ai"&gt;Github Repo - pandasAI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://aiapplicationsblog.com/"&gt;Blog&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>ai</category>
      <category>analyst</category>
    </item>
    <item>
      <title>Creating Your Own Recipe Generator Using Streamlit, Python, and OpenAI API</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Thu, 08 Jun 2023 07:23:49 +0000</pubDate>
      <link>https://forem.com/digital-nomad/creating-your-own-recipe-generator-using-streamlit-python-and-openai-api-3g7f</link>
      <guid>https://forem.com/digital-nomad/creating-your-own-recipe-generator-using-streamlit-python-and-openai-api-3g7f</guid>
<description>&lt;p&gt;If you love food or are a tech enthusiast looking for a new way to discover quick and tasty recipes, then this is the blog for you. In this tutorial, I will walk you through the steps to create your own recipe generator using Streamlit, Python, and the OpenAI API. By the end of this tutorial, you'll have a fully functional recipe generator that can provide you with unique recipe ideas at the click of a button!&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow along with this tutorial, make sure you have the following prerequisites:&lt;/p&gt;

&lt;p&gt;Python 3 installed on your machine (version 3.6 or higher)&lt;/p&gt;

&lt;p&gt;Streamlit library installed ('pip install streamlit') &lt;/p&gt;

&lt;p&gt;OpenAI Python library installed ( 'pip install openai')&lt;/p&gt;

&lt;p&gt;OpenAI API key (You can obtain it from the OpenAI website) &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Set up the Project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's start by setting up the project structure and installing the required dependencies. Follow the steps below:&lt;/p&gt;

&lt;p&gt;Create a new directory for your project and navigate into it using the command line.&lt;/p&gt;

&lt;p&gt;Initialize a new Python virtual environment by running the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m venv recipe-generator-env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Activate the virtual environment:&lt;/p&gt;

&lt;p&gt;On Windows, run: &lt;code&gt;recipe-generator-env\Scripts\activate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;On macOS/Linux, run: &lt;code&gt;source recipe-generator-env/bin/activate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install the required libraries by running the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install streamlit openai python-dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a new Python file named recipe_generator.py in your project directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Set up the Environment Variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To securely store your OpenAI API key and load it into your application, we'll use environment variables. Follow these steps:&lt;/p&gt;

&lt;p&gt;Create a new file named .env in the same directory as your recipe_generator.py file.&lt;/p&gt;

&lt;p&gt;Open the .env file in a text editor and add the following line:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAI_API_KEY=your_api_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Save the .env file.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Import the Required Libraries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the recipe_generator.py file, start by importing the required libraries:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import streamlit as st
import openai
from dotenv import load_dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Load Environment Variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, load the environment variables from the .env file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Implement the Recipe Generator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, let's define the function to generate the recipe using the OpenAI API:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def generate_recipe(prompt):
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a professional chef."},
            {"role": "user", "content": prompt}
        ]
    )
    return completion.choices[0].message["content"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Create the Streamlit Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We will use Streamlit to create a simple web application for the recipe generator:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def main():
    st.title("Recipe Generator")

    prompt = st.text_area("Enter the ingredients and instructions:", height=150)
    if st.button("Generate Recipe"):
        recipe = generate_recipe(prompt)
        st.markdown(f"## Recipe\n\n{recipe}")


if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 7: Run the Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To run the application, open a command prompt, navigate to your project directory, and run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;streamlit run recipe_generator.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The application will start, and you can access it in your web browser at &lt;a href="http://localhost:8501"&gt;http://localhost:8501&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A1GPtIVa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qee43ugg2uej8gigk0r3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A1GPtIVa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qee43ugg2uej8gigk0r3.png" alt="Image description" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x0uUbdqL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndwmb3bda73ed35glg2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x0uUbdqL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndwmb3bda73ed35glg2b.png" alt="Image description" width="800" height="688"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;


&lt;p&gt;Congratulations! You have successfully built your own recipe generator using Streamlit, Python, and the OpenAI API. Now you can generate unique and exciting recipes by entering your ingredients and instructions. Happy cooking!&lt;/p&gt;

&lt;p&gt;Feel free to customize and add more features to your streamlit app! This is just a starting point to guide you in creating an amazing recipe generator.&lt;/p&gt;

&lt;p&gt;Follow me on Twitter or LinkedIn!&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/oiQqRX2v7_Y"&gt;Youtube Channel&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://yourtechie.substack.com/p/create-your-own-recipe-generator?sd=pf"&gt;Follow My Substack&lt;/a&gt;&lt;/p&gt;

</description>
      <category>openai</category>
      <category>streamlit</category>
      <category>python</category>
      <category>howto</category>
    </item>
    <item>
      <title>Installing and Using SuperAGI: The Perfect Tool For Autonomous AI Agents</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Wed, 07 Jun 2023 07:14:39 +0000</pubDate>
      <link>https://forem.com/digital-nomad/installing-and-using-superagi-the-perfect-tool-for-autonomous-ai-agents-3c22</link>
      <guid>https://forem.com/digital-nomad/installing-and-using-superagi-the-perfect-tool-for-autonomous-ai-agents-3c22</guid>
      <description>&lt;h2&gt;
  
  
  Do you remember AutoGPT?
&lt;/h2&gt;

&lt;p&gt;With AI constantly evolving, devs rely on reliable frameworks to build autonomous AI agents. SuperAGI, an open-source autonomous AI agent framework created by TransformerOptimus, is trending on GitHub. It lets devs build, manage, and run useful autonomous agents quickly and reliably. In today's blog we will explore the installation process and use of SuperAGI. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting started:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If Docker is not installed on your system, visit the Docker website (docker.com) and install Docker Desktop.&lt;/p&gt;

&lt;p&gt;Once installed make sure Docker Desktop is running. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pinecone Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sign up at Pinecone and retrieve an API key from your account dashboard. (No need to upgrade! You can do this on the free account :) )&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation Steps &lt;br&gt;
Now let's begin the install!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Download the Repo&lt;/p&gt;

&lt;p&gt;Open a terminal or command prompt in your IDE and run the following: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/TransformerOptimus/SuperAGI.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Navigate to the Directory&lt;/p&gt;

&lt;p&gt;Use the &lt;code&gt;cd&lt;/code&gt; command to navigate into the SuperAGI directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd Desktop
cd SuperAGI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;In VSCODE or your IDE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open the project folder, find the 'config_template.yaml' file, and rename it to 'config.yaml'. &lt;/p&gt;

&lt;p&gt;Enter your API keys: gather the following API keys from their respective sources.&lt;/p&gt;

&lt;p&gt;OpenAI API Key&lt;/p&gt;

&lt;p&gt;Google API Key&lt;/p&gt;

&lt;p&gt;Custom Search Engine ID&lt;/p&gt;

&lt;p&gt;Pinecone API Key&lt;/p&gt;

&lt;p&gt;You will copy and paste the keys without any quotes, spaces, or extra characters in the corresponding fields of the 'config.yaml' file. &lt;/p&gt;
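&lt;p&gt;Put together, the filled-in 'config.yaml' looks roughly like the sketch below. The exact key names here are assumptions; keep whatever keys 'config_template.yaml' already defines and only replace the placeholder values:&lt;/p&gt;

```yaml
# Hypothetical key names -- mirror the ones in config_template.yaml
OPENAI_API_KEY: your-openai-api-key
GOOGLE_API_KEY: your-google-api-key
SEARCH_ENGINE_ID: your-custom-search-engine-id
PINECONE_API_KEY: your-pinecone-api-key
```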

&lt;p&gt;&lt;strong&gt;Running SuperAGI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the setup above is complete, it's time to run SuperAGI and experience its features:&lt;/p&gt;

&lt;p&gt;Start the Docker Containers: &lt;/p&gt;

&lt;p&gt;Make sure Docker Desktop is running. &lt;/p&gt;

&lt;p&gt;In the SuperAGI directory or in your VSCODE editor open terminal and run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up --build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Access SuperAGI&lt;/p&gt;

&lt;p&gt;Open your web browser and visit 'localhost:3000' to access the SuperAGI UI!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;SuperAGI is an impressive open-source framework that empowers developers to build, manage, and run autonomous AI agents quickly and reliably. By following the installation steps outlined in this blog, you can set up SuperAGI on your system and explore its diverse features. Have fun experimenting with this powerful tool and contribute to its community!&lt;/p&gt;

&lt;p&gt;Prepare for your next job application with — &lt;a href="http://coverletterbuilder.up.railway.app/"&gt;Cover Letter Generator&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aiapplicationsblog.com/installing-and-using-superagi-the-perfect-tool-for-autonomous-ai-agents/"&gt;AIAPPLICATIONSBLOG.COM &lt;/a&gt;&lt;br&gt;
References:&lt;/p&gt;

&lt;p&gt;OpenAI API Key&lt;/p&gt;

&lt;p&gt;Pinecone Vector DB&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/TransformerOptimus/SuperAGI"&gt;SuperAGI Github Repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Google Programmable Search &lt;/p&gt;

&lt;p&gt;Google Console - API &amp;amp; Services&lt;/p&gt;

</description>
      <category>openai</category>
      <category>tutorial</category>
      <category>github</category>
      <category>autogpt</category>
    </item>
    <item>
      <title>How to install AutoGPT</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Wed, 07 Jun 2023 06:28:24 +0000</pubDate>
      <link>https://forem.com/digital-nomad/how-to-install-autogpt-i5k</link>
      <guid>https://forem.com/digital-nomad/how-to-install-autogpt-i5k</guid>
<description>&lt;p&gt;&lt;strong&gt;First, let's talk about what Auto-GPT is!&lt;/strong&gt;&lt;br&gt;
It is an open-source application that showcases the capabilities of GPT-4. Driven by GPT-4, it can chain together LLM "thoughts" to autonomously achieve any goal you set.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;WHY IS IT COOL?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It pushes the boundaries of what is possible with AI as one of the best examples of GPT-4 running fully autonomously. Some of its cool features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Internet access for searches and information gathering&lt;/li&gt;
&lt;li&gt;Long-term and short-term memory management&lt;/li&gt;
&lt;li&gt;GPT-4 instances for text generation&lt;/li&gt;
&lt;li&gt;Access to popular websites and platforms&lt;/li&gt;
&lt;li&gt;File storage and summarization with GPT-3.5&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What are the Requirements?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.10 or later&lt;/li&gt;
&lt;li&gt;OpenAI API Key&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;GPT-3.5 ONLY Mode (please see the GitHub repo for any help!)&lt;br&gt;
If you don't have access to the GPT-4 API, this mode will allow you to use Auto-GPT:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m autogpt --speak --gpt3only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt;&lt;br&gt;
Backend -&amp;gt; Pinecone | Milvus | Redis&lt;br&gt;
ElevenLabs Key (to have the AI speak)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-Installation&lt;/strong&gt;&lt;br&gt;
Create a folder in your home directory or on your Desktop, then create another folder inside it called 'autogpt' (or whatever name you like!).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Activate a virtual environment&lt;/strong&gt;&lt;br&gt;
To create a virtual environment, follow the steps below:&lt;/p&gt;

&lt;p&gt;To navigate to your root directory, use the 'cd' command followed by the path to your root directory. For example, if your root directory is '/home/user/' on Linux or macOS, use the command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/user/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Once you've navigated to your root directory, create a new Python virtual environment using the &lt;code&gt;venv&lt;/code&gt; module (or any other tool of your choice): &lt;code&gt;python3 -m venv myenv&lt;/code&gt;. This will create a new virtual environment called 'myenv'.&lt;/li&gt;
&lt;li&gt;To activate your new virtual environment, run the activate script in the virtual environment's 'bin' directory.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;On macOS or Linux:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;source myenv/bin/activate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will activate the virtual environment, and you can start working on your new project!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To start installing Auto-GPT, follow these steps:&lt;/strong&gt;&lt;br&gt;
Install all requirements listed above, if not already installed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To execute the following commands, open a CMD, Bash, or terminal window.&lt;/strong&gt;&lt;br&gt;
Clone the repository: for this step, you need Git installed. Alternatively, you can download the latest stable release (Source code (zip), at the bottom of the releases page).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/Significant-Gravitas/Auto-GPT.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Navigate to the directory where the repository was downloaded:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd Auto-GPT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install the required dependencies:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Configure Auto-GPT&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Locate the file named .env.template in the main /Auto-GPT folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a copy of this file, called .env by removing the template extension. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The easiest way is to do this in a command prompt/terminal window: &lt;code&gt;cp .env.template .env&lt;/code&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open the .env file in a text editor. &lt;br&gt;
&lt;strong&gt;Note: Files starting with a dot might be hidden by your Operating System.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Find the line that says OPENAI_API_KEY=.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After the "=", enter your unique OpenAI API Key (without any quotes or spaces).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Enter any other API keys or Tokens for services you would like to utilize.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save and close the .env file.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By completing these steps, you have properly configured the API Keys for your project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command Line Arguments&lt;/strong&gt;&lt;br&gt;
Here are some common arguments you can use when running Auto-GPT. Replace anything in (&amp;lt;&amp;gt;) with the value you want to specify.&lt;/p&gt;

&lt;p&gt;To view all available command line arguments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m autogpt --help
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run Auto-GPT with a different AI settings file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m autogpt --ai-settings &amp;lt;filename&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Specify a memory backend:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m autogpt --use-memory &amp;lt;memory-backend&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Speech Mode: use this to enable TTS (Text-to-Speech) for Auto-GPT.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m autogpt --speak
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;&lt;strong&gt;List of IDs with names from ElevenLabs; you can use the name or the ID:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can use two of these IDs in your .env file as a voice if you choose to use the SPEAK option.&lt;br&gt;
Rachel : 21m00Tcm4TlvDq8ikWAM&lt;br&gt;
Domi : AZnzlk1XvdvUeBnXmlld&lt;br&gt;
Bella : EXAVITQu4vr4xnSDxMaL&lt;br&gt;
Antoni : ErXwobaYiN019PkySvjV&lt;br&gt;
Elli : MF3mGyEYCl7XYWbV9V6O&lt;br&gt;
Josh : TxGEqnHWrfWFTfGW9XjX&lt;br&gt;
Arnold : VR6AewLTigWG4xSOukaG&lt;br&gt;
Adam : pNInz6obpgDQGcFmaJgB&lt;br&gt;
Sam : yoZ06aMxZJJ28mfd3POQ&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenAI API Keys Configuration&lt;/strong&gt;&lt;br&gt;
Obtain your OpenAI API key from: &lt;a href="https://platform.openai.com/account/api-keys"&gt;https://platform.openai.com/account/api-keys&lt;/a&gt;.&lt;br&gt;
To use an OpenAI API key for Auto-GPT, you NEED to have billing set up (i.e., a paid account).&lt;br&gt;
You can set up a paid account at &lt;a href="https://platform.openai.com/account/billing/overview"&gt;https://platform.openai.com/account/billing/overview&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting Your Cache Type&lt;/strong&gt;&lt;br&gt;
By default, Auto-GPT uses LocalCache instead of Redis or Pinecone.&lt;br&gt;
To switch, change the MEMORY_BACKEND env variable to the value you want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;local (default) uses a local JSON cache file&lt;/li&gt;
&lt;li&gt;pinecone uses the Pinecone.io account you configured in your ENV settings&lt;/li&gt;
&lt;li&gt;redis will use the Redis cache that you configured&lt;/li&gt;
&lt;li&gt;milvus will use the Milvus cache that you configured&lt;/li&gt;
&lt;li&gt;weaviate will use the Weaviate cache that you configured&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;🌲 &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pinecone API Key Setup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Pinecone enables the storage of vast amounts of vector-based memory, allowing only relevant memories to be loaded for the agent at any given time.&lt;br&gt;
Go to Pinecone and make an account if you don't already have one.&lt;br&gt;
Choose the Starter plan to avoid being charged.&lt;br&gt;
Find your API key and region under the default project in the left sidebar.&lt;/p&gt;

&lt;p&gt;In the .env file set:&lt;br&gt;
PINECONE_API_KEY&lt;br&gt;
PINECONE_ENV (example: "us-east4-gcp")&lt;br&gt;
MEMORY_BACKEND=pinecone&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alternatively, you can set them from the command line (advanced):&lt;/strong&gt;&lt;br&gt;
MacOS or Linux:&lt;br&gt;
export PINECONE_API_KEY=""&lt;br&gt;
export PINECONE_ENV="" # e.g: "us-east4-gcp"&lt;br&gt;
export MEMORY_BACKEND="pinecone"&lt;/p&gt;




&lt;p&gt;💀&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Continuous Mode&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt; ⚠️&lt;br&gt;
This runs the AI without user authorization, 100% automated. Continuous mode is NOT recommended: it is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorize. Use at your own risk.&lt;br&gt;
Run the autogpt Python module in your terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m autogpt --speak --continuous
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;To exit the program, press Ctrl + C.&lt;/strong&gt;&lt;br&gt;
It is recommended to use a virtual machine for tasks that require high security measures, to prevent any potential harm to the main computer's system and data.&lt;/p&gt;

&lt;h2&gt;
  
  
  LET'S CONNECT!!
&lt;/h2&gt;

&lt;p&gt;Have a great week everyone &amp;amp; Happy Hump Day!&lt;br&gt;
Olu Aganjuomo&lt;br&gt;
Follow me on Twitter or LinkedIn!&lt;/p&gt;

</description>
      <category>autogpt</category>
      <category>gpt3</category>
      <category>openai</category>
      <category>github</category>
    </item>
    <item>
      <title>Google's MakerSuite: Your Generative AI Workshop!</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Thu, 01 Jun 2023 08:13:17 +0000</pubDate>
      <link>https://forem.com/digital-nomad/googles-makersuite-your-generative-ai-workshop-1ldc</link>
      <guid>https://forem.com/digital-nomad/googles-makersuite-your-generative-ai-workshop-1ldc</guid>
      <description>&lt;h2&gt;
  
  
  Are you ready to bring your imagination to life?
&lt;/h2&gt;

&lt;p&gt;If you love creating things and have a passion for exploring new technologies, then you're in for a treat! I'm excited to introduce you to MakerSuite, a brand new tool developed by Google AI. With MakerSuite, you can test out and prototype with Large Language Models (LLMs) right from your web browser. It's like having your very own generative AI workshop where you can turn your ideas into reality!&lt;/p&gt;

&lt;p&gt;Imagine you have a special tool called MakerSuite that you can use on your computer. It's like a magical workshop where you can create all sorts of things using your imagination and some special language models.&lt;/p&gt;

&lt;p&gt;MakerSuite has three different ways for you to give it instructions, depending on what you want to create.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Text prompts&lt;/strong&gt;: This is like writing a story or a sentence to tell MakerSuite what you want. You can be creative and express yourself freely. You can also give examples to help MakerSuite understand better. For example, you can ask MakerSuite to help you create code for a game you want to make. You can even write the same instructions in different ways to see what happens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data prompts&lt;/strong&gt;: This is a more organized way to give instructions to MakerSuite. It's like filling out a table with information. You can create a structured prompt that tells MakerSuite what input to expect and what output you want. This is useful for creating things that follow a specific pattern or structure. You can use this to make your own prompts or modify examples that are already available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chat prompts&lt;/strong&gt;: This is like talking to a friend or a virtual assistant. You can have a conversation with MakerSuite and ask it questions or give it instructions step by step. This is great for building interactive experiences like chatbots.&lt;/p&gt;

&lt;p&gt;Once you have given MakerSuite the instructions and it creates something you like, you can export it as Python code. This means you can use the code to bring your creation to life and make it work on your computer.&lt;/p&gt;
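&lt;p&gt;As a hedged sketch of what that exported Python boils down to: the helper below just assembles a chat request, and the commented lines show the PaLM API call that MakerSuite-generated code relies on. The library name and call signature are assumptions based on the Generative AI quickstart, so verify them against the current docs:&lt;/p&gt;

```python
# Assemble the pieces a MakerSuite chat prompt exports: a context plus
# a list of messages.
def build_chat_request(context, messages):
    return {"context": context, "messages": list(messages)}

request = build_chat_request(
    context="You are a friendly data analyst.",
    messages=["What does a data engineer do?"],
)

# Hypothetical call (requires `pip install google-generativeai` and an
# API key, so it is left commented out; check the official quickstart):
#
#   import google.generativeai as palm
#   palm.configure(api_key="YOUR_API_KEY")
#   response = palm.chat(**request)
#   print(response.last)

print(request["messages"][0])  # prints the first user message
```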

&lt;h2&gt;
  
  
  Here is an example I was able to create using Chat Prompt:
&lt;/h2&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZoBBjoWU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8o28ucpjfhio2cgein70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZoBBjoWU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8o28ucpjfhio2cgein70.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K8w7dM34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nixi3s75kewiincbxefi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K8w7dM34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nixi3s75kewiincbxefi.png" alt="Image description" width="800" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, whether you want to write a story, create a game, or have a chat with a Data Analyst, Scientist, or Engineer, MakerSuite is here to help you bring your ideas to life. Have fun exploring and experimenting with all the possibilities!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LETS CONNECT!!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow me on Twitter or &lt;a href="https://www.linkedin.com/in/olu-a/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Blog: &lt;a href="https://aiapplicationsblog.com/"&gt;aiapplicationsblog.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prepare for your next job application with — Cover Letter Generator!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developers.googleblog.com/2023/03/announcing-palm-api-and-makersuite.html"&gt;Google Developer Blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developers.generativeai.google/tutorials/makersuite_quickstart"&gt;Google Generative AI Tutorial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developers.generativeai.google/guide"&gt;Generative AI Guide&lt;/a&gt;&lt;/p&gt;

</description>
      <category>google</category>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>smol-developer: Adapt or Get Left Behind</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Mon, 29 May 2023 07:47:02 +0000</pubDate>
      <link>https://forem.com/digital-nomad/smol-developer-adapt-or-get-left-behind-5b9f</link>
      <guid>https://forem.com/digital-nomad/smol-developer-adapt-or-get-left-behind-5b9f</guid>
<description>&lt;p&gt;Smol-Developer is an innovative solution that combines cutting-edge technology and seamless collaboration to revolutionize the way we code. Say goodbye to physical limitations and hello to a world of endless possibilities. In this blog, we delve into the features and functionality of Smol-Developer, exploring how this virtual coding companion can transform your programming experience. Stay tuned as we uncover the advantages of having your own remote coding genius at your fingertips. Let's dive into the future of coding with Smol-Developer!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Smol Developer?&lt;/strong&gt;&lt;br&gt;
Smol Developer, also known as your junior developer, is an AI-powered tool that helps you scaffold an entire codebase based on a product specification. Unlike rigid starter templates like create-react-app or create-nextjs-app, Smol Developer is like a "create-anything-app" that adapts to your specific needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The key features of Smol Developer include:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Human-Centric Approach&lt;/strong&gt;: Smol Developer is designed to be helpful, harmless, and honest. It complements your coding process by providing assistance without overshadowing your skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coherent Whole Program Synthesis&lt;/strong&gt;: With Smol Developer, you can generate an entire codebase by writing a basic prompt. The generated code is then read and run by you, allowing you to make additions, identify errors, and provide feedback for improvement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customizable and Understandable&lt;/strong&gt;: The Smol Developer codebase is kept simple and compact, with less than 200 lines of Python and prompts. This makes it easy for anyone, including college students and beginners, to comprehend and customize according to their requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does Smol Developer work?&lt;/strong&gt;&lt;br&gt;
To understand how Smol Developer functions, let's walk through a basic workflow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Write a Prompt&lt;/strong&gt;: Start by writing a simple prompt that describes the app you want to build. For example, you can specify that you want to create a Chrome extension that performs certain actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Generation&lt;/strong&gt;: Using the main.py file, Smol Developer generates the code based on your prompt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manual Review and Iteration&lt;/strong&gt;: You can review the generated code, run it, and manually identify any errors. If you encounter any underspecified parts or issues, you can add more details to the prompt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging Assistance&lt;/strong&gt;: Smol Developer offers a debugger.py file that reads the entire codebase and provides specific suggestions for fixing errors. This makes the debugging process more efficient and enjoyable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iterative Loop&lt;/strong&gt;: Repeat the process of refining the prompt, generating code, reviewing, and debugging until you're satisfied with the results. Smol Developer helps you throughout this loop, ensuring you're in control of the codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not No Code, Not Low Code, but Something in Between&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Smol Developer represents a new paradigm in software development. It's not a no-code or low-code solution but rather a "third thing" that combines technical expertise with automated code generation. You still need to have technical knowledge, but Smol Developer assists you in scaffolding the codebase, saving you time and effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Examples and Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Smol-Developer provides various examples and resources to get you started:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short Intro Video Demo&lt;/strong&gt;: introducing smol-dev, human-centric &amp;amp; coherent whole program synthesis. &lt;br&gt;
Watch a 6-minute video demo that showcases the process of going from a prompt to building a full-fledged Chrome extension using Smol Developer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Major Forks and Alternatives&lt;/strong&gt;: Smol Developer has been implemented in different languages and stacks. Explore the alternative implementations and deploy strategies, such as the JS/TS variant, the C#/Dotnet implementation, and the Golang version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smol Plugin GitHub Repo&lt;/strong&gt;: Discover smol-plugin, which allows you to generate OpenAI plugins by specifying your API in markdown. This is amazing!&lt;/p&gt;
&lt;h2&gt;
  
  
  Installation and Getting Started :)
&lt;/h2&gt;

&lt;p&gt;Prep:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the GitHub repo in another window&lt;/li&gt;
&lt;li&gt;Visual Studio Code or another IDE&lt;/li&gt;
&lt;li&gt;OpenAI API key&lt;/li&gt;
&lt;li&gt;Anthropic API key (optional)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Let's Code
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Open Visual Studio Code&lt;/li&gt;
&lt;li&gt;Open a terminal in VS Code&lt;/li&gt;
&lt;li&gt;cd into your desktop or project folder&lt;/li&gt;
&lt;li&gt;Navigate to the project's GitHub repo and find the git clone command&lt;/li&gt;
&lt;li&gt;git clone &lt;a href="https://github.com/smol-ai/developer"&gt;https://github.com/smol-ai/developer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;From the terminal inside VS Code, cd into the project directory 'developer'&lt;/li&gt;
&lt;li&gt;Select Explorer [the widget on the top left-hand side of VS Code that looks like two sheets of paper]&lt;/li&gt;
&lt;li&gt;Select the folder you're working in and open the developer folder&lt;/li&gt;
&lt;li&gt;Find the 'example.env' file and rename it to '.env'&lt;/li&gt;
&lt;li&gt;Go to the OpenAI API dashboard and create a key, or use an existing key&lt;/li&gt;
&lt;li&gt;In the .env file, replace the placeholder text with your API key&lt;/li&gt;
&lt;li&gt;Install Modal [a serverless execution engine, similar to Vercel or AWS Lambda]&lt;/li&gt;
&lt;li&gt;If you don't want to use Modal, you can:&lt;/li&gt;
&lt;li&gt;Run pip install -r requirements.txt&lt;/li&gt;
&lt;li&gt;Run your command using python main_no_modal.py YOUR_PROMPT_HERE&lt;/li&gt;
&lt;li&gt;Otherwise, in the terminal run:&lt;/li&gt;
&lt;li&gt;modal run main.py --prompt "Your Prompt"&lt;/li&gt;
&lt;li&gt;You can also extract your prompt to a file, as long as the filename ends in a .md extension:&lt;/li&gt;
&lt;li&gt;modal run main.py --prompt prompt.md&lt;/li&gt;
&lt;/ul&gt;
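&lt;p&gt;A minimal sketch of the file steps above as one shell session. The example.env created here is a stand-in for the template that ships with the repo, and the key value is a placeholder you would swap for your real OpenAI key:&lt;/p&gt;

```shell
# Stand-in for the repo's example.env template (the real file already
# exists after cloning; this just makes the sketch self-contained)
printf 'OPENAI_API_KEY=your-key-here\n' > example.env

# Rename the template to .env so the tool can read your key from it
mv example.env .env

# Replace the placeholder with your key (shown as sk-REPLACE-ME here;
# -i.bak works on both GNU and BSD sed)
sed -i.bak 's/your-key-here/sk-REPLACE-ME/' .env
cat .env

# Then generate code, either through Modal:
#   modal run main.py --prompt "Your Prompt"
# or without it:
#   pip install -r requirements.txt
#   python main_no_modal.py "Your Prompt"
```

&lt;p&gt;The run commands are left as comments because they need a Modal account or the installed requirements; the .env preparation runs anywhere.&lt;/p&gt;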

&lt;p&gt;Each time you run this, the generated directory (folder) is deleted (except for images) and all files are written from scratch. There is a helper file that ensures coherence between the generated files. &lt;/p&gt;

&lt;p&gt;Note: If you want to tweak the prompt but only want it to affect one file, and keep the rest of the files, specify the file param:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;modal run main.py --prompt prompt.md --file popup.js

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What the creator is looking for in the future
&lt;/h2&gt;

&lt;p&gt;These are things to try that would be accepted as open issues, discussions, and PRs:&lt;/p&gt;

&lt;p&gt;Specify &lt;code&gt;.md&lt;/code&gt; files for each generated file, with further prompts that could fine-tune the output in each of them:&lt;/p&gt;

&lt;p&gt;Basically like 'popup.html.md' and 'content_scripts.js.md' and so on.&lt;/p&gt;

&lt;p&gt;Bootstrap the prompt.md for existing codebases: write a script to read in a codebase and write a descriptive, bullet-pointed prompt that generates it. This is done by smol pm, but can be improved. &lt;/p&gt;

&lt;p&gt;Ability to install its own dependencies. This leaks into depending on the execution environment (how to avoid Docker? A web container?). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-heal&lt;/strong&gt;: run the code itself and use errors as information for reprompting.&lt;/p&gt;

&lt;p&gt;Make agents that autonomously run this code in a loop/watch the prompt file and regenerate code each time, on a new git branch.&lt;/p&gt;

&lt;p&gt;The code could be generated on 5 simultaneous git branches and checking their output would just involve switching git branches. &lt;/p&gt;


&lt;h2&gt;
  
  
  Thank you @swyxio for creating the GitHub repo!
&lt;/h2&gt;


</description>
      <category>openai</category>
      <category>smoldeveloper</category>
      <category>python</category>
      <category>ai</category>
    </item>
    <item>
      <title>Microsoft introduces Fabric: The Data Analytics for the Era of AI</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Wed, 24 May 2023 02:52:06 +0000</pubDate>
      <link>https://forem.com/digital-nomad/microsoft-introduces-fabric-the-data-analytics-for-the-era-of-ai-392n</link>
      <guid>https://forem.com/digital-nomad/microsoft-introduces-fabric-the-data-analytics-for-the-era-of-ai-392n</guid>
      <description>&lt;p&gt;Are you curious about analytics and AI? Well, get ready to dive into Microsoft Fabric! Today we'll introduce you to an awesome new platform that makes data analytics super easy to understand and use. Let's go!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uajirDFV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6dygpbjo0rtqhaagqtih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uajirDFV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6dygpbjo0rtqhaagqtih.png" alt="Image description" width="800" height="716"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Regardless of where you are in the world, we're surrounded by tons of data. From our smartphones to the apps we use, data is everywhere. Microsoft Fabric is an all-in-one analytics platform that brings together different tools and technologies to help organizations make the most of their data. I imagine it's like having a BI superpower that unlocks your data. Fabric is an end-to-end analytics platform that seamlessly integrates Azure Data Factory, Azure Synapse Analytics, and Power BI into a unified product.&lt;/p&gt;

&lt;p&gt;Here's what makes Microsoft Fabric special:&lt;/p&gt;

&lt;p&gt;Complete Analytics Platform: Fabric offers a single product with a unified experience and architecture, eliminating the need to integrate multiple products from different vendors. As a Software-as-a-Service (SaaS) solution, it automatically integrates and optimizes all the necessary capabilities, enabling users to derive real business value within minutes. &lt;/p&gt;

&lt;p&gt;Lake-Centric and Open: Fabric simplifies the management and operation of data lakes through its built-in multi-cloud data lake, OneLake. This intuitive data hub reduces data duplication and provides centralized governance and compliance. Moreover, Fabric's commitment to open data formats ensures flexibility and compatibility across analytics offerings.&lt;/p&gt;

&lt;p&gt;Powered by AI: Microsoft Fabric leverages Azure OpenAI Service to infuse AI capabilities at every layer. With Copilot, users can employ conversational language to create dataflows, generate code, build machine learning models, and visualize results. Copilot respects organizations' security, compliance, and privacy policies, ensuring data protection.&lt;/p&gt;

&lt;p&gt;Empowering Business Users: Fabric deeply integrates with Microsoft 365 applications, making it easier for everyone in the organization to make data-driven decisions. With Power BI as a core component, relevant data from OneLake is easily accessible within applications like Excel, Microsoft Teams, PowerPoint, and SharePoint.&lt;/p&gt;

&lt;p&gt;Saves Money: Microsoft Fabric is designed to be cost-effective. It uses computing power efficiently, so you don't waste resources. This means you get all the cool features without breaking the bank.&lt;/p&gt;


&lt;p&gt;Microsoft Fabric is already making waves among leading organizations across industries. Ferguson, a prominent distributor of plumbing supplies, has significantly reduced delivery time by consolidating their analytics stack with Fabric. T-Mobile, a major wireless communications provider, is excited about eliminating data silos and unlocking new insights. Aon, a global professional services provider, looks forward to simplifying their analytics stack and adding more value to their business.&lt;br&gt;
For existing Microsoft analytics solutions, such as Azure Synapse Analytics, Azure Data Factory, and Azure Data Explorer, Fabric represents an evolution in the form of a simplified SaaS solution. Customers can upgrade to Fabric at their own pace while benefiting from the robustness of the existing PaaS offerings. Embark on your data analytics journey with Microsoft Fabric today!&lt;/p&gt;

&lt;p&gt;P.S - After July 1, 2023, Fabric will be enabled for all Power BI tenants!&lt;/p&gt;

&lt;p&gt;LET'S CONNECT!!&lt;/p&gt;

&lt;p&gt;Follow me at: Twitter or Linkedin!&lt;/p&gt;

&lt;p&gt;Prepare for your next job application with — Cover Letter Generator!&lt;/p&gt;


</description>
    </item>
    <item>
      <title>How to Install PrivateGPT - Local Document Question Answering with Privacy</title>
      <dc:creator>Olu </dc:creator>
      <pubDate>Mon, 22 May 2023 03:12:21 +0000</pubDate>
      <link>https://forem.com/digital-nomad/how-to-install-privategpt-local-document-question-answering-with-privacy-5fka</link>
      <guid>https://forem.com/digital-nomad/how-to-install-privategpt-local-document-question-answering-with-privacy-5fka</guid>
      <description>&lt;p&gt;How to Install PrivateGPT - Local Document Question Answering with Privacy&lt;/p&gt;

&lt;p&gt;There's something new in the AI space. In this post, we'll walk you through the process of installing and setting up PrivateGPT. &lt;/p&gt;

&lt;p&gt;What is it?&lt;br&gt;
A powerful tool that allows you to query documents locally without the need for an internet connection. Whether you're a researcher, dev, or just curious about exploring document querying tools, PrivateGPT provides an efficient and secure solution. This tutorial accompanies a YouTube video, where you can find a step-by-step demonstration of the installation process!&lt;/p&gt;

&lt;p&gt;Prerequisites: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.10 or later installed on your system or in a virtual env&lt;/li&gt;
&lt;li&gt;Basic knowledge of using the command-line interface (CLI/terminal)&lt;/li&gt;
&lt;li&gt;Git installed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can create a folder on your desktop. In the screenshot below you can see I created a folder called 'blog_projects'. Open the command line from that folder, or navigate to that folder using the terminal/command line. Follow the steps below to create a virtual environment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bb9as5Dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a8hx0h5bp3w6x4q3wv8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bb9as5Dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a8hx0h5bp3w6x4q3wv8b.png" alt="Image description" width="194" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First, let's create a virtual environment.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Create a virtual environment:&lt;br&gt;
Open your terminal and navigate to the desired directory.&lt;br&gt;
Run the following command to create a virtual environment (replace myenv with your preferred name):&lt;br&gt;
        python3 -m venv myenv&lt;br&gt;
The name of your virtual environment will be 'myenv'&lt;br&gt;
Activate the virtual environment:&lt;br&gt;
On macOS and Linux, use the following command:&lt;br&gt;
        source myenv/bin/activate&lt;br&gt;
On Windows, use the following command:&lt;br&gt;
        myenv\Scripts\activate&lt;br&gt;
Run the git clone command to clone the repository:&lt;br&gt;
        git clone &lt;a href="https://github.com/imartinez/privateGPT.git"&gt;https://github.com/imartinez/privateGPT.git&lt;/a&gt;&lt;br&gt;
By creating and activating the virtual environment before cloning the repository, we ensure that the project dependencies will be installed and managed within this environment. This helps maintain a clean and isolated development environment specific to this project.&lt;/p&gt;
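&lt;p&gt;Collected into one runnable snippet, the environment steps above look like this ('myenv' is just an example name; the clone line is left as a comment so the sketch works offline):&lt;/p&gt;

```shell
# Create an isolated environment for the project
python3 -m venv myenv

# Activate it (on Windows: myenv\Scripts\activate)
. myenv/bin/activate

# Confirm the active interpreter now comes from the venv
python -c 'import sys; print(sys.prefix != sys.base_prefix)'

# With the venv active, clone the repo so dependencies stay isolated:
#   git clone https://github.com/imartinez/privateGPT.git
```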

&lt;p&gt;After cloning the repository, you can proceed to install the project dependencies and start working on the project within the activated virtual environment.&lt;/p&gt;

&lt;p&gt;Then copy the repo URL from GitHub and go into the directory or folder where you want your project to live. Open the terminal, or navigate to your folder from the command line. &lt;/p&gt;

&lt;p&gt;Once everything loads, you can run the install requirements command to install the needed dependencies. &lt;/p&gt;

&lt;p&gt;Navigate to the directory where you cloned PrivateGPT:&lt;br&gt;
        cd privateGPT&lt;br&gt;
Run the following command to install the required dependencies:&lt;br&gt;
        pip install -r requirements.txt&lt;br&gt;
Next, download the LLM model and place it in a directory of your choice. The default model is 'ggml-gpt4all-j-v1.3-groovy.bin', but if you prefer a different GPT4All-J compatible model, you can download it and reference it in your .env file.&lt;/p&gt;

&lt;p&gt;Rename the 'example.env' file to '.env' and edit the variables appropriately.&lt;/p&gt;

&lt;p&gt;Set the 'MODEL_TYPE' variable to either 'LlamaCpp' or 'GPT4All,' depending on the model you're using.&lt;br&gt;
Set the 'PERSIST_DIRECTORY' variable to the folder where you want your vector store to be stored.&lt;br&gt;
Set the 'MODEL_PATH' variable to the path of your GPT4All or LlamaCpp supported LLM model.&lt;br&gt;
Set the 'MODEL_N_CTX' variable to the maximum token limit for the LLM model.&lt;br&gt;
Set the 'EMBEDDINGS_MODEL_NAME' variable to the SentenceTransformers embeddings model name (refer to &lt;a href="https://www.sbert.net/docs/pretrained_models.html"&gt;https://www.sbert.net/docs/pretrained_models.html&lt;/a&gt;).&lt;/p&gt;
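&lt;p&gt;Put together, a hypothetical .env might look like the fragment below. Every value is an example only; adjust the paths, token limit, and model names to whatever you actually downloaded:&lt;/p&gt;

```shell
MODEL_TYPE=GPT4All
PERSIST_DIRECTORY=db
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
MODEL_N_CTX=1000
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
```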

&lt;p&gt;Make sure you create a models folder in your project to place the model you downloaded. &lt;/p&gt;

&lt;p&gt;PrivateGPT comes with a sample dataset that uses a 'state of the union transcript' as an example. However, you can also ingest your own dataset. Let me show you how.&lt;br&gt;
Put all your files into the 'source_documents' directory.&lt;br&gt;
Make sure your files have one of the supported extensions: CSV, Word Document (docx, doc), EverNote (enex), Email (eml), EPub (epub), HTML File (html), Markdown (md), Outlook Message (msg), Open Document Text (odt), Portable Document Format (PDF), PowerPoint Document (pptx, ppt), Text file (txt).&lt;br&gt;
Run the following command to ingest all the data:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    python ingest.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Perfect! The data ingestion process is complete. Now, let's move on to the next step!&lt;/p&gt;

&lt;p&gt;Note: if you hit the error "cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'", use this command: python -m pip install requests "urllib3&amp;lt;2"&lt;/p&gt;

&lt;h2&gt;
  
  
  Key thing to mention: if you add new documents to your source_documents folder, you need to rerun 'python ingest.py'
&lt;/h2&gt;

&lt;p&gt;Asking Questions to Your Documents: Now comes the exciting part, asking questions to your documents using PrivateGPT. Let me show you how it's done. &lt;/p&gt;

&lt;p&gt;Open your terminal or command prompt.&lt;br&gt;
Navigate to the directory where you installed PrivateGPT.&lt;br&gt;
[ project directory 'privateGPT'; if you type ls in your CLI you will see the README file, among a few other files ]&lt;br&gt;
Run the following command:&lt;br&gt;
        python privateGPT.py&lt;br&gt;
Wait for the script to prompt you for input.&lt;br&gt;
When prompted, enter your question!&lt;/p&gt;

&lt;p&gt;Tricks and tips:&lt;br&gt;
Use python privateGPT.py -s [ to remove the sources from your output, so instead of displaying the answer and the source it will only display the answer ] &lt;br&gt;
On line 33, at the end of the command where you see 'verbose=false,' add 'n_threads=16', which will use more CPU threads to generate text at a faster rate! &lt;/p&gt;

&lt;p&gt;Pros &amp;amp; Cons &lt;/p&gt;

&lt;p&gt;Pros: great for anyone who wants to understand complex documents on their local computer, and for private data you don't want to leak externally. Particularly useful for students, people new to an industry, anyone learning about taxes, or anyone learning something complicated they need help understanding.&lt;br&gt;
Cons: the wait time can be 30-50 seconds or maybe even longer, because you're running it on your local computer.&lt;/p&gt;

&lt;p&gt;END OF BLOG - how to install PrivateGPT &amp;amp; query documents locally and privately&lt;/p&gt;

&lt;p&gt;LET'S CONNECT!!&lt;/p&gt;

&lt;p&gt;Follow me on: Twitter, &lt;a href="https://www.linkedin.com/in/olu-a/"&gt;Linkedin,&lt;/a&gt; Medium and AIapplicationsblog.com&lt;/p&gt;

&lt;p&gt;Prepare for your next job application with — &lt;a href="http://coverletterbuilder.up.railway.app/"&gt;Cover Letter Generator!&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>chatgpt</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
