<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: I-dodo</title>
    <description>The latest articles on Forem by I-dodo (@justjs).</description>
    <link>https://forem.com/justjs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F629313%2Fc54aa6ea-a88d-4f85-beca-0b1f62b5c1ae.png</url>
      <title>Forem: I-dodo</title>
      <link>https://forem.com/justjs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/justjs"/>
    <language>en</language>
    <item>
      <title>Now it's EASY to do function calling with DeepSeek R1</title>
      <dc:creator>I-dodo</dc:creator>
      <pubDate>Sat, 22 Feb 2025 00:06:45 +0000</pubDate>
      <link>https://forem.com/justjs/now-its-easy-to-do-function-calling-with-deepseek-r1-2n2k</link>
      <guid>https://forem.com/justjs/now-its-easy-to-do-function-calling-with-deepseek-r1-2n2k</guid>
      <description>&lt;h2&gt;
  
  
  Function Calling with DeepSeek R1
&lt;/h2&gt;

&lt;p&gt;🚀 Excited to share that &lt;a href="https://node-llama-cpp.withcat.ai/" rel="noopener noreferrer"&gt;&lt;code&gt;node-llama-cpp&lt;/code&gt;&lt;/a&gt; now includes &lt;strong&gt;special optimizations&lt;/strong&gt; for DeepSeek R1 models, improving function calling performance and stability. Let's dive into the details and see how you can leverage this powerful feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic Example: Function Calling with DeepSeek R1
&lt;/h2&gt;

&lt;p&gt;Here's a quick example demonstrating function calling in action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;getLlama&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LlamaChatSession&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;defineChatSessionFunction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resolveModelFile&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;node-llama-cpp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;


&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;modelPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;resolveModelFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hf:mradermacher/DeepSeek-R1-Distill-Qwen-7B-GGUF:Q4_K_M&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;llama&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getLlama&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llama&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loadModel&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="nx"&gt;modelPath&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createContext&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LlamaChatSession&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;contextSequence&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getSequence&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;functions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;getTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;defineChatSessionFunction&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Get the current time&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toLocaleTimeString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;en-US&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;q1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;What's the time?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;User: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;q1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;a1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;q1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;functions&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AI: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;a1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
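&lt;p&gt;The &lt;code&gt;getTime&lt;/code&gt; function above takes no parameters. As a hypothetical sketch of what a parameterized function's handler might compute, here is a standalone version that runs without downloading a model; in &lt;code&gt;node-llama-cpp&lt;/code&gt; you would wrap it with &lt;code&gt;defineChatSessionFunction&lt;/code&gt; and a &lt;code&gt;params&lt;/code&gt; schema (check the docs for the exact schema shape):&lt;/p&gt;

```typescript
// Hypothetical standalone handler: what a parameterized chat function might
// compute. Wrapping it for node-llama-cpp with defineChatSessionFunction
// and a params schema is left out here so the sketch runs on its own.
function getDayOfWeekHandler({date}: {date: string}): string {
    // Pin the time zone so the result is deterministic
    return new Date(date + "T00:00:00Z").toLocaleDateString("en-US", {
        weekday: "long",
        timeZone: "UTC"
    });
}

console.log(getDayOfWeekHandler({date: "2025-02-22"})); // "Saturday"
```

&lt;p&gt;The model would fill in the &lt;code&gt;date&lt;/code&gt; argument itself when it decides to call the function.&lt;/p&gt;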



&lt;h2&gt;
  
  
  Recommended Models for Function Calling
&lt;/h2&gt;

&lt;p&gt;Looking for the best models to try out? Here are some top picks:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;th&gt;URI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GGUF" rel="noopener noreferrer"&gt;DeepSeek R1 Distill Qwen 7B&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;4.68GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;hf:mradermacher/DeepSeek-R1-Distill-Qwen-7B-GGUF:Q4_K_M&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-14B-GGUF" rel="noopener noreferrer"&gt;DeepSeek R1 Distill Qwen 14B&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.99GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;hf:mradermacher/DeepSeek-R1-Distill-Qwen-14B-GGUF:Q4_K_M&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-32B-GGUF" rel="noopener noreferrer"&gt;DeepSeek R1 Distill Qwen 32B&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;19.9GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;hf:mradermacher/DeepSeek-R1-Distill-Qwen-32B-GGUF:Q4_K_M&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Pro Tip:&lt;/strong&gt; The 7B model works great for the first prompt, but tends to degrade in subsequent queries. For better performance across multiple prompts, consider using a larger model.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Usage Tip
&lt;/h3&gt;

&lt;p&gt;Before downloading, estimate your machine's compatibility with the model using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; node-llama-cpp inspect estimate &amp;lt;model URI&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Try It with the CLI
&lt;/h2&gt;

&lt;p&gt;You can also try function calling directly from the command line using the &lt;code&gt;chat&lt;/code&gt; command with the &lt;code&gt;--ef&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; node-llama-cpp chat &lt;span class="nt"&gt;--ef&lt;/span&gt; &lt;span class="nt"&gt;--prompt&lt;/span&gt; &lt;span class="s2"&gt;"What is the time?"&lt;/span&gt; &amp;lt;model URI&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What do you think? Is this useful? What are you going to use it for?&lt;/p&gt;

&lt;p&gt;Let me know in the comments :)&lt;/p&gt;

</description>
      <category>deepseek</category>
      <category>node</category>
      <category>chatgpt</category>
      <category>ai</category>
    </item>
    <item>
      <title>Open source file sharing!</title>
      <dc:creator>I-dodo</dc:creator>
      <pubDate>Mon, 06 Nov 2023 18:42:13 +0000</pubDate>
      <link>https://forem.com/justjs/open-source-file-sharing-dac</link>
      <guid>https://forem.com/justjs/open-source-file-sharing-dac</guid>
      <description>&lt;p&gt;If you want to share files fast, without any size limit - welcome!&lt;/p&gt;

&lt;p&gt;I wanted to share with you a cool project I was working on:&lt;/p&gt;

&lt;h2&gt;
  
  
  My-folder-online
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Limitless peer-to-peer file sharing&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Some of its cool features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Peer-to-peer sharing (for privacy) 🫣&lt;/li&gt;
&lt;li&gt;High speed (using WebRTC) ⚡️&lt;/li&gt;
&lt;li&gt;Directory sharing (share a whole directory) 📁&lt;/li&gt;
&lt;li&gt;PWA (installable webapp) 📱&lt;/li&gt;
&lt;li&gt;FileSystem API (no zip files needed for directory download) 🗂️&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://my-folder-online.vercel.app/"&gt;https://my-folder-online.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8CN_Yb_4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ihhc3ox0snpcb5jmbf2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8CN_Yb_4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ihhc3ox0snpcb5jmbf2o.png" alt="Image description" width="800" height="724"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jLHeeDIf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z26rsmtertumxneiwqkv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jLHeeDIf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z26rsmtertumxneiwqkv.png" alt="Image description" width="800" height="724"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leveraging WebRTC technology, MyFolderOnline facilitates lightning-fast file transfers, enabling hassle-free sharing of large files and directories. Say goodbye to tedious upload/download processes and enjoy a seamless sharing experience.&lt;/p&gt;
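&lt;p&gt;Under the hood, a transfer like this has to split each file into pieces, because WebRTC data channel messages have a practical size limit. A minimal sketch of that chunking step (a hypothetical illustration, not MyFolderOnline's actual code):&lt;/p&gt;

```typescript
// Hypothetical sketch: splitting a buffer into fixed-size chunks, the way a
// peer-to-peer transfer over a WebRTC data channel would need to.
function chunkBuffer(data: Uint8Array, chunkSize: number): Uint8Array[] {
    const count = Math.ceil(data.length / chunkSize);
    return Array.from({length: count}, (_, i) =>
        // subarray clamps past the end, so the last chunk may be shorter
        data.subarray(i * chunkSize, (i + 1) * chunkSize)
    );
}

const file = new Uint8Array(40_000); // pretend 40KB of file data
const chunks = chunkBuffer(file, 16_384); // 16KB, a commonly safe message size
console.log(chunks.length); // 3
```

&lt;p&gt;The receiving peer reassembles the chunks in order, which is what makes the "no upload server" model work.&lt;/p&gt;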

&lt;p&gt;&lt;br&gt;
If you think this project is cool, please star it on GitHub :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ido-pluto/my-folder-online"&gt;https://github.com/ido-pluto/my-folder-online&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What next?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Improve landing page&lt;/li&gt;
&lt;li&gt;Preview of files&lt;/li&gt;
&lt;li&gt;Support for mobile file sharing&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;What do you think you can do with this?&lt;br&gt;
Do you have any ideas for improvement?&lt;/p&gt;

</description>
      <category>webrtc</category>
      <category>react</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Use ChatGPT locally!</title>
      <dc:creator>I-dodo</dc:creator>
      <pubDate>Sat, 01 Apr 2023 17:55:05 +0000</pubDate>
      <link>https://forem.com/justjs/use-chatgpt-locally-29pp</link>
      <guid>https://forem.com/justjs/use-chatgpt-locally-29pp</guid>
      <description>&lt;p&gt;If you ever wanted a local ChatGPT-like chatbot that is not affected by ChatGPT's capacity problems, or you want to ask it something it is unwilling to answer, you should hear about this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fido-pluto%2Fcatai%2Fraw%2Fmain%2Fdemo%2Fchat.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fido-pluto%2Fcatai%2Fraw%2Fmain%2Fdemo%2Fchat.gif" alt="Chat Example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is an open-source project called &lt;a href="https://github.com/ido-pluto/catai" rel="noopener noreferrer"&gt;CatAI&lt;/a&gt; that uses Meta's open-source LLaMA model to create a local AI assistant with a cool chat UI.&lt;/p&gt;

&lt;p&gt;It can answer everything without a second thought.&lt;/p&gt;

&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;

&lt;p&gt;It is very simple. First, let's install the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; catai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we choose a model to download; for this example, let's install the 7B model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;catai &lt;span class="nb"&gt;install &lt;/span&gt;7B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then let's start chatting with a simple command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;catai serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Good luck!&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>openai</category>
      <category>opensource</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
