<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hrithik Roshan</title>
    <description>The latest articles on Forem by Hrithik Roshan (@hrithikroshanm).</description>
    <link>https://forem.com/hrithikroshanm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2507943%2Ff5dba685-35e6-4a80-a5d7-876199c4f80e.jpeg</url>
      <title>Forem: Hrithik Roshan</title>
      <link>https://forem.com/hrithikroshanm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hrithikroshanm"/>
    <language>en</language>
    <item>
      <title>Unlocking the Power of AI in the Palm of Your Hand with NVIDIA Jetson Nano</title>
      <dc:creator>Hrithik Roshan</dc:creator>
      <pubDate>Fri, 20 Dec 2024 09:49:20 +0000</pubDate>
      <link>https://forem.com/hrithikroshanm/unlocking-the-power-of-ai-in-the-palm-of-your-hand-with-nvidia-jetson-nano-17om</link>
      <guid>https://forem.com/hrithikroshanm/unlocking-the-power-of-ai-in-the-palm-of-your-hand-with-nvidia-jetson-nano-17om</guid>
      <description>&lt;p&gt;NVIDIA Jetson Nano is redefining what's possible with edge AI computing. Imagine packing the raw power of modern AI into something smaller than a credit card — that’s exactly what the Jetson Nano does! Whether you're building robotics, smart devices, or innovative edge computing applications, this powerhouse packs 472 GFLOPs of performance into a tiny, energy-efficient form factor.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkuedccm1ubtwqohxqtd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkuedccm1ubtwqohxqtd.jpeg" alt="Image description" width="550" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazingly Compact, Incredibly Powerful&lt;br&gt;
The Jetson Nano is small. At just 70 x 45mm, it’s smaller than most smartphones, yet it delivers an astonishing level of performance. Powered by a 128-core NVIDIA Maxwell GPU and a quad-core ARM Cortex-A57 CPU running at 1.43 GHz, it’s designed to run multiple AI workloads simultaneously, without breaking a sweat.&lt;/p&gt;

&lt;h2&gt;Impressive Performance in a Compact Form Factor&lt;/h2&gt;

&lt;p&gt;472 GFLOPs of AI performance for running complex algorithms and multiple neural networks.&lt;br&gt;
The ability to process data from multiple high-resolution sensors concurrently.&lt;/p&gt;

&lt;p&gt;Big Impact, Low Power&lt;br&gt;
Now, here’s the truly magical part: all that power is delivered while consuming just 5 to 10 watts. That’s right: the Jetson Nano’s low power draw makes it a perfect choice for AI applications that need to run in the field, far from power-hungry infrastructure.&lt;/p&gt;
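
&lt;p&gt;To put those numbers in perspective, here is a quick back-of-the-envelope efficiency estimate. It uses only the figures quoted above (472 GFLOPs of peak performance, 5 and 10 watt power budgets); real throughput per watt depends on your workload and the selected power mode.&lt;/p&gt;

```python
# Back-of-the-envelope efficiency estimate from the figures quoted above:
# 472 GFLOPs of peak AI performance at a 5 W or 10 W power budget.
PEAK_GFLOPS = 472
POWER_MODES_W = (5, 10)

for watts in POWER_MODES_W:
    gflops_per_watt = PEAK_GFLOPS / watts
    print(f"At {watts} W: {gflops_per_watt:.1f} GFLOPs per watt")
```

&lt;p&gt;Even at the higher 10 W budget, that works out to roughly 47 GFLOPs per watt, which is what makes battery- and solar-powered edge deployments practical.&lt;/p&gt;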

&lt;p&gt;Whether you’re creating an autonomous robot, an intelligent home assistant, or a video analytics system, you get all of this performance with minimal energy consumption. The small form factor combined with energy efficiency means you’re not just building cutting-edge technology; you’re doing it sustainably.&lt;/p&gt;

&lt;p&gt;Unleash the Future with AI at the Edge&lt;br&gt;
The Jetson Nano isn’t just about AI on a chip — it’s about AI in everything. It’s designed to empower developers to bring AI capabilities to embedded products, allowing you to add intelligent processing, computer vision, and real-time data analysis into devices that are always on the move. Think home robots, smart security systems, AI-driven IoT devices, and more. The possibilities are endless, and the Jetson Nano makes them possible today.&lt;/p&gt;

&lt;h2&gt;Form Factor, Function, and Flexibility&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Super small:&lt;/strong&gt; Measuring just 70 x 45mm, the Jetson Nano fits into almost any space, making it ideal for embedded systems.&lt;br&gt;
&lt;strong&gt;Big performance:&lt;/strong&gt; Despite its size, it’s capable of running complex AI algorithms, the kind of tasks that normally require much larger, power-hungry systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Incredible efficiency:&lt;/strong&gt; Running on 5-10 watts, the Jetson Nano keeps energy consumption low while delivering serious AI performance.&lt;br&gt;
&lt;strong&gt;Endless potential:&lt;/strong&gt; From robotics to video analytics, from smart homes to autonomous vehicles, this tiny AI supercomputer is ready to tackle whatever challenge you throw at it.&lt;/p&gt;

&lt;h2&gt;How Jetson Nano is Changing the Game&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AI-powered robotics:&lt;/strong&gt; Whether it’s automating warehouse operations or building the next-generation autonomous robot, the Jetson Nano brings AI to the heart of robotics.&lt;br&gt;
&lt;strong&gt;Edge AI for IoT:&lt;/strong&gt; Process data on the edge and make smart decisions locally, without relying on the cloud.&lt;br&gt;
&lt;strong&gt;Real-time video analytics:&lt;/strong&gt; Use the power of computer vision to process high-resolution video feeds on the edge for everything from security to smart city applications.&lt;/p&gt;

&lt;p&gt;The NVIDIA Jetson Nano is small in size but big on impact. If you’re ready to push the limits of edge AI and embedded computing, Jetson Nano is your launchpad. Power. Performance. Portability. All packed into one amazing form factor.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>neuralnetworks</category>
      <category>machinelearning</category>
      <category>nvidia</category>
    </item>
    <item>
      <title>Understanding Large Language Models (LLMs)</title>
      <dc:creator>Hrithik Roshan</dc:creator>
      <pubDate>Mon, 09 Dec 2024 09:48:53 +0000</pubDate>
      <link>https://forem.com/hrithikroshanm/understanding-large-language-models-llms-1kn3</link>
      <guid>https://forem.com/hrithikroshanm/understanding-large-language-models-llms-1kn3</guid>
<description>&lt;h2&gt;Understanding LLMs&lt;/h2&gt;

&lt;p&gt;We hear about AI everywhere, and about how rapidly it is being adopted in products like ChatGPT, Gemini, and Claude. Intrigued by what all of this is and how it works, I did some research online, learned some of the basics of these models, and am sharing those findings here. &lt;strong&gt;But how do they manage to understand language so well? That’s where the next concepts come into play: parameters, architecture, and training.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;What Exactly are LLMs?&lt;/h2&gt;

&lt;p&gt;LLM stands for &lt;strong&gt;Large Language Model&lt;/strong&gt;. LLMs are AI programs trained to understand and generate written text. They can answer questions, write essays, summarize articles, and compose fables or poems.&lt;/p&gt;

&lt;p&gt;They belong to a larger category called foundation models, which are trained on thousands of terabytes of data. LLM training data is mostly text from books, websites, and code, and it ranges anywhere from gigabytes (GB) to petabytes (PB) in size. For scale:&lt;br&gt;
1 petabyte: 1 million gigabytes&lt;br&gt;
1 gigabyte: approximately 178 million words&lt;br&gt;
OpenAI’s GPT-3, a well-known LLM, has 175 billion parameters and was trained on datasets spanning terabytes in size.&lt;/p&gt;
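
&lt;p&gt;Those unit conversions are easy to sanity-check. The sketch below simply multiplies out the figures quoted above (1 PB = 1 million GB, roughly 178 million words per GB) to estimate how many words fit in a petabyte of text.&lt;/p&gt;

```python
# Rough scale check using the conversion figures quoted above.
GB_PER_PB = 1_000_000        # 1 petabyte = 1 million gigabytes
WORDS_PER_GB = 178_000_000   # ~178 million words of plain text per gigabyte

words_per_pb = GB_PER_PB * WORDS_PER_GB
print(f"~{words_per_pb:.2e} words per petabyte")  # ~1.78e+14
```

&lt;p&gt;That is on the order of a hundred trillion words, which is why training corpora at this scale are described in terabytes and petabytes rather than in individual documents.&lt;/p&gt;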

&lt;p&gt;In short, LLMs learn the patterns and relationships found in this text so that they can generate human-like text of their own.&lt;/p&gt;

&lt;h2&gt;How Do LLMs Work?&lt;/h2&gt;

&lt;p&gt;Three key parts define an LLM:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data:&lt;/strong&gt; The model learns how to use language from enormous, text-rich datasets.&lt;br&gt;
&lt;strong&gt;Architecture:&lt;/strong&gt; LLMs are based on a special type of neural network called the Transformer, which is very good at processing sequences, like sentences, and understanding their context.&lt;br&gt;
&lt;strong&gt;Training:&lt;/strong&gt; During training, the model predicts the next word in a sentence; when its prediction is wrong, it adjusts its internal parameters to do better. Over many iterations, this helps the model produce meaningful, coherent text.&lt;br&gt;
&lt;strong&gt;Fine-tuning:&lt;/strong&gt; After general training, the model can be fine-tuned on domain-specific data for specialized tasks.&lt;/p&gt;
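
&lt;p&gt;The next-word objective is easier to see with a toy example. Real LLMs learn it with Transformer networks and billions of parameters; the sketch below is only a bigram frequency model over a made-up corpus, but it illustrates the same idea: learn from text which word tends to follow which, then predict the most likely next word.&lt;/p&gt;

```python
from collections import Counter, defaultdict

# Toy "next-word predictor": count which word follows which in a tiny
# made-up corpus. Real LLMs learn these statistics with billions of
# neural-network parameters instead of simple counts.
corpus = "the cat sat on the mat and the cat slept"
words = corpus.split()

follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (follows "the" twice, vs. "mat" once)
```

&lt;p&gt;A real model replaces these raw counts with learned probabilities conditioned on the whole preceding context, and that is exactly what training and fine-tuning adjust.&lt;/p&gt;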

&lt;h2&gt;Applications of LLMs&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Chatbots:&lt;/strong&gt; Used in customer service, tech support, and virtual assistants.&lt;br&gt;
&lt;strong&gt;Content Creation:&lt;/strong&gt; Generate social media posts, blogs, and marketing content.&lt;br&gt;
&lt;strong&gt;Software Development:&lt;/strong&gt; Help with code suggestions, explanations, and debugging.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>neuralnetworks</category>
      <category>llms</category>
      <category>openai</category>
    </item>
  </channel>
</rss>
