<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Karan Bhardwaj</title>
    <description>The latest articles on Forem by Karan Bhardwaj (@reckon762).</description>
    <link>https://forem.com/reckon762</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1115976%2F5e90113c-5744-4f0b-bc2c-3ebd0724e8aa.jpeg</url>
      <title>Forem: Karan Bhardwaj</title>
      <link>https://forem.com/reckon762</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/reckon762"/>
    <language>en</language>
    <item>
      <title>How to setup the Nvidia TAO Toolkit on Kaggle Notebook</title>
      <dc:creator>Karan Bhardwaj</dc:creator>
      <pubDate>Thu, 17 Oct 2024 04:30:37 +0000</pubDate>
      <link>https://forem.com/reckon762/how-to-setup-the-nvidia-tao-toolkit-on-kaggle-notebook-3h77</link>
      <guid>https://forem.com/reckon762/how-to-setup-the-nvidia-tao-toolkit-on-kaggle-notebook-3h77</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Action recognition plays a crucial role in enabling applications like video surveillance, sports analytics, and gesture recognition. Leveraging pre-trained models with NVIDIA’s TAO Toolkit makes it easier to train high-performance action recognition models efficiently.&lt;/p&gt;

&lt;p&gt;The TAO Toolkit can be set up using either Docker or the NGC CLI. Since we will be working in a Kaggle Notebook, we will use the NGC CLI, as the Kaggle Notebook environment does not support Docker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Kaggle Notebooks don't support Docker for security and resource-management reasons; instead, they provide pre-configured environments to simplify workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation Steps:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Install dependencies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, install &lt;em&gt;nvidia-pyindex&lt;/em&gt;, which registers NVIDIA's Python package index with pip and simplifies installing the TAO Toolkit and related components.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install nvidia-pyindex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Install the Nvidia TAO Toolkit and NGC-CLI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Nvidia TAO Toolkit contains a collection of pre-trained models for various tasks such as object detection, classification, segmentation, and action recognition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install nvidia-tao
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, install the NGC-CLI (NVIDIA GPU Cloud Command Line Interface), which interacts with NVIDIA's NGC catalog to manage pre-trained models.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!wget -O ngccli_linux.zip https://ngc.nvidia.com/downloads/ngccli_linux.zip
!unzip ngccli_linux.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Create an NGC account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Register for an account on the Nvidia NGC catalog to access the TAO toolkit models. Once registered, you can authenticate via the NGC CLI using your API key to download the desired models.&lt;/p&gt;

&lt;p&gt;First, go to &lt;a href="https://catalog.ngc.nvidia.com/"&gt;https://catalog.ngc.nvidia.com/&lt;/a&gt; and sign up for a free account from the right menu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fay5qcmjdqgeb7g8vmvj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fay5qcmjdqgeb7g8vmvj0.png" alt="NGC Catalog website" width="713" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once signed in, go to the &lt;strong&gt;Setup&lt;/strong&gt; section from the right drop-down menu and click on &lt;strong&gt;Generate Personal Key&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh53mzc48q0wxzzzfuun.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh53mzc48q0wxzzzfuun.png" alt="Generate API Key" width="698" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Configure the NGC CLI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Set up your environment to authenticate with NGC using the following commands. Keep your API key secure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!chmod u+x ngc-cli/ngc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

# Declaring the input arguments as environment variables. 
# This way we can directly pass the arguments during cell runtime to any command request in the Kaggle notebook.

os.environ['API_KEY'] = 'your_api_key'
os.environ['TYPE'] = 'ascii'
os.environ['ORG'] = '0514167173176982'
os.environ['TEAM'] = 'no-team'
os.environ['ACE'] = 'no-ace'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Passing the input arguments to the config command
!echo -e "$API_KEY\n$TYPE\n$ORG\n$TEAM\n$ACE" | ngc-cli/ngc config set
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see the output below, your setup is complete. Hurray!!🥳🥳&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp7xheb3a1nm4tk9np8s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp7xheb3a1nm4tk9np8s.png" alt="Configuration Success" width="800" height="52"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the NGC CLI is configured, you can list the available models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!ngc-cli/ngc registry model list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to download a specific model, you can do so by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!ngc-cli/ngc registry model download-version "nvidia/tao/actionrecognitionnet:deployable_onnx_v2.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, I have downloaded the ActionRecognitionNet model, which is provided in the .onnx format.&lt;/p&gt;
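
&lt;p&gt;To confirm the download landed where you expect, a quick sanity check with &lt;em&gt;os.walk&lt;/em&gt; can list the downloaded files. This is a minimal sketch; the directory name below is an assumption based on the model version string, so adjust it to whatever path the CLI reports.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

def list_downloaded_files(download_dir):
    # Walk the download directory and report each file with its size
    found = []
    for root, _, files in os.walk(download_dir):
        for name in files:
            path = os.path.join(root, name)
            found.append(path)
            print(path, os.path.getsize(path), "bytes")
    return found

# Hypothetical directory name; NGC creates one per model and version
list_downloaded_files("actionrecognitionnet_vdeployable_onnx_v2.0")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
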

&lt;p&gt;By following the steps above, you’ve set up the TAO Toolkit on Kaggle Notebook. Now you can start exploring the world of high-performance computer vision with ease. &lt;/p&gt;

&lt;p&gt;Happy Coding!🤗🤗&lt;/p&gt;

</description>
      <category>computervision</category>
      <category>nvidia</category>
      <category>kaggle</category>
      <category>python</category>
    </item>
    <item>
      <title>Passing Input Arguments in Kaggle Notebook Using Environment Variables</title>
      <dc:creator>Karan Bhardwaj</dc:creator>
      <pubDate>Sun, 13 Oct 2024 18:41:10 +0000</pubDate>
      <link>https://forem.com/reckon762/how-to-give-user-input-in-kaggle-notebook-1oc6</link>
      <guid>https://forem.com/reckon762/how-to-give-user-input-in-kaggle-notebook-1oc6</guid>
      <description>&lt;p&gt;Kaggle Notebook doesn't support interactive user input since it runs in a cloud environment where code cells are executed in sequence without waiting for user interaction.&lt;/p&gt;

&lt;p&gt;So, in cases where we have to pass input arguments, we can bring the environment variable to our rescue.&lt;/p&gt;

&lt;p&gt;Assuming the case that there is a command named &lt;em&gt;some_command&lt;/em&gt; when executed asks for input argument, let's say an API key. So the steps to pass the API key will be as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Declare an environment variable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We use the &lt;em&gt;os&lt;/em&gt; library to declare an environment variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

# Instantiate the API key as an environment variable
os.environ['API_KEY'] = "whatever_is_the_key"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Passing the environment variable as a user input&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here, we will use the &lt;em&gt;echo&lt;/em&gt; shell command to pass the API key as a user input argument to command &lt;em&gt;some_command&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# run the shell command
!echo $API_KEY | some_command
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happened above is that &lt;em&gt;echo&lt;/em&gt; $API_KEY printed the value of the variable (in this case, the API key "whatever_is_the_key"), and the pipe "|" passed that output to &lt;em&gt;some_command&lt;/em&gt; as its input.&lt;/p&gt;

&lt;p&gt;This way, you can pass input arguments to the commands you need to execute.&lt;/p&gt;
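
&lt;p&gt;The same pattern also works without the shell: Python's &lt;em&gt;subprocess&lt;/em&gt; module can feed the environment variable to a command's standard input directly. A minimal sketch, using &lt;em&gt;cat&lt;/em&gt; as a stand-in for &lt;em&gt;some_command&lt;/em&gt; since it simply echoes back whatever it reads:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import subprocess

os.environ['API_KEY'] = 'whatever_is_the_key'

# 'cat' stands in for some_command; it echoes back its standard input
result = subprocess.run(['cat'], input=os.environ['API_KEY'],
                        capture_output=True, text=True)
print(result.stdout)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
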

&lt;p&gt;If you have to pass multiple input arguments, you can modify the &lt;em&gt;echo&lt;/em&gt; command as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Assume you have environment variables as I, ME, and YOU
!echo "$I" "$ME" "$YOU" | some_command
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach is useful when automating tasks that require external inputs or when working with APIs in non-interactive environments like Kaggle.&lt;/p&gt;

&lt;p&gt;Happy Coding!🤗🤗&lt;/p&gt;

</description>
      <category>python</category>
      <category>kaggle</category>
    </item>
    <item>
      <title>How to use Detectron2 Instance Segmentation on Videos</title>
      <dc:creator>Karan Bhardwaj</dc:creator>
      <pubDate>Wed, 06 Mar 2024 18:47:48 +0000</pubDate>
      <link>https://forem.com/reckon762/streamlining-instance-segmentation-a-guide-to-utilizing-detectron2-on-google-colab-and-performing-inference-on-video-1bn</link>
      <guid>https://forem.com/reckon762/streamlining-instance-segmentation-a-guide-to-utilizing-detectron2-on-google-colab-and-performing-inference-on-video-1bn</guid>
      <description>&lt;p&gt;Instance segmentation, a challenging task in computer vision that involves detecting and delineating individual objects within an image or video, has seen significant advancements in recent years. One such advancement is Detectron2, a flexible and efficient framework developed by Facebook AI Research. In this guide, we'll explore how to leverage the power of Detectron2 within the Google Colab environment to perform instance segmentation on videos.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Check GPU availability
&lt;/h2&gt;

&lt;p&gt;First, make sure you are connected to a GPU by changing the runtime type from the &lt;strong&gt;Runtime&lt;/strong&gt; dropdown menu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovvy6rcqyh4x5kzen2ux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovvy6rcqyh4x5kzen2ux.png" alt="Change runtime type" width="426" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, check whether the GPU is accessible by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!nvidia-smi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see something like this, you are all set to go.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwf7cruno04zamv4eyj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwf7cruno04zamv4eyj3.png" alt="Check for GPU" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Install detectron2
&lt;/h2&gt;

&lt;p&gt;Run this single command to install detectron2 directly from its GitHub repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Import libraries
&lt;/h2&gt;

&lt;p&gt;Import the required libraries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# COMMON LIBRARIES
import os
import cv2

from google.colab.patches import cv2_imshow

# VISUALIZATION
from detectron2.utils.visualizer import Visualizer
from detectron2.utils.visualizer import ColorMode

# CONFIGURATION
from detectron2 import model_zoo
from detectron2.config import get_cfg

# EVALUATION
from detectron2.engine import DefaultPredictor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Initialize the predictor
&lt;/h2&gt;

&lt;p&gt;Choose a model as per your requirement from the model zoo. You can see the list of available models &lt;a href="https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cfg = get_cfg()
cfg.merge_from_file("detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.WEIGHTS = "detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl"
predictor = DefaultPredictor(cfg)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
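

&lt;p&gt;The &lt;em&gt;SCORE_THRESH_TEST&lt;/em&gt; value above simply discards detections whose confidence score falls below the threshold. Conceptually (with made-up scores):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical confidence scores for four detected instances
scores = [0.92, 0.61, 0.43, 0.12]

SCORE_THRESH_TEST = 0.5

# Keep only detections at or above the threshold
kept = [s for s in scores if s >= SCORE_THRESH_TEST]
print(kept)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Raising the threshold yields fewer, higher-confidence detections; lowering it keeps more instances at the cost of more false positives.&lt;/p&gt;
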



&lt;h2&gt;
  
  
  Step 5: Inference on Video
&lt;/h2&gt;

&lt;p&gt;Set the path to your video in the following code and run it. The output will be a video with the segmentation overlaid.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import imageio
import cv2

from detectron2.data import MetadataCatalog
from detectron2.utils.visualizer import Visualizer

# Load video
video_path = "path_to_your_video.mp4"
cap = cv2.VideoCapture(video_path)

# Initialize video writer
fps = cap.get(cv2.CAP_PROP_FPS)
output_path = '/content/output.mp4'
writer = imageio.get_writer(output_path, fps=fps)

# Perform instance segmentation on each frame
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    outputs = predictor(frame)
    instances = outputs["instances"].to("cpu")

    # Find the classes (Optional)
    pred_classes = instances.pred_classes.numpy()

    # Find the segment points (Optional)
    pred_masks = instances.pred_masks.numpy()

    # Visualizer expects RGB, so reverse OpenCV's BGR channel order
    v = Visualizer(frame[:, :, ::-1], metadata=MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=0.8)
    # draw_instance_predictions returns an RGB image, which imageio writes directly
    writer.append_data(v.draw_instance_predictions(instances).get_image())

# Release video resources
cap.release()
writer.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
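

&lt;p&gt;One detail worth noting in the code above: OpenCV loads frames in BGR channel order, while Visualizer and imageio work in RGB, and the &lt;em&gt;[:, :, ::-1]&lt;/em&gt; slice converts between the two by reversing the channel axis. A standalone illustration on a single pixel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np

# One pure-blue pixel as OpenCV would store it (B, G, R)
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)

# Reversing the last axis yields the same pixel in (R, G, B) order
rgb = bgr[:, :, ::-1]
print(rgb.tolist())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
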



</description>
      <category>computervision</category>
      <category>segmentation</category>
      <category>detectron2</category>
      <category>python</category>
    </item>
    <item>
      <title>How to Install Detectron2 on Windows</title>
      <dc:creator>Karan Bhardwaj</dc:creator>
      <pubDate>Tue, 05 Mar 2024 19:32:51 +0000</pubDate>
      <link>https://forem.com/reckon762/how-to-install-detectron2-on-windows-3hil</link>
      <guid>https://forem.com/reckon762/how-to-install-detectron2-on-windows-3hil</guid>
      <description>&lt;p&gt;Detectron2 is a powerful open-source object detection and segmentation framework built by Facebook AI Research. It's widely used for research and development in computer vision applications. However, installing Detectron2 on Windows 11 can be a bit tricky due to various dependencies. In this guide, I will take you through the step-by-step process to set up Detectron2 on your Windows 11 system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Set up a Conda Environment
&lt;/h2&gt;

&lt;p&gt;First, let's create a new conda environment to isolate the installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;conda create --name detectron2_env python=3.11.7
conda activate detectron2_env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Install CUDA
&lt;/h2&gt;

&lt;p&gt;If you have an NVIDIA GPU, you'll need to install the CUDA toolkit. Ensure you have the correct version compatible with your GPU:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;conda install cuda=12.1 -c nvidia
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Install PyTorch
&lt;/h2&gt;

&lt;p&gt;Install PyTorch with the required CUDA version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For other versions of CUDA and the corresponding PyTorch version, you can refer to &lt;a href="https://pytorch.org/get-started/locally/"&gt;https://pytorch.org/get-started/locally/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are on CPU, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;conda install pytorch torchvision torchaudio cpuonly -c pytorch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Update Visual C++ Redistributable
&lt;/h2&gt;

&lt;p&gt;Ensure your system has the latest Visual C++ Redistributable installed. You can download and install it from the official Microsoft website.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Install Cython and COCO Tools
&lt;/h2&gt;

&lt;p&gt;Cython is a prerequisite for building Detectron2, while the COCO tools are required for evaluation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install cython
conda install conda-forge::pycocotools
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6: Clone Detectron2 Repository
&lt;/h2&gt;

&lt;p&gt;Clone the Detectron2 repository from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/facebookresearch/detectron2.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Install Detectron2
&lt;/h2&gt;



&lt;p&gt;Finally, install Detectron2 from the cloned repository in editable mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m pip install -e detectron2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! You've successfully installed Detectron2 on your Windows 11 system. You can now start using it for various computer vision tasks like object detection, instance segmentation, and more.&lt;/p&gt;
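
&lt;p&gt;As a quick sanity check, you can confirm from Python that the packages are visible to the import system. This sketch only inspects package metadata, so it is safe to run in any environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import importlib.util

# Report whether each package can be found by the import system
status = {pkg: importlib.util.find_spec(pkg) is not None
          for pkg in ("torch", "detectron2")}

for pkg, ok in status.items():
    print(pkg, "installed" if ok else "MISSING")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
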

&lt;p&gt;Remember to activate your conda environment whenever you want to work with Detectron2.&lt;/p&gt;

&lt;p&gt;Happy coding!🤗🤗&lt;/p&gt;

</description>
      <category>computervision</category>
      <category>segmentation</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
