<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Pulkit Singh</title>
    <description>The latest articles on Forem by Pulkit Singh (@pulkitsinghdev).</description>
    <link>https://forem.com/pulkitsinghdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F518213%2F68dac783-b0ca-45e9-b8c2-def99cdbf2ea.jpg</url>
      <title>Forem: Pulkit Singh</title>
      <link>https://forem.com/pulkitsinghdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/pulkitsinghdev"/>
    <language>en</language>
    <item>
      <title>Image classifier</title>
      <dc:creator>Pulkit Singh</dc:creator>
      <pubDate>Sat, 21 Nov 2020 06:55:53 +0000</pubDate>
      <link>https://forem.com/pulkitsinghdev/image-classifier-2h49</link>
      <guid>https://forem.com/pulkitsinghdev/image-classifier-2h49</guid>
      <description>&lt;p&gt;View my project here:-&lt;br&gt;
&lt;a href="https://colab.research.google.com/drive/1hv7tSi_HEh0bLHD8JmDMwo6g97b32a3d?usp=sharing"&gt;https://colab.research.google.com/drive/1hv7tSi_HEh0bLHD8JmDMwo6g97b32a3d?usp=sharing&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;What is Google Colab and Fastai?&lt;/h3&gt;
&lt;h4&gt;&lt;strong&gt;Google Colab&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Jupyter notebooks hosted by Google&lt;/li&gt;
&lt;li&gt;Executes Python code (and bash commands)&lt;/li&gt;
&lt;li&gt;Supports markdown (TEXT)&lt;/li&gt;
&lt;li&gt;Interactivity&lt;/li&gt;
&lt;li&gt;To run a cell press &lt;code&gt;shift+enter&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can watch &lt;a href="https://www.youtube.com/watch?v=HW29067qVWk"&gt;this video&lt;/a&gt; if this is the first time you are hearing about Jupyter notebooks.&lt;/p&gt;
&lt;h4&gt;&lt;strong&gt;Fastai&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It is a deep learning library written in Python.&lt;/li&gt;
&lt;li&gt;It provides high-level components that can quickly and easily provide &lt;em&gt;state-of-the-art&lt;/em&gt; results in standard deep learning domains.&lt;/li&gt;
&lt;li&gt;Key features: &lt;strong&gt;ease of use&lt;/strong&gt;, &lt;strong&gt;flexibility&lt;/strong&gt;, and &lt;strong&gt;performance&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Visit &lt;a href="https://docs.fast.ai/"&gt;fastai docs&lt;/a&gt; to learn more.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;strong&gt;Setup&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Use GPU&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Make sure you are using a &lt;strong&gt;GPU&lt;/strong&gt; runtime. Click on &lt;code&gt;Runtime&lt;/code&gt; -&amp;gt; &lt;code&gt;Change runtime type&lt;/code&gt;. Under hardware accelerator, select &lt;strong&gt;GPU&lt;/strong&gt; and click &lt;strong&gt;SAVE&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Update fastai&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fastai comes pre-installed in Google Colab, but it is an older version, so we will first update &lt;code&gt;fastai&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;!pip install fastai --upgrade --quiet
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;from fastai.vision.all import *
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Building a state-of-the-art image classifier&lt;/h1&gt;



&lt;p&gt;For building an image classifier, this is how the workflow looks:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_ktmD7XD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/bZzHeKr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_ktmD7XD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/bZzHeKr.png" alt="Workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's start by collecting data:&lt;/p&gt;
&lt;h3&gt;Collecting data&lt;/h3&gt;

&lt;p&gt;Let's start by creating a &lt;code&gt;data&lt;/code&gt; folder where we will keep all our data.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;path = Path('data')
path.mkdir(exist_ok=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To build an image classifier, we will need URLs of images. Generally, 80-150 images per class are enough to train a good model.&lt;/p&gt;

&lt;p&gt;There are 3 possible ways of collecting data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manually copy-pasting image URLs (not recommended).&lt;/li&gt;
&lt;li&gt;Use existing &lt;a href="https://github.com/Ankur-singh/image_scrapper/tree/master/datasets"&gt;datasets&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://share.streamlit.io/ankur-singh/image_scrapper"&gt;Image URL scaper&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre&gt;&lt;code&gt;!wget -q https://raw.githubusercontent.com/Ankur-singh/image_scrapper/master/datasets/bear.txt
!wget -q https://raw.githubusercontent.com/Ankur-singh/image_scrapper/master/datasets/dog.txt
!wget -q https://raw.githubusercontent.com/Ankur-singh/image_scrapper/master/datasets/horse.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once we have all the URLs, we can easily download them using the &lt;code&gt;download_images&lt;/code&gt; function.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;download_images('data/horse', Path('horse.txt'))
download_images('data/dog', Path('dog.txt'))
download_images('data/bear', Path('bear.txt'))
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;files = get_image_files(path)
len(files)
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;failed = verify_images(files)
failed

failed.map(Path.unlink)  # delete corrupted files

files = get_image_files(path)
len(files)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Jupyter notebooks make it so easy to gradually build what you want, and check your work every step of the way. I, personally, make a lot of mistakes, so this is really helpful to me...&lt;/p&gt;

&lt;p&gt;Jupyter notebooks are great for experimenting and immediately seeing the results of each function, but there is also a lot of functionality to help you figure out how to use different functions, or even directly look at their source code. For instance, if you type in a cell:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;verify_images??
get_image_files??
download_images??
download_url??
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This tells us what argument the function accepts (files), then shows us the source code and the file it comes from. Looking at that source code, we can see it applies the function &lt;code&gt;verify_image&lt;/code&gt; in parallel and only keeps the image files for which the result of that function is &lt;code&gt;False&lt;/code&gt;, which is consistent with the doc string: it finds the images in &lt;em&gt;files&lt;/em&gt; that can't be opened.&lt;/p&gt;
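&lt;p&gt;In spirit, the parallel filtering that &lt;code&gt;verify_images&lt;/code&gt; does can be sketched in plain Python. This is only an illustration: &lt;code&gt;can_open&lt;/code&gt; below is a made-up stand-in for the real PIL-based &lt;code&gt;verify_image&lt;/code&gt; check.&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def can_open(path):
    # Stand-in for fastai's verify_image: here, "corrupt" simply
    # means an empty file. The real check tries to open the image.
    return path.stat().st_size > 0

def find_failed(files):
    # Run the check on every file in parallel, and keep only the
    # files the check rejects, mirroring how verify_images keeps
    # the files for which verify_image returned False.
    with ThreadPoolExecutor() as ex:
        results = list(ex.map(can_open, files))
    return [f for f, ok in zip(files, results) if not ok]
```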
&lt;h3&gt;DataLoaders&lt;/h3&gt;

&lt;p&gt;In classical machine learning, most algorithms take the complete dataset as input while training. But in deep learning, you don't pass the complete dataset at once.&lt;/p&gt;

&lt;p&gt;Instead, you divide the data into smaller batches and pass those batches as input to the deep learning model. &lt;strong&gt;DataLoaders&lt;/strong&gt; allow us to train models on huge datasets: they load the data in parallel, in batches, while the model is training.&lt;/p&gt;
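&lt;p&gt;The idea of mini-batching can be sketched in a few lines of plain Python. This is just an illustration of the concept, not what a &lt;code&gt;DataLoader&lt;/code&gt; actually does internally:&lt;/p&gt;

```python
def make_batches(items, batch_size):
    # Yield successive fixed-size batches of the dataset;
    # the last batch may be smaller than batch_size.
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```

&lt;p&gt;A dataset of 10 items with a batch size of 4 would yield batches of 4, 4, and 2 items.&lt;/p&gt;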

&lt;p&gt;But we don't have to worry about any of this. The &lt;code&gt;DataBlock&lt;/code&gt; API will take care of everything for us.&lt;/p&gt;

&lt;p&gt;To turn our downloaded data into a DataLoaders object we need to tell fastai at least four things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What kinds of data we are working with&lt;/li&gt;
&lt;li&gt;How to get the list of items&lt;/li&gt;
&lt;li&gt;How to label these items&lt;/li&gt;
&lt;li&gt;How to create the validation set&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is how we can create a &lt;code&gt;DataLoaders&lt;/code&gt; for the dataset that we just downloaded:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;animals = DataBlock(
    blocks=(ImageBlock, CategoryBlock), # x,y
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's look at each of these arguments in turn. First we provide a tuple where we specify what types we want for the &lt;strong&gt;independent&lt;/strong&gt; and &lt;strong&gt;dependent&lt;/strong&gt; variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;blocks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ImageBlock&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CategoryBlock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The independent variable is the thing we are using to make predictions from, and the dependent variable is our target. In this case, our independent variables are &lt;strong&gt;images&lt;/strong&gt;, and our dependent variables are the &lt;strong&gt;categories&lt;/strong&gt; (type of animal) for each image. &lt;/p&gt;

&lt;p&gt;For this DataLoaders our underlying items will be file paths. We have to tell fastai how to get a list of those files. The &lt;code&gt;get_image_files&lt;/code&gt; function takes a path, and returns a list of all of the images in that path (recursively, by default):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;get_items&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;get_image_files&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we will randomly split our data into &lt;strong&gt;training&lt;/strong&gt; and &lt;strong&gt;validation&lt;/strong&gt; sets. However, we would like to have the same training/validation split each time we run this notebook, so we fix the &lt;em&gt;random seed&lt;/em&gt; (computers don't really know how to create random numbers at all, but simply create lists of numbers that look random; if you provide the same starting point for that list each time—called the seed—then you will get the exact same list each time):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;splitter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;RandomSplitter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;valid_pct&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;seed&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
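&lt;p&gt;To see why fixing the seed gives the same split every time, here is a minimal plain-Python sketch of a seeded 80/20 random split. &lt;code&gt;random_split&lt;/code&gt; is a made-up helper for illustration, not fastai's actual &lt;code&gt;RandomSplitter&lt;/code&gt;:&lt;/p&gt;

```python
import random

def random_split(items, valid_pct=0.2, seed=42):
    # Shuffle a copy of the items with a fixed seed so the result is
    # reproducible, then slice off the first valid_pct as validation.
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * valid_pct)
    return shuffled[cut:], shuffled[:cut]  # train, valid
```

&lt;p&gt;Running it twice with the same seed produces exactly the same training and validation sets.&lt;/p&gt;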



&lt;p&gt;The independent variable is often referred to as &lt;code&gt;x&lt;/code&gt; and the dependent variable is often referred to as &lt;code&gt;y&lt;/code&gt;. Here, we are telling fastai what function to call to create the labels in our dataset:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;get_y&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;parent_label&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;parent_label&lt;/code&gt; function simply gets the name of the folder a file is in. Because we put each of our images into a folder named after its class, this is going to give us the labels that we need.&lt;/p&gt;
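&lt;p&gt;Conceptually, this boils down to one line of &lt;code&gt;pathlib&lt;/code&gt;; this sketch is not fastai's actual implementation:&lt;/p&gt;

```python
from pathlib import Path

def parent_label_sketch(fname):
    # The label is simply the name of the file's immediate parent folder,
    # e.g. 'data/horse/img_001.jpg' gives 'horse'.
    return Path(fname).parent.name
```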

&lt;p&gt;Our images are all different sizes, and this is a problem for deep learning: we don't feed the model one image at a time but several of them (what we call a mini-batch). To group them in a big array (usually called a tensor) that is going to go through our model, they all need to be of the same size. So, we need to add a transform which will resize these images to the same size. Item transforms are pieces of code that run on each individual item, whether it be an image, category, or so forth. Here, we'll use &lt;code&gt;RandomResizedCrop&lt;/code&gt; with an image size of 224 px, which is fairly standard for image classification, and default &lt;code&gt;aug_transforms&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;item_tfms&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;RandomResizedCrop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;224&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;min_scale&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="n"&gt;batch_tfms&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;aug_transforms&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command has given us a DataBlock object. This is like a template for creating a DataLoaders. We still need to tell fastai the actual source of our data—in this case, the path where the images can be found:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;dls = animals.dataloaders(path)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A &lt;code&gt;DataLoaders&lt;/code&gt; object includes a training and a validation &lt;code&gt;DataLoader&lt;/code&gt;. A &lt;code&gt;DataLoader&lt;/code&gt; is a class that provides batches of a few items at a time to the GPU.&lt;/p&gt;

&lt;p&gt;When you loop through a DataLoader, by default you will get 64 items per batch, all stacked up into a single tensor. We can take a look at a few of those items by calling the &lt;code&gt;show_batch&lt;/code&gt; method on a DataLoader:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;dls.train.show_batch()
dls.train.show_batch(max_n=4, nrows=1)
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Learner&lt;/h3&gt;

&lt;p&gt;Fastai's &lt;code&gt;Learner&lt;/code&gt; class puts everything together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DataLoaders&lt;/li&gt;
&lt;li&gt;Model Architecture&lt;/li&gt;
&lt;li&gt;Loss function and metric&lt;/li&gt;
&lt;li&gt;Training loop, callbacks and much more. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then we can create a &lt;code&gt;Learner&lt;/code&gt;, which is a fastai object that combines the data and a model for training, and uses transfer learning to fine tune a pretrained model in just two lines of code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The first line downloaded a model called &lt;code&gt;ResNet18&lt;/code&gt;, pretrained on ImageNet, and adapted it to our specific problem. The second line then fine-tuned that model, and in a relatively short time we got a model with very high accuracy... amazing!&lt;/p&gt;

&lt;p&gt;If you want to make a prediction on a new image, you can use &lt;code&gt;learn.predict&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;learn.predict(files[10])
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The predict method returns three things: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the decoded prediction,&lt;/li&gt;
&lt;li&gt;the index of the predicted class, and&lt;/li&gt;
&lt;li&gt;the tensor of probabilities for all classes, in the order of their indexed labels.&lt;/li&gt;
&lt;/ol&gt;
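&lt;p&gt;A rough plain-Python sketch of how such a prediction is decoded from raw model scores; the helper names here are made up for illustration:&lt;/p&gt;

```python
import math

def softmax(logits):
    # Turn raw model scores into probabilities that sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode_prediction(logits, vocab):
    # Mirror the three values predict returns: the decoded class name,
    # the index of the predicted class, and the probabilities.
    probs = softmax(logits)
    idx = probs.index(max(probs))
    return vocab[idx], idx, probs
```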

&lt;p&gt;The &lt;code&gt;predict&lt;/code&gt; method accepts a filename, a PIL image, or a tensor. We can also look at multiple predictions at once with the &lt;code&gt;learn.show_results&lt;/code&gt; method:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;learn.show_results()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It might look like a lot of code, but it really isn't. Let's re-create everything we learned above.&lt;/p&gt;

&lt;h2&gt;Getting Data&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;!wget -q https://raw.githubusercontent.com/Ankur-singh/image_scrapper/master/datasets/bear.txt
!wget -q https://raw.githubusercontent.com/Ankur-singh/image_scrapper/master/datasets/dog.txt
!wget -q https://raw.githubusercontent.com/Ankur-singh/image_scrapper/master/datasets/horse.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;download_images('data/horse', Path('horse.txt'))
download_images('data/dog', Path('dog.txt'))
download_images('data/bear', Path('bear.txt'))
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;files = get_image_files(path)
failed = verify_images(files)
failed.map(Path.unlink)
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;DataLoaders&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;animals = DataBlock(
    blocks=(ImageBlock, CategoryBlock), # x,y
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;dls = animals.dataloaders(path)
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Learner&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;learn = cnn_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Making predictions&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;learn.predict(files[10])
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Inference&lt;/h2&gt;

&lt;p&gt;Now let's see what mistakes the model is making. To visualize them, we can create a confusion matrix:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The rows represent the actual classes. The columns represent the classes predicted by our model. Therefore, the diagonal of the matrix shows the images that were classified correctly, and the off-diagonal cells represent those that were classified incorrectly. The color coding makes it super easy to spot the mistakes. Our image classifier isn't making many!&lt;/p&gt;
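&lt;p&gt;The bookkeeping behind a confusion matrix is simple enough to sketch in plain Python; this is a toy version for intuition, not what fastai uses:&lt;/p&gt;

```python
from collections import Counter

def confusion_matrix(actual, predicted, classes):
    # counts[(a, p)] is how often an item of class a was predicted
    # as class p; diagonal entries (a == p) are correct predictions.
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in classes] for a in classes]
```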

&lt;p&gt;It's helpful to see where exactly our errors are occurring, to see whether they're due to a dataset problem (e.g., images that aren't bears at all, or are labeled incorrectly, etc.), or a model problem (perhaps it isn't handling images taken with unusual lighting, or from a different angle, etc.). To do this, we can sort our images by their loss.&lt;/p&gt;

&lt;p&gt;The loss is a number that is higher if the model is incorrect (especially if it's also confident of its incorrect answer), or if it's correct, but not confident of its correct answer. &lt;code&gt;plot_top_losses&lt;/code&gt; shows us the images with the highest loss in our dataset. As the title of the output says, each image is labeled with four things: prediction, actual (target label), loss, and probability. The probability here is the confidence level, from zero to one, that the model has assigned to its prediction:&lt;/p&gt;
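&lt;p&gt;For intuition, here is the loss for a single example in plain Python, assuming cross-entropy (fastai's default loss for classification). Note how a confident wrong answer costs much more than an unsure one:&lt;/p&gt;

```python
import math

def cross_entropy(probs, target_idx):
    # The loss is -log of the probability assigned to the true class:
    # near 0 when the model is confidently right, and large when it
    # is confidently wrong.
    return -math.log(probs[target_idx])
```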

&lt;pre&gt;&lt;code&gt;interp.plot_top_losses(5, nrows=1)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We can see some very different images.&lt;/p&gt;

&lt;p&gt;The intuitive approach to data cleaning is to do it before you train a model. But as you've seen here, a model can actually help you find data issues more quickly and easily. So it is normally preferable to train a quick and simple model first, and then use it to help with data cleaning.&lt;/p&gt;

&lt;h2&gt;Deploying the Model&lt;/h2&gt;

&lt;p&gt;First, we will export the model that we trained.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;learn.export('export.pkl')
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Installing &lt;code&gt;Streamlit&lt;/code&gt; and &lt;code&gt;colab-everything&lt;/code&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;!pip install streamlit --quiet
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Copying the app code from GitHub:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;!wget -q https://raw.githubusercontent.com/Ankur-singh/CrowdSource-Workshop/main/app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;!pip install colab-everything
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Finally, running the app:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from colab_everything import ColabStreamlit
ColabStreamlit('app.py')
&lt;/code&gt;&lt;/pre&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
