<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Anurag Verma</title>
    <description>The latest articles on Forem by Anurag Verma (@anurag629).</description>
    <link>https://forem.com/anurag629</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F945101%2F71f81479-1cf1-4baa-9589-38aab1fd581f.jpeg</url>
      <title>Forem: Anurag Verma</title>
      <link>https://forem.com/anurag629</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/anurag629"/>
    <language>en</language>
    <item>
      <title>Elon Musk's Neuralink Receives FDA Approval for Human Trials</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Sat, 17 Jun 2023 08:55:53 +0000</pubDate>
      <link>https://forem.com/anurag629/elon-musks-neuralink-receives-fda-approval-for-human-trials-54ll</link>
      <guid>https://forem.com/anurag629/elon-musks-neuralink-receives-fda-approval-for-human-trials-54ll</guid>
      <description>&lt;p&gt;Elon Musk's company Neuralink has received approval from the US Food and Drug Administration (FDA) to begin human trials of its brain-computer interface technology. This is a major milestone for the company, which has been working on developing this technology for several years.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Basics of Python - Variables and Data Types</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Mon, 29 May 2023 06:55:43 +0000</pubDate>
      <link>https://forem.com/anurag629/basics-of-python-variables-and-data-types-33nd</link>
      <guid>https://forem.com/anurag629/basics-of-python-variables-and-data-types-33nd</guid>
      <description>&lt;p&gt;Learn about the basics of Python. This article also covers the basics of Python Variables and Data Types.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Introduction to large language model</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Sun, 21 May 2023 11:11:53 +0000</pubDate>
      <link>https://forem.com/anurag629/introduction-to-large-language-model-2mme</link>
      <guid>https://forem.com/anurag629/introduction-to-large-language-model-2mme</guid>
      <description>&lt;p&gt;In this article we will learn about Large language models(LLMs). We will learn about what is LLM, how does LLM works, benefits of LLM, challenges of LLM, future of LLM, examples of LLM, etc.&lt;/p&gt;

&lt;p&gt;This post was originally published at &lt;a href="https://www.anurag629.club/posts/introduction-to-large-language-model/"&gt;this link&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Plant Diseases Detection using Deep Learning</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Sat, 20 May 2023 03:57:59 +0000</pubDate>
      <link>https://forem.com/anurag629/plant-diseases-detection-using-deep-learning-44o4</link>
      <guid>https://forem.com/anurag629/plant-diseases-detection-using-deep-learning-44o4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this post we will build a plant disease detection model using deep learning, trained on the &lt;a href="https://www.kaggle.com/datasets/vipoooool/new-plant-diseases-dataset"&gt;New Plant Diseases Dataset&lt;/a&gt; from Kaggle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dataset
&lt;/h2&gt;

&lt;p&gt;The dataset consists of about 87,000 images of healthy and diseased plant leaves across 38 classes. It is split in an 80/20 ratio into training and validation sets, preserving the directory structure, and a separate directory is provided for testing the model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Importing Libraries
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;tensorflow.keras&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;layers&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;matplotlib.pyplot&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting the Hyperparameters
&lt;/h2&gt;

&lt;p&gt;We will use the following hyperparameters for the model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;image_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;
&lt;span class="n"&gt;batch_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;
&lt;span class="n"&gt;channels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="n"&gt;epoches&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Loading the Dataset
&lt;/h2&gt;

&lt;p&gt;For loading the dataset we will be using the &lt;code&gt;image_dataset_from_directory&lt;/code&gt; function. This function takes the path of the dataset directory and returns a &lt;code&gt;tf.data.Dataset&lt;/code&gt; object. The &lt;code&gt;tf.data.Dataset&lt;/code&gt; object is a powerful tool for building input pipelines for TensorFlow models. It allows us to easily load data from disk, apply transformations, and feed the data into our model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;dataset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;preprocessing&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;image_dataset_from_directory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;'../input/new-plant-diseases-dataset/'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;shuffle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;image_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image_size&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;batch_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;batch_size&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Storing and printing the class names&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;class_names&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;class_names&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;class_names&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Visualizing the Images
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;figure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;figsize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;image_batch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;label_batch&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;take&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;ax&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;subplot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;imshow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_batch&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;numpy&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"uint8"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;plt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;class_names&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;label_batch&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code we are taking the first batch of images and labels from the dataset and then plotting the images with their corresponding labels.&lt;/p&gt;

&lt;h2&gt;
  
  
  Splitting the Dataset
&lt;/h2&gt;

&lt;p&gt;We will split the dataset into training, validation, and testing sets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_dataset_partitions_tf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;train_split&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;val_split&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_split&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shuffle&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shuffle_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;ds_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;shuffle_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;seed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;train_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_split&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;ds_size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;val_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;val_split&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;ds_size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;train_ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;take&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;val_ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;skip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_size&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;take&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;val_size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;test_ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;skip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_size&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;skip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;val_size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;train_ds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;val_ds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_ds&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function above takes the dataset and the split ratios as input and returns the training, validation, and testing sets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;train_ds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;val_ds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_dataset_partitions_tf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Getting the length of the training, validation and testing set&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Len train_set = "&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;train_ds&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Len val_set = "&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;val_ds&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Len test_set = "&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_ds&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
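

&lt;p&gt;As a quick sanity check (hypothetical numbers: roughly 87,000 images at batch size 32; exact counts depend on your copy of the dataset), the expected number of batches in each split can be computed by hand:&lt;/p&gt;

```python
import math

# Hypothetical batch count: ~87,000 images at batch size 32.
ds_size = math.ceil(87000 / 32)              # 2719 batches in total
train_size = int(0.8 * ds_size)              # 80% for training
val_size = int(0.1 * ds_size)                # 10% for validation
test_size = ds_size - train_size - val_size  # the remaining batches for testing
print(train_size, val_size, test_size)
```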



&lt;h2&gt;
  
  
  Caching, Prefetching and Batching the Dataset
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;train_ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;train_ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;prefetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffer_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AUTOTUNE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;val_ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;val_ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;prefetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffer_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AUTOTUNE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;test_ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;test_ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;shuffle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;prefetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buffer_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AUTOTUNE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we are caching, shuffling and prefetching the dataset. Caching the dataset will store the images in memory after they are loaded off disk during the first epoch. This will ensure the dataset does not become a bottleneck while training the model. Shuffling the dataset will ensure that the model does not see the same order of examples during each epoch. Prefetching the dataset will ensure that the data is immediately available for the next iteration of training.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resizing and Rescaling the Images
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;resize_and_rescale&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;experimental&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;preprocessing&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Resizing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image_size&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;experimental&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;preprocessing&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Rescaling&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we are resizing the images to the specified size and rescaling the images to the range of 0 to 1.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Augmentation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;data_augumentation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;experimental&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;preprocessing&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RandomFlip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"horizontal_and_vertical"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;experimental&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;preprocessing&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RandomRotation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above we are performing data augmentation on the images. Data augmentation is a technique to artificially create new training data from existing training data. This helps to avoid overfitting and helps the model generalize better.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Model
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;input_shape&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channels&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;num_classes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;38&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="n"&gt;resize_and_rescale&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;data_augumentation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'relu'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;input_shape&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;input_shape&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; 
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kernel_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'relu'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; 
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;kernel_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'relu'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; 
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'relu'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; 
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'relu'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; 
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'relu'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt; 
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Flatten&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'relu'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_classes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'softmax'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_shape&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are using a sequential model with the following layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resize and Rescale Layer&lt;/strong&gt; - This layer resizes the images to the specified size and rescales pixel values to the range 0 to 1.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Augmentation Layer&lt;/strong&gt; - This layer performs data augmentation on the images. Data augmentation is a technique to artificially create new training data from existing training data. This helps to avoid overfitting and helps the model generalize better.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Convolutional Layer&lt;/strong&gt; - This layer performs convolution on the input image. Convolution is a mathematical operation that takes two inputs: an image matrix and a filter (also called a kernel). The filter slides over the input image, and the output is a feature map.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Max Pooling Layer&lt;/strong&gt; - This layer performs max pooling on the input image. Max pooling is a technique to reduce the dimensionality of the input image. It is done by taking the maximum value from the portion of the image covered by the kernel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flatten Layer&lt;/strong&gt; - This layer flattens the input image into a single dimension.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dense Layer&lt;/strong&gt; - This layer computes output = activation(dot(input, kernel) + bias). The final dense layer, with a softmax activation, produces the class probabilities used for classification.&lt;/li&gt;
&lt;/ul&gt;
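&lt;p&gt;To make the max pooling step concrete, here is a minimal pure-Python sketch of 2x2 max pooling with stride 2 (independent of Keras; the toy feature map and function name are just for illustration):&lt;/p&gt;

```python
# Toy 4x4 feature map (e.g. the output of a convolutional layer)
feature_map = [
    [1, 3, 2, 4],
    [5, 6, 1, 2],
    [7, 2, 9, 0],
    [3, 4, 1, 8],
]

def max_pool_2x2(matrix):
    """2x2 max pooling with stride 2: keep the largest value in each window."""
    pooled = []
    for i in range(0, len(matrix), 2):
        row = []
        for j in range(0, len(matrix[0]), 2):
            window = [matrix[i][j], matrix[i][j + 1],
                      matrix[i + 1][j], matrix[i + 1][j + 1]]
            row.append(max(window))
        pooled.append(row)
    return pooled

print(max_pool_2x2(feature_map))  # [[6, 4], [7, 9]]
```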

&lt;h2&gt;
  
  
  Compiling the Model
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;'adam'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;keras&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;losses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SparseCategoricalCrossentropy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;from_logits&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'accuracy'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are compiling the model with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Optimizer&lt;/strong&gt; - This is the optimizer that will be used to update the weights of the model. We are using the Adam optimizer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Loss&lt;/strong&gt; - This is the loss function that will be used to calculate the loss of the model. We are using the Sparse Categorical Crossentropy loss function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics&lt;/strong&gt; - This is the metric that will be used to evaluate the performance of the model. We are using the accuracy metric.&lt;/li&gt;
&lt;/ul&gt;
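&lt;p&gt;To see what this loss actually computes, here is a hand-worked sketch of sparse categorical crossentropy on one toy prediction. With from_logits=False the softmax output of the model is treated as probabilities directly, so the loss is simply the negative log of the probability assigned to the true class (the numbers below are made up):&lt;/p&gt;

```python
import math

# Hypothetical softmax output for one image over 3 classes
probs = [0.7, 0.2, 0.1]
true_class = 0  # sparse labels are plain integer class indices

# Sparse categorical crossentropy: -log(probability of the true class)
loss = -math.log(probs[true_class])
print(round(loss, 4))  # 0.3567
```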

&lt;h2&gt;
  
  
  Training the Model
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;train_ds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;epochs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;epoches&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;batch_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;validation_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;val_ds&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are training the model with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Training Dataset&lt;/strong&gt; - This is the training dataset that will be used to train the model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Epochs&lt;/strong&gt; - The number of complete passes the model makes over the training dataset.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Size&lt;/strong&gt; - The number of samples processed before the model's weights are updated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verbose&lt;/strong&gt; - Controls how much progress output is printed during training; 1 shows a progress bar for each epoch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validation Dataset&lt;/strong&gt; - This is the validation dataset that will be used to validate the model.&lt;/li&gt;
&lt;/ul&gt;
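&lt;p&gt;As a quick sanity check on these parameters, the number of gradient-update steps per epoch is the dataset size divided by the batch size, rounded up. A short sketch with hypothetical numbers (the dataset size, batch size, and epoch count here are assumptions):&lt;/p&gt;

```python
import math

num_images = 2152   # hypothetical number of training images
batch_size = 32
epochs = 50

steps_per_epoch = math.ceil(num_images / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 68 3400
```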

&lt;h2&gt;
  
  
  Evaluating the Model
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_ds&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are evaluating the model on the testing dataset. The model achieves an accuracy of 0.96, meaning it correctly classifies 96% of the images in the testing dataset, which is a good result. We could improve it further with a deeper architecture, a larger dataset, different optimizer or loss settings, tuning the batch size and number of epochs, or transfer learning from a pretrained network.&lt;/p&gt;
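&lt;p&gt;Accuracy here is simply the fraction of test images whose predicted class matches the true class, as this small pure-Python sketch with made-up labels shows:&lt;/p&gt;

```python
# Hypothetical true and predicted class labels for 5 test images
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 0, 1]

correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
accuracy = correct / len(y_true)
print(accuracy)  # 0.8
```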

&lt;h2&gt;
  
  
  Saving the Model
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"model.h5"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are saving the model in the h5 (HDF5) format, a single file that stores both the architecture and the weights of the model. The model can be loaded back from the h5 file using the &lt;code&gt;load_model()&lt;/code&gt; function.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog, we learned how to use deep learning to detect diseases in plant leaves. Using the Keras API, we built a sequential model from convolutional, max pooling, flatten, and dense layers; compiled it with the Adam optimizer, the Sparse Categorical Crossentropy loss function, and the accuracy metric; trained it on the training dataset; evaluated it on the testing dataset; and saved it in the h5 format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Voilà! We have successfully built a deep learning model to detect diseases in plant leaves.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank you for reading!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>model</category>
      <category>python</category>
    </item>
    <item>
      <title>The Power of Bit Manipulation - Let's smash some bits!</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Wed, 17 May 2023 17:55:43 +0000</pubDate>
      <link>https://forem.com/anurag629/the-power-of-bit-manipulation-lets-smash-some-bits-2li8</link>
      <guid>https://forem.com/anurag629/the-power-of-bit-manipulation-lets-smash-some-bits-2li8</guid>
      <description>&lt;p&gt;In this article, we will learn everything about bit manipulation and how to solve problems efficiently using bit manipulation.&lt;/p&gt;

</description>
    </item>
    <item>
<title>Created my own personal blog + portfolio website</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Wed, 17 May 2023 09:12:14 +0000</pubDate>
      <link>https://forem.com/anurag629/created-my-own-personalblog-portfolio-website-1a4</link>
      <guid>https://forem.com/anurag629/created-my-own-personalblog-portfolio-website-1a4</guid>
      <description>&lt;p&gt;First have a look at my blog + portfolio website and let me know your thoughts &lt;a href="https://www.anurag629.club/"&gt;Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A personal blog or portfolio website is a great way to showcase your work, share your thoughts and ideas, and connect with others. If you're looking to create your own personal blog or portfolio website, Vercel is a great option. Vercel is a cloud platform that makes it easy to deploy and host your website. It also offers a variety of templates that you can use to get started quickly.&lt;/p&gt;

&lt;p&gt;In this tutorial, I'll show you how to create your own personal &lt;strong&gt;blog + portfolio&lt;/strong&gt; website using Vercel templates. I'll also show you how to add your own features like a blog &lt;strong&gt;views counter&lt;/strong&gt; and &lt;strong&gt;comments&lt;/strong&gt; using GitHub issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Choose a Vercel Template
&lt;/h3&gt;

&lt;p&gt;The first step is to choose a &lt;strong&gt;Vercel template&lt;/strong&gt;. Vercel offers a variety of templates for personal blogs and portfolio websites. You can browse the templates by visiting the Vercel website.&lt;/p&gt;

&lt;p&gt;Once you've chosen a template, you can click on the "Use Template" button to start creating your website.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Connect Your GitHub Account
&lt;/h3&gt;

&lt;p&gt;The next step is to connect your GitHub account. This will allow you to deploy your website to Vercel.&lt;/p&gt;

&lt;p&gt;To connect your GitHub account, click on the &lt;strong&gt;"Connect GitHub"&lt;/strong&gt; button and authorize Vercel to access your GitHub account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Deploy Your Website
&lt;/h3&gt;

&lt;p&gt;Once you've connected your GitHub account, you can deploy your website. To do this, click on the &lt;strong&gt;"Deploy"&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;Vercel will then deploy your website to the cloud. This process may take a few minutes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Add Your Own Features
&lt;/h3&gt;

&lt;p&gt;Once your website has been deployed, you can add your own features. In this tutorial, I'll show you how to add a blog views counter and comments using GitHub issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding a Blog Views Counter
&lt;/h3&gt;

&lt;p&gt;To add a blog views counter to your website, you can use the Vercel Counter widget. The Counter widget is a simple way to display the number of views your blog posts have received.&lt;/p&gt;

&lt;p&gt;To add the Counter widget to your website, click on the "Add Widget" button. Then, search for "Counter" and click on the "Counter" widget.&lt;/p&gt;

&lt;p&gt;The Counter widget will be added to your website. You can then configure the widget by clicking on the "Configure" button.&lt;/p&gt;

&lt;p&gt;In the Configure dialog, you can specify the following options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Title: The title of the Counter widget.&lt;/li&gt;
&lt;li&gt;Count: The number of views to display.&lt;/li&gt;
&lt;li&gt;Style: The style of the Counter widget.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you've configured the Counter widget, click on the "Save" button.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding Comments Using GitHub Issues
&lt;/h3&gt;

&lt;p&gt;To add comments to your blog posts, you can use the Vercel Comments widget. The Comments widget allows you to display comments from GitHub issues on your website.&lt;/p&gt;

&lt;p&gt;To add the Comments widget to your website, click on the "Add Widget" button. Then, search for "Comments" and click on the "Comments" widget.&lt;/p&gt;

&lt;p&gt;The Comments widget will be added to your website. You can then configure the widget by clicking on the "Configure" button.&lt;/p&gt;

&lt;p&gt;In the Configure dialog, you can specify the following options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Repository: The repository where the GitHub issues are located.&lt;/li&gt;
&lt;li&gt;Issue Number: The issue number to display comments from.&lt;/li&gt;
&lt;li&gt;Style: The style of the Comments widget.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you've configured the Comments widget, click on the "Save" button.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this tutorial, I showed you how to create your own personal blog + portfolio website using Vercel templates. I also showed you how to add your own features like a blog views counter and comments using GitHub issues.&lt;/p&gt;

</description>
      <category>blogging</category>
      <category>portfolio</category>
      <category>vercel</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Create a Machine Learning API with FastAPI</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Wed, 17 May 2023 08:55:53 +0000</pubDate>
      <link>https://forem.com/anurag629/create-a-machine-learning-api-with-fastapi-16m8</link>
      <guid>https://forem.com/anurag629/create-a-machine-learning-api-with-fastapi-16m8</guid>
<description>&lt;p&gt;This tutorial walks you through creating a machine learning API with FastAPI.&lt;/p&gt;

&lt;p&gt;The post is originally published at &lt;a href="https://www.anurag629.club/posts/create-a-machine-learning-api-with-fastapi/"&gt;Link&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Step-by-Step Guide to Creating a Repository on GitHub</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Wed, 17 May 2023 04:58:53 +0000</pubDate>
      <link>https://forem.com/anurag629/a-step-by-step-guide-to-creating-a-repository-on-github-47a9</link>
      <guid>https://forem.com/anurag629/a-step-by-step-guide-to-creating-a-repository-on-github-47a9</guid>
<description>&lt;p&gt;This step-by-step tutorial will help you create a repository on GitHub.&lt;/p&gt;

&lt;p&gt;The post is originally published at &lt;a href="https://www.anurag629.club/posts/a-step-by-step-guide-to-creating-a-repository-on-github/"&gt;Link&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Using the timeit Module to Identify Bottlenecks and Improve Performance</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Wed, 17 May 2023 04:58:53 +0000</pubDate>
      <link>https://forem.com/anurag629/using-the-timeit-module-to-identify-bottlenecks-and-improve-performance-51m</link>
      <guid>https://forem.com/anurag629/using-the-timeit-module-to-identify-bottlenecks-and-improve-performance-51m</guid>
<description>&lt;p&gt;The &lt;code&gt;timeit&lt;/code&gt; module in Python is a built-in module that lets you measure the execution time of small code snippets. It is a very useful tool for comparing the performance of different approaches.&lt;/p&gt;
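&lt;p&gt;For example, &lt;code&gt;timeit.timeit&lt;/code&gt; runs a statement many times and returns the total elapsed time in seconds, which makes comparing two approaches straightforward (the snippets and repetition count below are arbitrary):&lt;/p&gt;

```python
import timeit

# Compare two ways of building a list of squares
t_loop = timeit.timeit(
    "squares = []\nfor i in range(100):\n    squares.append(i * i)",
    number=10_000,
)
t_comp = timeit.timeit("[i * i for i in range(100)]", number=10_000)

print(f"loop: {t_loop:.4f}s  comprehension: {t_comp:.4f}s")
```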

</description>
    </item>
    <item>
      <title>In Depth Basics of Python with Question and Solution</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Tue, 16 May 2023 04:58:53 +0000</pubDate>
      <link>https://forem.com/anurag629/in-depth-basics-of-python-with-question-and-solution-3g6m</link>
      <guid>https://forem.com/anurag629/in-depth-basics-of-python-with-question-and-solution-3g6m</guid>
<description>&lt;p&gt;In this beginner-friendly tutorial, you will learn the basics of Python programming through questions and solutions.&lt;/p&gt;

&lt;p&gt;The post is originally published at &lt;a href="https://www.anurag629.club/posts/indepth_basics_of_python_with_question_and_solution/"&gt;Link&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setting Up Your Full-Stack Development Environment with Python, Django, and React</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Tue, 09 May 2023 03:46:38 +0000</pubDate>
      <link>https://forem.com/anurag629/setting-up-your-full-stack-development-environment-with-python-django-and-react-1457</link>
      <guid>https://forem.com/anurag629/setting-up-your-full-stack-development-environment-with-python-django-and-react-1457</guid>
      <description>&lt;p&gt;Welcome to a comprehensive guide on setting up your full-stack development environment with Python, Django, and React. This article is designed to provide step-by-step instructions to budding programmers and experienced developers alike. Let's dive right in!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Installing Python and Django&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python is a versatile language that's great for back-end web development. Django, a Python-based framework, makes it easy to build robust and scalable web applications. If you haven't already installed these tools, here's how to do it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Visit the official Python website to download and install the latest stable version of Python. As of September 2021, the current stable release is Python 3.9.x.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We recommend using a virtual environment for your Python projects to avoid dependency conflicts. Python's built-in &lt;code&gt;venv&lt;/code&gt; module makes this a breeze. Here's how to create a new virtual environment:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv myenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;To activate the virtual environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On Windows: &lt;code&gt;myenv\Scripts\activate&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;On Unix or MacOS: &lt;code&gt;source myenv/bin/activate&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With your virtual environment activated, install Django using pip, Python's package manager:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install django
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Creating a Django Project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that we have Django installed, let's create a new Django project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;django-admin startproject ProgrammersPost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can replace "ProgrammersPost" with your preferred project name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Installing Node.js and npm&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Node.js is a runtime that allows you to run JavaScript on your server, while npm is a package manager for JavaScript. To install these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Visit the official Node.js website to download and install Node.js and npm. As of September 2021, Node.js 14.x is the current LTS version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify your installation by running the following commands in your terminal:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node -v
npm -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Creating a React Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;React is a popular JavaScript library for building user interfaces, especially single-page applications. Follow these steps to create a new React application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, you can optionally install Create React App, a tool that sets up a modern web app by running one command (the &lt;code&gt;npx&lt;/code&gt; command in the next step works without a global install):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g create-react-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Next, create a new React application:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-react-app programmerspost-client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace "programmerspost-client" with your preferred app name.&lt;/p&gt;

&lt;p&gt;And voilà! You have now set up a full-stack development environment with Python, Django, and React. With these powerful tools at your disposal, you're ready to build scalable and efficient web applications. Stay tuned for more tutorials on developing with Python, Django, and React!&lt;/p&gt;

</description>
      <category>django</category>
      <category>react</category>
      <category>webdevlopment</category>
      <category>setupenvironmen</category>
    </item>
    <item>
<title>Terms used in Reinforcement Learning</title>
      <dc:creator>Anurag Verma</dc:creator>
      <pubDate>Sat, 25 Mar 2023 13:43:21 +0000</pubDate>
      <link>https://forem.com/anurag629/terms-used-in-reinforcement-leaning-56lo</link>
      <guid>https://forem.com/anurag629/terms-used-in-reinforcement-leaning-56lo</guid>
      <description>&lt;p&gt;Every AI/ML/Data Science enthusiast knows the definition of Reinforcement Learning - it is a feedback-based machine learning technique in which an agent learns to behave in an environment by performing actions and observing their outcomes. For each good action, the agent receives positive feedback, and for each bad action, it receives negative feedback or a penalty. However, many are not familiar with the specific terms used in this definition. Let me explain them with an example.&lt;/p&gt;

&lt;p&gt;Let's consider the example of a robot that is learning to navigate a maze. In this scenario:&lt;/p&gt;

&lt;p&gt;🕵️Agent: The robot is the agent, which is the decision-maker that interacts with the environment. The agent can perceive the environment and take actions to achieve its goal.&lt;/p&gt;

&lt;p&gt;🌍 Environment: The maze is the environment, which is the context in which the agent operates. The environment can provide feedback to the agent in the form of rewards or punishments.&lt;/p&gt;

&lt;p&gt;🎬 Actions: The robot can take different actions such as moving forward, turning left, or turning right. These actions are the choices available to the agent.&lt;/p&gt;

&lt;p&gt;🙂Feedback: The environment provides feedback to the agent based on its actions. The feedback can be positive, negative, or neutral.&lt;/p&gt;

&lt;p&gt;🏆 Reward: The agent receives a reward when it takes an action that leads it closer to its goal. For example, if the robot moves towards the exit of the maze, it may receive a positive reward.&lt;/p&gt;

&lt;p&gt;🚫 Punishment: The agent receives punishment when it takes an action that leads it further away from its goal. For example, if the robot hits a wall, it may receive a negative reward.&lt;/p&gt;

&lt;p&gt;📜 Policy: The policy is the strategy used by the agent to select actions based on its current state. The goal of the agent is to learn an optimal policy that maximizes the long-term reward. For example, the robot may learn to follow the left wall of the maze to reach the exit.&lt;/p&gt;

&lt;p&gt;📍 State: The state is a representation of the environment at a particular time, which includes information such as the location of the agent and other relevant information.&lt;/p&gt;
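&lt;p&gt;These terms can be tied together in a few lines of pure Python: a toy one-dimensional "maze" where the state is the agent's position, the actions are stepping left or right, and the rewards are +1 for reaching the exit and -1 for bumping into the wall. Everything here (the states, the rewards, and the fixed always-go-right policy) is a made-up illustration, not a training algorithm:&lt;/p&gt;

```python
# Toy environment: positions 0..4 in a corridor, exit at position 4
EXIT = 4

def step(state, action):
    """Apply an action ('left' or 'right') and return (new_state, reward)."""
    new_state = state + (1 if action == "right" else -1)
    if new_state == EXIT:
        return new_state, 1   # reward: the agent reached the goal
    if new_state == -1:       # the agent bumped into the left wall
        return state, -1      # punishment: stay put, negative feedback
    return new_state, 0       # neutral feedback

# Policy: always move right (trivially optimal for this corridor)
state, total_reward = 0, 0
while state != EXIT:
    state, reward = step(state, "right")
    total_reward += reward

print(state, total_reward)  # 4 1
```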

&lt;p&gt;#datascience #machinelearning #ai #ml #reinforcementlearning&lt;/p&gt;

</description>
      <category>reinforcementlearning</category>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>deeplearning</category>
    </item>
  </channel>
</rss>
