<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ife</title>
    <description>The latest articles on Forem by Ife (@ifeoluwafavour).</description>
    <link>https://forem.com/ifeoluwafavour</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F827896%2F09914c8b-e590-40c1-8517-f53b092a4c1b.jpg</url>
      <title>Forem: Ife</title>
      <link>https://forem.com/ifeoluwafavour</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ifeoluwafavour"/>
    <language>en</language>
    <item>
      <title>How Gradient Descent Powers Machine Learning Models</title>
      <dc:creator>Ife</dc:creator>
      <pubDate>Fri, 22 Nov 2024 09:15:44 +0000</pubDate>
      <link>https://forem.com/ifeoluwafavour/how-gradient-descent-powers-machine-learning-models-53h3</link>
      <guid>https://forem.com/ifeoluwafavour/how-gradient-descent-powers-machine-learning-models-53h3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Building accurate machine learning models relies heavily on optimisation techniques, and gradient descent is one of the most widely used. Gradient descent helps models adjust their parameters, minimise errors and improve performance over time. In this article, I will dive into gradient descent as a concept and why it is important in the machine learning process.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Gradient Descent
&lt;/h2&gt;

&lt;p&gt;Gradient descent is an algorithm that minimises a model's errors by adjusting its parameters repeatedly until it finds the values that give the smallest loss function value.&lt;/p&gt;

&lt;p&gt;The loss function measures the difference between the predicted value and the actual value. To get the predicted value, the model runs a calculation that involves parameters. These parameters determine how the model processes input data to generate its predictions, and they are adjusted during training to minimise the loss function and improve accuracy. Gradient descent handles the adjustment of these parameters. &lt;/p&gt;

&lt;h2&gt;
  
  
  How Gradient Descent Works
&lt;/h2&gt;

&lt;p&gt;This is the equation behind the simple linear regression model. It is similar to the equation of a line. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oppnd8xoynzdtayj5bi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oppnd8xoynzdtayj5bi.png" alt="Linear regression model equation" width="657" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The parameters in this equation are w and b, the weight and the bias. When the linear regression model makes a prediction using this equation, the predicted output is compared with the actual output using the following equation,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft511nndl6qdghpxpyqhn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft511nndl6qdghpxpyqhn.png" alt="Mean Square Error loss function equation" width="790" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you substitute the equation for the predicted value into the loss function, you get this,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjdwv2rvwbi1wc7m6i2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjdwv2rvwbi1wc7m6i2k.png" alt="Linear regression model substituted into the loss function equation" width="456" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This loss function is called the Mean Squared Error (MSE). The smaller the difference between the predicted value and the actual value, the more accurate the model's predictions, and this accuracy depends on the values of the weight (w) and the bias (b).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g6yb099flf74c11700w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g6yb099flf74c11700w.png" alt="House pricing dataset" width="731" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are to build a linear regression model that predicts house prices based on size only, the features (X) will be the size of the house and the target (y) will be the price of the house.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np

x_train = np.array([500, 800, 1000, 1500, 2000])
y_train = np.array([50, 80, 100, 150, 200])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If w is 0 and b is 0, the model predicts the target value to be 0:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;w = 0
b = 0

pred_y = w*x_train[0] + b
# x_train[0] = 500

# pred_y = 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The actual target value when the size of the house is 500 is 50, so the model's prediction is off by 50 (a squared error of 2,500), which means the model is far from accurate. However, if w and b were different, the model would also predict a different target value.&lt;/p&gt;
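&lt;p&gt;To make this concrete, here is a minimal sketch that computes the MSE over the whole training set for different values of w and b, using the training arrays from earlier (this uses the plain MSE without the optional 1/2 factor some courses include):&lt;/p&gt;

```python
import numpy as np

x_train = np.array([500, 800, 1000, 1500, 2000])
y_train = np.array([50, 80, 100, 150, 200])

def mse(w, b, x, y):
    # Mean squared error over all training examples
    pred = w * x + b
    return np.mean((pred - y) ** 2)

print(mse(0, 0, x_train, y_train))    # 16280.0: every prediction is 0, so the loss is huge
print(mse(0.1, 0, x_train, y_train))  # 0.0: w = 0.1 fits this data exactly
```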

&lt;p&gt;To see the effect of the parameter values on the loss function, look at the graph below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon23kmreoc20ec7jhll4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon23kmreoc20ec7jhll4.png" alt="MSE loss function vs w; b is set to 0" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above graph, you can see that the loss decreases until it reaches a minimum (where w = 0.1). This minimum point is called the global minimum, and the value of w at this point gives the smallest possible loss function value. &lt;/p&gt;

&lt;p&gt;Instead of guessing values for w and b, the gradient descent algorithm starts from initial values and repeatedly updates them in the direction that decreases the loss, until it converges on the values of w and b that give the smallest possible loss function value. Gradient descent is defined by the following update rules:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8flal6rntazdyrx6vwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8flal6rntazdyrx6vwu.png" alt="Gradient descent formula" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The gradient descent algorithm updates the w and b parameters simultaneously after each iteration using the equations above, until it reaches the values that minimise the loss function and give accurate predictions.&lt;/p&gt;
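&lt;p&gt;The update rules can be sketched in code for the house-price example. This is a minimal sketch, assuming the standard MSE gradients with the 1/(2m) loss convention so the factor of 2 cancels; the feature is scaled down so a single learning rate works well:&lt;/p&gt;

```python
import numpy as np

x = np.array([500, 800, 1000, 1500, 2000], dtype=float)
y = np.array([50, 80, 100, 150, 200], dtype=float)

# Scale the feature so gradient descent converges with a simple learning rate
x_scaled = x / 1000.0

w, b = 0.0, 0.0
alpha = 0.1  # learning rate

for _ in range(1000):
    pred = w * x_scaled + b
    error = pred - y
    dw = np.mean(error * x_scaled)  # dL/dw
    db = np.mean(error)             # dL/db
    # Update both parameters simultaneously
    w, b = w - alpha * dw, b - alpha * db

print(w, b)  # w approaches 100, b approaches 0 (price = 0.1 * size)
```

Scaling the feature matters here: without it, the gradient for w is thousands of times larger than the gradient for b, and no single learning rate suits both parameters.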

&lt;h3&gt;
  
  
  The Role of the Learning Rate
&lt;/h3&gt;

&lt;p&gt;The alpha symbol in the gradient descent equations is known as the learning rate. The learning rate is a value that determines how much or how little the parameters change after each iteration. &lt;/p&gt;

&lt;p&gt;If the learning rate is too small, gradient descent will take too long to reach the global minimum. However, if the learning rate is too big, gradient descent can overshoot the global minimum, which leads to increasing values of the loss function, which you don't want.&lt;/p&gt;

&lt;p&gt;You provide the gradient descent algorithm with a suitable learning rate value. Typical starting values range from about 0.001 to 0.1, and you tune from there.&lt;/p&gt;
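&lt;p&gt;To see this effect numerically, here is a small sketch that runs the same update rule with three different learning rates on the scaled house-price data (the specific learning rates and step count are only for illustration):&lt;/p&gt;

```python
import numpy as np

x = np.array([0.5, 0.8, 1.0, 1.5, 2.0])  # house sizes scaled by 1000
y = np.array([50.0, 80.0, 100.0, 150.0, 200.0])

def loss_after(alpha, steps=500):
    # Run gradient descent for a fixed number of steps and return the final MSE
    w, b = 0.0, 0.0
    for _ in range(steps):
        error = w * x + b - y
        w -= alpha * np.mean(error * x)
        b -= alpha * np.mean(error)
    return np.mean((w * x + b - y) ** 2)

for alpha in (0.001, 0.1, 1.0):
    print(alpha, loss_after(alpha))
# Too small (0.001): the loss is still high after 500 steps
# Moderate (0.1):    the loss drops very close to 0
# Too large (1.0):   the updates overshoot and the loss explodes
```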

&lt;p&gt;To learn more about gradient descent and learning rate including graphs, check out this notebook on &lt;a href="https://github.com/greyhatguy007/Machine-Learning-Specialization-Coursera/blob/main/C1%20-%20Supervised%20Machine%20Learning%20-%20Regression%20and%20Classification/week1/Optional%20Labs/C1_W1_Lab05_Gradient_Descent_Soln.ipynb" rel="noopener noreferrer"&gt;Gradient Descent&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Gradient Descent
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Batch Gradient Descent&lt;/strong&gt;&lt;br&gt;
This type of gradient descent computes the gradient of the entire dataset to update parameters. Each iteration uses all training examples to calculate the gradient. It is best for small datasets and when computing power is sufficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stochastic Gradient Descent (SGD)&lt;/strong&gt;&lt;br&gt;
The Stochastic Gradient Descent (SGD) algorithm updates parameters for each individual training example. It iterates through examples one at a time. It is best for very large datasets or when computational efficiency is a priority.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mini-batch Gradient Descent&lt;/strong&gt;&lt;br&gt;
This type of gradient descent combines the strengths of batch and stochastic gradient descent. It computes the gradient for small random subsets (mini-batches) of the dataset and updates parameters. It is commonly used when training deep learning models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Gradient descent is a very important optimisation technique for machine learning models. Its ability to minimise the error function iteratively allows algorithms to improve with each step, resulting in more accurate models. By understanding the differences between the various gradient descent methods, you can adjust your approach to fit each problem, making the training of models faster and more accurate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.coursera.org/specializations/machine-learning-introduction" rel="noopener noreferrer"&gt;Machine learning specialisation course on Coursera&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>gradientdescent</category>
    </item>
    <item>
      <title>Data Preprocessing Techniques for Machine Learning in Python</title>
      <dc:creator>Ife</dc:creator>
      <pubDate>Thu, 14 Nov 2024 08:35:14 +0000</pubDate>
      <link>https://forem.com/ifeoluwafavour/data-preprocessing-techniques-for-ml-models-5acg</link>
      <guid>https://forem.com/ifeoluwafavour/data-preprocessing-techniques-for-ml-models-5acg</guid>
<description>&lt;p&gt;Data preprocessing is a critical step in machine learning workflows. It is the set of steps carried out on a dataset to improve its quality before the data is used for machine learning or other tasks. These steps include cleaning, transformation, normalization and handling outliers, all of which make the dataset suitable for its main purpose (in this case, machine learning). A clean, high-quality dataset enhances a machine learning model's performance.&lt;/p&gt;

&lt;p&gt;Common issues with low-quality data include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Missing values&lt;/li&gt;
&lt;li&gt;Inconsistent formats &lt;/li&gt;
&lt;li&gt;Duplicate values&lt;/li&gt;
&lt;li&gt;Irrelevant features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, these are the steps in data preprocessing for machine learning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Import necessary libraries.&lt;/li&gt;
&lt;li&gt;Load and inspect the dataset.&lt;/li&gt;
&lt;li&gt;Data cleaning

&lt;ul&gt;
&lt;li&gt;Handling missing values.&lt;/li&gt;
&lt;li&gt;Duplicate removal.&lt;/li&gt;
&lt;li&gt;Dealing with outliers.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Data transformation

&lt;ul&gt;
&lt;li&gt;Normalization&lt;/li&gt;
&lt;li&gt;Standardization&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;You will need basic knowledge of Python and how to use Python libraries for data preprocessing to be able to follow this guide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt;&lt;br&gt;
The following are required for data preprocessing in this guide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.12&lt;/li&gt;
&lt;li&gt;Jupyter Notebook or your favourite notebook&lt;/li&gt;
&lt;li&gt;Numpy&lt;/li&gt;
&lt;li&gt;Pandas&lt;/li&gt;
&lt;li&gt;Scipy&lt;/li&gt;
&lt;li&gt;Scikit learn &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kaggle.com/datasets/dansbecker/melbourne-housing-snapshot" rel="noopener noreferrer"&gt;Melbourne Housing Dataset&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also check out the output of each code in these &lt;a href="https://github.com/ifeoluwafavour/data-preprocessing-techniques" rel="noopener noreferrer"&gt;Jupyter notebooks&lt;/a&gt; on GitHub.&lt;/p&gt;
&lt;h2&gt;
  
  
  Import necessary libraries
&lt;/h2&gt;

&lt;p&gt;If you haven't installed Python already, you can download it from the &lt;a href="//python.org"&gt;Python&lt;/a&gt; website and follow the instructions to install it.&lt;/p&gt;

&lt;p&gt;Once Python has been installed, install the required libraries&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install numpy scipy pandas scikit-learn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Jupyter Notebook.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install notebook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installation, start Jupyter Notebook with the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jupyter notebook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will launch Jupyter Notebook in your default web browser. If it doesn't, check the terminal for a link you can paste into your browser manually. &lt;/p&gt;

&lt;p&gt;Open a new notebook from the File menu, import the required libraries and run the cell&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
import pandas as pd
import scipy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Load and Inspect the Data
&lt;/h2&gt;

&lt;p&gt;Go to the &lt;a href="https://www.kaggle.com/datasets/dansbecker/melbourne-housing-snapshot" rel="noopener noreferrer"&gt;Melbourne Housing Dataset&lt;/a&gt; site and download the dataset. Load the dataset into the notebook using the following code. You can copy the file path on your computer to paste in the &lt;code&gt;read_csv&lt;/code&gt; function. You can also put the CSV file in the same folder as the notebook and import the file as seen below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data = pd.read_csv(r"melb_data.csv")

# View the first 5 rows of the dataset
data.head()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Split the data into training and validation sets&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.model_selection import train_test_split

# Set the target
y = data['Price']

# Firstly drop categorical data types
melb_features = data.drop(['Price'], axis=1) #drop the target column

X = melb_features.select_dtypes(exclude=['object'])

# Divide data into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;You split the data into training and validation sets before preprocessing to prevent data leakage. Fit each preprocessing step on the training features only, then apply the same fitted transformation to the validation features.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now the dataset is ready for preprocessing!&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Cleaning
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Handling missing values&lt;/strong&gt;&lt;br&gt;
Missing values in a dataset are like holes in a fabric that is supposed to be used to sew a dress. They spoil the dress before it is even made.&lt;/p&gt;

&lt;p&gt;There are 3 ways to handle missing values in a dataset.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Drop the rows or columns with empty cells
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Drop the rows with empty cells from the original data frame
data.dropna(inplace=True)

#Drop the columns with empty cells
#firstly, get the names of columns with empty cells
cols_with_empty_cells = [col for col in X_train.columns if X_train[col].isnull().any()]

#secondly, drop the columns with the empty cells
removed_X_train_cols = X_train.drop(cols_with_empty_cells, axis=1)
removed_X_valid_cols = X_valid.drop(cols_with_empty_cells, axis=1)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The issue with this method is that you may lose valuable information that you need to train your model. Unless most of the values in a row or column are missing, there is usually no need to drop it.&lt;/p&gt;
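&lt;p&gt;One way to decide is to check what fraction of each column is missing before dropping anything. Here is a small sketch with a hypothetical frame standing in for X_train (the 50% threshold is only an illustrative cut-off):&lt;/p&gt;

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for X_train
df = pd.DataFrame({
    'Rooms': [3, 2, np.nan, 4, 3],
    'BuildingArea': [np.nan, np.nan, np.nan, 150.0, np.nan],
})

missing_frac = df.isnull().mean()  # fraction of missing values per column
print(missing_frac)                # Rooms: 0.2, BuildingArea: 0.8

# Only drop columns that are mostly empty; impute the rest
mostly_empty = missing_frac[missing_frac > 0.5].index.tolist()
print(mostly_empty)  # ['BuildingArea']
```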

&lt;ul&gt;
&lt;li&gt;Impute values in the empty cells&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can impute or fill in the empty cells with the mean, median or mode of the data in that particular column. &lt;code&gt;SimpleImputer&lt;/code&gt; from Scikit Learn will be used to impute values in the empty cells&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.impute import SimpleImputer

# Impute values
imputer = SimpleImputer()
imputed_X_train_values = pd.DataFrame(imputer.fit_transform(X_train))
imputed_X_valid_values = pd.DataFrame(imputer.transform(X_valid))

# Imputation removed column names so we put them back
imputed_X_train_values.columns = X_train.columns
imputed_X_valid_values.columns = X_valid.columns

# Set the imputed values to X_train
X_train = imputed_X_train_values
X_valid = imputed_X_valid_values

X_train.head()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Impute and notify&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With this approach, you impute values into the empty cells, but you also add a column indicating which cells were initially empty.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Make new columns indicating what will be imputed
# The column will have booleans as values
for col in cols_with_empty_cells:
    X_train[col + '_was_missing'] = X_train[col].isnull()
    X_valid[col + '_was_missing'] = X_valid[col].isnull()

# Impute values
imputer = SimpleImputer()
imputed_X_train_values = pd.DataFrame(imputer.fit_transform(X_train))
imputed_X_valid_values = pd.DataFrame(imputer.transform(X_valid))

# Imputation removed column names so we put them back
imputed_X_train_values.columns = X_train.columns
imputed_X_valid_values.columns = X_valid.columns

# Set the imputed values to X_train
X_train = imputed_X_train_values
X_valid = imputed_X_valid_values

# See the new columns and their values
X_train.head() 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Duplicate removal&lt;/strong&gt;&lt;br&gt;
Duplicate rows mean repeated data, and they skew model accuracy. The way to deal with them is to drop them. If you drop duplicate rows from the features, drop the matching rows from the target as well so they stay aligned.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check for the number of duplicate rows in the dataset
X_train.duplicated().sum()

# Drop the duplicate rows
X_train.drop_duplicates(inplace=True)
X_valid.drop_duplicates(inplace=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Dealing with outliers&lt;/strong&gt;&lt;br&gt;
Outliers are values that are significantly different from the other values in the dataset. They can be unusually high or low compared to other data values. They can arise due to entry errors or they could genuinely be outliers. &lt;/p&gt;

&lt;p&gt;It is important to deal with outliers or else they will lead to inaccurate data analysis or models. One method to detect outliers is by calculating z-scores.&lt;/p&gt;

&lt;p&gt;The z-score measures how many standard deviations a data point is from the mean of its column. This calculation is done for every data point. If the absolute z-score of a data point is 3 or higher, the data point is treated as an outlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from scipy import stats
import numpy as np
import pandas as pd

# Calculate z-scores for each column (feature) in the training and validation sets
X_train_zscores = stats.zscore(X_train, axis=0)
X_valid_zscores = stats.zscore(X_valid, axis=0)

# Define the threshold for outlier detection
threshold = 3

# Identify rows containing at least one value above the threshold (outliers)
outliers_train = (np.abs(X_train_zscores) &amp;gt; threshold).any(axis=1)
outliers_valid = (np.abs(X_valid_zscores) &amp;gt; threshold).any(axis=1)

# Remove rows identified as outliers from X_train and X_valid (~ means NOT)
X_train_no_outliers = X_train[~outliers_train]
X_valid_no_outliers = X_valid[~outliers_valid]

# Display the results
print("Original X_train shape:", X_train.shape)
print("X_train shape after removing outliers:", X_train_no_outliers.shape)

print("Original X_valid shape:", X_valid.shape)
print("X_valid shape after removing outliers:", X_valid_no_outliers.shape)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Data Transformation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Normalization&lt;/strong&gt; &lt;br&gt;
Normalization rescales each feature to a fixed range, usually 0 to 1.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Despite the similar name, normalization does not make the data follow a normal (Gaussian) distribution. It changes only the scale of each feature, not the shape of its distribution.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You normalize data when the features are on very different scales and the algorithm you want to use is sensitive to feature magnitude, such as k-nearest neighbours or neural networks. &lt;code&gt;MinMaxScaler&lt;/code&gt; from Scikit-learn rescales each feature to the 0 to 1 range.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.preprocessing import MinMaxScaler

# Initialize the MinMaxScaler
scaler = MinMaxScaler()

# Fit the scaler on the training data and transform it
X_train_normalized = scaler.fit_transform(X_train)

# Transform the validation data using the same scaler
X_valid_normalized = scaler.transform(X_valid)

# Convert the normalized data back into DataFrames to keep column names
X_train_normalized = pd.DataFrame(X_train_normalized, columns=X_train.columns, index=X_train.index)
X_valid_normalized = pd.DataFrame(X_valid_normalized, columns=X_valid.columns, index=X_valid.index)

# Display the results
print("First few rows of normalized X_train:")
print(X_train_normalized.head())

print("First few rows of normalized X_valid:")
print(X_valid_normalized.head())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Standardization&lt;/strong&gt;&lt;br&gt;
Standardization transforms the features of a dataset to have a mean of 0 and a standard deviation of 1. This process scales each feature so that it has similar ranges across the data. This ensures that each feature contributes equally to model training.&lt;/p&gt;

&lt;p&gt;You use standardization when: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The features in your data are on different scales or units. &lt;/li&gt;
&lt;li&gt;The machine learning model you want to use is based on distance or gradient-based optimizations (e.g., linear regression, logistic regression, K-means clustering). &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You use &lt;code&gt;StandardScaler()&lt;/code&gt; from the &lt;code&gt;sklearn&lt;/code&gt; library to standardize features.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.preprocessing import StandardScaler

# Initialize the StandardScaler
scaler = StandardScaler()

# Fit the scaler on the training and validation data and transform them
X_train_standardized = scaler.fit_transform(X_train)
X_valid_standardized = scaler.transform(X_valid)

# Convert the standardized data back into DataFrames to keep column names
X_train_standardized = pd.DataFrame(X_train_standardized, columns=X_train.columns, index=X_train.index)
X_valid_standardized = pd.DataFrame(X_valid_standardized, columns=X_valid.columns, index=X_valid.index)

# Display the results
print("First few rows of standardized X_train:")
print(X_train_standardized.head())

print("First few rows of standardized X_valid:")
print(X_valid_standardized.head())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Data preprocessing is not just a preliminary stage. It is part of the process of building accurate machine learning models. It can also be tweaked to fit the needs of the dataset you are working with. &lt;/p&gt;

&lt;p&gt;Like with most activities, practice makes perfect. As you continue to practise these data preprocessing techniques, your skills will improve as well as your models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank you for reading through. I would love to read your thoughts on this 👇&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>python</category>
      <category>data</category>
    </item>
    <item>
      <title>How to Build a Books CRUD API with FastAPI</title>
      <dc:creator>Ife</dc:creator>
      <pubDate>Sat, 26 Aug 2023 00:13:24 +0000</pubDate>
      <link>https://forem.com/ifeoluwafavour/building-a-books-crud-api-with-fastapi-269k</link>
      <guid>https://forem.com/ifeoluwafavour/building-a-books-crud-api-with-fastapi-269k</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="//fastapi.tiangolo.com"&gt;FastAPI&lt;/a&gt; is a modern Python web framework that has gained popularity for its high performance and short development time. I like it for its overall simplicity compared to Django. &lt;/p&gt;

&lt;p&gt;In this tutorial, I will guide you through building a library CRUD (Create, Read, Update, Delete) application using FastAPI. By the end of this guide, you will have a fully functional books API that allows users to manage books effortlessly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we dive into the development process, let's ensure you have the necessary tools in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Python (3.6+)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;FastAPI&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can install FastAPI using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install fastapi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Uvicorn&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can install Uvicorn using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install uvicorn[standard]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building the CRUD API
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setting Up the Project
&lt;/h3&gt;

&lt;p&gt;Let's start by setting up the project structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a new directory for your project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a virtual environment and activate it&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m venv myenv

myenv\Scripts\activate.bat #for Windows
source myenv/bin/activate #for Mac OS and Linux
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install FastAPI and Uvicorn in the virtual environment
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install fastapi uvicorn[standard]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Inside the project directory, create a file named books.py.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;FastAPI provides a lot of features for software development. In books.py, import the required dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI, Body
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;For this tutorial, I will use a book list object with information about the books
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BOOKS = [ 
    {'title': 'Jane Eyre', 'author': 'Charlotte Brontë', 'category': 'period drama'},
    {'title': 'Great Expectations', 'author': 'Charles Dickens', 'category': 'period drama'},
    {'title': 'Bourne Identity', 'author': 'Robert Ludlum', 'category': 'mystery/thriller'},
    {'title': 'DaVinci Code', 'author': 'Dan Brown', 'category': 'mystery/thriller'},
    {'title': 'The Match Girl', 'author': 'Charles Dickens', 'category': 'tragedy'}
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Instantiate the FastAPI app. This lets you use all the dependencies that come with FastAPI
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app = FastAPI()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Implementing CRUD Operations
&lt;/h3&gt;

&lt;p&gt;CRUD stands for create, read, update and delete, and the operations align with the HTTP verbs POST, GET, PUT and DELETE respectively.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create (POST)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To create the endpoint for the create operation, you will need the &lt;code&gt;app&lt;/code&gt; decorator, its &lt;code&gt;post&lt;/code&gt; attribute and the endpoint path you want. In this case, the endpoint &lt;code&gt;/books/create_book&lt;/code&gt; is static.&lt;/p&gt;

&lt;p&gt;The function that follows takes &lt;code&gt;new_book&lt;/code&gt; data from the request body; &lt;code&gt;Body()&lt;/code&gt; tells FastAPI to read it as JSON. The function then appends the new data to the book list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@app.post('/books/create_book')
async def create_book(new_book=Body()):
    BOOKS.append(new_book)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
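&lt;p&gt;Outside FastAPI, the endpoint's behaviour can be sketched as plain Python. This is only an illustration: the sample data and the plain &lt;code&gt;dict&lt;/code&gt; argument stand in for the JSON body that &lt;code&gt;Body()&lt;/code&gt; would parse for you.&lt;/p&gt;

```python
# Minimal sketch (plain Python, no server) of what create_book does.
# In the real endpoint, FastAPI parses the JSON request body into new_book;
# here we pass a dict directly to show the list mutation.
BOOKS = [
    {'title': 'Great Expectations', 'author': 'Charles Dickens', 'category': 'period drama'},
]

def create_book(new_book: dict) -> None:
    """Append the new book to the in-memory list, like the POST endpoint."""
    BOOKS.append(new_book)

create_book({'title': 'DaVinci Code', 'author': 'Dan Brown', 'category': 'mystery/thriller'})
print(len(BOOKS))  # 2
```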



&lt;ul&gt;
&lt;li&gt;Read (GET)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To return all the entries in the &lt;code&gt;BOOKS&lt;/code&gt; list, we use &lt;code&gt;get&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@app.get('/books')
async def read_all_books():
    return BOOKS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Update (PUT)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To update an entry in the books list, we use &lt;code&gt;put&lt;/code&gt;. The function takes the updated data from the user and loops through the list. If the title of the updated entry matches a title in the list, the matching entry is replaced. Note that this means the title itself can't be changed, since entries are looked up by their titles.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@app.put('/books/update_book')
async def update_book(update_book=Body()):
    for i in range(len(BOOKS)):
        if BOOKS[i].get('title').casefold() == update_book.get('title').casefold():
            BOOKS[i] = update_book
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
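&lt;p&gt;The matching in the loop above is case-insensitive thanks to &lt;code&gt;casefold()&lt;/code&gt;. A small plain-Python sketch (with illustrative data) shows the effect:&lt;/p&gt;

```python
# Minimal sketch (plain Python) of the case-insensitive update loop used by
# the PUT endpoint: titles match regardless of letter case.
BOOKS = [
    {'title': 'Jane Eyre', 'author': 'Unknown', 'category': 'period drama'},
]

def update_book(update: dict) -> None:
    for i in range(len(BOOKS)):
        if BOOKS[i].get('title').casefold() == update.get('title').casefold():
            BOOKS[i] = update  # replace the whole entry

# 'JANE EYRE' still matches 'Jane Eyre' because both sides are casefolded.
update_book({'title': 'JANE EYRE', 'author': 'Charlotte Bronte', 'category': 'period drama'})
print(BOOKS[0]['author'])  # Charlotte Bronte
```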



&lt;ul&gt;
&lt;li&gt;Delete (DELETE)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We use &lt;code&gt;delete&lt;/code&gt; to remove the entry whose title matches the book title the user provides. This endpoint is not static: it has a dynamic path parameter &lt;code&gt;{book_title}&lt;/code&gt; of type string (&lt;code&gt;str&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@app.delete('/books/delete_book/{book_title}')
async def delete_book(book_title: str):
    for i in range(len(BOOKS)):
        if BOOKS[i].get('title').casefold() == book_title.casefold():
            BOOKS.pop(i)
            break
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
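&lt;p&gt;The &lt;code&gt;break&lt;/code&gt; in this loop is not decorative. A plain-Python sketch (with illustrative data) shows why: popping from the list while iterating over the original &lt;code&gt;range(len(BOOKS))&lt;/code&gt; can raise an &lt;code&gt;IndexError&lt;/code&gt; once the list shrinks.&lt;/p&gt;

```python
# Minimal sketch (plain Python) of the DELETE loop. Without the break,
# later iterations could index past the end of the shrunken list and
# raise an IndexError, since range(len(BOOKS)) was fixed at loop start.
BOOKS = [
    {'title': 'DaVinci Code', 'author': 'Dan Brown', 'category': 'mystery/thriller'},
    {'title': 'The Match Girl', 'author': 'Hans Christian Andersen', 'category': 'tragedy'},
]

def delete_book(book_title: str) -> None:
    for i in range(len(BOOKS)):
        if BOOKS[i].get('title').casefold() == book_title.casefold():
            BOOKS.pop(i)
            break  # stop after the first match

delete_book('davinci code')  # case-insensitive match
print([b['title'] for b in BOOKS])  # ['The Match Girl']
```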



&lt;h3&gt;
  
  
  Running the App
&lt;/h3&gt;

&lt;p&gt;Now that we are done building the app, the next step is to run and test it.&lt;/p&gt;

&lt;p&gt;Uvicorn is the ASGI server used to run FastAPI locally on your machine. Use the following command to start the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uvicorn main:app --reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing the Endpoints
&lt;/h3&gt;

&lt;p&gt;FastAPI comes with automatic interactive docs (Swagger UI and ReDoc). You can access them at &lt;code&gt;http://127.0.0.1:8000/docs&lt;/code&gt; or &lt;code&gt;http://127.0.0.1:8000/redoc&lt;/code&gt;. The Swagger UI looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse3gig44yilpsopmeoq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse3gig44yilpsopmeoq8.png" alt="Image description" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can test the API in the documentation.&lt;/p&gt;

&lt;p&gt;The code for this project is live on &lt;a href="https://github.com/ifeoluwafavour/fastapi-library-api" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;FastAPI helps developers build APIs quickly and efficiently, and this tutorial has showcased some of its capabilities. You learnt how to set up a FastAPI route for each CRUD operation and how to test your endpoints. In my next tutorial, I will cover dynamic path parameters and query parameters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Was this tutorial helpful to you?&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
