<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sharath Hebbar</title>
    <description>The latest articles on Forem by Sharath Hebbar (@sharathhebbar).</description>
    <link>https://forem.com/sharathhebbar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F364590%2F1a05850b-0cea-41a9-95a7-001f35180164.jpeg</url>
      <title>Forem: Sharath Hebbar</title>
      <link>https://forem.com/sharathhebbar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sharathhebbar"/>
    <language>en</language>
    <item>
      <title>LLM Model Sharding</title>
      <dc:creator>Sharath Hebbar</dc:creator>
      <pubDate>Thu, 11 Apr 2024 10:12:55 +0000</pubDate>
      <link>https://forem.com/sharathhebbar/llm-model-sharding-43d5</link>
      <guid>https://forem.com/sharathhebbar/llm-model-sharding-43d5</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Large Language Models (LLMs) represent a significant advancement in artificial intelligence and natural language processing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxz8kcx59r61cez4whxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxz8kcx59r61cez4whxa.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Models such as OpenAI’s GPT (Generative Pre-trained Transformer) series, Google’s Gemini, PaLM, T5, and many such open-source models have achieved remarkable capabilities in understanding and generating human-like text.&lt;/p&gt;

&lt;p&gt;However, as these models grow larger to improve performance, they also pose challenges in terms of scalability, resource requirements, and ethical considerations.&lt;/p&gt;

&lt;p&gt;A major challenge is simply using such models. Never mind running an LLM in a Colab or Kaggle notebook, or locally with limited RAM: even loading such huge models demands more memory than is typically available.&lt;/p&gt;

&lt;p&gt;One solution is model sharding, which splits a huge model into smaller chunks, so the model loads faster and needs far less memory while being loaded.&lt;/p&gt;
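&lt;p&gt;Conceptually, sharding just splits one large set of weights into several pieces that each fit a size budget, and loading reverses the split. The sketch below illustrates the idea with plain NumPy arrays; the helper names and the 300 KB budget are purely illustrative, not part of any library API:&lt;/p&gt;

```python
import numpy as np

def shard_state_dict(state, max_bytes):
    """Greedily pack arrays into shards of at most max_bytes each (illustrative helper)."""
    shards, current, size = [], {}, 0
    for name, arr in state.items():
        if current and size + arr.nbytes > max_bytes:
            shards.append(current)      # current shard is full; start a new one
            current, size = {}, 0
        current[name] = arr
        size += arr.nbytes
    if current:
        shards.append(current)
    return shards

def merge_shards(shards):
    """Recombine shards into a single state dict (illustrative helper)."""
    merged = {}
    for shard in shards:
        merged.update(shard)
    return merged

# 8 fake "layers" of 128 KB each (256*256 float16 values)
state = {f"layer{i}.weight": np.ones((256, 256), dtype=np.float16) for i in range(8)}
shards = shard_state_dict(state, max_bytes=300_000)
print(len(shards))  # two 128 KB arrays fit per 300 KB shard, so 4 shards
```

&lt;p&gt;Real sharded checkpoints work the same way at a larger scale: each shard is a separate file, plus an index mapping tensor names to files, so a loader never has to hold the whole model's weights twice.&lt;/p&gt;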

&lt;p&gt;Here we will walk through model sharding using the open-source LLM Mistral 7B, freely hosted on the Hugging Face platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7j1u0eunuhd1b0ra0rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7j1u0eunuhd1b0ra0rq.png" alt=" " width="800" height="212"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from accelerate import Accelerator, load_checkpoint_and_dispatch

model_name = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16
)

accelerator = Accelerator()

accelerator.save_model(
    model=model,
    save_directory=save_directory,
    max_shard_size="200MB"
)

device_map={"":'cpu'}

model = load_checkpoint_and_dispatch(
    model,
    checkpoint="/content/model/",
    device_map=device_map,
    no_split_module_classes=["Block"]
)

new_model = "&amp;lt;Name of the model&amp;gt;"
HF_TOKEN = "&amp;lt;Your HF Token&amp;gt;"

tokenizer.push_to_hub(
    new_model,
    token=HF_TOKEN
)

model.push_to_hub(
    new_model,
    token=HF_TOKEN
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Loading Sharded Model
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi00as4j0zyubxbckxk8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi00as4j0zyubxbckxk8d.png" alt=" " width="297" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The original model needed about 16 GB of RAM to load fully in 16-bit floating point.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%%time
model_name = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;CPU times: user 36.8 s, sys: 48.5 s, total: 1min 25s&lt;br&gt;
Wall time: 3min 30s&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%%time
model_name = "Sharathhebbar24/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;CPU times: user 23 s, sys: 48.7 s, total: 1min 11s&lt;br&gt;
Wall time: 1min 49s&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgyivo9mm1tzjz0wj2fxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgyivo9mm1tzjz0wj2fxx.png" alt=" " width="291" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The sharded model needed only about 3 GB of RAM to load in 16-bit floating point, because the shards are loaded one at a time and peak memory stays low.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;br&gt;
HF Docs: &lt;a href="https://huggingface.co/docs/transformers/en/big_models" rel="noopener noreferrer"&gt;https://huggingface.co/docs/transformers/en/big_models&lt;/a&gt;&lt;br&gt;
Using Accelerate: &lt;a href="https://huggingface.co/docs/transformers/en/main_classes/model#large-model-loading" rel="noopener noreferrer"&gt;https://huggingface.co/docs/transformers/en/main_classes/model#large-model-loading&lt;/a&gt;&lt;br&gt;
Medium: &lt;a href="https://medium.com/@sharathhebbar24/llm-model-sharding-55102ecb1823" rel="noopener noreferrer"&gt;https://medium.com/@sharathhebbar24/llm-model-sharding-55102ecb1823&lt;/a&gt;&lt;br&gt;
Github: &lt;a href="https://github.com/SharathHebbar/Model-Sharding" rel="noopener noreferrer"&gt;https://github.com/SharathHebbar/Model-Sharding&lt;/a&gt;&lt;br&gt;
Reference: &lt;a href="https://medium.com/@jain.sm/sharding-large-models-for-parallel-inference-ee19844cc44#:%7E:text=Memory%20Efficiency%3A%20Sharding%20enables%20running,parts%2C%20reducing%20memory%20requirements%20significantly" rel="noopener noreferrer"&gt;https://medium.com/@jain.sm/sharding-large-models-for-parallel-inference-ee19844cc44#:~:text=Memory%20Efficiency%3A%20Sharding%20enables%20running,parts%2C%20reducing%20memory%20requirements%20significantly&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>transformers</category>
    </item>
    <item>
      <title>Joblib</title>
      <dc:creator>Sharath Hebbar</dc:creator>
      <pubDate>Sun, 25 Jun 2023 07:52:32 +0000</pubDate>
      <link>https://forem.com/sharathhebbar/joblib-3j57</link>
      <guid>https://forem.com/sharathhebbar/joblib-3j57</guid>
      <description>&lt;h2&gt;
  
  
  Joblib
&lt;/h2&gt;

&lt;p&gt;Joblib is a set of tools that provides lightweight pipelining in Python: transparent disk-caching of functions, lazy re-evaluation (the memoize pattern), and simple parallel computing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is it used?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Better performance&lt;/li&gt;
&lt;li&gt;Reproducibility&lt;/li&gt;
&lt;li&gt;Avoid computing the same thing twice&lt;/li&gt;
&lt;li&gt;Persist to disk transparently&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;p&gt;Transparent and fast disk-caching of output values&lt;br&gt;
Embarrassingly parallel helper&lt;br&gt;
Fast compressed persistence&lt;/p&gt;

&lt;h3&gt;
  
  
  Importing libraries
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from joblib import Memory,Parallel, delayed,dump,load
import pandas as pd
import numpy as np
import math
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Data Creation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my_dir = '/content/sample_data'
a = np.vander(np.arange(3))
print(a)
# output:
# [[0 0 1]
#  [1 1 1]
#  [4 2 1]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Memory
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mem = Memory(my_dir)
output: [[ 0  0  1]  [ 1  1  1]  [16  4  1]]
sqr = mem.cache(np.square)
b = sqr(a)
print(b)
output: [[ 0  0  1]  [ 1  1  1]  [16  4  1]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
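&lt;p&gt;To see that the cache really avoids recomputation, here is a self-contained sketch with a deliberately slow custom function. The function name, the call-tracking list, and the temporary cache directory are illustrative choices, not part of joblib itself:&lt;/p&gt;

```python
import tempfile
import time
from joblib import Memory

cache_dir = tempfile.mkdtemp()       # throwaway cache location for the demo
mem = Memory(cache_dir, verbose=0)

calls = []                           # records every real invocation

@mem.cache
def slow_square(x):
    calls.append(x)                  # only runs on a cache miss
    time.sleep(0.1)                  # pretend this is expensive
    return x * x

first = slow_square(4)               # computed and written to disk
second = slow_square(4)              # served from the disk cache
print(first, second, len(calls))    # the function body ran only once
```

&lt;p&gt;The second call returns the stored result instead of re-executing the body, which is exactly what makes Memory useful for expensive, deterministic functions.&lt;/p&gt;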



&lt;h3&gt;
  
  
  Parallel
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%%time
Parallel(n_jobs=1)(delayed(np.square)(i) for i in range(10))
output: CPU times: user 2.85 ms, sys: 0 ns, total: 2.85 ms
Wall time: 3 ms
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
%%time
Parallel(n_jobs=2)(delayed(np.square)(i) for i in range(10))
output: CPU times: user 42.7 ms, sys: 762 µs, total: 43.5 ms
Wall time: 75.9 ms
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
%%time
Parallel(n_jobs=3)(delayed(np.square)(i) for i in range(10))
output: CPU times: user 92.9 ms, sys: 8.93 ms, total: 102 ms
Wall time: 151 ms
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
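&lt;p&gt;Note that the timings above get worse as n_jobs grows: squaring an integer is so cheap that spawning workers and shipping data costs more than the work itself. Parallel pays off when each task does real work, as in this sketch (the busy-work function and its sizes are made up for illustration):&lt;/p&gt;

```python
import math
from joblib import Parallel, delayed

def busy(n):
    # deliberately heavy so per-task work dwarfs dispatch overhead
    return sum(math.sqrt(i) for i in range(n))

serial = [busy(50_000) for _ in range(8)]
parallel = Parallel(n_jobs=2)(delayed(busy)(50_000) for _ in range(8))
print(parallel == serial)  # same results, computed across two workers
```

&lt;p&gt;The results are identical either way; only the wall time changes, and only when the per-task cost is large enough to amortize the overhead.&lt;/p&gt;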



&lt;h3&gt;
  
  
  Dump and Load
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dump(a,'/content/sample_data/a.job')
output: ['/content/sample_data/a.job']
Load
aa = load('/content/sample_data/a.job')
print(aa)
output: array([[0, 0, 1],        [1, 1, 1],        [4, 2, 1]])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
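&lt;p&gt;The "fast compressed persistence" feature mentioned earlier comes from the compress argument to dump. A small sketch with highly compressible data (file names and the compression level 3 are arbitrary choices for the demo):&lt;/p&gt;

```python
import os
import tempfile
import numpy as np
from joblib import dump, load

arr = np.zeros((1000, 1000))                      # 8 MB of zeros: very compressible
tmp = tempfile.mkdtemp()
raw_path = os.path.join(tmp, "raw.job")
zip_path = os.path.join(tmp, "zip.job")

dump(arr, raw_path)                               # uncompressed
dump(arr, zip_path, compress=3)                   # zlib compression, level 3

raw_size = os.path.getsize(raw_path)
zip_size = os.path.getsize(zip_path)
restored = load(zip_path)                         # load works transparently either way
print(raw_size, zip_size)
```

&lt;p&gt;load does not need to be told whether the file was compressed; joblib detects it from the file itself.&lt;/p&gt;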



&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;Documentation: &lt;a href="https://joblib.readthedocs.io" rel="noopener noreferrer"&gt;https://joblib.readthedocs.io&lt;/a&gt;&lt;br&gt;
Download: &lt;a href="https://pypi.python.org/pypi/joblib#downloads" rel="noopener noreferrer"&gt;https://pypi.python.org/pypi/joblib#downloads&lt;/a&gt;&lt;br&gt;
Source code: &lt;a href="https://github.com/joblib/joblib" rel="noopener noreferrer"&gt;https://github.com/joblib/joblib&lt;/a&gt;&lt;br&gt;
Report issues: &lt;a href="https://github.com/joblib/joblib/issues" rel="noopener noreferrer"&gt;https://github.com/joblib/joblib/issues&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;br&gt;
&lt;a href="https://medium.com/r/?url=https%3A%2F%2Fgithub.com%2FSharathHebbar%2FData-Science-and-ML%2Ftree%2Fmain%2Fcodes%2Fjoblib" rel="noopener noreferrer"&gt;https://medium.com/r/?url=https%3A%2F%2Fgithub.com%2FSharathHebbar%2FData-Science-and-ML%2Ftree%2Fmain%2Fcodes%2Fjoblib&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Access Git(Github/Gitlab .. ) using multiple accounts on the same system.</title>
      <dc:creator>Sharath Hebbar</dc:creator>
      <pubDate>Sun, 25 Jun 2023 07:44:20 +0000</pubDate>
      <link>https://forem.com/sharathhebbar/access-gitgithubgitlab-using-multiple-accounts-on-the-same-system-2m66</link>
      <guid>https://forem.com/sharathhebbar/access-gitgithubgitlab-using-multiple-accounts-on-the-same-system-2m66</guid>
      <description>&lt;p&gt;Often you need to use multiple GitHub/GitLab accounts from the same system and you use the command line to do this. But while trying this you will face so many errors as you can only add one user to use git services, so below are the steps.&lt;/p&gt;

&lt;p&gt;Step 1:&lt;/p&gt;

&lt;p&gt;Install the Git command-line tool on your system, if it is not already installed.&lt;/p&gt;

&lt;p&gt;The link to download Git is below; choose the package for your OS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://git-scm.com/download" rel="noopener noreferrer"&gt;https://git-scm.com/download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2:&lt;/p&gt;

&lt;p&gt;Generate an SSH key for each of the GitHub accounts you want to use. You can do this by running the following command, replacing "email@example.com" with the email address associated with that GitHub account:&lt;/p&gt;

&lt;p&gt;Note: the .ssh folder lives at C:\Users\[Your Username]\.ssh&lt;/p&gt;

&lt;p&gt;If it does not exist, it will be created automatically.&lt;/p&gt;

&lt;p&gt;ssh-keygen -t rsa -b 4096 -C "email@example.com"&lt;/p&gt;

&lt;p&gt;Step 3:&lt;/p&gt;

&lt;p&gt;When prompted, enter a file in which to save the key. It is recommended to use a different file for each key, so you can easily distinguish between them. For example, you might use “id_rsa_account1” for one account and “id_rsa_account2” for another.&lt;/p&gt;

&lt;p&gt;Ex: C:\Users\[Your Username]\.ssh\id_rsa -&amp;gt; C:\Users\[Your Username]\.ssh\id_rsa_account1&lt;/p&gt;

&lt;p&gt;Step 4:&lt;/p&gt;

&lt;p&gt;Follow the prompts to enter a passphrase for the key. This passphrase will be used to encrypt the private key, so make sure to choose a strong and unique passphrase.&lt;/p&gt;

&lt;p&gt;You can also leave the passphrase empty.&lt;/p&gt;

&lt;p&gt;Two files will be generated for each key:&lt;/p&gt;

&lt;p&gt;id_rsa_account1 (the private key)&lt;/p&gt;

&lt;p&gt;id_rsa_account1.pub (the public key)&lt;/p&gt;

&lt;p&gt;Step 5:&lt;/p&gt;

&lt;p&gt;Once the key has been generated, copy the public key (the .pub file) and paste it into the SSH section of your GitHub account.&lt;/p&gt;

&lt;p&gt;[In GitHub, go to Settings &amp;gt; SSH and GPG keys &amp;gt; New SSH key &amp;gt; add a title of your choice &amp;gt; paste the public key in the key section]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwzd25bkofcvikbs1gpl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwzd25bkofcvikbs1gpl.png" alt=" " width="206" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssy2yzhkk8xko9rvgwle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssy2yzhkk8xko9rvgwle.png" alt=" " width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47egnv1nkss80crmikz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47egnv1nkss80crmikz4.png" alt=" " width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 6:&lt;/p&gt;

&lt;p&gt;Repeat the steps above for each of the GitHub accounts you want to use.&lt;/p&gt;

&lt;p&gt;Step 7:&lt;/p&gt;

&lt;p&gt;Create a config file in the same directory (.ssh)&lt;/p&gt;

&lt;h3&gt;
  
  
  account 1
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host github.com-account1
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_account1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  account 2
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host github.com-account2
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_account2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Step 8:&lt;/p&gt;

&lt;p&gt;Now initialize a Git repository in the desired location.&lt;/p&gt;

&lt;p&gt;Then add all the files that need to be committed.&lt;/p&gt;

&lt;p&gt;Then commit those files with an appropriate commit message.&lt;/p&gt;

&lt;p&gt;Then, when adding the remote repository, take the SSH URL and change github.com to github.com-account1:&lt;/p&gt;

&lt;p&gt;Ex: git remote add origin git@github.com-account1:dummy-proj/dummy-proj.git&lt;/p&gt;

&lt;p&gt;You can do the same for git clone too.&lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
To set the commit identity for the current repository:&lt;br&gt;
git config user.name "account1"&lt;br&gt;
git config user.email "account1@gmail.com"&lt;/p&gt;

&lt;p&gt;Links:&lt;br&gt;
&lt;a href="https://github.com/SharathHebbar/Data-Science-and-ML/tree/main/articles/github-multiple-accounts" rel="noopener noreferrer"&gt;https://github.com/SharathHebbar/Data-Science-and-ML/tree/main/articles/github-multiple-accounts&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubeflow Installation</title>
      <dc:creator>Sharath Hebbar</dc:creator>
      <pubDate>Sun, 25 Jun 2023 07:37:31 +0000</pubDate>
      <link>https://forem.com/sharathhebbar/kubeflow-installation-4e5g</link>
      <guid>https://forem.com/sharathhebbar/kubeflow-installation-4e5g</guid>
      <description>&lt;p&gt;Installing the Kubeflow pipeline(locally) in the Windows system&lt;br&gt;
Installing Kubeflow Pipeline (KFP) on Windows can be a bit challenging since KFP is primarily designed to run on Linux-based systems. However, you can set up a Windows-based development environment and run KFP using a Docker container.&lt;br&gt;
The following steps will help you install Kubeflow Pipelines on a Windows system&lt;br&gt;
&lt;strong&gt;Step 1: Install Docker Desktop&lt;/strong&gt;&lt;br&gt;
Docker Desktop: &lt;a href="https://docs.docker.com/desktop/install/windows-install/" rel="noopener noreferrer"&gt;https://docs.docker.com/desktop/install/windows-install/&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 2: Install Minikube&lt;/strong&gt;&lt;br&gt;
Minikube installation link: &lt;a href="https://minikube.sigs.k8s.io/docs/start/" rel="noopener noreferrer"&gt;https://minikube.sigs.k8s.io/docs/start/&lt;/a&gt;&lt;br&gt;
Open up your PowerShell in Administrator mode and type in this command&lt;br&gt;
Download and run the installer for the latest release&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;New-Item -Path 'c:\' -Name 'minikube' -ItemType Directory -Force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Invoke-WebRequest -OutFile 'c:\minikube\minikube.exe' -Uri 'https://github.com/kubernetes/minikube/releases/latest/download/minikube-windows-amd64.exe' -UseBasicParsing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the minikube.exe binary to your PATH.&lt;br&gt;
Make sure to run PowerShell as Administrator.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$oldPath = [Environment]::GetEnvironmentVariable('Path', [EnvironmentVariableTarget]::Machine)
if ($oldPath.Split(';') -inotcontains 'C:\minikube'){ `
[Environment]::SetEnvironmentVariable('Path', $('{0};C:\minikube' -f $oldPath), [EnvironmentVariableTarget]::Machine) `
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Install K8s&lt;/strong&gt;&lt;br&gt;
Kubectl commands&lt;br&gt;
&lt;a href="https://www.kubeflow.org/docs/components/pipelines/v1/installation/localcluster-deployment/" rel="noopener noreferrer"&gt;https://www.kubeflow.org/docs/components/pipelines/v1/installation/localcluster-deployment/&lt;/a&gt;&lt;br&gt;
To deploy the Kubeflow Pipelines, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
$PIPELINE_VERSION = "1.8.5"
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/platform-agnostic-pns?ref=$PIPELINE_VERSION"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the Kubeflow Pipelines UI is accessible by port-forwarding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
  </channel>
</rss>
