<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alexander Guschin</title>
    <description>The latest articles on Forem by Alexander Guschin (@aguschin).</description>
    <link>https://forem.com/aguschin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F995076%2Ff947c2f6-d990-4a30-a12d-fda9fac62995.jpeg</url>
      <title>Forem: Alexander Guschin</title>
      <link>https://forem.com/aguschin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aguschin"/>
    <language>en</language>
    <item>
      <title>Deploying ML models straight from Jupyter Notebooks</title>
      <dc:creator>Alexander Guschin</dc:creator>
      <pubDate>Thu, 26 Jan 2023 16:10:10 +0000</pubDate>
      <link>https://forem.com/aguschin/deploying-ml-models-straight-from-jupyter-notebooks-12bh</link>
      <guid>https://forem.com/aguschin/deploying-ml-models-straight-from-jupyter-notebooks-12bh</guid>
      <description>&lt;p&gt;Winter is a time of magic 🧙‍♂️. Everyone is waiting for something special at this time, and Data Scientists aren’t different. It is not in the power of software developer to be a magician, but I can help you deploy your models with literally a &lt;strong&gt;single command&lt;/strong&gt; right from your Jupyter notebook (and basically from any place like your command line or Python script).&lt;/p&gt;

&lt;p&gt;Sounds like magic? It is! 💫&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faiw7ge5lf9fqiqz29n5d.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faiw7ge5lf9fqiqz29n5d.gif" alt="Streamlit" width="828" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get some winter season vibes, let’s do some magic ourselves first: something that will give our friends a bit of fun over the weekend.&lt;/p&gt;

&lt;p&gt;To do so, we’ll create a model that &lt;strong&gt;translates lyrics to emojis&lt;/strong&gt;. With all due respect to recent advances in NLP and LLM algorithms, it’s still both easier and more fun to convince your friends to do the backward translation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4eo79nq2obti4hiiqei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4eo79nq2obti4hiiqei.png" alt="ChatGPT" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ok, I’m sure humans are up to the challenge!&lt;/p&gt;

&lt;p&gt;Alright, before we get into the actual coding: everything described in this blog post is available in this &lt;a href="https://colab.research.google.com/drive/1JNQ1Ntv9kbKTZ82p9cw-VThf44HQ4a1Y?usp=sharing" rel="noopener noreferrer"&gt;Google Colab notebook&lt;/a&gt;. Now, let's get to it!&lt;/p&gt;

&lt;p&gt;First, let’s load &lt;a href="https://www.kaggle.com/datasets/aadityasiva/emojis-dataset" rel="noopener noreferrer"&gt;an emoji dataset&lt;/a&gt;. We need something to base our model on, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn32hb0vbm0rjrui2rc2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn32hb0vbm0rjrui2rc2j.png" alt="Load emoji dataset" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The secret sauce to creating our emoji language is using a &lt;a href="https://huggingface.co/distilbert-base-uncased" rel="noopener noreferrer"&gt;pretrained Distilbert model&lt;/a&gt; to tokenize and create &lt;a href="https://towardsdatascience.com/a-guide-to-word-embeddings-8a23817ab60f" rel="noopener noreferrer"&gt;embeddings&lt;/a&gt; which represent our emoji dictionary:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff24ui4cp5zgfjsdmnj3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff24ui4cp5zgfjsdmnj3w.png" alt="Turn emojis into embeddings" width="800" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now similarly embed any word and replace it with its “closest” emoji embedding to create our text→emoji translator. Using that, “Jingle bells” should become something like “🔔🔔”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmfjki82o1lff1g3tn7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmfjki82o1lff1g3tn7a.png" alt="Find the closest emoji for each word" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Good start - it guessed half of the emojis correctly!&lt;/p&gt;

&lt;p&gt;Our part of the magic is done, so now to the &lt;strong&gt;single-command deployment&lt;/strong&gt; I promised at the beginning. Before we go rogue and deploy it to the cloud, let’s run a &lt;a href="https://streamlit.io" rel="noopener noreferrer"&gt;Streamlit&lt;/a&gt; app locally to test things out:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc42sg3pg69w6mgk6whyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc42sg3pg69w6mgk6whyq.png" alt="Serving with Streamlit" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What happened here? That innocent-looking &lt;code&gt;mlem.api.save&lt;/code&gt; method &lt;del&gt;inspected the model object to find all Python packages to install&lt;/del&gt; did the magic of preparing the model to be used! 🪄✨&lt;/p&gt;

&lt;p&gt;Now you should have a Streamlit app at &lt;code&gt;localhost:80&lt;/code&gt; that looks just like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bw9wbopp3zfxu8so3os.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bw9wbopp3zfxu8so3os.gif" alt="Streamlit app" width="852" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we’ve finished playing around with the model locally, let’s cast our final spell of the day 🧙‍♂️ and deploy the model to &lt;a href="http://fly.io" rel="noopener noreferrer"&gt;fly.io&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wbc9wtoppu5x6455xt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wbc9wtoppu5x6455xt1.png" alt="Deployment to fly.io" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some elvish gibberish is printed to the command line, and your deployment is up and running.&lt;/p&gt;

&lt;p&gt;Now, before you go, remember that these powers extend to serving models as &lt;a href="https://mlem.ai/doc/user-guide/serving/fastapi/" rel="noopener noreferrer"&gt;REST API applications&lt;/a&gt; and &lt;a href="https://mlem.ai/doc/user-guide/serving/streamlit/" rel="noopener noreferrer"&gt;Streamlit apps&lt;/a&gt;, building &lt;a href="https://mlem.ai/doc/user-guide/building/docker/" rel="noopener noreferrer"&gt;Docker images&lt;/a&gt; and &lt;a href="https://mlem.ai/doc/user-guide/building/pip/" rel="noopener noreferrer"&gt;Python packages&lt;/a&gt;, and deploying them to &lt;a href="https://mlem.ai/doc/user-guide/deploying/heroku/" rel="noopener noreferrer"&gt;Heroku&lt;/a&gt;, &lt;a href="https://iterative.ai/blog/mlem-cv-model-deployment/" rel="noopener noreferrer"&gt;Fly.io&lt;/a&gt;, &lt;a href="https://mlem.ai/doc/user-guide/deploying/kubernetes/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, and &lt;a href="https://mlem.ai/doc/user-guide/deploying/sagemaker/" rel="noopener noreferrer"&gt;AWS SageMaker&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Or just &lt;a href="https://mlem.ai/doc/get-started/" rel="noopener noreferrer"&gt;go here&lt;/a&gt; to get a crash course :)&lt;/p&gt;

</description>
      <category>welcome</category>
      <category>career</category>
      <category>softwaredevelopment</category>
      <category>sideprojects</category>
    </item>
  </channel>
</rss>
