<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Oluwafemi Tairu</title>
    <description>The latest articles on Forem by Oluwafemi Tairu (@emmarex).</description>
    <link>https://forem.com/emmarex</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F318290%2F0d38a8e8-beb6-4452-98c7-7197622527bf.jpeg</url>
      <title>Forem: Oluwafemi Tairu</title>
      <link>https://forem.com/emmarex</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/emmarex"/>
    <language>en</language>
    <item>
      <title>AWS CloudFormation UPDATE_ROLLBACK_FAILED fix in production</title>
      <dc:creator>Oluwafemi Tairu</dc:creator>
      <pubDate>Mon, 01 Nov 2021 12:37:34 +0000</pubDate>
      <link>https://forem.com/emmarex/aws-cloudformation-updaterollbackfailed-fix-in-production-4203</link>
      <guid>https://forem.com/emmarex/aws-cloudformation-updaterollbackfailed-fix-in-production-4203</guid>
      <description>&lt;p&gt;Recently, while trying to deploy a serverless application, the pipeline failed thus putting the cloud formation stack in "&lt;strong&gt;UPDATE_ROLLBACK_FAILED&lt;/strong&gt;" state. This is due to an error in your pipeline. In my own case, I was trying to use a base layer - "&lt;strong&gt;arn:aws:lambda:us-east-1:770693421928:layer:klayers-python38-pandas:35&lt;/strong&gt;" which at the point of writing this article does not exist.&lt;/p&gt;

&lt;p&gt;If you are familiar with the AWS SAM CLI or CloudFormation generally, you will know that you won't be able to deploy a new update until this status changes to "&lt;strong&gt;UPDATE_ROLLBACK_COMPLETE&lt;/strong&gt;".&lt;/p&gt;
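&lt;p&gt;As a quick sanity check, the statuses that accept a new deployment can be expressed in a few lines of Python. This is only a sketch - the status names come from the CloudFormation documentation (the set is not exhaustive), and in practice you would read the live status with &lt;code&gt;aws cloudformation describe-stacks&lt;/code&gt;:&lt;/p&gt;

```python
# Stack statuses from which a new update can normally be started
# (per the CloudFormation stack status reference; not exhaustive).
UPDATABLE_STATUSES = {
    "CREATE_COMPLETE",
    "UPDATE_COMPLETE",
    "UPDATE_ROLLBACK_COMPLETE",
    "IMPORT_COMPLETE",
    "IMPORT_ROLLBACK_COMPLETE",
}

def can_deploy(stack_status):
    """Return True if a stack in this status accepts a new update."""
    return stack_status in UPDATABLE_STATUSES
```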

&lt;p&gt;Faced with this situation, you have two options (at least that I am aware of):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Attempt to complete the update rollback process.&lt;/li&gt;
&lt;li&gt;Delete the stack and create another.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The second option is simple, but you probably do not want to do this in production, as deleting the stack effectively shuts down your application.&lt;/p&gt;

&lt;p&gt;The first option, however, is much more advisable. Now, how do you complete the update rollback?&lt;/p&gt;

&lt;h2&gt;
  
  
  Using AWS CLI
&lt;/h2&gt;

&lt;p&gt;You can make use of the &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/cloudformation/continue-update-rollback.html" rel="noopener noreferrer"&gt;aws cloudformation continue-update-rollback&lt;/a&gt; command to complete your update rollback.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws cloudformation &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="nt"&gt;-update-rollback&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--stack-name&lt;/span&gt; STACK_NAME &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--resources-to-skip&lt;/span&gt; LIST_OF_RESOURCES


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Where &lt;strong&gt;STACK_NAME&lt;/strong&gt; is the name of your stack and &lt;strong&gt;LIST_OF_RESOURCES&lt;/strong&gt; is a space-separated list of the logical IDs of the resources you would like to skip. Please note that &lt;strong&gt;LIST_OF_RESOURCES&lt;/strong&gt; must only contain resources that are in the &lt;strong&gt;UPDATE_FAILED&lt;/strong&gt; state.&lt;/p&gt;

&lt;p&gt;Sample&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws cloudformation &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="nt"&gt;-update-rollback&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--stack-name&lt;/span&gt; eazido-app-stack &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--resources-to-skip&lt;/span&gt; CustomerApi PaymentApi 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
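&lt;p&gt;To find the logical IDs that are eligible for skipping, you can list the stack's resources and keep only those in &lt;strong&gt;UPDATE_FAILED&lt;/strong&gt;. A minimal sketch in Python - the sample data below is made up, and in practice you would feed it the parsed output of &lt;code&gt;aws cloudformation describe-stack-resources&lt;/code&gt;:&lt;/p&gt;

```python
def resources_to_skip(stack_resources):
    """Return logical IDs of resources stuck in UPDATE_FAILED."""
    return [
        r["LogicalResourceId"]
        for r in stack_resources
        if r["ResourceStatus"] == "UPDATE_FAILED"
    ]

# Hypothetical sample shaped like describe-stack-resources output.
sample = [
    {"LogicalResourceId": "CustomerApi", "ResourceStatus": "UPDATE_FAILED"},
    {"LogicalResourceId": "PaymentApi", "ResourceStatus": "UPDATE_FAILED"},
    {"LogicalResourceId": "UsersTable", "ResourceStatus": "UPDATE_COMPLETE"},
]
print(resources_to_skip(sample))  # ['CustomerApi', 'PaymentApi']
```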
&lt;h2&gt;
  
  
  Using AWS Console
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Visit the CloudFormation section of the AWS console - &lt;a href="https://console.aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/cloudformation/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Select the stack that requires rollback.&lt;/li&gt;
&lt;li&gt;Under the "&lt;strong&gt;Stack Action&lt;/strong&gt;" select "&lt;strong&gt;Continue update rollback&lt;/strong&gt;"
&lt;strong&gt;Note&lt;/strong&gt; If this update rollback still fails or you want to skip some resources, then select "&lt;strong&gt;Advanced troubleshooting&lt;/strong&gt;" on the "&lt;strong&gt;Continue update rollback&lt;/strong&gt;" dialog and tick the resources you will like to skip.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once this is done, your stack should now carry the &lt;strong&gt;UPDATE_ROLLBACK_COMPLETE&lt;/strong&gt; status. If you try to deploy your updates again, it should work just fine.&lt;/p&gt;

&lt;p&gt;However, remember that the update failed because of an error; you will need to identify and fix it before you deploy again. In my case, I was using a layer version that does not exist, so I had to update the layer by updating the stack template (we will talk about this in a bit). You can tell why your stack failed by checking the &lt;strong&gt;Status reason&lt;/strong&gt; column under the &lt;strong&gt;Events&lt;/strong&gt; tab of the failed stack.&lt;/p&gt;
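&lt;p&gt;That check can also be scripted. Given the stack's events (for example, the parsed output of &lt;code&gt;aws cloudformation describe-stack-events&lt;/code&gt;), here is a sketch that pulls out the failure reasons - the field names follow the CloudFormation event schema, and the sample event is made up:&lt;/p&gt;

```python
def failure_reasons(events):
    """Collect (logical ID, status reason) pairs for failed stack events."""
    return [
        (e["LogicalResourceId"], e.get("ResourceStatusReason", ""))
        for e in events
        if e["ResourceStatus"].endswith("_FAILED")
    ]

# Hypothetical events shaped like describe-stack-events output.
sample_events = [
    {
        "LogicalResourceId": "CustomerApi",
        "ResourceStatus": "UPDATE_FAILED",
        "ResourceStatusReason": "Layer version does not exist.",
    },
    {"LogicalResourceId": "PaymentApi", "ResourceStatus": "UPDATE_COMPLETE"},
]
print(failure_reasons(sample_events))
```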
&lt;h2&gt;
  
  
  Updating CloudFormation Template
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Select the stack whose template you want to update and select "&lt;strong&gt;View in designer&lt;/strong&gt;" under the "&lt;strong&gt;Template&lt;/strong&gt;" tab.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo11onicnmx1n547z0h64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo11onicnmx1n547z0h64.png" alt="Template tab of an AWS cloud formation stack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the "&lt;strong&gt;View in designer&lt;/strong&gt;"'s page loads, you should see a page similar to this.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnop13mkoe5g9tav3t0cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnop13mkoe5g9tav3t0cd.png" alt="AWS Cloud formation designer page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can get the template for a particular stack using the AWS CLI's &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/cloudformation/get-template.html" rel="noopener noreferrer"&gt;get-template&lt;/a&gt; command.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws cloudformation get-template &lt;span class="nt"&gt;--stack-name&lt;/span&gt; STACK_NAME


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;You can edit your template as a JSON or YAML file. Once you are done making changes, click the file icon in the top left corner of the page and click "&lt;strong&gt;Save&lt;/strong&gt;". You have the option of saving locally or to an S3 bucket. Once that is done, you can exit the page.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Creating and executing a Change Set
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Select the stack whose template you want to update.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under "&lt;strong&gt;Stack Actions&lt;/strong&gt;" select "&lt;strong&gt;Create change set for current stack&lt;/strong&gt;". You should see a page like this&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg434s19zruy3sfznkmx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg434s19zruy3sfznkmx.png" alt="AWS Create change set page"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on "&lt;strong&gt;Replace current template&lt;/strong&gt;", select a template source (local or s3) based on where you saved the edited template.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the change set and click &lt;strong&gt;Execute&lt;/strong&gt;. This will start updating the CloudFormation stack, and you can follow the progress on the &lt;strong&gt;Events&lt;/strong&gt; tab.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can update a particular stack using the AWS CLI's &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/cloudformation/update-stack.html" rel="noopener noreferrer"&gt;update-stack&lt;/a&gt; command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws cloudformation update-stack &lt;span class="nt"&gt;--stack-name&lt;/span&gt; STACK_NAME &lt;span class="nt"&gt;--template-url&lt;/span&gt; https://s3.amazonaws.com/sample/updated_template.template


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you are getting AWS CLI errors, you might want to take a look at the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-troubleshooting.html" rel="noopener noreferrer"&gt;AWS CLI troubleshooting guide&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>cloud</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>Deploying Machine learning models on Mobile with Tensorflow Lite and Firebase M.L Kit</title>
      <dc:creator>Oluwafemi Tairu</dc:creator>
      <pubDate>Thu, 02 Apr 2020 19:56:37 +0000</pubDate>
      <link>https://forem.com/emmarex/deploying-machine-learning-models-on-mobile-with-tensorflow-lite-and-firebase-m-l-kit-4647</link>
      <guid>https://forem.com/emmarex/deploying-machine-learning-models-on-mobile-with-tensorflow-lite-and-firebase-m-l-kit-4647</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/emmarex/a-thousand-ways-to-deploy-machine-learning-models-a-p-i-36eo"&gt;last article&lt;/a&gt;, I shared how to deploy Machine learning models via an A.P.I.&lt;/p&gt;

&lt;p&gt;In this article, I will share how to deploy models to mobile apps using Tensorflow Lite and Firebase M.L Kit.&lt;/p&gt;

&lt;p&gt;Deploying models via an A.P.I. is fine, but there are multiple reasons why that might not suit your needs or those of your organisation. Some of them include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Poor internet connection or no access to an internet connection for users of your app.&lt;/li&gt;
&lt;li&gt;Data Privacy.&lt;/li&gt;
&lt;li&gt;Need for faster inference.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Whatever your reason might be, changing your model availability from an A.P.I. to on-device won't cost you much time. Your previous code can stay the same, with only a few additional lines.&lt;/p&gt;

&lt;p&gt;Convert your previously saved model&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;

&lt;span class="c1"&gt;# Convert the model.
&lt;/span&gt;&lt;span class="n"&gt;converter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;lite&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TFLiteConverter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_saved_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MODEL_DIR&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;tflite_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;converter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;convert&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;plant_ai.tflite&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tflite_model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Convert new models&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;

&lt;span class="c1"&gt;# saving your deep learning model
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;plant_ai_model.h5&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Convert the model.
&lt;/span&gt;&lt;span class="n"&gt;converter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;lite&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TFLiteConverter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_keras_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;tflite_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;converter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;convert&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;plant_ai.tflite&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tflite_model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this article, I will be using a deep learning model for plant disease detection. I wrote about it &lt;a href="https://towardsdatascience.com/plant-ai-plant-disease-detection-using-convolutional-neural-network-9b58a96f2289" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
The modified code can be found &lt;a href="https://www.kaggle.com/emmarex/plant-a-i" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Integrating with a mobile app
&lt;/h4&gt;

&lt;p&gt;Our model is now ready for deployment to a mobile app. Bundling a model (in .tflite format) means you can perform prediction without an internet connection, which is good for some solutions. However, bundling your .tflite models with your app will increase the app size.&lt;/p&gt;

&lt;p&gt;For this article, I will be using &lt;a href="http://flutter.dev/" rel="noopener noreferrer"&gt;Flutter&lt;/a&gt;, but you can do the same using Java or Kotlin by following &lt;a href="https://firebase.google.com/docs/ml-kit/android/use-custom-models" rel="noopener noreferrer"&gt;Google's Documentation&lt;/a&gt;, or Swift or Objective-C with &lt;a href="https://firebase.google.com/docs/ml-kit/ios/use-custom-models" rel="noopener noreferrer"&gt;Google's Documentation&lt;/a&gt;. Flutter is Google’s UI toolkit for building beautiful, natively compiled applications for mobile, web, and desktop from a single codebase - &lt;a href="http://flutter.dev/" rel="noopener noreferrer"&gt;Flutter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are new to Flutter, you can follow &lt;a href="https://flutter.dev/docs/get-started/install" rel="noopener noreferrer"&gt;this link&lt;/a&gt; to get started with your first Flutter application. Your Flutter application structure should look like this after creation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fo0uoxbj32do24bnancq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fo0uoxbj32do24bnancq9.png" alt="Flutter app folder structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to install the necessary packages. These can be installed in Flutter simply by adding them to the &lt;code&gt;pubspec.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcxhj94k0vne7a6fenbla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcxhj94k0vne7a6fenbla.png" alt="pubspec.yaml"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following packages were used for this app:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://pub.dev/packages/mlkit" rel="noopener noreferrer"&gt;mlkit&lt;/a&gt;: A Flutter plugin to use the Firebase ML Kit.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pub.dev/packages/image_picker" rel="noopener noreferrer"&gt;image_picker&lt;/a&gt;: A Flutter plugin for iOS and Android for picking images from the image library, and taking new pictures with the camera.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pub.dev/packages/toast" rel="noopener noreferrer"&gt;toast&lt;/a&gt; (Optional): A Flutter Toast plugin.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, you will have to add Firebase to your Android or iOS project, following the guide &lt;a href="https://firebase.google.com/docs/android/setup" rel="noopener noreferrer"&gt;here&lt;/a&gt; for Android and the guide &lt;a href="https://firebase.google.com/docs/ios/setup" rel="noopener noreferrer"&gt;here&lt;/a&gt; for iOS. Do make sure you add the "google-services.json" file to your project folder.&lt;/p&gt;

&lt;p&gt;Next, we will create a folder called assets in our project's root directory and copy our .tflite model and labels text file into it.&lt;br&gt;
We then need to make these accessible in our Flutter project by modifying the &lt;code&gt;pubspec.yaml&lt;/code&gt; file as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuqxhoawc0c5f89d6yid3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuqxhoawc0c5f89d6yid3.png" alt="pubspec.yaml file showing the inclusion of assets folder"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will skip the interface design aspect, as you can use any interface for your app.&lt;/p&gt;

&lt;p&gt;Let's import the necessary packages (which we installed earlier)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'package:flutter/material.dart'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'dart:io'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'dart:typed_data'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'package:flutter/services.dart'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'package:image_picker/image_picker.dart'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'package:mlkit/mlkit.dart'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'package:toast/toast.dart'&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="s"&gt;'package:image/image.dart'&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will declare some variables and constants first at the beginning of our code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;FirebaseModelInterpreter&lt;/span&gt; &lt;span class="n"&gt;interpreter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;FirebaseModelInterpreter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;FirebaseModelManager&lt;/span&gt; &lt;span class="n"&gt;manager&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;FirebaseModelManager&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;File&lt;/span&gt; &lt;span class="n"&gt;selectedImageFile&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kt"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;modelLabels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="kt"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;double&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;double&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;imageDim&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kt"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;inputDims&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kt"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;outputDims&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first two lines show the instantiation of the Firebase model interpreter and manager. The &lt;code&gt;imageDim&lt;/code&gt; (height and width, 256x256) is the image size required by the model.&lt;br&gt;
&lt;code&gt;inputDims&lt;/code&gt; is the required input shape of our model, while &lt;code&gt;outputDims&lt;/code&gt; is the output shape. These values may vary depending on the model used. If someone else built the model, you can easily check them by running the Python code below.&lt;/p&gt;
&lt;h3&gt;
  
  
  Load TFLite model and allocate tensors.
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tensorflow&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;

&lt;span class="n"&gt;interpreter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;lite&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Interpreter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;plant_ai_lite_model.tflite&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# OR interpreter = tf.lite.Interpreter(model_content=tflite_model)
&lt;/span&gt;&lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;allocate_tensors&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Get input and output tensors.
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;input_details&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_input_details&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;output_details&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_output_details&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The value for &lt;code&gt;input_details&lt;/code&gt; in this case is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'name':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'conv&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="err"&gt;d_input'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'index':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'shape':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;array(&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;dtype&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;int&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'dtype':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;class&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'numpy.float&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="err"&gt;'&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'quantization':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;while that of the &lt;code&gt;output_details&lt;/code&gt; is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'name':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'Identity'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'index':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'shape':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;array(&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;dtype&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;int&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'dtype':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;class&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;'numpy.float&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="err"&gt;'&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;'quantization':&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Take note of the value of 'shape' in the two results above; it is what determines the values for 'inputDims' and 'outputDims'.&lt;/p&gt;
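The dump shown above is the Python-side printout of the interpreter's tensor details. As a minimal sketch of how the 'shape' entries map to the Dart dims, the stand-in dicts below mirror that output; the input shape and tensor names here are illustrative assumptions (only the output shape [1, 15] comes from the dump above):

```python
# Illustrative stand-ins for the interpreter's tensor-details output shown
# above. The input shape and names are assumptions, not the article's values.
input_details = [{"name": "input_1", "shape": [1, 100, 100, 3], "dtype": "float32"}]
output_details = [{"name": "Identity", "shape": [1, 15], "dtype": "float32"}]

def dims(details):
    """Return the 'shape' entry as a plain list, ready to use as the
    inputDims/outputDims passed to FirebaseModelIOOption in the Dart code."""
    return list(details[0]["shape"])

input_dims = dims(input_details)    # e.g. [1, 100, 100, 3]
output_dims = dims(output_details)  # e.g. [1, 15]
```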

&lt;p&gt;Next, we load the model and labels using the &lt;code&gt;FirebaseModelManager&lt;/code&gt;. Firebase ML Kit gives us the option of either loading a local model on the device or using one hosted in the cloud. You can find samples and more information about this &lt;a href="https://firebase.google.com/docs/ml-kit" rel="noopener noreferrer"&gt;here&lt;/a&gt;, or you can check my &lt;a href="https://github.com/Emmarex/Firebase-M.L-Kit-Flutter/blob/master/lib/Pages/FirebaseVisionTextPage.dart" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; for a simple example of using the Firebase Vision API to recognise text.&lt;/p&gt;

&lt;p&gt;For this example, I have used an on-device model loaded using &lt;code&gt;FirebaseLocalModelSource&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;manager&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;registerLocalModelSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;FirebaseLocalModelSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nl"&gt;assetFilePath:&lt;/span&gt; &lt;span class="s"&gt;"assets/models/plant_ai_lite_model.tflite"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nl"&gt;modelName:&lt;/span&gt; &lt;span class="s"&gt;"plant_ai_model"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;//     rootBundle.loadString('assets/models/labels_plant_ai.txt').then((string) {&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;_labels&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;removeLast&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;modelLabels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_labels&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's take a picture!&lt;br&gt;
Using the &lt;a href="https://pub.dev/packages/image_picker" rel="noopener noreferrer"&gt;image_picker&lt;/a&gt; Flutter package, the function below allows us to pick a picture from the phone's gallery.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;_pickPicture&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;ImagePicker&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;pickImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kn"&gt;source&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ImageSource&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;gallery&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
      &lt;span class="n"&gt;setState&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;selectedImageFile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="n"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;selectedImageFile&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The predict function is where we convert our image file into bytes and run the interpreter with the input and output options. The result will be a prediction, just as you would get when running the model in a Jupyter notebook or any other environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;imageToByteListFloat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;imageFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;imageDim&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;interpreter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nl"&gt;localModelName:&lt;/span&gt; &lt;span class="s"&gt;"plant_ai_model"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nl"&gt;inputOutputOptions:&lt;/span&gt; &lt;span class="n"&gt;FirebaseModelInputOutputOptions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="n"&gt;FirebaseModelIOOption&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;FirebaseModelDataType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;FLOAT32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;inputDims&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="n"&gt;FirebaseModelIOOption&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;FirebaseModelDataType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;FLOAT32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;outputDims&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nl"&gt;inputBytes:&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The results of the prediction can then be converted to label text as specified in 'labels.txt'.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;length&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;confidenceLevel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mf"&gt;2.55&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;confidenceLevel&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
              &lt;span class="n"&gt;predictions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;modelLabels&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;confidenceLevel&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;predictionKeys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;predictions&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;entries&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;predictionKeys&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;compareTo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="n"&gt;predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;double&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;fromEntries&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;predictionKeys&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The prediction results are now available for further use and/or display in your app, all without an internet connection and without sending any data from the client to a server.&lt;/p&gt;

&lt;p&gt;The code for this article can be found &lt;a href="https://github.com/Emmarex/AThousandWaysToDeployModels---tflite" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thanks for reading. 🤝&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
    </item>
    <item>
      <title>A thousand ways to deploy Machine learning models - A.P.I</title>
      <dc:creator>Oluwafemi Tairu</dc:creator>
      <pubDate>Fri, 24 Jan 2020 11:22:21 +0000</pubDate>
      <link>https://forem.com/emmarex/a-thousand-ways-to-deploy-machine-learning-models-a-p-i-36eo</link>
      <guid>https://forem.com/emmarex/a-thousand-ways-to-deploy-machine-learning-models-a-p-i-36eo</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally posted &lt;a href="https://towardsdatascience.com/a-thousand-ways-to-deploy-machine-learning-models-part-1-652b59ab92ae"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;“What use is a machine learning model if you don’t deploy it to production?” — Anonymous&lt;/p&gt;

&lt;p&gt;You have done great work building that awesome 99%-accurate machine learning model, but most of the time your work is not done until you deploy it. More often than not, our models will be integrated with existing web apps, mobile apps or other systems. How then do we make this happen?&lt;/p&gt;

&lt;p&gt;I said a thousand; I guess I have just a few. I am guessing you will have found the right one for you before you get past the first two or three. Or do you think there are more? Do let me know; let’s see if we can get to a thousand 😄.&lt;br&gt;
Let’s start: how do we deploy machine learning models or integrate them with other systems?&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Via an A.P.I
&lt;/h3&gt;

&lt;p&gt;This involves making your models accessible via an Application Programming Interface (A.P.I).&lt;br&gt;
First, I will be deploying a deep learning model built by &lt;a href="https://medium.com/u/10cf0dba197a"&gt;Rising Odegua&lt;/a&gt; to classify malaria cells. The notebook can be found &lt;a href="https://github.com/risenW/Disease_classifier/blob/master/model_training_files_and_output/notebooks/malaria-cells-classification-dsn-poster.ipynb"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Then I will also deploy a simple Nigerian movie review classifier model built by &lt;a href="https://medium.com/u/985e503f2ec5"&gt;Aminu Israel&lt;/a&gt;. The notebook can be found &lt;a href="https://github.com/AminuIsrael/NLP-movie-review-model/blob/master/main.ipynb"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Deployment of a deep learning model via an A.P.I
&lt;/h4&gt;

&lt;p&gt;After building and testing your deep learning model, the next step is to save it. This can be done by adding this line of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;cnn_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'model_name.h5'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can move your saved model to a folder accessible to your A.P.I code. I will be using Flask for deployment, but Django, Starlette or any other Python framework can be used. Here is what my folder structure looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_RAfM_hJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/3h9fam6zfmbqqw5sw8s3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_RAfM_hJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/3h9fam6zfmbqqw5sw8s3.png" alt="folder structure" width="880" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Coming over to our A.P.I folder (powered by Flask), the first thing you will want to do is install the requirements, which I saved in requirements.txt. Here is what it looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aL9TPOu7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/npwa43czjfhfrd3n3pi6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aL9TPOu7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/npwa43czjfhfrd3n3pi6.png" alt="requirements.text" width="880" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can install these requirements simply by running this in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install -r requirements.txt&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cHBUA9rM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vtl4wx8jfiaffc5v2mly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cHBUA9rM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vtl4wx8jfiaffc5v2mly.png" alt="Import required packages" width="880" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just as we preprocessed images before passing them to our neural network for training, we also preprocess every input image we collect via our A.P.I endpoint.&lt;br&gt;
On line 8 we convert the image collected via the A.P.I endpoint to an array. You will notice a slight difference from what you would normally do, and that is simply because of &lt;a href="https://werkzeug.palletsprojects.com/en/0.15.x/datastructures/#werkzeug.datastructures.FileStorage"&gt;Flask’s FileStorage data structure&lt;/a&gt;. The code on line 9 would do the same in a framework like Django, or if you are loading from a path on your machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pSEDd_wh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/00wqpi25htneayov9xc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pSEDd_wh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/00wqpi25htneayov9xc8.png" alt="Alt Text" width="880" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From lines 10–15 we get an RGB version of our image, resize it to 100x100 and convert it to a NumPy array. We also scale the image to the range [0, 1] and return a tuple containing True (if no error occurs) and an array with our image in it.&lt;/p&gt;
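The steps just described can be sketched roughly as follows. This is an assumption-laden sketch rather than the article's exact code: the function name, Pillow dependency and return convention are mine, though the RGB conversion, 100x100 resize and [0, 1] scaling follow the description above:

```python
import io

import numpy as np
from PIL import Image  # assumes Pillow is installed


def prepare_image(file_obj, target=(100, 100)):
    """Sketch of the preprocessing described above: convert to RGB,
    resize to 100x100, scale pixels to [0, 1] and add a batch dimension.
    Returns (True, array) on success and (False, None) on failure."""
    try:
        img = Image.open(file_obj).convert("RGB").resize(target)
        arr = np.asarray(img, dtype=np.float32) / 255.0
        return True, np.expand_dims(arr, axis=0)
    except Exception:
        return False, None
```

Because `Image.open` accepts any file-like object, the same function works whether `file_obj` is Flask's FileStorage or a path opened on disk.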

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_26YPD5Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/9zkjyr4xsq98bhackex5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_26YPD5Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/9zkjyr4xsq98bhackex5.png" alt="Alt Text" width="880" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows the function that performs the magic.&lt;br&gt;
Line 26 simply means that the function “classify_malaria_cells” would be executed when the “classify” endpoint is called.&lt;/p&gt;

&lt;p&gt;On line 29 we check whether the request contains an image file, then we preprocess that image using the helper function we created.&lt;br&gt;
The saved model can be loaded using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;keras.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_model&lt;/span&gt;
&lt;span class="c1"&gt;# OR
# from tensorflow.keras.models import load_model
&lt;/span&gt;&lt;span class="n"&gt;malaria_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;load_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MODEL_PATH&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From lines 34 to 39, we load the saved model, run a prediction to determine the class for the image, and obtain a confidence score for the prediction. On line 40 the result from the model is saved in a Python dictionary, which is sent back as a JSON response on line 60.&lt;/p&gt;
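Put together, that prediction step might look like the sketch below. The label names are an assumption (the actual class order depends on how your training data was encoded), and `model` stands for the Keras model returned by `load_model`:

```python
import numpy as np

# Assumed class order; check your training generator's class indices.
LABELS = ["Parasitized", "Uninfected"]


def classify(image_array, model):
    """Run a Keras-style model on a preprocessed image batch and build
    the response dictionary described above. Illustrative sketch only."""
    probs = model.predict(image_array)[0]
    idx = int(np.argmax(probs))
    return {"prediction": LABELS[idx], "accuracy": float(probs[idx])}
```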

&lt;p&gt;We end our A.P.I with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="err"&gt;“&lt;/span&gt;&lt;span class="n"&gt;__main__&lt;/span&gt;&lt;span class="err"&gt;”&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;flask_app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;debug&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;threaded&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, we have successfully built our A.P.I and it is ready for deployment to any cloud platform.&lt;/p&gt;

&lt;h4&gt;
  
  
  Deploying to Google Cloud
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Create a Google Cloud Account — &lt;a href="https://cloud.google.com/"&gt;https://cloud.google.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create a new Google Cloud Platform project. You can follow the steps &lt;a href="https://cloud.google.com/appengine/docs/standard/python3/building-app/creating-gcp-project"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Go to the root of this Flask project in your terminal and run:

&lt;code&gt;gcloud app deploy&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I have also added a Procfile for deployment to &lt;a href="https://www.heroku.com/"&gt;Heroku&lt;/a&gt;; just follow the steps &lt;a href="https://devcenter.heroku.com/articles/getting-started-with-python"&gt;here&lt;/a&gt; to deploy.&lt;/p&gt;
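For reference, a minimal Heroku Procfile for a Flask app usually declares a single web process. The module and app object names below (`main:flask_app`) are assumptions about this repo's layout, and gunicorn would need to be listed in requirements.txt:

```
web: gunicorn main:flask_app
```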

&lt;h4&gt;
  
  
  Deployment of a machine learning model via an A.P.I
&lt;/h4&gt;

&lt;p&gt;This is almost the same as with the deep learning model; however, saving your model here is quite different. You could save your model using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pickle&lt;/span&gt;
&lt;span class="n"&gt;FILENAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'filename.pkl'&lt;/span&gt;
&lt;span class="n"&gt;pickle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dump&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;trained_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;FILENAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'wb'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;joblib&lt;/span&gt;
&lt;span class="n"&gt;FILENAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'filename.joblib'&lt;/span&gt;
&lt;span class="n"&gt;joblib&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dump&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;trained_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;FILENAME&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is advisable to save models with &lt;strong&gt;joblib&lt;/strong&gt; rather than &lt;strong&gt;pickle&lt;/strong&gt; because of its efficiency on objects that carry large NumPy arrays internally.&lt;br&gt;
Just as we did with the deep learning model, save your model in a folder accessible to your Flask A.P.I code. Here is what my file structure looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ql-I5ALO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/sn85yzo3y8x9dzvw8qk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ql-I5ALO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/sn85yzo3y8x9dzvw8qk1.png" alt="File structure for movie review A.P.I using Flask" width="880" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our requirements are a little different here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sPXQEtHs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ew9pbpqcqvtq3j2cbqh7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sPXQEtHs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ew9pbpqcqvtq3j2cbqh7.png" alt="Requirements for running the Flask A.P.I for movie review" width="880" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can install these requirements the same way, by running this in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install -r requirements.txt&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Next, we import the required modules and initialise some variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--caH7Iy9V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/k9673jz29o1p0r1tl31j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--caH7Iy9V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/k9673jz29o1p0r1tl31j.png" alt="import required modules" width="880" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w6NmyliW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/5nr4irfolpa7fsiz776c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w6NmyliW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/5nr4irfolpa7fsiz776c.png" alt="Helper function for tokenization" width="880" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the helper function above, we split the sentence into words and remove the stop words loaded from the pickled file on line 10. On line 18, a &lt;a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html"&gt;HashingVectorizer&lt;/a&gt; is used to convert the tokenized words into a matrix. The output of this is a scipy.sparse matrix.&lt;/p&gt;
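A rough sketch of that helper, assuming scikit-learn is installed; the stop-word set and `n_features` value are placeholders (the article loads its stop words from a pickled file, and the real vectorizer settings must match those used at training time):

```python
from sklearn.feature_extraction.text import HashingVectorizer

# Placeholder stop words; the article loads these from a pickled file.
STOP_WORDS = {"the", "a", "an", "is", "it"}


def vectorize_review(sentence, n_features=2**10):
    """Drop stop words, then hash the remaining tokens into a fixed-width
    feature matrix (a scipy.sparse matrix, as noted above)."""
    words = [w for w in sentence.lower().split() if w not in STOP_WORDS]
    vectorizer = HashingVectorizer(n_features=n_features, alternate_sign=False)
    return vectorizer.transform([" ".join(words)])
```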

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ewntI5Kh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/d7i9u7ls7f9lkfaxipzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ewntI5Kh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/d7i9u7ls7f9lkfaxipzj.png" alt="Alt Text" width="880" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our movie review will be in text format. On line 24 we check whether the required form data has been sent, and on line 25 we assign a variable name to it. On line 28, we open and load our movie review classifier with pickle (you can achieve the same with joblib). On lines 29–36 we pass the vectorized movie review into the classifier for prediction, calculate a prediction probability score and create a Python dictionary to hold the results of our prediction. The output is then sent back as a JSON response on line 47.&lt;/p&gt;
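That predict-and-respond step can be sketched like this; the label mapping and response keys are illustrative assumptions, and `clf` stands for the unpickled classifier:

```python
# 'clf' is any scikit-learn-style classifier exposing predict/predict_proba;
# the label names and response keys here are illustrative assumptions.
LABELS = {0: "negative", 1: "positive"}


def review_response(clf, features):
    """Predict the review class, take the winning class probability and
    build the dictionary that is returned as JSON."""
    pred = int(clf.predict(features)[0])
    proba = float(max(clf.predict_proba(features)[0]))
    return {"sentiment": LABELS[pred], "probability": round(proba, 4)}
```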

&lt;p&gt;We have successfully built a movie review classification A.P.I and it is ready for deployment to any cloud platform. I have also added a config file for deployment to both &lt;a href="https://cloud.google.com/"&gt;Google Cloud&lt;/a&gt; and &lt;a href="https://www.heroku.com/"&gt;Heroku&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That’s all on model deployment via an A.P.I. You can find the code used in this article &lt;a href="https://github.com/Emmarex/AThousandWaysToDeployModels---A.P.I"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In my next article, I will share how to deploy machine learning models to Apps with TensorFlow Lite.&lt;/p&gt;

&lt;p&gt;Thanks for reading 👍&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>python</category>
    </item>
  </channel>
</rss>
