<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: dstack</title>
    <description>The latest articles on Forem by dstack (@dstack).</description>
    <link>https://forem.com/dstack</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F6617%2Fe2451f23-6169-4f0d-b46d-560d5d3967a9.png</url>
      <title>Forem: dstack</title>
      <link>https://forem.com/dstack</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dstack"/>
    <language>en</language>
    <item>
      <title>Running Stable Diffusion Locally &amp; in Cloud with Diffusers &amp; dstack</title>
      <dc:creator>Andrey Cheptsov</dc:creator>
      <pubDate>Mon, 13 Feb 2023 12:04:13 +0000</pubDate>
      <link>https://forem.com/dstack/running-stable-diffusion-locally-in-cloud-with-diffusers-dstack-41n0</link>
      <guid>https://forem.com/dstack/running-stable-diffusion-locally-in-cloud-with-diffusers-dstack-41n0</guid>
<description>&lt;p&gt;By now, it is likely that everyone has heard of Stable Diffusion, a model capable of producing photo-realistic images from text. Thanks to the Diffusers library by Hugging Face, using this model is straightforward.&lt;/p&gt;

&lt;p&gt;However, organizing your project and dependencies to run it independently of the environment, whether locally or in the cloud, can still be a challenge.&lt;/p&gt;

&lt;p&gt;In this article, I'll show you how to solve this problem using &lt;a href="https://github.com/huggingface/diffusers/" rel="noopener noreferrer"&gt;&lt;code&gt;diffusers&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://github.com/dstackai/dstack" rel="noopener noreferrer"&gt;&lt;code&gt;dstack&lt;/code&gt;&lt;/a&gt;. We will create a script that uses a pretrained model to generate images from prompts, and we will see how effortless it is to run the script both locally and in the cloud. This setup speeds up local development and debugging while letting you switch to the cloud when additional resources are required.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;diffusers&lt;/code&gt; Python library provides an easy way to access a variety of pre-trained diffusion models published on Hugging Face, allowing you to perform inference tasks with ease.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;dstack&lt;/code&gt; tool lets you set up your ML workflows and their dependencies in code and run them either locally or in a cloud account you've set up. It takes care of creating and destroying cloud resources as needed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;Here are the Python libraries we will use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;diffusers
transformers
accelerate
scipy
ftfy
safetensors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We use the &lt;a href="https://github.com/huggingface/safetensors" rel="noopener noreferrer"&gt;&lt;code&gt;safetensors&lt;/code&gt;&lt;/a&gt; library to store tensors instead of pickle, as recommended by Hugging Face for better safety and speed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To ensure our scripts can run smoothly across all environments, let's include them in the &lt;code&gt;stable_diffusion/requirements.txt&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;You can also install these libraries locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; stable_diffusion/requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's install &lt;code&gt;dstack&lt;/code&gt; CLI too, since we'll use it locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;dstack &lt;span class="nt"&gt;--upgrade&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Downloading the pretrained model
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we will use the &lt;a href="https://huggingface.co/runwayml/stable-diffusion-v1-5" rel="noopener noreferrer"&gt;&lt;code&gt;runwayml/stable-diffusion-v1-5&lt;/code&gt;&lt;/a&gt; model, which is pretrained by Runway. You can explore the background story of this model on their blog. However, there is also a range of &lt;a href="https://huggingface.co/models?library=diffusers" rel="noopener noreferrer"&gt;other models to choose from&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's create the following &lt;code&gt;stable_diffusion/stable_diffusion.py&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;shutil&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;diffusers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StableDiffusionPipeline&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cache_folder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StableDiffusionPipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;runwayml/stable-diffusion-v1-5&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                                                              &lt;span class="n"&gt;return_cached_folder&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;shutil&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copytree&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cache_folder&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./models/runwayml/stable-diffusion-v1-5&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dirs_exist_ok&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; By default, &lt;code&gt;diffusers&lt;/code&gt; downloads the model to its own &lt;a href="https://huggingface.co/docs/datasets/cache" rel="noopener noreferrer"&gt;cache folder&lt;/a&gt; built using symlinks. Since dstack &lt;a href="https://github.com/dstackai/dstack/issues/180" rel="noopener noreferrer"&gt;doesn't support symlinks&lt;/a&gt; in artifacts, we're copying the model files to the local folder.&lt;/p&gt;
&lt;/blockquote&gt;
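&lt;p&gt;The copying works because &lt;code&gt;shutil.copytree&lt;/code&gt; follows symlinks by default (&lt;code&gt;symlinks=False&lt;/code&gt;), so the symlinked blobs in the cache are materialized as regular files in the destination. Here is a minimal, self-contained sketch of this behavior (the file names are made up for illustration):&lt;/p&gt;

```python
import shutil
import tempfile
from pathlib import Path

# Mimic the hub cache layout: a snapshot directory whose files
# are symlinks pointing at blob files stored elsewhere.
with tempfile.TemporaryDirectory() as tmp:
    blob = Path(tmp) / "blob.bin"
    blob.write_bytes(b"weights")

    cache = Path(tmp) / "cache"
    cache.mkdir()
    (cache / "model.bin").symlink_to(blob)

    # copytree resolves the symlink and copies the file contents.
    dst = Path(tmp) / "models"
    shutil.copytree(cache, dst, dirs_exist_ok=True)

    copied = dst / "model.bin"
    resolved_ok = copied.is_file() and not copied.is_symlink()
    contents = copied.read_bytes()

print(resolved_ok)  # True: the destination holds a regular file, not a symlink
```

&lt;p&gt;This is why the copied &lt;code&gt;./models&lt;/code&gt; folder can be saved as a dstack artifact even though artifacts don't support symlinks.&lt;/p&gt;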

&lt;p&gt;To run a script via dstack, it must be defined as a workflow via a YAML file under &lt;code&gt;.dstack/workflows&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;.dstack/workflows/stable_diffusion.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;workflows&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stable-diffusion&lt;/span&gt;
    &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pip install -r stable_diffusion/requirements.txt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;python stable_diffusion/stable_diffusion.py&lt;/span&gt;
    &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./models&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;16GB&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the workflow can be run anywhere via the &lt;code&gt;dstack&lt;/code&gt; CLI.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Before you run a workflow via &lt;code&gt;dstack&lt;/code&gt;, make sure your project is under Git with a remote configured (&lt;code&gt;git remote -v&lt;/code&gt; is not empty), and run the &lt;code&gt;dstack init&lt;/code&gt; command, which ensures that &lt;code&gt;dstack&lt;/code&gt; can access the repository.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's how to run a dstack workflow locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dstack run stable-diffusion
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run it, &lt;code&gt;dstack&lt;/code&gt; executes the script and saves the &lt;code&gt;./models&lt;/code&gt; folder as an artifact. After that, you can reuse this artifact in other workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attaching an interactive IDE
&lt;/h2&gt;

&lt;p&gt;Sometimes, before you can run a workflow, you may want to run code interactively, e.g. via an IDE or a notebook.&lt;/p&gt;

&lt;p&gt;Look at the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;workflows&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;code-stable&lt;/span&gt;
    &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;code&lt;/span&gt;
    &lt;span class="na"&gt;deps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;workflow&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stable-diffusion&lt;/span&gt;
    &lt;span class="na"&gt;setup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pip install -r stable_diffusion/requirements.txt&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;16GB&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the &lt;code&gt;code-stable&lt;/code&gt; workflow refers to the &lt;code&gt;stable-diffusion&lt;/code&gt; workflow as a dependency. Go ahead and run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dstack run code-stable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the output, you'll see a URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;RUN           WORKFLOW    SUBMITTED STATUS     TAG  BACKENDS
giant-sheep-0 code-stable now       Submitted       &lt;span class="nb"&gt;local

&lt;/span&gt;Provisioning… It may take up to a minute. ✓

To interrupt, press Ctrl+C.

Web UI available at http://127.0.0.1:49959/?folder&lt;span class="o"&gt;=&lt;/span&gt;%2Fworkflow&amp;amp;tkn&lt;span class="o"&gt;=&lt;/span&gt;1f6b1fcc1ac0424cb95eb74ae37ddbf7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Opening this URL launches VS Code attached to your workflow, with everything set up: the code, the pre-trained model, and the Python environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rwdon5wylglckyeez6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rwdon5wylglckyeez6i.png" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating images by given prompts
&lt;/h2&gt;

&lt;p&gt;Let's write a script that generates images using a pre-trained model and given prompts.&lt;/p&gt;

&lt;p&gt;Here's an example of the &lt;code&gt;stable_diffusion/prompt_stable.py&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;argparse&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pathlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Path&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;diffusers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StableDiffusionPipeline&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;parser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;argparse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ArgumentParser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;parser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_argument&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-P&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;append&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse_args&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;pipe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StableDiffusionPipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./models/runwayml/stable-diffusion-v1-5&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;local_files_only&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cuda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_available&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="n"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cuda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;images&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;

    &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./output&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mkdir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exist_ok&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
        &lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script loads the model from the local &lt;code&gt;./models/runwayml/stable-diffusion-v1-5&lt;/code&gt; folder, generates images based on the given prompts, and saves the resulting images to the local &lt;code&gt;./output&lt;/code&gt; folder.&lt;/p&gt;
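&lt;p&gt;Note that the &lt;code&gt;-P&lt;/code&gt;/&lt;code&gt;--prompt&lt;/code&gt; flag is declared with &lt;code&gt;action='append'&lt;/code&gt;, so it can be repeated to generate several images in a single run. A minimal sketch of how the repeated flags accumulate into a list:&lt;/p&gt;

```python
import argparse

# Same flag declaration as in prompt_stable.py: each -P appends to a list.
parser = argparse.ArgumentParser()
parser.add_argument("-P", "--prompt", action="append", required=True)

args = parser.parse_args(["-P", "cats in hats", "-P", "dogs in boots"])
prompts = args.prompt

print(prompts)  # ['cats in hats', 'dogs in boots']
```

&lt;p&gt;This is why you can pass &lt;code&gt;-P&lt;/code&gt; several times to &lt;code&gt;dstack run&lt;/code&gt; and have all the prompts reach the script.&lt;/p&gt;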

&lt;p&gt;To be able to run it via &lt;code&gt;dstack&lt;/code&gt;, let's define it in &lt;code&gt;.dstack/workflows/stable_diffusion.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;workflows&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prompt-stable&lt;/span&gt;
    &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
    &lt;span class="na"&gt;deps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;workflow&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stable-diffusion&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pip install -r stable_diffusion/requirements.txt&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;python stable_diffusion/prompt_stable.py ${{ run.args }}&lt;/span&gt;
    &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./output&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;16GB&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run this workflow, &lt;code&gt;dstack&lt;/code&gt; will mount the output artifacts from the &lt;code&gt;stable-diffusion&lt;/code&gt; workflow to the working directory. So, the model that was previously downloaded will be in the local &lt;code&gt;./models/runwayml/stable-diffusion-v1-5&lt;/code&gt; folder.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;code&gt;dstack run&lt;/code&gt; command lets you pass arguments to the workflow via &lt;code&gt;${{ run.args }}&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's run the workflow locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dstack run prompt-stable &lt;span class="nt"&gt;-P&lt;/span&gt; &lt;span class="s2"&gt;"cats in hats"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The output artifacts of local runs are stored under &lt;code&gt;~/.dstack/artifacts&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5rjdb00tom2t0sey1it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5rjdb00tom2t0sey1it.png" alt="An example of the prompt-stable workflow output." width="512" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring AWS as a remote
&lt;/h2&gt;

&lt;p&gt;By default, workflows in &lt;code&gt;dstack&lt;/code&gt; run locally. However, you can configure a remote to run your workflows.&lt;/p&gt;

&lt;p&gt;For instance, you can set up your AWS account as a remote to run workflows.&lt;/p&gt;

&lt;p&gt;To configure a remote, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dstack config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command prompts you to select an AWS profile for credentials, an AWS region for workflow execution, and an S3 bucket to store remote artifacts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS profile: default
AWS region: eu-west-1
S3 bucket: dstack-142421590066-eu-west-1
EC2 subnet: none
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Currently, &lt;code&gt;dstack&lt;/code&gt; only supports AWS as a remote backend. Support for GCP and Azure is expected in an upcoming release.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Running remotely
&lt;/h2&gt;

&lt;p&gt;Once a remote is configured, you can use the &lt;code&gt;--remote&lt;/code&gt; flag with the &lt;code&gt;dstack run&lt;/code&gt; command to run workflows remotely.&lt;/p&gt;

&lt;p&gt;Let's first run the &lt;code&gt;stable-diffusion&lt;/code&gt; workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dstack run stable-diffusion &lt;span class="nt"&gt;--remote&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; When you run a workflow remotely, &lt;code&gt;dstack&lt;/code&gt; automatically creates resources in the configured cloud, saves artifacts, and releases them once the workflow is finished.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you run a workflow remotely, you can configure the resources it requires: either via the &lt;code&gt;resources&lt;/code&gt; property in YAML or via the &lt;code&gt;dstack run&lt;/code&gt; command's arguments, such as &lt;code&gt;--gpu&lt;/code&gt;, &lt;code&gt;--gpu-name&lt;/code&gt;, etc.&lt;/p&gt;
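&lt;p&gt;For example, the GPU request could also live in YAML rather than on the command line. The sketch below extends the &lt;code&gt;resources&lt;/code&gt; sections shown earlier; the exact &lt;code&gt;gpu&lt;/code&gt; key is an assumption, so check the dstack documentation for the syntax supported by your version:&lt;/p&gt;

```yaml
workflows:
  - name: prompt-stable
    provider: bash
    # ... deps, commands, and artifacts as defined above ...
    resources:
      memory: 16GB
      gpu: 1  # assumed syntax; the equivalent of passing --gpu 1 to dstack run
```

&lt;p&gt;Keeping resource requirements in YAML makes runs reproducible, while CLI flags are convenient for one-off overrides.&lt;/p&gt;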

&lt;p&gt;Let's run the &lt;code&gt;prompt-stable&lt;/code&gt; workflow remotely and request a GPU:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dstack run prompt-stable &lt;span class="nt"&gt;--remote&lt;/span&gt; &lt;span class="nt"&gt;--gpu&lt;/span&gt; 1 &lt;span class="nt"&gt;-P&lt;/span&gt; &lt;span class="s2"&gt;"cats in hats"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; By default, &lt;code&gt;dstack&lt;/code&gt; picks the cheapest available machine that matches the resource requirements. For example, in AWS, if you request one GPU, it will use a &lt;code&gt;p2.xlarge&lt;/code&gt; instance with an NVIDIA Tesla K80 GPU.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you found this tutorial interesting, you can delve deeper by exploring the official documentation for &lt;a href="https://huggingface.co/docs/diffusers/index" rel="noopener noreferrer"&gt;&lt;code&gt;diffusers&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://docs.dstack.ai/" rel="noopener noreferrer"&gt;&lt;code&gt;dstack&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The source code for this tutorial can be found on &lt;a href="https://github.com/dstackai/dstack-examples/tree/main/stable_diffusion" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In an upcoming blog post, we will go beyond generating images and cover fine-tuning a Stable Diffusion model.&lt;/p&gt;

</description>
      <category>livestreaming</category>
      <category>indie</category>
      <category>indiegames</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
