<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ramzi A.</title>
    <description>The latest articles on Forem by Ramzi A. (@ralaruri).</description>
    <link>https://forem.com/ralaruri</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F380550%2Fcf6fb7d5-6c81-40d3-b9e6-111d49af1bc9.jpeg</url>
      <title>Forem: Ramzi A.</title>
      <link>https://forem.com/ralaruri</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ralaruri"/>
    <language>en</language>
    <item>
      <title>Bucket Storage on Railway with Go</title>
      <dc:creator>Ramzi A.</dc:creator>
      <pubDate>Tue, 14 Mar 2023 03:50:49 +0000</pubDate>
      <link>https://forem.com/ralaruri/bucket-storage-on-railway-with-go-2ah0</link>
      <guid>https://forem.com/ralaruri/bucket-storage-on-railway-with-go-2ah0</guid>
      <description>&lt;h2&gt;
  
  
  A little about Railway and what we are trying to solve
&lt;/h2&gt;

&lt;p&gt;&lt;a href="//Railway.app"&gt;Railway&lt;/a&gt; has changed how I will build any side project (and future start-up!). One of the best parts of building software is actually writing the code and solving the problems you set out to solve. But too often I feel like I am wrestling AWS or GCP configuration just to get things working in an automated way. That's time lost that could go into building your actual application. &lt;/p&gt;

&lt;p&gt;Railway has changed that for me. Deploying and adding resources like a database is seamless and serverless. I haven't been this excited about a new product in a long time, which is why I decided to write this blog post. Railway doesn't currently have native object storage, so I wanted to share one way to add object storage to your project. It's the least I can do for their generous free tier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites &amp;amp; Setup
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You have a Google Cloud account; you can sign up with a Gmail account at &lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt;.&lt;br&gt;
Why Google Cloud? There are already a lot of S3 and AWS guides out there, so I thought I would try something different. (If you are interested in an AWS guide let me know!)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have a &lt;a href="//Railway.app"&gt;Railway.app&lt;/a&gt; account. Their free tier is really generous and should be sufficient for this guide. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The way I like to use Railway is to connect it to an existing project on GitHub. In this case that is the object storage railway project &lt;a href="https://github.com/ralaruri/object-storage-railway" rel="noopener noreferrer"&gt;https://github.com/ralaruri/object-storage-railway&lt;/a&gt;. Every time you push a PR you can run a development deployment, and when you merge to your main branch you can push to production. &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/alyh7ttrpergzwnrrfk4.png" rel="noopener noreferrer"&gt;Add Project&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you want to skip the guide, check out the repo and deploy it to Railway on your personal account: &lt;a href="https://github.com/ralaruri/object-storage-railway" rel="noopener noreferrer"&gt;https://github.com/ralaruri/object-storage-railway&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Google Cloud setup
&lt;/h2&gt;

&lt;p&gt;We will create a project &amp;amp; service account in order to give our code access to create, read, write, update and delete buckets and their data.  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a new Project or use a current project you have in the UI. I created a Temporary project for this guide called &lt;code&gt;object-storage-railway&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz45wdoujux1r1itn1ndt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz45wdoujux1r1itn1ndt.png" alt="Adding or creating project"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to IAM &amp;amp; Admin Panel and create a service account. Click &lt;code&gt;+ CREATE SERVICE ACCOUNT&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjpsg9d8sftp6kylwb6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjpsg9d8sftp6kylwb6k.png" alt="Creating a Service Account"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name the Service account whatever you like mine is named: &lt;code&gt;bucket-creator-and-writer&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpa7vr9jifhrxs05282h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpa7vr9jifhrxs05282h.png" alt="Creating the service account"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add permissions to your service account. (Remember to give it the least amount of access needed, i.e. reading, writing and creating buckets.) In this guide I will keep it simple and give it &lt;code&gt;storage admin&lt;/code&gt; access, which allows it to create, update, read, write and delete storage. Go to &lt;code&gt;Permissions&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frizjhhdxb9qc6pwavasi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frizjhhdxb9qc6pwavasi.png" alt="Permissions"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then add storage admin access.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuie1a23xie5r5hwofx8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuie1a23xie5r5hwofx8p.png" alt="Storage Admin access"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a private JSON key and download it locally. &lt;strong&gt;Do not commit this to a public GitHub repo or share it anywhere. The keys shown in this guide belong to a test account and have already been deleted, so they are no longer valid!&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futyjbj3i0i62rv8lyrkp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futyjbj3i0i62rv8lyrkp.png" alt="Creating JSON"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the Service Account Locally or on Railway.
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Convert your JSON key file into a base64 string using this command in your terminal: &lt;code&gt;cat object-storage-railway-f5a8a25dd590.json|base64&lt;/code&gt;. It will print the base64-encoded key. &lt;strong&gt;Again, don't expose this publicly; this is a demo account for the purpose of this blog post and has already been deleted.&lt;/strong&gt; Save the output. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3q11ja4882cewe2rcyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3q11ja4882cewe2rcyz.png" alt="encoding the json in base64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This allows us to keep the key in our &lt;code&gt;.env&lt;/code&gt; locally or inside Railway's variable storage for the project, and decode it as needed at runtime.&lt;/p&gt;

&lt;p&gt;The base64 decoding in Go is part of the &lt;code&gt;bucket_creater.go&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

func CovertStringToJSON(env_details string) []byte {
    decoded_json, err := base64.StdEncoding.DecodeString(env_details)
    if err != nil {
        panic(err)
    }

    return decoded_json

}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;For your project you can access variables in the Railway dashboard; here we will add the base64-encoded key as a variable in our Railway project. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Adding Key &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9x94labmdkxqrx8kpc12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9x94labmdkxqrx8kpc12.png" alt="Adding Key"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key added&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqiwmf1a56vfvhjqx2p2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqiwmf1a56vfvhjqx2p2.png" alt="Key Added"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, you will want to create a variable called &lt;code&gt;ENV=PROD&lt;/code&gt;; this lets us use Railway as our production env while locally we can use our .env if we choose to. &lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Code:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Establishing a connection to GCP. 
We are using the Google Cloud library in Go to create a bucket. Depending on whether you are running locally or on Railway, we use either the local variables in your .env or the variables on Railway. This allows us to test locally if we need to.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This client is the underlying handle for any GCP service; in our case we are using it to create, read and write to a bucket.&lt;/p&gt;

&lt;p&gt;in &lt;code&gt;create_bucket.go&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

func clientConnection() *storage.Client {
    ctx := context.Background()

    // Set up credentials to authenticate with Google Cloud
    var bucketVar string

    if env := os.Getenv("ENV"); env != "PROD" {
        bucketVar = utils.GoDotEnvVariable("./.env", "BUCKET_CREATOR")
    } else {
        bucketVar = os.Getenv("BUCKET_CREATOR")
    }

    new_creds := CovertStringToJSON(bucketVar)
    creds := option.WithCredentialsJSON(new_creds)

    // Create a client to interact with the Google Cloud Storage API
    client, err := storage.NewClient(ctx, creds)
    if err != nil {
        log.Fatalf("Failed to create client: %v", err)
    }

    return client

}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Creating a Bucket&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can now use the client connection to create a bucket by providing just the bucket name and the project we are using.&lt;/p&gt;

&lt;p&gt;in &lt;code&gt;create_bucket.go&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

func CreateBucket(projectID string, bucketName string) {

    ctx := context.Background()

    client := clientConnection()

    // Create a new bucket in the given project with the given name; if the
    // bucket already exists, the error is printed and we return early.
    if err := client.Bucket(bucketName).Create(ctx, projectID, nil); err != nil {
        fmt.Printf("Failed to create bucket %q: %v\n", bucketName, err)
        return
    }

    fmt.Printf("Bucket %q created.\n", bucketName)
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;in &lt;code&gt;main.go&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

buckets.CreateBucket("object-storage-railway", "food-bucket-dev")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Reading and Writing Data&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are two operations we care about: writing and reading. The two functions take the name of the bucket and the file path of the data you are writing to the bucket. The path of the object inside the bucket does not need to match the local file path you read the file from. &lt;/p&gt;

&lt;p&gt;in &lt;code&gt;bucket_operator.go&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


func WriteToBucket(bucket_name string, file_path string) {

    ctx := context.Background()

    // Set up credentials to authenticate with Google Cloud

    client := clientConnection()

    bucketName := bucket_name
    filePath := file_path

    file, err := os.Open(filePath)

    if err != nil {
        log.Fatalf("Failed to open file: %v", err)

    }

    defer file.Close()

    // timestamp the file
    today := time.Now().Format("2006_01_02")

    // write the date part of the file path name
    filePath = fmt.Sprintf("%s_%s", today, filePath)

    // Create a new writer for the file in the bucket
    writer := client.Bucket(bucketName).Object(filePath).NewWriter(ctx)

    // Copy the content
    if _, err := io.Copy(writer, file); err != nil {
        log.Fatalf("Failed to write file to bucket: %v", err)
    }

    // Close the writer to flush the contents to the bucket
    if err := writer.Close(); err != nil {
        log.Fatalf("Failed to close writer %v", err)

    }

    log.Printf("File %q uploaded to bucket %q. \n", filePath, bucketName)

}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note the date in the read path below is hardcoded and will need to be changed to match the date the file was uploaded.&lt;/p&gt;

&lt;p&gt;in &lt;code&gt;main.go&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

buckets.WriteToBucket("food-bucket-dev", "cheese.json")
buckets.ReadFromBucket("food-bucket-dev","2023_03_12_cheese.json")


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Creating Mock Data:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The purpose of this is just to create some mock data to use for the project. We create a JSON file of different cheeses. &lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
// write the cheeseMap to a json file
func WriteCheeseMapToJSONFile() {
    cheeseMap := map[int]string{100: "chedder", 200: "swiss", 300: "gouda", 400: "mozzarella"}
    // create a file
    file, err := os.Create("cheese.json")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    // create a json encoder
    encoder := json.NewEncoder(file)
    encoder.SetIndent("", "  ")

    // write the cheeseMap to the file
    err = encoder.Encode(cheeseMap)
    if err != nil {
        log.Fatal(err)
    }
}

func DeleteJsonFile() {
    err := os.Remove("cheese.json")
    if err != nil {
        log.Fatal(err)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Deploying and Checking out Deployment
&lt;/h2&gt;

&lt;p&gt;As mentioned before, deploying on Railway is as simple as pushing a PR or manually clicking the deploy button on a project. &lt;/p&gt;

&lt;p&gt;From there your deployment will run and you can check the logs that everything ran successfully. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj09pesjtb1vvzgmmeqry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj09pesjtb1vvzgmmeqry.png" alt="Deploying to Railway"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Logs should show the JSON and the bucket creation: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhqj7kcmsgb2uapn38n0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhqj7kcmsgb2uapn38n0w.png" alt="Logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And we can check the Cloud Storage panel in GCP, which shows it was a success!&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubxqebq2wgeqdbmy7fcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubxqebq2wgeqdbmy7fcf.png" alt="GCP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have now established a way to set up object storage that you can use with your Railway project. I created this for a personal project I am working on where I needed to store some files. As I mentioned before, I am a huge fan of Railway and am looking forward to creating more content and sharing my next personal project with everyone on Railway!&lt;/p&gt;

&lt;p&gt;If you enjoyed this post let me know! I am happy to create more guides using different languages (Python, Typescript, even Rust!)&lt;/p&gt;

</description>
      <category>go</category>
      <category>googlecloud</category>
      <category>api</category>
    </item>
    <item>
      <title>Local Development Environments for Python using asdf &amp; Poetry</title>
      <dc:creator>Ramzi A.</dc:creator>
      <pubDate>Mon, 06 Mar 2023 21:04:58 +0000</pubDate>
      <link>https://forem.com/ralaruri/local-development-environments-for-python-using-asdf-poetry-1d9l</link>
      <guid>https://forem.com/ralaruri/local-development-environments-for-python-using-asdf-poetry-1d9l</guid>
      <description>&lt;p&gt;In this &lt;a href="https://dev.to/ralaruri/local-development-environments-for-go-using-asdf-51d8"&gt;blog post&lt;/a&gt; I introduced my general setup of how I use asdf for local development for languages. That was meant as a dip into the methodology in order to ramp us up for where I see the real benefit using this for Python &amp;amp; virtual environments.&lt;/p&gt;

&lt;p&gt;We will go through almost exactly the same setup until we add Poetry, and I'll walk you through it step by step. &lt;/p&gt;

&lt;h3&gt;
  
  
  Assumptions:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;You have &lt;a href="https://brew.sh/" rel="noopener noreferrer"&gt;homebrew&lt;/a&gt; installed&lt;/li&gt;
&lt;li&gt;You are using MacOS or Linux.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Pre-reqs:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;asdf: Installing &lt;a href="https://asdf-vm.com/guide/getting-started.html" rel="noopener noreferrer"&gt;asdf&lt;/a&gt; this is how you can manage different versions of languages and use them across projects &lt;/li&gt;
&lt;li&gt;Installing &lt;a href="https://python-poetry.org/" rel="noopener noreferrer"&gt;Poetry&lt;/a&gt; (for Python only), a way to manage dependencies &amp;amp; virtual environments in Python. This was shown to me by a Data Scientist at work and I would never go back to using any other method.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Install asdf &amp;amp; add it to your shell (I use zsh, so adjust if you use bash or something else).
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;brew install asdf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;echo -e "\n. $(brew --prefix asdf)/libexec/asdf.sh" &amp;gt;&amp;gt; ~/.zshrc&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 2: Python Setup
&lt;/h2&gt;

&lt;p&gt;In asdf the order of operations is: 1. add a language plugin, 2. install a version (or versions), and 3. set the language version locally or globally for your project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Language Plugin
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To set up asdf for Python run &lt;code&gt;asdf plugin add python&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;To install the latest Python version run &lt;code&gt;asdf install python latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;OR install a specific version by running &lt;code&gt;asdf install python 3.10.0&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For Python in my organization we recommend using Python version 3.n-1. So if the latest is 3.11 we would use 3.10 in production.&lt;/li&gt;
&lt;li&gt;In the case below I already have both installed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4rq2aqt8dvmb2e4my77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4rq2aqt8dvmb2e4my77.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Local Version
&lt;/h3&gt;

&lt;p&gt;In this case I have a hello-python directory in which I want to use Python 3.10. To do that you would run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;asdf local python 3.10.0&lt;/code&gt;
This is really important: you want to keep your Python versions separate per repo so you are using the correct version for each project. This makes it easier to manage dependencies when we introduce Poetry.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An additional note: you will only have to do this once per directory, unless you want to change the version number.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvmam3cqh4y4x25n1a9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvmam3cqh4y4x25n1a9y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
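&lt;p&gt;Under the hood, &lt;code&gt;asdf local&lt;/code&gt; just writes a &lt;code&gt;.tool-versions&lt;/code&gt; file in the directory, which you can commit so collaborators pick up the same version:&lt;/p&gt;

```
python 3.10.0
```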

&lt;h2&gt;
  
  
  Step 3: Setting up Poetry
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You should have Poetry installed &lt;em&gt;&lt;/em&gt;&amp;amp; its path set up in your shell. &lt;/li&gt;
&lt;li&gt;Make sure you're in the directory of the project you want to set up the virtual environment for. &lt;/li&gt;
&lt;li&gt;Poetry is like venv for Python; the difference is it also resolves dependency issues, building a .lock &amp;amp; .toml file. &lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;poetry init&lt;/code&gt; and start the walkthrough in the CLI. &lt;/li&gt;
&lt;li&gt;You can add the name of the project, the license, the Python versions and even the packages. Packages can also be added after the generation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5y4uvsu3burxz43m7ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5y4uvsu3burxz43m7ad.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
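&lt;p&gt;The generated file looks roughly like this; the name, author and versions below are illustrative, not what the walkthrough will produce verbatim:&lt;/p&gt;

```
[tool.poetry]
name = "hello-python"
version = "0.1.0"
description = ""
authors = ["Ramzi"]

[tool.poetry.dependencies]
python = "^3.10"
pandas = "^1.5"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```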

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Once you have gone through everything you can confirm the generation; this will create a &lt;code&gt;.toml&lt;/code&gt; file &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i249oca5lupf4sstakw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i249oca5lupf4sstakw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can now install all the dependencies from the &lt;code&gt;pyproject.toml&lt;/code&gt; by running&lt;br&gt;
&lt;code&gt;poetry install&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This creates a virtual environment using the Python version you set locally and installs all the dependencies. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7n7fniv138sgjhso27f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7n7fniv138sgjhso27f.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The beauty of Poetry is that if you have a version conflict between packages it will attempt to resolve it by upgrading/downgrading packages based on the constraints you set for the project. For example, if you have numpy 1.20 and a pandas 0.16 that might not be compatible, it will resolve the issue by upgrading pandas based on the constraints you set in the &lt;code&gt;pyproject.toml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 4: Using the Virtual Env &amp;amp; Poetry
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;To add packages, run &lt;code&gt;poetry add pandas&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;To remove packages, run &lt;code&gt;poetry remove pandas&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;To enter the virtual environment, run &lt;code&gt;poetry shell&lt;/code&gt; and then &lt;code&gt;python hello.py&lt;/code&gt;; or run your files directly with &lt;code&gt;poetry run python hello.py&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcv9r442iqtay59f1khq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcv9r442iqtay59f1khq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow4j4yjox4bx05cl7rt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow4j4yjox4bx05cl7rt1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tips:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don't name your project the same as a Python package; there can be naming conflicts, i.e. your project is named pandas and you are adding pandas. &lt;/li&gt;
&lt;li&gt;You can adjust the &lt;code&gt;pyproject.toml&lt;/code&gt; and reinstall the packages in your virtual env&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Local Development Environments for Go using asdf.</title>
      <dc:creator>Ramzi A.</dc:creator>
      <pubDate>Mon, 06 Mar 2023 01:40:49 +0000</pubDate>
      <link>https://forem.com/ralaruri/local-development-environments-for-go-using-asdf-51d8</link>
      <guid>https://forem.com/ralaruri/local-development-environments-for-go-using-asdf-51d8</guid>
      <description>&lt;p&gt;Over the years I have dealt with different ways to setup a project for Python &amp;amp; Go. I am sharing how I setup my personal and work projects for extensibility and modularity.&lt;/p&gt;

&lt;p&gt;There are a lot of ways to setup Python dev environments and Go but this is what works best for me and how I get higher adoption at work for my projects. (This can be extended for any language btw Node or Rust even). &lt;/p&gt;

&lt;p&gt;I will write up how I setup Go projects in this post &amp;amp; then follow up with a Python version which adopts a lot the same concepts. &lt;/p&gt;

&lt;h3&gt;
  
  
  Assumptions:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;You have &lt;a href="https://brew.sh/" rel="noopener noreferrer"&gt;homebrew&lt;/a&gt; installed&lt;/li&gt;
&lt;li&gt;You are using MacOS or Linux.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Pre-reqs:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;asdf: Installing &lt;a href="https://asdf-vm.com/guide/getting-started.html" rel="noopener noreferrer"&gt;asdf&lt;/a&gt; is how you can manage different versions of languages and use them across projects. (&lt;a href="https://python-poetry.org/" rel="noopener noreferrer"&gt;Poetry&lt;/a&gt; is for Python only and will become relevant in the Python blog post.)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Install asdf &amp;amp; add it to your shell (I use zsh, so adjust if you use bash or something else).
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;brew install asdf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;echo -e "\n. $(brew --prefix asdf)/libexec/asdf.sh" &amp;gt;&amp;gt; ~/.zshrc&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 2: Go Setup
&lt;/h2&gt;

&lt;p&gt;In asdf the order of operations is: 1. add a language plugin, 2. install a version (or versions), and 3. set the language locally or globally for your projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Language Plugin
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To set up asdf for Go, run &lt;code&gt;asdf plugin add golang&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;To add the latest Go version, run &lt;code&gt;asdf install golang latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Or install a specific version by running &lt;code&gt;asdf install golang 1.19&lt;/code&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setup Local &amp;amp; Global Version
&lt;/h3&gt;

&lt;p&gt;In this case I have a hello-go directory where I want to use Go 1.19. To do that, run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;asdf local golang 1.19&lt;/code&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohmgc2u1huz77ijvepkv.png" alt="Setting Local install"&gt;
&lt;/li&gt;
&lt;/ul&gt;
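
&lt;p&gt;A sketch of what this does under the hood: &lt;code&gt;asdf local golang 1.19&lt;/code&gt; writes a &lt;code&gt;.tool-versions&lt;/code&gt; file into the current directory, which asdf reads to pick the version. Assuming only Go is pinned, it looks like this:&lt;/p&gt;

```
# .tool-versions in hello-go/ (created by `asdf local golang 1.19`)
golang 1.19
```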

&lt;h3&gt;
  
  
  Init a baseline project.
&lt;/h3&gt;

&lt;p&gt;Get the current directory and init a Go project&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;pwd&lt;/code&gt;, which will output the current directory: &lt;code&gt;/Users/ramzi/documents/projects/hello-go&lt;/code&gt; in my case. &lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;go mod init documents/projects/hello-go&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw6y84k3my5roskrnblg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw6y84k3my5roskrnblg.png" alt="Init the project"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create &lt;code&gt;hello.go&lt;/code&gt; file by running &lt;code&gt;touch hello.go&lt;/code&gt; and add in the following code: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijczr7q8wsp2q9vixv4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijczr7q8wsp2q9vixv4c.png" alt="Hello.go code"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;go run hello.go&lt;/code&gt; and you should see your output printed! &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsrqgzvtb1cakau3njgw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsrqgzvtb1cakau3njgw.png" alt="Printed Results"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
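
&lt;p&gt;If you would rather copy-paste than retype from the screenshot, a minimal &lt;code&gt;hello.go&lt;/code&gt; looks like this (the exact message in the image may differ):&lt;/p&gt;

```go
package main

import "fmt"

// message returns the text we print; split out so it is easy to test.
func message() string {
	return "Hello from hello-go!"
}

func main() {
	fmt.Println(message())
}
```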

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;The purpose of this is for when you need multiple versions of a language across different projects and want to keep those binaries separate. With asdf you can also set a global version (which I do not recommend); to do so, run &lt;code&gt;asdf global golang 1.19&lt;/code&gt;, for example. &lt;/p&gt;

&lt;p&gt;This is especially important for Python development, which I will cover in an upcoming blog post. &lt;/p&gt;

</description>
      <category>go</category>
      <category>setup</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Get Paid To Train Magic</title>
      <dc:creator>Ramzi A.</dc:creator>
      <pubDate>Sat, 25 Sep 2021 02:11:54 +0000</pubDate>
      <link>https://forem.com/ralaruri/get-paid-to-train-magic-c3p</link>
      <guid>https://forem.com/ralaruri/get-paid-to-train-magic-c3p</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizcj2yyjb10szcsxn58k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fizcj2yyjb10szcsxn58k.png" alt="Alt Text" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Grab a Wizard's Mind Bomb.&lt;/p&gt;

&lt;p&gt;Let's talk about magic training in old school Runescape.&lt;/p&gt;

&lt;p&gt;In old school &lt;a href="https://oldschool.runescape.com/" rel="noopener noreferrer"&gt;Runescape&lt;/a&gt;, a typical way to train your magic level is to cast the High Alchemy spell. &lt;/p&gt;

&lt;p&gt;The High Alchemy spell turns an item into gold pieces (gp, Runescape's currency). &lt;/p&gt;

&lt;p&gt;You find the item that returns the most profit based on the player economy. To optimize your magic training you generally train at a loss, and that is normally accepted. &lt;/p&gt;

&lt;h3&gt;
  
  
  Loss Scenario:
&lt;/h3&gt;

&lt;p&gt;The item you buy is 500gp, a nature rune is 200gp, and when you complete the high alchemy spell it turns the item into the average market value of 680gp.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your cost is 700gp&lt;/li&gt;
&lt;li&gt;Your return is 680gp&lt;/li&gt;
&lt;li&gt;Your loss is -20gp&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Gain Scenario:
&lt;/h3&gt;

&lt;p&gt;The item you buy is 500gp, a nature rune is 200gp and when you complete high alchemy spell the average market value of the item is 750gp. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your cost is 700gp&lt;/li&gt;
&lt;li&gt;Your return is 750gp&lt;/li&gt;
&lt;li&gt;Your gain is 50gp&lt;/li&gt;
&lt;/ul&gt;
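
&lt;p&gt;Both scenarios above are the same arithmetic, just with different market values. As a small sketch in Go, using the hypothetical prices from the scenarios:&lt;/p&gt;

```go
package main

import "fmt"

// alchProfit returns the net gp from one High Alchemy cast:
// what the spell pays out, minus the item cost and the nature rune cost.
func alchProfit(itemCost, natureRuneCost, alchValue int) int {
	return alchValue - (itemCost + natureRuneCost)
}

func main() {
	fmt.Println(alchProfit(500, 200, 680)) // loss scenario: -20
	fmt.Println(alchProfit(500, 200, 750)) // gain scenario: 50
}
```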

&lt;p&gt;The market is dynamic and can change in an instant, though, so you have to watch for items that are ideal for turning a profit. This can be a full-time job in itself if you really want to commit to it. &lt;/p&gt;

&lt;p&gt;But not with my new startup, which eliminates the guesswork and saves you time. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="//www.hialchemy.com"&gt;hialchemy.com&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Hialchemy is one of the first movers in the virtual fintech world. Actually, Hialchemy coined the term virtual fintech (all puns intended). &lt;/p&gt;

&lt;p&gt;I sort the top 100 items with the highest return value based on last purchase price. Or sort by item name, price, whatever you desire. We always show the most recent Nature rune price to help you gauge how much casts will cost. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sh48pw4ixfbcns9v1tk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sh48pw4ixfbcns9v1tk.png" alt="Screen Shot 2021-09-24 at 9.53.20 PM" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or you can search for any item and see its return value. On top of that, it calculates the cost, experience, and time it will take for the number of alchs you want to perform!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku9jg1lfsnto3gwxl6ab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku9jg1lfsnto3gwxl6ab.png" alt="Screen Shot 2021-09-24 at 9.53.44 PM" width="800" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Frontend:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://streamlit.io/" rel="noopener noreferrer"&gt;Streamlit.io&lt;/a&gt; a really cool python library I found that helps create data tools really quickly to put on the web. It was the quickest way to make an MVP for me so I chose this and focused primarily on the backend.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backend:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Databases/Storage:
&lt;/h4&gt;

&lt;p&gt;The primary database is Firebase, a NoSQL document store. I chose it because it was something I had never used before and I wanted to explore how it worked. &lt;/p&gt;

&lt;p&gt;I use GCP buckets to store historical JSON snapshots of the data, taken daily.&lt;/p&gt;

&lt;p&gt;In addition, I send all my data from Firebase to BigQuery (there is a built-in connection). All my data is transformed using &lt;a href="https://www.getdbt.com/" rel="noopener noreferrer"&gt;dbt&lt;/a&gt;. The data that gets fed into the frontend is actually the transformed data from dbt! &lt;/p&gt;

&lt;h4&gt;
  
  
  Orchestration:
&lt;/h4&gt;

&lt;p&gt;Pub/Sub is the main method of orchestration for all data. All my data pipelines are built in Python and triggered as Cloud Functions. They are simple JSON requests that pull API data or publish data to Firebase or a GCP bucket. &lt;/p&gt;

&lt;h4&gt;
  
  
  Deployment:
&lt;/h4&gt;

&lt;p&gt;I primarily use Terraform to deploy everything; for some pieces Cloud Run was easier to set up than Terraform. &lt;/p&gt;

&lt;h1&gt;
  
  
  What's Next?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;More frequent purchase updates &lt;/li&gt;
&lt;li&gt;Create an API using FastAPI &lt;/li&gt;
&lt;li&gt;Publish a deeper dive into the tech stack&lt;/li&gt;
&lt;li&gt;Rewrite frontend in Flask (maybe a javascript library?)&lt;/li&gt;
&lt;li&gt;Move backend historical data from the GCS bucket to PostgreSQL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Credits: &lt;br&gt;
I am a big fan of 10kdiver and took inspiration from his threads for this post. &lt;a href="https://10kdiver.com/twitter-threads/" rel="noopener noreferrer"&gt;https://10kdiver.com/twitter-threads/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>python</category>
      <category>startup</category>
      <category>gcp</category>
      <category>database</category>
    </item>
    <item>
      <title>Should I Take the Data Engineer Nano Degree?</title>
      <dc:creator>Ramzi A.</dc:creator>
      <pubDate>Wed, 12 Aug 2020 18:48:28 +0000</pubDate>
      <link>https://forem.com/ralaruri/should-i-take-the-data-engineer-nano-degree-22pc</link>
      <guid>https://forem.com/ralaruri/should-i-take-the-data-engineer-nano-degree-22pc</guid>
<description>&lt;p&gt;I just completed the Udacity Data Engineering program and wanted to share my insights about it with other people thinking of taking it. You will use S3, Redshift, Spark, EMR, EC2, Airflow, Cassandra, &amp;amp; Postgres throughout the course, with a focus at the end on Airflow, Spark, S3 &amp;amp; Redshift. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbbcu7fa9g2jhljc0i988.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbbcu7fa9g2jhljc0i988.png" alt="Alt Text" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Five Modules you will go through
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Data Modeling in Apache Cassandra &amp;amp; Postgres&lt;/li&gt;
&lt;li&gt;Cloud Data Warehouses (AWS, S3, Redshift)&lt;/li&gt;
&lt;li&gt;Building a Data Lake with an ETL pipeline in Spark (EMR, Spark)&lt;/li&gt;
&lt;li&gt;Using Airflow to schedule and run data pipelines. &lt;/li&gt;
&lt;li&gt;An open-ended project similar to projects 3 and 4, or their template project with your own data.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Why I took it:
&lt;/h3&gt;

&lt;p&gt;I wanted to get better at data governance, data quality, and data pipelines so I could introduce these concepts in my current role, along with becoming more familiar with Airflow and AWS. Overall, to take more ownership of the data engineering I am doing now. &lt;/p&gt;

&lt;p&gt;Our current stack includes data loaders like Stitch and Fivetran, but there have been more and more instances where Stitch and Fivetran don't meet our needs for integrating all our data sources. This is where I saw a need to bring Airflow into the picture, to load data into our warehouse and also into other destinations used by our business users.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Experience:
&lt;/h3&gt;

&lt;p&gt;Coming into this program I felt confident enough in my SQL and Python skills to take this nano degree. I don't write Python as much as I would like to (maybe once a week), and there were cases where I struggled with the content because it used Python in unfamiliar scenarios, especially in the Airflow portion, but it's manageable if you just search for examples and read the Airflow documentation.&lt;/p&gt;

&lt;p&gt;There were a lot of issues with the course after module 2, to the point that a large chunk of my peers were complaining. They eventually added an extension to module 3 with new videos and sources that makes completing it a lot easier, but it should have been there from the start. Unfortunately I really struggled to complete this portion before they introduced the new videos and content, so I was stuck googling and reading through the AWS documentation to figure it out. &lt;/p&gt;

&lt;p&gt;This was a common theme after the 2nd module. A lot of the information and resources were outdated; I felt like I was troubleshooting and learning AWS IAM roles and AWS security permissions more than I was learning data engineering. These issues were only remedied when enough people complained. It felt like a half-finished program. Although, through the hardship, I got a lot better at understanding AWS and Airflow because of the limited information I had to pass each module.  &lt;/p&gt;

&lt;p&gt;Overall, though, I really enjoyed the program as exposure to these tools and concepts. The highlights for me were learning Spark and Airflow. I am comfortable enough now that I can stand up my own ecosystem where I send data to a warehouse from different sources, along with having tests on the data to ensure its quality. That was my overall goal in taking this nano degree. &lt;/p&gt;

&lt;h2&gt;
  
  
  Do I recommend this program?
&lt;/h2&gt;

&lt;p&gt;Do I recommend it? It depends on the work you do or want to do. &lt;/p&gt;

&lt;p&gt;With tools like dbt, dbt Cloud, Stitch, and Fivetran you can avoid a lot of headaches and stand up an analytics ecosystem relatively quickly. I don't think this nano degree is for you if that is your goal. Buying over building makes more sense 9/10 times, especially if you are a startup trying to gain insights quickly. &lt;/p&gt;

&lt;p&gt;But if you're building a backend for an application, or for purposes outside an analytics warehouse, then I would potentially take this course, with caution. &lt;/p&gt;

&lt;p&gt;I think a better step is to choose one of these technologies and focus on them. (Take a Udemy Course on Airflow first for example).&lt;/p&gt;

&lt;h3&gt;
  
  
  Overall Pros:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Learned AWS, Airflow, Spark&lt;/li&gt;
&lt;li&gt;Good background information on data modeling, traditional data schemas.&lt;/li&gt;
&lt;li&gt;Highlighted data quality and data governance, and how to introduce tests within your data pipeline.&lt;/li&gt;
&lt;li&gt;Great, helpful community&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Overall Cons:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Unclear directions and modules become a huge time sink&lt;/li&gt;
&lt;li&gt;Can burn through free AWS credits quickly. You spend roughly $200 USD a month on the course, and then maybe an additional $100-150 USD a month on AWS products trying to figure out the projects, due to the outdated documentation and resources provided. (Overall I had to spend an additional $320 over 3 months.)&lt;/li&gt;
&lt;li&gt;Spending more time troubleshooting problems with the course, or filling its gaps, than learning about data engineering. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall Rating: 3/5&lt;/p&gt;

</description>
      <category>database</category>
      <category>aws</category>
      <category>python</category>
      <category>sql</category>
    </item>
  </channel>
</rss>
