<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Daniel Sada 🐤</title>
    <description>The latest articles on Forem by Daniel Sada 🐤 (@danielsada).</description>
    <link>https://forem.com/danielsada</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F178826%2F814e9257-7bf0-4348-b04e-17b1ab848c21.jpeg</url>
      <title>Forem: Daniel Sada 🐤</title>
      <link>https://forem.com/danielsada</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/danielsada"/>
    <language>en</language>
    <item>
      <title>How to sleep at night having a cloud service: common Architecture Do's</title>
      <dc:creator>Daniel Sada 🐤</dc:creator>
      <pubDate>Tue, 12 Nov 2019 07:56:21 +0000</pubDate>
      <link>https://forem.com/danielsada/how-to-sleep-at-night-having-a-cloud-service-common-architecture-do-s-3di</link>
      <guid>https://forem.com/danielsada/how-to-sleep-at-night-having-a-cloud-service-common-architecture-do-s-3di</guid>
      <description>&lt;p&gt;You can see the original/latest &lt;a href="https://danielsada.tech/blog/cloud-services-dos/"&gt;up to date post in my blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Over my work on services of different scales, I've noticed a common pattern in some of these services that makes them easier to approach and causes fewer headaches for the engineers who handle them. When we deal with millions of users making requests all the time, across the world, a few things help a lot for people to sleep at night comfortably. This is a quick guide on how to &lt;a href="https://www.youtube.com/watch?v=b2F-DItXtZs"&gt;be web scale [meme]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is not a comprehensive list, but rather the things I've seen that actually &lt;strong&gt;help&lt;/strong&gt;, or have helped me in the past.&lt;/p&gt;

&lt;h1&gt;
  
  
  Easy Level
&lt;/h1&gt;

&lt;p&gt;These steps are relatively easy to implement but yield a high return on investment. If you aren't doing them yet, you'll be surprised how good quality of life is after you start adopting them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as Code.
&lt;/h2&gt;

&lt;p&gt;The first part of guaranteeing sleep is having Infrastructure as Code. That means you have a repeatable way of deploying your entire infrastructure. It sounds fancy, but in reality, we are saying in code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Deploy 100 VMs
- with ubuntu
- each one with 2GB Ram
- they'll have this code
- with these parameters
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
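As an illustrative sketch (not any real provisioning tool's API), the same idea can be expressed in a few lines of Python: the desired state is plain data that lives in source control, and a reconciler computes the difference against reality:

```python
# A toy "infrastructure as code" spec: the desired state is plain data,
# so it can be diffed and reverted in source control like any code.
DESIRED_STATE = {
    "vm_count": 100,
    "image": "ubuntu",
    "ram_gb": 2,
    "parameters": {"env": "prod"},
}

def plan(current_vms, desired=DESIRED_STATE):
    """Return how many VMs to add (positive) or remove (negative)."""
    return desired["vm_count"] - current_vms

# e.g. with 80 VMs currently running, the plan is to add 20 more
delta = plan(current_vms=80)
```

Real tools compute this "plan" across every resource type and then apply it for you; the principle is the same.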



&lt;p&gt;And you can track changes to the infrastructure and revert quickly via source control.&lt;/p&gt;

&lt;p&gt;Now, the modernist in me will say "We can use Kubernetes/Docker to do everything on this list!" You are correct, but for now, I'm going to err on the side of an easy explanation on this blog.&lt;/p&gt;

&lt;p&gt;If you are interested in this, you can check out &lt;a href="https://www.chef.io/"&gt;Chef&lt;/a&gt;, &lt;a href="https://puppet.com/"&gt;Puppet&lt;/a&gt; or &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Integration/Delivery
&lt;/h2&gt;

&lt;p&gt;Having a build and test pass run against each one of your pull requests is essential to building a scalable service. Even if the test pass is basic, it will at least guarantee that the code you are deploying compiles.&lt;/p&gt;

&lt;p&gt;Every time you do this step, you have to answer the question: &lt;strong&gt;Is my build going to compile, pass the tests I've set up, and remain valid?&lt;/strong&gt; This might seem like a low bar, but it catches a myriad of issues you wouldn't imagine.&lt;/p&gt;
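A CI gate boils down to running each build and test step and refusing the merge on the first failure. A minimal, hypothetical sketch in Python (your real pipeline would invoke your actual build and test commands):

```python
import subprocess
import sys

def ci_gate(steps):
    """Run each named build/test step; stop and report the first failure."""
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return f"FAILED: {name}"
    return "all checks passed"

# Hypothetical steps -- substitute your real build and test commands.
status = ci_gate([
    ("compile", [sys.executable, "-c", "print('build ok')"]),
    ("unit tests", [sys.executable, "-c", "assert 1 + 1 == 2"]),
])
```

Services like CircleCI or Jenkins are, at their core, this loop plus reporting those green checkmarks back to your pull request.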

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lZxKaj5U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/all-passed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lZxKaj5U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/all-passed.png" alt=""&gt;&lt;/a&gt;&lt;br&gt;
Nothing more beautiful than seeing those checkmarks.&lt;/p&gt;

&lt;p&gt;For this technology you can check out &lt;a href="https://github.com/"&gt;GitHub&lt;/a&gt;, &lt;a href="https://circleci.com/"&gt;CircleCI&lt;/a&gt; or &lt;a href="https://jenkins.io/"&gt;Jenkins&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Load balancers
&lt;/h2&gt;

&lt;p&gt;Ok, so you have your machines, or endpoints, but you really want a load balancer in front of them, both to spread the load evenly across all your nodes and to redirect traffic in case you have an outage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aPoKZqtQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/load-balancing.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aPoKZqtQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/load-balancing.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having load balancers at the start of your traffic is generally a good thing. A best practice is also having redundant load balancing so that you don't have a single point of failure.&lt;/p&gt;
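To make the idea concrete, here is a toy round-robin balancer in Python; real load balancers add health checks, weights, and connection draining on top of this core loop:

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: spreads requests evenly over its nodes."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._cycle = itertools.cycle(self.nodes)

    def pick(self):
        """Return the node that should serve the next request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
picks = [lb.pick() for _ in range(6)]
# each node receives an equal share of the 6 requests
```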

&lt;p&gt;Usually, load balancers are configured within your cloud provider, but if you know some good ones, leave them below in the comments.&lt;/p&gt;

&lt;h2&gt;
  
  
  RayIDs, Correlation IDs or UUIDs for requests.
&lt;/h2&gt;

&lt;p&gt;Have you ever gotten an error in an application that tells you something along the lines of &lt;strong&gt;Something went wrong, save this ID and send it to our support team&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SgvLbRou--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/ray-id.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SgvLbRou--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/ray-id.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;unique ID, correlation ID, RayID, or any of its variations, is a unique identifier which allows you to trace a request through its lifecycle&lt;/strong&gt;, therefore allowing someone to see the entire path of the request in the logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kRjXubtY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/rays-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kRjXubtY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/rays-1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the image above, the user makes a request to system A, A then talks to B, B talks to C, saves to X and then returns to A. &lt;/p&gt;

&lt;p&gt;If you were to remote into the VMs and try to trace the path (manually correlating which calls belong together), you'd go crazy. Having the unique identifier makes your life a lot easier. This is one of the easier things you can do in your service, and it will save you a lot of time as your service grows.&lt;/p&gt;
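A minimal sketch of the idea in Python (service names are hypothetical): generate the ID once at the edge, then pass it explicitly to every downstream call so each log line carries it:

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("service")

def handle_request(payload, correlation_id=None):
    """Entry point (system A): mint one ID per request, thread it through."""
    cid = correlation_id or str(uuid.uuid4())
    log.info("[%s] service A received request", cid)
    call_service_b(payload, cid)      # the same cid travels downstream
    return cid

def call_service_b(payload, cid):
    """Downstream call (system B) logs with the same correlation ID."""
    log.info("[%s] service B processing", cid)

cid = handle_request({"user": 42})
```

Grepping your logs for that one ID then reconstructs the whole A → B → C → X path in one search.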

&lt;h1&gt;
  
  
  Medium Level
&lt;/h1&gt;

&lt;p&gt;These are usually more complicated than the previous ones, but if you grab the right tools, they can be easy, and the ROI for small to medium companies is easy to justify.&lt;/p&gt;

&lt;h2&gt;
  
  
  Centralized logging
&lt;/h2&gt;

&lt;p&gt;Congratulations! You deployed 100 VMs. The next day, the CEO comes to you with an error he hit while testing the service. He gives you the correlation ID from above, but then you have to scramble through 100 machines to find which one failed. And it has to be solved before the presentation tomorrow.&lt;/p&gt;

&lt;p&gt;While that sounds like a fun endeavor, make sure you have one place to search your logs from. The way I've centralized my logs before is with the &lt;a href="https://www.elastic.co/what-is/elk-stack"&gt;ELK stack&lt;/a&gt;. Having log collection and searchability is going to really improve your experience hunting for that one unexpected log. Extra points if you can also generate charts and fun things like that.&lt;/p&gt;
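One prerequisite for searchable central logs is emitting them in a structured format. A small sketch using Python's standard logging module: a formatter that writes one JSON object per line, which a shipper in the ELK pipeline can then index by field:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, ready for a log shipper to index."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Format a sample record the way a configured handler would.
line = JsonFormatter().format(
    logging.LogRecord("api", logging.ERROR, __file__, 1,
                      "payment failed", None, None)
)
```

With fields instead of free text, "show me all ERROR lines from the `api` logger across all 100 VMs" becomes a single query.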

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h4qdTgks--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/elk-stack.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h4qdTgks--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/elk-stack.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring Agents
&lt;/h2&gt;

&lt;p&gt;Well, now that your service is deployed, you have to make sure it stays up! The best way to do that is to have some &lt;strong&gt;agents&lt;/strong&gt; running against your service, checking whether it's up and that common operations can be performed.&lt;/p&gt;

&lt;p&gt;In this step you have to answer: &lt;strong&gt;Is the build I deployed healthy and does it work fine?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I personally recommend &lt;a href="https://www.getpostman.com/"&gt;Postman&lt;/a&gt; for small to medium projects that need their APIs monitored and documented. But in general, you want to make sure you have a way of knowing when your service is down, and that it provides timely alerts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatic Autoscaling based on load
&lt;/h2&gt;

&lt;p&gt;This one is simple. If you have 1 VM serving requests and it's getting close to &amp;gt;80% memory usage, you might want to either grow the VM or add more VMs to your cluster. Having these operations done automatically is great for being elastic under load. But you always have to be careful about how much money you spend, and set sensible limits. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2qLwROpb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/auto-scaling.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2qLwROpb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/auto-scaling.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can configure auto-scaling in most cloud services, either by adding more machines (scaling out) or by moving to more powerful machines (scaling up).&lt;/p&gt;
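At its core, an autoscaling rule is a small decision function with hard limits so a traffic spike can't bankrupt you. A hypothetical sketch (thresholds and cap are illustrative):

```python
def scaling_decision(memory_pct, vm_count, max_vms=20):
    """Decide how many VMs to run, with a hard spending cap (max_vms)."""
    if memory_pct > 80 and vm_count < max_vms:
        return vm_count + 1          # scale out under memory pressure
    if memory_pct < 30 and vm_count > 1:
        return vm_count - 1          # scale in when idle, save money
    return vm_count                  # steady state: do nothing
```

Your cloud provider evaluates something like this on a schedule; the `max_vms` cap is the "sensible limit" that keeps the bill predictable.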

&lt;h2&gt;
  
  
  Experiment system
&lt;/h2&gt;

&lt;p&gt;Having a way to test out things on 1% of your users for an hour is a good way to deploy changes safely. You've seen these kinds of systems in action: Facebook will give you a different color or change the size of a font to see if that is more pleasing. This is also called A/B testing.&lt;/p&gt;

&lt;p&gt;Even releasing a new feature can be run as an experiment, which then determines how it's rolled out. What people don't realize is that you also get the ability to "recall" a feature or change configuration on the fly; given a feature that would take your service down, the ability to scale it back is amazing.&lt;/p&gt;
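A common way to assign a stable 1% of users is hashing the user ID together with the experiment name, so the same user always gets the same answer and different experiments get independent populations. A sketch (the scheme and names are illustrative, not any particular product's API):

```python
import hashlib

def in_experiment(user_id, experiment, pct=1.0):
    """Deterministically assign ~pct% of users to an experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000          # uniform bucket 0..9999
    return bucket < pct * 100                 # pct% of the buckets

# Roughly 1% of 100,000 users land in the experiment, and each user
# gets the same answer on every call -- no per-user state to store.
enrolled = sum(in_experiment(u, "blue-button", pct=1.0)
               for u in range(100_000))
```

Turning the experiment off (the "recall") is then just flipping `pct` to 0 in a config store, with no redeploy.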

&lt;h1&gt;
  
  
  Hard Level
&lt;/h1&gt;

&lt;p&gt;These are actually hard to implement, and you probably need a bit more resources to do them. So, for a small or medium company, these are going to be hard to push for. &lt;/p&gt;

&lt;h2&gt;
  
  
  Blue-Green deployments
&lt;/h2&gt;

&lt;p&gt;This is what I call the "Erlang" way of deploying. When Erlang started seeing widespread use, back when telephone companies connected people's calls, software switchboards were used to route phone calls. The main concern about the software in these switchboards was to never drop calls while upgrading the system. Erlang has a beautiful way of loading a new module without ever dropping the previous one.&lt;/p&gt;

&lt;p&gt;This step depends on you having a load balancer. Let's imagine you have a specific version N of your software, and you want to deploy version N+1. You &lt;strong&gt;could&lt;/strong&gt; just stop the service and deploy the next version at a convenient time for your users and eat some downtime, but let's say you have &lt;strong&gt;really&lt;/strong&gt; strict SLAs. Four 9's (99.99% availability) means you can &lt;em&gt;only&lt;/em&gt; be down about 52 minutes a year.&lt;/p&gt;

&lt;p&gt;If you really want to achieve that, you need to have two deployments at the same time, the one you have right now (N) and your next version (N+1). You point the load balancer to redirect a percentage of the traffic to the new version (N+1) while you actively monitor for regressions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w-oS3KQY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/nn1-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w-oS3KQY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/nn1-1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, we have our green deployment N, which is healthy! We are trying to move to the next version of this deployment. &lt;/p&gt;

&lt;p&gt;We first send a really small amount of traffic to see whether our N+1 deployment is working.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dqORgAdM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/nn1-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dqORgAdM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/nn1-2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, we keep verifying a set of automated checks until our rollout is complete. If you want to be &lt;em&gt;really really&lt;/em&gt; careful, you can also keep your N deployment around "forever" for a quick rollback in case of a bad regression.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iXPXlUCA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/nn1-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iXPXlUCA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/nn1-3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to go a level deeper, have every step of the blue-green deployment execute automatically.&lt;/p&gt;
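The traffic-splitting step can be sketched as a weighted coin flip at the load balancer. Here the proven N deployment keeps 95% of traffic while N+1 is under observation (the percentages and deployment names are illustrative, and the seeded RNG is only there to make the sketch reproducible):

```python
import random

def route(request_id, green_weight=0.95, rng=random.Random(0)):
    """Send a small fraction of requests to the new (N+1) deployment."""
    # request_id is unused here; a real router might hash it for stickiness.
    return "green-N" if rng.random() < green_weight else "blue-N+1"

counts = {"green-N": 0, "blue-N+1": 0}
for i in range(1000):
    counts[route(i)] += 1
# roughly 95% of requests stay on the proven N deployment
```

Ramping the rollout is then just moving `green_weight` from 0.95 toward 0.0 as the automated checks stay green, and snapping it back to 1.0 is the instant rollback.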

&lt;h2&gt;
  
  
  Anomaly Detection and automatic mitigations.
&lt;/h2&gt;

&lt;p&gt;Given that you have centralized logging, good log collection, and all the elements above, we can now be proactive about catching failures. We feed the metrics from our monitors and our logs into different charts and models, and we become able to anticipate when something is going to fail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PPIpKuD---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/anomaly-detection.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PPIpKuD---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://danielsada.tech/images/blog/cloud/anomaly-detection.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With anomaly detection you start looking into some of the "tells" of the service, whether a spike in CPU lets you know your hard drive is going to fail, or a spike in request count means you need to scale up. Those kinds of statistical insights will empower your service to be proactive.&lt;/p&gt;
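The simplest statistical version of this is flagging a reading that sits far outside the metric's recent distribution, e.g. a z-score test. A sketch with made-up CPU numbers:

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations away
    from the mean of its recent history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) > threshold * stdev

cpu_history = [40, 42, 41, 39, 43, 40, 41, 42]   # steady ~41% CPU
normal = is_anomalous(cpu_history, 44)   # within normal noise
spike = is_anomalous(cpu_history, 95)    # a tell: investigate / mitigate
```

Production systems replace this with seasonality-aware models, but the idea is the same: learn the metric's normal band, alert (or auto-mitigate) when it leaves it.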

&lt;p&gt;Once you have those analytics, you can scale on any dimension, proactively and reactively change machines, databases, connections or other resources. &lt;/p&gt;

&lt;p&gt;This requires a really good system, or ML prowess, which makes it more interesting in the sense that the investment is really high, and the return is high at massive scale.&lt;/p&gt;

&lt;h1&gt;
  
  
  This is it!
&lt;/h1&gt;

&lt;p&gt;I'm certainly not an expert in any of these, and I'm just starting my career, but this list of priorities per stage would have saved me a lot of headaches in the past.&lt;/p&gt;

&lt;p&gt;I'm really interested in hearing from you: what would you add to this list? Please do comment.&lt;/p&gt;

&lt;p&gt;This article is open source, feel free to make a &lt;a href="https://github.com/danielsada/danielsada.tech"&gt;PR in GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>devops</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Programs that have saved me 100+ hours by automating repetitive tasks</title>
      <dc:creator>Daniel Sada 🐤</dc:creator>
      <pubDate>Tue, 11 Jun 2019 04:08:56 +0000</pubDate>
      <link>https://forem.com/danielsada/programs-that-have-saved-me-100-hours-by-automating-repetitive-tasks-11ld</link>
      <guid>https://forem.com/danielsada/programs-that-have-saved-me-100-hours-by-automating-repetitive-tasks-11ld</guid>
      <description>&lt;p&gt;Along the year I've been working on several web platforms where repetitive tasks are usually the norm. From batch optimizing a thousand images, to changing from this obscure format to csv or json. What if you need to critically update a file in your client's and you aren't fancy enough to use some kind of continuous integration tool I'll give you some tips and tricks to be productive.&lt;/p&gt;

&lt;h1&gt;
  
  
  1. PhotoBulk
&lt;/h1&gt;

&lt;p&gt;A client comes by and dumps a 10 GB folder of pictures on you, each one 4000x4000 and weighing 30MB in JPEG format. The client needs all these images on the webpage tomorrow, watermarked and with specific names. As you mop tears from the floor, you read this guide and discover PhotoBulk for &lt;a href="https://www.eltima.com/products/bulk-photo-editor.html"&gt;Windows&lt;/a&gt; and &lt;a href="https://mac.eltima.com/bulk-image-editor.html"&gt;Mac&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q5M5xA7S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://phaven-prod.s3.amazonaws.com/files/image_part/asset/1850945/Q6oltvJvG4Kwgy_V23yBvDPOqW4/thumb_Screen_Shot_2017-03-15_at_5.46.19_PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q5M5xA7S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://phaven-prod.s3.amazonaws.com/files/image_part/asset/1850945/Q6oltvJvG4Kwgy_V23yBvDPOqW4/thumb_Screen_Shot_2017-03-15_at_5.46.19_PM.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PhotoBulk lets you resize, watermark, optimize and rename images in bulk, or in batches. This is one of the main tools that has saved me hours and hours, so I widely recommend it. I know some of these things could be done via the console, or via a Photoshop action. But this is way faster.&lt;/p&gt;

&lt;h1&gt;
  
  
  2. Regex and Sublime Text or VS Code
&lt;/h1&gt;

&lt;p&gt;The same client, not happy that you took 4 hours to do the shenanigans to the images and upload them, goes and asks you to add a palette of 200 colors, given as a PHP array, to complete the migration of their color palette to JavaScript.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/j2xXLmnnN8N2g/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/j2xXLmnnN8N2g/giphy.gif" alt=""&gt;&lt;/a&gt;&lt;br&gt;
Regex is so powerful for making fast changes to massive data that it has saved me countless hours on conversions and friends' tasks; it is worth learning. I never understood the power of regex until I used it in a text editor. Really amazing.&lt;/p&gt;
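As a concrete (made-up) example of the kind of conversion regex makes trivial, here a small PHP color array is turned into a JavaScript object with one regex substitution plus two literal replacements, shown with Python's re module (the same pattern works in the find-and-replace of Sublime Text or VS Code):

```python
import re

php = """$colors = array(
    'primary' => '#ff6600',
    'accent'  => '#00cc88',
);"""

# Rewrite each PHP 'key' => 'value' pair as a JS key: 'value' pair,
# then swap the array() wrapper for an object literal.
js = re.sub(r"'(\w+)'\s*=>\s*", r"\1: ", php)
js = js.replace("$colors = array(", "const colors = {").replace(");", "};")
```

With 200 colors instead of 2, the same three lines do the whole migration.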

&lt;h1&gt;
  
  
  3. Coda or KomodoIDE
&lt;/h1&gt;

&lt;p&gt;After uploading the palette of colors to the website, the customer urgently needs the website edited, because he added his credit card number to a username field. Clearly this is trouble. Better yet, he also managed to hard-code it somehow into the PHP code. In this client's alternative world, continuous integration doesn't exist. Imagine going to a world where you have to fire up FileZilla, download the file with the code, edit it, and then upload it. Then fire up your MySQL DB manager, or a console, search for the concrete entry, and change it.&lt;/p&gt;

&lt;p&gt;Do they even know what version control is?&lt;/p&gt;

&lt;p&gt;After some time doing this for urgent tasks in places without versioning *shudders*, I've used Coda, from Panic (for macOS), or Komodo IDE (for Windows). Both of these programs let you set up a direct FTP link and a MySQL connection to a DB, where you double-click the site and get an instant connection to the server. So you manage to limit the leak of the customer's data to 10 minutes, because you were fast.&lt;/p&gt;

&lt;h1&gt;
  
  
  4. Alfred or Spotlight.
&lt;/h1&gt;

&lt;p&gt;The tools that have saved me the most time are Alfred and Spotlight (maybe Cortana, but it is still not there). Want to open a file quickly? Cmd + Space -&amp;gt; file.xls. Want to do a conversion? Cmd + Space -&amp;gt; 100 USD to CAD, or 10 lt to gal. Want to do math? Cmd + Space -&amp;gt; (13239*(1232+24)*2) + 123 % 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Unzna99--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.alfredapp.com/media/pages/home/clipboard.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Unzna99--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.alfredapp.com/media/pages/home/clipboard.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alfred is even more awesome: you can program scripts or searches to run on certain keywords. You just get everything instantly.&lt;/p&gt;

&lt;h1&gt;
  
  
  5. Hazel
&lt;/h1&gt;

&lt;p&gt;Now, after working 3+ years on the same computer, with multiple clients, I despise tidying it up by hand. So I decided to get Hazel (or File Juggler for Windows), where you can create rules on your folders based on how you want them organized. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KtSvQqum--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.noodlesoft.com/kb/uploads/xmain.png.pagespeed.ic.4wQ59TUX7j.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KtSvQqum--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.noodlesoft.com/kb/uploads/xmain.png.pagespeed.ic.4wQ59TUX7j.webp" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, I can create a rule that watches my desktop for screenshots more than 4 hours old and moves them to my "Screenshot folder", or one for downloads I haven't used in more than X weeks. Or create a rule that filters out images. Or create a folder which "sorts" all the files I put into it.&lt;/p&gt;
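The heart of such a rule is just a predicate mapping a file to a destination folder. A toy sketch of the screenshot rule above (folder names and thresholds are made up; the actual watching, scheduling and moving is what Hazel or File Juggler does for you):

```python
def destination_for(filename, age_hours):
    """Toy version of a Hazel rule: decide where a desktop file belongs."""
    if filename.startswith("Screenshot") and age_hours > 4:
        return "Screenshots"          # old screenshots get filed away
    if filename.lower().endswith((".png", ".jpg", ".jpeg")):
        return "Images"               # other images go to an Images folder
    return None                       # leave the file where it is

dest = destination_for("Screenshot 2019-05-01.png", age_hours=6)
```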

&lt;h1&gt;
  
  
  But, hey, this is pretty basic.
&lt;/h1&gt;

&lt;p&gt;I know this is fairly basic, but there are people who &lt;em&gt;manually&lt;/em&gt; do these actions, because they don't want to bother with this kind of automation, or they don't have the time to automate it themselves. So if this saves you some time, I'd like it to be as useful to you as it was to me.&lt;/p&gt;

&lt;p&gt;What are your 100 hour time savers?&lt;/p&gt;

</description>
      <category>career</category>
      <category>productivity</category>
      <category>webdev</category>
      <category>design</category>
    </item>
  </channel>
</rss>
