<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sudo BLAZE</title>
    <description>The latest articles on Forem by Sudo BLAZE (@sudo_blaze).</description>
    <link>https://forem.com/sudo_blaze</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F904550%2F0b1d794b-8237-4a85-9838-fa568177e2e5.png</url>
      <title>Forem: Sudo BLAZE</title>
      <link>https://forem.com/sudo_blaze</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sudo_blaze"/>
    <language>en</language>
    <item>
      <title>Finding best performant stack so you don't have to.</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Mon, 13 Jan 2025 22:01:00 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/finding-best-performant-stack-so-you-dont-have-to-261n</link>
      <guid>https://forem.com/sudo_blaze/finding-best-performant-stack-so-you-dont-have-to-261n</guid>
      <description>&lt;p&gt;Recently while thinking about indie hacking I went on a tangent, researching the topic on how to improve performance. It's usually a long shot to think about performance early on development stage while product does not have proven usage. But it's a dream come true when the problem your app begin to have is a problem of performance. So this curiosity lead me to &lt;a href="https://www.youtube.com/@AntonPutra" rel="noopener noreferrer"&gt;https://www.youtube.com/@AntonPutra&lt;/a&gt; channel that have several performance tests done under heavy load. A lot of his tests showcases that actually while some solutions might have initial higher latency (long response time) under almost no load, the same solutions might end up more performant under heavy load (like DDoS of 80K requests per second!). &lt;/p&gt;

&lt;p&gt;So here I would like to make a comparison, with a podium in several categories, of which option in each category is the most performant. Keep in mind that it's very often a game of trade-offs, and if you keep only performance in mind while building your app, you might end up with an unmaintainable one. Usually, going with something you know will actually help you move faster, and then rewriting bits and pieces to a more performant option is the way to go. Anyway, without further ado, let's get into it:&lt;/p&gt;

&lt;h2&gt;
  
  
  Top load balancers
&lt;/h2&gt;

&lt;p&gt;Recently the hierarchy was shaken up by Cloudflare's release of Pingora, so the first spot might surprise you.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pingora by Cloudflare&lt;/li&gt;
&lt;li&gt;Nginx&lt;/li&gt;
&lt;li&gt;Caddy&lt;/li&gt;
&lt;li&gt;Traefik&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While Traefik is the one I currently use, Nginx was my bread and butter for the longest time. Even though Pingora is the hot new thing on the block, I would still wait a little until it's more established.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top backend languages
&lt;/h2&gt;

&lt;p&gt;This one might be the most opinionated, because a lot of developers get attached and emotional about their favorite language. I'm language agnostic, to be honest (I have my own opinions and preferences, of course), but here I'm looking purely at how much load each can sustain, and at what latency, in a typical backend stack: framework, CRUD operations, and database. That's why I like Anton's analysis: it tests both static content load and CRUD ops load. And based on that we have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Zig (+ zzz as potentially the best framework)&lt;/li&gt;
&lt;li&gt;Rust (Actix; Axum and Rocket not far behind)&lt;/li&gt;
&lt;li&gt;Go (stdlib / fasthttp, depending on the case)&lt;/li&gt;
&lt;li&gt;Java + Quarkus + GraalVM&lt;/li&gt;
&lt;li&gt;C# / .NET&lt;/li&gt;
&lt;li&gt;Ruby (no Rails; Rails actually slows Ruby down enough to fall behind JS)&lt;/li&gt;
&lt;li&gt;JS = Bun &amp;gt; Deno &amp;gt; Node.js&lt;/li&gt;
&lt;li&gt;Python (FastAPI)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key takeaway for me is that a lot of these technologies got very close to each other in terms of performance. Throughput and latency really start to fall apart only after Node.js. Another takeaway is that framework overhead can actually be one of the biggest performance bottlenecks.&lt;/p&gt;

&lt;p&gt;I would group these into three tiers: Top Performers (Zig, Rust), High-Performing Enterprise (Go, native Java, C#, native Ruby, Node on Bun), and Framework Performance (Rails, FastAPI, Django, Spring, etc.).&lt;br&gt;
Speaking of frameworks, let me cover only two backend languages in that regard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top Java Frameworks
&lt;/h2&gt;

&lt;p&gt;The list here is pretty short:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Quarkus&lt;/li&gt;
&lt;li&gt;Micronaut&lt;/li&gt;
&lt;li&gt;Spring&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This ranking comes from my own testing with Docker containers on localhost. Unfortunately, I don't have the funds to spin up AWS infra like Anton, so I can't test heavy loads on real infrastructure, but I have a lab micro PC that can imitate that somewhat over the local network. If you are interested in that, let me know in the comments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top JS Backend Frameworks
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Encore (Far up front)&lt;/li&gt;
&lt;li&gt;Elysia (Bun's favorite)&lt;/li&gt;
&lt;li&gt;Hono&lt;/li&gt;
&lt;li&gt;Fastify &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Recent Node.js backend frameworks have managed to drastically improve performance, effectively closing the gap between Node and Go backends. No wonder: Encore is basically TypeScript run on top of a Rust runtime. And while Encore claims to be 50% faster than Elysia, Elysia itself is built for Bun, which is written in Zig. So while technically you are writing TypeScript, both of these frameworks leverage the capabilities of two of the most performant new languages out there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top JS Frontend Frameworks
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Astro&lt;/li&gt;
&lt;li&gt;Svelte&lt;/li&gt;
&lt;li&gt;Remix&lt;/li&gt;
&lt;li&gt;Gatsby&lt;/li&gt;
&lt;li&gt;Next.js&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No surprise for me here. I've been in love with Astro since the beginning: the first truly meta framework that you can mix and match with others. This podium is based on &lt;a href="https://lookerstudio.google.com/u/0/reporting/55bc8fad-44c2-4280-aa0b-5f3f0cd3d2be/page/M6ZPC?params=%7B" rel="noopener noreferrer"&gt;Page Vitals and Lighthouse&lt;/a&gt; metrics, because we are talking mainly about static content. In simple terms: how fast the page loads and how much overhead the framework itself adds. And while Astro feels closer to Next.js in terms of usability and convenience, its static build is actually one of the smallest. The server build, however, might not be on par with the top-performing backend frameworks unless it runs on Bun, so keep that in mind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top Databases
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;SQLite&lt;/li&gt;
&lt;li&gt;MongoDB&lt;/li&gt;
&lt;li&gt;Postgres&lt;/li&gt;
&lt;li&gt;MariaDB&lt;/li&gt;
&lt;li&gt;MySQL&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;First place might surprise you. I was surprised myself, but it's actually obvious. Think about it: all the other solutions work over a network protocol, while SQLite writes directly to disk. Its only bottlenecks are disk space and disk speed. And recently libSQL (thanks to Turso) pushed it even further. To be honest, I was more surprised by how much more efficient MongoDB's inserts and updates are than Postgres'. That's partially because, by default, Mongo flushes its state every 100ms, while Postgres persists right away. Either way, the top three will always be a solid choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top caching / Key-value store
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Memcached&lt;/li&gt;
&lt;li&gt;Redis (better Throughput) / Dragonfly (better Latency)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Memcached is the clear winner by a landslide. Of course, any solution at this point will improve performance, but Memcached won't have a rival for a very long time.&lt;/p&gt;
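For completeness, the pattern any of these stores ends up serving is usually cache-aside. A dependency-free sketch (the Map stands in for Memcached or Redis, and loadFromDb is a made-up placeholder for a real query):

```javascript
// Cache-aside in miniature: check the key-value store before the
// database. The Map stands in for Memcached/Redis, and loadFromDb
// is a made-up placeholder for a real query.
const cache = new Map();

async function getUser(id, loadFromDb) {
  const key = `user:${id}`;
  if (cache.has(key)) return cache.get(key); // hit: skip the DB
  const user = await loadFromDb(id);         // miss: fetch once
  cache.set(key, user);
  return user;
}
```

On every request after the first, the store answers instead of the database, which is where the throughput win comes from.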

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While it sidetracked me from my other projects, this research gave me something to think about when choosing a stack for my next application.&lt;br&gt;
I think this is not a total revelation for those who have worked long enough in the industry, but these results still show how convenience is often traded off for performance. You have to pick your poison. In a year's time, or even six months, this might be outdated, given how Zig and Rust have moved things forward in terms of performance across many solutions. For me, the question is whether I should consider dipping my fingers into Rust or Zig, or settle on Encore as a good enough alternative. On the other hand, Mongo is a database I know but never considered for production, and now I know I shouldn't dismiss it so quickly.&lt;/p&gt;

&lt;p&gt;The point is that by tweaking some decisions early on, you might avoid a complete rewrite of your application as soon as it takes off. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>performance</category>
      <category>saas</category>
    </item>
    <item>
      <title>So I created Linktree alternative...</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Fri, 13 Dec 2024 16:51:51 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/so-i-created-linktree-alternative-173o</link>
      <guid>https://forem.com/sudo_blaze/so-i-created-linktree-alternative-173o</guid>
      <description>&lt;p&gt;You might be asking why I decided to do something that is completely free already and was recycled over and over again. Well the reason is I stumbled upon peculiar problem. &lt;/p&gt;

&lt;p&gt;Basically, I wanted to host my link list myself, and the existing solutions were not satisfactory for me, either because of how routing behaved or because they were too bloated (really? Laravel for a simple page with links? I'm looking at you, &lt;a href="https://linkstack.org" rel="noopener noreferrer"&gt;linkstack&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;I needed one that would fit my Docker stack and be easy to configure with Traefik.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facq8x4c084zjf8bsyl65.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facq8x4c084zjf8bsyl65.png" alt=" " width="763" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So I decided to make one in Astro. Easy, light, no backend, just frontend, with quick configuration via YAML. That was my goal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Issue #1
&lt;/h2&gt;

&lt;p&gt;I didn't want to spend too much time on styling. Sooo... Tailwind - check, DaisyUI - check, ChatGPT, please spit out a template in Astro... I said Astro, not React... I said Tailwind, not Bootstrap... FFFFFFFFFF... Fine, I'll do it myself too.&lt;/p&gt;

&lt;p&gt;Two hours and three different skins later, now we can put in some data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Issue #2
&lt;/h2&gt;

&lt;p&gt;OK, after some googling, it seems we can import YAML directly into our script using the &lt;a href="https://docs.astro.build/en/recipes/add-yaml-support/" rel="noopener noreferrer"&gt;rollup-yaml plugin&lt;/a&gt;. That would actually be nice, but the problem is that the YAML gets bundled with the application, which doesn't give us much control over its contents.&lt;/p&gt;

&lt;p&gt;In the end, I loaded it with fetch and &lt;code&gt;js-yaml&lt;/code&gt; from "public".&lt;/p&gt;
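Roughly, that final approach looks like this. Note that js-yaml is swapped for a naive parser of flat `key: value` documents so the sketch stays dependency-free, and /links.yaml is a hypothetical file name served from public/.

```javascript
// Sketch of the final approach: fetch the config at runtime and
// parse it. js-yaml is replaced by a naive parser for flat
// `key: value` documents so the sketch has no dependencies, and
// /links.yaml is a hypothetical file served from public/.
function parseFlatYaml(text) {
  const config = {};
  for (const line of text.split('\n')) {
    const match = line.match(/^(\w+):\s*(.+)$/);
    if (match) config[match[1]] = match[2];
  }
  return config;
}

async function loadConfig(fetchImpl = fetch) {
  const res = await fetchImpl('/links.yaml'); // lives in public/
  return parseFlatYaml(await res.text());
}

console.log(parseFlatYaml('title: My Links\nsite: https://example.com'));
```

Because the file is fetched rather than bundled, swapping its contents never requires a rebuild.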

&lt;h2&gt;
  
  
  Issue #3
&lt;/h2&gt;

&lt;p&gt;At this point I could already package my application, but one more thing I had to consider was the base image for my container. I could go with nginx, but then my YAML file would have to be loaded and parsed on the client.&lt;br&gt;
However, if I go the Astro SSR route and run it with Node, the file can be loaded by the server in the background, and no request for the config file is made on the client side. The downside, however, is that I have to install all server dependencies during image creation.&lt;/p&gt;

&lt;p&gt;I went with &lt;code&gt;node:alpine&lt;/code&gt;. Less processing on client side = better user experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng4l3t47vf9airjeydfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng4l3t47vf9airjeydfk.png" alt=" " width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I can hook up my list of links (not to be confused with a linked list) and attach a volume with my configuration file, without any UI, backend, server, database, or any other dependency whatsoever.&lt;/p&gt;

&lt;p&gt;All of that took me maybe 4-5 hours, so I decided to simply open-source it.&lt;br&gt;
Feel free to check it out, use it, fork it, or whatever. Contributions welcome.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/TheDigitalBlaze/linkspace" rel="noopener noreferrer"&gt;https://github.com/TheDigitalBlaze/linkspace&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>opensource</category>
      <category>astro</category>
      <category>frontend</category>
    </item>
    <item>
      <title>React is total brainrot! 🤬</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Sat, 21 Sep 2024 17:31:10 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/react-is-total-brainrot-2n0j</link>
      <guid>https://forem.com/sudo_blaze/react-is-total-brainrot-2n0j</guid>
      <description>&lt;p&gt;Some time ago I had opportunity to look into the code of AirBnb frontend, written in React... and it was nightmare. I have PTSD to this day. It was huge monorepo project with ton of components, embedded in one another, without clear logic or structure behind them. Not even clear separation between presenters and containers. I mean, there were some attempts to separate logic from presentation, however, it was not clear from project's structure. There were hundreds of components all in one folder.&lt;/p&gt;

&lt;p&gt;To be completely honest with you, I never liked React, so I might be biased. But let's dive deeper into why that is in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  The beauty and the beast
&lt;/h2&gt;

&lt;p&gt;Even though I hate it, I can see the appeal. You know a little bit of HTML and a little bit of JavaScript, and you are good to go. It is a great rendering engine (!) if you want to move fast. And with the right use case and the right architecture, it can be very easy to manage and iterate upon. Basically, if your programming experience is purely frontend (a little HTML, CSS, and JS), then React simply builds upon what you already know.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;HelloWorld&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"flex flex-col items-center justify-center min-h-screen bg-gray-100"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"text-4xl font-bold text-blue-600 mb-4"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Hello, World!&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"text-lg text-gray-700"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Welcome to your React application.&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;p&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;HelloWorld&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem comes with complexity. Every sufficiently large project goes through complexity hell. The true test of a great technology is how easily it introduces new users. This applies to both user experience (UX) and developer experience.&lt;/p&gt;

&lt;p&gt;The main problem with React, though, is that it does not enforce good architecture from the beginning. There is no clear separation of concerns, and logic is mixed with presentation.&lt;br&gt;
I can already hear the voices: "there is the container/presenter pattern! there is redux/jotai/mobx etc.!"&lt;br&gt;
Yes, there is, but it's not an enforced standard. It's just a scattering of libraries without clear implementation guidelines. And unless you end up on a greenfield project, you are going to regret it at some point. Fortunately, someone had an idea of how to rectify that.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Next(js) React
&lt;/h2&gt;

&lt;p&gt;Clearly Vercel had a great idea with its meta-framework, because it solves the majority of React's problems. If Express and React had a baby, it would be Next.js. It has folder-based routing, a clear project structure, server-side rendering, and the ability to create server endpoints.&lt;/p&gt;

&lt;p&gt;Wait... folder-based routing and server endpoints? I have flashbacks... &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2ieat5w077wi6oqr5wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi2ieat5w077wi6oqr5wx.png" alt="Mark Zuckerberg having flashbacks about PHP" width="800" height="1075"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;
&lt;span class="k"&gt;include&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'pages/'&lt;/span&gt;&lt;span class="mf"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;$_GET&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'route'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="mf"&gt;.&lt;/span&gt;&lt;span class="s1"&gt;'.php'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="cp"&gt;?&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Oh... now it all makes sense. No wonder server-side React has the same issues: exposed secrets, unused endpoints.&lt;br&gt;
But let's be honest: Next.js was created to promote Vercel, because where will all these noob SaaS JS developers most likely go with their apps?&lt;br&gt;
At least PHP since version 7 can be called high-level, especially when used with Laravel or Symfony. And PHP is not trying to take over native apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  React Native
&lt;/h2&gt;

&lt;p&gt;As if taking over the web was not enough. With Electron becoming so bloated, and cross-platform mobile apps basically "killing two stones with one bird", we definitely needed a JS solution for that. The good thing is that any company can now slap a frontend dev onto this task, order them to make a mobile app, and call it a day. Great idea on paper. Another source of nightmare fuel if, for some reason, your mobile app and web app share the same codebase. Oh wait, the codebases diverge? Let's put in a couple of if statements and call it a day. It's nice to talk about success stories like "X, formerly Twitter" (we know!) or several Microsoft use cases (I don't remember which ones, though, tbh). But, boy oh boy, I bet those developers are speedrunning burnout.&lt;/p&gt;

&lt;h2&gt;
  
  
  React Ecosystem
&lt;/h2&gt;

&lt;p&gt;Did you know you can also compose emails and create PDFs using React? Technically you can slap React on your CLI as well! Why use &lt;code&gt;go&lt;/code&gt;, &lt;code&gt;rust&lt;/code&gt;, or even &lt;code&gt;bash&lt;/code&gt; when you can use React!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0n2n2la1ends7foqbt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0n2n2la1ends7foqbt1.png" alt="Yo Dawg react meme" width="622" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every language, framework, or library becomes trashy if you use it long enough. But let's be honest: do you really need React in a CLI? OK, I admit, &lt;a href="https://reacord.mapleleaf.dev/" rel="noopener noreferrer"&gt;React on Discord&lt;/a&gt; is pretty neat, and that is something I could see myself using for such a case.&lt;/p&gt;

&lt;p&gt;The reason is that a Discord bot, even a fairly feature-rich one, wouldn't come close to the behemoth of a system you can expect at big tech companies. Anything bigger than that, with multiple people on the team and a multi-year maintenance cycle, and React goes out the window, unless you start using microfrontends from the beginning and chunk the system into smaller pieces. But I rarely see that done early on, and later it becomes too late.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's the alternative?
&lt;/h2&gt;

&lt;p&gt;I can hear the React bros sigh audibly when I mention Angular. You knew where this was going. I can even hear accusations in the style of "oh, you are a backend dev, obviously you're gonna choose Angular". Believe it or not, I started on the frontend, but I dabbled in backend as well, and I've been a fullstack dev for a pretty long time already. Now think about it a bit: why is Angular so often selected by backend developers? Why is Java with Spring Boot still the default in the backend corpo world, even when there are newer and faster technologies?&lt;/p&gt;

&lt;p&gt;The answer is simple: reliability and predictability. Most backend developers have experienced maintaining an existing codebase for a long time. And jumping into an existing project where you don't need to worry about the technology, only the business rules, is way better.&lt;br&gt;
Angular is a framework, compared to React. Pure web (browser) technologies were never designed as software development tools to begin with. That's why Angular adopts a lot of solutions that were proven in traditional application development.&lt;br&gt;
It imposes a certain standard on the developer from the start. Even without conscious architecture and pattern selection, you can be almost sure that most logic will land in services and most rendering will happen in components. Performance-wise it might not be the best option, but in the long run? As a maintainer, you will be frustrated significantly less than with React, because React codebases are maintained differently in every company. And under a certain threshold, it does not matter whether the app responds in 400ms or 200ms. If that matters to you so much... what the hell are you doing in JS?&lt;br&gt;
Did you know that a lot of companies established on the market for much longer than most big tech giants actually prefer Angular? Companies with more than 20-30 years of history will stick with Java and Angular instead of React or Next.js. Those companies are less vulnerable to market changes and less prone to layoffs; if you value stability, don't pursue FAANG or any startup that went big in the last 10 years. Maybe with the exception of fintech... fintech is crazy town, regardless of how long they've been on the market.&lt;/p&gt;

&lt;p&gt;Now let me give you my personal recommendations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Web
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Angular&lt;/strong&gt; - Obviously. Keep in mind that Angular didn't sit on its ass while others were innovating. Currently, modules are not mandatory and components can be standalone. You can also get rid of zone.js and polyfills completely and use signals instead! That speeds up the application considerably.&lt;/p&gt;
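To sketch why signals help: a signal is just a readable value that notifies its subscribers when it changes, so the framework re-renders only what depends on it, instead of zone.js patching every async API to trigger change detection. A dependency-free toy version, not Angular's actual implementation:

```javascript
// A dependency-free sketch of the signal idea: a readable value
// that notifies subscribers when it changes, so only dependents
// re-render, instead of zone.js patching every async API to
// trigger change detection. Not Angular's actual implementation.
function signal(initial) {
  let value = initial;
  const subscribers = new Set();
  const read = () => value;
  read.set = (next) => {
    value = next;
    subscribers.forEach((fn) => fn(value)); // notify dependents
  };
  read.subscribe = (fn) => subscribers.add(fn);
  return read;
}

const count = signal(0);
let rendered = -1;
count.subscribe((v) => { rendered = v; }); // the "component"
count.set(5);
console.log(count(), rendered); // 5 5
```

Angular's real `signal()` adds computed values and effects on top, but the dependency-tracking idea is the same.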

&lt;h3&gt;
  
  
  Meta-framework
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Analog.js&lt;/strong&gt; - If Next.js and Angular SSR had a baby (there is no Angular Universal anymore). It still uses Angular under the hood, so migration should not be that hard. I think it is generally even better than native SSR, because you don't have to use afterRender and afterNextRender (I might be wrong here, so correct me if I am).&lt;/p&gt;

&lt;h3&gt;
  
  
  Backend framework
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Nest.js&lt;/strong&gt; - You need a backend-only approach in Node, and you like Angular / Spring Boot? Well, fret not, Nest.js comes to the rescue. One of the reasons Angular is so popular among backend developers is its "@" annotations, which make cross-cutting concerns like dependency injection easy. Nest.js, much like Spring, enables such annotations, like &lt;code&gt;@Controller&lt;/code&gt;, in your code. Keep in mind, however, that because it's a backend framework, it does not include Angular itself.&lt;/p&gt;
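Underneath, an @Controller-style annotation is just a function that records metadata about a class. A dependency-free sketch (names are illustrative, not Nest.js internals); without decorator syntax enabled, it is applied manually:

```javascript
// What an @Controller-style annotation boils down to: a function
// that records routing metadata about a class. Names here are
// illustrative, not Nest.js internals.
const registry = new Map();

function Controller(prefix) {
  return (cls) => {
    registry.set(cls, { prefix }); // remember the route prefix
    return cls;
  };
}

class CatsController {
  findAll() { return ['tabby', 'siamese']; }
}

// Without decorator syntax enabled, the annotation is applied manually:
Controller('/cats')(CatsController);

console.log(registry.get(CatsController)); // { prefix: '/cats' }
```

The framework then walks such metadata at startup to wire routes and inject dependencies, which is exactly the part Spring veterans feel at home with.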

&lt;h3&gt;
  
  
  Mobile Development
&lt;/h3&gt;

&lt;p&gt;Here we actually have two options, one obvious: Flutter; and one not so obvious: PWABuilder.&lt;br&gt;
&lt;strong&gt;PWABuilder&lt;/strong&gt; - a great option if you have a PWA-enabled website in Angular or Analog.js. With this tool you can quickly turn your PWA into a native app that you can compile and put on Google Play or the Apple App Store. But keep in mind it has some limitations.&lt;br&gt;
&lt;strong&gt;Flutter&lt;/strong&gt; - while not much of a standard Angular approach, it's way easier to grasp and manage than a React Native app. Its animations are a little less smooth than React Native's, but overall performance is much better. The great thing about it is that it comes with a lot of ready-to-use widgets based on native Apple or Android UI that you can further customize.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I don't dismiss React altogether. It has its use cases, and the overall contribution of this technology to web development is huge. I don't deny that. There is a place for everything. However, it has a built-in "brainrot" feature that causes the average frontend developer to design poorly. And because it's a relatively young technology (React's official launch was in 2013; Angular.js launched in 2010), not a lot of devs with years of experience came to it. Nobody coming from jQuery was thinking about big frontend web apps, outside maybe of Google! Think about it. Who at the time was doing massive frontend websites? Google. Who came up with Angular.js? Google. In the meantime, 90% of Facebook was running on PHP. That's why they came up with React: they needed small pieces of a website updated by small components, not a whole web app running in the browser. And the architectures of both reflect that.&lt;br&gt;
So take my advice and go with Angular if you are going to make a massive web application that will be maintained over the course of several years. Or make a proper f*cking architecture design for your application if you are going with React, so others don't go bald maintaining it.&lt;/p&gt;

</description>
      <category>react</category>
      <category>webdev</category>
      <category>frontend</category>
      <category>angular</category>
    </item>
    <item>
      <title>Are you burning your money?</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Fri, 21 Jun 2024 18:33:41 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/are-you-burning-your-money-2m6f</link>
      <guid>https://forem.com/sudo_blaze/are-you-burning-your-money-2m6f</guid>
      <description>&lt;p&gt;Recently I've became very spending conscientious. Partially because there are more and more services that adapt subscription based model, and partially because I became more and more aware how much every possible industry giants try to milk their customers as much as possible. I've became quite resentful towards those practices, and recent news about Adobe TOS and anti-consumer practices just pushed that resentment even further.&lt;/p&gt;

&lt;p&gt;On top of that, I do tend to forget that there are some subscriptions I signed up for as a trial and forgot to cancel. Sometimes I just want to check which recurring payments automatically draw from my account, and where I would like to cut costs.&lt;/p&gt;

&lt;p&gt;Because of that, I initially created a script that looks into my financial report and recognizes those payments automatically.&lt;/p&gt;

&lt;p&gt;You can find the source code here: &lt;a href="https://github.com/sudo-sein/subscription-finder" rel="noopener noreferrer"&gt;https://github.com/sudo-sein/subscription-finder&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, at the end of the day, I thought I would go a bit further and create a service that generates such reports via a web app. So, if you guys want to, check it out!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://subscriptionfinder.xyz/" rel="noopener noreferrer"&gt;Subscription Finder&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the application gains enough traction and interest, I might extend its functionality.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>discuss</category>
      <category>community</category>
      <category>startup</category>
    </item>
    <item>
      <title>Self-hosted public site - safe and cheap.</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Mon, 17 Jun 2024 12:06:00 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/self-hosted-public-site-safe-and-cheap-11k2</link>
      <guid>https://forem.com/sudo_blaze/self-hosted-public-site-safe-and-cheap-11k2</guid>
      <description>&lt;p&gt;In my first post in the series there where a lot of people that were concerned about exposing local network to public internet, or actually might have a problem with doing that due to ISP limitations like dynamic IP or reverse lookup. That might not be a problem if you order a static IP from your ISP, but not all ISPs make it easy or cheap. There are also several reasons you might want to avoid that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Current setup
&lt;/h2&gt;

&lt;p&gt;In my initial post I suggested using an Intel NUC to host every tool on a closed-off local network that only you have access to. Basically, you install the server version of Ubuntu, set up SSH access, install Docker, and run any tool you need in containers (I will post an extensive guide in the future). I'm also using Portainer and Traefik as my main containers to bind a local address to each new service, so I don't have to juggle ports depending on what I want. Your local address then becomes something like portainer.home.local. &lt;/p&gt;
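&lt;p&gt;As a minimal sketch of that Traefik setup (the container names, image version and the &lt;code&gt;home.local&lt;/code&gt; domain are just examples; it assumes Traefik v2+ with the Docker provider enabled):&lt;/p&gt;

```shell
# Traefik watches the Docker socket and routes by hostname
docker run -d --name traefik \
  -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  traefik:v2.11 --providers.docker=true

# Any other container then gets its own hostname via a label,
# instead of a dedicated published port
docker run -d --name portainer \
  --label 'traefik.http.routers.portainer.rule=Host(`portainer.home.local`)' \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer-ce
```

&lt;p&gt;With a local DNS entry pointing &lt;code&gt;*.home.local&lt;/code&gt; at the NUC, each service is then reachable by name on port 80.&lt;/p&gt;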

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqlxu2h559ejlzhwv8lh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqlxu2h559ejlzhwv8lh.png" alt=" " width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Moving to public
&lt;/h2&gt;

&lt;p&gt;Now, this type of routing can be carried over to a public environment if you have a domain pointing to your static IP, port-forward 80 and 443 from your router to your Intel NUC, and let Traefik take over routing across subdomains. That's the classic way of routing traffic. In fact, it's not so different from spinning up an EC2 instance with Docker and doing the same on an external network. And it's definitely safer this way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fns5ed3e172ptsgj2l67k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fns5ed3e172ptsgj2l67k.png" alt=" " width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But we are going the economic way, which is actually the best of both worlds: tunneling. That's right, we will build a bridge between the domain and our container. This way traffic comes through an intermediary instead of directly through our router and local network. You can even set up the Docker network so it's closed off from the rest of your internal network.&lt;/p&gt;

&lt;p&gt;There are several tools that allow tunneling, like &lt;a href="https://ngrok.com/" rel="noopener noreferrer"&gt;ngrok&lt;/a&gt; or &lt;a href="https://tunnelmole.com/" rel="noopener noreferrer"&gt;tunnelmole&lt;/a&gt;; however, the best option here is actually &lt;a href="https://cloudflare.com/" rel="noopener noreferrer"&gt;Cloudflare&lt;/a&gt;. It's free for small amounts of traffic and works great in a Docker environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1. Get the domain
&lt;/h3&gt;

&lt;p&gt;Honestly, you can't go public without a domain; if you don't have one, you have to grab one. Keep in mind that you will need to point it at Cloudflare's nameservers, so you can't use a domain that is already occupied by something else. The services I've had the best experience buying domains from are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.godaddy.com/en-uk" rel="noopener noreferrer"&gt;GoDaddy&lt;/a&gt; - I've been using them for years. I had good experience with their customer service, and most of my domains sits there. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.namecheap.com/" rel="noopener noreferrer"&gt;NameCheap&lt;/a&gt; - Name says it all. Solid alternative, some domains comes cheaper then GoDaddy, some don't. But one of the main benefit is - they have &lt;code&gt;.xyz&lt;/code&gt; domains for $1 a year!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2. Set up the domain on Cloudflare.
&lt;/h3&gt;

&lt;p&gt;Now let's create an account on Cloudflare and add our domain. Go to Websites and add a new domain. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flat5mabm70jxte2qpdzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flat5mabm70jxte2qpdzh.png" alt=" " width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you provide a domain you already own, you will be asked to subscribe... don't worry, there is a free tier; scroll down:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7hnjg0pg3i5kltk0fg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7hnjg0pg3i5kltk0fg7.png" alt=" " width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do the quick domain scan and click Next until you get to the activation page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2heui3wqt20xup325w1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2heui3wqt20xup325w1.png" alt=" " width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down until you see nameservers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0olcuodzwauvqa20jkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0olcuodzwauvqa20jkh.png" alt=" " width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At your registrar, point the domain's nameservers to your Cloudflare nameservers (don't copy from me; these can be different for you):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9i40i8d2zca1yzg31vk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9i40i8d2zca1yzg31vk.png" alt=" " width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you need to wait until the domain propagates. Depending on where your DNS was hosted before, it can take anywhere from 5 minutes up to 24 hours. &lt;/p&gt;
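&lt;p&gt;You can check the propagation yourself from a terminal (replace the domain with your own; the Cloudflare hostnames you'll see are the two assigned to your account):&lt;/p&gt;

```shell
# Ask for the domain's authoritative nameservers
dig NS yourdomain.xyz +short
# Once it returns your two assigned *.ns.cloudflare.com hosts, you're live.
```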

&lt;h3&gt;
  
  
  Step 3: Tunnel your traffic.
&lt;/h3&gt;

&lt;p&gt;Alright, we have our Cloudflare account and domain set up and ready. Let's set up our proxy bridge. Go to the "Zero Trust" section: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwp772lwz7qofc90jsh5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwp772lwz7qofc90jsh5.png" alt=" " width="326" height="736"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now expand "Networks" section and select "Tunnels":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmjhwtt65uwlnmwnxrla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmjhwtt65uwlnmwnxrla.png" alt=" " width="342" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now click on "Create a tunnel" and on the next screen select "Cloudflared". Name your tunnel on the next screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jfpyg9uldw4bfl3ogmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jfpyg9uldw4bfl3ogmd.png" alt=" " width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your tunnel is created; now you have to install it in your Docker environment. But before we do that, we need to set up our network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgy0gx62l3mek98poc0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgy0gx62l3mek98poc0z.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker network create &lt;span class="nt"&gt;-d&lt;/span&gt; bridge public_services
&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;--network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;public_services cloudflare/cloudflared:latest tunnel &lt;span class="nt"&gt;--no-autoupdate&lt;/span&gt; run &lt;span class="nt"&gt;--token&lt;/span&gt; &amp;lt;your-token&amp;gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; hello-world &lt;span class="nt"&gt;--network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;public_services &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 strm/helloworld-http
&lt;span class="nv"&gt;$ &lt;/span&gt;docker inspect &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s1"&gt;'{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}'&lt;/span&gt; hello-world
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="c"&gt;# 172.17.0.3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way your tunnel has access only to the services that run on the same Docker network. The last command fetches the container's internal IP on that network; we will need it for tunneling.&lt;/p&gt;

&lt;p&gt;Let's get back to Cloudflare and create the routes. Edit your tunnel and go to "Private Network":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20rjp5xt5yemnqf0m2wy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20rjp5xt5yemnqf0m2wy.png" alt=" " width="560" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you define which networks should be reachable through your tunnel. Set the CIDR according to your container's IP; for an IP like 172.18.0.3, a CIDR of 172.18.0.0/16 covers the whole Docker network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp581iq9ze3u7ynei69h2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp581iq9ze3u7ynei69h2.png" alt=" " width="535" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go to "Public Hostname" tab, and click "Add a public hostname"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsepd60wub1ba2jsmus9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsepd60wub1ba2jsmus9w.png" alt=" " width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we set up the routing to a specific domain. We can run multiple services this way, each with its own subdomain, or we can host our personal website. The best thing is that there are no limitations on the technology you want to use, as long as it's dockerized. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Setting up a personal domain and hosting your own services can be a daunting task, especially if you're concerned about security and exposing your local network to the public internet. With Docker and Cloudflare, however, it can actually be quite rewarding and fairly safe. It's a good extension of your homelab, if you have one, without the need to resort to traditional hosting.&lt;/p&gt;

&lt;p&gt;With this setup, you can host a variety of services, from personal websites to custom applications, all while maintaining a secure and isolated environment. The possibilities are endless, and the best part is that you have complete control over your data and services.&lt;/p&gt;

&lt;p&gt;So, if you've been hesitant to explore the world of self-hosting due to security concerns or technical challenges, give this method a try. You might be surprised at how easy and rewarding it can be to have your own little corner of the internet, all while keeping your local network safe and sound.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>selfhosted</category>
      <category>tutorial</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Do you do your homework before interview?</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Sun, 16 Jun 2024 16:06:00 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/do-you-do-your-homework-before-interview-9dc</link>
      <guid>https://forem.com/sudo_blaze/do-you-do-your-homework-before-interview-9dc</guid>
      <description>&lt;p&gt;Do you ever feel like you're doing homework before every interview? That there is some kind of test you have to pass, where you not only need to score enough points but also outscore the other candidates? And then, when you actually get the job, it has nothing to do with the questions, requirements and responsibilities posted in the listing.&lt;/p&gt;

&lt;p&gt;I think this is ridiculous. The recruitment process has been detached from reality for a long time, and I've seen only a few companies that actually do recruiting right. I have over 15 years of experience in the field, I've been through many recruitment processes, and I've done some recruiting myself as well. I'm not claiming my approach is perfect, but we have to address the elephant in the room. &lt;/p&gt;

&lt;h2&gt;
  
  
  The BAD
&lt;/h2&gt;

&lt;p&gt;Let's start with how most interviews go. You show up and face a ton of questions about the technologies listed in the job posting. But let's be honest, recruiters have a standard list of those questions, and the pool of questions is limited, so most of them are easy to find online (yes, recruiters take their questions from online sources as well). So, if you are determined, you can simply memorize the answers without really understanding the topic! In fact, many people do! &lt;/p&gt;

&lt;p&gt;The problem becomes visible when the ability to answer questions does not correlate with the ability to solve problems on the job. Or, in the worst case, it does, but the temperament does not fit the team, and the hire ends up lowering the efficiency of the whole team. Gaining one team member ends up losing you 200% productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ugly
&lt;/h2&gt;

&lt;p&gt;And the ugly part is the elitist approach to recruiting.&lt;br&gt;
This is hiring for "the top 1%" of programmers. You either have to answer all questions perfectly and beyond, solve leetcode problems, take tests on coding exercise platforms within a 2h timeframe, or build a full-blown solution within a 24h window. &lt;br&gt;
This is not a problem with the recruitment process. It's a problem with company culture: the grind mindset. You sacrifice everything for the company, and if you don't, they don't want you. These companies expect ultimate loyalty from you but don't return any loyalty back.&lt;br&gt;
Let's be honest - nowadays companies are loyal only as long as it benefits them, so you don't have any obligation to be loyal back.&lt;br&gt;
Anyway, for the sake of your mental health - steer away from these types of recruitment processes. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Good
&lt;/h2&gt;

&lt;p&gt;Now, the best recruitment processes are those that focus not on knowledge but on concepts and problem-solving skills. The intricacies of a specific technology don't matter unless there is a conceptual mindset behind them. Mindset and thinking process are the money-saving skills. And heads up, recruiters - any hard technical knowledge can be bridged very quickly with googling skills, or nowadays even prompting skills.&lt;br&gt;
So prohibiting googling during an interview is one of the stupidest rules ever. Why? Because the way someone looks for information is one of the most important skills to have. People do it on the job anyway, and during the interview you are pretending it doesn't happen. &lt;br&gt;
And one more aspect to consider - attitude and company-culture fit. Basically, check for soft skills like self-presentation, honesty, teamwork and communication. How? By asking questions like "what would you do if you realized there is a bug on production", or "how would you handle a conflict where you disagree about an implementation and are sure your solution is better". But you have to be very sensitive to bullshit answers.&lt;/p&gt;

&lt;h2&gt;
  
  
  My protocol
&lt;/h2&gt;

&lt;p&gt;I'm assuming that at this point you want to recruit some talent. You might have considered other approaches, you might have your own recruiting recipe, but you may still think there is room for improvement. There certainly is some in my recruiting protocol - it's not bulletproof. But it's the one I like the most, and whenever I'm recruited in a similar way, it's a green flag for me: I immediately know that the people on the other side actually know how to do it properly. So here's my process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduction (~10 minutes)&lt;/li&gt;
&lt;li&gt;Brief technical check (~10-20 minutes)&lt;/li&gt;
&lt;li&gt;Problem solving check (~15 minutes)&lt;/li&gt;
&lt;li&gt;Soft skills check and debrief (~15 minutes)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of course, if you have more time, or a multi-step recruitment process, you can extend it, but apart from the introduction, all the parts should take roughly the same amount of time. Let me explain each part briefly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;The standard "who am I, who are you, why did you leave". Nothing outside the typical formula. You can explain the tech stack and team composition here as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technical Check
&lt;/h3&gt;

&lt;p&gt;Yes, I mentioned in "the bad" section that you can easily memorize the answers to these questions, but even so, you still need a few simple questions to check whether the resume you received has anything to do with reality. Just don't be tempted by very niche technical questions about things you don't actually use in production. Questions about data types or common libraries are enough to see whether someone really uses the technology. If you ask about something technical that you don't use on a daily basis yourself, you cannot realistically expect an answer from someone else without being a jerk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem Solving Check
&lt;/h3&gt;

&lt;p&gt;This is the section where you ask how someone would solve a technical problem: how they would design an API or a solution architecture, or keep data consistent across multiple databases. These questions are about concepts and patterns and don't focus on any specific technology. Of course, you can scale them up and down depending on the candidate's skill level; you can ask a junior or mid-level candidate about naming, linting or clean code. The important thing is to evaluate not the answer itself but how they got to it. The answer can even be wrong (!) as long as the thinking process is solid and they are open to suggestions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Soft Skills Check
&lt;/h3&gt;

&lt;p&gt;That openness to suggestions is itself a soft-skills check. This is an important, if not the most important, part of the interview. Emotional maturity can make or break your team. You are not hiring a freelancer but someone who will be part of a social group, and in a group environment there are plenty of situations where people have to manage their emotions. And no, by that I don't mean they have to hide them.&lt;br&gt;
What you have to keep in mind is how they handle constructive criticism and suggestions, rejection of their ideas, giving feedback themselves, other people's feelings, and social outings. This is emotional intelligence. It's part of the job, even though almost no company out there recognizes or cultivates it. And assuming yours does not have a program to improve the emotional awareness of your employees, you have to look for people who bring their own emotional self-regulation mechanisms. What this means is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the candidate can take criticism into account and doesn't reject it or shut down immediately&lt;/li&gt;
&lt;li&gt;the candidate can propose ideas and solutions in a clear and easy-to-understand way&lt;/li&gt;
&lt;li&gt;the candidate knows how to communicate during a conflict and doesn't escalate&lt;/li&gt;
&lt;li&gt;the candidate is eager to participate in the team's social outings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Based on these "metrics" you can come up with questions, or provoke a situation to check against them in a non-obvious way. Asking some of the questions directly can also help, but you risk getting a logical answer rather than the actual behavior. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The recruitment process might be one of the most annoying parts of our line of work. Compared to other industries, software engineers and IT people change jobs far more often. Not surprising, considering that switching jobs is one of the best ways to get a pay rise in an industry where wages change more often than anywhere else. Yet even though ours is mostly intellectual and creative work, recruiters and companies often seem to treat us like construction workers. But maybe there's a better way? I would recommend that anyone who participates in the recruitment process think this through a little more. If you have your own thoughts and suggestions, I'm happy to hear them in the comments. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>interview</category>
      <category>programming</category>
      <category>career</category>
    </item>
    <item>
      <title>Self-hosting: What's the point of homelab?</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Wed, 05 Jun 2024 19:27:21 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/self-hosting-whats-the-point-of-homelab-5h99</link>
      <guid>https://forem.com/sudo_blaze/self-hosting-whats-the-point-of-homelab-5h99</guid>
      <description>&lt;p&gt;My previous post on &lt;a href="https://dev.to/sein_digital/why-you-should-self-host-everything-2f31"&gt;Why You Should Self-Host Everything&lt;/a&gt; blew up, to my surprise. I'm positively encouraged to write more. However, I realized that not everyone quite understood the purpose of self-hosting in that instance, and I wrote the article under the assumption that it was obvious! Well then, let me explain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assumption #1: Public facing service
&lt;/h2&gt;

&lt;p&gt;A self-hosted homelab was never intended to be accessible from the web. By design it is available on the internal network only. It's for you: for your personal projects, for hacking, pentesting, trying new technologies, learning. You can spin up Redis, Kafka or Postgres with ease and practice integrating with them in your home environment. You can set up an RSS reader, a media server, a password vault, etc. in your home network without any access from the outside world. That's the benefit of self-hosting: you can run your own tools that only you have access to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assumption #2: Why not on your working PC?
&lt;/h2&gt;

&lt;p&gt;For some use cases, of course, Docker Desktop is enough, especially for a local development environment. But the downside is that everything becomes unavailable as soon as your main device is down. What if you want access to your tools while your main PC is off? What if there are background jobs you want to run, or you have a notification bot going? The main benefit of a micro-PC is its low power consumption of around 30 watts. For comparison, a fully-fledged desktop PC draws roughly 100-150 watts under light work, and laptops around 50 watts. That's up to a 5x difference, and that's not even accounting for the PC monitor!&lt;br&gt;
So yes - for something that can potentially run 24/7, it's definitely more cost-effective.&lt;/p&gt;
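&lt;p&gt;A quick back-of-the-envelope check of that claim, assuming 24/7 operation (kWh per year = watts x 24 x 365 / 1000; the $0.25/kWh rate below is just an example):&lt;/p&gt;

```shell
# Yearly energy at a constant draw
for w in 30 50 150; do
  echo "${w}W -> $(( w * 24 * 365 / 1000 )) kWh/year"
done
# 30W  ->  262 kWh/year (~$66 at an example $0.25/kWh)
# 150W -> 1314 kWh/year (~$328 at the same rate)
```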

&lt;h2&gt;
  
  
  Assumption #3: Private access
&lt;/h2&gt;

&lt;p&gt;Now, let's say you are away from home; maybe your company sent you to a conference, or to work from a different hub for a while. How do you set up remote access? Didn't I mention that these are not public-facing services? Well, I did. But you don't have to expose your services to the web to use them remotely. Please welcome the VPN - used here the way it was actually intended (unlike what "private VPN" services promote).&lt;br&gt;
There are two solid VPN options, and two ways to set them up. Some routers have a VPN server built in, like Asus:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0o4ndrp5blea8repzvt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0o4ndrp5blea8repzvt6.png" alt="Asus Router Dashboard" width="800" height="581"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;This is actually screenshot from custom built firmware for Asus Router&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With cheaper routers, you can set up the VPN in a Docker container instead and simply port-forward from the router to your VPN instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5eg1jyh9vyopqckefgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5eg1jyh9vyopqckefgf.png" alt="Linuxserver's OpenVpn image" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And you can choose between &lt;strong&gt;OpenVPN&lt;/strong&gt; and &lt;strong&gt;WireGuard&lt;/strong&gt;; both are solid options, but WireGuard is much newer and faster than OpenVPN.&lt;/p&gt;
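&lt;p&gt;As a sketch of the containerized route, here is roughly what a WireGuard server looks like with the linuxserver.io image (the server URL, host path and peer count are placeholders; check the image documentation for the full option list):&lt;/p&gt;

```shell
docker run -d --name wireguard \
  --cap-add NET_ADMIN \
  -e PUID=1000 -e PGID=1000 \
  -e SERVERURL=your.ddns.example \
  -e PEERS=2 \
  -p 51820:51820/udp \
  -v /opt/wireguard:/config \
  lscr.io/linuxserver/wireguard
# Peer configs (including QR codes for mobile) appear under /opt/wireguard;
# then port-forward UDP 51820 from the router to this host.
```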

&lt;p&gt;But doesn't that mean you are exposing an open port to the world? Yes, but there are ways to mitigate that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;use fail2ban to ban any source IP after a number of failed attempts&lt;/li&gt;
&lt;li&gt;use a non-standard port and a UDP-only approach; you can avoid noobs and script kiddies this way&lt;/li&gt;
&lt;li&gt;use tunneling - this way you are not exposing anything, but connecting via a proxy into your own network. You can achieve that yourself by setting up a reverse-proxy VPN on a VPS (your network is then just a client with pass-through) or by using a service like &lt;a href="https://www.twingate.com/" rel="noopener noreferrer"&gt;Twingate&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
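&lt;p&gt;For the fail2ban route, a minimal jail looks something like this (sshd is the classic example; the limits are arbitrary, and on a real host the file belongs in &lt;code&gt;/etc/fail2ban/jail.local&lt;/code&gt;, not &lt;code&gt;/tmp&lt;/code&gt;):&lt;/p&gt;

```shell
# Ban an IP for 1 hour after 3 failed logins within 10 minutes
mkdir -p /tmp/fail2ban-demo
cat > /tmp/fail2ban-demo/jail.local <<'EOF'
[sshd]
enabled = true
maxretry = 3
findtime = 10m
bantime = 1h
EOF
# Then, on a real host: systemctl restart fail2ban
```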

&lt;p&gt;I might cover all of that in the future, but for now I wanted to leave you something to dig into yourself. &lt;/p&gt;

&lt;h2&gt;
  
  
  Assumption #4: Small number of services
&lt;/h2&gt;

&lt;p&gt;Now, one of the arguments was: if you have a small number of containers, you can use a Raspberry Pi. That's correct, but in that case wouldn't it be better to just run them on your main machine? I'm talking about running more than 10-15 containers constantly. Right now, my personal server is running 27 services and counting! I'm literally replacing a lot of subscription-based productivity services with my own - all within my network.&lt;br&gt;
That's what most companies do: when they ask you to connect to the work VPN, most of their internal network is self-hosted and available to employees only. &lt;br&gt;
Of course, I'm using a Raspberry Pi in my setup too, but only as a DNS server and a network intrusion detection system (NIDS). My NUC, however, is running Home Assistant, a wiki/notes app, an RSS reader, a media server, a meal planner, an automation tool, SearX, a password vault and a task/project tracker, among others. And the best thing is that, because it's all on Docker, I'm not using nearly as many resources as I would with a VPS or VMs. Here's proof:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rzslhz959juhgt8i55j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rzslhz959juhgt8i55j.png" alt="NUC Dashboard" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CPU utilization at ~15%&lt;br&gt;
RAM utilization at ~32%&lt;br&gt;
SSD utilization at ~10%&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4gggr3xd6rk0lzmyiaw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4gggr3xd6rk0lzmyiaw.png" alt="Traefik dashboard" width="470" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And this is the number of apps exposed to my network; it does not count dependencies. So as you can see, you don't need a strong CPU for something like this. A simple i3 with 2 cores and 4 threads is enough. What you do need is RAM. I have 32 GB of DDR4 and I'm using only a third of it, but as my infrastructure grows I might need an upgrade... maybe... or maybe not.&lt;br&gt;
It all depends on the use case.&lt;/p&gt;
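
&lt;p&gt;Each of those services is just one container. As an illustration, an RSS reader and a media server take two commands (the ports and volume paths below are example values, not requirements):&lt;/p&gt;

```shell
# Two example self-hosted services as containers (ports/paths are illustrative)
docker run -d --name freshrss -p 8081:80 -v freshrss-data:/var/www/FreshRSS/data freshrss/freshrss
docker run -d --name jellyfin -p 8096:8096 -v jellyfin-config:/config -v /srv/media:/media jellyfin/jellyfin

# One command shows what each container actually consumes
docker stats --no-stream
```

&lt;p&gt;That last command is how I got the utilization numbers above: per-container CPU and memory in one snapshot.&lt;/p&gt;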

&lt;h2&gt;
  
  
  Assumption #5: Why not cloud? Why not a rack?
&lt;/h2&gt;

&lt;p&gt;Now you might ask: why not spin up an AWS EC2 instance if the CPU isn't that demanding? Well, let's keep in mind that what containers use the most is RAM. For my current setup I would need an instance with about 12-16 GB of RAM. Currently only an r6g.large meets that demand, which comes to about $80 monthly, or $960 yearly. For that amount of money you can buy both a NUC and a NAS, which together will exceed what AWS has to offer and stay with you for years!&lt;/p&gt;

&lt;p&gt;A rack, on the other hand, is a really fun project to have, but it's a far more expensive one: it takes up a lot more space and consumes more energy. So if you're after cost-cutting, it's not the most cost-effective option.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical use cases
&lt;/h2&gt;

&lt;p&gt;Now that we've tackled most of the assumptions, let's discuss who actually benefits from such a setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case #1: Startups and small businesses
&lt;/h3&gt;

&lt;p&gt;Having a small on-premise PC that runs your business infrastructure is a blessing, granted you have at least one person who knows how to maintain it. It means you can set up your infrastructure according to your employees' needs. There are a ton of tools that help run a business and don't cost a dime if you host them yourself, starting with LDAP and ending with a CRM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case #2: Network Nerds
&lt;/h3&gt;

&lt;p&gt;I don't even have to mention it in this case, right? Chances are you already have one. (Not to be confused with "networking" at an IT conference.) &lt;br&gt;
You own a homelab to play with infrastructure. You create virtual networks and set up credentials, firewalls, and policies. You are refining your skills as an IT person, because you like it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case #3: Penetration Testers
&lt;/h3&gt;

&lt;p&gt;A homelab is a great environment to deploy deliberately vulnerable apps and OSes and try to hack into them. Sure, there are sites like Hack The Box that provide that experience, but if you want to see ransomware running wild, a VM or container is a safe environment for that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case #4: DevSecOps
&lt;/h3&gt;

&lt;p&gt;Now, I don't have to explain Docker containers to you. But Kubernetes, on the other hand, is a different story, and with a homelab you can test deployment scripts for cheap. And, probably like some of you, I tested Kubernetes on a VM stack and was not impressed. Well, let me tell you that there is such a thing as Kubernetes in Docker (kind), and it runs far better on a VM-less machine than on a VM stack!&lt;/p&gt;
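
&lt;p&gt;A minimal kind session looks something like this (assuming kind and kubectl are already installed):&lt;/p&gt;

```shell
# Spin up a whole Kubernetes cluster as Docker containers
kind create cluster --name homelab

# Point kubectl at it and deploy something
kubectl get nodes
kubectl create deployment hello --image=nginx

# Tear everything down when done
kind delete cluster --name homelab
```

&lt;p&gt;Because the whole cluster lives in containers, creating and deleting it takes seconds, which is exactly what you want when iterating on deployment scripts.&lt;/p&gt;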

&lt;h3&gt;
  
  
  Use Case #5: Cloud Native Developers
&lt;/h3&gt;

&lt;p&gt;Basically the same category as DevSecOps. You might put OpenStack or Rancher on top of your infra. Or you might add GitLab to keep your repos private, with an additional CI/CD pipeline to deploy locally. It makes total sense if you want your apps available internally. After all, it's part of the fun.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case #6: Automation, Media Server, Documents, and News Hosting
&lt;/h3&gt;

&lt;p&gt;And the final case is almost simply "others". You can set up Home Assistant, n8n, Plex/Jellyfin, or Paperless. All of these are convenience software: they help you organize your life better and automate a bunch of stuff. They also work great with smart homes, when you have local-network-enabled devices like lights, air conditioning, or blinds. You don't need a fancy service to set up your smart home; you can do it yourself one bit at a time. Granted, if you only have a couple of services, the newest Raspberry Pi might be more than enough.&lt;/p&gt;
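
&lt;p&gt;As an example, Home Assistant itself is a single container (the config path and timezone below are illustrative values, not requirements):&lt;/p&gt;

```shell
# Official Home Assistant container; host networking helps it discover local devices
docker run -d --name homeassistant \
  --restart unless-stopped \
  --network host \
  -e TZ=Europe/Warsaw \
  -v /srv/hass:/config \
  ghcr.io/home-assistant/home-assistant:stable
```

&lt;p&gt;After it starts, the web UI is on port 8123 of the host.&lt;/p&gt;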

&lt;h2&gt;
  
  
  Upsides and downsides
&lt;/h2&gt;

&lt;p&gt;Of course, running a homelab has its upsides and downsides. It's not for everyone. For me, the opportunity to learn and the sheer fun of setting it up outshine the frustration of maintaining it. But let me recap some pros and cons:&lt;/p&gt;

&lt;p&gt;Pros: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your personal network, hidden behind the curtain&lt;/li&gt;
&lt;li&gt;Full control over what's running inside&lt;/li&gt;
&lt;li&gt;Free open-source stuff&lt;/li&gt;
&lt;li&gt;Fun way to improve your skills and learn&lt;/li&gt;
&lt;li&gt;Depending on your setup, you can cut quite a lot of monthly costs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Steep initial cost&lt;/li&gt;
&lt;li&gt;Takes quite some time to set up&lt;/li&gt;
&lt;li&gt;Requires above average technical skills&lt;/li&gt;
&lt;li&gt;Adds up to your electricity bill&lt;/li&gt;
&lt;li&gt;Requires semi-regular maintenance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The number of use cases for a homelab might not be that big, and for many people it might not be convenient to maintain one themselves. When using a third party, you often pay for exactly that: convenience. However, if you are into this stuff, you don't have to use your homelab for exactly one use case. You can mix and match according to your needs. And that's the beauty of a homelab: flexibility. That's why I chose to run mine. For every paid service I might need at the moment, there is an open-source version that I might be able to run myself. And for everything that is client-facing, I will use AWS.&lt;/p&gt;

</description>
      <category>selfhosted</category>
      <category>docker</category>
      <category>opensource</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Take control! Run ChatGPT and Github Copilot yourself!</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Sat, 01 Jun 2024 04:57:15 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/personal-free-open-source-openai-like-gpt-4an7</link>
      <guid>https://forem.com/sudo_blaze/personal-free-open-source-openai-like-gpt-4an7</guid>
      <description>&lt;h2&gt;
  
  
  The Dark Side
&lt;/h2&gt;

&lt;p&gt;I guess by now everyone has used ChatGPT in some form. At some point I found myself using it to generate boilerplate code, write tests, and document stuff. All the boring things. I do like some challenge, but writing the same thing over and over again gets quite boring. Well, good thing that AI excels at boring! &lt;br&gt;
There is, however, some concern for the not-so-boring stuff, which is &lt;strong&gt;privacy&lt;/strong&gt;. Pasting proprietary code into ChatGPT is like committing environment variables to a git repository. Yet for obvious reasons people keep using ChatGPT and GitHub Copilot on a daily basis: for convenience.&lt;/p&gt;

&lt;p&gt;You have probably also heard that Reddit and Stack Overflow sold their data to OpenAI for training (likely long before it was publicly announced!). We can be almost sure that chat history is also fed into training the model. &lt;/p&gt;
&lt;h2&gt;
  
  
  The Alternative
&lt;/h2&gt;

&lt;p&gt;Enough of my ranting about OpenAI. The alternative is obvious: run it yourself. If you are well versed in running your own models, you probably won't learn anything new here. But if you are new to this, here's the gist: if you have a PC good enough to run a modern game on medium settings, then you are able to run a model yourself. You might be surprised how active the open-source community is in this area. What's more, Meta (yes, Facebook's Meta) is at the forefront of open-source LLMs. &lt;/p&gt;

&lt;p&gt;But there are downsides. A self-hosted model will never be up to par with closed ones, and speed and quality rely heavily on your GPU's capabilities. However, for many use cases any RTX card, and even some GTX 10-series cards, are enough. Personally I'm using an RTX 3080 Ti with 12 GB of VRAM. &lt;br&gt;
Oh, one more thing. While I recommend NVIDIA cards for inference (running a local model), I don't have experience with AMD cards, and I cannot recommend them or be 100% sure my approach works on AMD. From what I've heard, working with AMD drivers is hell not only for ML researchers but also for game developers. It breaks my heart, because I like their processors. &lt;/p&gt;
&lt;h2&gt;
  
  
  The Tutorial
&lt;/h2&gt;

&lt;p&gt;There are several solutions out there, but I would go with one that is seamless and runs in the background, which makes it almost invisible.&lt;br&gt;
However, there are some requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;CUDA-enabled GPU&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yep, that's it. This solution should work on Windows, Linux, and macOS. However, keep in mind that Docker on Windows requires WSL2 to be enabled.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1. Install Ollama
&lt;/h3&gt;

&lt;p&gt;Ollama will be our inference backend. If you are familiar with Docker, Ollama will feel like home. It downloads models from its own repository and serves them through its own API, which follows the OpenAI format. You can also converse with a model in the CLI.&lt;br&gt;
To install Ollama, go to &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;Ollama's downloads page&lt;/a&gt; and install it according to your OS.&lt;/p&gt;

&lt;p&gt;For bash users (non-Windows), here's a quick install script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important!&lt;/strong&gt; I don't recommend running Ollama in Docker unless you really know what you are doing and know how to set up GPU access from Docker. Otherwise, in my opinion, Ollama works best on the host system.&lt;/p&gt;
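
&lt;p&gt;For completeness, if you do want Ollama in Docker, the official image can use an NVIDIA GPU, but it assumes the NVIDIA Container Toolkit is already set up on the host:&lt;/p&gt;

```shell
# Requires the NVIDIA Container Toolkit on the host; without it, inference falls back to CPU
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```

&lt;p&gt;The named volume keeps downloaded models across container restarts, and port 11434 is the same API port the host install uses.&lt;/p&gt;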

&lt;p&gt;Now let's test our Ollama (yes, after installing the Windows version, it should be available in cmd and Git Bash as well):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull llama3
ollama run llama3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the second command, you'll need to wait a little while for the model to load into VRAM. Then you can chat with it in the CLI!&lt;/p&gt;

&lt;p&gt;There are also other models worth checking, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;llava&lt;/code&gt; - a multimodal model; you can drop in media to prompt about&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;codellama&lt;/code&gt;, &lt;code&gt;codegemma&lt;/code&gt;, &lt;code&gt;deepseek-coder&lt;/code&gt; - three models dedicated to coding tasks&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;qwen2&lt;/code&gt; - a multilingual competitor to llama3 that also performs about twice as well on coding tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More: &lt;a href="https://ollama.com/library" rel="noopener noreferrer"&gt;Ollama library&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2. Install OpenWebUI
&lt;/h3&gt;

&lt;p&gt;If you are familiar with the ChatGPT UI and feel at home with it, you might like &lt;a href="https://openwebui.com/" rel="noopener noreferrer"&gt;OpenWebUI&lt;/a&gt;, which is heavily inspired by it. It actually might be more intuitive and powerful than ChatGPT, because it supports not only full chat mode but also multi-modality (dropping files into the chat) and a RAG workflow (Retrieval-Augmented Generation, which takes the context of your files and documents into account). I have checked multiple solutions so far, and this one is by far my favorite. To have it running locally, we will use Docker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:8080 &lt;span class="nt"&gt;--add-host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;host.docker.internal:host-gateway &lt;span class="nt"&gt;-v&lt;/span&gt; open-webui:/app/backend/data &lt;span class="nt"&gt;--name&lt;/span&gt; open-webui &lt;span class="nt"&gt;--restart&lt;/span&gt; always ghcr.io/open-webui/open-webui:main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All the other installation options can be found here: &lt;a href="https://github.com/open-webui/open-webui" rel="noopener noreferrer"&gt;https://github.com/open-webui/open-webui&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's head over to &lt;a href="http://localhost:3000/" rel="noopener noreferrer"&gt;http://localhost:3000/&lt;/a&gt;&lt;br&gt;
We should be greeted with the following screen:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0f1ejuh9tdmuo2r75tr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0f1ejuh9tdmuo2r75tr.jpg" alt=" " width="570" height="570"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In general you should be able to start chatting right away, but in case Ollama is not set up as the backend, you can go to &lt;strong&gt;Profile&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Settings&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Connections&lt;/strong&gt; and set the connection to &lt;code&gt;http://host.docker.internal:11434&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff55seoh2uhipxxyn1emm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff55seoh2uhipxxyn1emm.png" alt=" " width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This means your Docker container will connect to localhost:11434, exactly where your local Ollama API sits.&lt;/p&gt;
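
&lt;p&gt;You can quickly verify from the host that the Ollama API is actually reachable on that port:&lt;/p&gt;

```shell
# Lists the models Ollama has pulled; an empty "models" array still means the API is up
curl http://localhost:11434/api/tags
```

&lt;p&gt;If this returns JSON, OpenWebUI will be able to reach the same endpoint through host.docker.internal.&lt;/p&gt;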

&lt;h3&gt;
  
  
  Step 3: Local Copilot
&lt;/h3&gt;

&lt;p&gt;As of writing this article, I have found only one solid replacement for GitHub Copilot: &lt;a href="https://www.continue.dev/" rel="noopener noreferrer"&gt;continue.dev&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysy5qyf32rmplp7nbgr0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysy5qyf32rmplp7nbgr0.png" alt=" " width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The best thing is, it works with both &lt;strong&gt;IntelliJ&lt;/strong&gt; and &lt;strong&gt;VS Code&lt;/strong&gt;!&lt;br&gt;
It has all the good stuff: contextual autocomplete, chat with the model, and shortcuts for snippets!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejnhlfalxvfeo6vg5i6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejnhlfalxvfeo6vg5i6q.png" alt=" " width="728" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The setup process is pretty straightforward. It might ask you during the install phase to log in via GitHub, but you can skip that, and you won't be asked again. Next, choose your engine: a public API or Ollama. Then install starcoder:3b for completion, and voila!&lt;/p&gt;

&lt;p&gt;For me, the most important things were an overall developer experience similar to Copilot's and the lack of an extra authentication layer. There are several solutions and extensions out there, but this setup runs completely on your machine.&lt;/p&gt;
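
&lt;p&gt;For reference, Continue keeps its settings in a JSON file. Below is a minimal sketch pointing both chat and autocomplete at Ollama; the exact schema has changed between Continue versions, so treat the field names as an assumption and verify them against the official docs:&lt;/p&gt;

```shell
# Hypothetical ~/.continue/config.json pointing Continue at local Ollama;
# field names may vary between Continue versions - check their documentation.
mkdir -p ~/.continue
printf '{
  "models": [
    { "title": "Llama 3 (local)", "provider": "ollama", "model": "llama3" }
  ],
  "tabAutocompleteModel": {
    "title": "StarCoder (local)", "provider": "ollama", "model": "starcoder:3b"
  }
}
' > ~/.continue/config.json
```

&lt;p&gt;The split mirrors how the extension works: a larger model for chat, a small fast one for inline tab completion.&lt;/p&gt;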

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;What I described here is the most optimal workflow I've found for myself. There are multiple other ways to run open-source models locally that are worth mentioning, like &lt;a href="https://github.com/oobabooga/text-generation-webui" rel="noopener noreferrer"&gt;Oobabooga WebUI&lt;/a&gt; or &lt;a href="https://lmstudio.ai/" rel="noopener noreferrer"&gt;LM Studio&lt;/a&gt;; however, I didn't find them as seamless or as good a fit for my workflow.&lt;/p&gt;

&lt;p&gt;For VS Code there are also many extensions that attempt to replicate the Copilot experience, like &lt;strong&gt;Llama Coder&lt;/strong&gt;, &lt;strong&gt;CodeGPT&lt;/strong&gt;, or &lt;strong&gt;Ollama Autocoder&lt;/strong&gt;, and many more. I tested a lot of them, but only &lt;strong&gt;Continue&lt;/strong&gt; actually comes close to, or even slightly surpasses, the actual Copilot.&lt;/p&gt;

&lt;p&gt;Also worth mentioning is &lt;a href="https://pieces.app/" rel="noopener noreferrer"&gt;Pieces OS&lt;/a&gt;, which is basically a locally running RAG app that keeps your whole codebase in context. I used it for a while and it's pretty good; however, the current setup works better with my coding habits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt9ppmd3vlabj2bc0af8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt9ppmd3vlabj2bc0af8.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Overall, this setup saves you some subscription fees and works just about as well as the original thing! &lt;/p&gt;

</description>
      <category>docker</category>
      <category>llm</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Why You Should Self-Host Everything</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Mon, 27 May 2024 12:16:49 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/why-you-should-self-host-everything-2f31</link>
      <guid>https://forem.com/sudo_blaze/why-you-should-self-host-everything-2f31</guid>
      <description>&lt;p&gt;In today's digital age, it seems like everything is subscription-based. If you're not paying for a service, you're likely being monetized by watching ads or providing personal data to companies that don't necessarily have your best interests at heart. The internet has become a polluted space where our online activities are tracked and sold to the highest bidder. &lt;br&gt;
And most companies try to exploit and leverage human behavior for profit.&lt;br&gt;
But there's a way to take back control: self-hosting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem with Centralization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you use popular services like Netflix, Facebook, Dropbox, or Microsoft 365, you're entrusting your data to companies that have no obligation to keep it private or secure. These corporations are incentivized to collect and sell your data to maximize their profits, often without your consent. This centralization of information has created a surveillance state where our online activities are monitored and analyzed for commercial gain. In some cases you pay twice: with your data and with your wallet. It's more visible now than ever, when your repos are suddenly being fed to train AI models if you happen to use GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Alternative: Homelab Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Self-hosting is not just about moving your data from one centralized location to another; it's about taking control of your digital life. By setting up a homelab server, you can store your files, communicate with others, and access your favorite services without relying on third-party companies. With a homelab server, you'll have complete control over your data and can ensure that it remains private and secure. To achieve that, you will need either a pretty solid NAS (like a Synology) or a micro PC like an Intel NUC. A Raspberry Pi won't do, unfortunately, unless you run at most 4 lightweight containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Comparison&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;While setting up a homelab server may require an initial investment of time and money, it's often more cost-effective in the long run. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Service x4: $10 per month x 12 months x 4 = $480&lt;/li&gt;
&lt;li&gt;Intel NUC or Synology NAS: approximately $300-$500 (depending on options you choose)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So depending on your situation and the number of services you are currently subscribed to, the homelab can pay for itself in about a year! &lt;br&gt;
Of course, there is also the cost of time and the required maintenance, but with a proper setup it can be minimal effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HomeLab possible solutions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As I mentioned, the best options are not that expensive, and all you need is a micro PC. Here's a list of good options for a solid Docker-based homelab:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/4aRU8R3" rel="noopener noreferrer"&gt;Intel NUC 11 i7, 32GB RAM, 1TB&lt;/a&gt; $550 - a solid starting point with quite a bit of storage and a lot of RAM.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/4bUQy9C" rel="noopener noreferrer"&gt;Intel NUC 11 i7, Bare&lt;/a&gt; $390 - a no-RAM, no-storage option, if you want to build it up yourself from scratch&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/3KgLkZW" rel="noopener noreferrer"&gt;Intel NUC 11, Celeron N5105, 8GB RAM, 256GB SSD&lt;/a&gt; $240 - a low-budget option. I know, it's twice as expensive as an RPi 5 with the same amount of RAM, but let's be honest: you cannot extend a Raspberry Pi&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/4bVYpDY" rel="noopener noreferrer"&gt;Raspberry Pi 5, 8GB&lt;/a&gt; $95 - for the sake of completeness. You would still need to buy an SD card, but you can at least set up Pi-hole and Pi.Alert on it.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/3Kjwpy2" rel="noopener noreferrer"&gt;Synology 2-Bay NAS DS223, 2GB RAM, Diskless&lt;/a&gt; $250 - for those who favor storage space over computing power. As you can see, compared to the NUC it does not have much RAM.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/3Kjwpy2" rel="noopener noreferrer"&gt;Synology DS723 2-Bay, 2GB RAM, 8TB Storage&lt;/a&gt; $990 - a bit more powerful machine with quite a solid CPU, but still in the 2GB RAM range. Some versions even come with Docker preinstalled.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, as you can see, the Intel NUC might seem like the more cost-effective solution; however, a NAS has its own benefits and often comes with a preinstalled OS and manager, where you can deploy Docker on your own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easy Deployment with Docker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Setting up a homelab server doesn't have to be a daunting task. What we need is an Ubuntu or Debian OS on our machine.&lt;br&gt;
With the help of containerization platforms like Docker or Podman, you can easily deploy and manage your services without extensive technical expertise. And after the initial setup, with SSH exposed to your local network, you won't even need to connect a monitor and keyboard anymore, except to upgrade the whole system again!&lt;/p&gt;
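
&lt;p&gt;On a fresh Ubuntu or Debian install, Docker's own convenience script does the heavy lifting:&lt;/p&gt;

```shell
# Official Docker convenience install script
curl -fsSL https://get.docker.com | sh

# Allow your user to run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# Sanity check
docker run hello-world
```

&lt;p&gt;From there, every service is just a &lt;code&gt;docker run&lt;/code&gt; or compose file away.&lt;/p&gt;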

&lt;p&gt;You can read how I did it in a future article. But for now, there is still one more step in our setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open Source Community&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The open-source community is thriving, and many self-hosted services are built on top of these collaborative efforts. Now more than ever, we have a ton of open-source software "just lying around" on GitHub. &lt;br&gt;
Much of this software offers a simple one-line Docker setup, and the best thing about Docker is that you don't have to worry about dependencies. And you know what's best? Because they are open source, you can contribute yourself as well! Missing a feature? Found and fixed a bug? Create a pull request, file a report, contribute! That's what keeps the open-source community thriving. And with a homelab environment, there's nothing stopping you from building your own Docker-hosted tools!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Self-hosting and running a homelab has never been as easy as it is now. Not that long ago I was running Proxmox and creating a VM for everything I needed. The problem is, VMs take up a lot of resources, and without a rack they are highly unreliable, unless you do penetration testing and need only 3-4 environments. A single OS with Docker makes it much easier! And by self-hosting everything, you'll enjoy numerous other benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Privacy&lt;/strong&gt;: Your data remains private and secure, away from prying eyes. You own your data, not a third-party.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control&lt;/strong&gt;: You have complete control over what's running. You own the server. Nobody besides you has access to it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: You can choose the services and software that best suit your needs, without being locked into a specific ecosystem. You can integrate them if you want, or keep them separate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Financial Benefits&lt;/strong&gt;: In the long run, self-hosting can be more cost-effective than relying on subscription-based services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In an era where data is the new currency, it's time to take back control of our online activities. Self-hosting everything offers a powerful alternative to the centralization of information and provides a way to ensure that your digital life remains private, secure, and flexible. Join the self-hosting movement today and start reclaiming your digital sovereignty!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UPDATE!&lt;/strong&gt;&lt;br&gt;
Since a couple of you asked for AMD options, I dug deeper into the mini-PC market and found a couple of sweet deals, including one favorite I would definitely have gone for if I were choosing a system today!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/4bQqb4C" rel="noopener noreferrer"&gt;MINISFORUM EliteMini UM780 XTX AMD Ryzen 7 7840HS, 64GB DDR5 1TB&lt;/a&gt; $690 - This one is packing! &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/3V1pSx8" rel="noopener noreferrer"&gt;Beelink AMD Ryzen 7 16GB RAM 1TB SSD&lt;/a&gt; $360 - Cheap option, good option for starter with great CPU.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/450mAid" rel="noopener noreferrer"&gt;Kamrui AMD Ryzen 5 16GB DDR4 512GB SSD&lt;/a&gt; $270 - Slightly cheaper option, more than enough, definitely more than you would get from VPS for $20 a month. Better value then Intel NUC's alternative in similar price range.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>selfhosted</category>
      <category>docker</category>
      <category>opensource</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Most common Junior JS Developer interview questions</title>
      <dc:creator>Sudo BLAZE</dc:creator>
      <pubDate>Thu, 27 Jul 2023 06:01:15 +0000</pubDate>
      <link>https://forem.com/sudo_blaze/most-common-junior-js-developer-interview-questions-4147</link>
      <guid>https://forem.com/sudo_blaze/most-common-junior-js-developer-interview-questions-4147</guid>
      <description>&lt;p&gt;As a junior JavaScript developer, you can expect to be asked a variety of questions during interviews, covering both technical and non-technical aspects. Here are some of the most common questions that you might encounter:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Basic JavaScript Concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is JavaScript, and what is it commonly used for?&lt;/li&gt;
&lt;li&gt;Explain the difference between var, let, and const.&lt;/li&gt;
&lt;li&gt;What are data types in JavaScript?&lt;/li&gt;
&lt;li&gt;How do you declare and call functions in JavaScript?&lt;/li&gt;
&lt;li&gt;What is the difference between null and undefined?&lt;/li&gt;
&lt;li&gt;How does hoisting work in JavaScript?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;DOM Manipulation and Events:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the DOM (Document Object Model) in the context of web development?&lt;/li&gt;
&lt;li&gt;How do you access and modify elements in the DOM using JavaScript?&lt;/li&gt;
&lt;li&gt;Explain event delegation and its benefits.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Asynchronous Programming:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is asynchronous programming in JavaScript, and why is it important?&lt;/li&gt;
&lt;li&gt;How do you handle asynchronous operations using callbacks, promises, or async/await?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Array Manipulation and Higher-Order Functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do you loop through an array in JavaScript?&lt;/li&gt;
&lt;li&gt;What are some common array methods like map, filter, and reduce, and how are they used?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ES6+ Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What are some new features introduced in ES6 and later versions of JavaScript?&lt;/li&gt;
&lt;li&gt;How do arrow functions differ from regular functions?&lt;/li&gt;
&lt;li&gt;What are template literals, and how do you use them?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Common Libraries and Frameworks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have you worked with any JavaScript libraries or frameworks like React, Vue, or Angular?&lt;/li&gt;
&lt;li&gt;What are the advantages and disadvantages of using a framework?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Debugging and Tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do you debug JavaScript code?&lt;/li&gt;
&lt;li&gt;Have you used any developer tools like Chrome DevTools?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have you written unit tests for JavaScript code?&lt;/li&gt;
&lt;li&gt;What testing frameworks have you used?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Code Organization and Best Practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do you organize your JavaScript code to make it maintainable and scalable?&lt;/li&gt;
&lt;li&gt;What are some best practices you follow while writing JavaScript code?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Project Experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can you describe a JavaScript project you worked on?&lt;/li&gt;
&lt;li&gt;What were the most significant challenges you faced during the project, and how did you overcome them?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
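
&lt;p&gt;For the array-methods question in particular, here is a short snippet you can walk through in an interview (a minimal sketch; the variable names are just illustrative):&lt;/p&gt;

```javascript
// map/filter/reduce: transform, select, and aggregate an array
const nums = [1, 2, 3, 4, 5];

const doubled = nums.map((n) => n * 2);          // new array, original untouched
const evens = nums.filter((n) => n % 2 === 0);   // only elements passing the test
const sum = nums.reduce((acc, n) => acc + n, 0); // folds the array into one value

console.log(doubled, evens, sum); // [2, 4, 6, 8, 10] [2, 4] 15
```

&lt;p&gt;Being able to explain that all three return new values instead of mutating the original array is exactly the kind of detail interviewers listen for.&lt;/p&gt;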

&lt;p&gt;Remember that the specific questions can vary depending on the company and the interviewer's preferences. It's crucial to review JavaScript concepts, practice coding exercises, and be prepared to discuss your own experiences and projects to demonstrate your skills and enthusiasm for the field.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
