<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jota Feldmann</title>
    <description>The latest articles on Forem by Jota Feldmann (@jotafeldmann).</description>
    <link>https://forem.com/jotafeldmann</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F245848%2Ff6a3c895-1bfd-4c6f-ace9-56592b9a1aea.jpeg</url>
      <title>Forem: Jota Feldmann</title>
      <link>https://forem.com/jotafeldmann</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jotafeldmann"/>
    <language>en</language>
    <item>
      <title>The Legend, the Problem, and "Shock! Shock!"</title>
      <dc:creator>Jota Feldmann</dc:creator>
      <pubDate>Sat, 07 Mar 2026 03:18:03 +0000</pubDate>
      <link>https://forem.com/jotafeldmann/the-legend-the-problem-and-shock-shock-210k</link>
      <guid>https://forem.com/jotafeldmann/the-legend-the-problem-and-shock-shock-210k</guid>
      <description>&lt;h2&gt;
  
  
  Who is Donald Knuth?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Donald_Knuth" rel="noopener noreferrer"&gt;Donald Knuth&lt;/a&gt; is, without exaggeration, one of the most important figures in the history of computer science. He wrote &lt;em&gt;The Art of Computer Programming&lt;/em&gt;, a multi-volume series that has sat on the desk of nearly every serious programmer for the past fifty years. He won the Turing Award, often called the Nobel Prize of computing. He invented TeX, the typesetting system that scientists around the world still use to write papers and format equations.&lt;/p&gt;

&lt;p&gt;If you're a developer, you've used his work today without knowing it. That sorting algorithm you called in one line of code? The analysis explaining why it's fast or slow? The mathematical notation in the documentation? Knuth's work is in the foundations of all of it. He's 88 years old, still active, and considered by many to be the greatest living mind in the field.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem — and Why It Matters
&lt;/h2&gt;

&lt;p&gt;Imagine a three-dimensional grid of dots — think of it like a Rubik's cube, but instead of colored stickers, every dot is connected to exactly three neighbors by wires. Now the question: can you trace three separate routes through this grid so that each route visits every single dot exactly once and returns to where it started — and the three routes never share the same wire?&lt;/p&gt;

&lt;p&gt;This kind of route — one that visits every point in a network exactly once and returns to its start — is called a Hamiltonian cycle. It shows up everywhere in practical problems: optimizing delivery routes, designing efficient circuits, scheduling tasks with complex dependencies, planning network infrastructure. The challenge isn't just solving it for one specific grid. It's finding a general recipe that works for any size.&lt;/p&gt;
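&lt;p&gt;To make the definition concrete, here is a small, self-contained sketch (my own toy example, not Knuth's grid) that checks whether a proposed route is a Hamiltonian cycle of a given graph:&lt;/p&gt;

```python
def is_hamiltonian_cycle(edges, n, route):
    """True if route visits all n vertices exactly once and every
    consecutive pair (including last back to first) is an edge."""
    if sorted(route) != list(range(n)):
        return False  # must visit every vertex exactly once
    edge_set = {frozenset(e) for e in edges}
    return all(
        frozenset((route[i], route[(i + 1) % n])) in edge_set
        for i in range(n)
    )

# A square: 0-1, 1-2, 2-3, 3-0
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_hamiltonian_cycle(edges, 4, [0, 1, 2, 3]))  # True
print(is_hamiltonian_cycle(edges, 4, [0, 2, 1, 3]))  # False: 0 and 2 are not adjacent
```

&lt;p&gt;Checking a candidate cycle is easy; the hard part the paper tackles is constructing three such cycles that share no wire, for grids of any size.&lt;/p&gt;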

&lt;p&gt;Knuth solved it by hand for a small 3×3×3 grid. A collaborator used a computer to find solutions for grids up to size 16. But a universal construction — a formula that provably works for every odd size forever — had never been found. The number of possibilities grows so explosively that brute-force search is simply not an option, even on the fastest computers in existence.&lt;/p&gt;

&lt;h2&gt;
  
  
  "Shock! Shock!"
&lt;/h2&gt;

&lt;p&gt;Those are the first two words of a paper Knuth published this week.&lt;br&gt;
Let that sink in. A man who has spent decades at the absolute frontier of mathematical thinking, who has seen computing go from punch cards to neural networks, who has written the books that define the field — chose to open a scientific paper with those words.&lt;/p&gt;

&lt;p&gt;Why? Because Claude, Anthropic's AI model, found the general construction that no one else had.&lt;/p&gt;

&lt;p&gt;It didn't happen in a flash of machine brilliance. It happened over 31 iterations that looked remarkably like real research: early attempts with simple formulas failed; brute-force search proved unworkable; a promising approach inspired by Gray codes (a technique for navigating ordered sequences) got partway there but stalled; decompositions by "slices" of the grid were tried; randomized simulations with controlled backtracking were explored.&lt;br&gt;
On the 31st attempt, something clicked. Claude found a small set of simple directional rules — based on each dot's position in the grid — that generate all three complete cycles. It tested the construction for multiple grid sizes. It worked every time.&lt;br&gt;
Then came the human part.&lt;/p&gt;

&lt;p&gt;Claude found the recipe. Knuth wrote the proof — the rigorous mathematical argument for why the recipe works, why it will always work, with no exceptions. Without that step, it's a well-supported hypothesis. With it, it's mathematics.&lt;/p&gt;

&lt;p&gt;This kind of problem — finding paths that visit every point in a network — is considered fundamentally hard in computer science. No general shortcut is known to exist.&lt;/p&gt;

&lt;p&gt;What changed here isn't that AI "solved everything." It's that it managed to see a hidden structure inside an enormous space of possibilities, and delivered a construction with real mathematical logic — not just a lucky numerical result.&lt;/p&gt;

&lt;p&gt;The division of labor that emerged naturally — AI explores, human proves — may be a powerful model for mathematical research going forward.&lt;/p&gt;

&lt;p&gt;Knuth published the paper this week; you can read it at &lt;a href="https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf" rel="noopener noreferrer"&gt;https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;AI fair usage: written with Claude, revised with Grammarly. The thinking is still mine.&lt;/p&gt;

</description>
      <category>graph</category>
      <category>ai</category>
      <category>donaldknuth</category>
    </item>
    <item>
      <title>Adapting Technical Interviews to Counter AI-Assisted Cheating</title>
      <dc:creator>Jota Feldmann</dc:creator>
      <pubDate>Sat, 31 May 2025 03:05:29 +0000</pubDate>
      <link>https://forem.com/jotafeldmann/adapting-technical-interviews-to-counter-ai-assisted-cheating-36lk</link>
      <guid>https://forem.com/jotafeldmann/adapting-technical-interviews-to-counter-ai-assisted-cheating-36lk</guid>
      <description>&lt;p&gt;I've been hiring developers for over 10 years—initially as a senior engineer, and for the last two years, as a dedicated technical recruiter. During that time, I’ve reviewed dozens of take-home projects, attended numerous whiteboard sessions, and observed the evolution of interview trends.&lt;/p&gt;

&lt;p&gt;But lately, something’s changed.&lt;/p&gt;

&lt;p&gt;Candidates are using AI tools like ChatGPT and GitHub Copilot to pass technical interviews, often in ways that bypass actual engineering judgment. I’m not against AI—it's a powerful tool when used responsibly. But when it's used to mask a candidate’s true skill level, it undermines the purpose of technical hiring.&lt;/p&gt;

&lt;p&gt;This article outlines some practical strategies and observations I’ve developed to help design interviews that are fairer, more insightful, and harder to game with AI.&lt;/p&gt;

&lt;h1&gt;
  
  
  Personalize the Problem
&lt;/h1&gt;

&lt;p&gt;Generic problems are the easiest to solve with AI. So I’ve started giving contextual, personalized exercises that reflect challenges from our actual codebase or systems.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Here’s a simplified version of our logging system. We recently had issues with performance under load. What would you change, and why?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This forces the candidate to think beyond “just code” and engage with the problem like an engineer, not a copy-paste operator.&lt;/p&gt;

&lt;h1&gt;
  
  
  Use Live, Interactive Interviews
&lt;/h1&gt;

&lt;p&gt;Cheating is harder when someone’s watching. I run most coding interviews live, using platforms like CoderPad or VSCode Live Share. Cameras on, code shared, and I ask candidates to think out loud as they work.&lt;/p&gt;

&lt;p&gt;This reveals so much more than a finished solution. I’m not just hiring for correct syntax—I’m looking for how they reason, debug, and adjust when they hit a wall.&lt;/p&gt;

&lt;h1&gt;
  
  
  Focus on Thought Process Over Output
&lt;/h1&gt;

&lt;p&gt;I’ve learned to push beyond the “Did it run?” mindset. Instead, I ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Why did you choose that approach?&lt;br&gt;
What’s the trade-off of doing it this way?&lt;br&gt;
What would break if the data doubled?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI can give you an answer. It can’t (yet) explain why that answer matters in context.&lt;/p&gt;

&lt;h1&gt;
  
  
  Break Challenges into Timed Parts
&lt;/h1&gt;

&lt;p&gt;Rather than one long take-home, I now use multi-part, timed challenges during live sessions. For example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Part 1: Quick algorithm&lt;br&gt;
Part 2: Real-world refactor&lt;br&gt;
Part 3: Code review or optimization&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I tell them upfront: “You probably won’t finish it all.” That’s the point. I’m looking at how they prioritize, communicate, and think under fair pressure, not just what they ship.&lt;/p&gt;

&lt;h1&gt;
  
  
  Add a Bit of Ambiguity
&lt;/h1&gt;

&lt;p&gt;Real-world specs are rarely perfect, so I sometimes leave edge cases open-ended. I want to see:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Do they ask clarifying questions?&lt;br&gt;
Do they make smart assumptions?&lt;br&gt;
Do they catch corner cases without prompting?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is where junior devs and senior engineers start to show their differences.&lt;/p&gt;

&lt;h1&gt;
  
  
  Ask Open-Ended, Non-Code Questions
&lt;/h1&gt;

&lt;p&gt;I’ve also added open-ended discussions into my process. Things like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“How would you design a feature to reduce backend load?”&lt;br&gt;
“How would you onboard a junior into a legacy system?”&lt;br&gt;
“How do you approach technical debt?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;These are the kinds of questions you can’t just Google or AI your way through—they demand real experience and opinions.&lt;/p&gt;

&lt;h1&gt;
  
  
  Set Boundaries on AI Use
&lt;/h1&gt;

&lt;p&gt;For take-home assignments, I now include a clear statement:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“This test must be completed without the assistance of AI tools such as ChatGPT, Copilot, or similar.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Is it foolproof? No. But it sets the tone—and the ethical candidates take it seriously.&lt;/p&gt;




&lt;p&gt;I’m not trying to “catch” candidates using AI. I’m trying to design a process where genuine skill and thought shine through, regardless of whether a candidate has ChatGPT open in another tab. AI is changing the game, and as interviewers, we need to adapt—not by being punitive, but by being thoughtful.&lt;/p&gt;

&lt;p&gt;AI is here to stay. I believe it is an extension of the human (Iron Man's suit), not a replacement for developers (androids). But you need to be the best you can be with the tool, and people still trust individuals to handle the ethics.&lt;/p&gt;

&lt;p&gt;If you're a fellow tech hirer seeing similar patterns, I hope these ideas help you refine your process.&lt;/p&gt;

&lt;p&gt;Let’s keep interviews human—and fair.&lt;/p&gt;




&lt;p&gt;AI usage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ChatGPT for the cover image&lt;/li&gt;
&lt;li&gt;Grammarly for final proofreading&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>interview</category>
      <category>interviewer</category>
      <category>ai</category>
    </item>
    <item>
      <title>How a Math Breakthrough Can Help Us Build Better Software: A COMMON Developer's Guide to Williams' Time-Space Insight</title>
      <dc:creator>Jota Feldmann</dc:creator>
      <pubDate>Sat, 24 May 2025 18:38:31 +0000</pubDate>
      <link>https://forem.com/jotafeldmann/how-a-math-breakthrough-can-help-us-build-better-software-a-common-developers-guide-to-williams-3ldc</link>
      <guid>https://forem.com/jotafeldmann/how-a-math-breakthrough-can-help-us-build-better-software-a-common-developers-guide-to-williams-3ldc</guid>
      <description>&lt;p&gt;As a &lt;strong&gt;common&lt;/strong&gt; software developer with experience in &lt;strong&gt;common&lt;/strong&gt; commercial software (what I call "real world software"), especially in startup settings, I constantly fight resource constraints: memory, compute, and cost. &lt;/p&gt;

&lt;p&gt;Recently, I stumbled on something from the academic world that might actually matter to us, common craftsmanship developers in the trenches: a &lt;a href="https://www.quantamagazine.org/for-algorithms-a-little-memory-outweighs-a-lot-of-time-20250521/" rel="noopener noreferrer"&gt;breakthrough by MIT's Ryan Williams&lt;/a&gt;. No, it's not some abstract math proof for theorists only. It's a real insight with potential real-world impact on how we write memory-efficient code.&lt;/p&gt;

&lt;p&gt;What caught my eye was: "One computer scientist’s 'stunning' proof is the first progress in 50 years on one of the most famous questions in computer science."&lt;/p&gt;

&lt;p&gt;WOW.&lt;/p&gt;

&lt;p&gt;The core idea is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You don’t always need linear space to finish a task efficiently.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It took me some time to understand what linear space means in this context (thanks, ChatGPT), but essentially it's this: the memory you need in order to be fast grows one-to-one with the size of the input.&lt;/p&gt;

&lt;p&gt;If you have 1 million records, you’re expected to use 1 million “slots” in memory to process them efficiently. That’s linear space — one-to-one. More data? More memory. No questions asked.&lt;/p&gt;

&lt;p&gt;It’s like trying to be the fastest at checking every name on a list, so you write all of them on sticky notes and plaster them on your walls. Sure, you can find anyone instantly, but now your house is full of notes and you can’t find your cat.&lt;/p&gt;

&lt;p&gt;Williams comes along and says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Hey… what if we just remembered the important parts, and recomputed the rest when we need it? You’d use fewer sticky notes — and your cat would thank you.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before, computer scientists used to believe that:&lt;/p&gt;

&lt;p&gt;To run an algorithm efficiently, you needed memory roughly proportional to the size of the input — linear space.&lt;/p&gt;

&lt;p&gt;In essence, it sounds obvious: more data means you need more memory to handle it well. Like saying, “If I want to cook for 100 people, I need 100 plates.”&lt;/p&gt;

&lt;p&gt;But now, thanks to Ryan Williams, we have proof that this isn’t always true.&lt;/p&gt;

&lt;p&gt;Turns out, with the right approach, you can sometimes cook for 100 people using just 10 plates — you just wash and reuse them fast enough that no one notices.&lt;/p&gt;

&lt;p&gt;In algorithm terms: you simulate the same result, using way less memory, maybe with a bit more time or clever recomputation. It's not magic. It's just a smarter use of resources — and now it's mathematically backed.&lt;/p&gt;
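&lt;p&gt;Here is a toy illustration of that trade (my own sketch, not the construction from Williams' paper): instead of storing every intermediate state of a step-by-step computation, store a checkpoint every k steps and replay the few missing steps on demand:&lt;/p&gt;

```python
def f(x):
    # stand-in for one deterministic step of a longer computation
    return (x * 31 + 7) % 1_000_003

def build_checkpoints(x0, n, k):
    """Run n steps from x0, keeping only every k-th state: O(n/k) space."""
    checkpoints = {0: x0}
    x = x0
    for i in range(1, n + 1):
        x = f(x)
        if i % k == 0:
            checkpoints[i] = x
    return checkpoints

def state_at(checkpoints, k, t):
    """Recover the state after t steps by replaying at most k - 1 steps."""
    base = (t // k) * k
    x = checkpoints[base]
    for _ in range(t - base):
        x = f(x)
    return x
```

&lt;p&gt;Space drops from O(n) stored states to O(n/k) checkpoints, at the cost of replaying at most k - 1 steps per lookup: fewer sticky notes, a bit more walking.&lt;/p&gt;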

&lt;h2&gt;
  
  
  A Simple, Kind-of-Real-World Example: Finding the Maximum Value in a List
&lt;/h2&gt;

&lt;p&gt;Let’s say you’re finding the maximum number in a list. Most developers know two ways to do this.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before (How We Typically Think)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Load everything in memory
&lt;/span&gt;&lt;span class="n"&gt;nums&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data.txt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;span class="n"&gt;max_val&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nums&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time: O(n)&lt;br&gt;
Space: O(n)&lt;/p&gt;

&lt;p&gt;You load all the numbers into memory, then call max(). Fast and simple, but memory-heavy.&lt;/p&gt;
&lt;h3&gt;
  
  
  Williams-Inspired Interpretation
&lt;/h3&gt;

&lt;p&gt;Instead of storing everything, why not just keep track of the max as you go?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;max_val&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;float&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-inf&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data.txt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;num&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;max_val&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;max_val&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time: O(n)&lt;br&gt;
Space: O(1)&lt;/p&gt;

&lt;p&gt;This simulates the same behavior with far less memory, and it's not as slow as we used to think.&lt;/p&gt;
&lt;h2&gt;
  
  
  A More Real-World Example: Building a Search Engine
&lt;/h2&gt;

&lt;p&gt;Let’s say you’re building a support dashboard or a knowledge-base search tool. Normally, you’d build an inverted index, like Elasticsearch or Lucene do:&lt;/p&gt;
&lt;h3&gt;
  
  
  Before: Traditional Inverted Index
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;inverted_index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;word&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="n"&gt;inverted_index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setdefault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;word&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Time: Fast lookups&lt;br&gt;
Space: O(n) to store the index&lt;/p&gt;

&lt;p&gt;This is memory-heavy. If you have millions of documents, the index might not fit on smaller machines or edge devices.&lt;/p&gt;
&lt;h3&gt;
  
  
  After: Williams-Inspired Interpretation
&lt;/h3&gt;

&lt;p&gt;What if we simulate the index instead of storing it entirely? We could use space-efficient structures like Bloom Filters or sketches:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BloomFilter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;word&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_hashes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;word&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bits&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;contains&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;word&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bits&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_hashes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;word&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each document gets a small filter. At search time, instead of querying a big inverted index, you check which filters "probably" contain your words. You trade exactness for space, but still get fast, usable search.&lt;/p&gt;
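&lt;p&gt;A runnable miniature of that search flow (the hashing scheme, sizes, and document texts are my own illustrative choices, not a production design):&lt;/p&gt;

```python
import hashlib

def positions(word, size=1024, k=3):
    # k bit positions derived from one SHA-256 digest (a simplification)
    digest = hashlib.sha256(word.encode()).digest()
    return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % size
            for i in range(k)]

def make_filter(words, size=1024):
    bits = [0] * size
    for w in words:
        for p in positions(w, size):
            bits[p] = 1
    return bits

def probably_contains(bits, word):
    return all(bits[p] for p in positions(word, len(bits)))

docs = ["error timeout in payment service", "user login flow guide"]
filters = [make_filter(d.split()) for d in docs]

def search(query):
    # documents whose filters "probably" contain every query word
    return [i for i, bits in enumerate(filters)
            if all(probably_contains(bits, w) for w in query.split())]

print(search("payment timeout"))  # always includes 0; Bloom filters never miss a real match
```

&lt;p&gt;False positives are possible, false negatives are not: a real match is always found, and the occasional extra candidate can be double-checked against the raw document.&lt;/p&gt;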

&lt;h2&gt;
  
  
  My two cents: personal insight while digging into this
&lt;/h2&gt;

&lt;p&gt;What really clicked for me while exploring this idea wasn’t just the resource savings — though that’s cool. It was how this way of thinking led me to design software differently.&lt;/p&gt;

&lt;p&gt;Instead of just asking “How do I load everything and go fast?”, I started thinking in units of work — batches, chunks, steps. Suddenly, I was building systems that naturally scale better, are easier to retry, and recover more gracefully from failures.&lt;/p&gt;
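&lt;p&gt;That "units of work" mindset can be as simple as this sketch (the batch size and the doubling step are illustrative stand-ins for real work):&lt;/p&gt;

```python
def batches(iterable, size):
    """Yield fixed-size batches so memory stays bounded at O(size)."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

def process(record):
    return record * 2  # stand-in for real per-record work

total = 0
for batch in batches(range(10), size=4):
    # each batch is small and can be retried in isolation on failure
    total += sum(process(r) for r in batch)

print(total)  # 90
```

&lt;p&gt;The same loop works whether the source is a list, a file, or a database cursor, which is exactly why batch-shaped designs retry and scale so gracefully.&lt;/p&gt;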

&lt;p&gt;You’re not just writing smarter algorithms — you’re architecting smarter systems. And now you can justify that this approach is not as slow as we used to believe!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Memory limits become design constraints that actually make your software more resilient.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s weird: a theory paper ended up helping me with system design.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts: Why You Should Care
&lt;/h2&gt;

&lt;p&gt;If you're building for constrained environments or just want more efficient systems, Ryan Williams' result gives you permission to rethink the memory/time trade-offs in your architecture. It’s not just theory — it’s a mindset shift.&lt;/p&gt;

&lt;p&gt;And mindset shifts can lead to big wins in the world of startups and real-world software.&lt;/p&gt;




&lt;p&gt;Update: I removed a Fibonacci example that wasn't the best illustration in this case.&lt;/p&gt;




&lt;p&gt;AI Usage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To understand the "linear space" concept&lt;/li&gt;
&lt;li&gt;The cover image&lt;/li&gt;
&lt;li&gt;Grammarly for text correction&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>algorithms</category>
      <category>complexity</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>A brief history of tech interviews</title>
      <dc:creator>Jota Feldmann</dc:creator>
      <pubDate>Wed, 05 Feb 2025 18:28:02 +0000</pubDate>
      <link>https://forem.com/jotafeldmann/a-brief-history-of-tech-interviews-1618</link>
      <guid>https://forem.com/jotafeldmann/a-brief-history-of-tech-interviews-1618</guid>
      <description>&lt;p&gt;Since I've been doing tech interviews for 10+ years and professionally for 2, I decided to look for previous work in that area.&lt;/p&gt;

&lt;p&gt;It's a common topic for tech professionals, but finding material is difficult.&lt;/p&gt;

&lt;p&gt;This post intends to start a series about this topic.&lt;/p&gt;

&lt;p&gt;Let's start with a brief recap, to the best of my knowledge:&lt;/p&gt;

&lt;h2&gt;
  
  
  Early Days (1950s-1970s): The Birth of Computing &amp;amp; Hiring Engineers
&lt;/h2&gt;

&lt;p&gt;Companies like IBM, Bell Labs, and Xerox PARC were pioneers in hiring computer scientists.&lt;/p&gt;

&lt;p&gt;Interviews were academic in nature, focusing on logic, mathematics, and electrical engineering principles.&lt;/p&gt;

&lt;p&gt;Hiring was often referral-based, with informal questioning rather than structured interviews.&lt;/p&gt;

&lt;p&gt;Many interviewers were scientists and engineers rather than dedicated HR professionals.&lt;/p&gt;

&lt;h2&gt;
  
  
  1980s-1990s: The Rise of Silicon Valley &amp;amp; Algorithmic Interviews
&lt;/h2&gt;

&lt;p&gt;The rise of companies like Microsoft, Apple, and Sun Microsystems brought more structured hiring processes.&lt;/p&gt;

&lt;p&gt;Microsoft famously introduced puzzle-based interviews, testing problem-solving skills.&lt;/p&gt;

&lt;p&gt;Whiteboard coding and algorithm-heavy interviews became the standard, with books like Programming Pearls influencing interview techniques.&lt;/p&gt;

&lt;p&gt;Dedicated technical recruiters and specialist interviewers emerged, distinct from hiring managers.&lt;/p&gt;

&lt;h2&gt;
  
  
  2000s: Google’s Influence &amp;amp; Standardization of Coding Interviews
&lt;/h2&gt;

&lt;p&gt;Google revolutionized hiring by implementing data-driven hiring and rigorous coding interviews.&lt;/p&gt;

&lt;p&gt;LeetCode-style algorithmic questions and system design interviews became industry norms.&lt;/p&gt;

&lt;p&gt;Behavioral interviews like "Googleyness" checks were introduced to assess cultural fit.&lt;/p&gt;

&lt;p&gt;Many companies adopted bar-raiser programs (inspired by Amazon), where trained interviewers had veto power over hiring decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  2010s: Remote Interviews, AI Assessments, &amp;amp; Diversity Efforts
&lt;/h2&gt;

&lt;p&gt;The rise of remote interviews with tools like CoderPad, CodeSignal, and online assessments.&lt;/p&gt;

&lt;p&gt;AI-powered resume screening and technical assessments became common.&lt;/p&gt;

&lt;p&gt;Companies focused on reducing bias in interviews, incorporating structured rubrics and blind hiring practices.&lt;/p&gt;

&lt;p&gt;Pair programming interviews and take-home projects grew in popularity.&lt;/p&gt;

&lt;h2&gt;
  
  
  2020s-Present: AI and Vibe Coding
&lt;/h2&gt;

&lt;p&gt;AI-assisted interviews (e.g., AI grading coding challenges) and machine learning to predict candidate success.&lt;/p&gt;

&lt;p&gt;Shift away from pure algorithmic interviews toward real-world problem-solving and system design.&lt;/p&gt;

&lt;p&gt;Live coding interviews and asynchronous video interviews continue to evolve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://daedtech.com/deploying-guerrilla-tactics-combat-stupid-tech-interviews/" rel="noopener noreferrer"&gt;Deploying Guerrilla Tactics to Combat Stupid Tech Interviews&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reddit.com/r/ExperiencedDevs/comments/1ae4gtw/the_evolution_of_the_coding_interview_a_common/" rel="noopener noreferrer"&gt;The evolution of the coding interview: A common misconception&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.hackerrank.com/blog/past-present-future-technical-interview/" rel="noopener noreferrer"&gt;The Past, Present, and Future of the Technical Interview&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/better-programming/a-history-of-coding-interviews-23b5e8f9c92f" rel="noopener noreferrer"&gt;A History of Coding Interviews&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/better-programming/a-history-of-coding-interviews-23b5e8f9c92f" rel="noopener noreferrer"&gt;Tech interviews: an origin story &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.plan99.net/in-defence-of-the-technical-interview-966f54a58927" rel="noopener noreferrer"&gt;In defense of the technical interview | by Mike Hearn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://happycoding.io/tutorials/interviewing/history" rel="noopener noreferrer"&gt;A Brief History of Data Structures, Algorithms, and Tech Interviews&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>interview</category>
      <category>career</category>
    </item>
    <item>
      <title>Converting a Node project to Deno</title>
      <dc:creator>Jota Feldmann</dc:creator>
      <pubDate>Wed, 13 May 2020 06:15:23 +0000</pubDate>
      <link>https://forem.com/jotafeldmann/converting-a-node-project-to-deno-9dp</link>
      <guid>https://forem.com/jotafeldmann/converting-a-node-project-to-deno-9dp</guid>
      <description>&lt;p&gt;I was intrigued to test Ryan Dhal's &lt;a href="https://deno.land/"&gt;Deno&lt;/a&gt; and nothing better than some personal project to make it right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p63FPeC2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/kt3k/deno_sticker/master/sticker.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p63FPeC2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/kt3k/deno_sticker/master/sticker.png" alt="Deno"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some important stuff before:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/suhas_chatekar/converting-a-javascript-project-to-typescript-one-file-at-a-time"&gt;Convert your project to TypeScript&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@mudgen/porting-node-js-code-to-deno-e7225bd5be58"&gt;Some gotchas about existent modules&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://deepu.tech/deno-runtime-for-typescript/"&gt;Read basic stuff about Deno&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Remove all NPM files and node_modules
&lt;/h2&gt;

&lt;p&gt;You don't need anything beyond Deno itself: some tasks map directly to built-in Deno commands (e.g. &lt;code&gt;deno test&lt;/code&gt;), and for the others I'll use a Makefile for convenience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove &lt;code&gt;package.json&lt;/code&gt;, &lt;code&gt;package-lock.json&lt;/code&gt;, and all related files. Check this file for everything &lt;code&gt;NPM&lt;/code&gt;-related: &lt;a href="https://github.com/github/gitignore/blob/master/Node.gitignore"&gt;https://github.com/github/gitignore/blob/master/Node.gitignore&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Remove &lt;code&gt;node_modules&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
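&lt;p&gt;As a rough sketch (file names assumed; adjust to whatever your repo actually contains), the cleanup can be done in one go:&lt;/p&gt;

```shell
# Remove npm artifacts; extend the list with whatever your repo has
# (see the Node.gitignore linked above for more candidates)
rm -rf node_modules
rm -f package.json package-lock.json .npmrc
```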

&lt;h2&gt;
  
  
  Add file extensions to imports
&lt;/h2&gt;

&lt;p&gt;Add &lt;code&gt;.ts&lt;/code&gt; to all import statements.&lt;/p&gt;

&lt;p&gt;One easy way is to use VS Code's &lt;a href="https://code.visualstudio.com/docs/editor/codebasics#_find-and-replace"&gt;"search and replace"&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable regex&lt;/li&gt;
&lt;li&gt;For the &lt;strong&gt;Search&lt;/strong&gt; field, use &lt;code&gt;(.+?)(?=.ts')&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For the &lt;strong&gt;Replace&lt;/strong&gt; field, use &lt;code&gt;$1.ts&lt;/code&gt;
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DgL46Vko--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zccqypfn4gf8s5ahbzj8.png" alt="Alt Text"&gt;
&lt;/li&gt;
&lt;/ul&gt;
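&lt;p&gt;If you prefer the terminal, a hypothetical one-liner with &lt;code&gt;sed&lt;/code&gt; does a similar job (the file name and regex here are illustrative, and this naive pattern would double-append &lt;code&gt;.ts&lt;/code&gt; if run twice):&lt;/p&gt;

```shell
# Create a sample file with an extensionless relative import
printf "import { greet } from './greet'\n" > example.ts

# Append .ts to relative import specifiers ('./x' or '../x')
sed -i.bak -E "s#(from '\.\.?/[^']+)'#\1.ts'#" example.ts

cat example.ts   # import { greet } from './greet.ts'
```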

&lt;h2&gt;
  
  
  Fix parser warnings and adapt the logic
&lt;/h2&gt;

&lt;p&gt;Deno enforces strict TypeScript compiler settings and its own style guide, so expect a few logic and code adaptations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optional: convert tests and test task
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Short version: &lt;a href="https://github.com/denoland/deno/blob/master/docs/testing.md"&gt;https://github.com/denoland/deno/blob/master/docs/testing.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow the "out of the box" test suite &lt;a href="https://deno.land/std/testing"&gt;https://deno.land/std/testing&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Convert from &lt;code&gt;npm test&lt;/code&gt; to something like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Optional Makefile for convenience&lt;/span&gt;
&lt;span class="nb"&gt;test&lt;/span&gt;:
    deno &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Convert install task and add the first dependency
&lt;/h2&gt;

&lt;p&gt;Forget &lt;code&gt;npm install&lt;/code&gt;. &lt;a href="https://github.com/denoland/deno/blob/master/docs/linking_to_external_code.md#it-seems-unwieldy-to-import-urls-everywhere"&gt;You can use a &lt;code&gt;deps.ts&lt;/code&gt; file&lt;/a&gt;, but it's not required. I'm using a Makefile to keep track of all dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Optional Makefile for convenience&lt;/span&gt;
&lt;span class="nb"&gt;install&lt;/span&gt;:
    deno &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--unstable&lt;/span&gt; &lt;span class="nt"&gt;--allow-read&lt;/span&gt; &lt;span class="nt"&gt;--allow-run&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://deno.land/x/denon/denon.ts&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Convert run and dev tasks (with Denon)
&lt;/h2&gt;

&lt;p&gt;Here I'm using the &lt;a href="https://deno.land/x/denon"&gt;Denon&lt;/a&gt; module, the Nodemon equivalent for Deno, to watch for file changes and reload.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Optional Makefile for convenience&lt;/span&gt;
dev:
    denon &lt;span class="si"&gt;$(&lt;/span&gt;ENTRY_POINT&lt;span class="si"&gt;)&lt;/span&gt;
run:
    deno run &lt;span class="si"&gt;$(&lt;/span&gt;ENTRY_POINT&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Set entry point
&lt;/h2&gt;

&lt;p&gt;Change the entry point file name from &lt;code&gt;index.ts&lt;/code&gt; to &lt;code&gt;mod.ts&lt;/code&gt;, following the &lt;a href="https://github.com/denoland/deno/blob/master/docs/contributing/style_guide.md"&gt;Deno/Rust standard&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use my project as a template
&lt;/h2&gt;

&lt;p&gt;All these steps are documented on my project: &lt;a href="https://github.com/jotafeldmann/elevators/pull/1"&gt;https://github.com/jotafeldmann/elevators/pull/1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enjoy, and please send me feedback so I can improve it.&lt;/p&gt;

</description>
      <category>node</category>
      <category>deno</category>
      <category>convert</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Uncached Go tests: avoid flaky tests</title>
      <dc:creator>Jota Feldmann</dc:creator>
      <pubDate>Thu, 27 Feb 2020 21:25:42 +0000</pubDate>
      <link>https://forem.com/jotafeldmann/uncached-go-tests-avoid-flaky-tests-2m49</link>
      <guid>https://forem.com/jotafeldmann/uncached-go-tests-avoid-flaky-tests-2m49</guid>
      <description>&lt;p&gt;So, crazy about your integration tests with Go?&lt;/p&gt;

&lt;p&gt;I was going crazy testing my endpoints: for hours, I couldn't figure out why &lt;a href="https://hackernoon.com/flaky-tests-a-war-that-never-ends-9aa32fdef359"&gt;some tests were flaky&lt;/a&gt; in my CI environment.&lt;/p&gt;

&lt;p&gt;The reason: &lt;a href="https://golang.org/doc/go1.10#test"&gt;since Go 1.10, test results are cached&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To bypass the cache, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="n"&gt;test&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Thanks &lt;a class="comment-mentioned-user" href="https://dev.to/marciorodrigues"&gt;@marciorodrigues&lt;/a&gt;
 and &lt;a href="https://github.com/golang/go/issues/24573#issuecomment-427455066"&gt;that issue&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>go</category>
      <category>testing</category>
    </item>
    <item>
      <title>A tricky Python default argument</title>
      <dc:creator>Jota Feldmann</dc:creator>
      <pubDate>Sat, 15 Feb 2020 22:34:55 +0000</pubDate>
      <link>https://forem.com/jotafeldmann/a-tricky-python-default-argument-4l6c</link>
      <guid>https://forem.com/jotafeldmann/a-tricky-python-default-argument-4l6c</guid>
      <description>&lt;p&gt;Check the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[]):&lt;/span&gt;
  &lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;l&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In my opinion, this mutating function is problematic in itself, but for the purposes of this article, focus on the default argument/parameter. What's wrong?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[]):&lt;/span&gt;
  &lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;l&lt;/span&gt;

&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="c1"&gt;# [1]
&lt;/span&gt;&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="c1"&gt;# [1,1]
&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here's the tricky part about default parameters: a mutable default (list, dict, etc.) is &lt;strong&gt;instantiated only once, when the function is defined, and that same object lives in memory for the rest of the function's life!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IMHO it's problematic behavior, different from other languages such as Java or &lt;a href="https://repl.it/@jotafeldmann/defaultArgument"&gt;JavaScript&lt;/a&gt;. Granted, it can be handy for dependency injection, because once set, the default is never instantiated again. But in everyday use you can forget about that default parameter and hit practical side effects, for example in unit tests (my reason for writing this article).&lt;/p&gt;
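&lt;p&gt;A minimal sketch to make the sharing visible (reusing the &lt;code&gt;append&lt;/code&gt; function from above): the default list is created once, at definition time, and every defaulted call mutates that same object:&lt;/p&gt;

```python
def append(l=[]):
    l.append(1)
    return l

first = append()
second = append()

# Both calls returned the very same list object
print(first is second)         # True

# The shared default is stored on the function object itself
print(append.__defaults__[0])  # [1, 1]
```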

&lt;p&gt;To avoid this, check and instantiate the default every time, at the beginning of the function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;immutableAppend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;l&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;l&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;l&lt;/span&gt;
  &lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;l&lt;/span&gt;

&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;immutableAppend&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="c1"&gt;# [1]
&lt;/span&gt;&lt;span class="k"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;immutableAppend&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="c1"&gt;# [1]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This gotcha led me to the great &lt;a href="https://docs.python-guide.org/writing/gotchas/"&gt;Python Guide&lt;/a&gt;, whose first gotcha is exactly this behavior. And one more reason to read the whole &lt;a href="https://docs.python.org/3/tutorial/controlflow.html#default-argument-values"&gt;Python documentation&lt;/a&gt;, again 😞 (search for "Important warning: The default value is evaluated only once").&lt;/p&gt;

&lt;p&gt;You can test it on &lt;a href="https://repl.it/@jotafeldmann/trickyDefaultArgument"&gt;https://repl.it/@jotafeldmann/trickyDefaultArgument&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>immutability</category>
      <category>testing</category>
    </item>
    <item>
      <title>You become responsible, forever, for what you have published on NPM</title>
      <dc:creator>Jota Feldmann</dc:creator>
      <pubDate>Thu, 30 Jan 2020 04:54:06 +0000</pubDate>
      <link>https://forem.com/jotafeldmann/you-become-responsible-forever-for-what-you-have-published-on-npm-1cam</link>
      <guid>https://forem.com/jotafeldmann/you-become-responsible-forever-for-what-you-have-published-on-npm-1cam</guid>
      <description>&lt;p&gt;&lt;q&gt;You become responsible, forever, for what you have tamed.&lt;/q&gt;&lt;br&gt;
&lt;small&gt;― Antoine de Saint-Exupéry, The Little Prince &lt;/small&gt;&lt;/p&gt;

&lt;p&gt;I was going through my old small projects, looking for good candidates to practice testing and improve my skills.&lt;/p&gt;

&lt;p&gt;Suddenly I remembered that some of them were published on NPM, just for fun, for my own use. And then, in that small area on the right, which I've used so many times to analyze packages, my eyes lit up: &lt;strong&gt;most of my packages get about 6 to 24 downloads/week!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres.cloudinary.com%2Fpracticaldev%2Fimage%2Ffetch%2Fs--Etk7LUQF--%2Fc_imagga_scale%2Cf_auto%2Cfl_progressive%2Ch_420%2Cq_auto%2Cw_1000%2Fhttps%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp9k8o53dnxp24wo0kzvm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres.cloudinary.com%2Fpracticaldev%2Fimage%2Ffetch%2Fs--Etk7LUQF--%2Fc_imagga_scale%2Cf_auto%2Cfl_progressive%2Ch_420%2Cq_auto%2Cw_1000%2Fhttps%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp9k8o53dnxp24wo0kzvm.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was shocked! Somebody out there, in the ocean of open source, is actually using my humble chunks of code...&lt;/p&gt;

&lt;p&gt;After a few minutes of self-admiration, one thing became clear: I was now responsible, somehow, for other people's code. Never, in my 20 years of coding, had this kind of responsibility crashed over me.&lt;/p&gt;

&lt;p&gt;It's one thing to write code for the production environment of software you're paid to build; it's another when you ship some silly code and somebody, not your colleagues or your company, starts using your package. Weekly.&lt;/p&gt;

&lt;p&gt;So I've decided to improve all of them: versioning, tests, good documentation, better code. Even though my published code was simple, the need for some baseline settled on my shoulders and became a duty.&lt;/p&gt;

&lt;p&gt;And now, 20 years after my first program, I can feel what open source can be. It's not just "contributing back" but becoming responsible to others and raising the bar, as I imagine the maintainers of all those starred repositories do.&lt;/p&gt;

&lt;p&gt;...&lt;/p&gt;

&lt;p&gt;So, to you out there using my code: thanks a lot. And one lesson learned: open source your ideas; maybe somebody will find them useful.&lt;/p&gt;

</description>
      <category>npm</category>
      <category>javascript</category>
      <category>opensource</category>
      <category>node</category>
    </item>
    <item>
      <title>Tricky async declarations</title>
      <dc:creator>Jota Feldmann</dc:creator>
      <pubDate>Thu, 16 Jan 2020 22:39:23 +0000</pubDate>
      <link>https://forem.com/jotafeldmann/tricky-async-declarations-1hii</link>
      <guid>https://forem.com/jotafeldmann/tricky-async-declarations-1hii</guid>
      <description>&lt;p&gt;Imagine the following code, using Express and Sequelize:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sequelizeModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getStuff&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In simple words: given an endpoint, it executes a query on the database and returns the result.&lt;/p&gt;

&lt;p&gt;But there's a small gotcha: the code runs but doesn't return the results. Member access binds tighter than &lt;code&gt;await&lt;/code&gt;, so &lt;code&gt;await sequelizeModel.getStuff()[0]&lt;/code&gt; reads &lt;code&gt;[0]&lt;/code&gt; on the promise object itself, which yields &lt;code&gt;undefined&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The result array only exists after the promise is fulfilled, so you have to await first and index afterwards.&lt;/p&gt;
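&lt;p&gt;A minimal sketch of the precedence gotcha, with no Express or Sequelize needed (&lt;code&gt;getStuff&lt;/code&gt; here is a stand-in for the Sequelize call):&lt;/p&gt;

```javascript
// Stand-in for the Sequelize query: resolves to an array of rows
const getStuff = async () => ['row'];

async function demo() {
  const wrong = await getStuff()[0];   // indexes the Promise itself -> undefined
  const right = (await getStuff())[0]; // await first, then index -> 'row'
  return { wrong, right };
}

demo().then(({ wrong, right }) => console.log(wrong, right));
```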

&lt;p&gt;Working code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;sequelizeModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getStuff&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



</description>
      <category>javascript</category>
      <category>node</category>
      <category>async</category>
      <category>promise</category>
    </item>
  </channel>
</rss>
