<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Michael Tiel</title>
    <description>The latest articles on Forem by Michael Tiel (@theirritainer).</description>
    <link>https://forem.com/theirritainer</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F308911%2Fde2cc15b-52b1-459e-95e1-8007099356f6.jpeg</url>
      <title>Forem: Michael Tiel</title>
      <link>https://forem.com/theirritainer</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/theirritainer"/>
    <language>en</language>
    <item>
      <title>ThreadQL: just query your data in a Slack thread</title>
      <dc:creator>Michael Tiel</dc:creator>
      <pubDate>Thu, 26 Mar 2026 15:41:39 +0000</pubDate>
      <link>https://forem.com/theirritainer/threadql-just-query-your-data-in-a-slack-thread-437e</link>
      <guid>https://forem.com/theirritainer/threadql-just-query-your-data-in-a-slack-thread-437e</guid>
      <description>&lt;p&gt;ThreadQL: just query your data in a Slack thread&lt;/p&gt;

&lt;p&gt;I'm excited to share that ThreadQL v0.1.0 is now open source and available at &lt;a href="https://threadql.com" rel="noopener noreferrer"&gt;https://threadql.com&lt;/a&gt;. This AI-powered application pioneers an era in which asking questions is the new query language.&lt;/p&gt;

&lt;p&gt;The problem I've always had&lt;br&gt;
Work in a tech company long enough and you realize: the database is where the truth lives. Every order, every user, every metric — it's all in there. But getting answers out of it? That's a different story.&lt;/p&gt;

&lt;p&gt;If you're technical you write SQL queries. If you're not, you message the dev or data team and wait. And wait. And wait.&lt;/p&gt;

&lt;p&gt;I've come to understand this dependency from both positions. On the product side, I've seen frustrated product owners who wanted instant answers but were blocked when we couldn't respond in time. On the dev side, I've fielded my share of flow-breaking "can you just run this quick query?" requests. And I kept thinking:&lt;/p&gt;

&lt;p&gt;Why can't I just ask my database a question like I'd ask a person?&lt;br&gt;
And why can't it live where I already work — in Slack?&lt;br&gt;
That's what ThreadQL does. &lt;/p&gt;

&lt;p&gt;ThreadQL Lives in Slack&lt;br&gt;
ThreadQL isn't another app you have to switch to. It's right in your Slack.&lt;/p&gt;

&lt;p&gt;You ask a question in plain English, and ThreadQL responds with your data. No SQL. No context switching. No waiting.&lt;/p&gt;

&lt;p&gt;"How many users signed up this month?"&lt;br&gt;
"What's our top product by revenue?"&lt;br&gt;
"Show me the orders that haven't shipped yet."&lt;/p&gt;

&lt;p&gt;Ask. Get answers. Done. &lt;/p&gt;

&lt;p&gt;Need the Raw Data?&lt;br&gt;
Need further analysis? Just ask ThreadQL to export a CSV and it'll send it straight to the Slack thread. Download it, open it in Excel, and work from there. Your data, delivered where you already are.&lt;/p&gt;

&lt;p&gt;Why I’m Open Sourcing It&lt;/p&gt;

&lt;p&gt;This isn't a product I’m selling. It's a tool I believe belongs to the community. &lt;/p&gt;

&lt;p&gt;Organizations deserve to unlock the data they already have. Every person in your company - product, marketing, support, operations - should be able to ask questions and get answers. The knowledge is sitting there in your database. ThreadQL just helps you talk to it.&lt;/p&gt;

&lt;p&gt;Under the Hood&lt;/p&gt;

&lt;p&gt;For Security:&lt;br&gt;
Parameterized SELECT queries only, SSH bastion host support, privacy by design, optional approval-first workflows&lt;/p&gt;
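&lt;p&gt;As a sketch of what "parameterized SELECT queries only" can mean in practice (illustrative only, not ThreadQL's actual implementation):&lt;/p&gt;

```python
import re

# Hypothetical guard: accept a single parameterized SELECT statement and
# reject anything that looks like a write, DDL, or statement chaining.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|grant|truncate|create)\b|;",
    re.IGNORECASE,
)

def is_safe_select(sql):
    stripped = sql.strip()
    if not stripped.lower().startswith("select"):
        return False
    if FORBIDDEN.search(stripped):
        return False
    return True
```

&lt;p&gt;A real deployment would layer such a check behind a read-only database role; a keyword filter alone is not a security boundary.&lt;/p&gt;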

&lt;p&gt;For Reliability:&lt;br&gt;
Multi-tenant support, LLM provider fallback, MySQL &amp;amp; PostgreSQL support, automatic table-scanning schedule&lt;/p&gt;

&lt;p&gt;For Deployment:&lt;br&gt;
Helm chart or docker-compose stack, plus an admin panel&lt;/p&gt;

&lt;p&gt;The Vision&lt;/p&gt;

&lt;p&gt;Here's what I think the future of data access looks like:&lt;br&gt;
You have a question. You ask it in Slack. You get an answer. That's it. &lt;/p&gt;

&lt;p&gt;The database holds your organization's living knowledge. ThreadQL unlocks it — right where your team already works.&lt;/p&gt;

&lt;p&gt;Check it out: &lt;a href="https://github.com/emtay-com/threadql" rel="noopener noreferrer"&gt;https://github.com/emtay-com/threadql&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;P.S. This is a 0.1.0 release - early but functional. I've tested it as thoroughly as I could, but I might have missed edge cases. If you run into issues or have ideas for improvements, your contributions and feedback are more than welcome. And if this clicks, star it on GitHub to help others discover it.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>slack</category>
    </item>
    <item>
      <title>This dev built his own LLM from scratch</title>
      <dc:creator>Michael Tiel</dc:creator>
      <pubDate>Thu, 05 Feb 2026 12:04:31 +0000</pubDate>
      <link>https://forem.com/theirritainer/this-dev-built-his-own-llm-from-scratch-1i62</link>
      <guid>https://forem.com/theirritainer/this-dev-built-his-own-llm-from-scratch-1i62</guid>
      <description>&lt;p&gt;&lt;em&gt;what happened next will surprise you&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the summer of 2023, at the beginning of the AI wave, I spent some lunch breaks trying to get a better grasp of how large language models work by watching the nanoGPT YouTube series by Andrej Karpathy. In this series he trains a tiny model from scratch on a Shakespeare file of about 1 MB. I found it conceptually difficult even though the individual steps made sense, but it was amazing to see that at the end the nano AI was able to spit out Shakespearean word salad (correct spelling and perhaps grammar, but not really coherent).&lt;/p&gt;

&lt;p&gt;Recently I’ve been reading more about the inner workings of LLMs. Although I grasp a lot more now, there’s still a ton that feels alien. Throwing terminology at Grok or ChatGPT helps, but this mode of learning is slow and doesn’t really stick. So I decided to go on the journey of building an LLM myself instead of just reading theory.&lt;/p&gt;

&lt;h4&gt;
  
  
  Constraints
&lt;/h4&gt;

&lt;p&gt;I had constraints, though. I’m currently travelling, so investing in a new machine with a proper GPU made no sense: all training had to happen on my modest 13th-gen Intel i5 CPU. Since CPUs are an order of magnitude slower, the training data had to be small and expectations realistic. The output wouldn’t really transcend word salad, though it might pick up some connotation from a prompt. Plus, I’d need a lot of patience.&lt;/p&gt;

&lt;p&gt;With these constraints in mind I decided to start really tiny. My initial idea was to take the original Dutch national anthem as training material, but that was far too small. I did like the idea of an LLM spitting out old Dutch, so after some searching I found a Middle Dutch miracle play from the early 16th century: Mariken van Nieumeghen. After stripping the PDF of all annotations I was left with a mere 72 KB.&lt;/p&gt;

&lt;p&gt;I created a new directory, set up a Python venv, placed the text in an input folder and started a ChatGPT thread asking it to explain nanoGPT’s prepare.py line by line so I could rebuild it myself. I wasn’t alone in this journey: I paired with ChatGPT and Grok. ChatGPT worked better for long threads, Grok for quick explanations and trivial Python questions. Within two hours I had my own character-based prepare script.&lt;/p&gt;
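&lt;p&gt;In the same spirit as nanoGPT's prepare.py, a character-level prepare step can be sketched like this (a minimal illustration, not my exact script):&lt;/p&gt;

```python
# Character-level "prepare": map every unique character to an integer id
# and split the corpus into train/validation id sequences.

def prepare(text, val_fraction=0.1):
    chars = sorted(set(text))                     # the whole vocabulary
    stoi = {ch: i for i, ch in enumerate(chars)}  # char -> id
    itos = {i: ch for ch, i in stoi.items()}      # id -> char
    ids = [stoi[ch] for ch in text]
    split = int(len(ids) * (1.0 - val_fraction))
    return ids[:split], ids[split:], stoi, itos
```

&lt;p&gt;The train/validation split here is a simple tail split; nanoGPT additionally serializes the arrays to disk as binary files.&lt;/p&gt;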

&lt;h4&gt;
  
  
  The transformer rabbit hole
&lt;/h4&gt;

&lt;p&gt;The next session was all about the transformer—the cornerstone of an LLM—and the training loop. I spent a full day rewriting nanoGPT’s example into a new class, giving variables sensible names, extracting methods, and adding type hints. When ChatGPT suddenly introduced three more classes (CausalSelfAttention, Block, and MLP), I stopped it immediately. What was this? I thought my TinyTransformerLM was enough.&lt;/p&gt;

&lt;p&gt;It turned out that logic previously hidden inside PyTorch’s TransformerEncoderLayer was now made explicit. So we spent the next hours reproducing and rewriting those classes, understanding their role in the transformer.&lt;/p&gt;
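&lt;p&gt;The heart of that attention logic can be sketched in a few lines of NumPy (a single head with no output projection, purely to show the causal-masking idea rather than nanoGPT's actual code):&lt;/p&gt;

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """x: (T, d) token embeddings; Wq/Wk/Wv: (d, d) projection matrices."""
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(d)
    # Causal mask: position t may only attend to positions 0..t.
    mask = np.triu(np.ones((T, T)), k=1).astype(bool)
    scores[mask] = -np.inf
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

&lt;p&gt;The real CausalSelfAttention adds multiple heads, dropout, and an output projection, but the masked softmax above is the essential trick.&lt;/p&gt;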

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8thce597b3aln72irt6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8thce597b3aln72irt6z.png" alt="My first train loop" width="671" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the end of the day I ran python train.py and… it was training. My heart jumped a little. I was training a (tiny) large language model.&lt;/p&gt;

&lt;h4&gt;
  
  
  Overfitting, inference, and the first win
&lt;/h4&gt;

&lt;p&gt;The Mariken model quickly reached the point where training loss could still drop but validation loss could not. A good lesson in overfitting: “how well did I study” versus “how well did I perform on the exam.” After about 1750 iterations, Mariken was done.&lt;/p&gt;
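&lt;p&gt;The "how well did I study" versus "how well did I perform on the exam" intuition maps onto a simple early-stopping rule; a generic sketch, not the actual training loop:&lt;/p&gt;

```python
def should_stop(val_losses, patience=3):
    """Stop when the most recent `patience` validation losses all failed
    to improve on the best validation loss seen before them, regardless
    of whether training loss is still dropping."""
    if patience >= len(val_losses):
        return False
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before
```

&lt;p&gt;Checked after every evaluation pass, this catches the point where the model keeps memorizing the training set but stops generalizing.&lt;/p&gt;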

&lt;p&gt;Next, I wrote a small inference script - inference is using a model to generate output, as opposed to training it. &lt;/p&gt;

&lt;p&gt;Another AHA moment: inference is basically the training loop without backpropagation. I ran it for the first time and got another small heart jump—the model was producing correctly written (but word-salad-like) Middle Dutch, including role indicators from the play. I had trained a tiny Middle Dutch AI.&lt;/p&gt;
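&lt;p&gt;That insight fits in a dozen lines: autoregressive inference is just repeated forward passes. A minimal sketch, with the model abstracted as any callable that maps a token sequence to a probability distribution over the vocabulary:&lt;/p&gt;

```python
import random

def generate(model, prompt_ids, max_new_tokens=50):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        probs = model(ids)    # forward pass only, no gradients
        # Sample the next token id from the predicted distribution.
        next_id = random.choices(range(len(probs)), weights=probs)[0]
        ids.append(next_id)
    return ids
```

&lt;p&gt;A real script would also crop the context to the model's block size and optionally apply temperature or top-k filtering, but the loop itself is this simple.&lt;/p&gt;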

&lt;h4&gt;
  
  
  Scaling up: Caesaero
&lt;/h4&gt;

&lt;p&gt;I still saw quite a lot of repetition when increasing the max token count, so I decided to train another model. 72 KB is insanely tiny—what would happen if I tried 20 times more? I created a projects directory, instructed Claude to turn the scripts into a proper CLI app, and started a new model. This was getting somewhat serious!&lt;/p&gt;

&lt;p&gt;For my next language model, I thought it would be amusing to train it on Latin. Even though I can only somewhat decipher it (my last Latin lessons were 26 years ago), it would be a really cool toy. I gathered 750 KB of texts by Caesar and 750 KB by Cicero and named the project Caesaero. At 1.5 MB, it would almost fit on an ancient 3.5’’ floppy disk.&lt;/p&gt;

&lt;p&gt;Training went well, though much slower. At around 4750 iterations the model was pretrained. Inference produced clean Latin word salad—completely translatable, yet nonsensical. But this made me want to go further: the model had no notion of when to stop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8uy0opy6gwdxt6ovfh1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8uy0opy6gwdxt6ovfh1.png" alt="Guessed Cicero wouldn't mind me training an AI on his writings" width="652" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Time to introduce a stop token and teach the model when to stop through finetuning, by feeding it question-and-answer pairs.&lt;/p&gt;

&lt;p&gt;After patching the vocab with &amp;lt;|eos|&amp;gt; and setting up finetuning on a few hundred JSONL examples, the model consistently emitted stop tokens. I did the same for Mariken. My LLMs had become one-turn chatbots. Incredible!&lt;/p&gt;
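&lt;p&gt;The stop-token idea boils down to two small pieces, sketched here with an illustrative integer eos id standing in for the literal token string:&lt;/p&gt;

```python
EOS_ID = 0  # hypothetical id of the end-of-sequence token

def build_example(question_ids, answer_ids):
    # Every finetuning example ends with the eos id, so the model learns
    # that answers terminate.
    return question_ids + answer_ids + [EOS_ID]

def generate_until_eos(model, prompt_ids, max_new_tokens=100):
    # `model` is a callable returning the next token id (greedy decoding).
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        next_id = model(ids)
        ids.append(next_id)
        if next_id == EOS_ID:
            break
    return ids
```

&lt;p&gt;With the eos id present in training targets, the model genuinely learns to emit it; the inference loop merely honors it.&lt;/p&gt;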

&lt;h3&gt;
  
  
  “Only Latin.” Famous last words.
&lt;/h3&gt;

&lt;p&gt;From here I wanted to take it up another notch: teach the Caesaero model to refuse English. This turned out to be far harder than expected.&lt;/p&gt;

&lt;p&gt;I spent half a day working together with ChatGPT on a policy-training variant: batches with refusal examples mixed with non-examples. Whatever we tried, it always failed in one of two ways. Train too little, and the model happily answered word salad. Train too much, and everything collapsed into “Non respondebo.” Even pure Latin prompts. Total semantic model collapse.&lt;/p&gt;

&lt;p&gt;ChatGPT started contradicting itself at this point, suggesting mutually exclusive fixes in the same thread. We were clearly poking at something fragile. I stopped for the day having learned what model collapse is — but with no idea how to avoid it.&lt;/p&gt;

&lt;p&gt;The next session brought a new suggestion: train only the head of the transformer. The final projection layer. Don’t touch the internal representations, just steer the output. That sounded reasonable. I tried it. Unfortunately, same result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekf02gwceew30twcia2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekf02gwceew30twcia2o.png" alt="And oh did I feel bad" width="544" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Undershoot: no refusal. Overshoot: collapse again. Restore checkpoint. Try again.&lt;/p&gt;

&lt;p&gt;Half a day gone. Nothing usable.&lt;/p&gt;

&lt;p&gt;At this point I stopped brute-forcing and realised I had to go back to first principles.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;LLMs are just predicting the next token.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My tiny model didn’t have much vocabulary: during prepare we had simply turned one character into one token. Character-level tokens meant English and Latin were nearly indistinguishable — both just long sequences of similar letters. On top of that, English has a lot of Romance loanwords. Overtraining refusal didn’t “teach behavior”; it simply taught the model the easiest escape hatch.&lt;/p&gt;

&lt;p&gt;So I rebuilt Caesaero again, this time with byte-pair encoding (BPE). A larger vocabulary: not 128 characters as tokens but 8000 tokens, word fragments instead of letters. I cloned the project and retrained from scratch.&lt;/p&gt;
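&lt;p&gt;Byte-pair encoding itself is a surprisingly small algorithm; a toy version of the merge loop (not my actual tokenizer, which also had to cope with headers and Latin-specific quirks):&lt;/p&gt;

```python
from collections import Counter

def bpe_merges(symbols, num_merges):
    """Repeatedly merge the most frequent adjacent pair of symbols
    into a single new token, recording each merge rule."""
    symbols = list(symbols)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(symbols, symbols[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append((a, b))
        merged = []
        i = 0
        n = len(symbols)
        while i != n:
            if i + 1 != n and symbols[i] == a and symbols[i + 1] == b:
                merged.append(a + b)   # fuse the pair into one token
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols, merges
```

&lt;p&gt;Starting from single characters, repeated merges grow word fragments; the recorded merge rules are what the tokenizer later replays on new text.&lt;/p&gt;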

&lt;p&gt;Twice.&lt;/p&gt;

&lt;p&gt;The first time, my tokenizer greedily merged complete headers and sentences and broke Latin entirely. The second time was better. Pretraining worked. Latin looked Latin again.&lt;/p&gt;

&lt;h4&gt;
  
  
  Evaluation-driven despair
&lt;/h4&gt;

&lt;p&gt;Finetuning, however, became much harder. Loss curves stopped meaning anything. I had versions with beautiful loss numbers that behaved terribly. So I wrote a test suite and switched to evaluation-driven progression: train a bit, test a lot, revert aggressively.&lt;/p&gt;

&lt;p&gt;This took days.&lt;/p&gt;

&lt;p&gt;I’d get something that worked… mostly. Then a subtle drift would appear. Then refusal tokens would leak. Then everything would collapse again.&lt;/p&gt;

&lt;p&gt;Restore. Try again.&lt;/p&gt;

&lt;p&gt;Eventually I reached a reasonably stable point: Latin answers were mostly correct (“Roma in Italia est” ~70%, “Roma in Africa est” ~30%). Not great, but coherent. I was ready to try policy training again.&lt;/p&gt;

&lt;p&gt;And it failed. Again.&lt;/p&gt;

&lt;p&gt;English prompts gave refusals bogged down with Latin. Refusal leaked into everything. ChatGPT — once again — suggested inference hacks: special tags, runtime guards, stripping tokens during generation. I refused. No hacks. At one point even ChatGPT suggested moving on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgptc90g9yt0f0njiepqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgptc90g9yt0f0njiepqe.png" alt="Remember, ChatGPT can make mistakes - telling me either to add hacks or just stop the show.&amp;lt;br&amp;gt;
" width="710" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s when I decided to properly engineer the problem instead of guessing.&lt;/p&gt;

&lt;p&gt;First, I added a CLI command to inspect tokenization. English was still just e-n-g-l-i-s-h. That had to change, but how?&lt;/p&gt;

&lt;p&gt;I explored a couple of bad ideas: adding English tokens post-hoc (useless), brute-forcing duos or trios (wasted vocab). Researched, hypothesized things with Grok. Nothing seemed to be a tangible way going forward.&lt;/p&gt;

&lt;p&gt;Then ChatGPT finally said something that stuck:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To refuse English, the model must first recognize English.&lt;br&gt;
That was a missing piece.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Domain-aware pretraining
&lt;/h4&gt;

&lt;p&gt;I rebuilt the model again, introducing domain-aware pretraining, essentially namespacing the training data. Latin and English were now separate domains from the start. I added a domain head, lowercased everything, increased the vocab size to 12000, and retrained.&lt;/p&gt;

&lt;p&gt;Three times, since I had to clean the data a bit and cap the token length to get usable token sequences. This time I was properly debugging every step of the road. &lt;/p&gt;
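&lt;p&gt;The domain-head idea can be sketched as a shared hidden state feeding two classifiers whose losses are summed; everything below (shapes, names, the plain sum) is an illustrative assumption, not the real model:&lt;/p&gt;

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def two_head_loss(hidden, W_lm, W_dom, target_token, target_domain):
    """hidden: shared trunk output; W_lm projects to the token vocabulary,
    W_dom to the domains (e.g. latin vs english)."""
    lm_probs = softmax(hidden @ W_lm)    # next-token distribution
    dom_probs = softmax(hidden @ W_dom)  # domain distribution
    lm_loss = -np.log(lm_probs[target_token])
    dom_loss = -np.log(dom_probs[target_domain])
    return lm_loss + dom_loss            # joint training objective
```

&lt;p&gt;Because the domain loss flows back through the same trunk, the internal representations are pushed to separate the languages even before any finetuning.&lt;/p&gt;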

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02gys5b5ic5ejgqcj276.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02gys5b5ic5ejgqcj276.png" alt="Engineering step by step" width="576" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time, something felt different. Even before finetuning, the pretrained model behaved differently: Latin prompts yielded Latin soup, English prompts yielded English soup. That alone felt like progress.&lt;/p&gt;

&lt;p&gt;Finetuning worked cleanly. Proper fortune-cookie-style Latin responses to Latin queries. Then came policy training.&lt;/p&gt;

&lt;p&gt;Still not perfect.&lt;/p&gt;

&lt;p&gt;Refusal worked sometimes. Other times, refusal words poisoned Latin answers. The model kept reaching for easy exits like “Latine” or “Non respondebo.” Tiny models love shortcuts.&lt;/p&gt;

&lt;h4&gt;
  
  
  The final grind
&lt;/h4&gt;

&lt;p&gt;After I suggested to ChatGPT that we perhaps add two domain heads in policy mode again, its final suggestion felt almost absurd: split the English policy into two domains—english_respond (which should answer English queries in Latin) and english_refuse—and blacklist refusal tokens from leaking into the Latin and English-respond loss calculations during backpropagation.&lt;/p&gt;

&lt;p&gt;I was skeptical. But I was also out of ideas. And by now I had told ChatGPT about 20 times that we would not introduce inference hacks. This was my only hope.&lt;/p&gt;
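&lt;p&gt;The blacklisting trick amounts to zeroing the loss at target positions that hold refusal tokens, for the domains that should never produce them. A toy sketch with hypothetical token ids:&lt;/p&gt;

```python
import numpy as np

REFUSAL_IDS = {7, 8}  # hypothetical ids of refusal tokens

def masked_nll(log_probs, targets, domain):
    """log_probs: (T, vocab) log-probabilities; targets: (T,) token ids.
    For latin and english_respond examples, positions whose target is a
    refusal token contribute zero loss, so backpropagation never pushes
    the model toward those tokens there."""
    per_token = -log_probs[np.arange(len(targets)), targets]
    if domain != "english_refuse":
        keep = np.array([t not in REFUSAL_IDS for t in targets])
        per_token = per_token * keep
    return per_token.sum()
```

&lt;p&gt;Only the english_refuse domain is allowed to reward refusal tokens, which is exactly what keeps them from leaking into Latin answers.&lt;/p&gt;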

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw78c0lsr3ysdp0ujbyjn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw78c0lsr3ysdp0ujbyjn.jpg" alt="Help me chatgpt, you are my only hope" width="500" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the final session — after instructing Claude to make the proper modifications — we started loading the policy sets again. First up was the v4 variant, with a 1:1 ratio of refuse vs respond on English prompts, plus about 500 basic Latin queries to anchor the language. We ran 3000 iterations (on top of the ~2000 from finetuning) at a learning rate of 0.0003. This finally showed some results, but refusal was still far too weak: English was reduced, but not reliably refused. And we started to see serious leakage.&lt;/p&gt;

&lt;p&gt;So we built a new set: v5, with a 2:1 ratio, explicitly aiming to unlearn the model’s tendency to answer back in English. We started cautiously: 800 iterations at 0.0006. No real movement. Upped it to 1500 iterations—still barely anything. Then we cranked it to 3000, and there it was again: the classic poisoning pattern. English was mostly gone, but now refusal tokens were leaking everywhere.&lt;/p&gt;

&lt;p&gt;But by this point I had already augmented the training loop to autosave checkpoints and added tooling to select and evaluate any saved version. Instead of reverting and trying all over again, I started running the full test suite on every 250-step subcheckpoint, comparing outputs side by side.&lt;/p&gt;

&lt;p&gt;That’s when things got interesting.&lt;/p&gt;

&lt;p&gt;The checkpoint at 2250 iterations was clearly the best so far. Hardly any leakage, stable Latin answers, and over 50% of English prompts were now answered in Latin instead of English. Not refusal yet—but no longer English either. That felt like real progress.&lt;/p&gt;

&lt;p&gt;Based on that, the instruction was to do one final hard pass: a maximum of 250 iterations, but this time with a strong learning rate of 0.001, evaluating every 50 steps. I pinned the autosave mechanism to keep every 50 iterations and ran the training again, watching it like a hawk.&lt;/p&gt;

&lt;p&gt;We evaluated the end result step by step. Finally, at around iteration 2400 (+ 2000 from finetuning), it clicked.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo8tj7yaegxj0ioqpipsl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo8tj7yaegxj0ioqpipsl.png" alt="Best result we had seen so far" width="560" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Latin prompts produced Latin answers. Albeit Latin fortune-cookie answers. English prompts were refused…&lt;/p&gt;

&lt;p&gt;Ok, not always. About 80% of the time. But crucially: the model was no longer trying to predict English tokens. The Latin prose remained intact. Leakage was minimal. No inference hacks; just learned behaviour.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ee316wmza5vvsi25fvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ee316wmza5vvsi25fvg.png" alt="Thats latin to me" width="700" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a tiny CPU-trained model, that was the win. And my veni, vidi, vici. &lt;/p&gt;

&lt;h4&gt;
  
  
  Closing
&lt;/h4&gt;

&lt;p&gt;This was a hell of a journey. I learned more from building this than from any paper: transformers, tokenization, overfitting, collapse, poisoning, policy training, evaluation-driven progress — and the familiar lessons too: build–test–evaluate–repeat, and git as an absolute lifesaver.&lt;/p&gt;

&lt;p&gt;I’m thinking of creating one more tiny LLM, codename Roddenberry, pretrained on 15 MB of Star Trek transcripts — but that one stays private (and in .gitignore) for copyright reasons. Ambitious goals there: multi-turn chat, character conditioning (talk like Picard), and maybe even system prompts. More things to learn and play with, so to say. &lt;/p&gt;

&lt;p&gt;You can find the repo of my experiment &lt;a href="https://github.com/TheIrritainer/mariken" rel="noopener noreferrer"&gt;here&lt;/a&gt;. And if you’re an ML / GenAI specialist who stumbles upon it: feel free to tell me what I could have done differently or how I could improve this toy project. Suggestions are more than welcome!&lt;/p&gt;

</description>
      <category>python</category>
      <category>nanogpt</category>
      <category>ai</category>
      <category>pytorch</category>
    </item>
    <item>
      <title>The definitive case for getting those upgrades scheduled</title>
      <dc:creator>Michael Tiel</dc:creator>
      <pubDate>Fri, 23 May 2025 13:41:16 +0000</pubDate>
      <link>https://forem.com/theirritainer/the-definitive-case-for-getting-those-upgrades-scheduled-2lca</link>
      <guid>https://forem.com/theirritainer/the-definitive-case-for-getting-those-upgrades-scheduled-2lca</guid>
      <description>&lt;p&gt;In our wonderful profession of software engineering, there is one ever-recurring subject: upgrades. Language upgrades, framework upgrades, key package upgrades, you name it. We have all had our share of them and dealt with them one way or another, or not dealt with them at all. It is a never-ending cycle of major version jumps, minor update steps, LTS versions, and deprecations, often scheduled on a yearly or twice-yearly cadence. Often these changes are small and do not directly affect our application code, but sometimes they are more significant or even represent a paradigm shift by their creators, requiring time to integrate. The more we lean on a certain component, the longer it often takes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feavgv8rtmrvl0el1ba7n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feavgv8rtmrvl0el1ba7n.jpg" alt="Image description" width="310" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From a business perspective, the need to reserve time for these upgrades is not always well understood. &lt;/p&gt;

&lt;p&gt;Most likely, upgrades feel comparable to the ‘joyful’ experience of bringing your car to the garage for its major service interval: you’ll be unable to use the vehicle for a day, guaranteed a hefty bill, and if you’re unlucky you’ll get a call during the day that some part you’ve probably never heard of needs replacing straight away, supersizing that already unpleasant bill. And when you collect your vehicle at the end of the day, it doesn’t feel as though anything special has been done.&lt;/p&gt;

&lt;p&gt;On top of that, upgrades inherently bring a form of risk: once an upgrade is done, unexpected things can happen, though this is more of a rarity. I vividly remember the strangest bug we encountered with a password reset after an upgrade, where internal staff could not reproduce our customers’ complaints at all. Only then did I realize that the cache TTL unit had changed from minutes to seconds after the upgrade, leaving our password reset with a TTL of only 60 seconds, a window within which we happened to conduct our smoke tests but most customers did not.&lt;/p&gt;

&lt;p&gt;One could argue that executing these upgrades is a matter of necessity for the security of the business. This holds some truth: a zero-day vulnerability could be discovered and pose a significant risk. However, multiple layers of protection often prevent malicious traffic from reaching the code in the first place. While important, security is simply a confounding factor.&lt;br&gt;
Or one could make the case that executing the upgrades improves the performance of the system, removing bugs and faults that would hinder us if we stuck to the same versions. Though hopefully this should not have affected our business logic in the first place, since it should be adequately covered by tests to begin with. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud7xplb9u33guzm5gmpk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud7xplb9u33guzm5gmpk.jpg" alt="Image description" width="600" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No, the definitive case for upgrades can be explained much more simply: it serves the business need of staying focused on delivering application code. No sane developer would consider opening a TCP/IP connection to port 25 of an SMTP server to manually specify all headers and encode payloads to send an email—we use libraries for that. A lot of them, which stem from the infinite pool of wisdom that is the open source community. Frameworks, libraries, packages: battle-tested pieces of logic that are already out there to use freely.&lt;/p&gt;

&lt;p&gt;Executing these upgrades is part of the total cost of ownership of the codebase. It is simply the price we pay for using code we did not have to write ourselves to move the business goal forward. It is also the price we are willing to pay in advance to plug in future open source components that allow us to keep moving the business goal forward.&lt;/p&gt;

&lt;p&gt;Not doing the upgrades means not investing in the compatibility of the codebase, because eventually we will run into a situation where a new open source package could really speed things up but we’re either forced to write something ourselves completely from scratch or use an inferior or outdated alternative. To summarize it in a one-liner: it is spending quality effort on having a forward-compatible application.&lt;/p&gt;

&lt;p&gt;And not executing upgrades for a prolonged time will eventually leave you in some kind of language-framework-package conundrum where you can’t slowly move up any component at all, because A depends on B, which depends on C, and C cannot be upgraded because that would require fully upgrading X. X had a paradigm shift that requires redoing 43 application code classes to be fully compatible again, which breaks 185 unit tests because we kept working so long on the current version of X, which makes the upgrade backlog feel like an impossible endeavor. &lt;br&gt;
Thus, due to business misunderstanding of their necessity, these upgrades get postponed over and over again, leaving the team stuck with an unhappy developer experience: browsing legacy documentation, creating forks and other workarounds to support abandoned packages, and feeling general discontent about the state of the application, ultimately slowing down velocity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvizh4f4fklocfx0ykdx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyvizh4f4fklocfx0ykdx.jpg" alt="Image description" width="800" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And when you finally commence and tackle all the components, the operation feels daunting, as it involves replacing numerous building blocks simultaneously. You are then forced to either do risky big-bang upgrades or take the time-consuming road of version-by-version, like you should have done in the first place.&lt;/p&gt;

&lt;p&gt;Perhaps those periodic upgrades are a necessity after all. Just as the scheduled service intervals for a car are there to ensure you get a sizable mileage out of your vehicle, periodic upgrades of our application are there to make sure we keep driving the business forward. Not always the most fun part, granted, but one of the best ways to ensure the best possible traction and velocity!&lt;/p&gt;

</description>
      <category>upgrades</category>
      <category>stakeholders</category>
      <category>frameworks</category>
      <category>libraries</category>
    </item>
    <item>
      <title>Writing integration tests with jest and puppeteer</title>
      <dc:creator>Michael Tiel</dc:creator>
      <pubDate>Fri, 16 Aug 2024 12:04:37 +0000</pubDate>
      <link>https://forem.com/theirritainer/writing-integration-tests-with-jest-and-puppeteer-kjh</link>
      <guid>https://forem.com/theirritainer/writing-integration-tests-with-jest-and-puppeteer-kjh</guid>
      <description>&lt;p&gt;For quite some time already I'm working on a personal project every friday which contains a part that involves heavy detection work being executed in the browser. &lt;/p&gt;

&lt;p&gt;Even though my project is generally divided into multiple files with small, well-isolated methods, classes, and other elements—each thoroughly tested with Jest unit tests—I occasionally encounter situations where I need to test parts of the project in a more realistic environment. This typically arises when combining various methods to achieve a specific goal that works well in the browser but isn't easily covered by Jest unit tests running in a Node environment. In these cases, attempting to use mocks, stubs, or specialized test packages can introduce unnecessary complexity, which is something I strive to avoid.&lt;/p&gt;

&lt;p&gt;The challenge with developing these browser-targeted algorithms lies in the difference in context. Jest operates in a Node environment, running directly on your hardware, rather than in a browser context where JavaScript usually executes.&lt;/p&gt;
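&lt;p&gt;The gap is easy to demonstrate with a hypothetical one-liner (not part of the project): any code touching browser globals such as document behaves differently depending on where it runs.&lt;/p&gt;

```typescript
// Hypothetical illustration: Jest's default environment is Node, where
// browser globals like `document` and `HTMLCanvasElement` do not exist.
// The probe goes through globalThis so it is safe in both contexts.
const inBrowser: boolean = typeof (globalThis as any).document !== "undefined";

console.log(inBrowser ? "browser context" : "node context"); // "node context" under plain Node
```

&lt;p&gt;Anything relying on such globals is exactly the kind of code that needs a real browser context to be tested meaningfully.&lt;/p&gt;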

&lt;p&gt;Thankfully, Puppeteer—a programmable headless Chrome—provides a solution. By using Puppeteer alongside Jest, we can create robust integration tests for frontend code intended to run in the browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  The general setup is as follows.
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;We define a folder in our project, e.g. &lt;code&gt;stubs&lt;/code&gt;, which we use to store snippets to test against. You can, for instance, download a webpage as HTML and store the downloaded page for later use.&lt;/li&gt;
&lt;li&gt;We copy the source file - or, when using TypeScript, build it - and store it in the same stubs folder for easy access during the test.&lt;/li&gt;
&lt;li&gt;Before each test we load our stub HTML inside Puppeteer, then add our script as a script tag to the Puppeteer instance so we are able to interact with our code.
&lt;/li&gt;
&lt;li&gt;Then we run our tests: we call evaluate on our Puppeteer page, which invokes our code inside the browser context and returns the result back to the Node context.&lt;/li&gt;
&lt;li&gt;Finally, we remove our script from the stubs folder to prevent committing it later on.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example implementation&lt;/strong&gt;&lt;br&gt;
Let's look at a practical implementation for something I was working on: an algorithm that extracts the dominant colors from a 'screenshot' of the page using canvas. It loops over all pixels and returns a list of colors ranked by their weight. HTML canvas does not exist in the Node context, which makes writing tests for this algorithm kind of a pain train.&lt;/p&gt;
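&lt;p&gt;For context, here is a minimal standalone sketch of that pixel-weighting idea. This is not the project's actual implementation; the function name and exact shape are illustrative, modeled on the PluckedColor results asserted further down. It operates on raw RGBA bytes - the format getImageData().data returns - so the counting logic itself can run anywhere.&lt;/p&gt;

```typescript
// Illustrative sketch (not the project's real mapColorsUsingCanvas):
// count each pixel's colour and rank colours by their share of the total.
interface PluckedColor {
  color: { r: number; g: number; b: number };
  weight: number;
}

function pluckDominantColors(
  data: Uint8ClampedArray, // raw bytes, 4 per pixel: r, g, b, a
  limit = 10
): PluckedColor[] {
  const counts = new Map<string, number>();
  const pixels = data.length / 4;

  for (let i = 0; i < data.length; i += 4) {
    const key = `${data[i]},${data[i + 1]},${data[i + 2]}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }

  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most frequent colour first
    .slice(0, limit)
    .map(([key, count]) => {
      const [r, g, b] = key.split(",").map(Number);
      return { color: { r, g, b }, weight: count / pixels };
    });
}

// Three pixels: two white, one black -> white ranks first with weight 2/3.
const sample = new Uint8ClampedArray([
  255, 255, 255, 255,
  255, 255, 255, 255,
  0, 0, 0, 255,
]);
console.log(pluckDominantColors(sample));
```

&lt;p&gt;The real algorithm first draws the page onto a canvas to obtain those bytes, and that is precisely the part that needs a browser.&lt;/p&gt;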

&lt;p&gt;I've got my stubs folder up and running, so let's proceed to step 2: I am using TypeScript, so I need to compile the target I wish to test first. I am injecting it onto a global object, IntegrationTests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="cm"&gt;/**
 * @jest-environment puppeteer
 */&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jest-puppeteer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Page&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;puppeteer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;puppeteerSettings&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../../Constants/Tests&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PluckedColor&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./mapColorsUsingCanvas&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;esbuild&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;esbuild&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;esbuild&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;entryPoints&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/src/Algorithms/Colors/mapColorsUsingCanvas.ts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;bundle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;outfile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/stubs/mapColorsUsingCanvas.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;es2015&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;iife&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;globalName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;IntegrationTests&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is setting up the tests by starting Puppeteer, loading the stub page, and injecting my script. The debug flag is for local testing: if you turn it on, you get the console.log output from inside the browser context, which is a bit of a maze to plough through.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;debug&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mapColorsUsingCanvas Integration test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="na"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Page&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;beforeAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;launch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;puppeteerSettings&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// THIS spams your console&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;console&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PAGE LOG:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;()));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setViewport&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1366&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;768&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`file:///&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;/stubs/text_strategy.html`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;waitUntil&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;domcontentloaded&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addScriptTag&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/stubs/mapColorsUsingCanvas.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;20000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm also defining my final cleanup step below; it's basically a simple rm of the JavaScript file created in the esbuild step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;  &lt;span class="nf"&gt;afterAll&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rmSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/stubs/mapColorsUsingCanvas.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, let's look at the integration test itself. We run a page evaluate - the method body operates inside Puppeteer in the browser context - which allows us to invoke the injected script we want to test. The result, however, is returned back to Jest running in the Node context, which then allows us to run the necessary assertions! Note that the return value has to be serializable to make it across the context boundary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
  &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;runs mapColorsUsingCanvas on the stub and returns the dominant colors&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PluckedColor&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// eslint-disable-next-line @typescript-eslint/ban-ts-comment&lt;/span&gt;
      &lt;span class="c1"&gt;//@ts-ignore-next-line&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;IntegrationTests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapColorsUsingCanvas&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveLength&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;r&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;g&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;255&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.9839&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;r&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;g&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.0057&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using this approach we can easily add this type of integration test to our project. It has little to do with TDD, of course; these tests act more as an insurance policy that future changes won't break real-life behaviour, and they have already proven really useful on several occasions.&lt;/p&gt;

&lt;p&gt;For reference, the whole file is included below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="cm"&gt;/**
 * @jest-environment puppeteer
 */&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jest-puppeteer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Page&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;puppeteer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;puppeteerSettings&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../../Constants/Tests&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PluckedColor&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./mapColorsUsingCanvas&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;esbuild&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;esbuild&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;esbuild&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;entryPoints&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/src/Algorithms/Colors/mapColorsUsingCanvas.ts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;bundle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;outfile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/stubs/mapColorsUsingCanvas.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;es2015&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;iife&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;globalName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;IntegrationTests&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;debug&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mapColorsUsingCanvas Integration test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="na"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Page&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;beforeAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;puppeteer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;launch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;puppeteerSettings&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;newPage&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// THIS spams your console&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;console&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PAGE LOG:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;()));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setViewport&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1366&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;768&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;goto&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`file:///&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;/stubs/text_strategy.html`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;waitUntil&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;domcontentloaded&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addScriptTag&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/stubs/mapColorsUsingCanvas.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;20000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nf"&gt;afterAll&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rmSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cwd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/stubs/mapColorsUsingCanvas.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;runs mapColorsUsingCanvas on the stub and returns the dominant colors&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PluckedColor&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// eslint-disable-next-line @typescript-eslint/ban-ts-comment&lt;/span&gt;
      &lt;span class="c1"&gt;//@ts-ignore-next-line&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;IntegrationTests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapColorsUsingCanvas&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveLength&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;r&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;g&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;255&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.9839&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;r&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;g&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;weight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.0057&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>jest</category>
      <category>typescript</category>
      <category>puppeteer</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>My key learnings of 20 years of software engineering</title>
      <dc:creator>Michael Tiel</dc:creator>
      <pubDate>Mon, 03 Jul 2023 13:27:53 +0000</pubDate>
      <link>https://forem.com/theirritainer/my-key-learnings-of-20-years-of-software-engineering-3g2d</link>
      <guid>https://forem.com/theirritainer/my-key-learnings-of-20-years-of-software-engineering-3g2d</guid>
      <description>&lt;p&gt;Oh wow, today is the day. Twenty years ago, on the first Monday of July, I entered the professional world of software engineering as a 19-year-old uni dropout. My journey began in the tight quarters of a fledgling web agency in Amsterdam, staring at a modest 14-inch CRT screen. I had been tinkering with QBasic all through high school and had done some spare web dev jobs for a few guilders as a young adolescent, but from that moment it grew into something more: my profession.&lt;/p&gt;

&lt;p&gt;Today is not my birthday, but I'm marking an important milestone: I've been a software engineer for longer than I haven't. That feels like a celebration all its own.&lt;br&gt;
This journey has been filled with lessons. Many mistakes were made. Stress threatened my passion. But I've learned, grown, and found joy in this craft.&lt;/p&gt;

&lt;p&gt;Here are a few lessons I've come to understand over these years:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mental energy&lt;/strong&gt;&lt;br&gt;
1: Mental energy is the most precious resource. It powers our ideas, fuels our creativity, and drives our resilience. Nurturing it is paramount.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unique privilege&lt;/strong&gt;&lt;br&gt;
2: Software engineering is a magical craft. We create solutions out of nothing but thought and code, transforming digital ether into concrete problem-solving marvels. It’s a unique privilege.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning never stops&lt;/strong&gt;&lt;br&gt;
3: The learning never ceases. There’s no plateau of knowledge; every day offers an invitation to understand something new, to refine our skills, and to expand our horizons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It'll always take longer than expected&lt;/strong&gt;&lt;br&gt;
4: The first 80% of the project takes 80% of the time; the last 20% of the project takes the other 80%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Teamwork = dreamwork&lt;/strong&gt;&lt;br&gt;
5: Teamwork is the foundation of success. The unity in brainstorming, coding, debugging, and ultimately, creating solutions is a thrill. It’s a reminder that together we make the impossible possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust your tools&lt;/strong&gt;&lt;br&gt;
6: Trust your tools. PHP (with Laravel and Symfony as the big guns) has been my trusted ally, proving its efficiency to me time and again against more inflated tech stacks. Don't let trends overshadow the tools that truly empower you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Imposter syndrome isn't that bad&lt;/strong&gt;&lt;br&gt;
7: And yes, the whisper of imposter syndrome is a reality. It can keep us grounded, spark our curiosity, and push us to strive for better, reminding us that there’s always room to grow.&lt;/p&gt;

&lt;p&gt;I'm thrilled about what lies ahead. Celebrating this 20-year milestone, I look forward to 20+ more years of crafting solutions, chasing down bugs like a bloodhound for my clients, and delving deeper into this craft. The journey continues, and the future excites me. Here's to the next 20 years! 🚀👨‍💻🥳&lt;/p&gt;

</description>
      <category>20years</category>
      <category>php</category>
      <category>learning</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
