<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Liudas</title>
    <description>The latest articles on Forem by Liudas (@liudasjan).</description>
    <link>https://forem.com/liudasjan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3690294%2F310fa4b4-8338-49ee-9a08-c9cd4beff0e8.jpg</url>
      <title>Forem: Liudas</title>
      <link>https://forem.com/liudasjan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/liudasjan"/>
    <language>en</language>
    <item>
      <title>ReadyAPI vs Rentgen: One Builds Enterprise Confidence. The Other Checks If the API Falls Apart Before That.</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Wed, 06 May 2026 18:51:14 +0000</pubDate>
      <link>https://forem.com/liudasjan/readyapi-vs-rentgen-one-builds-enterprise-confidence-the-other-checks-if-the-api-falls-apart-2fo6</link>
      <guid>https://forem.com/liudasjan/readyapi-vs-rentgen-one-builds-enterprise-confidence-the-other-checks-if-the-api-falls-apart-2fo6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw8lquk4sxdte2djllqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw8lquk4sxdte2djllqb.png" alt=" " width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There’s a funny moment in API development that almost nobody talks about.&lt;/p&gt;

&lt;p&gt;A developer finishes an endpoint, sends one request, gets a beautiful 200 OK back, and suddenly the room behaves like the API survived Normandy.&lt;/p&gt;

&lt;p&gt;Then somebody opens ReadyAPI, starts building proper test suites, adds assertions, environments, CI integration, performance checks, maybe even security testing. Serious enterprise stuff. And to be fair, ReadyAPI absolutely deserves its reputation. SoapUI evolved into one of the biggest API testing platforms for a reason. Banks, telecoms, healthcare systems — this is the territory where structured testing matters and auditors enjoy PowerPoint presentations about “quality gates”.&lt;/p&gt;

&lt;p&gt;But there’s a problem hiding before all of that.&lt;/p&gt;

&lt;p&gt;What if the endpoint was already fragile before the first test suite even existed?&lt;/p&gt;

&lt;p&gt;That’s the gap Rentgen focuses on.&lt;/p&gt;

&lt;p&gt;Not enterprise automation. Not giant regression packs. Just the uncomfortable two-minute reality check right after “it works”.&lt;/p&gt;

&lt;p&gt;Take one working cURL request and start making it slightly annoying. Remove fields. Break types. Add whitespace. Push boundaries. Send malformed payloads. Suddenly APIs that looked very confident five minutes ago begin returning strange status codes, inconsistent validation, or the occasional glorious 500 error.&lt;/p&gt;
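
&lt;p&gt;If you want to see the idea in plain code, a minimal sketch looks something like this (the endpoint and payload fields are invented for illustration; Rentgen generates these variations for you, the snippet only shows the principle):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: mutate one known-good payload and watch the status codes.
# The URL and payload fields below are hypothetical.
import copy
import requests

URL = "https://api.example.com/v1/users"
BASE = {"email": "user@example.com", "age": 30, "plan": "pro"}

def variants(payload):
    for field in payload:                     # remove fields
        broken = copy.deepcopy(payload)
        del broken[field]
        yield f"missing {field}", broken
    broken = copy.deepcopy(payload)
    broken["age"] = "thirty"                  # break types
    yield "wrong type for age", broken
    broken = copy.deepcopy(payload)
    broken["email"] = "  user@example.com  "  # add whitespace
    yield "whitespace padding", broken
    broken = copy.deepcopy(payload)
    broken["plan"] = "x" * 10_000             # push boundaries
    yield "oversized value", broken

for label, body in variants(BASE):
    resp = requests.post(URL, json=body, timeout=10)
    print(label, resp.status_code)
&lt;/code&gt;&lt;/pre&gt;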

&lt;p&gt;And honestly, that’s useful.&lt;/p&gt;

&lt;p&gt;Because building beautiful automation around assumptions is still building automation around assumptions.&lt;/p&gt;

&lt;p&gt;ReadyAPI helps teams define and enforce correctness over time. Rentgen helps expose weird backend behavior before those definitions even exist. Different stage. Different responsibility. No reason they can’t work together.&lt;/p&gt;

&lt;p&gt;Full article:&lt;br&gt;
&lt;a href="https://rentgen.io/api-stories/ReadyAPI-and-Rentgen-enterprise-test-suites-and-step-before-them.html" rel="noopener noreferrer"&gt;https://rentgen.io/api-stories/ReadyAPI-and-Rentgen-enterprise-test-suites-and-step-before-them.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rentgen</category>
      <category>readyapi</category>
      <category>api</category>
      <category>restapi</category>
    </item>
    <item>
      <title>Thunder Client vs Rentgen is one of those comparisons that sounds logical until you actually look at how people use them.</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Tue, 05 May 2026 16:56:35 +0000</pubDate>
      <link>https://forem.com/liudasjan/thunder-client-vs-rentgen-is-one-of-those-comparisons-that-sounds-logical-until-you-actually-look-5631</link>
      <guid>https://forem.com/liudasjan/thunder-client-vs-rentgen-is-one-of-those-comparisons-that-sounds-logical-until-you-actually-look-5631</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3u2pob1kg4a2zt16eudz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3u2pob1kg4a2zt16eudz.png" alt=" " width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both deal with API requests, both sit somewhere in the development flow, and both are useful. But putting them in the same box is a bit like comparing a screwdriver with a crash test. Yes, both are involved in building things. No, they are not doing the same job.&lt;/p&gt;

&lt;p&gt;Thunder Client lives exactly where developers spend most of their time — inside the editor. You write some code, tweak an endpoint, open Thunder Client, send a request, get a response, fix something, repeat. It is fast, lightweight, and does not force you to jump between tools like a caffeinated squirrel. For day-to-day development, that convenience matters more than people like to admit.&lt;/p&gt;

&lt;p&gt;But here is the part everyone quietly skips. You send a request, it returns 200, JSON looks fine, and someone inevitably says “works.” Which is technically correct, in the same way that starting a car proves it can move forward. It does not prove the brakes work when you actually need them.&lt;/p&gt;

&lt;p&gt;Manual API clients, including Thunder Client, do exactly what you tell them to do. Nothing more, nothing less. If you test a valid request, you get a valid response. If you think about edge cases, you can test them too. But that thinking still depends on you. And when deadlines are tight, nobody is sitting there inventing fifty creative ways to break their own endpoint just for fun.&lt;/p&gt;

&lt;p&gt;That is where the story changes. Instead of asking “does this request work,” the more interesting question is “what happens when this request stops being polite.” Missing fields, wrong types, oversized payloads, malformed JSON, broken tokens, weird casing, extra data nobody asked for. The kind of input that always shows up in production, usually at the worst possible moment.&lt;/p&gt;

&lt;p&gt;That is the gap Rentgen is built for. Not to replace the editor workflow, not to compete with API clients, but to take one working request and push it a bit harder. You copy the cURL, drop it in, and instead of manually crafting edge cases, you get a whole set of variations automatically. Some APIs handle it well. Some respond like they have just seen a ghost. Both outcomes are useful.&lt;/p&gt;

&lt;p&gt;The difference is not about features or UI. It is about responsibility. Thunder Client helps you build and verify your API while you are working. Rentgen helps you challenge the assumptions behind that work before you move on. One keeps things smooth. The other makes things slightly uncomfortable. And if you have spent enough time debugging production issues, you already know which one tends to reveal the interesting problems.&lt;/p&gt;

&lt;p&gt;The workflow is actually simple when you stop trying to compare them. Use Thunder Client while building. Keep everything close to your code, move fast, get feedback, fix things. Once the request works, take that same request and run it through Rentgen. Now you are no longer testing what you expected. You are testing what you forgot.&lt;/p&gt;

&lt;p&gt;That shift matters more than most teams realize. Because a lot of automation is built on top of assumptions that were never properly challenged. You end up with clean test suites that quietly ignore the messy cases. And those messy cases are exactly where bugs like to hide.&lt;/p&gt;

&lt;p&gt;So no, this is not Thunder Client vs Rentgen in the usual sense. One is an API client. The other is a reality check. One helps you build. The other helps you find out what you missed before someone else does.&lt;/p&gt;

&lt;p&gt;Full article: &lt;a href="https://rentgen.io/api-stories/Thunder-Client-and-Rentgen-testing-inside-editor-reality-checking-outside-happy-path.html" rel="noopener noreferrer"&gt;https://rentgen.io/api-stories/Thunder-Client-and-Rentgen-testing-inside-editor-reality-checking-outside-happy-path.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rentgen</category>
      <category>api</category>
      <category>thunderclient</category>
      <category>restapi</category>
    </item>
    <item>
      <title>Without cURL, Rentgen Doesn’t Exist — But cURL Alone Isn’t Enough</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Mon, 04 May 2026 14:48:16 +0000</pubDate>
      <link>https://forem.com/liudasjan/without-curl-rentgen-doesnt-exist-but-curl-alone-isnt-enough-59ki</link>
      <guid>https://forem.com/liudasjan/without-curl-rentgen-doesnt-exist-but-curl-alone-isnt-enough-59ki</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp7fc0ey6e7tnjff2rix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp7fc0ey6e7tnjff2rix.png" alt=" " width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve seen this pattern too many times to ignore. You write a clean cURL request, hit enter, get a nice 200 back, JSON looks good, and suddenly everyone relaxes like the job is done. cURL is brilliant at that moment. It is simple, direct, brutally honest. You send exactly what you wrote and the server responds. No magic, no abstraction, just raw truth. And that’s exactly why developers trust it. If something works in cURL, you know what was sent. But here’s the uncomfortable part: it only proves that one exact request worked once. Nothing more.&lt;/p&gt;

&lt;p&gt;Real APIs don’t live in that perfect scenario. Fields go missing, values change, types break, tokens expire, payloads get messy, and clients behave in ways nobody planned for. And suddenly that clean request doesn’t mean much anymore. You could write dozens of variations by hand in cURL, but let’s be honest, nobody wants to sit there crafting broken payloads all day just to see if something explodes.&lt;/p&gt;

&lt;p&gt;This is where I ended up building something on top of that idea. Instead of replacing cURL, just extend it. Take one working request and start asking a better question: what happens when this request stops being perfect? That’s the gap. That moment between “it works” and “it actually survives real input.” Turns out that’s where most of the interesting problems live. Inconsistent validation, weird status codes, backend 500s that should never exist, responses that make debugging unnecessarily painful.&lt;/p&gt;

&lt;p&gt;The workflow is simple and it makes more sense than pretending one tool can do everything. Use cURL to get the request right. Then take that exact request and push it a bit. Remove fields, break types, stretch boundaries, mess with payloads. Now you’re not testing one scenario anymore, you’re seeing how the API behaves in reality. And that’s a very different picture.&lt;/p&gt;

&lt;p&gt;A lot of teams jump straight from a working request into automation. They build clean test suites around clean scenarios and feel safe. But automation built on assumptions is just a faster way to miss things. If you don’t explore the messy cases first, your tests will simply never cover them.&lt;/p&gt;

&lt;p&gt;So no, this is not cURL vs anything. cURL stays exactly where it belongs — as the ground truth. But one request proving it works is only half the story. The other half is figuring out what breaks. And that’s where things actually get interesting.&lt;/p&gt;

&lt;p&gt;Full version: &lt;a href="https://rentgen.io/api-stories/cURL-and-Rentgen-one-proves-it-works-the-other-proves-what-breaks.html" rel="noopener noreferrer"&gt;https://rentgen.io/api-stories/cURL-and-Rentgen-one-proves-it-works-the-other-proves-what-breaks.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>api</category>
      <category>cli</category>
      <category>testing</category>
      <category>tooling</category>
    </item>
    <item>
      <title>HTTPie vs Rentgen is one of those comparisons that sounds sensible right up until the moment you realise they’re not even playing the same sport.</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Sun, 03 May 2026 06:41:38 +0000</pubDate>
      <link>https://forem.com/liudasjan/httpie-vs-rentgen-is-one-of-those-comparisons-that-sounds-sensible-right-up-until-the-moment-you-4f60</link>
      <guid>https://forem.com/liudasjan/httpie-vs-rentgen-is-one-of-those-comparisons-that-sounds-sensible-right-up-until-the-moment-you-4f60</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui4snf96gb6v5n7p8ddx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui4snf96gb6v5n7p8ddx.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yes, both talk to APIs, but so does half the internet, and we don’t compare a terminal tool with a stress test for the same reason we don’t compare a steering wheel with a crash test. HTTPie is there to make requests readable, clean, almost pleasant. You type something that looks like English instead of a cursed cURL command, hit enter, and the API politely responds. It’s fast, it’s human, and it’s exactly what you want when you’re building or debugging something without losing your will to live.&lt;/p&gt;

&lt;p&gt;And then, of course, comes the dangerous bit. The request works. The response looks fine. Everyone leans back and says, “good enough.” This is where reality quietly clears its throat. Because that one perfect request doesn’t prove your API is solid, it proves that one carefully constructed scenario didn’t fall apart. Everything else is still a mystery. Missing fields, wrong data types, strange payloads, expired tokens, random whitespace, and all the other things real systems throw at your backend are still waiting their turn.&lt;/p&gt;

&lt;p&gt;HTTPie doesn’t pretend to solve that, and it shouldn’t. It does its job properly. It lets you talk to the API like a normal human being instead of fighting with syntax. But it will only ever send what you ask it to send. If you don’t think about edge cases, they don’t exist. If you don’t test something, it remains untested. Simple, honest, slightly terrifying.&lt;/p&gt;

&lt;p&gt;This is where Rentgen wanders in, not as a replacement, but as the slightly annoying colleague who asks uncomfortable questions. You take that same working request, drop it in, and suddenly the API is dealing with all the things you didn’t bother to try. Fields disappear, values change, payloads get ugly, and the system is forced to respond to something closer to reality. Sometimes it handles it gracefully. Sometimes it produces errors that make you wonder how this ever reached production. Nothing dramatic, just the usual collection of “we probably should have tested that.”&lt;/p&gt;

&lt;p&gt;The difference isn’t power, it’s timing. HTTPie lives in the moment where you build and understand the request. Rentgen lives right after, when you ask whether that understanding was actually complete. One gives you clarity, the other gives you doubt, and you need both if you don’t enjoy debugging things at three in the morning.&lt;/p&gt;

&lt;p&gt;Used properly, they fit together perfectly. You build the request with HTTPie, make sure it works, then you let Rentgen make it slightly uncomfortable. Fix what breaks, learn what matters, and only then turn that knowledge into proper tests. That way you’re not automating guesses, you’re automating reality.&lt;/p&gt;

&lt;p&gt;HTTPie helps you speak to your API. Rentgen helps you find out what your API says when things stop being polite. Same request, different purpose, no drama required.&lt;/p&gt;

&lt;p&gt;Full version here: &lt;a href="https://rentgen.io/api-stories/HTTPie-and-Rentgen-first-talk-to-the-API-then-make-it-uncomfortable.html" rel="noopener noreferrer"&gt;https://rentgen.io/api-stories/HTTPie-and-Rentgen-first-talk-to-the-API-then-make-it-uncomfortable.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rentgen</category>
      <category>api</category>
      <category>httpie</category>
      <category>rest</category>
    </item>
    <item>
      <title>Insomnia vs Rentgen — powerful API platform vs raw API reality</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Fri, 01 May 2026 18:41:38 +0000</pubDate>
      <link>https://forem.com/liudasjan/insomnia-vs-rentgen-powerful-api-platform-vs-raw-api-reality-230o</link>
      <guid>https://forem.com/liudasjan/insomnia-vs-rentgen-powerful-api-platform-vs-raw-api-reality-230o</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftan38l3fudn36bz94rdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftan38l3fudn36bz94rdd.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Insomnia vs Rentgen is one of those comparisons that sounds logical until you actually think about it for more than five seconds. Yes, both deal with APIs, but so do electricity and your toaster, and nobody is comparing those. Insomnia is a proper API platform. You build requests, manage collections, write assertions, run tests, sync with Git, collaborate with teams, and generally behave like a responsible adult. It’s structured, repeatable, and absolutely necessary once your API stops being a toy and starts becoming a system.&lt;/p&gt;

&lt;p&gt;Rentgen doesn’t even try to compete with that. It shows up earlier, at that suspiciously quiet moment when the first request returns 200 and everyone suddenly decides the job is done. That’s where things usually go wrong. Because one clean request doesn’t prove the API works, it proves that one carefully crafted scenario didn’t explode.&lt;/p&gt;

&lt;p&gt;Insomnia works with what you define. If you don’t test missing fields, they don’t exist. If you don’t try invalid data types, wrong casing, broken payloads or boundary values, the system will happily pretend those problems don’t exist… right until production proves otherwise. And production is very good at proving people wrong.&lt;/p&gt;

&lt;p&gt;Rentgen flips that around. Instead of asking you to think of every edge case, it assumes you didn’t. You take one real cURL request, drop it in, and suddenly the API is dealing with missing fields, garbage input, weird payloads, and all the things real systems eventually send whether you like it or not. No ceremony, no scripts, just a fast way to see how fragile the endpoint actually is.&lt;/p&gt;

&lt;p&gt;The real difference is timing. Insomnia lives in the main workflow. It’s where you build, test, debug, and maintain your API over time. Rentgen lives before that. It’s the uncomfortable reality check before you start writing beautiful automation around assumptions that were never challenged.&lt;/p&gt;

&lt;p&gt;And that matters, because a lot of teams go straight from “it works” to “let’s automate it” and end up with test suites that look impressive but quietly ignore half the problem space. Automation based on assumptions is just a very efficient way to be wrong.&lt;/p&gt;

&lt;p&gt;Used properly, they sit next to each other perfectly. Build and understand the request in Insomnia, then take that exact request, run it through Rentgen, fix what breaks, and only then turn it into proper tests. That way you’re automating reality, not wishful thinking.&lt;/p&gt;

&lt;p&gt;Insomnia helps you build and manage API systems. Rentgen helps you find out what those systems don’t handle yet. Same request, different phase, completely different job.&lt;/p&gt;

&lt;p&gt;Full breakdown here: &lt;a href="https://rentgen.io/api-stories/Insomnia-vs-Rentgen-powerful-API-platform-vs-raw-API-reality.html" rel="noopener noreferrer"&gt;https://rentgen.io/api-stories/Insomnia-vs-Rentgen-powerful-API-platform-vs-raw-API-reality.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>api</category>
      <category>backend</category>
      <category>testing</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Hoppscotch vs Rentgen? Not really. One sends requests. The other breaks them.</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Thu, 30 Apr 2026 15:19:34 +0000</pubDate>
      <link>https://forem.com/liudasjan/hoppscotch-vs-rentgen-not-really-one-sends-requests-the-other-breaks-them-164</link>
      <guid>https://forem.com/liudasjan/hoppscotch-vs-rentgen-not-really-one-sends-requests-the-other-breaks-them-164</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv48a0lamvho3jfhyzwl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv48a0lamvho3jfhyzwl.png" alt=" " width="800" height="545"&gt;&lt;/a&gt;&lt;br&gt;
There’s a funny moment in every API project.&lt;/p&gt;

&lt;p&gt;You send a request.&lt;br&gt;&lt;br&gt;
It returns 200.&lt;br&gt;&lt;br&gt;
JSON looks clean.&lt;br&gt;&lt;br&gt;
Everyone nods.&lt;/p&gt;

&lt;p&gt;“Works.”&lt;/p&gt;

&lt;p&gt;And that’s exactly where things usually start going wrong.&lt;/p&gt;




&lt;h2&gt;The clean illusion&lt;/h2&gt;

&lt;p&gt;Tools like Hoppscotch are brilliant at what they do.&lt;/p&gt;

&lt;p&gt;Fast. Lightweight. Open-source.&lt;br&gt;&lt;br&gt;
You send requests, tweak headers, manage collections, debug responses — all the good stuff.&lt;/p&gt;

&lt;p&gt;It’s the modern version of “does the API respond?”&lt;/p&gt;

&lt;p&gt;And that matters.&lt;/p&gt;

&lt;p&gt;Because without a tool like that, you’re basically copying cURL commands between tabs like it’s 2008.&lt;/p&gt;




&lt;h2&gt;But here’s the uncomfortable part&lt;/h2&gt;

&lt;p&gt;That one successful request?&lt;/p&gt;

&lt;p&gt;It proves exactly one thing:&lt;/p&gt;

&lt;p&gt;👉 &lt;em&gt;That exact request worked once.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That’s it.&lt;/p&gt;

&lt;p&gt;It says nothing about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;missing fields
&lt;/li&gt;
&lt;li&gt;wrong data types
&lt;/li&gt;
&lt;li&gt;extra whitespace
&lt;/li&gt;
&lt;li&gt;invalid enums
&lt;/li&gt;
&lt;li&gt;malformed payloads
&lt;/li&gt;
&lt;li&gt;broken auth
&lt;/li&gt;
&lt;li&gt;or the classic: “why is this returning 500?”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s not a tooling problem.&lt;/p&gt;

&lt;p&gt;That’s just how manual testing works.&lt;/p&gt;




&lt;h2&gt;Where Rentgen shows up&lt;/h2&gt;

&lt;p&gt;Rentgen starts right after that “it works” moment.&lt;/p&gt;

&lt;p&gt;Not to replace Hoppscotch.&lt;br&gt;&lt;br&gt;
To question it.&lt;/p&gt;

&lt;p&gt;You take the same request.&lt;br&gt;&lt;br&gt;
Paste it in.&lt;br&gt;&lt;br&gt;
And suddenly you’re not testing one scenario anymore.&lt;/p&gt;

&lt;p&gt;You’re testing dozens.&lt;/p&gt;

&lt;p&gt;Fields disappear.&lt;br&gt;&lt;br&gt;
Types change.&lt;br&gt;&lt;br&gt;
Payloads break.&lt;br&gt;&lt;br&gt;
Headers go weird.  &lt;/p&gt;

&lt;p&gt;And now the API has to deal with reality.&lt;/p&gt;




&lt;h2&gt;This is where things get interesting&lt;/h2&gt;

&lt;p&gt;Because APIs behave very differently when input stops being polite.&lt;/p&gt;

&lt;p&gt;Some handle it well. Clean 4xx responses, consistent validation.&lt;/p&gt;

&lt;p&gt;Others… panic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;500 errors where there shouldn’t be any
&lt;/li&gt;
&lt;li&gt;inconsistent status codes
&lt;/li&gt;
&lt;li&gt;HTML responses from JSON APIs (yes, still happens)
&lt;/li&gt;
&lt;li&gt;validation that works sometimes and then just… gives up
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing exotic. Just the boring stuff nobody tests properly.&lt;/p&gt;




&lt;h2&gt;The real difference&lt;/h2&gt;

&lt;p&gt;Hoppscotch gives you control.&lt;/p&gt;

&lt;p&gt;Rentgen gives you pressure.&lt;/p&gt;

&lt;p&gt;One helps you ask a question.&lt;br&gt;&lt;br&gt;
The other slightly messes up the question and watches what happens.&lt;/p&gt;




&lt;h2&gt;A workflow that actually makes sense&lt;/h2&gt;

&lt;p&gt;Use Hoppscotch (or any API client) to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build the request
&lt;/li&gt;
&lt;li&gt;understand the endpoint
&lt;/li&gt;
&lt;li&gt;confirm the happy path
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then:&lt;/p&gt;

&lt;p&gt;Take that exact request → run it through Rentgen.&lt;/p&gt;

&lt;p&gt;Now you're asking a better question:&lt;/p&gt;

&lt;p&gt;👉 &lt;em&gt;What happens when this request stops being perfect?&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;Why this matters&lt;/h2&gt;

&lt;p&gt;A lot of teams go straight from “200 OK” to automation.&lt;/p&gt;

&lt;p&gt;They build test suites around clean scenarios…&lt;br&gt;&lt;br&gt;
and accidentally automate assumptions.&lt;/p&gt;

&lt;p&gt;That’s how you end up with beautiful CI pipelines&lt;br&gt;&lt;br&gt;
testing things that were never really explored.&lt;/p&gt;




&lt;h2&gt;The simple version&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Hoppscotch → work with APIs
&lt;/li&gt;
&lt;li&gt;Rentgen → challenge APIs
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same request.&lt;br&gt;&lt;br&gt;
Different job.&lt;/p&gt;




&lt;p&gt;If you want the full breakdown (without me trying to be clever), I wrote a detailed version here:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://rentgen.io/api-stories/Hoppscotch-vs-Rentgen-API-client-and-API-reality-check-are-not-the-same-job.html" rel="noopener noreferrer"&gt;https://rentgen.io/api-stories/Hoppscotch-vs-Rentgen-API-client-and-API-reality-check-are-not-the-same-job.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rentgen</category>
      <category>api</category>
      <category>rest</category>
      <category>hoppscotch</category>
    </item>
    <item>
      <title>Bruno vs Rentgen — same cURL, very different consequences</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Wed, 29 Apr 2026 18:04:05 +0000</pubDate>
      <link>https://forem.com/liudasjan/bruno-vs-rentgen-same-curl-very-different-consequences-lg2</link>
      <guid>https://forem.com/liudasjan/bruno-vs-rentgen-same-curl-very-different-consequences-lg2</guid>
      <description>&lt;p&gt;There is a moment every developer knows.&lt;/p&gt;

&lt;p&gt;You take a cURL. Maybe from Swagger. Maybe from logs. Maybe you copied it from the browser like a civilized person. You paste it into Bruno, press “Send”, and it works.&lt;/p&gt;

&lt;p&gt;200. Beautiful response.&lt;/p&gt;

&lt;p&gt;At this point, someone leans back in their chair and says: “Looks good. Tested.”&lt;/p&gt;

&lt;p&gt;And that, right there, is how bugs quietly book their flight to production.&lt;/p&gt;

&lt;p&gt;Bruno is not the problem here. Bruno is actually very good. It does exactly what you expect: you send requests, organize them, keep things in Git, build collections, run flows. It’s clean, fast, doesn’t try to be smarter than you, which is already better than half the tools out there.&lt;/p&gt;

&lt;p&gt;But here’s the uncomfortable bit.&lt;/p&gt;

&lt;p&gt;Bruno does exactly what you tell it to do.&lt;/p&gt;

&lt;p&gt;No more.&lt;/p&gt;

&lt;p&gt;So if you didn’t think about removing a required field, it won’t happen. If you didn’t try sending a string instead of a number, nobody will. If you didn’t check what happens when the payload is slightly broken, the API will happily keep that secret until production introduces it to you at full speed.&lt;/p&gt;

&lt;p&gt;This is where Rentgen walks in like an annoying inspector who doesn’t trust your “looks fine” attitude.&lt;/p&gt;

&lt;p&gt;Because Rentgen starts where Bruno stops.&lt;/p&gt;

&lt;p&gt;Not when the request fails. That would be too easy. It starts when the request works — which is precisely when most teams stop thinking. Rentgen takes that same perfectly working cURL and begins doing the things nobody bothers to do manually. It removes fields, breaks formats, stretches values, messes with enums, and generally behaves like the worst possible client your API will ever meet.&lt;/p&gt;

&lt;p&gt;And suddenly, things get interesting.&lt;/p&gt;

&lt;p&gt;That clean 200 becomes a 500. Validation disappears. Errors make no sense. Status codes start behaving like they were picked randomly by a drunk intern. Nothing catastrophic, nothing dramatic — just a collection of small, embarrassing problems that were always there, quietly waiting.&lt;/p&gt;

&lt;p&gt;This is the part nobody likes to admit.&lt;/p&gt;

&lt;p&gt;Most API bugs are not genius-level failures. They are laziness, assumptions, and the belief that “it worked once” somehow means “it works”.&lt;/p&gt;

&lt;p&gt;It doesn’t.&lt;/p&gt;

&lt;p&gt;What it means is that one very specific request, under very polite conditions, did not explode.&lt;/p&gt;

&lt;p&gt;Congratulations.&lt;/p&gt;

&lt;p&gt;Bruno helps you operate the API. Rentgen questions whether the API deserves to exist in its current state.&lt;/p&gt;

&lt;p&gt;They are not competitors. They are different stages of reality.&lt;/p&gt;

&lt;p&gt;You use Bruno to get things working. You use Rentgen to see how quickly they stop working.&lt;/p&gt;

&lt;p&gt;And if you skip that second part, you end up with automated tests that confidently prove that your happy path still works, while everything else burns quietly in the background.&lt;/p&gt;

&lt;p&gt;The sensible workflow is painfully simple. Get your request working in your client. Run it through something that doesn’t trust you. Fix what breaks. Only then write automation.&lt;/p&gt;

&lt;p&gt;Anything else is just optimism with better tooling.&lt;/p&gt;

&lt;p&gt;If you want the original version without the sarcasm, it’s here: &lt;a href="https://rentgen.io/api-stories/Bruno-vs-Rentgen-same-cURL-different-job.html" rel="noopener noreferrer"&gt;https://rentgen.io/api-stories/Bruno-vs-Rentgen-same-cURL-different-job.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Same cURL.&lt;/p&gt;

&lt;p&gt;Very different outcome.&lt;/p&gt;

</description>
      <category>rentgen</category>
      <category>api</category>
      <category>bruno</category>
      <category>qa</category>
    </item>
    <item>
      <title>AI Testing Is Not Magic — You Already Know How to Do It</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Wed, 29 Apr 2026 15:57:45 +0000</pubDate>
      <link>https://forem.com/liudasjan/ai-testing-is-not-magic-you-already-know-how-to-do-it-3o5n</link>
      <guid>https://forem.com/liudasjan/ai-testing-is-not-magic-you-already-know-how-to-do-it-3o5n</guid>
      <description>&lt;p&gt;AI is everywhere right now. Everyone is talking about AI testing—more specifically, testing LLMs. We keep hearing that it’s a completely different world, that QA engineers need to reskill, that the main issue is non-determinism, and that testing such systems is nearly impossible. Or that classical testing methods simply don’t apply.&lt;/p&gt;

&lt;p&gt;In reality, not really. It’s just overcomplicated.&lt;/p&gt;

&lt;p&gt;Long before AI, we already had systems that behaved non-deterministically. If you’ve worked in AdTech, you know exactly what this means. Whether a banner was shown, and to whom, depended on hundreds of parameters and real-time data. The same input didn’t necessarily produce the same output; in fact, it routinely produced different results.&lt;/p&gt;

&lt;p&gt;And somehow, we still tested those systems.&lt;/p&gt;

&lt;p&gt;In such systems, logic was verified at a lower level—in isolated environments, through unit and integration tests, with clearly defined boundary values. We tested at the edges: with these inputs, the result must be A; with slightly different inputs, the system transitions into another state. The fact that there was more variation beyond those boundaries was not surprising—it was expected. That’s how the system was supposed to behave.&lt;/p&gt;
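
&lt;p&gt;As a tiny illustration of that kind of edge testing (the rule and the numbers below are invented, not taken from any real AdTech system):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical boundary rule: a banner is eligible only at or above the bid floor.
def is_eligible(bid, floor=1.00):
    return bid &gt;= floor

def test_bid_floor_boundary():
    assert is_eligible(1.00) is True    # exactly at the boundary
    assert is_eligible(0.99) is False   # just below it
    assert is_eligible(1.01) is True    # just above it
&lt;/code&gt;&lt;/pre&gt;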

&lt;p&gt;The exact same principle applies to LLM-based systems. But first, let’s put things into perspective.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Are You Actually Testing?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In most cases, you are testing an AI-powered application or system—not the model itself.&lt;/p&gt;

&lt;p&gt;You only need to test the model in two cases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If your company is building its own model&lt;/li&gt;
&lt;li&gt;If you are actually training or fine-tuning the model with your own data (fine-tuning, labeling, humans-in-the-loop, etc.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In all other cases, you do not need to test the model. Yes, that’s a strict statement—intentionally.&lt;/p&gt;

&lt;p&gt;If you’re using OpenAI, Anthropic, or any other cloud model, you are testing the integration. If you’re using an open-source model locally, you are still testing whether it works correctly within your system. That is integration testing. This is a third-party dependency, and the same rules apply as with any other integration.&lt;/p&gt;

&lt;p&gt;Separate your application testing from third-party behavior. Mock the model. Build an emulator. Ensure your tests are fast and stable. Your goal is to find when your application fails—not to retest OpenAI.&lt;/p&gt;
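
&lt;p&gt;A minimal sketch of what that separation can look like (all names here are hypothetical; the only point is that the model sits behind an interface your tests can fake):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Fake model client: deterministic, fast, no network. Names are made up.
class FakeLLMClient:
    def __init__(self, canned_reply):
        self.canned_reply = canned_reply

    def complete(self, prompt):
        return self.canned_reply          # same reply no matter the prompt

# Application code under test: it must cope with whatever the model returns.
def summarize_ticket(ticket_text, llm):
    reply = llm.complete(f"Summarize: {ticket_text}")
    return reply.strip() or "NO SUMMARY"

def test_summary_handles_empty_model_output():
    llm = FakeLLMClient(canned_reply="   ")
    assert summarize_ticket("Customer cannot log in", llm) == "NO SUMMARY"
&lt;/code&gt;&lt;/pre&gt;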

&lt;p&gt;&lt;strong&gt;What About Model Updates?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Of course, you need control. Just like with any integration.&lt;/p&gt;

&lt;p&gt;If the model version changes, you need to verify that your use case hasn’t been broken. But this is not model testing in the fundamental sense. It’s regression testing—checking whether the integration still works and whether behavior hasn’t changed in a critical way.&lt;/p&gt;

&lt;p&gt;A small regression dataset, a few key scenarios, a sanity check—and move on. We are not testing the model architecture. We are verifying that our system still works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Interesting Part — When You Train the Model Yourself&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’re using an open-source model and fine-tuning it with your own data, then you’re no longer testing the base model—you’re testing your training. In other words, you’re verifying whether you trained the model correctly.&lt;/p&gt;

&lt;p&gt;And here, things are much simpler than they appear.&lt;/p&gt;

&lt;p&gt;The goal of testing remains exactly the same as always: find where the system fails.&lt;/p&gt;

&lt;p&gt;In the case of LLMs, that means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The model gets stuck&lt;/li&gt;
&lt;li&gt;It produces nonsense&lt;/li&gt;
&lt;li&gt;It starts hallucinating&lt;/li&gt;
&lt;li&gt;It becomes unstable with certain inputs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Where does this happen? Most often—at the edges.&lt;/p&gt;

&lt;p&gt;Where data is scarce. Where classes overlap. Where context is ambiguous. That’s where the model behaves more randomly. And that’s expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Practical Testing Approach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The method is simple (a minimal code sketch follows the list):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Take a boundary case&lt;/li&gt;
&lt;li&gt;Run it hundreds of times&lt;/li&gt;
&lt;li&gt;Measure the percentage of correct responses&lt;/li&gt;
&lt;li&gt;If the percentage meets your defined threshold, the test passes; if not, it fails.&lt;/li&gt;
&lt;/ol&gt;
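
&lt;p&gt;The same rule as a minimal sketch (run_model and is_correct stand in for your own system and your own correctness check; both are hypothetical here):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Statistical stability check for one case.
def stability_check(case, runs=200, threshold=0.90):
    correct = 0
    for _ in range(runs):
        output = run_model(case["input"])            # your system under test
        if is_correct(output, case["expected"]):     # your correctness rule
            correct += 1
    rate = correct / runs
    print(f"{case['name']}: {rate:.1%} correct over {runs} runs")
    return rate &gt;= threshold    # passes only if it meets your defined threshold

# A boundary case might only need to clear, say, 90 percent;
# a clear, non-boundary case should sit near 99-100 percent.
&lt;/code&gt;&lt;/pre&gt;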

&lt;p&gt;The fix? Not code changes. You need more or better-quality training data.&lt;/p&gt;

&lt;p&gt;Now take clear, non-boundary cases where the answer should almost always be correct. Run them hundreds of times as well. The expected result is close to 99–100%.&lt;/p&gt;

&lt;p&gt;If you see significant variation here, you have a problem with training and/or data quality. This is not rocket science—it’s basic statistical stability testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When Everything Becomes Part of a System&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the AI model becomes part of a larger system, additional challenges appear—performance, monitoring, real-world data, unexpected inputs. But this is just system-level testing—operational testing.&lt;/p&gt;

&lt;p&gt;Does the system work under real conditions?&lt;br&gt;
Does it handle load?&lt;br&gt;
Do fallback mechanisms work?&lt;br&gt;
Security, etc.&lt;/p&gt;

&lt;p&gt;Fundamentally, this is no different from testing any other complex system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thought&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI testing is not magic.&lt;/p&gt;

&lt;p&gt;LLM non-determinism is not a revolution in testing, nor is it a problem. We have already tested complex, probabilistic systems before.&lt;/p&gt;

&lt;p&gt;The key is to clearly separate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;where you are testing your application&lt;/li&gt;
&lt;li&gt;where you are testing integration&lt;/li&gt;
&lt;li&gt;and where you are actually testing your data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything else is the same: the same principles, the same goals, the same discipline.&lt;/p&gt;

&lt;p&gt;Of course, if you are building your own AI model from scratch—that’s a different topic.&lt;/p&gt;

&lt;p&gt;I’ll leave that for another time.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
      <category>testing</category>
    </item>
    <item>
      <title>Rentgen vs Apidog — not competitors, different moments</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Sun, 26 Apr 2026 14:33:34 +0000</pubDate>
      <link>https://forem.com/liudasjan/rentgen-vs-apidog-not-competitors-different-moments-gk4</link>
      <guid>https://forem.com/liudasjan/rentgen-vs-apidog-not-competitors-different-moments-gk4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz61sa90pf119myebdju1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz61sa90pf119myebdju1.gif" alt=" " width="720" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;People often try to compare Rentgen with tools like Apidog. On paper it sounds logical — both are related to APIs. But in practice, they solve completely different problems.&lt;/p&gt;

&lt;p&gt;Apidog is a full API development platform. You design APIs, mock them, document them, collaborate with your team, write assertions, and integrate everything into CI/CD. It’s a system you live in when building and maintaining APIs.&lt;/p&gt;

&lt;p&gt;Rentgen is not that. And it doesn’t try to be.&lt;/p&gt;

&lt;p&gt;Rentgen flow: cURL in, tests out&lt;br&gt;
One request in. Reality instead of assumptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The moment everyone skips&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is a specific moment in API development that almost nobody talks about. A developer writes an endpoint, sends a request, gets a response, and moves on. Maybe they check for 200 OK, maybe they look at the response body. And then comes the classic line: “I tested it. It works.”&lt;/p&gt;

&lt;p&gt;That’s where most problems are already hiding. Not because someone did something wrong, but because only the expected scenario was tested. Everything else is still unknown.&lt;/p&gt;

&lt;p&gt;Tools like Apidog come in after that — to formalize behavior, add structure, build proper test suites. But the initial assumption is already baked into the system.&lt;/p&gt;

&lt;p&gt;Rentgen exists exactly at that earlier step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Rentgen actually does&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of writing tests, you take a real request — the same cURL you would paste into any API tool — and run it through Rentgen. It expands that single request into a wide range of variations that reflect real-world usage: invalid inputs, boundary values, malformed payloads, and unexpected combinations.&lt;/p&gt;

&lt;p&gt;Then you see how the API actually behaves. Not what it was designed to do, but what it really does under imperfect conditions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where the difference shows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where the gap becomes obvious. Instead of discovering new features, you start seeing the same patterns over and over: inconsistent status codes, unexpected 500 errors, validation happening too late, inputs that should fail but pass, or payloads that break things entirely.&lt;/p&gt;

&lt;p&gt;These are not rare edge cases. These are common problems that only show up when you stop testing just the happy path.&lt;/p&gt;

&lt;p&gt;We’ve seen this repeatedly across real-world APIs — including large, production systems. The interesting part is that many of these issues are not caught by traditional automation because automation usually reinforces expected behavior instead of challenging it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apidog builds. Rentgen questions.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apidog is about building, structuring, and managing APIs properly. It gives teams control and collaboration across the entire lifecycle.&lt;/p&gt;

&lt;p&gt;Rentgen does something much simpler. It questions assumptions. It asks what happens when real input doesn’t match what was expected.&lt;/p&gt;

&lt;p&gt;That question usually comes too late — after tests are written or even after release. Rentgen moves it earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not a replacement. A missing layer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is why these tools are not competitors. You don’t replace Apidog with Rentgen. You run Rentgen before you fully trust your API.&lt;/p&gt;

&lt;p&gt;Fix the obvious issues early, then move into structured testing, automation, and CI/CD with something that actually behaves correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most teams don’t fail because they lack tools. They fail because they trust the first working response too quickly.&lt;/p&gt;

&lt;p&gt;One request works, so everything “feels done”. But one request only proves one thing works. Everything else is still unknown.&lt;/p&gt;

&lt;p&gt;That’s the gap Rentgen is built for — not to replace existing tools, but to make sure that when you say “it works”, you actually know what that means.&lt;/p&gt;

</description>
      <category>rentgen</category>
      <category>api</category>
      <category>rest</category>
      <category>qa</category>
    </item>
    <item>
      <title>Postman vs Bruno</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Tue, 21 Apr 2026 05:29:55 +0000</pubDate>
      <link>https://forem.com/liudasjan/postman-vs-bruno-5bj3</link>
      <guid>https://forem.com/liudasjan/postman-vs-bruno-5bj3</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmpao5ck7pvhrlaj1p4o.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmpao5ck7pvhrlaj1p4o.jpeg" alt=" " width="800" height="796"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While Postman and Bruno argue who’s the better API client…&lt;/p&gt;

&lt;p&gt;Rentgen is not even in that category.&lt;/p&gt;

&lt;p&gt;Because the problem is not sending requests.&lt;br&gt;
The problem is understanding what your API actually does.&lt;/p&gt;

&lt;p&gt;👉 API clients = send requests&lt;br&gt;
👉 Rentgen = shows real behavior&lt;/p&gt;

&lt;p&gt;One request → 50–200 test cases → real edge cases → real bugs&lt;/p&gt;

&lt;p&gt;That’s the difference.&lt;br&gt;
Automation before automation.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://rentgen.io" rel="noopener noreferrer"&gt;https://rentgen.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14bszbwqjzme74uwidbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14bszbwqjzme74uwidbk.png" alt=" " width="800" height="1252"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Most APIs don’t break on happy paths</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Mon, 20 Apr 2026 11:32:07 +0000</pubDate>
      <link>https://forem.com/liudasjan/most-apis-dont-break-on-happy-paths-2g4f</link>
      <guid>https://forem.com/liudasjan/most-apis-dont-break-on-happy-paths-2g4f</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxejrnx6tlunx7vxxnewy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxejrnx6tlunx7vxxnewy.png" alt=" " width="800" height="786"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before Rentgen&lt;br&gt;
Everything looks fine.&lt;br&gt;
Status: 200 ✅&lt;/p&gt;

&lt;p&gt;After Rentgen&lt;br&gt;
Reality hits. 💥&lt;/p&gt;

&lt;p&gt;Most APIs don’t break on happy paths.&lt;br&gt;
They break on everything else.&lt;/p&gt;

&lt;p&gt;Paste one request → see what actually happens.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://rentgen.io" rel="noopener noreferrer"&gt;https://rentgen.io&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>New Rentgen Release v1.20.0 🚀</title>
      <dc:creator>Liudas</dc:creator>
      <pubDate>Mon, 06 Apr 2026 17:23:51 +0000</pubDate>
      <link>https://forem.com/liudasjan/new-rentgen-release-v1200-4he7</link>
      <guid>https://forem.com/liudasjan/new-rentgen-release-v1200-4he7</guid>
      <description>&lt;p&gt;Release v1.20.0 🚀 &lt;/p&gt;

&lt;p&gt;👉 Project export / import (this is the main one)&lt;br&gt;
No accounts. No cloud. No data leaving your machine.&lt;br&gt;
Export your project → get a file → put it anywhere: Dropbox, GitHub or any shared folder. Your team (5, 100 or 10 000 people) just takes it and uses it. At zero cost. No sync back to a server. No logins. No vendor lock-in. No data exposure. Just a file. Full control.&lt;/p&gt;

&lt;p&gt;👉 New test: invalid token. We take your Authorization and break it.&lt;br&gt;
Expected: 401&lt;br&gt;
Reality: often not.&lt;/p&gt;
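
&lt;p&gt;The same check done by hand, as a minimal sketch (the endpoint is hypothetical; the release does this for you from a pasted request):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Replay the working request with a deliberately broken token.
import requests

URL = "https://api.example.com/v1/orders"    # hypothetical endpoint
resp = requests.get(URL, headers={"Authorization": "Bearer not-a-real-token"}, timeout=10)

# Expected: 401. A 200, a 500 or an HTML error page is worth a closer look.
print(resp.status_code)
&lt;/code&gt;&lt;/pre&gt;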

&lt;p&gt;👉 Response time. Now visible per request and per test. No extra tools. No guessing. &lt;/p&gt;

&lt;p&gt;Paste a request. See what breaks. Automation before automation.&lt;/p&gt;

&lt;p&gt;The new release is live at Rentgen.io&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdap97tpjrzx7rbfop5z.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdap97tpjrzx7rbfop5z.gif" alt=" " width="1897" height="972"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rentgen</category>
      <category>api</category>
      <category>testing</category>
      <category>devtool</category>
    </item>
  </channel>
</rss>
