<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Andrian Budantsov</title>
    <description>The latest articles on Forem by Andrian Budantsov (@abudantsov).</description>
    <link>https://forem.com/abudantsov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3450887%2F0a30d13f-297e-4318-b1b6-65ef55fc6748.png</url>
      <title>Forem: Andrian Budantsov</title>
      <link>https://forem.com/abudantsov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/abudantsov"/>
    <language>en</language>
    <item>
      <title>Regression Testing: Why Getting It Right Matters So Much</title>
      <dc:creator>Andrian Budantsov</dc:creator>
      <pubDate>Thu, 22 Jan 2026 15:42:07 +0000</pubDate>
      <link>https://forem.com/abudantsov/regression-testing-why-getting-it-right-matters-so-much-2fib</link>
      <guid>https://forem.com/abudantsov/regression-testing-why-getting-it-right-matters-so-much-2fib</guid>
      <description>&lt;p&gt;“If it ain’t broke, don’t fix it,” the old adage goes. But things do break, and they need to be fixed, updated, modified or tweaked frequently, especially in the world of software. And when changes are made there’s a risk that rather than making things better, something else gets broken. &lt;/p&gt;

&lt;p&gt;Software applications are complex and interconnected. They’re also fragile and imperfect and patches and fixes will need to be applied regularly. This means that as soon as version 1.0.1 of a software product exists, QA teams must carry out regression testing. &lt;/p&gt;

&lt;p&gt;And getting regression testing wrong can be calamitous. Already this year, Volkswagen and Porsche have had to recall more than &lt;a href="https://www.reuters.com/business/autos-transportation/volkswagen-recall-over-356600-us-vehicles-over-rearview-camera-glitch-2026-01-06/" rel="noopener noreferrer"&gt;350,000&lt;/a&gt; vehicles in the US due to a rearview camera glitch, while Volvo has also needed to recall hundreds of thousands of cars for &lt;a href="https://www.reuters.com/legal/litigation/volvo-cars-recalls-over-413000-us-vehicles-due-rearview-camera-issue-2026-01-08/" rel="noopener noreferrer"&gt;urgent updates&lt;/a&gt; to address a rearview camera issue. &lt;/p&gt;

&lt;p&gt;There’s an irony here: because these companies failed to look back over their work properly, many drivers could no longer look backwards properly. But that’s not to make light of the serious consequences of getting regression testing wrong. Last year, a buggy app update even &lt;a href="https://www.reuters.com/business/retail-consumer/sonos-ceo-patrick-spence-steps-down-after-app-update-debacle-2025-01-13/" rel="noopener noreferrer"&gt;cost the Sonos CEO his job&lt;/a&gt;, and HP was drawn into &lt;a href="https://arstechnica.com/gadgets/2025/03/hp-avoids-monetary-damages-over-bricked-printers-in-class-action-settlement/" rel="noopener noreferrer"&gt;years of litigation&lt;/a&gt; over ‘bricked’ printers. Getting regression testing right is complicated, requiring experienced QA professionals to make trade-offs and sound judgments. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regression testing isn’t just for mature systems&lt;/strong&gt;&lt;br&gt;
Regression testing starts well before the first release. Even in early development, product owners expect to see constant improvement rather than old bugs resurfacing. But the level of risk is significantly raised once the software is in the wild. As soon as people depend on a product for their daily work, any update that breaks existing functionality becomes a major liability.&lt;/p&gt;

&lt;p&gt;This is not a failure of process or discipline on the part of software developers, but an intrinsic property of software itself. The way modern systems are built revolves around reuse and centralization. Shared libraries, common services, and abstracted infrastructure mean development is faster and maintenance is easier. When a bug is fixed in one place, the benefits are seen across the system. &lt;/p&gt;

&lt;p&gt;Unfortunately, the same is true when a bug is introduced. A change made to address a specific issue can break functionality in parts of the application that are seemingly completely unrelated. Engineers will insist that they touched only a small, isolated component – and they’re usually telling the truth. The problem is that in software systems very few things exist in complete isolation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regression testing isn’t about making sure new features work&lt;/strong&gt;&lt;br&gt;
This is why regression testing exists. Its purpose is not to prove that the software is perfect or even that new features work particularly well. It is simply about ensuring that the software is not worse than it was before. Users who relied on a feature yesterday should be able to rely on it today. New functionality may arrive in a broken or imperfect state, and while this is unfortunate, it rarely causes immediate disruption. What does cause disruption, though, is when features that people previously depended on suddenly stop working.&lt;/p&gt;

&lt;p&gt;The need for regression testing is even more critical with updates. Users may be hostile toward them, unconvinced that they are necessary; they may even believe that they’re a way of perpetuating &lt;a href="https://en.wikipedia.org/wiki/Planned_obsolescence#Software_degradation_and_lock-out" rel="noopener noreferrer"&gt;planned obsolescence&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;But in reality, updates are usually unavoidable. Regulatory requirements change and security vulnerabilities emerge. Even when an update has been issued solely to fix a security issue, regression testing is essential. A secure system that no longer performs its core functions is not going to please end users, and will damage a company’s reputation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developing a robust regression testing strategy&lt;/strong&gt;&lt;br&gt;
Regression testing isn't a 100% reliable process – but tech companies release thousands of updates every day, and only the faulty ones make headlines. A single changed line of code can alter execution paths in ways that are difficult to predict, so to give themselves the best chance of finding bugs, QA teams must work out both what they need to test and how to run those tests. &lt;/p&gt;

&lt;p&gt;A robust regression testing strategy starts with an understanding of what the system is supposed to do. In an ideal world, this is based on documented requirements. In the real world, where documentation often lags behind deployment, this 'backbone' is often built on empirical observation – a shared understanding of established behavior. Whether the source is a formal specification or the collective 'tribal knowledge' of the team, these expectations must be captured in test cases to ensure that yesterday’s progress doesn't become tomorrow's regression.&lt;/p&gt;
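&lt;p&gt;As a minimal sketch of what capturing that backbone looks like in practice – assuming a Python codebase and a pytest-style runner, with the hypothetical &lt;code&gt;format_price&lt;/code&gt; standing in for any behavior users already rely on – each expectation becomes a permanent, executable check:&lt;/p&gt;

```python
# Hypothetical example: pin down established behavior so it can't silently regress.
# `format_price` stands in for any function whose output users already rely on.

def format_price(cents: int) -> str:
    """Format a price in cents as a dollar string, e.g. 1999 -> '$19.99'."""
    return f"${cents // 100}.{cents % 100:02d}"

def test_format_price_regression():
    # These assertions encode yesterday's behavior; if a future change
    # alters any of them, the regression is caught before release.
    assert format_price(1999) == "$19.99"
    assert format_price(5) == "$0.05"
    assert format_price(0) == "$0.00"

test_format_price_regression()
```

&lt;p&gt;Whether the expectations come from a specification or from tribal knowledge, once they are written down like this they outlive the people who remembered them.&lt;/p&gt;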

&lt;p&gt;And maintaining this backbone is harder than it sounds. Real users often interact with systems in ways their creators never anticipated, and sometimes bugs even become de facto features: only after fixing them does a team discover that certain workflows depended on the old behavior. The line between defect and feature is blurred, which complicates any decision about what should and shouldn’t be allowed to regress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Running regression tests on everything isn’t realistic&lt;/strong&gt;&lt;br&gt;
There will always be attempts to predict the impact of changes without running extensive tests. Static analysis tools map out code paths, and in recent years AI-enabled solutions have attempted to infer behavioral consequences from source code changes. These tools are helpful, but they don’t solve the problem as such; they provide additional insight, not certainty. Ultimately, regression testing remains the only reliable way to detect when software has become worse than it was before.&lt;/p&gt;

&lt;p&gt;Running exhaustive tests where every single possible use case is checked is not realistic, so even the most comprehensive regression test suites will have gaps. In an ideal world, every single change would involve a full cycle of regression testing. In practical terms, though, this is an expensive process, requiring a great deal of maintenance effort on the part of QA teams, and a lot of computation. &lt;/p&gt;

&lt;p&gt;So organizations make assumptions; they trust that recent fixes are sufficiently targeted and do not invalidate everything that has already been tested. These assumptions aren’t always correct, but without making them, most software would take a long time to ship.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regression test libraries need regular maintenance&lt;/strong&gt;&lt;br&gt;
Running regression tests is only part of the challenge for QA teams; the test suites also need to be maintained. Manual testing doesn’t scale well and is prone to human error. Automated scripts are quick, but also fragile; user interfaces change, workflows evolve, and suddenly large portions of the test suite fail. Teams are then forced to decide whether to invest time fixing scripts and slow down the testing process or to reduce coverage and increase risk levels.&lt;/p&gt;
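&lt;p&gt;A toy illustration of why scripts break this way – the element dictionaries below stand in for a UI tree from any automation framework, and all the names are invented for the example – is the difference between locating a control by position and locating it by intent:&lt;/p&gt;

```python
# Illustrative only: why intent-based lookups age better than positional ones.
# The dictionaries stand in for a UI tree from any automation framework.

ui_v1 = [
    {"role": "button", "label": "Cancel"},
    {"role": "button", "label": "Save"},
]
# A redesign reorders the toolbar and adds an element:
ui_v2 = [
    {"role": "button", "label": "Save"},
    {"role": "link",   "label": "Help"},
    {"role": "button", "label": "Cancel"},
]

def find_by_position(ui, index):
    # Brittle: breaks as soon as the layout changes.
    return ui[index]

def find_by_intent(ui, label):
    # Robust: expresses what the test means, not where the widget sits.
    return next(e for e in ui if e["label"] == label)

assert find_by_position(ui_v1, 1)["label"] == "Save"
assert find_by_position(ui_v2, 1)["label"] != "Save"      # the positional script now hits the wrong element
assert find_by_intent(ui_v2, "Save")["role"] == "button"  # intent survives the redesign
```

&lt;p&gt;Intent-based locators reduce breakage but don’t eliminate the maintenance burden; the trade-off described above remains.&lt;/p&gt;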

&lt;p&gt;Newer approaches such as agentic and AI-assisted testing can address some of this fragility. Instead of rigid scripts that expect exact UI layouts, intelligent agents can interpret intent and adapt to changes in presentation. This reduces fragility, but also comes with a certain level of uncertainty. These systems are improving rapidly, yet they still require human oversight. For now, they complement traditional regression testing rather than replace it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regression testing: The least-worst option&lt;/strong&gt;&lt;br&gt;
Winston Churchill once &lt;a href="https://winstonchurchill.org/resources/quotes/the-worst-form-of-government/" rel="noopener noreferrer"&gt;famously described&lt;/a&gt; democracy as “the worst form of government, except for all those other forms that have been tried from time to time”. The same could be said for regression testing. It is rarely perfect, it is often expensive, and it can be frustratingly slow. Yet it remains the only viable defense we have against the inherent instability of changing code.&lt;/p&gt;

&lt;p&gt;In the wake of a major software failure, hindsight usually reveals a missed opportunity – a specific test case that, had it been run, would have caught the issue. This is the central tension of the discipline: while no suite can catch every bug, almost every bug was catchable in theory.&lt;/p&gt;

&lt;p&gt;Instead of seeing these failures as purely negative, experienced teams view them as essential data. Every production defect is an opportunity to refine the backbone of your testing strategy. By feeding these lessons back into the regression suite, the software gets more resilient. We don't perform regression testing to achieve perfection; we do it to ensure that the mistakes of the past are never repeated in the future.&lt;/p&gt;

&lt;p&gt;But the ultimate truth is that once software is used in the real world, regression testing is essential. After the first release, the question is no longer whether change will introduce problems, but when. Regression testing is how teams detect those problems before users do, and that alone makes it one of the most critical disciplines in software production.&lt;/p&gt;

</description>
      <category>qa</category>
      <category>testing</category>
      <category>product</category>
      <category>learning</category>
    </item>
    <item>
      <title>Pace over precision – or precision over pace?</title>
      <dc:creator>Andrian Budantsov</dc:creator>
      <pubDate>Thu, 20 Nov 2025 14:35:15 +0000</pubDate>
      <link>https://forem.com/abudantsov/pace-over-precision-or-precision-over-pace-5812</link>
      <guid>https://forem.com/abudantsov/pace-over-precision-or-precision-over-pace-5812</guid>
      <description>&lt;p&gt;Time is money. And this maxim is especially true in the world of software, where delays to release schedules can be costly for both the developing company and its customers. &lt;/p&gt;

&lt;p&gt;It’s only natural that everybody wants to get things done more quickly – not only for the sake of efficiency, but to avoid the penalties associated with late delivery and letting customers down. But there’s another principle that’s equally important – if not more so – in the software industry: the cost of shipping a subpar product, riddled with bugs, can spiral far beyond the savings of an earlier release.&lt;/p&gt;

&lt;p&gt;Faulty software applications need to be fixed, which is expensive and causes great inconvenience for customers. Products that aren’t good enough – whether games, productivity suites, smartphone apps, or applications designed for specific functions within highly specialized industries – will leave end users unhappy, and they may never buy another product from you again. &lt;/p&gt;

&lt;p&gt;And it’s not only the additional costs, lost sales and damage to reputation that need to be considered; there’s also the risk of regulatory and legal trouble. With the EU’s Product Liability Directive 2024/2853 (PLD 2024), which must be transposed into national law across EU member states by 9 December 2026, software companies could face substantial liability for injuries, property damage, or data loss caused by defective software.&lt;/p&gt;

&lt;p&gt;So how can software companies find the perfect balance between risk and reward? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don’t cut corners – but do prioritize speed&lt;/strong&gt;&lt;br&gt;
The first thing to emphasize is that the language used to describe this drive for efficiency matters. Calling it ‘cutting corners’ suggests a complacency that no engineer or salesperson would want to be associated with. But in markets that move at a rapid pace, there is nothing wrong with prioritizing speed. &lt;/p&gt;

&lt;p&gt;Speed in itself isn't a bad thing. Think about bullet trains: they move incredibly fast, but never at the expense of safety or reliability. They run on carefully maintained tracks, follow strict schedules, and rely on constant monitoring and oversight. The speed works because the system around it ensures consistency and control.&lt;/p&gt;

&lt;p&gt;In the vast majority of cases, the people working at a software company – no matter their function or level – care about quality. They want to do good work and to make functional, fit-for-purpose products. Within that organization, many will be acutely aware of the urgency of meeting release schedules, and for some it is their job to exert pressure on development and engineering teams to ensure deadlines are met. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The irresistible force meets an immovable object&lt;/strong&gt;&lt;br&gt;
Management and customer relationship teams need to ensure software products are delivered on schedule. It’s not realistic for development and QA teams to demand extra testing time to ensure the code is absolutely flawless. But there’s no way that the company can risk issuing an unstable product. &lt;/p&gt;

&lt;p&gt;While managers must translate the business’s sense of urgency – the desire to move faster, to iterate faster, and to spend fewer resources achieving results – they must also put in checks and balances to ensure that teams and individuals don’t adopt a slapdash attitude. At the same time, removing pressure to hit deadlines and allowing teams to work entirely at their own pace carries another risk: inefficient use of time. Without clear incentives, effort can drift toward polishing areas that don't matter, while the parts of the product that truly impact users get less attention.&lt;/p&gt;

&lt;p&gt;So something has to give. There is a need for solid risk assessment and sound judgement. The key is to understand the full picture. For software companies working in highly regulated industries such as healthcare and finance, there can be no corner-cutting. For gaming companies, things aren’t quite so serious from a regulatory perspective – but there is likely to be intense scrutiny from gamers and the media, and the bigger the game, the more ruthless that scrutiny is likely to be. &lt;/p&gt;

&lt;p&gt;Full knowledge of where you stand from a regulatory and legal point of view is the starting point. Then reputation must be considered; Rockstar Games, for example, clearly decided that a sub-standard product just isn’t worth releasing when it chose to delay the launch of Grand Theft Auto VI. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to identify the balance of risk and reward&lt;/strong&gt;&lt;br&gt;
There are some key questions that you must know the answer to. Such as: What happens if something breaks? Will end users notice? Will they care? Do we have a system in place to detect when things break? And what is our method for delivering and applying hotfixes?&lt;/p&gt;

&lt;p&gt;Once you’ve established the answers, you can begin to prioritize the areas where rigorous testing is non-negotiable and those where the need isn’t quite so pressing. Customer-facing features that affect the primary functionality of the software obviously come at the top of the list.  &lt;/p&gt;

&lt;p&gt;Monitoring for issues post-release is vital. With some software applications that have been designed for physical products that don't connect to the internet, this may be difficult. Even where connectivity exists, privacy rules, compliance obligations, or air-gapped environments may restrict telemetry. In those cases, teams must rely on on-device logs, staged rollouts, or structured feedback loops to gain the necessary visibility. Where telemetry is permitted, it provides an early warning system that helps companies see when products are working properly — or when they aren't, and what the issue might be.&lt;/p&gt;

&lt;p&gt;And when it comes to patching software products, having an effective system for developing and issuing hotfixes is essential. Monitoring and patching go hand in hand: visibility without a rapid response channel is wasted, and patching without reliable detection risks being blind.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Put quality first and speed will follow&lt;/strong&gt;&lt;br&gt;
With the right guardrails in place, the next step is refining how software gets built. Moving QA earlier in the cycle — often called the ‘shift-left’ approach — helps teams catch issues when they're still small and easy to fix. Every bug found early saves hours (and headaches) later.&lt;/p&gt;

&lt;p&gt;Clarity is just as important. Test management has to be streamlined and transparent: who owns each test, what's being tested, when it's happening, and how feedback flows back. When that visibility is in place, issues surface sooner, and teams avoid wasting time chasing them down.&lt;/p&gt;

&lt;p&gt;Automation adds another layer of speed. Repetitive checks should be handled by machines, freeing humans to focus on judgment calls and edge cases. Experienced teams know that automation is a multiplier only when it's stable — without reliability (flakiness control, deterministic data, fast feedback), it risks becoming a drag. When done right, and paired with tighter communication between developers, QA, and management, it delivers a smoother, faster cycle.&lt;/p&gt;
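&lt;p&gt;One small, concrete piece of that stability is deterministic test data. As a sketch (the &lt;code&gt;make_test_orders&lt;/code&gt; helper is hypothetical), seeding the random generator means every run sees identical input, so a failure points to a real regression rather than unlucky data:&lt;/p&gt;

```python
import random

# Flaky automation often stems from nondeterministic test data.
# Seeding the generator makes every run reproducible, so a failure
# means a real regression rather than an unlucky input.

def make_test_orders(n, seed=42):
    rng = random.Random(seed)  # deterministic, and isolated from global random state
    return [{"id": i, "quantity": rng.randint(1, 5)} for i in range(n)]

assert make_test_orders(3) == make_test_orders(3)  # same seed, same data, every run
```

&lt;p&gt;The same principle applies to clocks, network responses and ordering: pin down every source of nondeterminism the test doesn’t intend to exercise.&lt;/p&gt;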

&lt;p&gt;Finally, QA itself is evolving into Quality Engineering (QE). Instead of quality being "owned" by one team, it becomes a shared responsibility. Developers design and write tests for their code. Quality engineers provide the frameworks, automation, and oversight that catch blind spots. Together, they make quality scalable — without slowing delivery down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collective responsibility is key&lt;/strong&gt;&lt;br&gt;
Ultimately, experienced engineers will have a deep understanding of where testing should be focused and rigorous. But they will also know that you can't test everything, and have a good grasp on the areas where testing is less critical. &lt;/p&gt;

&lt;p&gt;If parts of the testing process can be automated – with human oversight – then there are time savings to be made. But the best way to improve delivery times while keeping quality high is to embrace QE. Again, it’s about finding the right processes for your organization. When everyone has a vested interest in the quality of the product, problems are identified earlier, giving you a much better chance of delivering the final version more quickly. &lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>qa</category>
      <category>product</category>
      <category>testing</category>
    </item>
    <item>
      <title>Engineering matters: How QA is evolving into QE</title>
      <dc:creator>Andrian Budantsov</dc:creator>
      <pubDate>Tue, 09 Sep 2025 15:26:08 +0000</pubDate>
      <link>https://forem.com/abudantsov/engineering-matters-how-qa-is-evolving-into-qe-16oe</link>
      <guid>https://forem.com/abudantsov/engineering-matters-how-qa-is-evolving-into-qe-16oe</guid>
      <description>&lt;p&gt;Experienced test engineers and transparent Quality Assurance (QA) practices are vital to helping ensure that software applications work as they are supposed to, and reducing the risk of nasty surprises. However, poorly planned testing can seriously derail the ability of software companies to deliver products on schedule.&lt;br&gt;
In the past, it was very common to work on a software product for very long cycles. Big software companies would take years to develop large applications, from operating systems to productivity suites. There would be a distinct, dedicated testing phase before release. And once the product was released into the wider world, any problems that emerged would often be blamed on QA teams.&lt;br&gt;
That's a lot of pressure to put on QA. Software applications are complex and will be used in many different scenarios and environments by different users; it's impossible to test every single use case. And there are also hard deadlines to meet; if the software is to be released as planned, there’s only so much testing that can be done.&lt;br&gt;
That's one reason why shorter release cycles became more common, with each new version of a software product having a much smaller scope in terms of changes. But even then, QA teams would still probably be on the hook if things went wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Should we be talking about QA or QE?&lt;/strong&gt;&lt;br&gt;
In the quest to deliver better outcomes, many development teams are now thinking less about QA and more about QE – Quality Engineering. To understand this shift, it's helpful to clarify the distinctions: QA assures the process and system of work; testing/quality control evaluates the product; QE embeds engineering practices – testability, automation, CI/CD, and feedback loops – so quality is built in, not inspected in. &lt;br&gt;
It's not about abandoning other practices in favour of QE, but rather ensuring quality becomes integral to every part of the development process. And responsibility for quality is shared out, so quality is not the sole task of a single engineer or team.&lt;br&gt;
While changing established ways of working might be a challenge for organisations, the benefits — faster delivery times and better products — are clear, at least in theory. Working smarter, not harder, is the ultimate objective with a QE approach. But there are real challenges that trip up many teams trying to make this shift.&lt;br&gt;
The biggest hurdle is getting people on board. Developers need to care about quality, not just shipping features. If your company only rewards teams for how much code they write or how many features they deliver, quality will always come second. Teams need clear quality goals and responsibility for fixing what breaks in production.&lt;br&gt;
Management support is crucial too. You can't ask developers to own quality without giving them the right tools and time. That means proper test environments, decent build pipelines, and time to improve old code. Without these basics, asking people to own quality is just setting them up to fail.&lt;br&gt;
Then there's the technical reality: old code may be hard to test. Many companies have applications that were built years ago without modern testing in mind. Making these systems testable can take years of gradual improvement before QE practices actually work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You might already be doing QE without realising it&lt;/strong&gt;&lt;br&gt;
QE is fundamentally about two things: mindset (shared responsibility for quality) and process (systematic practices to deliver it). You might already be doing QE without realising it.&lt;br&gt;
Smaller companies and startups often have the mindset naturally — with lean teams, everyone feels accountable for quality outcomes. While many struggle with process maturity, some have done a sufficient job implementing quality practices from the ground up.&lt;br&gt;
For larger companies, it depends on culture and team dynamics. Some brilliant engineering teams already practice QE-style development without calling it that — embedding quality throughout their workflow. Others need to make changes on many levels to shift from traditional quality gatekeeping to shared responsibility models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How companies can change from QA to QE&lt;/strong&gt;&lt;br&gt;
To change from a 'traditional' QA mindset to a QE approach requires changes in both mindset and procedures.&lt;/p&gt;

&lt;p&gt;On the mindset side, bringing in coaches and specialists to run workshops for development teams can help instil shared responsibility thinking through exercises focused on teamwork and collaboration. There may be some hurdles with getting people to embrace these sessions and to be receptive to new ways of working, of course. Everyone should be aware of the need to work on quality before the product is ready, thinking about tests during the planning process, and writing tests alongside the code.&lt;/p&gt;

&lt;p&gt;On the procedural side, teams need to focus on several key areas:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foundational practices.&lt;/strong&gt; This involves reworking the approach to unit testing and other efforts to improve reliability, code review and static analysis, and treating functional testing as part of implementation. Design for testability — stable test data, environment parity, seams and hooks — so tests are cheap to write and trust. In regulated and safety-critical contexts, you must keep appropriate independence and formal validation, even as you adopt QE practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shift left.&lt;/strong&gt; Design the system for testability and run fast checks as early as possible – with both developers and testers together – so issues surface when they’re cheapest to fix.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shift right.&lt;/strong&gt; QE also extends into production, using observability, canary releases, synthetic monitoring, feature flags, automated canary analysis, rollback procedures, and production feedback loops to continuously validate quality. In larger organisations, Site Reliability Engineering (SRE) or platform teams typically own many of these practices — monitoring, release strategies, and incident response — while QE collaborates with them so pre-production test evidence and production signals form one feedback loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Beyond functional testing.&lt;/strong&gt; QE treats non-functional risks — performance, reliability, security, accessibility — as first-class concerns, with explicit tests and runtime monitoring and observability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tooling.&lt;/strong&gt; You don't need one magic tool that does everything. Instead, think of QE tooling as different tools working together smoothly. Your test management system tracks what tests you have and what happened when you ran them. Your build pipeline (GitHub Actions, Jenkins, and so on) runs tests automatically when code changes. Monitoring tools watch what's happening in production. The key is making sure these tools talk to each other — when a test fails, you should be able to trace it back to the original requirement and see what got deployed. And only automate what you can trust — flaky or unreliable automation is worse than doing things manually because it gives false confidence and wastes everyone's time.&lt;/p&gt;
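&lt;p&gt;As a minimal sketch of that traceability – every identifier below is invented, and in practice this index would be populated from your test management system and CI pipeline – linking a failing test back to its requirement and the deployment that changed it can be as simple as a shared lookup:&lt;/p&gt;

```python
# Toy traceability index; all names are illustrative. Real teams would
# populate this from their test-management system and CI pipeline.
test_index = {
    "test_checkout_total": {
        "requirement": "REQ-112: cart totals must include tax",
        "last_deploy": "build-2041",
    },
}

def triage(failed_test: str) -> str:
    """Trace a failing test to the requirement it guards and the deploy that changed it."""
    info = test_index[failed_test]
    return f"{failed_test} guards '{info['requirement']}' (last changed in {info['last_deploy']})"

print(triage("test_checkout_total"))
```

&lt;p&gt;The value isn’t in the data structure itself but in agreeing that every test, requirement and deployment carries an identifier the other tools can resolve.&lt;/p&gt;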

&lt;p&gt;These combined changes will help introduce a QE philosophy, leading to more predictability, less firefighting, and less last-minute testing while running up against tight deadlines. Ultimately, this means higher product quality and faster release times.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway: Quality is not a phase — it should be everywhere&lt;/strong&gt;&lt;br&gt;
Quality isn't something you tack on at the end of development — it needs to be part of everything you do. QE makes sure everyone on the team thinks about quality, while the people who actually build and run the code take responsibility for making it work properly. You'll still need specialist testers for the tricky exploratory work that automation can't handle.&lt;br&gt;
The shift from QA to QE is really about changing how you think and work, not just buying new tools. Whether you're a small startup or a large company, QE can work for you if you're willing to adapt it to your situation. Get the mindset right first, then invest in the tools that support it.&lt;/p&gt;

</description>
      <category>qa</category>
      <category>softwareengineering</category>
      <category>softwaredevelopment</category>
      <category>testing</category>
    </item>
    <item>
      <title>Why shift-left testing is vital for effective software</title>
      <dc:creator>Andrian Budantsov</dc:creator>
      <pubDate>Thu, 21 Aug 2025 18:01:51 +0000</pubDate>
      <link>https://forem.com/abudantsov/why-shift-left-testing-is-vital-for-effective-software-4893</link>
      <guid>https://forem.com/abudantsov/why-shift-left-testing-is-vital-for-effective-software-4893</guid>
      <description>&lt;p&gt;Software development is often described as a left-to-right timeline: you start with an idea, move through design and prototyping, then build production code. In many teams, testing still happens only at the very end of this timeline. If a critical bug slips through late testing, it can end up in customers’ hands — forcing expensive fixes and risking public reputational damage.&lt;/p&gt;

&lt;p&gt;By introducing testing earlier in the software development life cycle, any problems that are found will be cheaper to fix. Addressing a problem in the user interface is less time-consuming if it’s identified at the design stage, rather than having to undo hours of development work when the issue is not found until the production code has gone through several iterations. &lt;/p&gt;

&lt;p&gt;The idea of ‘shift-left’ testing is that moving the test processes to the left of this axis can save time, money and embarrassment for a software company. Here I will explain how teams can embrace the shift-left philosophy and why it’s so important for building effective software applications. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why ‘shift-left’ matters for modern testing&lt;/strong&gt;&lt;br&gt;
Shift-left testing describes the practice of moving testing activities earlier in the software development lifecycle, instead of waiting until the final stages. While often associated with agile methods, the roots of shift-left thinking go back much further – for example, Barry Boehm’s V-model from the late 1970s emphasized verifying and validating requirements and design early to avoid costly rework later.&lt;/p&gt;

&lt;p&gt;That’s why the maxim ‘test early and often’ is repeated so much in software teams. In reality, testing is still frequently done too late and too little, because resources are always limited. Ideally, you would validate after every stage, but that isn’t always realistic. The key point is to test as early as possible, because the earlier you catch defects, the cheaper and easier they are to fix.&lt;/p&gt;

&lt;p&gt;However, companies need to be certain that when they make changes to the codebase they don’t introduce bugs that break something that was previously working well. Regression testing is the process of making sure that everything that worked before still works after new features are added or updates are made. &lt;/p&gt;

&lt;p&gt;Regression testing is a necessary part of any serious software development process, whether agile or not. It helps ensure that previously working functionality still works after new updates. If you need a reminder of just how critical it is, think back to the CrowdStrike update in &lt;a href="https://homeland.house.gov/2024/07/22/chairmen-green-garbarino-request-public-testimony-from-crowdstrike-ceo-following-global-it-outage/" rel="noopener noreferrer"&gt;July 2024&lt;/a&gt; that impacted millions of Windows systems worldwide with considerable disruption to businesses and services.  &lt;/p&gt;
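
&lt;p&gt;As a minimal sketch of the idea (the pricing function and its tests below are hypothetical, invented purely for illustration), a regression suite pins down behaviour that already works, so that a later change which breaks it fails immediately:&lt;/p&gt;

```python
# Hypothetical example: a pricing function and the regression tests
# that pin down its existing behaviour.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_regressions():
    # These assertions capture behaviour that already works today.
    # If a future refactor changes any of these results, the suite
    # fails before the change reaches users.
    assert apply_discount(100.0, 10) == 90.0    # basic case
    assert apply_discount(100.0, 0) == 100.0    # no discount
    assert apply_discount(80.0, 25) == 60.0     # another known-good result
    try:
        apply_discount(100.0, 150)              # invalid input still rejected
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount_regressions()
```

&lt;p&gt;Whenever the function is refactored or extended, re-running the same suite confirms that the existing behaviour has survived the change.&lt;/p&gt;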

&lt;p&gt;&lt;strong&gt;How to adopt the shift-left approach&lt;/strong&gt;&lt;br&gt;
To adopt shift-left effectively, companies should first analyze where problems tend to surface today. Then they can move targeted parts of their testing earlier, depending on their challenges. For instance, if customers struggle after deployment, shift more testing before release. If final testing is delaying deployment, shift testing into the development phase. If too much rework happens after development, move usability or UI testing into the prototype stage. &lt;/p&gt;

&lt;p&gt;Shift-left isn’t one-size-fits-all: it’s about strategically moving the right tests earlier in the cycle to prevent waste and defects. This starts with developers writing comprehensive unit and integration tests, complemented by automated API tests and static code analysis running in the CI/CD pipeline.&lt;/p&gt;
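
&lt;p&gt;As an illustration of what that can look like (a hypothetical GitHub Actions workflow; the job name, tool choices and commands are examples rather than a prescription), the pipeline can run static analysis and the test suite on every push:&lt;/p&gt;

```yaml
# Hypothetical CI workflow: unit tests and static analysis run on
# every push, so defects surface before review, not after release.
name: shift-left-checks
on: [push, pull_request]

jobs:
  test-and-lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Static analysis
        run: |
          pip install ruff
          ruff check .
      - name: Unit and integration tests
        run: |
          pip install -r requirements.txt pytest
          pytest tests/ --maxfail=1
```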

&lt;p&gt;But for shift-left testing to be effective, the testing itself has to be well-organised and managed.&lt;/p&gt;

&lt;p&gt;Test case creation is one of the most important facets of software testing. This requires Quality Assurance (QA) teams to assess the most appropriate tests for the functionalities of the software, and to decide at which stage these tests should be run. For example, performance testing under heavy load or cybersecurity checks aren’t feasible to run at the design stage, so there’s a limit to how far to the left these can be shifted. &lt;/p&gt;

&lt;p&gt;Test case creation requires specialist expertise. Well-written test cases reduce the risk of bugs and ensure the end product is of a high quality. They provide testers with detailed instructions for running tests and list expected outcomes, ensuring the integrity and reliability of the testing process. &lt;/p&gt;

&lt;p&gt;The number of potential test cases rises exponentially with the complexity of the application, so experienced QA teams must judge accurately which tests are necessary to run and which aren’t, keeping the process efficient. &lt;/p&gt;
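
&lt;p&gt;One common way to keep a growing set of test cases manageable is to write them as data: each case pairs an input with its expected outcome, and a single check runs them all. A minimal sketch, with a hypothetical validation function standing in for the real functionality under test:&lt;/p&gt;

```python
# Hypothetical example: test cases written as data, each pairing an
# input with its expected outcome, so coverage is easy to review
# and extend without duplicating test logic.

def is_valid_username(name: str) -> bool:
    """Accept usernames of 3-20 alphanumeric characters."""
    return 3 <= len(name) <= 20 and name.isalnum()

# Each tuple is one test case: (description, input, expected outcome).
TEST_CASES = [
    ("typical name",      "alice42",   True),
    ("too short",         "ab",        False),
    ("too long",          "a" * 21,    False),
    ("illegal character", "bob smith", False),
]

def run_test_cases():
    for description, value, expected in TEST_CASES:
        actual = is_valid_username(value)
        assert actual == expected, f"{description}: got {actual}"

run_test_cases()
```

&lt;p&gt;Adding a new case is then a one-line change to the data table, which keeps the suite readable even as the case count grows.&lt;/p&gt;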

&lt;p&gt;&lt;strong&gt;Testing must be well-organised&lt;/strong&gt;&lt;br&gt;
Once the test cases have been written and there’s an agreement about when they should be carried out – and when regression testing is required – teams must also set out clear roles so everyone knows who is responsible for testing what. &lt;/p&gt;

&lt;p&gt;Ideally, testing should involve people who can stay objective — whether that’s QA engineers, SDETs, or even other developers — since testing one’s own code can be challenging from a psychological and impartiality perspective. &lt;/p&gt;

&lt;p&gt;While there can be an element of automation in the process of testing – and even in test case creation – there must always be human oversight, with clear accountability. &lt;/p&gt;

&lt;p&gt;And it’s also important that the lines of communication between the QA teams and developers are strong. Many software companies rely on spreadsheets, emails and even Post-It notes; these do not make for efficient workflows and can cause more problems than they solve. If an email containing important information about a bug goes astray, or if someone is working in the wrong version of an Excel file, tests might be unnecessarily duplicated or missed altogether.&lt;/p&gt;

&lt;p&gt;That’s why it’s essential to use dedicated test management tools to ensure the testing process is as streamlined as possible. Managers get a comprehensive view of progress, and test results are fed back to the right people at the right time, keeping everyone on track and working together. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Effective test management is vital for the shift-left approach to work&lt;/strong&gt;&lt;br&gt;
The shift-left approach improves efficiency and quality by integrating testing into every stage of development. But as testing becomes a distributed activity — with developers running automated unit tests, pipelines executing integration checks, and QA specialists performing exploratory sessions — it generates signals from dozens of disconnected sources. Without a central hub, this creates noise, not clarity.&lt;/p&gt;

&lt;p&gt;This is where a modern test management approach, enabled by dedicated tools, becomes the backbone of quality. It provides the single source of truth that aggregates results from both automated pipelines and manual testing efforts. This ensures that a passed unit test isn't mistaken for full feature validation and that critical insights from human testers are tracked with the same rigor as automated checks. ‘Test early and often’ is a powerful mantra, but it’s only truly effective when the entire quality story can be seen, managed, and understood from a single, cohesive viewpoint.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>software</category>
      <category>qa</category>
    </item>
  </channel>
</rss>
