<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kelina Cowell</title>
    <description>The latest articles on Forem by Kelina Cowell (@kelina_cowell_qa).</description>
    <link>https://forem.com/kelina_cowell_qa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3673033%2Fca775476-db01-4f36-9e72-45d04cfb654b.jpg</url>
      <title>Forem: Kelina Cowell</title>
      <link>https://forem.com/kelina_cowell_qa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kelina_cowell_qa"/>
    <language>en</language>
    <item>
      <title>Regression testing workflow: the risk-first checks that keep releases stable</title>
      <dc:creator>Kelina Cowell</dc:creator>
      <pubDate>Mon, 29 Dec 2025 09:00:00 +0000</pubDate>
      <link>https://forem.com/kelina_cowell_qa/regression-testing-workflow-the-risk-first-checks-that-keep-releases-stable-3ed1</link>
      <guid>https://forem.com/kelina_cowell_qa/regression-testing-workflow-the-risk-first-checks-that-keep-releases-stable-3ed1</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Portfolio version (canonical, with full context and styling):&lt;br&gt;&lt;br&gt;
&lt;a href="https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/articles/regression-testing.html" rel="noopener noreferrer"&gt;https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/articles/regression-testing.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workflow shown:&lt;/strong&gt; risk-first regression scoping → golden-path baseline → targeted probes → evidence-backed results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example context:&lt;/strong&gt; &lt;em&gt;Sworn&lt;/em&gt; on PC Game Pass (Windows), used only as a real-world backing example.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build context:&lt;/strong&gt; tested on the PC Game Pass build &lt;code&gt;1.01.0.1039&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scope driver:&lt;/strong&gt; public SteamDB patch notes used as an external change signal (no platform parity assumed).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outputs:&lt;/strong&gt; a regression matrix with line-by-line outcomes, session timestamps, and bug tickets with evidence.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyk14boesls9zmat3xvt.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyk14boesls9zmat3xvt.webp" alt="Regression testing workflow diagram for live builds: change introduced, risk based scope, targeted regression checks, behaviour verification, and outputs (defects, confirmation notes, evidence, re test results)." width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Regression testing flow used to verify stability after change during a timeboxed Sworn (PC) pass.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Regression testing scope: what I verified and why
&lt;/h2&gt;

&lt;p&gt;This article is grounded in a self-directed portfolio regression pass on &lt;strong&gt;Sworn&lt;/strong&gt; using the &lt;strong&gt;PC Game Pass (Windows)&lt;/strong&gt; build &lt;code&gt;1.01.0.1039&lt;/code&gt;, run in a one-week solo timebox.&lt;/p&gt;

&lt;p&gt;Scope was &lt;strong&gt;change-driven&lt;/strong&gt; and &lt;strong&gt;risk-based&lt;/strong&gt;: golden-path stability (launch → play → quit → relaunch), Save and Continue integrity, core menus, audio sanity, input handover, plus side-effect probes suggested by upstream patch notes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No parity between the Steam and Game Pass builds is assumed or claimed.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What regression testing is (in practice)
&lt;/h2&gt;

&lt;p&gt;For me, regression testing is simple: after a change, does existing behaviour still hold?&lt;/p&gt;

&lt;p&gt;Not “re-test everything”, and not “run a checklist because that’s what we do”.&lt;/p&gt;

&lt;p&gt;A regression pass is selective by design. Coverage is driven by risk:&lt;br&gt;
what is most likely to have been impacted, what is most expensive if broken, and what must remain stable for the build to be trusted.&lt;/p&gt;
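&lt;p&gt;As a rough illustration of that triage, the three questions can be folded into a toy score (the weights and candidate areas below are illustrative assumptions, not a formal model from this pass):&lt;/p&gt;

```python
# Toy risk-scoring sketch for regression scoping. The inputs, weights,
# and candidate areas are illustrative assumptions only.
def risk_score(change_proximity, failure_cost, trust_critical):
    # change_proximity and failure_cost are rough 0-3 ratings;
    # trust-critical areas always sort to the top of the scope.
    return change_proximity + failure_cost + (10 if trust_critical else 0)

candidates = [
    ("save and continue", 3, 3, True),
    ("audio runtime", 3, 2, False),
    ("codex UI", 1, 1, False),
]
ranked = sorted(candidates, key=lambda c: risk_score(c[1], c[2], c[3]), reverse=True)
```

&lt;p&gt;However you weight it, the output is the same thing the matrix needs: an ordered scope, with the must-not-break systems first.&lt;/p&gt;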

&lt;h3&gt;
  
  
  Regression testing outputs: pass and fail results with evidence
&lt;/h3&gt;

&lt;p&gt;Clear outcomes: pass or fail, backed by evidence and repeatable verification. Not opinions. Not vibes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Golden path smoke baseline for regression testing
&lt;/h2&gt;

&lt;p&gt;I start every regression cycle with a repeatable golden path smoke because it prevents wasted time. If the baseline is unstable, deeper testing is noise.&lt;/p&gt;

&lt;p&gt;In this Sworn pass, the baseline line was &lt;strong&gt;BL-SMOKE-01&lt;/strong&gt;:&lt;br&gt;
cold launch → main menu → gameplay → quit to desktop → relaunch → main menu.&lt;/p&gt;

&lt;p&gt;I also include a quick sanity listen for audio cutouts during this flow.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some systems absolutely cannot break. Those are the ones you want to verify on every build before spending time on deeper testing.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Conrad Bettmann, QA Manager (Rovio Entertainment)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Why baseline stability matters in regression testing
&lt;/h3&gt;

&lt;p&gt;The golden path includes the most common player actions (launch, play, quit, resume). If those are unstable, you get cascading failures that masquerade as unrelated defects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Regression testing scope: change signals and risk
&lt;/h2&gt;

&lt;p&gt;For this project I used SteamDB patch notes as an &lt;strong&gt;external oracle&lt;/strong&gt;: &lt;strong&gt;SWORN 1.0 Patch #3 (v1.0.3.1111), 13 Nov 2025&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That does not mean I assumed those changes were present on PC Game Pass.&lt;/p&gt;

&lt;p&gt;Instead, I used the patch notes as a change signal to decide where to probe for side effects on the Game Pass build. This is useful when you have no internal access, no studio data, and no changelog for your platform.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Knowing what changed and where helps you focus regression on affected areas, rather than running very wide checks that probably won’t find anything valuable. It’s usually best to mix multiple oracles instead of relying on one source.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Conrad Bettmann, QA Manager (Rovio Entertainment)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Regression outcomes: pass vs not applicable (with evidence)
&lt;/h3&gt;

&lt;p&gt;SteamDB notes mention a &lt;strong&gt;music cutting out fix&lt;/strong&gt;, so I ran an audio runtime probe (&lt;strong&gt;STEA-103-MUSIC&lt;/strong&gt;) and verified music continuity across combat, pause and unpause, and a level load (&lt;strong&gt;pass&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;SteamDB also mentions a &lt;strong&gt;Dialogue Volume&lt;/strong&gt; slider. On the Game Pass build that control was not present, so the check was recorded as &lt;strong&gt;not applicable&lt;/strong&gt; with evidence of absence (&lt;strong&gt;STEA-103-AVOL&lt;/strong&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  How my regression matrix is structured
&lt;/h2&gt;

&lt;p&gt;My Regression Matrix lines are written to be auditable. Each line includes a direct check, a side-effect check, a clear outcome, and an evidence link.&lt;/p&gt;

&lt;p&gt;That keeps results reviewable and prevents “I think it’s fine” reporting.&lt;/p&gt;

&lt;p&gt;Example matrix lines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Baseline smoke:&lt;/strong&gt; BL-SMOKE-01&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Settings persistence:&lt;/strong&gt; BL-SET-01&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Save and Continue integrity:&lt;/strong&gt; BL-SAVE-01&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post-death flow sanity:&lt;/strong&gt; BL-DEATH-01&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audio runtime continuity probe:&lt;/strong&gt; STEA-103-MUSIC&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audio settings presence check:&lt;/strong&gt; STEA-103-AVOL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Codex and UI navigation sanity:&lt;/strong&gt; STEA-103-CODEX&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input handover plus hot-plug:&lt;/strong&gt; BL-IO-01&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alt+Tab sanity:&lt;/strong&gt; BL-ALT-01&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhancement spend plus ownership persistence:&lt;/strong&gt; BL-ECON-01&lt;/li&gt;
&lt;/ul&gt;
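&lt;p&gt;For illustration, one matrix line could be modelled like this (the field names and verdict set are hypothetical, not my actual workbook schema):&lt;/p&gt;

```python
# Hypothetical sketch of one auditable regression-matrix line.
# Field names are illustrative, not the author's workbook schema.
REQUIRED_FIELDS = ("id", "direct_check", "side_effect_check", "outcome", "evidence")
VERDICTS = {"pass", "fail", "not applicable", "blocked"}

def is_auditable(line):
    """A line counts as auditable only if every field is filled and the
    outcome is an explicit verdict rather than free text."""
    filled = all(line.get(field) for field in REQUIRED_FIELDS)
    return filled and line["outcome"] in VERDICTS

example_line = {
    "id": "STEA-103-MUSIC",
    "direct_check": "music stays continuous across combat, pause, level load",
    "side_effect_check": "no SFX dropouts during the same transitions",
    "outcome": "pass",
    "evidence": "session S3 clip and timestamp",
}
```

&lt;p&gt;The point is the shape: no line counts as a result unless every field, including evidence, is filled.&lt;/p&gt;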

&lt;h2&gt;
  
  
  Save and Continue regression testing: anchors, not vibes
&lt;/h2&gt;

&lt;p&gt;Save and Continue flows are a classic regression risk area because failures can look intermittent. To reduce ambiguity, I verify using anchors.&lt;/p&gt;

&lt;p&gt;In this pass (&lt;strong&gt;BL-SAVE-01&lt;/strong&gt;), I anchored:&lt;br&gt;
room splash name (&lt;em&gt;Wirral Forest&lt;/em&gt;), health bucket (&lt;em&gt;60/60&lt;/em&gt;), weapon type (&lt;em&gt;sword&lt;/em&gt;), and the start of objective text.&lt;/p&gt;

&lt;p&gt;I then verified those anchors after &lt;strong&gt;menu Continue&lt;/strong&gt; and after a &lt;strong&gt;full relaunch&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Outcome: pass, anchors matched throughout (session S2).&lt;/p&gt;
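&lt;p&gt;A minimal sketch of the anchor comparison (the anchor values mirror BL-SAVE-01 above; the objective text is a placeholder here, since only its start is anchored):&lt;/p&gt;

```python
# Minimal sketch of anchor-based resume verification. Anchor values
# mirror the BL-SAVE-01 example; "objective_prefix" is a placeholder.
expected_anchors = {
    "room_splash": "Wirral Forest",
    "health": "60/60",
    "weapon": "sword",
    "objective_prefix": "placeholder objective text",
}

def mismatched_anchors(observed, expected):
    """Return the anchors that differ after Continue or relaunch;
    an empty list means the resume check passes."""
    return [key for key in expected if observed.get(key) != expected[key]]
```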

&lt;h3&gt;
  
  
  Why anchors make regression results repeatable
&lt;/h3&gt;

&lt;p&gt;“Continue worked” is not useful if someone else cannot verify what you resumed into. Anchors turn “seems fine” into a repeatable verification result.&lt;/p&gt;

&lt;h2&gt;
  
  
  QA evidence for regression testing: what I capture and why
&lt;/h2&gt;

&lt;p&gt;For regression, evidence matters for passes as much as failures. A pass is still a claim.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Video clips:&lt;/strong&gt; show input, timing, and outcome together (ideal for flow and audio checks).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Screenshots:&lt;/strong&gt; support UI state, menu presence and absence, and bug clarity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session timestamps:&lt;/strong&gt; keep verification reviewable without scrubbing long recordings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment notes:&lt;/strong&gt; platform, build, input devices, cloud saves enabled.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the evidence cannot answer what was done, what happened, and what should have happened, it is not evidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Regression testing examples from the Sworn pass
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Example regression bug: Defeat overlay blocks the Stats screen (SWOR-6)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bug:&lt;/strong&gt; &lt;em&gt;[PC][UI][Flow] Defeat overlay blocks Stats; Continue starts a new run&lt;/em&gt; (&lt;strong&gt;SWOR-6&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expectation:&lt;/strong&gt; after Defeat, pressing Continue reveals the full Stats screen in the foreground and waits for player confirmation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actual:&lt;/strong&gt; Defeat stays in the foreground, Stats renders underneath with a loading icon, then a new run starts automatically. Outcome: you cannot review Stats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repro rate:&lt;/strong&gt; 3/3, observed during progression verification (S2) and reconfirmed in a dedicated re-test (S6).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Patch note probe example: music continuity check (STEA-103-MUSIC)
&lt;/h3&gt;

&lt;p&gt;SteamDB notes mention a fix for music cutting out, so I ran &lt;strong&gt;STEA-103-MUSIC&lt;/strong&gt;:&lt;br&gt;
10 minutes runtime with combat transitions, plus pause and unpause and a level load.&lt;/p&gt;

&lt;p&gt;Outcome: pass, music stayed continuous across those transitions (S3).&lt;/p&gt;

&lt;h3&gt;
  
  
  Evidence backed not applicable example: missing Dialogue Volume slider (STEA-103-AVOL)
&lt;/h3&gt;

&lt;p&gt;SteamDB notes mention a Dialogue Volume slider, but on the Game Pass build the Audio menu only showed Master, Music, and SFX.&lt;/p&gt;

&lt;p&gt;Outcome: &lt;strong&gt;not applicable&lt;/strong&gt; with evidence of absence (&lt;strong&gt;STEA-103-AVOL&lt;/strong&gt;, S4). This avoids inventing parity and keeps the matrix honest.&lt;/p&gt;

&lt;h3&gt;
  
  
  Accessibility issues logged as a known cluster (no new build to re-test)
&lt;/h3&gt;

&lt;p&gt;On Day 0 (S0), I captured onboarding accessibility issues as a known cluster (&lt;strong&gt;B-A11Y-01&lt;/strong&gt;: SWOR-1, SWOR-2, SWOR-3, SWOR-4). Because no newer build shipped during the week, the regression re-test is logged as not applicable until a new build exists. This is recorded explicitly rather than implied.&lt;/p&gt;

&lt;h3&gt;
  
  
  Results snapshot (for transparency)
&lt;/h3&gt;

&lt;p&gt;In this backing pass, the matrix recorded: &lt;strong&gt;8 pass&lt;/strong&gt;, &lt;strong&gt;1 fail&lt;/strong&gt;, &lt;strong&gt;1 not applicable&lt;/strong&gt;, plus &lt;strong&gt;1 known accessibility cluster&lt;/strong&gt; captured on Day 0 with no newer build available for re-test.&lt;/p&gt;

&lt;p&gt;Counts are included here for context, not as the focus of the article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Regression testing takeaways (risk, evidence, and verification)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Regression testing is change-driven verification, not “re-test everything”.&lt;/li&gt;
&lt;li&gt;A repeatable golden path baseline stops you wasting time on an unstable build.&lt;/li&gt;
&lt;li&gt;External patch notes can be used as a risk signal without assuming platform parity.&lt;/li&gt;
&lt;li&gt;Anchors make progression and resume verification credible and repeatable.&lt;/li&gt;
&lt;li&gt;Not applicable is a valid outcome if it is evidenced, not hand-waved.&lt;/li&gt;
&lt;li&gt;Pass results deserve evidence too, because they are still claims.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Regression testing FAQ (manual QA)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Is regression testing just re-testing old bugs?
&lt;/h3&gt;

&lt;p&gt;No. Regression testing verifies that existing behaviour still works after change. It covers previously working systems, whether or not bugs were ever logged against them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do you need to re-test everything in regression?
&lt;/h3&gt;

&lt;p&gt;No. Effective regression testing is selective. Scope is driven by change and risk, not by feature count.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do you scope regression without internal patch notes?
&lt;/h3&gt;

&lt;p&gt;By using external change signals such as public patch notes, previous builds, and observed behaviour as oracles, without assuming platform parity.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s the difference between regression and exploratory testing?
&lt;/h3&gt;

&lt;p&gt;Regression testing verifies known behaviour after change. Exploratory testing searches for unknown risk and emergent failure modes. They complement each other but answer different questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is a pass result meaningful in regression testing?
&lt;/h3&gt;

&lt;p&gt;Yes. A pass is still a claim. That’s why regression passes should be supported with evidence, not just a checkbox.&lt;/p&gt;

&lt;h3&gt;
  
  
  When is not applicable a valid regression outcome?
&lt;/h3&gt;

&lt;p&gt;When a feature is not present on the build under test and that absence is confirmed with evidence. Logging this explicitly is more honest than assuming parity or skipping the check silently.&lt;/p&gt;




&lt;h2&gt;
  
  
  Evidence and case study links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Sworn regression case study (full artefacts and evidence):&lt;br&gt;&lt;br&gt;
&lt;a href="https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/projects/sworn/" rel="noopener noreferrer"&gt;https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/projects/sworn/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SteamDB patch notes used as external oracle (SWORN 1.0 Patch #3, v1.0.3.1111):&lt;br&gt;&lt;br&gt;
&lt;a href="https://steamdb.info/patchnotes/20786520/" rel="noopener noreferrer"&gt;https://steamdb.info/patchnotes/20786520/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This dev.to post stays focused on the regression workflow. The case study links out to the workbook tabs (Regression Matrix, Sessions Log, Bug Log) and evidence clips.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>testing</category>
      <category>qualityassurance</category>
      <category>ux</category>
    </item>
    <item>
      <title>Exploratory testing on mobile: the messy checks that find real bugs</title>
      <dc:creator>Kelina Cowell</dc:creator>
      <pubDate>Mon, 22 Dec 2025 09:00:00 +0000</pubDate>
      <link>https://forem.com/kelina_cowell_qa/exploratory-testing-on-mobile-the-messy-checks-that-find-real-bugs-2ldg</link>
      <guid>https://forem.com/kelina_cowell_qa/exploratory-testing-on-mobile-the-messy-checks-that-find-real-bugs-2ldg</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Portfolio version (canonical, with full context and styling):&lt;br&gt;&lt;br&gt;
&lt;a href="https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/articles/exploratory-testing.html" rel="noopener noreferrer"&gt;https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/articles/exploratory-testing.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it is:&lt;/strong&gt; risk-driven exploratory sessions where design, execution, and analysis happen together.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Platform context:&lt;/strong&gt; mobile (Android), where interruptions and device state changes are normal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timebox:&lt;/strong&gt; short focused sessions, not long wandering playthroughs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Approach:&lt;/strong&gt; charters, controlled variation, observation-led decisions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outputs:&lt;/strong&gt; defects and observations that explain behaviour, with enough context to reproduce.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdze7sv90n7qagsql4y1t.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdze7sv90n7qagsql4y1t.webp" alt="Flow diagram of a mobile exploratory test session: test charter → timebox (20 to 45 minutes) → controlled variation → outputs: defects, context notes, bug reports, evidence." width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Exploratory testing on mobile in practice: chartered, timeboxed sessions with controlled variation, producing defects, context notes, bug reports, and evidence.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About this article
&lt;/h2&gt;

&lt;p&gt;Exploratory testing is often summarised as “testing without scripts”. In real mobile QA work, that description is incomplete.&lt;/p&gt;

&lt;p&gt;This article explains &lt;strong&gt;exploratory testing on mobile&lt;/strong&gt; as it is actually applied in a practical workflow: &lt;strong&gt;session structure&lt;/strong&gt;, &lt;strong&gt;risk focus&lt;/strong&gt;, &lt;strong&gt;interruptions and recovery&lt;/strong&gt;, and how this approach consistently finds issues that scripted checks often miss.&lt;/p&gt;

&lt;p&gt;Examples are drawn from a real Android mobile game pass, but the focus here is the &lt;strong&gt;method&lt;/strong&gt;, not the case study.&lt;/p&gt;

&lt;h2&gt;
  
  
  What exploratory testing actually means
&lt;/h2&gt;

&lt;p&gt;In practice, exploratory testing is a way of working where test design, execution, and analysis happen together.&lt;/p&gt;

&lt;p&gt;You are not following a pre-written script. You are observing behaviour and choosing the next action based on risk, evidence, and what the product is doing right now.&lt;/p&gt;

&lt;p&gt;That does not mean “random testing”. It means structured freedom: you keep a clear intent, and you keep your changes controlled so outcomes remain interpretable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why exploratory testing matters on mobile
&lt;/h2&gt;

&lt;p&gt;Mobile products rarely fail under perfect conditions. They fail when something changes unexpectedly. On Android especially, many failure modes are contextual and lifecycle-driven.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alarms, calls, and notifications interrupt active flows.&lt;/li&gt;
&lt;li&gt;Apps are backgrounded and resumed repeatedly.&lt;/li&gt;
&lt;li&gt;Network quality changes during critical moments (login, purchase, reward claim).&lt;/li&gt;
&lt;li&gt;UI must remain usable on small screens and unusual aspect ratios.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Applied insight:&lt;/strong&gt; For mobile exploration, compare performance across devices where possible and probe interruptions: lock screen, phone calls, network drops, switching Wi-Fi/data, rotation, and kill/restart recovery.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Radu Posoi, Founder, AlkoTech Labs (ex Ubisoft QA Lead)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Exploratory sessions target these risks directly instead of assuming a clean uninterrupted journey.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploratory testing workflow in practice
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Exploratory test charters, not scripts
&lt;/h3&gt;

&lt;p&gt;Sessions start with a charter: a short statement of intent.&lt;/p&gt;

&lt;p&gt;For example, “Explore reward claim behaviour under interruptions” or “Explore recovery after network loss”.&lt;/p&gt;

&lt;p&gt;The charter defines &lt;strong&gt;focus&lt;/strong&gt;, not steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Timeboxed exploratory testing sessions
&lt;/h3&gt;

&lt;p&gt;Exploratory testing works best in short sessions. Timeboxing forces prioritisation and prevents unfocused wandering.&lt;/p&gt;

&lt;p&gt;Typical sessions range from &lt;strong&gt;20 to 45 minutes&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Applied insight:&lt;/strong&gt; Before you go deep, verify the basics first. A short daily smoke test protects the golden path, so deeper exploratory work is not wasted rediscovering obvious breakage.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Nathan Glatus, ex Senior QA / Game Integrity Analyst (Fortnite, ex Epic Games)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Controlled variation: one variable at a time
&lt;/h3&gt;

&lt;p&gt;Rather than changing everything at once, one variable is altered at a time: lock state, network type, lifecycle state.&lt;/p&gt;

&lt;p&gt;This keeps results interpretable and defects reproducible.&lt;/p&gt;
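&lt;p&gt;The idea can be sketched as a generator that produces one-variable variations of a baseline session (the variable names here are illustrative):&lt;/p&gt;

```python
# Sketch of controlled variation: generate session configurations that
# differ from the baseline in exactly one variable, so any new failure
# is attributable to that variable. Names are illustrative.
def one_variable_at_a_time(baseline, variations):
    for key, values in variations.items():
        for value in values:
            if value != baseline[key]:
                config = dict(baseline)
                config[key] = value
                yield key, config

baseline = {"lock_state": "unlocked", "network": "wifi", "lifecycle": "foreground"}
variations = {
    "lock_state": ["locked"],
    "network": ["mobile data", "offline"],
    "lifecycle": ["backgrounded", "killed and restarted"],
}
```

&lt;p&gt;Each generated run changes exactly one thing, which is what makes an observed failure interpretable.&lt;/p&gt;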

&lt;h3&gt;
  
  
  Exploratory testing session checklist (charter, timebox, evidence)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Charter chosen (risk and focus)&lt;/li&gt;
&lt;li&gt;Timebox set (20 to 45 mins)&lt;/li&gt;
&lt;li&gt;Variables defined (one at a time)&lt;/li&gt;
&lt;li&gt;Notes captured live&lt;/li&gt;
&lt;li&gt;Evidence captured when it happens&lt;/li&gt;
&lt;li&gt;Bug report drafted while context is fresh&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Common mobile bugs found with exploratory testing
&lt;/h2&gt;

&lt;p&gt;Exploratory testing is effective at surfacing issues that are low-frequency but high-impact, especially on mobile.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Soft locks where the UI appears responsive but progression is blocked.&lt;/li&gt;
&lt;li&gt;State inconsistencies after backgrounding or relaunch.&lt;/li&gt;
&lt;li&gt;Audio or visual desynchronisation after OS-level events.&lt;/li&gt;
&lt;li&gt;UI scaling or readability problems that only appear in specific contexts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Android exploratory testing example: reward claim soft lock
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; reward claim flow under interruptions (Android).&lt;/p&gt;

&lt;p&gt;During an exploratory session, repeatedly backgrounding and resuming the app while a reward flow was mid-animation triggered a soft lock: the UI stayed visible, but the claim state never completed, blocking progression.&lt;/p&gt;

&lt;p&gt;This did not appear during clean uninterrupted smoke testing because the trigger was lifecycle timing and state recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; it is normal user behaviour on mobile, not a rare edge case. Exploratory sessions hit it because they are designed to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bug reporting for exploratory testing: notes and evidence
&lt;/h2&gt;

&lt;p&gt;Because exploratory testing is adaptive, notes and evidence matter more than in scripted runs. Findings must be supported with enough context to reproduce and diagnose.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Applied insight:&lt;/strong&gt; High impact exploratory bugs live or die by their evidence. Capture context (client and device state), include frequency (for example 3/3 or 10/13), and attach a clear repro so the issue is actionable.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Nathan Glatus, ex Senior QA / Game Integrity Analyst (Fortnite, ex Epic Games)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Screen recordings captured during the session, not recreated later.&lt;/li&gt;
&lt;li&gt;Notes that include context, not just actions (device state, network, lifecycle transitions).&lt;/li&gt;
&lt;li&gt;Bug reports that clearly separate expected behaviour from actual behaviour.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is to make exploratory findings actionable, not anecdotal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploratory testing skills shown in this mobile pass
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Risk-based testing decisions&lt;/li&gt;
&lt;li&gt;Test charter creation and execution&lt;/li&gt;
&lt;li&gt;Defect analysis and clear bug reporting&lt;/li&gt;
&lt;li&gt;Reproduction step clarity under variable conditions&lt;/li&gt;
&lt;li&gt;Evidence-led communication&lt;/li&gt;
&lt;li&gt;Mobile UI and interaction awareness&lt;/li&gt;
&lt;li&gt;Device and network variation testing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key takeaways for mobile QA
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Exploratory testing is structured, not random.&lt;/li&gt;
&lt;li&gt;Mobile risk is contextual, not just functional.&lt;/li&gt;
&lt;li&gt;Interruptions and recovery deserve dedicated exploration.&lt;/li&gt;
&lt;li&gt;Good notes and evidence make exploratory work credible and actionable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Exploratory testing FAQ (mobile QA)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How do you stop exploratory testing becoming random wandering?
&lt;/h3&gt;

&lt;p&gt;By using a clear charter, a strict timebox, and controlled variation. If you can’t explain what you were trying to learn in that session, the charter is too vague.&lt;/p&gt;

&lt;h3&gt;
  
  
  What do you write down during an exploratory session?
&lt;/h3&gt;

&lt;p&gt;The variables that matter for reproduction: device state, network, lifecycle transitions, and what changed between attempts. Notes should capture context, not just button presses.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do you reproduce a bug found through exploration?
&lt;/h3&gt;

&lt;p&gt;First, reduce the scenario to the smallest set of steps that still triggers the issue. Then rerun it while changing one variable at a time until the trigger conditions are clear.&lt;/p&gt;
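&lt;p&gt;The reduction step can be sketched as a greedy loop (the &lt;code&gt;still_fails&lt;/code&gt; predicate stands in for manually re-running the candidate steps; this is an illustration, not a tool from the session):&lt;/p&gt;

```python
# Greedy reduction sketch: try dropping one step at a time and keep the
# drop whenever the bug still reproduces. "still_fails" stands in for a
# manual re-run of the candidate steps.
def reduce_steps(steps, still_fails):
    current = list(steps)
    index = 0
    while index != len(current):
        candidate = current[:index] + current[index + 1:]
        if candidate and still_fails(candidate):
            current = candidate      # the step was not needed
        else:
            index += 1               # the step is part of the trigger
    return current
```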

&lt;h3&gt;
  
  
  What makes mobile exploratory testing different from PC or console?
&lt;/h3&gt;

&lt;p&gt;Mobile failure modes are often lifecycle and OS-driven: backgrounding, notifications, lock/unlock, network switching, permissions, battery and performance constraints. Normal user behaviour creates timing and recovery issues that clean runs will miss.&lt;/p&gt;




&lt;h2&gt;
  
  
  Evidence and case study links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Rebel Racing: Charter-based Exploratory &amp;amp; Edge-Case Testing (full artefacts and evidence):&lt;br&gt;&lt;br&gt;
&lt;a href="https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/projects/rebel-racing/" rel="noopener noreferrer"&gt;https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/projects/rebel-racing/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;QA Chronicles Issue 2: Rebel Racing:&lt;br&gt;&lt;br&gt;
&lt;a href="https://kelinacowellqa.github.io/QA-Chronicles-Kelina-Cowell/issues/issue-02-rebel-racing" rel="noopener noreferrer"&gt;https://kelinacowellqa.github.io/QA-Chronicles-Kelina-Cowell/issues/issue-02-rebel-racing&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This dev.to post stays focused on the workflow. The case study links out to the workbook structure, runs, and evidence.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>ux</category>
      <category>testing</category>
      <category>qualityassurance</category>
    </item>
    <item>
      <title>Functional testing: the boring basics that catch real bugs</title>
      <dc:creator>Kelina Cowell</dc:creator>
      <pubDate>Sun, 21 Dec 2025 21:49:54 +0000</pubDate>
      <link>https://forem.com/kelina_cowell_qa/functional-testing-the-boring-basics-that-catch-real-bugs-1ion</link>
      <guid>https://forem.com/kelina_cowell_qa/functional-testing-the-boring-basics-that-catch-real-bugs-1ion</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Portfolio version (canonical, with full context and styling):&lt;br&gt;&lt;br&gt;
&lt;a href="https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/articles/functional-testing.html" rel="noopener noreferrer"&gt;https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/articles/functional-testing.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it is:&lt;/strong&gt; a functional testing workflow for timeboxed solo passes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backing project:&lt;/strong&gt; &lt;em&gt;Battletoads&lt;/em&gt; on PC (Game Pass), one-week pass (27 Oct to 1 Nov 2025).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Approach:&lt;/strong&gt; validate start-to-control and Pause/Resume first, then expand where risk appears (often input and focus).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outputs:&lt;/strong&gt; pass/fail outcomes, reproducible defect reports, and short evidence recordings supporting each finding.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfa4vnswepnmkg5dmspw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfa4vnswepnmkg5dmspw.webp" alt="Functional testing workflow diagram for a timeboxed manual QA pass: start to first control, timebox, mixed input ownership, and outputs (defects, context notes, bug reports, evidence)." width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Functional testing workflow used during a one-week Battletoads (PC) pass.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Functional testing workflow context and scope
&lt;/h2&gt;

&lt;p&gt;This article is grounded in a self-directed portfolio pass on &lt;strong&gt;Battletoads (PC, Game Pass)&lt;/strong&gt;, build &lt;code&gt;1.1F.42718&lt;/code&gt;, run in a one-week timebox.&lt;/p&gt;

&lt;p&gt;Test focus was &lt;strong&gt;core functional flows&lt;/strong&gt; and &lt;strong&gt;mixed input ownership&lt;/strong&gt; (controller plus keyboard), with short evidence clips captured for reproducibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  What functional testing is (in practice)
&lt;/h2&gt;

&lt;p&gt;For me, functional testing is simple: does the product do what it claims to do, end-to-end, without excuses or interpretation?&lt;/p&gt;

&lt;p&gt;I validate core flows, confirm expected behaviour, and write issues so a developer can reproduce them without guessing.&lt;/p&gt;

&lt;p&gt;The mistake is treating functional testing as “easy” and therefore less valuable. It is the foundation. If the foundation is cracked, everything built on top of it fails in more complicated ways.&lt;/p&gt;

&lt;h3&gt;
  
  
  Functional testing outputs: pass/fail results with evidence
&lt;/h3&gt;

&lt;p&gt;Clear outcomes: pass or fail, backed by evidence and reproducible steps. Not vibes. Not opinions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The two flows I verify first
&lt;/h2&gt;

&lt;p&gt;1) &lt;strong&gt;Start to first control&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The first minute determines whether the product feels broken. If “New Game” does not reliably get you playing, nothing else matters.&lt;br&gt;&lt;br&gt;
In &lt;em&gt;Battletoads&lt;/em&gt;, I validate this from Title into Level 1 and through the first arena transition before expanding scope.&lt;/p&gt;

&lt;p&gt;2) &lt;strong&gt;Pause and Resume&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Pause stresses state, focus, input context, UI navigation, and overlays. If Pause is unstable, you get a stream of defects that look random but are not.&lt;br&gt;&lt;br&gt;
In &lt;em&gt;Battletoads (PC)&lt;/em&gt;, this surfaced early as keyboard and controller routing issues around Pause and Join In.&lt;/p&gt;

&lt;h2&gt;
  
  
  Input ownership testing: controller and keyboard hand-off
&lt;/h2&gt;

&lt;p&gt;Mixed input is a feature, not an edge case. When a controller is connected and a keyboard is used, behaviour must remain predictable.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pause must open consistently.&lt;/li&gt;
&lt;li&gt;Navigation must respect the active input method.&lt;/li&gt;
&lt;li&gt;Confirm and back must not silently stop responding.&lt;/li&gt;
&lt;li&gt;Hand-off must not route to the wrong UI or disable input.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I treat keyboard and controller hand-off as a dedicated test area because it produces high-impact, easily reproducible defects. In this Battletoads pass, mixed input could misroute actions (for example, &lt;strong&gt;Resume opening Join In&lt;/strong&gt;) and temporarily break controller response.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common pattern: mixed input causes menu focus bugs
&lt;/h3&gt;

&lt;p&gt;Controller connected + keyboard input + menu open = focus bugs. Easy to reproduce. Easy to prove. Easy to fix once isolated.&lt;/p&gt;
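
&lt;p&gt;As a sketch (illustrative steps, not a verbatim test case from this pass), the pattern reproduces along these lines:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Setup: controller connected, keyboard present, gameplay reached

1. Press Start on controller   → Pause menu opens
2. Press Enter on keyboard     → expected: highlighted option activates
                                 observed: action misrouted (Resume opened Join In)
3. Press A on controller       → expected: controller still responds
                                 observed: no response until overlay closed

Log: environment, exact key/button sequence, short clip of steps 2 to 3
&lt;/code&gt;&lt;/pre&gt;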

&lt;h2&gt;
  
  
  Bug evidence: what I capture and why
&lt;/h2&gt;

&lt;p&gt;I favour short video clips (10 to 30 seconds) and only use screenshots when they add clarity. The goal is to make the defect obvious without forcing someone to scrub a long recording.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Video:&lt;/strong&gt; shows timing, input, and incorrect outcome together.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Screenshot:&lt;/strong&gt; supports UI state, text, or configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment:&lt;/strong&gt; platform, build/version, input device, display mode.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the evidence cannot answer what was pressed, what happened, and what should have happened, it is not evidence.&lt;/p&gt;
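
&lt;p&gt;An illustrative report skeleton (the field names are mine, not a mandated format) that answers all three questions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Title:       Keyboard Resume opens Join In while controller is connected
Environment: Battletoads 1.1F.42718, PC (Game Pass), controller + keyboard
Steps:
  1. Reach gameplay with controller connected
  2. Open Pause with controller Start
  3. Select Resume with keyboard Enter
Expected:    Gameplay resumes
Actual:      Join In overlay opens; controller unresponsive until the overlay is closed
Evidence:    short clip showing inputs and outcome together
&lt;/code&gt;&lt;/pre&gt;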

&lt;h2&gt;
  
  
  How I timebox a one-week pass
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Day 1:&lt;/strong&gt; Smoke testing and baseline flow validation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Days 2 to 4:&lt;/strong&gt; Execute runs, expand where risk appears, log defects immediately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Days 5 to 6:&lt;/strong&gt; Retest, tighten repro steps, confirm consistency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Day 7:&lt;/strong&gt; Summarise outcomes and document learnings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Practical note: I start each session with a short baseline loop (load, gain control, Pause, resume) before deeper checks. It catches obvious breakage early and prevents wasted time.&lt;/p&gt;
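
&lt;p&gt;The baseline loop itself fits on a sticky note; a sketch of the one used here (timing illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Baseline loop (about 5 minutes):
[ ] Launch → Title screen loads
[ ] Start game → gameplay reached, character responds to input
[ ] Pause opens and navigates
[ ] Resume returns to gameplay with input intact
&lt;/code&gt;&lt;/pre&gt;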

&lt;h2&gt;
  
  
  Testing oracles used for functional verification
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;In-game UI and outcomes:&lt;/strong&gt; observable behaviour of core flows (control, progression, Pause/Resume).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controls menu bindings:&lt;/strong&gt; used as an oracle for expected key behaviour (for example, Esc and Enter bindings).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency across repeated runs:&lt;/strong&gt; behaviour confirmed via reruns to rule out one-off variance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Examples from the Battletoads functional pass
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Example bug pattern: mixed input misroutes Pause and overlays
&lt;/h3&gt;

&lt;p&gt;With a controller connected on PC, Pause opened and closed reliably via controller Start. Keyboard interaction on Pause could be ignored or misrouted into Join In, and in one observed case the controller became unresponsive until the overlay was closed.&lt;/p&gt;

&lt;p&gt;Short evidence clips were captured to show input, timing, and outcome together.&lt;/p&gt;

&lt;h3&gt;
  
  
  Micro-charter: mixed input ownership around Pause and overlays
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Charter:&lt;/strong&gt; Mixed input ownership around Pause and overlays (controller plus keyboard).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Goal:&lt;/strong&gt; Confirm predictable focus, navigation, and confirm/back actions under common PC setups.&lt;/li&gt;
&lt;/ul&gt;
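
&lt;p&gt;Written up in session-notes style (a sketch; the duration and labels are illustrative, not a record of the actual session):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CHARTER:   Mixed input ownership around Pause and overlays (controller + keyboard)
AREAS:     Pause menu, Join In overlay, focus and navigation, confirm/back
DURATION:  short session (about 60 minutes)
NOTES:
  - Pause via controller Start: opened and closed reliably
  - Keyboard input on Pause: ignored or misrouted into Join In (clip captured)
  - Controller unresponsive once while overlay open; recovered on overlay close
&lt;/code&gt;&lt;/pre&gt;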

&lt;h2&gt;
  
  
  Functional testing takeaways (flows, input, evidence)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Functional testing finds high-impact defects early because it targets the systems everything else relies on.&lt;/li&gt;
&lt;li&gt;Pause and overlays are reliable bug generators because they stress state and input routing.&lt;/li&gt;
&lt;li&gt;On PC, mixed input should be treated as a primary scenario, not an edge case.&lt;/li&gt;
&lt;li&gt;Short evidence clips reduce repro ambiguity and speed up triage.&lt;/li&gt;
&lt;li&gt;Repeating the same steps is how “random” issues become diagnosable patterns.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Functional testing FAQ (manual QA)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Is functional testing just basic or beginner testing?
&lt;/h3&gt;

&lt;p&gt;No. Functional testing validates the core systems everything else depends on. When it’s done poorly, teams chase “random” bugs that are actually foundational failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Does functional testing only check happy paths?
&lt;/h3&gt;

&lt;p&gt;No. It starts with happy paths, but expands wherever risk appears. Input ownership, Pause/Resume, and state transitions are common failure points.&lt;/p&gt;

&lt;h3&gt;
  
  
  How is functional testing different from regression testing?
&lt;/h3&gt;

&lt;p&gt;Functional testing validates expected behaviour end-to-end. Regression testing verifies that previously working behaviour still holds after change. They overlap, but they answer different questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is functional testing still relevant if automation exists?
&lt;/h3&gt;

&lt;p&gt;Yes. Automation relies on a correct understanding of expected behaviour. Functional testing establishes that baseline and finds issues automation often misses.&lt;/p&gt;

&lt;h3&gt;
  
  
  What makes a functional bug report “good”?
&lt;/h3&gt;

&lt;p&gt;Clear steps, a clear expected result, a clear actual result, and short evidence that shows input, timing, and outcome together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Evidence and case study links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Battletoads functional case study (full artefacts and evidence):&lt;br&gt;&lt;br&gt;
&lt;a href="https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/projects/battletoads/" rel="noopener noreferrer"&gt;https://kelinacowellqa.github.io/Manual-QA-Portfolio-Kelina-Cowell/projects/battletoads/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;QA Chronicles Issue 1: Battletoads:&lt;br&gt;&lt;br&gt;
&lt;a href="https://kelinacowellqa.github.io/QA-Chronicles-Kelina-Cowell/issues/issue-01-battletoads.html" rel="noopener noreferrer"&gt;https://kelinacowellqa.github.io/QA-Chronicles-Kelina-Cowell/issues/issue-01-battletoads.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This dev.to post stays focused on the workflow. The case study links out to the workbook structure, runs, and evidence.&lt;/p&gt;

</description>
      <category>gametesting</category>
      <category>qualityassurance</category>
      <category>gamedev</category>
      <category>ux</category>
    </item>
  </channel>
</rss>
