<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: James Whitfield</title>
    <description>The latest articles on Forem by James Whitfield (@jwithfield_qa).</description>
    <link>https://forem.com/jwithfield_qa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3882280%2F4687ca7d-bcbc-4e7c-be94-462c168637ff.png</url>
      <title>Forem: James Whitfield</title>
      <link>https://forem.com/jwithfield_qa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jwithfield_qa"/>
    <language>en</language>
    <item>
      <title>How we built PMS reports that actually survive a notified-body audit</title>
      <dc:creator>James Whitfield</dc:creator>
      <pubDate>Thu, 30 Apr 2026 00:18:15 +0000</pubDate>
      <link>https://forem.com/jwithfield_qa/how-we-built-pms-reports-that-actually-survive-a-notified-body-audit-31la</link>
      <guid>https://forem.com/jwithfield_qa/how-we-built-pms-reports-that-actually-survive-a-notified-body-audit-31la</guid>
      <description>&lt;p&gt;I’ve owned post-market surveillance (PMS) for Class II devices through two notified-body audits. The audits weren’t hostile — they were meticulous. The difference between a report that passes and one that generates two follow-up requests is almost entirely in how you show the decision-making trail: what data you looked at, how you evaluated signals, how risk-management and clinical evaluation fed back into product actions.&lt;/p&gt;

&lt;p&gt;Below are the practical things that helped our PMS reports pass without surprise findings. This is grounded in our Class II setup (ISO 13485 + MDR awareness, working with a mid-size notified body), and YMMV for higher-risk classes or other device types.&lt;/p&gt;

&lt;h2&gt;What notified bodies actually check&lt;/h2&gt;

&lt;p&gt;From my experience, auditors look for a few consistent things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Evidence of an active PMS system: documented plan, data sources, frequency, responsibilities.&lt;/li&gt;
&lt;li&gt;Traceability from raw data (complaints, service reports, registries, literature) to findings and decisions.&lt;/li&gt;
&lt;li&gt;Risk-based signal detection and documented rationale for actions (or for no action).&lt;/li&gt;
&lt;li&gt;Links to other QMS processes: complaints, vigilance, CAPA, change control, risk management, clinical evaluation.&lt;/li&gt;
&lt;li&gt;Timeliness: periodic reviews done on schedule and follow-through on identified actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They’re not just checking boxes — they want to see the logic. A one-page summary saying “no signals” isn’t convincing unless you can show what you looked at and why that’s sufficient.&lt;/p&gt;

&lt;h2&gt;How we prepared PMS reports that made audits easy&lt;/h2&gt;

&lt;p&gt;We treated each PMS report like a mini-audit trail. Concrete steps we used:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Start with a clear scope and sources&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define the time window and product variants covered.&lt;/li&gt;
&lt;li&gt;List all sources (complaints DB, service records, distributor feedback, registries, literature searches, social media monitoring if used, complaint samples).&lt;/li&gt;
&lt;li&gt;For each source, document the owner and refresh cadence.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Executive summary + decision log&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One-page executive summary that states: conclusion, highest risks identified, and actions taken.&lt;/li&gt;
&lt;li&gt;A decision log that records each signal considered, evaluation outcome, and rationale (e.g., “no action because root cause outside device control; monitoring only”).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Signal evaluation methodology&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spell out how signals are detected (trend thresholds, qualitative triggers).&lt;/li&gt;
&lt;li&gt;Describe any statistical tests or sampling approaches — auditors don’t need to be statisticians, but they need to see that you didn’t eyeball it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Link to risk management and clinical evaluation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For every identified issue, reference the risk-management file and any updates to residual risk, mitigations, or warnings.&lt;/li&gt;
&lt;li&gt;Show how the PMS findings affected the clinical evaluation (and vice versa).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Documented follow-up and verification&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For actions (CAPA, labeling, supplier change), include closure evidence and outcomes monitoring.&lt;/li&gt;
&lt;li&gt;If you decided not to act, show monitoring steps and review dates.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Keep the report navigable&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a table of contents, hyperlinks to evidence, and a simple traceability map: Data source → Finding → Decision → Action → Verification (a minimal sketch of such a map follows this list).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
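
&lt;p&gt;To make that traceability map concrete, here’s a minimal sketch of how it could be represented as structured data, with a completeness check before sign-off. The field names and example rows are illustrative, not our actual schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: one row per finding, tracing data source through verification.
# Field names and example rows are illustrative; adapt to your own report template.
import csv

TRACE_FIELDS = ["data_source", "finding", "decision", "action", "verification"]

trace_map = [
    {
        "data_source": "Complaints DB export 2025-Q4",
        "finding": "Connector wear trend above threshold",
        "decision": "Open CAPA; update residual risk",
        "action": "CAPA-1042",
        "verification": "Complaint rate re-checked after 3 months",
    },
    {
        "data_source": "Literature search 2025-Q4",
        "finding": "No new relevant publications",
        "decision": "No action; continue monitoring",
        "action": "N/A",
        "verification": "Next review scheduled 2026-Q2",
    },
]

def incomplete_rows(rows):
    """Return rows missing any link in the trace chain."""
    return [r for r in rows if not all(r.get(f) for f in TRACE_FIELDS)]

# The CSV goes into the report appendix; the check keeps empty cells from slipping through.
with open("traceability_map.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=TRACE_FIELDS)
    writer.writeheader()
    writer.writerows(trace_map)

print("Rows missing evidence links:", incomplete_rows(trace_map))
&lt;/code&gt;&lt;/pre&gt;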

&lt;p&gt;We used our eQMS to maintain most artifacts (PMS plan, raw data exports, CAPA records). The eQMS made it easier to produce an “audit pack” — but the key was the content and traceability, not the tool.&lt;/p&gt;

&lt;h2&gt;Audit-day evidence pack (what I prepare, and what auditors ask for)&lt;/h2&gt;

&lt;p&gt;Bring more than the PMS report. We shipped a digital pack with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The current PMS Plan and last approved version.&lt;/li&gt;
&lt;li&gt;The PMS report (signed, dated) and its change history.&lt;/li&gt;
&lt;li&gt;Raw data extracts for any data source referenced (complaints, service logs, literature search outputs).&lt;/li&gt;
&lt;li&gt;Signal evaluation worksheets / decision log.&lt;/li&gt;
&lt;li&gt;Relevant Risk Management File excerpts (with cross-references).&lt;/li&gt;
&lt;li&gt;CAPA records triggered by PMS findings and closure evidence.&lt;/li&gt;
&lt;li&gt;Minutes of multidisciplinary PMS review meetings.&lt;/li&gt;
&lt;li&gt;Training records for staff who ran the evaluations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can show a single source-of-truth system where each item is traceably linked, audits go much faster.&lt;/p&gt;

&lt;h2&gt;Common pitfalls that trigger nonconformities&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Vague methodology: “we looked at complaints” without saying how or how many.&lt;/li&gt;
&lt;li&gt;Missing rationale for inaction: auditors expect documented decisions, not silence.&lt;/li&gt;
&lt;li&gt;Poor linkage to risk management / clinical evaluation.&lt;/li&gt;
&lt;li&gt;Incomplete data provenance: where did the complaint counts come from, and who pulled them?&lt;/li&gt;
&lt;li&gt;No evidence of periodic review cadence or missed reviews without documented reason.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Small automation wins we relied on&lt;/h2&gt;

&lt;p&gt;I’m engineering-brained, so we automated where it reduced audit friction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scheduled exports from the complaints system to a controlled folder, with hash/versioning for provenance (a minimal sketch follows this list).&lt;/li&gt;
&lt;li&gt;Dashboards for key metrics (complaints over time, open CAPAs linked to PMS findings) that we could snapshot and export into the report.&lt;/li&gt;
&lt;li&gt;Webhook alerts for spikes that feed into the signal evaluation queue (a small sender sketch appears below).&lt;/li&gt;
&lt;/ul&gt;
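
&lt;p&gt;For the scheduled exports, the provenance part is a few lines of scripting. Here’s a minimal sketch, assuming a nightly job dropping CSVs into a controlled folder; the paths and manifest format are placeholders, not our actual pipeline:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: hash each complaints export and append it to a manifest,
# so the data behind the PMS report has verifiable provenance.
# Paths and the manifest format are placeholders.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

EXPORT_DIR = Path("controlled/complaints_exports")
MANIFEST = EXPORT_DIR / "manifest.jsonl"

def sha256_of(path):
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_export(path):
    entry = {
        "file": path.name,
        "sha256": sha256_of(path),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with MANIFEST.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    for export in sorted(EXPORT_DIR.glob("*.csv")):
        record_export(export)
&lt;/code&gt;&lt;/pre&gt;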

&lt;p&gt;You don’t need full-blown ML to pass an audit — auditors care more that your pipeline is reproducible and reviewable than that it’s fully automated.&lt;/p&gt;
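
&lt;p&gt;And for the spike alerts, the detection logic really can be this simple: a threshold against a rolling baseline, posted to whatever webhook feeds your signal evaluation queue. A minimal sender sketch, with the URL, counts, and threshold as placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: flag a complaint-count spike and post it to a webhook
# that feeds the signal evaluation queue. URL and numbers are placeholders.
import json
import urllib.request

WEBHOOK_URL = "https://example.invalid/pms-signal-queue"  # placeholder
weekly_counts = [4, 3, 5, 4, 11]  # most recent week last

def is_spike(counts, factor=2.0):
    baseline = sum(counts[:-1]) / max(len(counts) - 1, 1)
    return counts[-1] &gt;= factor * baseline

def post_alert(message):
    payload = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    if is_spike(weekly_counts):
        post_alert("Complaint spike detected; review in signal evaluation queue.")
&lt;/code&gt;&lt;/pre&gt;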

&lt;h2&gt;Final practical checklist before you submit the report&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Does the exec summary state concrete conclusions and next steps?&lt;/li&gt;
&lt;li&gt;Can you point to the raw data behind every conclusion?&lt;/li&gt;
&lt;li&gt;Is each conclusion linked to risk-management/clinical-eval files?&lt;/li&gt;
&lt;li&gt;Are actions documented, assigned, and tracked to closure?&lt;/li&gt;
&lt;li&gt;Is the report versioned, signed, and stored in your QMS?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can answer “yes” to those five, you’re in a good place.&lt;/p&gt;

&lt;p&gt;I’m curious: how are other teams balancing automated signal detection with the need for human-reviewed, auditable rationale? Are you keeping automated flags in a separate queue, or integrating them directly into the formal PMS decision log?&lt;/p&gt;

</description>
      <category>qms</category>
      <category>medtech</category>
      <category>compliance</category>
      <category>regulatory</category>
    </item>
    <item>
      <title>What device users actually notice first when quality starts to fall apart</title>
      <dc:creator>James Whitfield</dc:creator>
      <pubDate>Tue, 28 Apr 2026 18:42:58 +0000</pubDate>
      <link>https://forem.com/jwithfield_qa/what-device-users-actually-notice-first-when-quality-starts-to-fall-apart-2c1m</link>
      <guid>https://forem.com/jwithfield_qa/what-device-users-actually-notice-first-when-quality-starts-to-fall-apart-2c1m</guid>
      <description>&lt;p&gt;I’ve been responsible for quality on Class II products long enough to see the same surface symptoms show up across different companies and tech stacks. In our 200-person shop the first few signs of "quality rot" weren’t flagged in an audit report — they came from users, clinicians, and service teams who had to do the workarounds.&lt;/p&gt;

&lt;p&gt;This post is a short catalog of the things people notice first when a QMS is slipping, the underlying process failures those symptoms usually point at, and a few pragmatic fixes that have actually bought us time while we rebuild controls.&lt;/p&gt;

&lt;h2&gt;What frontline users report (the symptoms)&lt;/h2&gt;

&lt;p&gt;When quality degrades, the non-QA folks complain about concrete, interrupting problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inconsistent or conflicting instructions: labels, IFUs, or SOPs that don’t match what’s physically on the product or what service techs actually do.&lt;/li&gt;
&lt;li&gt;Guidance gaps: "I guess we should..." moments where there’s no clear, reviewable path for a decision (maintenance, triage, dev changes).&lt;/li&gt;
&lt;li&gt;Shadow SOPs and local checklists: people keeping their own spreadsheets or PDFs because the controlled doc is hard to find or out-of-date.&lt;/li&gt;
&lt;li&gt;Slow, manual change control: engineering submits a change and hears crickets for days or weeks; meanwhile production improvises.&lt;/li&gt;
&lt;li&gt;Escalations from tech support: the same complaint keeps coming back because root cause investigations stall.&lt;/li&gt;
&lt;li&gt;Training mismatches: people are signed off on SOPs that do not reflect the current process or product variant.&lt;/li&gt;
&lt;li&gt;Traceability gaps: inability to quickly show which revisions of design outputs, risk assessments, and verification artifacts map to a shipped lot or complaint.&lt;/li&gt;
&lt;li&gt;Field actions/near-recalls: before an actual recall you often see tracing, triage, and coordination friction — people asking "who owns this?" and "do we need to tell regulators?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the things that annoy and endanger users, and they’re also the things auditors and notified bodies will notice because they map to control failures.&lt;/p&gt;

&lt;h2&gt;Typical root causes behind those symptoms&lt;/h2&gt;

&lt;p&gt;The same user-visible problems tend to come from a few recurring process failures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Poor document discipline: slow doc approvals, uncontrolled drafts, hard-to-find current revisions.&lt;/li&gt;
&lt;li&gt;Fractured change control: approvals bottlenecked in a single person or committee; impact analysis done in a meeting and never captured.&lt;/li&gt;
&lt;li&gt;Weak linkages between processes: complaints, CAPA, design changes, and supplier records live in different silos with manual reconciliation.&lt;/li&gt;
&lt;li&gt;Inadequate supplier visibility: vendors changing parts or processes without timely notification.&lt;/li&gt;
&lt;li&gt;Training treated as a checkbox: signature on a form but no evidence of competence or refreshed content after a change.&lt;/li&gt;
&lt;li&gt;Tool friction: the QMS is harder to use than ad-hoc spreadsheets so people avoid it.&lt;/li&gt;
&lt;li&gt;Staffing/triage failures: no one assigned to run day-to-day complaint triage, so issues age until they become urgent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you map the symptoms back to these causes, it becomes clearer where to look first.&lt;/p&gt;

&lt;h2&gt;Practical, fast-remediation steps that helped us&lt;/h2&gt;

&lt;p&gt;When a notified body audit or a field action looms, you don’t have time for a big replatforming project. These are the pragmatic steps that reduced noise and risk in our shop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Triage and freeze: pause non-critical changes until you clear high-priority complaints and training gaps. Communicate the freeze across teams.&lt;/li&gt;
&lt;li&gt;Quick doc sweep: target the top 10 documents people actually use (IFU, assembly SOPs, service guides). Fix obvious mismatches and publish emergency revisions with traceable rationale.&lt;/li&gt;
&lt;li&gt;Capture impact analysis where people already work: if engineers keep notes in a ticketing tool, add a required attachment or link that becomes the authoritative record for change-control. (We started with simple mandatory fields in our Jira workflow.)&lt;/li&gt;
&lt;li&gt;Automate routing for complaints → CAPA: even simple webhooks that create a CAPA stub from a complaint ticket remove the human hand-off that was stalling investigations (a minimal receiver sketch follows this list).&lt;/li&gt;
&lt;li&gt;Short feedback loops: establish a 48–72 hour acknowledgement window for complaints and a weekly CAPA stand-up so issues don’t age.&lt;/li&gt;
&lt;li&gt;Make training meaningful: require completion of a short, role-specific checklist tied to each document change instead of a general sign-off.&lt;/li&gt;
&lt;li&gt;Improve supplier change visibility: require advance notice for part revisions and add supplier changes to your change-control queue for impact review.&lt;/li&gt;
&lt;/ul&gt;
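
&lt;p&gt;To make the complaints → CAPA routing concrete, here’s roughly what the receiving end can look like: a tiny endpoint that accepts the complaint webhook and writes a CAPA stub for triage. This is a sketch with invented field names and a flat-file “queue”, not our production integration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: receive a complaint webhook and create a CAPA stub record.
# Endpoint path, field names, and the JSONL "queue" are invented placeholders.
import json
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)
CAPA_QUEUE = "capa_stubs.jsonl"

@app.post("/webhooks/complaint")
def complaint_webhook():
    complaint = request.get_json(force=True)
    stub = {
        "source_complaint_id": complaint.get("id"),
        "summary": complaint.get("summary", "")[:200],
        "status": "stub-awaiting-triage",
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(CAPA_QUEUE, "a") as fh:
        fh.write(json.dumps(stub) + "\n")
    return jsonify({"capa_stub_created": True}), 201

if __name__ == "__main__":
    app.run(port=5000)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The point isn’t the tooling; it’s that the hand-off happens automatically and leaves a timestamped record someone owns.&lt;/p&gt;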

&lt;p&gt;None of these were glamorous. They were low-tech, quick to implement, and they reduced the number of "who owns this?" interruptions enough to buy the team space to plan longer-term fixes.&lt;/p&gt;

&lt;h2&gt;Longer-term fixes worth budgeting for&lt;/h2&gt;

&lt;p&gt;If you have the runway, invest in things that remove manual reconciliation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connected workflows: link complaints, risk assessments, change control, and CAPAs so you can run impact queries (a toy query sketch follows this list). This reduces rework during audits.&lt;/li&gt;
&lt;li&gt;Better search and discoverability for controlled docs: users should land on the doc they need in two clicks.&lt;/li&gt;
&lt;li&gt;Traceable review and impact artifacts: enforce the habit that impact analysis is captured at change creation, not after approval.&lt;/li&gt;
&lt;li&gt;Integrations with engineering tools: reduce copy-paste by syncing design baseline and DHF items from version control or PLM.&lt;/li&gt;
&lt;/ul&gt;
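
&lt;p&gt;As a toy illustration of what those impact queries buy you once records share identifiers: the question “which open CAPAs and pending changes trace back to complaints on this device?” becomes one query instead of a week of spreadsheet archaeology. Table and column names below are made up:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Toy illustration: once complaints, CAPAs, and changes share IDs, an impact
# question becomes a single query. Table and column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE complaints (id TEXT, device TEXT, summary TEXT);
    CREATE TABLE capas (id TEXT, complaint_id TEXT, status TEXT);
    CREATE TABLE changes (id TEXT, capa_id TEXT, description TEXT);
    INSERT INTO complaints VALUES ('C-101', 'Model-A', 'Connector wear');
    INSERT INTO capas VALUES ('CAPA-7', 'C-101', 'open');
    INSERT INTO changes VALUES ('CR-33', 'CAPA-7', 'Revise connector spec');
""")

# "Which open CAPAs and pending changes trace back to complaints on Model-A?"
rows = conn.execute("""
    SELECT c.id, k.id, k.status, ch.id
    FROM complaints c
    JOIN capas k ON k.complaint_id = c.id
    LEFT JOIN changes ch ON ch.capa_id = k.id
    WHERE c.device = 'Model-A' AND k.status = 'open'
""").fetchall()

for row in rows:
    print(row)
&lt;/code&gt;&lt;/pre&gt;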

&lt;p&gt;These are the moves that transform repeated firefighting into predictable maintenance.&lt;/p&gt;

&lt;h2&gt;Closing — the practical question&lt;/h2&gt;

&lt;p&gt;From the floor-level friction to the audit room, the things users notice first are almost always the same. I’m interested in the community’s experience: what was the single smallest process change you made that stopped recurring user complaints in your org? How did you get people to adopt it quickly?&lt;/p&gt;

</description>
      <category>qms</category>
      <category>medtech</category>
      <category>compliance</category>
      <category>regulatory</category>
    </item>
    <item>
      <title>Quality culture vs quality theater — what inspectors actually notice</title>
      <dc:creator>James Whitfield</dc:creator>
      <pubDate>Thu, 23 Apr 2026 08:20:05 +0000</pubDate>
      <link>https://forem.com/jwithfield_qa/quality-culture-vs-quality-theater-what-inspectors-actually-notice-2i26</link>
      <guid>https://forem.com/jwithfield_qa/quality-culture-vs-quality-theater-what-inspectors-actually-notice-2i26</guid>
      <description>&lt;p&gt;I’ve been on both sides of audits and inspections enough times to recognize the difference between a QMS that’s alive and one that’s optimized for photo ops. Regulators (FDA, notified bodies under MDR/IVDR, auditors against ISO 13485) don’t come to admire your template library — they come to see whether the system actually influences day-to-day decisions.&lt;/p&gt;

&lt;p&gt;Below are the concrete signals I’ve seen that tip an inspector off to "culture" vs "theater", and the short fixes that moved us from the latter toward the former.&lt;/p&gt;

&lt;h2&gt;What inspectors are actually looking for&lt;/h2&gt;

&lt;p&gt;Inspectors want evidence that the requirements have been integrated into the way the business makes decisions. They don’t just cross-check clauses; they trace behaviors back to outcomes.&lt;/p&gt;

&lt;p&gt;Typical things they ask for and observe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Records that show not just completion, but intent and follow-through (e.g., CAPA documentation that links to verification activities, supplier corrective actions and the impact on production).&lt;/li&gt;
&lt;li&gt;How changes are handled in practice: was a risk assessment updated before the change was implemented, or after you discovered problems?&lt;/li&gt;
&lt;li&gt;Who actually knows the procedure — can a line operator explain the “why” behind a work instruction, not just the “how”?&lt;/li&gt;
&lt;li&gt;Open issues and trends: are problems hidden in folders, or being trended and acted on?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In one FDA inspection I supported, the investigator spent more time shadowing production personnel and tracing specific lots back to change records than they did poring over glossy dashboards. The dashboard looked impressive. The trace from a change to the risk file and the production decision was where the conversation got sharp.&lt;/p&gt;

&lt;h2&gt;Quality theater — common red flags&lt;/h2&gt;

&lt;p&gt;These are the things that look good at first glance but fail the “show me” test:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Batch of training completions all signed on the same date with identical notes. (Often indicates retroactive sign-offs.)&lt;/li&gt;
&lt;li&gt;Design review minutes that are one-page, checklist-only, and lack linked actions or evidence that decisions changed design direction.&lt;/li&gt;
&lt;li&gt;CAPAs with corrective actions that are "training" only, no root-cause evidence or verification steps.&lt;/li&gt;
&lt;li&gt;Closed nonconformances where the evidence is a document revision with no verification that the problem stopped recurring.&lt;/li&gt;
&lt;li&gt;Supplier scorecards with perfect scores but unresolved open quality events.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inspectors can often tell when records are produced to meet an audit rather than to support control. They look for consistency between what’s on paper and what happens on the shop floor or in the lab.&lt;/p&gt;

&lt;h2&gt;Signals of a living quality culture&lt;/h2&gt;

&lt;p&gt;Contrast that with the concrete behaviors that indicate a functioning culture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Findings become actions: a complaint or audit observation turns into a tracked quality event that someone owns, with timebound actions and measurable verification. (We use our QMS so a finding becomes a quality event and shows up on the owner’s dashboard.)&lt;/li&gt;
&lt;li&gt;Documents are living artifacts: work instructions evolve as improvements are discovered and those changes are captured with rationale and verification.&lt;/li&gt;
&lt;li&gt;Cross-functional ownership: design changes are reviewed by manufacturing, QA, RA, and service where relevant — meeting minutes include who disagreed and why.&lt;/li&gt;
&lt;li&gt;Transparent trending: recurring issues show up in management review and trigger strategy-level decisions (supplier change, design mitigation).&lt;/li&gt;
&lt;li&gt;People speak the system: operators and engineers can explain why a process exists, not just how to execute an SOP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those signals make an inspector comfortable that the QMS isn’t just a compliance exercise; it’s how the company manages risk.&lt;/p&gt;

&lt;h2&gt;Practical swaps: from theater to culture (what actually worked for us)&lt;/h2&gt;

&lt;p&gt;We implemented small, practical changes that had outsized effects during audits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace passive signoffs with active evidence: require a piece of work (a photo, a measurement, a code diff, a screenshot) tied to training or approval so signoffs aren’t just checkboxes.&lt;/li&gt;
&lt;li&gt;Force traceability links earlier: require a link from a change request to the risk assessment and to impacted documents before the change moves from “approved” to “released” (a minimal guard sketch follows this list).&lt;/li&gt;
&lt;li&gt;Make CAPA verification tangible: define measurable verification steps up front (sampling plan, metric threshold) and record their results in the CAPA.&lt;/li&gt;
&lt;li&gt;Day-in-the-life walkthroughs: before audits, do internal “shop-floor tracebacks” where an engineer follows a part through from receipt to shipping and documents what they saw.&lt;/li&gt;
&lt;li&gt;Surface open work: dashboards are fine, but we started embedding a short narrative in management review for each recurring theme — what was done and what’s next.&lt;/li&gt;
&lt;/ul&gt;
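
&lt;p&gt;The “force traceability links earlier” swap is the easiest one to enforce in software. A minimal sketch of the guard, assuming your change tool exposes linked records as fields (the field names are invented):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch: block the approved-to-released transition unless the change
# request links a risk assessment and at least one impacted document.
# Field names are invented; map them to whatever your change tool exposes.

REQUIRED_LINKS = ("risk_assessment_id", "impacted_document_ids")

def can_release(change_request):
    """Return (ok, reasons) for the approved-to-released transition."""
    reasons = []
    if change_request.get("state") != "approved":
        reasons.append("change is not in the approved state")
    for field in REQUIRED_LINKS:
        if not change_request.get(field):
            reasons.append("missing link: " + field)
    return (len(reasons) == 0, reasons)

if __name__ == "__main__":
    cr = {
        "id": "CR-118",
        "state": "approved",
        "risk_assessment_id": "RA-22",
        "impacted_document_ids": [],
    }
    ok, reasons = can_release(cr)
    print(ok, reasons)  # False ['missing link: impacted_document_ids']
&lt;/code&gt;&lt;/pre&gt;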

&lt;p&gt;These are process changes, not heroic hires. They require discipline and tooling that supports traceability and ownership.&lt;/p&gt;

&lt;h2&gt;Tools help, but they don’t create culture&lt;/h2&gt;

&lt;p&gt;A connected QMS that links findings, changes, risk, and CAPA reduces friction. Automation that turns a finding into a quality event, or that forces a required link before state change, helps prevent theater. But tools won’t replace leadership and day-to-day behaviors.&lt;/p&gt;

&lt;p&gt;I’d rather have imperfect records that reflect real problems being worked on than immaculate records that hide issues.&lt;/p&gt;

&lt;h2&gt;Wrap-up — what I watch during inspections&lt;/h2&gt;

&lt;p&gt;When I’m in an audit room now I listen for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ownership (who will fix this and how will I know?)&lt;/li&gt;
&lt;li&gt;Evidence (can they show me the work?)&lt;/li&gt;
&lt;li&gt;History (has this been tried before? what changed?)&lt;/li&gt;
&lt;li&gt;Impact (did the decision change production, design, supplier choices?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If those four threads tie together, it’s a culture. If not, it’s theater — and inspectors notice.&lt;/p&gt;

&lt;p&gt;What have you seen in audits that made you think “this company gets it” — or the opposite?&lt;/p&gt;

</description>
      <category>qms</category>
      <category>medtech</category>
      <category>compliance</category>
      <category>regulatory</category>
    </item>
  </channel>
</rss>
