<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Filipe Brito Ferreira</title>
    <description>The latest articles on Forem by Filipe Brito Ferreira (@fbritoferreira).</description>
    <link>https://forem.com/fbritoferreira</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F250498%2Fe61fcc82-416c-4ef4-80bf-b47e9b2ce0b7.png</url>
      <title>Forem: Filipe Brito Ferreira</title>
      <link>https://forem.com/fbritoferreira</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/fbritoferreira"/>
    <language>en</language>
    <item>
      <title>We're Shipping More Code Than Ever. We Understand Less of It.</title>
      <dc:creator>Filipe Brito Ferreira</dc:creator>
      <pubDate>Sun, 03 May 2026 00:00:00 +0000</pubDate>
      <link>https://forem.com/fbritoferreira/were-shipping-more-code-than-ever-we-understand-less-of-it-93n</link>
      <guid>https://forem.com/fbritoferreira/were-shipping-more-code-than-ever-we-understand-less-of-it-93n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.fbritoferreira.com%2Fimages%2Fwere-shipping-more-code-than-ever-we-understand-less-of-it-1200.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.fbritoferreira.com%2Fimages%2Fwere-shipping-more-code-than-ever-we-understand-less-of-it-1200.webp" alt="A developer slumped face-down on a desk in front of a code editor, lit only by the dim glow of a monitor at night" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few weeks ago I watched a junior push four commits to the same pull request, each one a fresh Claude paste at the same bug. Each round he was more confident. Each round he was no closer to fixing it, because the bug was in his prompt, not in the code. By round three nobody on the call could explain what the function was supposed to do.&lt;/p&gt;

&lt;p&gt;The junior wasn’t the failure here. The pipeline was. We hired juniors into a process that quietly stopped doing the one thing it was supposed to do for them, which is to push back when they ship something they can’t defend. They learned to lean on AI because we let them, and then we trained them to.&lt;/p&gt;

&lt;p&gt;In 2026, &lt;a href="https://www.index.dev/blog/developer-productivity-statistics-with-ai-tools" rel="noopener noreferrer"&gt;76% of developers use an AI coding assistant&lt;/a&gt;, up from 44% two years ago. AI now writes roughly 41% of the code shipped. And yet &lt;a href="https://devops.com/survey-surfaces-high-devops-burnout-rates-despite-ai-advances/" rel="noopener noreferrer"&gt;65% of developers still report burnout&lt;/a&gt;, AI-generated code &lt;a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" rel="noopener noreferrer"&gt;carries 1.7× more issues&lt;/a&gt; than human-written code, and incidents per pull request are up 23.5% year-on-year.&lt;/p&gt;

&lt;p&gt;We’re moving faster on paper while learning less, shipping worse, and burning out harder. The tools aren’t the problem. The expectation that they replace thinking is.&lt;/p&gt;

&lt;h2&gt;Yes, We’ve Heard This Before&lt;/h2&gt;

&lt;p&gt;Every five years a new tool gets blamed for breaking software engineering. IDEs were going to kill our ability to write Makefiles. Stack Overflow was going to produce a generation of copy-paste developers. Offshoring was going to gut institutional knowledge. None of those predictions aged well, and the industry survived each one.&lt;/p&gt;

&lt;p&gt;So why is this time different? Because LLM output doesn’t look foreign. A Stack Overflow paste was obviously someone else’s code: different style, different variable names, different formatting. You knew you hadn’t written it. AI-generated code is formatted to your conventions, named after your variables, and commented in your tone. The social cue that used to say "stop, read this carefully" is gone. The output looks like it came from you, even when nobody on the team understood it.&lt;/p&gt;

&lt;p&gt;That changes the failure mode. Old tools eroded specific skills. This one erodes the line between writing code and reading code, which is the line that mentorship, review, and learning all sit on.&lt;/p&gt;

&lt;p&gt;This piece is about what happens when an organisation crosses that line at scale. If you’re a solo founder or a two-person shop, AI-shipped speed isn’t a failure of discipline, it’s the job. What I’m worried about is what breaks when companies scale those habits without scaling accountability.&lt;/p&gt;

&lt;h2&gt;The Speed Illusion&lt;/h2&gt;

&lt;p&gt;Output volume is up. Real velocity isn’t.&lt;/p&gt;

&lt;p&gt;In a study of experienced developers using AI tools, &lt;a href="https://dev.to/increase123/the-ai-productivity-paradox-why-developers-are-19-slower-and-what-this-means-for-2026-a14"&gt;participants took 19% longer to complete tasks while believing they were 20% faster&lt;/a&gt;. That’s a 39-point gap between perception and reality, one of the largest "expectations gaps" recorded in modern software engineering research. The dashboards show acceleration. The clock shows the opposite.&lt;/p&gt;

&lt;p&gt;The pattern repeats at the team level. According to a recent Cortex report, pull requests per author are up 20% year-on-year, incidents per pull request are up 23.5%, and change failure rates have risen by roughly 30%.&lt;/p&gt;

&lt;p&gt;We’re measuring the wrong things. Lines of code, pull request count, story points closed: all up with AI. Time-to-resolution, incident counts, rework, weekend on-call hours: also up, faster. If you only look at the top of the funnel, AI looks like a miracle. If you look at the full ledger, the miracle starts to look like a loan.&lt;/p&gt;

&lt;h2&gt;The Habits That Don’t Get Fixed&lt;/h2&gt;

&lt;p&gt;In 2018, a junior who copy-pasted a Stack Overflow answer they didn’t understand would get stopped in code review before lunch. Someone would ask "why did you write it this way?" and the gap would surface. In 2026, that same junior gets a thumbs-up emoji and the pull request merges, because the senior who would’ve caught it is reviewing five times more code in the same hour.&lt;/p&gt;

&lt;p&gt;Here are the habits I keep watching not get corrected:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy-paste without reading.&lt;/strong&gt; Developers ship code they couldn’t have written by hand and can’t explain in review. That isn’t laziness. It’s a rational response to a quota that assumes AI velocity but pays for human throughput.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack traces become LLM input, not learning.&lt;/strong&gt; Pasting a trace into Claude isn’t the problem. Pasting it without reading what came back is. When the model is right, you ship. When the model is wrong, you’ve got nothing to fall back on, because you never built the muscle that lets you read the trace yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-generated tests no one validates.&lt;/strong&gt; Green CI doesn’t mean correct code. Tests that mirror the implementation pass even when the implementation is wrong. Coverage numbers go up while real coverage goes down. We’re watching codebases where every line is "tested" and nobody can explain what any of it actually verifies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern atrophy.&lt;/strong&gt; Developers are forgetting basic shapes: recursion, loops, common data structures, even the order of arguments to functions they wrote last month. Try this on yourself this weekend: write merge sort with no AI, no Google, just a blank file. If it surprises you how rusty you feel, the atrophy has already started. The skill rusts the same way any skill does when you stop using it.&lt;/p&gt;
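&lt;p&gt;For reference once you’ve tried the blank file, a minimal merge sort sketch looks something like this (one of several equivalent shapes):&lt;/p&gt;

```python
def merge_sort(xs):
    # Split, sort each half recursively, then merge the halves.
    if len(xs) > 1:
        mid = len(xs) // 2
        return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))
    return xs  # length 0 or 1: already sorted

def merge(left, right):
    # Repeatedly take the smaller head until one side runs out.
    out, i, j = [], 0, 0
    while len(left) > i and len(right) > j:
        if left[i] > right[j]:
            out.append(right[j])
            j += 1
        else:
            out.append(left[i])
            i += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

&lt;p&gt;If your version differs, that’s fine. The point is whether your hands still remember the shape.&lt;/p&gt;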

&lt;p&gt;The same dynamic plays out beyond closed teams. Open-source maintainers are now drowning in plausible-looking pull requests that their submitters can’t defend in review: the same broken feedback loop, just with no shared employer to escalate to.&lt;/p&gt;

&lt;p&gt;None of this would survive a real review. That’s the whole problem. Reviews used to be where bad habits died. They’ve become a throughput gate.&lt;/p&gt;

&lt;h2&gt;Debugging Is Where the Bill Comes Due&lt;/h2&gt;

&lt;p&gt;You can ship code you don’t understand. You can’t debug it.&lt;/p&gt;

&lt;p&gt;Production incident at 2 a.m. The AI doesn’t know your system’s quirks, your team’s conventions, or which of the seven retry layers is the one masking the real failure. You’re now reading code you wrote but never read. A debug cycle that took thirty minutes in 2020 takes three hours in 2026, not because the bug is harder, but because the mental model the original author should have built was never built. Compound interest on every shortcut taken during the original pull request.&lt;/p&gt;

&lt;p&gt;This is also why the productivity numbers lie. The hours you saved during the pull request get quietly spent during the incident. The accounting is hidden because the two events happen on different days, owned by different on-call rotations, and reported in different dashboards. Speed booked in one quarter, cost paid in the next.&lt;/p&gt;

&lt;p&gt;If you think this is just my anecdote, look at GitHub itself.&lt;/p&gt;

&lt;p&gt;On 23 April 2026, &lt;a href="https://github.blog/news-insights/company-news/an-update-on-github-availability/" rel="noopener noreferrer"&gt;GitHub’s merge queue silently corrupted code in 2,092 pull requests across 230 repositories&lt;/a&gt;. The root cause, in their own words: "existing test coverage primarily exercised single-PR merge queue groups, which did not exhibit the faulty base-reference calculation." A predictable edge case nobody thought to test. That gap wasn’t a memory failure, it was a velocity failure. The tests that would’ve caught a multi-PR squash regression are the kind you write when you’re slowing down to think about how a feature actually behaves under load. They’re the first tests to get cut when the team is told to ship faster.&lt;/p&gt;

&lt;p&gt;The company that ships AI dev tools to the rest of us shipped at AI velocity, and an obvious failure mode reached production unguarded. If GitHub is hitting these walls, your team is too. You just haven’t noticed yet, because nobody is paying you to write a public availability report.&lt;/p&gt;

&lt;h2&gt;"But AI Saves Real Time, Right?"&lt;/h2&gt;

&lt;p&gt;The wins are real. They’re also narrow.&lt;/p&gt;

&lt;p&gt;Boilerplate is genuinely faster. Scaffolding new files, projects, and test fixtures is much faster. Searching documentation for an unfamiliar library, translating between formats, drafting commit messages, generating regex you’d otherwise spend forty minutes on Stack Overflow for: all real wins. I use these tools every day at work.&lt;/p&gt;

&lt;p&gt;The point isn’t that AI is bad. The point is that the wins are concentrated on the easy work, while the cost lands on the hard work. System design, debugging novel failures, writing code that survives a refactor in two years: none of that gets easier with AI. It gets harder, because the developer trying to do the hard work has less practice on the easy work that used to build the muscles.&lt;/p&gt;

&lt;p&gt;You can ship a product without ever doing the hard work. You can’t keep one running.&lt;/p&gt;

&lt;h2&gt;The Quota Got Higher, Not the Pay&lt;/h2&gt;

&lt;p&gt;AI didn’t free up time. It became the new baseline expectation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.scientificamerican.com/article/why-developers-using-ai-are-working-longer-hours/" rel="noopener noreferrer"&gt;&lt;em&gt;Scientific American&lt;/em&gt; reports&lt;/a&gt; that developers using AI tools are working longer hours, not shorter ones. &lt;a href="https://techcrunch.com/2026/02/09/the-first-signs-of-burnout-are-coming-from-the-people-who-embrace-ai-the-most/" rel="noopener noreferrer"&gt;TechCrunch’s February 2026 piece&lt;/a&gt; put it more bluntly: the first signs of burnout are coming from the developers who embraced AI the most. 65% of developers are reporting burnout in 2026 even though 61% of organisations have rolled AI into their development pipelines.&lt;/p&gt;

&lt;p&gt;Most engineering managers didn’t set this quota. They received it. The line manager telling the tech lead that review is a bottleneck is themselves being told their team’s velocity needs to double now that Copilot is paid for. The pressure runs from the top of the org chart down, and the cognitive load of reviewing AI slop lands on whoever is holding the bag at the bottom.&lt;/p&gt;

&lt;p&gt;Microsoft is the public version of this story. &lt;a href="https://www.windowslatest.com/2026/01/31/microsoft-reportedly-admits-windows-11-went-off-track-cuts-back-copilot-and-promises-real-fixes-in-2026/" rel="noopener noreferrer"&gt;Satya Nadella reportedly described some of Copilot’s own integrations as "almost unusable"&lt;/a&gt;. Windows 11 hit a Patch Tuesday so bad that users couldn’t shut down their machines. The company that bet hardest on AI shipped at AI velocity, and the bug reports caught up with them faster than the marketing did.&lt;/p&gt;

&lt;p&gt;Same pattern as the games industry collapse I &lt;a href="https://dev.to/blog/game-industy-software-downfall"&gt;wrote about last year&lt;/a&gt;: short-term thinking, growth-at-all-costs, no investment in longevity. New tool, same mistake.&lt;/p&gt;

&lt;h2&gt;How Can We Turn This Around?&lt;/h2&gt;

&lt;p&gt;None of the fixes here are revolutionary. They’re the basics a lot of teams have stopped doing. Skip the platitudes; here’s what to actually do on Monday morning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For developers.&lt;/strong&gt; Don’t merge code where you can’t delete one line and predict the effect. If you can’t, you don’t understand it. Before pasting a stack trace into the LLM, give yourself ten minutes alone with the trace. Keep the muscle alive. Treat AI output like a draft, never a deliverable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For seniors and tech leads.&lt;/strong&gt; Make "can you explain this in plain English?" a required review comment on any pull request that smells AI-generated. It costs the author thirty seconds when they understand the code, and it surfaces the gap immediately when they don’t. That’s mentorship at scale, not theatre.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For managers and engineering leaders.&lt;/strong&gt; Pair every velocity metric on your dashboard with change failure rate. The upward script is short: "velocity is up, but the share of changes that need a fix or rollback is up faster, and that’s the risk we’re taking." That framing survives the conversation with finance because it reads as risk visibility, not as a veto. Then watch your own measurement. Change failure rate gets sandbagged the moment teams notice it matters: incidents quietly drop in severity, rollbacks get rebranded as "follow-up PRs". If the number isn’t moving, the gaming probably already started. The metric is only useful when paired with a definition you don’t let drift.&lt;/p&gt;
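&lt;p&gt;As a sketch of that pairing (assumed numbers and data shape, not any real dashboard’s API): compute change failure rate right next to deploy count, so neither number can be read alone.&lt;/p&gt;

```python
# A "failed change" here is anything that needed a rollback, hotfix,
# or incident follow-up; pin this definition down so it can't drift.

def change_failure_rate(deploys, failed_changes):
    # CFR = failed changes / total deployments
    return failed_changes / deploys

def report(quarter, deploys, failed_changes):
    cfr = change_failure_rate(deploys, failed_changes)
    return f"{quarter}: {deploys} deploys, CFR {cfr:.0%}"

# Velocity up 20%, failures up faster: the risk-visibility framing.
print(report("Q1", 100, 10))  # Q1: 100 deploys, CFR 10%
print(report("Q2", 120, 16))  # Q2: 120 deploys, CFR 13%
```

&lt;p&gt;The script upward stays the same: deploys went up, but the share that needed a fix went up faster.&lt;/p&gt;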

&lt;p&gt;&lt;strong&gt;For solo founders, OSS maintainers, and anyone owning the whole stack.&lt;/strong&gt; Keep a &lt;code&gt;SHORTCUTS.md&lt;/code&gt; in the repo root. One line per AI shortcut you knowingly took, with a date and the reason. When you’re debugging at 2 a.m. three weeks later, that file is your map. The accounting trick that hides the bill from corporate teams doesn’t exist for you (you’re the one being paged), but the corners you cut still vanish from memory if you don’t write them down.&lt;/p&gt;
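&lt;p&gt;The format doesn’t matter; dated one-liners are enough. A hypothetical &lt;code&gt;SHORTCUTS.md&lt;/code&gt; (file paths invented for illustration) might look like:&lt;/p&gt;

```markdown
# SHORTCUTS.md: one line per AI shortcut knowingly taken
2026-05-01 auth/session.ts: accepted Copilot retry loop unread (deadline)
2026-05-03 billing/invoice.ts: AI-generated tests mirror the impl, revisit
```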

&lt;p&gt;&lt;strong&gt;For everyone.&lt;/strong&gt; Slow down to learn. Compound interest works in both directions.&lt;/p&gt;

&lt;p&gt;The 2 a.m. page is the part of this loop you can still avoid. The shortcut is yours, the debugger is yours, and the mental model that gets you back to bed is yours too. None of that is going to be built by the system that’s asking you to ship faster.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>devrel</category>
      <category>burnout</category>
    </item>
    <item>
      <title>Why I Interview Once a Year (Even When I'm Not Job Hunting)</title>
      <dc:creator>Filipe Brito Ferreira</dc:creator>
      <pubDate>Sun, 06 Oct 2024 15:25:00 +0000</pubDate>
      <link>https://forem.com/fbritoferreira/why-i-interview-once-a-year-even-when-im-not-job-hunting-30fb</link>
      <guid>https://forem.com/fbritoferreira/why-i-interview-once-a-year-even-when-im-not-job-hunting-30fb</guid>
      <description>&lt;p&gt;Recently, my direct manager asked me, &lt;em&gt;"Why did your LinkedIn status change to 'open to work'?"&lt;/em&gt; It’s a fair question, and one that I’d like to address. Every so often, about once a year, I go on what I call an "interview spree." This practice isn’t because I’m actively looking to leave my current role, but rather, it’s designed to help me understand the current job market and assess where I stand within it.&lt;/p&gt;

&lt;p&gt;In this post, I want to explain the reasoning behind this approach and how it can benefit your career as well.&lt;/p&gt;



&lt;h2&gt;Key Benefits of Interviewing Once a Year&lt;/h2&gt;

&lt;p&gt;Interviewing once a year, even if you're not actively looking for a new job, can offer several key benefits for your career growth and professional development. Here are some reasons why it’s a good idea:&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Market Awareness&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;The job market is constantly changing, with new roles, skill demands, and salary benchmarks emerging. By interviewing regularly, you get a pulse on these trends. You’ll learn which skills are in demand, what new technologies or methodologies companies are adopting, and what qualifications they value most. This allows you to stay competitive and informed, even if you're content in your current position.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Benchmarking Your Skills&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Interviewing helps you gauge how your skills and experiences stack up against industry expectations. It provides insight into where you excel and where there might be gaps. This feedback can guide your ongoing professional development, helping you pursue new certifications, projects, or trainings that enhance your marketability.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Networking Opportunities&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Job interviews often allow you to meet and engage with people in your field, expanding your professional network. Even if you don’t take the job, building relationships with interviewers or companies can lead to future opportunities or collaborations down the line.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Confidence Boost&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Going through the interview process once a year helps you refine your communication skills and keeps you sharp. It forces you to articulate your achievements, projects, and strengths, which builds confidence in your own abilities. Regular interviewing ensures that you're always ready to talk about your work, whether at an impromptu networking event or during a surprise promotion conversation.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Plan for the Unexpected&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Even if you're happy in your current job, things can change—companies restructure, industries shift, or personal situations evolve. Regular interviewing prepares you for the unexpected by keeping you familiar with the process, ensuring that you're never caught off-guard if the need to find a new position arises.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Leverage in Your Current Role&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;When you're aware of your value in the market, you can use this knowledge in discussions with your current employer. Whether you're negotiating a raise, asking for a promotion, or proposing new responsibilities, knowing what other companies would offer you gives you a stronger bargaining position.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Career Reflection&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Interviewing allows you to reflect on your current job satisfaction and career trajectory. It prompts you to consider questions like: &lt;em&gt;Am I growing in my current role? Are there better opportunities elsewhere? Is this the right career path for me?&lt;/em&gt; This self-assessment can help you ensure that you're not becoming stagnant and that you're continuously moving toward your career goals.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffbritoferreira.com%2Fimages%2Fposts%2Fwhy-i-interview-once-a-year-even-when-not-job-hunting%2Fsection-2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffbritoferreira.com%2Fimages%2Fposts%2Fwhy-i-interview-once-a-year-even-when-not-job-hunting%2Fsection-2.jpg" alt="Article cover Image" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Discussing Your Interview Practice with Your Manager&lt;/h2&gt;

&lt;p&gt;Informing your manager about your decision to interview once a year, even if you're not actively seeking to leave, can foster transparency, strengthen your relationship, and even enhance your professional development within the company. Here’s why it’s a good idea:&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Builds Trust and Transparency&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Being open with your manager about why you're interviewing demonstrates honesty and transparency. When you proactively explain your motivations—such as staying current with market trends or benchmarking your skills—it shows that you’re committed to personal and professional growth. This can prevent any misunderstandings or suspicions about your intentions.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Enhances Communication&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Open communication is essential for a healthy employee-manager relationship. By having this conversation, you're showing that you respect your manager enough to keep them informed about your career development efforts. It encourages a two-way dialogue where you can openly discuss your career goals and aspirations within the company, as well as outside opportunities.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;May Lead to Internal Growth&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Sharing your market exploration with your manager may open the door to internal career advancement discussions. If your manager knows you’re evaluating your market position, they may offer opportunities for growth within the company, such as additional responsibilities, a new project, or even a promotion. This could help keep you motivated and ensure that your role continues to align with your career goals.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Encourages Skill Development&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;When you let your manager know you’re interviewing to gauge market expectations and skill demands, they might offer support in helping you bridge any skill gaps. For instance, they could recommend training programs, certifications, or stretch assignments to help you stay competitive, not just for external roles, but within the company as well.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Preempts Concerns&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;If your manager notices that you’ve changed your LinkedIn status to "open to work" or hears that you’ve been interviewing elsewhere, it could raise concerns about your commitment. Proactively informing them allows you to clarify that you’re not actively looking to leave, but rather participating in interviews as a professional development tool. This preempts any negative assumptions or worries.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Shows Initiative and Ambition&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;When you discuss your market exploration with your manager, it signals that you’re serious about your career trajectory and personal development. This ambition can be viewed positively, showing that you're someone who takes initiative and constantly seeks to grow and improve, which are qualities most employers value highly.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Adds Value to the Organization&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Understanding the market and how your role compares to others in the industry can help you bring fresh insights back to your current job. By discussing this with your manager, you might share information on industry trends or best practices you’ve learned during interviews, adding value to your organization. This can position you as someone who’s not only focused on personal growth but also on contributing to the company's success.&lt;/p&gt;

&lt;p&gt;By making interviewing an annual habit, you stay agile, aware, and prepared for whatever your career may bring. It’s not about being disloyal to your current employer; it’s about taking proactive control of your professional development.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>interview</category>
      <category>tips</category>
    </item>
    <item>
      <title>Supper Club GraphQL as an Aggregation Layer with Filipe Ferreira of Sky TV</title>
      <dc:creator>Filipe Brito Ferreira</dc:creator>
      <pubDate>Mon, 16 Sep 2024 04:50:12 +0000</pubDate>
      <link>https://forem.com/fbritoferreira/supper-club-x-graphql-as-an-aggregation-layer-with-filipe-ferreira-of-sky-tv-c4</link>
      <guid>https://forem.com/fbritoferreira/supper-club-x-graphql-as-an-aggregation-layer-with-filipe-ferreira-of-sky-tv-c4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsyntax.fm%2Fog%2F529.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsyntax.fm%2Fog%2F529.jpg" alt="Syntax.fm og image" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I Was on the Syntax.fm Supper Club Podcast!&lt;/p&gt;

&lt;p&gt;I’m excited to announce that I recently had the pleasure of being a guest on the Syntax.fm Supper Club podcast, hosted by Wes Bos and Scott Tolinski.&lt;/p&gt;

&lt;p&gt;During the episode, we discussed my experiences building GraphQL servers and developing JavaScript/TypeScript applications at Sky. It was a great conversation covering a range of topics from the benefits of GraphQL as an aggregation layer to practical insights into modern JavaScript and TypeScript development.&lt;/p&gt;

&lt;p&gt;You can check out the episode and listen to our discussion here: &lt;a href="https://syntax.fm/show/529/supper-club-graphql-as-an-aggregation-layer-with-filipe-ferreira-of-sky-tv" rel="noopener noreferrer"&gt;GraphQL as an Aggregation Layer with Filipe Ferreira of Sky TV&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’d love for you to give it a listen and let me know what you think. Your feedback and thoughts are always appreciated!&lt;/p&gt;

</description>
      <category>podcast</category>
      <category>syntaxfm</category>
    </item>
    <item>
      <title>From Coding Challenges to Real-World Skills: Rethinking the Modern Interview Process</title>
      <dc:creator>Filipe Brito Ferreira</dc:creator>
      <pubDate>Sun, 08 Sep 2024 12:43:41 +0000</pubDate>
      <link>https://forem.com/fbritoferreira/from-coding-challenges-to-real-world-skills-rethinking-the-modern-interview-process-19e1</link>
      <guid>https://forem.com/fbritoferreira/from-coding-challenges-to-real-world-skills-rethinking-the-modern-interview-process-19e1</guid>
      <description>&lt;p&gt;After a decade of experience as a software developer, both as an interviewer and an interviewee, I’ve come to realise that many interviews fail to effectively showcase a candidate’s true skills.&lt;/p&gt;

&lt;p&gt;The most common interview format involves solving problems similar to those found on LeetCode. In over 250 interviews, I’ve repeatedly encountered the same types of problems, such as FizzBuzz or a debounce function. This format often seems more like a test of how many interviews you’ve attended rather than a genuine assessment of your abilities.&lt;/p&gt;

&lt;p&gt;From my experience interviewing candidates, many struggle with these problems and frequently run out of time. This doesn’t necessarily reflect their ability to handle real-world issues. Moreover, these interviews often discourage candidates from using resources like Google or MDN, which feels unfair given the stress and pressure involved. I always encourage candidates to use any online resources they would normally rely on in their daily work.&lt;/p&gt;

&lt;p&gt;Ultimately, I’m more interested in understanding how you approach and solve problems in a real-world work environment rather than how you perform under test conditions. After all, we’ve all experienced times in school where we struggled with a test but excelled in regular classwork.&lt;/p&gt;

&lt;p&gt;Another common interview format involves take-home coding exercises. These are generally more effective than the previous format, especially when candidates are not timed. In my experience, candidates tend to showcase their skills better in this setting, as it more closely resembles a normal working environment. However, issues arise when these take-home exercises are hosted on online platforms with strict monitoring, where candidates are timed and their screens are recorded to ensure they aren’t using tools like ChatGPT, Google, or MDN.&lt;/p&gt;

&lt;p&gt;Many companies now provide AI coding tools—such as ChatGPT, which I use daily at Roku. While these tools don’t solve everything for you, they are akin to power tools. Just as a power screwdriver makes the task faster and easier, AI tools can enhance productivity without replacing fundamental skills.&lt;/p&gt;

&lt;p&gt;Despite these challenges, securing an interview feels like a rare relief in today’s job market. Many candidates apply to over 100 job postings just to land one interview. This paradox is puzzling, especially when hiring managers complain about the difficulty of finding qualified candidates. Throughout my career, I’ve rarely been in a position where we had a fully staffed team and weren’t hiring. We’re always looking for talent, so why are so many roles left unfilled, and why do candidates struggle to find jobs?&lt;/p&gt;

&lt;p&gt;The disconnect between the abundance of open roles and the difficulty in filling them may stem from a variety of factors. One possibility is that job descriptions are often overly specific or unrealistic, setting an unattainable bar that deters qualified candidates. Additionally, the hiring process itself may be overly rigid, with a heavy emphasis on traditional coding tests and a lack of focus on practical, real-world problem-solving skills. This can lead to a situation where even highly skilled candidates are overlooked because they don't fit a narrow set of criteria. Companies might also be struggling with internal inefficiencies or biases that prevent them from recognising and onboarding talent effectively. Addressing these issues requires a reevaluation of hiring practices, a broader understanding of what constitutes valuable experience, and a more inclusive approach to assessing candidate skills.&lt;/p&gt;

&lt;p&gt;Furthermore, the rapid pace of technological change means that the skills and experiences required for many roles are constantly evolving. This can create a gap between what companies need and what candidates offer. Many job seekers may have the relevant experience but lack specific keywords or certifications that are currently in vogue, which can hinder their chances of being noticed. Additionally, the emphasis on cultural fit and soft skills in many organisations can sometimes overshadow technical abilities, leading to mismatches between job requirements and candidate profiles. To bridge this gap, employers should consider adopting more flexible criteria for job qualifications, focusing on candidates' ability to learn and adapt rather than strictly adhering to a predefined list of skills. Embracing a more holistic view of candidate potential can help to better align hiring practices with the dynamic nature of the tech industry.&lt;/p&gt;

&lt;p&gt;In conclusion, the current state of the job market reflects a complex interplay of evolving skills, rigid hiring practices, and mismatched expectations. Despite the abundance of job openings, many qualified candidates struggle to secure positions due to unrealistic job descriptions, outdated evaluation methods, and a narrow focus on specific credentials. To address these challenges, both companies and candidates need to adopt a more flexible and forward-thinking approach. Employers should reassess their hiring criteria to emphasise practical problem-solving abilities and potential for growth, rather than strictly adhering to traditional qualifications. Meanwhile, candidates should focus on showcasing their adaptability and real-world problem-solving skills. By fostering a more inclusive and realistic hiring process, we can better align the needs of employers with the capabilities of job seekers, ultimately leading to a more efficient and equitable job market.&lt;/p&gt;

</description>
      <category>hiring</category>
      <category>interview</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
