It’s clear from the industry investment that AI is not going away any time soon. Quite the opposite - thought leaders expect most code to be written by AI in the near future. This post won’t get into the debate over the ROI of that investment, or whether we’re in a bubble.
Instead it will focus on a pressing reality: your teams are already using AI, and as an engineering leader, you need to guide its adoption strategically.
Embrace AI Proactively
The question of whether tech orgs should embrace AI is moot - your teams are already using it. They don’t tout this, because there’s a new flavor of imposter syndrome that comes along with it. Your senior engineers will be more resistant to AI than your junior engineers, but none will want to acknowledge using it extensively - not until you make it safe to do so.
As a leader, your role isn’t to debate if AI should be used but how. Encourage its adoption openly, set clear policies, and destigmatize its role in workflows. Hesitation risks shadow usage without oversight, amplifying potential downsides.
Takeaway: AI tools will make your team more productive, but they can be a double-edged sword if you don’t act decisively to channel their benefits and curb their risks.
Understand AI’s Power and Pitfalls
AI agent tools dazzle. Tools like Cursor, Cline and Continue (just to name some of the C’s) will convince you that manually writing code is like banging rocks together to make a fire. These tools can write thousands of lines of high-quality code in minutes. They even have feedback cycles that will check for errors and warnings from your linter (and soon run-time issues), execute unit tests, and implement fixes. When it works well, it’s amazing.
The hiccups often stem from poor planning, complex codebases, or outdated context (e.g., targeting the API of an older version of a framework). The good news? Tooling improves daily, and AI-experienced engineers will develop an intuition for the limitations.
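The feedback cycle described above - generate a change, run the linter and tests, feed the failures back, and retry - can be sketched conceptually. The snippet below is a toy model, not any real tool’s implementation: `run_checks` and `generate_fix` are stand-ins for a real linter/test run and a real model call.

```python
# Conceptual sketch of an agent feedback cycle: generate a change,
# run checks (lint, tests), and retry with the failures as context.
# run_checks and generate_fix are hypothetical stubs, not a real
# linter or LLM call.

def run_checks(code: str) -> list[str]:
    """Stand-in for a linter + test run; returns a list of failures."""
    failures = []
    if "TODO" in code:
        failures.append("lint: unresolved TODO")
    if "def " not in code:
        failures.append("test: no function defined")
    return failures

def generate_fix(code: str, failures: list[str]) -> str:
    """Stub for the model call that would rewrite code given the failures."""
    if "lint: unresolved TODO" in failures:
        code = code.replace("TODO", "done")
    if "test: no function defined" in failures:
        code = "def main():\n    pass\n" + code
    return code

def agent_loop(code: str, max_iters: int = 5) -> tuple[str, bool]:
    """Iterate until checks pass or the iteration budget is exhausted."""
    for _ in range(max_iters):
        failures = run_checks(code)
        if not failures:
            return code, True
        code = generate_fix(code, failures)
    return code, not run_checks(code)

code, ok = agent_loop("# TODO: implement")
print(ok)  # True once both the lint and test checks pass
```

The `max_iters` budget is the important design point: real agents can loop indefinitely on a failure they can’t diagnose, which is exactly where an engineer’s intuition about the tool’s limitations matters.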
Takeaway: Recognize AI’s strengths (speed, boilerplate) and weaknesses (fuzzy context, lack of precision) to set realistic expectations and guardrails.
Build a Culture of Verification
Developers are “lazy” in the best way - automating grunt work to focus on tough problems, as my colleague Brent highlights in his recent book. AI supercharges this instinct but tempts complacency. Under deadline pressure, even diligent engineers might start to vibe code and accept “slop” (AI’s unorganized, undisciplined output), especially if their AI tools have heretofore been reliable.
Counter this by normalizing AI’s role while insisting on oversight. Micromanagement isn’t the goal; accountability is. Code reviews must evolve to catch AI’s subtle errors, from unused code to security gaps.
Takeaway: Trust AI to accelerate coding, but mandate verification to protect quality; your codebase depends on it.
Planning is Critical
Agile and Scrum give you a head start, but AI demands next-level planning. It can generate files and features faster than any human, but without a tight roadmap, it veers off course quickly. An unchecked AI agent might churn out irrelevant code while an engineer blindly “accepts” the output.
Encourage your teams to use autocomplete suggestions, rather than letting agents make broad changes. This will ensure your engineers build the proper mental models around their code and have accountability for their contributions.
If you choose to embrace AI to make broader changes, ensure the work is broken into small, well-defined tasks, and that your engineers are micromanaging the process.
Takeaway: Invest in planning to amplify AI’s efficiency and avoid costly detours.
Safeguard Security and Quality
AI’s output isn’t inherently secure or maintainable. For example, I’ve seen it slap 20 CSS classes on a variety of `<div>` tags, creating an untenable mess, or prioritize quick fixes that hide vulnerabilities. Without vigilance, these flaws compound in production.
Bolster code reviews, enforce security standards (e.g., input validation, access controls), and consider AI-driven audits as a proactive defense. Quality isn’t optional: it’s your reputation.
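To make “enforce security standards” concrete, here is a minimal sketch of the kind of explicit input validation reviewers should insist on in AI-generated handlers. The field names and limits are hypothetical; adapt them to your own schema.

```python
# A minimal sketch of explicit input validation - the kind of check
# AI-generated handlers often omit. Field names and limits here are
# hypothetical examples, not a prescribed schema.

import re

USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,32}$")

def validate_signup(payload: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the payload is OK."""
    errors = []
    username = payload.get("username", "")
    if not isinstance(username, str) or not USERNAME_RE.fullmatch(username):
        errors.append("username must be 3-32 chars: letters, digits, underscore")
    age = payload.get("age")
    if not isinstance(age, int) or not 13 <= age <= 120:
        errors.append("age must be an integer between 13 and 120")
    return errors

print(validate_signup({"username": "ada_99", "age": 28}))   # []
print(validate_signup({"username": "a!", "age": "28"}))     # two errors
```

The point isn’t this particular schema - it’s that validation is explicit, testable, and reviewable, rather than an assumption buried in generated code.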
Takeaway: Treat AI code as a first draft, not a final product, and hold it to your organization’s bar.
Measure AI’s Impact Strategically
Does AI save time or shift effort to cleanup? Are bug rates climbing in AI-heavy areas? Without metrics, you’re guessing. Start small (a pilot team or project) and track bug rates, velocity, and review time. Data reveals whether AI boosts productivity or bogs down delivery.
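The comparison a pilot might run can be sketched simply: tag commits as AI-assisted or not (the tagging mechanism is up to you - trailers, tool telemetry, or self-reporting), then compare how often each cohort later needed a bug-fix follow-up. The data structure below is illustrative, not a real integration.

```python
# A hedged sketch of a pilot metric: compare bug rates between
# AI-assisted and human-written commits. The commit records here are
# illustrative; how you tag commits as AI-assisted is up to you.

def bug_rate(commits: list[dict], ai_assisted: bool) -> float:
    """Fraction of commits in the cohort later linked to a bug fix."""
    cohort = [c for c in commits if c["ai_assisted"] == ai_assisted]
    if not cohort:
        return 0.0
    return sum(c["caused_bug"] for c in cohort) / len(cohort)

commits = [
    {"ai_assisted": True,  "caused_bug": True},
    {"ai_assisted": True,  "caused_bug": False},
    {"ai_assisted": True,  "caused_bug": False},
    {"ai_assisted": False, "caused_bug": False},
    {"ai_assisted": False, "caused_bug": False},
]

print(f"AI cohort bug rate:    {bug_rate(commits, True):.2f}")
print(f"Human cohort bug rate: {bug_rate(commits, False):.2f}")
```

Even a crude measure like this beats guessing, and it surfaces the question that matters: is AI saving time overall, or shifting effort to cleanup?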
At Riff, we’ve helped orgs double output with AI, but success hinges on visibility into its effects. We’re actively building accountability tooling to track AI-generated versus human code that integrates directly into popular dev tools.
Takeaway: Quantify AI’s footprint to refine its role and justify broader adoption.
Train Your Teams For Success
Making AI tools available isn’t enough: your engineers need guidance. Host training on effective usage, spotlighting pitfalls like context overload or dependency mismatches. Show them the conditions under which AI shines and where it flops.
Untrained teams fumble, trained teams thrive. It’s an investment in capability, not just tooling.
Takeaway: Equip your people to wield AI responsibly.
In Conclusion
Your teams are already coding with AI - now it’s your move. Embrace it openly, set policies, and align it with your goals: faster delivery, stronger products, sharper teams. The risk isn’t in adoption, it’s in lagging behind.
At Riff, we’ve guided tech orgs through AI integration, from tool selection to best practices to AI solutions. Need help shaping your strategy? Connect with us today.