We sprinted into the AI age of autocomplete IDEs; now we’re waking up wondering why we forgot how to write a for-loop.
Introduction: how I forgot how to code
You ever stare at your screen and suddenly forget how a for-loop works?
Same. Specifically, Lua’s for-loop. I was on a new machine, hadn’t signed into Copilot, and just sat there like a deer in a syntax-shaped headlight.
“for k, j in… wait… is it pairs? ipairs? What is Lua?”
That’s when it hit me: AI tools like Copilot, Cursor, and CodeWhisperer have slowly numbed our fingers. We’re coding faster than ever before, but we’re also thinking less. Repeating prompts like rituals. Accepting autocomplete like gospel. Forgetting why the code works and just being glad it does.
And I’m not the only one. Scroll dev Twitter or Reddit and you’ll see post after post:
“Ever since I started using AI, I feel like I’ve lost my ability to code without it.”
That’s not just people being dramatic. It’s happening. We’re creating a generation of devs who ship fast but can’t explain why their code runs.
This isn’t about bashing AI. I use it daily myself. But let’s talk honestly about what we’re trading: speed for mastery, autocomplete for understanding, shipping for skill.
The good news? We can fix this. And it starts with an uncomfortable question: are you actually learning, or just copying really fast?

I’ve got a promo code below: it helps you get started, and it helps me keep hacking and writing full-time. 🙌 Give UpCloud a try with €50 in free credits for 30 days (signup here).
My muscle memory is dying, and AI is the knife
Coding used to be a craft. You wrote the same loops, functions, and patterns until they lived in your fingers. You could write a binary search in your sleep. Now? You prompt, get a blob of code, squint at it, and ship.
Here’s what’s happening:
AI-powered IDEs are replacing repetition with suggestion. That sounds good until you realize repetition is how you learn. It’s like trying to get stronger at the gym by watching someone else lift weights for you.
“I’ve become super reliant on Copilot. It’s like magic… but now I blank out when I have to write anything from scratch.”
That’s not just a one-off tweet. That’s a common pattern. AI writes just enough to keep you from struggling. But in doing so, it also steals the friction, and friction is how knowledge sticks.
Let’s take syntax. Yeah, it’s not the sexiest part of programming, but it’s the glue between your logic and the machine. Sure, you don’t need to memorize it, until you’re offline, in a pinch, or chasing a subtle bug that only makes sense if you really know what the code is doing.
There’s a moment every dev hits when autocomplete fails, and if you haven’t built the reflexes, you just stall.
And trust me, there’s nothing like the existential dread of realizing you can’t write a simple to-do app from scratch without asking your code genie for help.
This isn’t Luddite gatekeeping. It’s a reminder that memory matters. Syntax matters. Struggle builds skill.
We’ve handed off the struggle. Now we’re shocked the skill didn’t stick.
Faster isn’t better, it’s just faster
Let’s be honest. AI feels incredible when you’re in the zone. You type half a function name and, boom, a whole block of working code appears. It’s like having a thousand Stack Overflow threads whispering in your ear, instantly.
But here’s the problem:
We’ve mistaken “I shipped it” for “I understood it.”
Junior devs today can deploy a feature faster than ever. But ask them why that code works, or how it breaks, and you’ll get blank stares, or a nervous chuckle followed by, “Uh… I think Copilot did that part.”
“We traded a lifetime of mastery for 5-second dopamine hits from shipping fast and impressing dev Twitter.”
Yeah, your code “works.” But so does duct tape on a leaking pipe, until the pressure builds.
Code isn’t static. Every line you ship is future maintenance debt. Every “it just works” block is a bug waiting for a bad day. Rewrites cost money. Worse: they cost confidence.
And let’s not forget the AI trap: it doesn’t care about your architecture. It cares about predicting the next token. That means you’re more likely to get a “popular” solution than a correct or sustainable one.
Shipping fast is great. But long-term growth doesn’t come from sprinting. It comes from building foundations. That’s why the best engineers aren’t the fastest; they’re the ones whose code is still running five years later.
Speed is fun. Speed is addictive.
But speed without understanding?
That’s just setting future-you on fire.
Debugging is where AI breaks, and you break with it
Here’s where things really fall apart: debugging.
You can lean on AI all you want to write code, but the second something breaks in an unexpected way (and it will), you’re on your own. Because AI can only guess. You’re the one who has to understand.
“Ask a junior dev to debug code Copilot helped them write, and you’ll see the fear in their eyes.”
Debugging is where real developers are forged. It’s the ultimate test of understanding: the place where shortcuts die and your mental model either holds up… or collapses like a Jenga tower.
Now imagine someone who’s spent the last year pasting in AI snippets and pressing run. What do they do when the app silently crashes on an edge case no one prompted for? What happens when the logs make no sense and AI can’t help?
AI doesn’t have full context. It doesn’t remember how your state mutated over the last 30 calls. It can’t see that one weird async race condition that only shows up on Fridays in staging.
And let’s be honest, we’ve all seen those Copilot hallucinations:
- Making up fake method names that don’t exist.
- Confidently suggesting broken logic.
- Copying Stack Overflow answers from 2012 with bugs included.
Worse, it all looks legit on the surface. If you’re not careful, you’ll trust it, ship it, and wonder why your system’s lights are blinking red at 2 a.m.
AI can assist with debugging, sure. But it can’t understand your system. That’s your job. And if you never built that understanding because you never struggled through the code… well, now you’re screwed.
You can’t debug what you never understood.
And you definitely can’t fix what you didn’t really write.
The fall of Stack Overflow and the rise of shallow answers
Once upon a time, if you had a weird bug or a language quirk you didn’t understand, you went to a sacred place:
Stack Overflow.
It wasn’t perfect. It could be smug, harsh, and full of graybeards correcting you on semicolon usage. But it forced you to ask better questions. It made you read. It shoved you face-first into five different opinions before you picked one and tested it yourself.
That process? It was learning in disguise.
Now? Most junior devs don’t even know what Stack Overflow is. They’re getting instant code snippets from ChatGPT or Copilot and calling it a day. No back-and-forth. No discussion. No painful but necessary exposure to the layers of “why” behind an answer.
“Reading discussions between experienced devs was the best way to learn. You didn’t just get what worked; you got why it worked.”
Stack Overflow forced you to research before posting. You learned from writing the question. You learned from the downvotes. You learned from the dude who wrote a four-paragraph comment because you misused parseFloat.
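And to be fair to that commenter, parseFloat earns the lecture. A couple of the classic footguns, in plain JavaScript:

```javascript
// parseFloat parses as much of the string as it can, then silently gives up.
parseFloat("12px");      // 12   (ignores the trailing "px")
parseFloat("1,234.56");  // 1    (stops at the comma)
parseFloat("abc");       // NaN

// Number() is stricter: it's all or nothing.
Number("12px");          // NaN
Number("");              // 0    (another surprise, in the other direction)
```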
AI doesn’t care how lazy your question is. It’ll confidently give you an answer, even if that answer is technically wrong but statistically plausible.
Yes, Stack Overflow could be toxic at times. But it created tribal knowledge: the kind of understanding you build through community pain and collective wisdom.
Now we’re getting answers without conversation.
Code without mentorship.
Fixes without foundations.
And that trade is costing us way more than we think.
Speed scales flat; understanding compounds
Let’s talk about growth curves.
Picture two devs.
Dev A is AI-powered. Copilot, ChatGPT, Cursor: they’re stacked. They’re shipping features in hours, not days. Looks impressive.
Dev B is old-school. They’re struggling. Googling. Reading docs. Writing broken code. Fixing it. Learning the hard way.
At first, Dev A leaps ahead. But over time, Dev B starts catching up, and then something wild happens: Dev B keeps climbing.
Why?
Because Dev B is building a mental model. They’re learning how systems work. How patterns emerge. How code flows, breaks, and heals. They’re learning how to debug, how to optimize, how to think in code.
“Speed gets you short-term wins. Understanding gives you long-term dominance.”
This is compounding knowledge. Every concept you master connects to the next one. Every bug you solve teaches you three new things. You don’t just ship; you scale yourself.
Dev A? They’re still fast. But their growth flatlines. They can ship what Copilot suggests… but when it comes to system design, optimization, or truly novel problems, they stall. Because their knowledge graph is shallow.
Speed is a burst.
Understanding is exponential.
So yeah, it feels like a cheat code to use AI all the time. But if you’re not also putting in the slow, boring, painful reps…
You’re not leveling up; you’re just pressing autofire.

Learning the what is easy; the why is everything
Everyone knows what a function is, right?
“It’s a reusable block of code you can call from anywhere.”
Cool. But… why does it exist that way? What happens under the hood? How does your language of choice store it in memory? What’s the stack doing during a recursive call? What’s actually happening when you return?
Crickets.
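For the record, the stack part of that answer fits in a few lines. Here’s a rough JavaScript sketch (the function is made up purely for illustration):

```javascript
// Toy example: each call pushes a new stack frame; each return pops one.
function sumTo(n) {
  if (n === 0) return 0;      // base case: the frames start unwinding here
  return n + sumTo(n - 1);    // this frame pauses and waits on the next one
}

sumTo(3);
// Frames on the stack, top to bottom, just before the base case:
//   sumTo(0) -> returns 0
//   sumTo(1) -> resumes, returns 1
//   sumTo(2) -> resumes, returns 3
//   sumTo(3) -> resumes, returns 6

// Push the depth far enough and the engine runs out of frames:
try {
  sumTo(1e6);
} catch (e) {
  console.log(e.message); // "Maximum call stack size exceeded" in V8; wording varies by engine
}
```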
This is the core problem with AI-assisted learning: you learn the what, but rarely the why. You get the definition. You don’t get the story.
Back in school (or in the self-taught bootcamp trenches), day one meant going deep: “what is an integer, and how is it stored in memory” deep. You learned about binary, two’s complement, and memory layout. Not because you needed to write in binary, but because you needed a mental model to build upward from.
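You can still catch that old model poking through in JavaScript, where the bitwise operators quietly work on 32-bit two’s-complement integers:

```javascript
// 64-bit floats on the surface, 32-bit two's complement underneath the bitwise ops.
console.log(~0);        // -1          (flip every bit of 0)
console.log(1 << 31);   // -2147483648 (the sign bit, set)
console.log(-1 >>> 0);  // 4294967295  (the same 32 bits, read as unsigned)
```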
Today? Devs jump straight into frameworks and AI tools that scaffold everything. They start at the rooftop garden and never see the foundation.
“Yes, Copilot can build you an auth system… but can you explain the difference between JWT and session cookies?”
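For what it’s worth, the core difference fits in a short sketch. This is illustrative Node (16+) code, not a real auth system; a production check would also validate the header and expiry and use a timing-safe comparison:

```javascript
import crypto from "node:crypto";

// Session cookie: the browser holds an opaque ID; the server holds the state.
// One lookup per request, trivially revocable, but it needs a shared store.
const sessions = new Map(); // hypothetical in-memory store
function getSession(sessionId) {
  return sessions.get(sessionId) ?? null;
}

// JWT (HS256): the token itself carries the signed claims, so verification is
// stateless: no store to hit, but revocation becomes your problem.
function verifyJwtHS256(token, secret) {
  const [header, payload, signature] = token.split(".");
  const expected = crypto
    .createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  if (signature !== expected) return null; // tampered, or signed with a different key
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}
```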
Understanding why things are built a certain way unlocks your ability to debug, refactor, optimize, and innovate. Without that, you’re just copying patterns, and when those patterns break, you’re stuck.
Every great dev you’ve ever admired? They didn’t just memorize solutions. They understood systems.
Because knowing how something works is good.
But knowing why it works that way?
That’s when you start building things worth remembering.
Find your tribe: learning is a team sport
Here’s a hard truth: you won’t get better just sitting alone with AI and a VS Code terminal.
Yes, you’ll ship. Yes, you’ll feel productive.
But growth? That happens in community, where smart people challenge your assumptions, review your code, and make you feel dumb in the most productive way possible.
“The best devs aren’t the ones with the best AI prompts; they’re the ones who hang out in the right Discord servers.”
Forget LinkedIn influencers and coding TikTok tips. If you really want to learn:
- Join a niche Discord server (Zig, Rust, Elixir, whatever you’re into).
- Lurk Reddit threads where arguments break out over language design.
- Drop into open-source PR discussions and see real engineers debate tradeoffs.
- Ask questions. Then ask why. Then ask again.
And no shade to Mastodon or Reddit, but Discord is the MVP here. It’s where you can find high-signal conversations with people way smarter than you who also love helping you not be dumb.
Pro tip: don’t just join. Contribute. Answer a newbie’s question. Post a design idea. Review someone else’s janky PR. You learn way more teaching than consuming.
AI gives you answers.
Your tribe gives you perspective.
And that “oh wow, I never thought about it that way” moment?
That’s the good stuff. That’s growth. That’s where you become the dev others want to ask questions to.
And eventually, when you see someone blindly copy-pasting AI snippets, you’ll get to be the one who says:
“Cool, but… do you know why that works?”
And that’s a good day.
Code reviews should be conversations, not checklists
Let’s be real: most code reviews these days look like this:
- “LGTM”
- “Rename this var”
- “Missing semicolon”
- “Nice job, merge away”
And sure, that works if your goal is to merge fast. But if your goal is to grow as a developer, that’s a missed opportunity the size of a full-page diff.
“Every code review is a window into another developer’s brain; don’t close it with ‘LGTM’.”
Imagine if instead of just checking whether the code runs, we asked:
- Why did you choose this approach?
- What alternatives did you consider?
- How would this break if X changed tomorrow?
- Could we have solved this with less complexity?
Suddenly, code review becomes knowledge transfer. It becomes mentorship. And every pull request becomes a team-wide learning moment.
But here’s the honest catch:
No one has time for that.
In the real world, you’re already behind. You’ve got Jira tickets stacking up. And now you’re being pinged again to re-review a thing you merged 3 minutes ago?
Yeah. It’s hard.
But even if you can’t do the full Socratic seminar, just one thoughtful question per review can change everything. It nudges people toward deeper thinking. It makes them explain their choices. It reveals whether they actually understood the problem, or just prompted ChatGPT into solving it.
And if you’re the one getting reviewed? Ask for it. Seriously.
“Hey, can you tell me if this is over-engineered?”
“Would love your thoughts on tradeoffs here.”
Because a good code review isn’t just about catching bugs.
It’s about catching you before you stop learning.

Build from scratch: it’s the dev equivalent of leg day
You know what nobody wants to do, but everybody needs to do?
Build stuff from scratch.
Not with npx create-next-app.
Not by pasting boilerplate from ChatGPT.
I mean really build something.
Like “I-just-learned-what-a-socket-is” kind of build.
“Yes, AI can generate a full auth system, but you should build your own at least once. Just to feel the pain. Just to understand.”
Pick something foundational. Like:
- Implementing WebSockets from raw TCP.
- Writing a router in pure JS.
- Parsing a .env file manually (sketched right after this list).
- Building a React-like renderer using just DOM APIs.
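To make the .env one concrete, here’s roughly what a first pass looks like: a deliberately naive sketch (no multiline values, no escapes, no variable expansion), which is exactly where the learning starts.

```javascript
import fs from "node:fs";

// A minimal .env parser: KEY=VALUE lines, comments, optional quotes. That's it.
// A sketch for illustration, not the dotenv package.
function parseEnv(path) {
  const vars = {};
  for (const line of fs.readFileSync(path, "utf8").split(/\r?\n/)) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;                           // not a KEY=VALUE line
    const key = trimmed.slice(0, eq).trim();
    const raw = trimmed.slice(eq + 1).trim();
    vars[key] = raw.replace(/^(['"])(.*)\1$/, "$2");   // strip matching surrounding quotes
  }
  return vars;
}

// Object.assign(process.env, parseEnv(".env")); // example usage
```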
Will it be slower? Absolutely.
Will the code suck? Almost guaranteed.
But the understanding you gain compounds like interest in a high-yield savings account, except it pays off in future interviews, performance debugging, and architectural debates.
“I rewrote a Twitch integration system on two 7-hour plane rides with zero internet. And it felt amazing.”
That’s what building from scratch gives you: unfiltered confidence. The kind you can’t get from Copilot filling in the blanks for you.
Even better, there are tools now that guide you through this kind of foundational pain:
- Autobahn WebSocket Test Suite: test your raw WebSocket implementation like a beast.
- Boot.dev’s HTTP from TCP course: learn how the internet actually works, from the bits up.
- Zig’s Discord: because writing low-level code with smart weirdos builds high-level skills.
AI is great at giving you scaffolding.
But once in a while, ditch the scaffolding and pour your own concrete.
Because when you understand every line, no bug is scary.
No black box is truly black.
And no interview question catches you off guard.
Your worst code will teach you the best lessons.
So go build something broken and fix it the hard way.
AI isn’t the villain, but it sure is a tempting crutch
Let’s get one thing straight:
AI isn’t the enemy.
It’s just… too helpful.
Like that one group project guy who volunteers to do all the work and then wonders why no one else learned anything.
AI lets you skip struggle.
But struggle is where the learning lives.
“We’re not doomed because we use AI. We’re doomed if we only use AI.”
It’s not about whether you should use Copilot or ChatGPT (spoiler: you should, they’re amazing tools).
It’s about how you use them.
Do you:
- Accept the first suggestion and move on?
- Or pause and ask, “Why this solution? Is there a better one?”
Do you:
- Let Copilot write your functions and blindly ship?
- Or rewrite them later, just to understand how they actually work?
One dev uses AI like a jetpack.
The other uses it like a crutch.
Guess which one’s going to survive the next legacy codebase handoff.
Here’s the secret: AI makes you faster, but understanding makes you irreplaceable.
Because the deeper your mental model, the more AI becomes a superpower, not a substitute.
So yes, prompt the LLM. Let it autocomplete your brain sometimes.
But interrogate every answer like it’s lying to you.
Because sometimes… it is.
And the dev who knows the difference?
That’s the dev who wins.
You are not an answer relay
Let’s end with the core of all this:
You are not a human API for ChatGPT.
If your entire workflow is copy → paste → ship, congratulations: you’ve been demoted to middleman. You’re not coding. You’re curating autocomplete.
“Your skill ceiling is now limited by the depth of your prompt… and that’s terrifying.”
The whole point of being a developer isn’t to type fast. It’s to think well. To model problems. To architect solutions. To understand tradeoffs. To debug chaos. To ask better questions. To mentor, explain, abstract, and build.
And guess what? AI can do none of that without you.
If you don’t push beyond surface-level solutions, you’ll always be replaceable by the next intern, the next plugin, the next LLM that types 20% better.
But if you go deep, if you understand systems, patterns, internals, tradeoffs, and architecture, you become the dev who others turn to when AI fails.
“The future isn’t about who uses AI. It’s about who understands beyond it.”
You don’t need to memorize everything. You don’t need to hand-write red-black trees in interviews (unless you’re into that kind of pain).
But you do need to learn how to think like a developer again before the autocomplete eats that part of your brain.
So build things. Break things. Ask “why” until your team gets annoyed.
Use AI. But don’t let it use you.
Because at the end of the day, your job isn’t to relay answers.
It’s to create understanding.
And that’s something no model can fake.
The real dev flex? Deep understanding
We live in a time where AI can ace LeetCode, build landing pages, refactor spaghetti, and scaffold apps with a single prompt.
So what’s left for you?
Plenty.
The real flex in 2025 isn’t how fast you can ship; it’s how deeply you understand. It’s being the dev who can:
- Design systems from first principles.
- Debug failures no AI can untangle.
- Explain a bug, a fix, and the tradeoffs in between clearly.
- Say “No, this looks right… but it’s actually wrong,” and then prove it.
“When the AI gets stuck, your depth is what saves the sprint.”
The flashy devs who automate their whole workflow end up plateauing. They coast on AI’s surface-level suggestions. But you? You’re investing in skill that compounds. You’re playing the long game.
Because one day, someone will say, “This weird edge case is breaking everything, and AI has no clue why.”
And you’ll say,
“Cool. Let’s figure it out.”
That’s the real dev energy.
So how do you build that kind of depth? A few habits that actually help:
1. Use AI with a learning mindset
Don’t just accept the first answer. Ask it why. Ask it for tradeoffs. Ask it for the same solution in three different ways.
Treat it like a junior dev you don’t fully trust because, well, you shouldn’t.
2. Join communities that force you to think
Not just lurk. Engage. Post questions. Review other people’s code. Get roasted.
Discord is great for this: niche, opinionated, and full of devs who will challenge your assumptions in the best way possible.
3. Do regular “AI-free” builds
Try a weekend project where you don’t prompt at all. Force yourself to write, debug, and Google like it’s 2014.
Yes, it’ll feel slow. That’s the point.
It’s like taking off training wheels to learn balance.
4. Start code reviews with one key question: “Why?”
Even if the code works, dig. Ask what else was considered. Ask what could go wrong.
If nothing else, you’ll train your own pattern recognition by hearing how others think.
5. Rebuild at least one thing from scratch
Pick a system you use every day, like routing, auth, or state management, and build your own baby version (there’s a sketch of the state-management one below).
Yes, it will suck. Yes, it’ll be glorious.
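If state management is the one you pick, the baby version really is this small. A sketch, not Redux; the names here are made up:

```javascript
// A tiny state store: one state value, one reducer, a set of subscribers.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);        // every change goes through one door
      for (const fn of listeners) fn(state); // notify everyone who's watching
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn);     // call this to unsubscribe
    },
  };
}

// const counter = createStore((s, a) => (a.type === "inc" ? s + 1 : s), 0);
// counter.subscribe((s) => console.log("state is now", s));
// counter.dispatch({ type: "inc" }); // logs "state is now 1"
```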
None of these are fast.
But they’re the only way to stop being an AI-shaped echo and start becoming the kind of dev who can solve problems AI hasn’t even seen yet.
Conclusion: don’t become a human autocomplete
We’re living in a golden age of developer tools.
But here’s the dark side: the better our tools get, the less we have to understand.
And that’s a trap.
Because the goal of being a developer isn’t to ship the fastest code; it’s to write the right code. It’s to design systems that last. It’s to debug chaos, model complexity, and explain things clearly to humans, not just compilers.
“The more you rely on AI, the more you risk becoming a glorified prompt engineer, and that’s not what you signed up for.”
Use AI. Love AI. Push it to the edge.
But never stop asking questions.
Never stop building dumb little things from scratch.
Never stop sitting with broken code just long enough to understand why it broke.
Because when you actually know, rather than just autocomplete, you gain something no AI can replicate:
Judgment. Insight. Mastery.
And in a future where everyone’s getting faster, the dev who goes deeper wins.
Blog Sponsor of the Month: UpCloud
UpCloud is a Finland-based European cloud hosting provider known for its exceptional reliability, speed, and performance. Built for developers who demand control and efficiency, it offers robust infrastructure with transparent pricing and global data centres.
For our readers, UpCloud is offering €50 in free credits for an extended 30-day trial. Test the platform using this signup link or apply promo code DEVLINK50 at signup.
Tried it out? Drop a comment below, we’d love to hear your feedback and experiences.
Resources & links
Foundational learning:
- Boot.dev’s Learn HTTP from TCP course: go low-level and understand how the web actually works.
- Computer Science Crash Course (YouTube): bite-sized CS concepts explained clearly.
- CS50 by Harvard (free): a full, beginner-to-intermediate friendly intro to computer science.
Build from scratch:
- Autobahn WebSocket Test Suite: a gold-standard tool for testing WebSocket implementations.
- Build Your Own X: a collection of projects like “Build your own Git”, “Docker”, “Redis”, etc.
- nand2tetris: from logic gates to a working computer. The most rewarding pain you’ll ever experience.
Developer communities:
- Zig Discord: for devs who like control, weird syntax, and good vibes.
- r/ExperiencedDevs: honest, often salty takes from devs who’ve been through it all.
- Dev.to: good for sharing experiences, writing, and reading posts from devs across the spectrum.
Use AI wisely:
- Prompt Engineering Guide: learn how to ask smarter questions.
- Cursor.sh: an AI IDE that can supercharge your workflow, if you don’t let it become a crutch.
- awesome-chatgpt-prompts: great for inspiration, not substitution.
