DEV Community

Which Code Assistant Actually Helps Developers Grow?

BekahHW on June 27, 2025

Over the past year, we've had a ton of conversations at Virtual Coffee about AI. If you've opened up X or LinkedIn, you probably realize that peopl...

Samuel

I'm curious to see how Cascade (Windsurf) stacks up against these. I'd also like to see how different backends work as well (Claude, SWE, etc.). I have yet to see a really comprehensive comparison, and I've only ever seen Cascade in a comparison once. (I'm a huge fan of Cascade; having tried a bunch of others, Cascade is still the only one that works with me instead of against me.)

BekahHW

Definitely planning on adding windsurf in a future post!

Dotallio

Totally agree that AI assistants shouldn't just give you code, but help you really understand the 'why' behind it - seeing an assistant act like a mentor instead of a shortcut makes all the difference for growth.

Anyone else found specific prompts or habits that make AI tools more helpful for genuine learning?

BekahHW

I think building the conversation into your work with it is super helpful. For example, just asking "Why did you take that approach?" or "Is there a better approach to take?" A while back, I used ChatGPT to help me with a feature for a personal project that I wanted to get out really quickly. It got the job done, but it used an outdated method, and I had to know it was outdated to make the fix. That's one of the reasons I like Continue so far: I have the latest docs linked to my assistant.
[Image: docs linked to Astro]

Otherwise, it's super important to cross-check the approach with the docs.

Nevo David

pretty cool seeing someone dig deeper into what actually helps folks learn - you think it’s more about the tools or how we use them over time?

BekahHW

I think it's both, for sure. I was on an airplane once where they tried to use a wrench to pound in something that was loose on the door...Thankfully, they deplaned us bc that was definitely the wrong tool for the job.

You have to have the right tool for the problem and you have to know how to use it.

david duymelinck

If you had to make a similar feature, how much would you remember from this time?
I'm asking because of all the noise about AI use making people less knowledgeable. Are you really learning, or are you always going to need AI as a crutch?

Because of all the fuss, I looked at my own patterns, and I found that I relied on CLI autocomplete and memorized search terms to find the solutions I needed.
I'm really wondering why I don't know those things after doing them many times before.

Maybe that is one of the reasons I haven't committed to an AI workflow when coding. I'm afraid of unlearning too much.
I use AI when I'm stuck, and then I do learn something, because the decision to add it is more mine, if that makes sense.

BekahHW

I think we see this all the time. We have tools that allow us to move faster. I think any engineer who's done an interview where they're made to code in Notepad realizes how much they depend on autocomplete, or Prettier, or extensions that do things for them so they can move with confidence.

But they still need to know how to ask the right questions and to debug, bc that's not going to change. This is why you can't rely on AI. You still have to know what you're doing or you're not going to be able to fix it when it breaks. I think if you're deliberate about making sure you're still learning and that you're still questioning the AI implementation, then you'll have the speed of AI and the depth of someone who took the time to learn.

david duymelinck • Edited

That is all well and good for me, but I was thinking about people who are starting in IT now and their path to knowing enough about what they're doing.

In the times of Stack Overflow, you had to search for code and copy-paste it. Now faulty code is written for you at breakneck speed.

As an experiment, I built a CLI code generator to bootstrap classes. I was amazed at all the questions I had to make it ask to cover eighty percent of the use cases. With AI, you don't need to ask a lot of those questions, because it assumes the answers for you.
So people with little experience don't know there are decisions that have been made for them.

I feel AI is like an experienced developer who gives you directions but doesn't explain the whole reasoning, because AI can't, as far as I know.
For the people who are better with visual comparisons, think of the "how to draw an owl" meme, where the first drawing is some circles and the second drawing is a detailed owl.
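The bootstrapper experiment above can be sketched in a few lines. The function name and the question set here are hypothetical (not the actual generator described in the comment), but each parameter stands for one question the CLI has to ask explicitly, the kind of decision an AI assistant would silently make for you:

```python
# Hypothetical sketch of a class-bootstrapping generator.
# Every parameter is one question the CLI would have to ask;
# an AI assistant picks defaults for all of them without telling you.

def bootstrap_class(name, fields, dataclass=False, frozen=False):
    """Render Python source for a class from explicit answers.

    name      -- class name                  ("What is the class called?")
    fields    -- list of (field, type) pairs ("Which fields, which types?")
    dataclass -- use @dataclass?             ("Plain class or dataclass?")
    frozen    -- immutable instances?        ("Should instances be frozen?")
    """
    if dataclass:
        lines = [
            "from dataclasses import dataclass",
            "",
            f"@dataclass(frozen={frozen})",
            f"class {name}:",
        ]
        lines += [f"    {f}: {t}" for f, t in fields]
    else:
        sig = ", ".join(f"{f}: {t}" for f, t in fields)
        lines = [f"class {name}:", f"    def __init__(self, {sig}):"]
        lines += [f"        self.{f} = {f}" for f, _ in fields]
    return "\n".join(lines)


print(bootstrap_class("User", [("name", "str"), ("age", "int")]))
```

Even this toy version forces four explicit decisions; covering eighty percent of real use cases would multiply the question list quickly.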

BekahHW

One of the most important parts of learning to be a developer is learning how to ask questions. But, tbh, that's probably true for a lot of professions. I taught college English for 10 years before coming into tech, and my main goal was to teach my students to think critically, listen, and be able to ask good questions, questions that challenged their own beliefs.

I think there are going to be plenty of people that move super fast and look like they know what they're doing bc of AI, but I think at some point, they're going to hit the wall and won't be able to progress or might not even be able to maintain a job.

The example in this post is a new feature, and it's not a complex one. Using AI coding assistants on an existing codebase will help to show those flaws. When I graduated from bootcamp, I had never worked on a large codebase; I had worked on my own projects with a handful of files. When I got my first job, I was so overwhelmed by the huge codebase and trying to figure out what was happening where. I do think having a coding assistant would have helped me navigate the complexity. But I also think that if I had depended on AI to do the work for me, I would never have been successful in that role.

0x2e73

Up until a few months ago, I was fully in favor of using AI for development. I thought I had a healthy approach: I wasn’t using prompts to generate full code, I always reviewed the suggestions, and I had Copilot running quietly in the background just to help speed things up a little.

But after a few months, I realized that we almost always end up overusing it—even without noticing.

This really hit me when I took an entrance exam for a computer science program. The test was on paper, no computer allowed. And that’s when I got a wake-up call. It took me way longer than expected to write out my algorithm. It was like the code didn’t come naturally anymore, like I had unlearned how to think through it on my own.

That’s when I decided to completely turn off Copilot. Now, I also try to use ChatGPT as little as possible for tasks I can handle myself. I mainly rely on it to review my code or suggest improvements—but I want to stay sharp, keep my brain active, and continue growing through my own efforts.

Meligy

I wonder how much of the effect is the agentic tool itself and how much is the model. For example, would Copilot get you different results when using a Claude model? Sure, but how different?