I just finished creating a multiplayer online party game using only Claude Code. I didn't edit a single line. However, there is no way someone who doesn't know how to code could get where I am with it.
You have to have an intuition about the sources of a problem. You need to be able to at least glance at the code and understand when and where the AI is flailing, so you know when to backtrack or reframe.
Without that, you are just as likely to totally mess up your app. Which also means you need to understand source control, when to save, and how to test methodically.
I was thinking about that, but asking the right questions and learning the problem domain just a little bit ("getting the gist of things") will help a complete newbie generate code for complex software.
For example, in your case there is the concept of message routing, where a message sent to a room is copied to all the participants.
You have timers, animation sheets, events, triggers, etc.
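I haven't seen their code, so this is only a conceptual sketch (all names invented), but the routing idea is something like:

    // A room copies every incoming message to all of its participants.
    type Send = (msg: string) => void;

    class Room {
      private participants = new Map<string, Send>();

      join(id: string, send: Send) { this.participants.set(id, send); }
      leave(id: string) { this.participants.delete(id); }

      broadcast(from: string, msg: string) {
        for (const [id, send] of this.participants) {
          if (id !== from) send(msg); // typically skip the sender
        }
      }
    }

    const room = new Room();
    room.join("alice", (m) => console.log("to alice:", m));
    room.join("bob", (m) => console.log("to bob:", m));
    room.broadcast("alice", "ready!"); // only bob receives it

Even just knowing that this one-to-many copy exists tells you where to look when a message shows up for some players and not others.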
A question that extracts such architectural decisions and the relevant pieces of code will help the user understand what they are actually doing, and also help them debug the problems that arise.
It will of course take them longer, but it is possible to get there.
So I agree, but we aren't at that level of capability yet. Currently it inevitably hits a wall at some point, and you need to dig deeper to push it out of the rut.
Hypothetically, if you codified the architecture as a form of durable meta tests, you might be able to significantly raise the ceiling.
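Something like this, as a minimal sketch (the directory layout and the rule are invented, not theirs): an architectural rule written down as a plain test that scans the source tree, so it keeps firing every time the structure is violated.

    import { readdirSync, readFileSync } from "node:fs";
    import { join } from "node:path";
    import assert from "node:assert";
    import test from "node:test";

    // Collect .ts files under a directory, recursively.
    function sourceFiles(dir: string): string[] {
      const out: string[] = [];
      for (const entry of readdirSync(dir, { withFileTypes: true })) {
        const full = join(dir, entry.name);
        if (entry.isDirectory()) out.push(...sourceFiles(full));
        else if (entry.name.endsWith(".ts")) out.push(full);
      }
      return out;
    }

    // Hypothetical rule: client code never imports the socket layer directly.
    test("src/client does not import from sockets/", () => {
      for (const file of sourceFiles("src/client")) {
        const code = readFileSync(file, "utf8");
        assert.ok(!/from ["'][^"']*sockets\//.test(code),
          `${file} imports the socket layer directly`);
      }
    });

Unlike an instruction in a prompt, a failing test stays in the feedback loop even after the original guidance has scrolled out of context.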
Decomposing into interfaces actually seems to increase architectural entropy rather than decrease it when Claude Code is acting on a codebase over a certain size/complexity.
So yes and no. I often just let it work by itself. Towards the very end, when I had more of a deadline, I would watch and interrupt it when it was putting implementations in places that broke its architecture.
I think only once did I ever give it an instruction that was related to a handful of lines (there certainly were plenty of opportunities, don't get me wrong).
When troubleshooting I did occasionally read the code. There was an issue with player-to-player matching where it was just kind of stuck, and I gave it a simpler solution (conceptually, not actual code) that worked for the design constraints.
I did find myself hinting/telling it to do things like centralize the CSS.
It was a really useful exercise in learning. I'm going to write an article about it. My biggest insight is that "good" architecture for a current-generation AI is probably different than for humans, because of how attention and context work in the models/tools (at least for the current Claude Code). Essentially, "out of sight, out of mind" creates a dynamic where decomposing code leads to an increase in entropy when a model is working on it.
I need to experiment with other agentic tools to see how their context handling impacts the possible scope of work. I extensively use GitHub Copilot, but I control scope, context, and instructions much more tightly there.
I hadn't really used hands-off automation much in the past because I didn't think the models were at a level where they could handle a significantly sized unit of work. Now they can, with large caveats. There is also a clear upper bound with Claude Code, but that can probably be significantly improved by better context handling.
So if you're an experienced, trained developer, you can now add AI as a tool to your skill set? This seems reasonable, but it is also a fundamentally different statement than what every. single. executive. is parroting to the echo chamber.
I have a strong memory from the start of my career, when I had a job setting up Solaris systems and there was a whispered rumour that one of the senior admins could read core files. To the rest of us, they were just junk that the system created when a process crashed and that we had to find and delete to save disk space. In my mind I thought she could somehow open the files in an editor and "read" them, like something out of the Matrix. We had no idea that you could load them into a debugger which could parse them into something understandable.
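(In case anyone else spent years not knowing: it's roughly

    gdb /path/to/the/binary core
    (gdb) bt        # stack trace at the moment of the crash

with paths depending on the system. On Solaris it would have been dbx or mdb rather than gdb, but the idea is the same.)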
I once showed a reasonably experienced infrastructure engineer how to use strace to diagnose some random hangs in an application, and it was like he had seen the face of God.
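If you've never tried it, the usual incantation for a hang is something like (PID made up):

    strace -f -tt -p 12345

which attaches to the running process, follows its children (-f), and timestamps every system call (-tt), so a process stuck in a futex wait or a blocking read becomes immediately obvious.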
(Anecdote) Best job I ever had: I walked in and they were like, "Yeah, we don't have any training or anything like that, but we've got a fully set up lab and a rotating library of literature." <My Boss> was basically: "Yeah, I'm not going to be around, but here are the office keys. Don't blow up the company."
To be honest, I do find most manuals (man pages) horrible for quickly getting information on how to do something, and here LLMs do shine for me (as long as they don't mix up version numbers).
For man pages, you have to already know what you want to do and just want information on how exactly to do it. They're not for learning about the domain. You don't read the find manual to learn the basics of filesystems.
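For example, you come to man find already knowing the goal (say, delete week-old temp files); the page just supplies the exact flags:

    # goal known in advance; the man page supplies -type, -mtime, -delete
    find /var/tmp -type f -mtime +7 -delete

It will never tell you that this is the kind of problem find is for in the first place.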
I mean, the process either works or it doesn't. Meaning it either brings in the expected value with an acceptable level of defects, or it doesn't.
From a higher-up's perspective, what they do is not that different from vibe coding anyway. They pick a direction, provide a high-level plan, and then watch as things take shape, or don't. If they are unhappy with the progress, they shake things up (reorgs, firings, hirings, adjusting the terminology around the end goal, making rousing speeches, etc.).
They might realise that they bet on the wrong horse when the whole site goes down and nobody inside the company can explain why. Or when the hackers eat their face and there are too many holes to even say which one they came through. But these things already happen regularly with the current processes too. So it is more a difference in degree than in kind.
I agree with this completely. I get the impression that a lot of people here think of software development as a craft, which is great for your own learning and development but not relevant from the company's perspective. It just has to work well enough.
Your point about management being vibe coding is spot on. I have hired people to build something and just had to hope that they built it the way I wanted. I honestly feel like AI is better than most of the outsourced code work I've commissioned.
One last piece, if anyone does have trouble getting value out of AI tools, I would encourage you to talk to/guide them like you would a junior team member. Actually "discuss" what you're trying to accomplish, lay out a plan, build your tests, and only then start working on the output. Most examples I see of people trying to get AI to do things fail because of poor communication.
> I get the impression that a lot of people here think of software development as a craft, which is great for your own learning and development but not relevant from the company's perspective. It just has to work well enough.
Building the thing may be the primary objective, but you will eventually have to rework what you've built (dependency changes, requirement changes, ...). All the craft is for that day, and whatever goes against that is called technical debt.
You just need to make some tradeoffs between getting the thing out as fast as possible and being able to alter it later. It's a spectrum, but instead of discussing it with the engineers, most executive suites (and their managers) want to hand down edicts from on high.
> Building the thing may be the primary objective, but you will eventually have to rework what you've built (dependency changes, requirement changes, ...). All the craft is for that day, and whatever goes against that is called technical debt.
This is so good I just wanted to quote it so it showed up in this thread twice. Very well said.
Us?
(Yeah, we’re fucked)