There is often a misunderstanding that “REPL driven development” means typing things into the REPL prompt interactively and accumulating state that’s not easy to reproduce. Writing code down in comment blocks as you go is the solution to that, as you mentioned. With that technique I have no fear of restarting my REPL and getting back to where I was.
Exactly. I start in the repl, but when it becomes a few expressions deep, I transfer it to the editor and turn it into a proper defun. Then I 'send' the defun to the repl and test it out there. Rinse, repeat.
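As a concrete sketch of that workflow (the `parse-price` function and the namespace name here are made up for illustration), the scratch expressions live in a `(comment ...)` block right next to the promoted function, so nothing is lost on a REPL restart:

```clojure
(ns scratch.prices
  (:require [clojure.string :as str]))

;; The finished function, promoted from the scratch block below.
(defn parse-price
  "Parses a price string like \"$12.50\" into a double.
  Requires Clojure 1.11+ for parse-double."
  [s]
  (parse-double (str/replace s #"[^\d.]" "")))

(comment
  ;; Scratch expressions: evaluate these from the editor one at a time.
  ;; They stay in the file, so restarting the REPL loses nothing.
  (parse-price "$12.50")   ;; => 12.5
  (parse-price "1,000.00") ;; => 1000.0
  )
```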
> I start in the repl [...] I transfer it to the editor
That's exactly the kind of misunderstanding parent was talking about :)
Why start anywhere other than your editor, if it's already hooked up to the REPL? When I launch my REPL, I don't think it even accepts stdin, because there is no reason it has to.
It is easy to start creating objects in the repl (say, if you are testing an API) and work with those created objects in the repl, one step at a time. You are able to observe the behaviour of the objects every step of the way. This gives you a much better idea when you are writing the functions (usually copy-pasting from the repl) in the editor.
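A toy version of that step-at-a-time exploration (plain data standing in here for a real API response object, since no particular client is assumed):

```clojure
;; Bind the object once, then poke at it from the REPL one form at a time.
(def resp {:status  200
           :headers {"content-type" "application/json"}
           :body    "{\"id\": 42}"})

;; Each expression is a small observation of the object's behaviour:
(:status resp)                          ;; => 200
(get-in resp [:headers "content-type"]) ;; => "application/json"
(keys resp)                             ;; => (:status :headers :body)
```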
I am sure it can be done from the editor itself (say, using the `comment` block in Clojure). It is just a matter of preference.
But if you're maintaining a bunch of code blocks by comment toggling, why not just turn it into tests from the get go? I suppose there's a somewhat fine line between TDD and REPL given the right tooling for it.
Different purposes. I usually start out with comment blocks for experimenting/prototyping; once I'm happy, I cement whatever assertions I made there inside deftests. Not everything is moved to a test, though. Some things are just "helpers" of a sort, and those tend to remain inside comment blocks instead of being converted to a deftest or similar.
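As a small illustration (the `slugify` helper and its namespace are invented), the same expression can live in both forms: informally in the comment block, and cemented as an assertion in a deftest:

```clojure
(ns scratch.slugs
  (:require [clojure.string :as str]
            [clojure.test :refer [deftest is testing]]))

;; A small helper that started life as REPL experiments.
(defn slugify [s]
  (-> s str/lower-case (str/replace #"\s+" "-")))

(comment
  ;; The original scratch block, kept as informal usage examples.
  (slugify "Hello World")
  )

;; The assertion worth keeping, cemented as a deftest.
(deftest slugify-test
  (testing "spaces become hyphens"
    (is (= "hello-world" (slugify "Hello World")))))
```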
I use the REPL as lightweight form of testing while I'm in the flow state, which is complementary to TDD. If there's too much code in the comment block, that's a good signal to move it into a formal test like you're saying, or at least move the code into a reusable function so the comment block can be shorter and focused on usage examples instead of implementation details.
That's true. To be fair, if you come from a language like Ruby, JavaScript or Python, a REPL is basically a standalone experience, so I understand why people initially think it's like that; the name is more or less the same :) "Interactive Development" is maybe a better overall term that can lead to people thinking "Oh, what is that?" rather than "I've tried that already, what's the point?"
I like that. So you're saying that the switch between REPL and writing a test is much more seamless in a clojure system. That makes a lot of sense to me.
At a basic level any Lisp-like with an editor integrated REPL lets you put your cursor over the `greet` invocation below and evaluate it, and have the results shown in the manner of your choice (IDE window, inline temporary comment, etc).
This is all in the file as saved to your version control:
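A minimal sketch of what such a file might look like (the `greet` definition here is a stand-in):

```clojure
(ns demo.core)

(defn greet [name]
  (str "Hello, " name "!"))

(comment
  ;; Put the cursor on the form below and evaluate it; the result is
  ;; shown by the editor (IDE window, inline comment, etc).
  (greet "REPL")
  )
```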
You might see this and think, "great, another hyped up vibe coding tool". If Clojure is about simplicity and understanding your code deeply (with the end goals of long-term maintenance and reliability), why would you need this?
When working with Clojure, I've been using LLMs primarily for two use cases:
1. Search
2. Design feedback
The first case is obvious to anyone who's used an LLM chat interface: it's often easier to ask an LLM for the answer than a traditional search engine.
The second case is more interesting. I believe the design of a system is more important than the language being used. I'd rather inherit a well-designed codebase in some other language over a poorly designed Clojure codebase any day. Due to the values of Clojure embedded in the language itself and the community that surrounds it, Clojure programmers are naturally encouraged to think first, code second.
The problem I've run into with the second case is that it often takes too much effort for me to get the context into the LLM for it to answer my questions in detail. As a result, I tend to reach for LLMs when I have a general design question that I can then translate into Clojure. Asking it specific questions about my existing Clojure code has felt like more effort than it's worth, so I've actually trained myself to make things more generic when I talk to the LLM.
This MCP with Claude Code seems like the tipping point where I can start asking questions about my code, not just asking for general design feedback. I hooked this up to a project of mine where I recently added multi-tenancy support (via an :app-id key), which required low-level changes across the codebase. I asked the following question with Claude Code and the Clojure MCP linked here:
> given that :app-id is required after setup, are there any places where :app-id should be checked that is missing?
It actually gave me some good feedback on specific files and locations in my code for about 10 seconds of effort. That said, it also cost me $0.48. This might be the thing that gets me to subscribe to a Claude Max plan...
This right here. There is a large segment of developers that have not hooked up indexed code, eg: "where I can start asking questions about my code". They treat it like a search engine. What you want is the LLM to INDEX your code. Then you can see the real power of LLMs. Spoiler alert: it is amazing.
This is what clojure-mcp with Claude Desktop lets you try. Or you can try Amazon Q CLI (there is a free tier https://aws.amazon.com/q/developer/pricing/). Not Clojure specific.
You need to find a workflow to leverage it. There are two approaches.
1. Developer Guided
Here you setup the project and basic project structure. Add the dependencies you want to use, setup your src and test folders, and so on.
Then you start creating the namespaces you want, but you don't implement them; just create the `(ns ...)` with a doc-string that describes it. You can also start adding the public functions you want for its API. Don't implement those either; just add a signature and doc-string.
Then you create the test namespace for it. Create a deftest for each function you want to test, and add `(testing ...)` forms, but don't add the bodies; just write the test descriptions.
Now you tell the AI to fill in the implementation of the tests and namespace so that all described test cases pass and to run the test and iterate until it all does.
Then ask the AI to code review itself, and iterate on the code until it has no more comments.
Mention security, exception handling, logging, and so on as you see fit; if you explicitly call out those concerns, it'll work on them.
Rinse and repeat. You can add your own tests to be more sure, and also test things out and ask it to fix.
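The stubbed-out starting point described above might look something like this before it's handed to the AI (the namespace and function names are invented for illustration):

```clojure
(ns myapp.accounts
  "Create, look up, and deactivate user accounts.")

(defn create-account
  "Creates an account for `email`. Returns the new account map."
  [email]
  ;; Deliberately unimplemented; the AI fills this in.
  )

;; --- test namespace, e.g. test/myapp/accounts_test.clj ---
(ns myapp.accounts-test
  (:require [clojure.test :refer [deftest testing]]))

(deftest create-account-test
  ;; Descriptions only; no bodies yet.
  (testing "creating an account with a valid email returns an account map")
  (testing "creating an account with a duplicate email throws"))
```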
2. Product Guided
Here you pretend to be the Product Manager. You create a project and start adding markdown files in it that describe the user stories, the features of the app/service and so on.
Then you ask AI to generate a design specification. You review that, and have it iterate on it until you like it.
Then you ask AI to break down a delivery plan, and a test plan to implement it. Review and iterate until you like it.
Then you ask AI to break the delivery up into milestones, and to create a breakdown of tasks for the first milestone. Review and iterate here.
Then you ask AI to implement the first task, with tests. Review and iterate. Then the next, and so on.
I'm skeptical because I don't think generating the Clojure code is the hard part. These ideas seem more like wishful thinking than actual productivity improvements with the current state of tech.
Developer guided: For the projects I'm currently working on, the understanding is the most difficult part, and writing the code is a way for me to check my understanding as I go. I do use LLMs to generate code when I feel like it can save me time, such as setting up a new project or scaffolding tests, but I think there are diminishing returns the larger and/or more complex the project is. Furthermore, I work on code that other people (or LLMs) are meant to understand, so I value code that is consistent and concise.
Product guided: Even with meat-based agents (i.e., humans), there's a limit to how many Jira tickets I can write and junior engineers I can babysit, and this is one of the worst parts of the job to begin with. Furthermore, junior engineers often make mistakes, which means I need to have my own understanding to fix the issues. That said, getting feedback from experienced colleagues is invaluable, and that's what I'm currently simulating with LLMs.
What will be required from users of the existing Litestream version to upgrade to the new one? Is it a matter of bumping the version when it comes out or is there more to it?
Such a low quality article. The whole comment thread is just going to be about the flamebait of "I bet Rich was just really bored". Instead, the author could have listed the reasons they actually "loved it, and by the end I hated it", which would have led to an interesting discussion.
I think it's unfortunate they threw the bit about Hickey in there; the part about a shiny new language helping relieve the tedium of enterprise software was a good one.
I thought it was self explanatory. It had new idioms I had not yet learned and internalized, so I fully absorbed it. When that was finished, I needed something else to do the same thing with. It's like listening to a song on repeat 10-100 times (depending on the song) when you first hear it. You get everything you can out of it and move on when it's empty.
Hey Matt, ex-Automattician here. I don't have any comment on the issues themselves (haven't touched PHP or WP since I left) but please consider that the fallout of these actions may hurt your own employees in ways you can't see.
This has turned into a legal situation and I'd have to guess it's making people very uncomfortable. From the outside, it seems to have happened so suddenly that anyone employed at Automattic wouldn't have time to consider their options beforehand. Some people I really respect and enjoyed working with are in a tough spot right now and there's no way they can be fully honest with you given the circumstances.
From my experience, GPT-4 works well with both Clojure and Zig. A lot of it depends on the way you prompt though. For example, asking to start with a C or C++ example and converting to Zig often works better than starting straight with Zig. The same strategy works with Java and Clojure too.
Check out Babashka. It's a single binary that runs Clojure without a JVM. It also starts up really fast and has a much lower base memory requirement: https://github.com/babashka/babashka
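For a quick taste (assuming `bb` is installed and on your PATH; the script name is my own), a self-contained Babashka script starts in milliseconds:

```clojure
#!/usr/bin/env bb
;; count-lines.clj — print the number of lines in a file, e.g.:
;;   bb count-lines.clj deps.edn
(require '[clojure.java.io :as io])

(let [[file] *command-line-args*]
  (println (count (line-seq (io/reader file)))))
```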
It's pretty easy to switch to writing JVM Clojure if you're familiar with Babashka. Most of the libraries written for Babashka are designed to work in either environment.
That said, there are reasons you may want to use Clojure on the JVM later on. It might be interesting to read the replies to another poster with similar concerns about the JVM: https://news.ycombinator.com/item?id=40445415
I think the writing style is too fluffy, but I realize some people like that. The true sin of this book is that it actually starts by teaching the reader Emacs, not Clojure. That's a huge distraction for a beginner who is probably coming from VSCode these days.