These features are always nice for meta-magic and monkey-patching, but those are usually exactly the reasons why a language can only be optimized so much. It's often a worthwhile tradeoff, but it needs to be carefully considered.
Is "meta-magic" and "monkey-patching" what we mortals call code generation, a useful technique for automating coding that can both save time and produce safer, more correct code, or is it something else?
There are many ways to do code generation. For example, you can have one program write out text files that a compiler reads to create another program. Or you can have a program change itself while it's running. These represent two extremes on a static to dynamic continuum. Most "dynamic" languages like Ruby and Python take the latter approach. This approach is hard to reason about, for both an optimizing runtime and a human, and is one reason Python and Ruby have such poor performance.
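A minimal sketch of those two extremes, in Python (the function names are just for illustration):

```python
# Static extreme: one program emits source text, and a separate compile
# step turns it into runnable code (here: compile + exec of a string).
source = "def area(r):\n    return 3.14159 * r * r\n"
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["area"](2.0))   # approximately 12.56636

# Dynamic extreme: the running program rebinds its own definitions.
def greet():
    return "hello"

def patched_greet():
    return "hello, patched world"

greet = patched_greet           # callers of greet() silently change behavior
print(greet())
```

The second form is what makes life hard for an optimizer: any call site of `greet` can mean something different a moment later, so very little can be assumed ahead of time.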
Their work seems to focus on discovering phases. IMO it's better if the language allows the programmer to express phases as a language concept. There are a few languages that support this. Racket is the best example I know of.
Many programs already implicitly have phases. For example, a typical web app has one phase at what is traditionally thought of as compile time, another at startup when it reads its configuration, and then another when it starts running. We smoosh the last two phases together into one "run time" because our languages usually don't support phases, but the configuration is actually known earlier: at deployment time. So we could run that phase as part of the CI/CD process and catch configuration errors earlier. Once you start thinking in phases you see them everywhere. They're in data science (loading data vs exploring data), front-end web apps ("hydration"), etc. I think it's a large productivity gain that is unexplored in commercial programming.
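A rough sketch of pulling the configuration phase forward, in Python (the config keys and rules here are hypothetical):

```python
# Treat configuration as its own phase: the same validation function runs
# both at app startup and as a standalone step in the CI pipeline, so
# config errors surface at deployment time instead of at run time.
import json

REQUIRED = {"db_url": str, "pool_size": int}   # hypothetical schema

def validate_config(raw: str) -> dict:
    cfg = json.loads(raw)
    for key, typ in REQUIRED.items():
        if not isinstance(cfg.get(key), typ):
            raise ValueError(f"config key {key!r} must be a {typ.__name__}")
    return cfg

# "Deploy-time" phase: CI runs just this check and fails the build early.
validate_config('{"db_url": "postgres://db", "pool_size": 4}')

# "Run-time" phase: the app reuses the already-validated configuration.
```

Nothing here needs language support, which is sort of the point: the phase boundary already exists, the language just gives us no way to name it.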
This is such a great comment, which I didn't expect to find in this thread at all! I've been thinking about application "phases" for a while and I wasn't aware that there was research being done in this direction. Do you happen to have any more references beyond Racket and the link above?
> IMO it's better if the language allows the programmer to express phases as a language concept.
Yeah, not just to make life easier for the compiler, but I suspect it'd also make it easier to read & reason about the code.
I mean, performance gains are nice but sometimes performance isn't really the bottleneck but reading & maintaining that unholy cocktail of application code, bash scripts, schema files & specs, build scripts, code generators, Dockerfiles, and Gitlab YAMLs is.
This is a great paper. The general idea is often called "multi-stage programming". The academic work has mainly focused on performance, which I don't think is the most interesting use in industry.
Perhaps you wish to look at Common Lisp? I think it has you covered on all the things you mention. There are distinct read, compile, and evaluation phases, all exposed to application code.
If you get a good compiler like SBCL, you can go a long way with just the language itself, since it offers a blend of scripting-language qualities while being a compiled language. Emacs Lisp can go a long way too.
Meta-magic can include code generation, but monkey-patching can include effectively doing AST-to-AST transforms to change the behaviour of existing code without having to actually edit it.
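For what it's worth, here is a small AST-to-AST transform using Python's standard `ast` module: it rewrites every integer literal N into N + 1 without touching the source text.

```python
import ast

class IncrementInts(ast.NodeTransformer):
    """Replace each integer literal with that integer plus one."""
    def visit_Constant(self, node):
        if isinstance(node.value, int) and not isinstance(node.value, bool):
            return ast.copy_location(ast.Constant(node.value + 1), node)
        return node

source = "def total():\n    return 10 + 20\n"
tree = ast.parse(source)
tree = ast.fix_missing_locations(IncrementInts().visit(tree))

namespace = {}
exec(compile(tree, "<transformed>", "exec"), namespace)
print(namespace["total"]())   # 32, not 30: the literals became 11 and 21
```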
Honestly, I had no idea that "monkey-patching" was a term, I thought it was something Op just made up :-).
I have just looked it up, and it is indeed a term [meaning something](https://en.wikipedia.org/wiki/Monkey_patch). Even "meta-magic" seems to be a term, though a less well-defined one.
I don't think we need an AST for monkey-patching, though it can be argued that any program can be converted to an AST. Anyway, that's less important.
"Monkey-patching" seems to basically mean anything that changes the runtime somehow. The Python example in the Wikipedia article just changes the value of a global variable. In Lisp we have many, many tools that can change code at runtime; I would even argue that Lisp is all about flexibility and "monkey-patching", since we already write the AST directly and have the entire symbol table and the compiler available to us at runtime. Functions are just objects like any others: we can install any object in the function slot of a symbol, or wrap it in our own function and install that one. Everything is a list and dynamic in some way, so we can add slots to classes and structs if we want, etc.
To conclude, the terms mean what I meant, but by code generation I don't mean just the classical case of generating code to a file from another file; I mean transforming the code in some way, at either run time or compile time.
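A rough Python analogue of installing a new object in a symbol's function slot: a module's namespace is a mutable table and functions are ordinary objects, so we can wrap one and reinstall it at runtime (the wrapper here is purely illustrative).

```python
import math

original_sqrt = math.sqrt

def logging_sqrt(x):
    # Wrapper: record each argument, then delegate to the original.
    logging_sqrt.calls.append(x)
    return original_sqrt(x)

logging_sqrt.calls = []

math.sqrt = logging_sqrt    # "install" the wrapper in the math module
print(math.sqrt(9.0))       # 3.0, and the call was recorded

math.sqrt = original_sqrt   # undo the patch
```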
It’s something else. "Monkey patching" comes from Ruby: since classes are open, you can change implementations at runtime. I’m not quite as familiar with meta-magic, but I believe it has similar characteristics, sort of like dynamically injected accessors via AUTOLOAD in Perl.
They are cool tricks, but they come at a cost, both for the reader of the code and for the compiler of it.
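The same open-class trick has a rough Python analogue: attach a new method to an existing class at runtime, and even instances created before the patch pick it up (class and method names here are made up).

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

acct = Account(100)          # created before the patch

def overdrawn(self):
    return self.balance < 0

Account.overdrawn = overdrawn   # "reopen" the class at runtime

print(acct.overdrawn())      # False: the old instance sees the new method
```

This is exactly the kind of change an optimizer cannot see coming, which is the cost being described.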
My Lisp Machine's operating system (MIT, end 70s onwards) is written in an object-oriented Lisp (using Flavors and later CLOS). Everything is open and everything can be changed at runtime. Mixins and similar stuff comes from there.
I'm happy to give lisp credit for everything cool.
but Common Lisp has all the good stuff you need to manage different generations of objects under changing classes, not least jumping into the running system and examining its state. You get warnings and errors about redefinition, and you can work under a different package to manage generations. State and functions are separate, thanks to dynamic dispatch.
Ruby, on the other hand, rewrites the class. Hope all the data and functions are forwards and backwards compatible! Good luck.
> Apparently from earlier guerilla patch (“code that sneakily changes other code”), understood as gorilla patch. A monkey patch was then a more carefully written instance of such a patch.[1] Or simply derived from monkey (“to mess with”, verb).
Certainly not in Lisp; I had never heard it before and thought OP made it up :), but I looked it up on Wikipedia, and it is indeed a term. It seems to just mean altering behavior at runtime.