Some of this gold is still on display in Spain. Earlier this year my wife and I visited Granada in southern Spain. Truly impressive amounts of gold are on display in its Cathedral and in the burial place of Emperor Karl (Charles/Carlos) and his wife, Isabella. The Alhambra is still the #1 sight there, but if you can spare the time, do visit these two places. They are co-located in central Granada.
I remember admiring the intent of XSLT when it was born, and how difficult it turned out to be to write: the XML framing makes it verbose and arcane, e.g. when compared to the compactness of a regex substitution.
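To illustrate the verbosity point: renaming one element, something a `sed 's/foo/bar/g'`-style one-liner handles (modulo the usual caveats about regexes on XML), takes a full identity transform plus an override template in XSLT 1.0. A minimal sketch (element name `foo` is just a placeholder):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- identity transform: copy every node and attribute through -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- override: rename <foo> elements to <bar> -->
  <xsl:template match="foo">
    <bar>
      <xsl:apply-templates select="@*|node()"/>
    </bar>
  </xsl:template>
</xsl:stylesheet>
```

The upside of the ceremony is that, unlike the regex, this is robust to attributes, nesting, and serialization details.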
It is 2025 and now we've got LLMs to write our code. That may actually be a strong argument in favor of keeping XSL(T): it is a useful browser mechanism, and LLMs make it easier to harness.
Does anybody have experience with LLM-generated XSL(T)?
I have one "big" XSL file in a project I maintain. I fixed an issue in it this year and tried using a ChatGPT prompt. The scope was perfect: I had the buggy XSL, the buggy result, the expected result, and a clear explanation.
It generated (almost) syntactically correct code that did not work, because ChatGPT does not understand.
This was not a complete loss (it was a good refresher on the syntax), but I had to do the thinking entirely on my own, and figured out by myself how to use "node-set".
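For readers who have not hit this: in XSLT 1.0, a variable built from literal content is a "result tree fragment" that XPath cannot navigate into; the EXSLT `node-set()` extension function converts it into a real node-set. A minimal sketch of the pattern (the variable contents are placeholders; some processors expose the same function under a vendor namespace instead, e.g. Xalan's `xalan:nodeset`):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:exsl="http://exslt.org/common"
    exclude-result-prefixes="exsl">
  <!-- $tmp is a result tree fragment, not a node-set -->
  <xsl:variable name="tmp">
    <item>a</item>
    <item>b</item>
  </xsl:variable>
  <xsl:template match="/">
    <!-- count($tmp/item) is an error in plain XSLT 1.0;
         exsl:node-set() makes the fragment navigable -->
    <xsl:value-of select="count(exsl:node-set($tmp)/item)"/>
  </xsl:template>
</xsl:stylesheet>
```

XSLT 2.0+ dropped result tree fragments entirely, so this workaround is only needed on 1.0 processors.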
My previous change to this file was in 2017, when I replaced Xalan with the XSLT processor built into Java. I was very surprised I had to make the following changes:
Calling out to some chess-playing function would be a deviation from the pure-LLM paradigm. As a medium-level chess player I have walked through some of the LLM victories (gpt-3.5-turbo-instruct); I find it is not very good at winning by mate: it misses several chances at forced mate. But forced mate is what chess engines are good at; it can be calculated by exhaustive search of the valid moves from a given board position.
So I'm arguing that it doesn't call out: it would have gotten better advice if it did.
But I remain amazed that OP does not report any illegal moves made by any of the LLMs. Assuming the training material includes introductory chess texts and a lot of chess games in textual notation (e.g. PGN), I would expect at least occasional illegal moves, since the rules are defined in terms of board positions, and board positions are a non-trivial function of the sequence of moves made in a game. Does an LLM silently perform a transformation of the move sequence into a board position? Can LLMs, during training, read and understand the board-position diagrams of chess books?
> But I remain amazed that OP does not report any illegal moves made by any of the LLMs.
They did (but without enough detail to know how much of an impact it had):
> For the open models I manually generated the set of legal moves and then used grammars to constrain the models, so they always generated legal moves. Since OpenAI is lame and doesn’t support full grammars, for the closed (OpenAI) models I tried generating up to 10 times and if it still couldn’t come up with a legal move, I just chose one randomly.
> That's a perfectly valid and probably the most common use of SSH, but it can do so much more than that. Just like HTTP, SMTP, FTP and others, SSH is a protocol!