jmmcd's comments | Hacker News

The best example of all is Prolog. It is always held up as the paradigmatic representative of logic programming, a rare language paradigm. But it doesn't need to be a language. It is really a collection of algorithms which should be a library in every language, together with a nice convention for expressing Prolog things in that language's syntax.

(My comment is slightly off-topic to the article but on-topic to the title.)


The main thing about prolog isn't the algorithms. In fact, the actual depth-first search isn't even part of the standard.

The nice thing about prolog is that you write logical rules, and they can get used in whatever order and direction that is needed. By direction, I mean that if you define "a grandparent is the parent of a parent", you can now use that rule not just to evaluate whether one person is a grandparent of another (or to find all grandparents) but also to conclude that if you know someone is a grandparent, and they are the parent of someone, then that person is someone's parent. OK, it can also do recursion, so if you define an ancestor as a parent or the parent of an ancestor, it will recurse all the way up the family tree. Neat.
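Concretely, those rules are about four lines of Prolog (a sketch; the family facts are made up):

    parent(alice, bob).
    parent(bob, carol).

    grandparent(G, C) :- parent(G, P), parent(P, C).

    ancestor(A, D) :- parent(A, D).
    ancestor(A, D) :- parent(A, X), ancestor(X, D).

    % The same rule runs in every direction:
    % ?- grandparent(alice, carol).   check one pair
    % ?- grandparent(G, carol).       find carol's grandparents
    % ?- grandparent(alice, C).       find alice's grandchildren
    % ?- grandparent(G, C).           enumerate all pairs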

You could write some kind of runtime that takes C code and brute-forces its way from outputs to inputs, except that regular imperative code allows for all kinds of things that make this impossible (e.g. side effects). So then, you'd be limited to some subset, essentially ending up with a domain-specific language again, albeit with the same syntax as your regular code, rather than those silly :- symbols (although LISP looks much sillier than prolog IMHO).

What the article is getting at is that if you use some features specific to a language, it's hard to embed your code as a library in another language. But is it? I mean, DLLs don't need to be written in the same language, there's stuff like JNI, and famously there's stuff like PyTorch and TensorFlow that runs CUDA code from Python.


>The nice thing about prolog is that you write logical rules, and they can get used in whatever order and direction that is needed.

This generalizes!

Prolog: declare relations. Engine figures out how to satisfy them. Bidirectional -- same rules answer "is X a grandparent?" and "find all grandparents."

LLMs do something similar but fuzzier. Declare intent. Model figures out how to satisfy it. No parse -> AST -> evaluate. Just: understand, act.

@tannhaeuser is right that Prolog's power comes from what the engine does -- variables that "range over potential values," WAM optimization, automatic pruning. You can't get that from a library bolted onto an imperative language. The execution model is different.

Same argument applies to LLMs. You can't library your way into semantic understanding. The model IS the execution model. Skills aren't code the LLM runs -- they're context that shapes how it thinks.

Prolog showed that declarative beats imperative for problems where you can formalize the rules. LLMs extend that to problems where you can't.

I've been playing with and testing this: Directories of YAML files as a world model -- The Sims meets TinyMUD -- with the LLM as the inference engine. Seven architectural extensions to Anthropic Skills. 50+ skills. 33 turns of a card game, 10 characters, one LLM call. No round trips. It just works.

https://github.com/SimHacker/moollm/blob/main/designs/stanza...

https://github.com/SimHacker/moollm/tree/main/skills


> By direction, I mean that if you define "a grandparent is the parent of a parent", you can now use that rule not just to evaluate whether one person is a grandparent of another (or to find all grandparents) but also to conclude that if you know someone is a grandparent, and they are the parent of someone, then that person is someone's parent.

Not necessarily.


Could you please expand? I have the same state of knowledge as the previous poster, and I would like to learn if I'm wrong.

Someone who is a grandparent may also have other children who do not themselves have children. So knowing that someone is a grandparent and is the parent of a given person doesn't make that person a parent; the grandparenthood may come through a different child.
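A tiny made-up example in Prolog terms:

    parent(alice, bob).      % alice is a grandparent via bob...
    parent(bob, carol).
    parent(alice, dave).     % ...but dave has no children

    grandparent(G) :- parent(G, P), parent(P, _).

    % ?- grandparent(alice).    true
    % ?- parent(alice, dave).   true
    % ?- parent(dave, _).       false: dave need not be a parent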

Debugging rarely works correctly between languages, and features like "find all references" usually break too. Maybe that's not an issue with Prolog because a C logic solver would also be hard to debug, but it's a problem with many template languages.

The syntax of Prolog is basically a subset of the language of first-order logic (sans quantifiers and function symbols); it doesn't get any more minimal than that. What's special in Prolog compared to imperative languages, including functional ones, is that variables aren't "assigned" but implicitly range over potential values until they satisfy the context, like in math formulas for sets. Yes, you can express that awkwardly with tons of type annotations and DSL conventions so that you never have to leave your favourite programming language. But then there's the problem of a Prolog "engine" doing quite a bit more than what could reasonably be assumed behind a synchronous library call, such as working with a compact solution-space representation and/or value factoring, a parallel execution environment, automatic value pruning and propagation, etc.

The integration of a Prolog backend into a mainstream stack is typically achieved via Prolog code generation (including code generation via LLMs) or as a "service" on the Prolog side, considering Prolog also has excellent support for parsing DSLs and requests/responses of any type; you can actually implement a JSON parser in a single line of code.

As they say, if Prolog fits your application, it fits really well: planning, constraint solving, theorem proving, verification/combinatoric test-case enumeration, pricing models, legal/strategic case differentiation, complex configuration and the like, the latter merely leveraging the modularity of logic clauses in composing complex programs from independent units.

So I don't know how much you've worked hands on with Prolog, but I think you actually managed to pick about one of the worst rather than best examples ;)


You can implement a JSON parser in a single line of code in C, but why?

To win an entry at IOCCC.

> So I don't know how much you've worked hands on with Prolog, but I think you actually managed to pick about one of the worst rather than best examples ;)

Seems more like an interesting research project than something I'd ever deploy in an application serving millions of users


I can't speak to any sort of scalability but I can definitely say that not everything needs to be built for millions of users. There's plenty of utility in tools you use to help even a single person (even yourself!)

> Seems more like an interesting research project

You mean like the kinds of problems digital computing was originally invented to solve?

You know that still exists, right? There are many people using computers to advance the state of Mathematics & related subjects.


By that same logic, we don't need object oriented features in a language, because we have closures and can get the same functionality with a library.

Sometimes, having a language with a distinct syntax is nicer.


Typically, what's nicer is the absence of other features that interfere with the important ones.

For example, Prolog isn't a general-purpose functional or imperative language: you can assert, retract and query facts in the automatically managed database, risking only incorrect formulas, inefficiencies and non-monotonicity accidents, but you can't express functions, types, loops, etc., which could harbour far more general bugs.
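For instance, a minimal sketch of that managed database in SWI-Prolog syntax:

    :- dynamic on_sale/1.

    % ?- assertz(on_sale(widget)).   add a fact at runtime
    % ?- on_sale(X).                 X = widget
    % ?- retract(on_sale(widget)).   take it back out; the query now fails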


I like how classes escape closures but then dependency injection frameworks have to work around that because now it is hard to think of construction as passing arguments to functions in the correct order.

I am so glad LLMs eliminate all of that and just call functions in the right order.


> I am so glad LLMs eliminate all of that

When LLMs do something, it's always because everybody was already doing it.

The noise you see online about it exists exactly because most people don't understand and can't use DI.


No, LLMs like NextJS style injection as much as anyone.

There is nothing magical about topological sort and calling constructors in the right order, which is all DI is.
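Since the thread is about Prolog anyway, here's a toy generate-and-test sketch of that idea; all the module names are made up, and a real container would do this far more efficiently:

    depends(app, db).        % depends(Module, Dependency)
    depends(app, logger).
    depends(db, config).
    depends(logger, config).

    module(M) :- depends(M, _).
    module(M) :- depends(_, M).

    % A valid construction order never has a module's
    % dependency constructed after the module itself.
    build_order(Order) :-
        setof(M, module(M), Ms),
        permutation(Ms, Order),
        respects(Order).

    respects([]).
    respects([M|Rest]) :-
        \+ (member(D, Rest), depends(M, D)),
        respects(Rest).

    % ?- build_order(O).   e.g. O = [config, db, logger, app]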

I dislike it a lot; it is exactly like any other construct that allows you to throw code at anything in a way that sucks (Haskell lens, optics, monad transformers).

It allows people to granularize the whole codebase to the point where you can't do much about it. Most people should just stick with functions; no one can build function call stacks 100 levels deep without it being cumbersome, but DI makes it a breeze.


I fell out of love with Java not because of the language, but because it got tied up with DI. Don't get me wrong, the allure of DI is really neat. Make all sorts of interfaces and just call the function, and it can be implemented by dozens of things. Perfect for a lot of use-cases. But then reality hits, and you eventually realize how complex the entire codebase is because it's pattern 1, pattern 2, p3, p4, etc... People are so obsessed with GoF patterns that they're not getting anything done other than spending the first month on a beautiful API.

Then I got into Python and people were building useful server APIs in a day.

Both have their place, but I think the problem with the first route is that EVERYTHING ends up with Spring or CDI and complexity overload even if only 1 thing will ever be "implemented".


> There is nothing magical about topological sort and calling constructors in the right order, which is all DI is.

Lol... That's exactly the kind of thing we call "magic" in software development.

Anyway, if your framework is entirely based on DI, everybody that uses the framework will use DI, and the LLMs will generate code for it that uses DI. That does not contest my point in any way.


Absolutely. In the same direction, I've heard some people say things along the lines of "Anything you can do with language X can be done with assembler; just have some routines for that in your stash."

Speaking of prolog, any recommendations for resources for learning it at a stage somewhere between "draw three circles" and "draw the rest of the owl"?

I don't have real work I need prolog for, but I find it an interesting subject. My personal learning goal, the point where I can say I know prolog reasonably well, is to get it to solve an MIT puzzle I found, a sort of variant of sudoku. I found a clever prolog solver for sudoku that I thought could teach me more in this domain, but it was almost too clever, super optimized for sudoku (it exploited geometric features to build its relationships), and I was still left with no idea how to build the more generic relationships I need for my puzzle (a specific example: if sudoku cells were not in a grid, how could they be specified?). In fact I can find very little information on how to specify moderately complex, ad hoc relationships. One that particularly flummoxed me was that some rules (but you don't know which) are wrong.

https://www.swi-prolog.org/pldoc/man?section=clpfd-sudoku
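On the "cells not in a grid" part: with clpfd the grid is incidental. You just post constraints over whatever ad hoc groups of variables your puzzle defines. A minimal sketch (SWI-Prolog; the groups here are invented):

    :- use_module(library(clpfd)).

    puzzle([A, B, C, D, E]) :-
        [A, B, C, D, E] ins 1..5,
        all_distinct([A, B, C]),   % any ad hoc group you like;
        all_distinct([C, D, E]),   % no grid geometry required
        A #< D,
        label([A, B, C, D, E]).

    % ?- puzzle(Cells).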


The only decent Prolog book out there, IMNSHO, is "Clause and Effect" by Clocksin. Maybe some of the later chapters might help?

All the other books that I looked at were pretty awful, including the usual recommendations.


I was recommended "The Art of Prolog" and have it on my bookshelf, waiting for the year when I have time (or need) for it.

"Prolog Programming for Artificial Intelligence" by Ivan Bratko is a reasonable textbook on Prolog.

Sudoku, by the way: "suu" means number in Japanese and "doku" means single here. So literally "single numbers".

I would go with the reference, "The Art of Prolog".

If you want to learn LP concepts in general, Tarski's World is a great resource as well.


The only languages I know that can do prolog-like-constructs as-a-library are lisps, or at least langs that have reasonable symbol constructs. Usability is way way worse if you can’t talk about variables as first class objects.

I was talking to Bob Harper about this specific issue (context was why macro systems are important to me) and his answer was “you can just write a separate programming language”. Which I get.

But all of this is just to say that doing relational-programming-as-a-library has a ton of issues unless your language supports certain things.


I believe Rust uses datafrog, a Datalog-as-a-library, to implement some of its next-gen solvers for traits and lifetimes. Not a lisp, and maybe this still isn't as elegant as you had in mind? Curious how this library compares for you.

https://github.com/rust-lang/datafrog


If you want to see what it looks like when you actually embed Datalog in your language, have a look at Flix: https://flix.dev/

(Select the "Usinag Datalog..." example in the code sample dropdown)

The Rust code looks completely "procedural"... it's like building a DOM document using `node.addElement(...)` instead of, say, writing HTML. People universally prefer the declarative alternative given the choice.


Haskell can do the same, and does with a lot of little embedded DSLs.

Yeah, I've wanted to try using logict to do some larger logic programming stuff. I've done it with the list monad but found a lot of speed issues; never quite figured out why it was so slow.

Well, lists are really slow.

To be more precise: lists in your code can be really fast, if the compiler can find a way to never actually have lists in the binary it produces. If it actually has to have lists at runtime, it's generally not all that fast.


The experimental "Polonius" borrow checker was first implemented with datafrog, but I believe it's been abandoned in favour of a revised (also "Polonius") datalog-less algorithm.

But why should it be, though? You haven't really addressed that. It's like saying "Haskell shouldn't be its own language, it should just be a couple of features like lambdas integrated into a JavaScript library". Well, if you do that you get some nice features in JavaScript, but you've done away with the whole value proposition of pure functional programming. That's an actual loss; there's something to be said for maintaining a pure set of features that gives a language properties and guarantees you can't get when everything is just a lib for an imperative language.

Pure functional programming and lazy evaluation... sure, you could create classes and a meta-function that selectively evals thunks one at a time, but the call site of that kind of library would look atrocious.

You might be able to hack some of the datatype semantics onto JS's prototype-based inheritance (I'd rather start with TypeScript at that point, but then we're back at the "why isn't it a library" debate) to keep those ontologies from being semantically separate, but that's an uphill battle with some of JS's implicit value conversions.

I consider Logic Programming languages to be the go-to counterargument to TFA, but yeah, anything with lazy eval and a mature type system is a strong counterexample too.


Prolog is great because when it works it feels like magic. You give it some pragmas and it tells you the answer. The downside is when it doesn't work it also feels like magic. It's not easy to reason about the amount of work it needs to do. If your Prolog program is taking a long time to run it is hard to figure out if it is going to take 2 hours or 4,000 millennia.

I think the examples others have highlighted show the problem with just making it a library. They’re all lacking a lot of features from Prolog, particularly in terms of optimization. Just use Prolog if you need its features and invoke SWI-Prolog like you’d call any other language’s interpreter.

Agree. The optimization caveat also applies to OOP, another example someone threw in above.

Sure you can implement OOP as a library in pretty much any language, but you’ll probably sacrifice ergonomics, performance and/or safety I guess.


Are there any particularly excellent examples of prolog implemented as a library you could point us to?

As an example of a use case, "Gerrit Code Review"[1] is written in Java and uses prolog for the submit rules.[2]

I haven't looked into the implementation. But taking a brief glance now, it looks interesting. They appear to be translating Prolog to Java via a WAM representation[3]. The compiler (prolog-cafe) is written in prolog and bootstrapped into Java via swi-prolog.

I don't know why compilation is necessary, it seems like an interpreter would be fast enough for that use case, but I'd love to take it apart and see how it works.

[1]: https://www.gerritcodereview.com/ [2]: https://gerrit-documentation.storage.googleapis.com/Document... [3]: https://gerrit.googlesource.com/prolog-cafe/+/refs/heads/mas...


Not exactly prolog, but logic programming implemented as a library/DSL: https://minikanren.org/

If it's a DSL then it's "writing a new language" and you're just calling from a native API, no?

Embedded DSLs are libraries. They are the epitome of the advice to write a library. It just so happens that the library abstractions could map to a language if you wanted to add parsing and all of the other machinery.

Clojure's core.logic is based on minikanren, so that fits the bill.

My time to shine! https://news.ycombinator.com/item?id=45902088

References were Racket with the Racklog library¹. There's also Datalog² and MiniKanren, Picat, Flix. There were tons of good comments there which you should check out, but PySwip seemed like "the right thing" when I was looking at it: https://github.com/yuce/pyswip/

...documentation is extremely sparse and assumes you already know prolog, but here's a slightly better example of kind of the utility of it:

https://eugeneasahara.com/2024/08/12/playing-with-prolog-pro...

...ie:

    # assuming pyswip is installed (pip install pyswip)
    from pyswip import Prolog
    prolog = Prolog()

    # ya don't really care how this works
    prolog.consult("diabetes_risk.pl")

    # ...but you can query into it!
    query = "at_risk_for_diabetes(Person)"
    results = list(prolog.query(query))
...the point being there's sometimes some sort of "logic calculation that you wish could be some sort of regex", and I always think of prolog as "regexes for logic".

One time I wished I could use prolog was trying to figure out the best match between video file, format, bitrate, browser, playback plugin... or if you've seen https://pcpartpicker.com/list/ ...being able to "just" encode all the constraints, and say something like:

   valid_config = consult("rules.pl")
      + consult("parts_data.pl")
      + python.choice_so_far(...)

   rules.pl: only_one_cpu, total_watts < power_supply(watts)
   parts_data.pl: cpu_xyz: ...; power_supply_abc: watts=1000
   choices: cpu(xyz), power_supply(abc), ...
...this is a terribly syntactically incorrect example, but you could imagine that this would be horrific code to maintain in python (and sqrt(horrific) to maintain in prolog), and _that's_ the benefit! You can take a well-defined portion and kind of sqrt(...) the maintenance cost, at the expense of 1.5x'ing the number of programming languages you expect people to know.
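For a flavour, here's a cleaned-up sketch of roughly that shape (SWI-Prolog with clpfd; every name here is invented):

    :- use_module(library(clpfd)).

    part(cpu, xyz, 95).              % part(Type, Id, Watts)
    part(cpu, pqr, 65).
    part(psu, abc, 1000).

    valid_config(Cpu, Psu) :-
        part(cpu, Cpu, CpuWatts),
        part(psu, Psu, PsuWatts),
        CpuWatts #=< PsuWatts.       % draw within supply rating

    % ?- valid_config(Cpu, Psu).     enumerate valid builds
    % ?- valid_config(xyz, Psu).     which PSUs fit this CPU?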

Not prolog, but logic programming in python: https://sites.google.com/site/pydatalog/


Think of Prolog the language as just a serialization of a Prolog AST. The AST could be constructed in any programming language as a library. But what if we want to share and store these ASTs? We can serialize them to Prolog the language! The language has value even if it’s a library.
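SWI-Prolog even ships the round trip: term_to_atom/2 converts between the in-memory term (the AST) and its textual serialization, in both directions:

    % ?- term_to_atom(grandparent(alice, bob), A).
    % A = 'grandparent(alice,bob)'.

    % ?- term_to_atom(T, 'grandparent(alice, bob)').
    % T = grandparent(alice, bob).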

How many Prolog programmers does it take to change a lightbulb?

false.

https://www.j-paine.org/dobbs/prolog_lightbulb.html

I always wanted to write a compiler whose front-end consumes Prolog and back-end emits PostScript, and call it "PrologueToPostscript".

prologue: a separate introductory section of a literary, dramatic, or musical work.

postscript: an additional remark at the end of a letter, after the signature and introduced by ‘PS’.


Disclaimer: I've no idea what I'm talking about. :)

But I have heard repeatedly that the good thing about prolog is the compiler, which takes information and queries that would be awfully inefficient and converts them into something that actually works. So I'm not sure... of course, you can turn virtually any language into a kind of library with some API that basically accepts source code... but I'm pretty sure that's not what you meant.


Prolog admittedly mystifies me even more than the Haskell family. How do you do anything effectful, like read from a file, or output arbitrary data to standard out, or set up a loop to process keyboard events every 1/n seconds? How do you make system calls?

By thinking in LP terms. Like FP, LP requires rewiring the brain to think differently about how to approach algorithms.

Here is a sample of how to read a file:

https://lpn.swi-prolog.org/lpnpage.php?pagetype=html&pageid=...
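The short version: I/O predicates are just goals that happen to have side effects, and you sequence them like any other goals. A minimal sketch in SWI-Prolog that echoes a file line by line:

    echo_file(Path) :-
        open(Path, read, Stream),
        read_line_to_string(Stream, Line),
        echo_lines(Line, Stream),
        close(Stream).

    echo_lines(end_of_file, _) :- !.
    echo_lines(Line, Stream) :-
        writeln(Line),
        read_line_to_string(Stream, Next),
        echo_lines(Next, Stream).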

And how to make syscalls, via the foreign language interface:

https://www.swi-prolog.org/pldoc/man?section=foreign


Prolog is a contrived example. It's small and simple enough to be implemented as a DSL embedded in another language, reusing most of the host's parser and leaning on its execution logic.

Now try to produce a library that adds compile-time features: static types, lifetimes, the notion of const and constexpr, etc. You can, of course, write external tools like mypy, or use some limited mechanism like Java annotations. But you'll have a really hard time implementing that in an ergonomic way (unless your language is its own metalanguage, like Lisp or Forth, and even then).

Creating a library that alters the way the runtime works, e.g. adding async, is not entirely impossible, but usually involves some surgery (see Python Twisted, or various C async libs) that results in a number of surprising footguns to avoid.

Frankly, even adding something by altering a language, but not reworking it enough to make the new feature cohesive, results in footguns that the source language did not have. See C#'s LINQ and exceptions.


Except that when implemented as a library it doesn't take advantage of the WAM and related JIT techniques.

I treat it like I treat SQL.

But he actually uses frontier LLMs in his own work. Probably that's stronger evidence.


It is, but it's biased evidence, as he's both directing and checking that frontier LLM output, and not everyone is Terence Tao.


"Pelican on bicycle" is one special case, but the problem (and the interesting point) is that with LLMs, they are always generalising. If a lab focussed specially on pelicans on bicycles, they would as a by-product improve performance on, say, tigers on rollercoasters. This is new and counter-intuitive to most ML/AI people.


The gold standard for cheating on a benchmark is SFT (supervised fine-tuning) and ignoring memorization. That's why the standard for quickly testing for benchmark contamination has always been to switch out specifics of the task.

Like replacing named concepts with nonsense words in reasoning benchmarks.


Yes. But "the gold standard" just means "the most natural, easy and dumb way".


About ARC 2:

I would want to hear more detail about prompts, frameworks, thinking time, etc., but they don't matter too much. The main caveat would be that this is probably on the public test set, so it could be in pretraining, and there could even be some ARC-focussed post-training; I think we don't know yet and might never know.

But for any reasonable setup, assuming no egregious cheating, that is an amazing score on ARC 2.



On HN it's very common to see a blog post along the lines of "I found this old piece of equipment with no brand name, I used some network traffic inspection to figure out what it does, I hacked around a bit, I got it working and turned it into a self-ringing doorbell with wifi" (or whatever). All of that is anecdotal, N=1, "I did what worked for me, I hope it's interesting to you". And those posts are highly prized and rightly so.


That's great yes, and I'd like us to give even more weight to the lived experience of others even in other contexts and regarding other subject matters.


In contrast, I like the fact that there is often an indication of what is and what is not substantiated by strong/provable/scientific evidence. In fact I quite love both this subjective experience reporting and the more sceptical perspectives. A lot depends on the specific subject and whether evidence exists or is feasible to gather. I would hate not to hear about cool-sounding ideas that MIGHT work but just aren't endorsed by rigorous chains of evidence, as long as the discussions are honest and in good spirit. People can point out risks and likelihoods and alternative explanations. I really like those too.


(a) no it's not

(b) your comment is miles off-topic, as he is not addressing doom in any sense


But they're submissions to ICML.


It could certainly replace the author of this article.


Modern LLMs can one-shot code in a totally new language, if you provide the language manual. And you have to provide the language manual, because otherwise how can the students learn the language?


> The fact that AI can do your homework should tell you how much your homework is worth.

A lot of people who say this kind of thing have, frankly, a very shallow view of what homework is. A lot of homework can be easily done by AI, or by a calculator, or by Wikipedia, or by looking up the textbook. That doesn't invalidate it as homework at all. We're trying to scaffold skills in your brain. It also didn't invalidate it as assessment in the past, because (eg) small kids don't have calculators, and (eg) kids who learn to look up the textbook are learning multiple skills in addition to the knowledge they're looking up. But things have changed now.


Completely agree - I always thought the framing of "exercises" is the right one; the point is that your brain grows by doing. It's been possible for a long time to e.g. google a similar algebra problem and find a very relevant Math StackExchange post; that doesn't mean the exercises were useless.

"The fact that forklift truck can lift over 500kg should tell you how worthwhile it is for me to go to a gym and lift 100kg." - complete non-sequitur.


If the judge of your fitness took into account only the amount you could lift, then sure, why not use a forklift.

This is what's happened in education. The education system is not concerned with teaching you to learn, or even teaching you; it's concerned with grades.

As many have stated here, homework is practice for the student, and can be helpful in learning. However, it's given out as something to grade without any collaboration on the learning experience.

By the way, I'm ranting about all this as someone who did very well in both high-school and university, so I feel somewhat justified in calling bullshit on the whole thing.


> A lot of homework can be easily done by AI

Then maybe the homework assignment has been poorly chosen. I like how the article's author has decided to focus on the process and not the product and I think that's probably a good move.

I remember one of my kids' math teachers talked about wanting to switch to an inverted classroom. The kids would be asked to read some part of their textbook as homework and then work through exercise sheets in class. To me, that seemed like a better way to teach math.

> But things have changed now.

Yep. Students are using AIs to do their homework and teachers are using AIs to grade.


Yep: making time to sit down to do homework, learning to plan the doing part, forming good habits, and knowing how to look stuff up, whether in a book index, on Wikipedia, by searching, or by asking AI. The expectation is still that some kind of text output needs to be found and then read and digested.

