Hacker News | 4b11b4's comments

MetaGPT was in.. 2023?

> "the way you describe a program _can_ be the program"

One follow-up thought I had: it may actually be more difficult to go the other direction, from a program to a great description.


That's a chance to plump for Peter Naur's classic "Programming as Theory Building"!

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

What Naur meant by "theory" was the mental model of the original programmers, who understood why they wrote it that way. He argued that the real program is the theory, not the code. The translation of the theory into code is lossy: you can't reconstruct the former from the latter. Naur said this explains why software teams don't do as well when they lose access to the original programmers: they were the only ones with the theory.

If we take "a great description" to mean a writeup of the thinking behind the program, i.e. the theory, then your comment is in keeping with Naur: you can go one way (theory to code) but not the other (code to theory).

The big question is whether/how LLMs might change this equation.


Even writing the "theory" down on paper as prose will be lossy.

And natural languages are open to interpretation, with a lot of context left unmentioned, while programming languages, together with their tested environment, contain the whole context.

Instrumenting LLMs will also mean doing a lot of prompt engineering, which on the one hand might make the instructions clearer (for the human reader as well), but on the other will likely not transfer much of the theory behind why each decision was made. Instead, it will likely lean on copy&paste guides that don't require much understanding of why something is done.


I agree that it will be lossy because all writing is lossy.

"The map is not the territory" applies to AI/LLMs even more so.

LLMs don't have a "mental model" of anything.


But if the person writing the prompt is expressing their mental model at a higher level, and the code can be generated from that, the resulting artifact is, by Naur's theory, a more accurate representation of the actual program. That would be a big deal.

(Note the words "if" and "by Naur's theory".)


That theory, or mental model, is a lot like a program, but of a higher kind. A mental model answers the question: what if I do this or that? It can answer this question at different levels of detail, unlike the program, which must be executed completely. The language of a mental model is also different: it talks in terms of constraints and invariants, while the program is a step-by-step guide.
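
A minimal sketch of that contrast, using a toy sorting example of my own (not from the thread): the "mental model" is stated as constraints the result must satisfy, while the program spells out the steps.

    from collections import Counter

    def sort_steps(xs):
        # The program: a concrete, step-by-step procedure (insertion sort).
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] <= x:
                i += 1
            out.insert(i, x)
        return out

    def satisfies_model(xs, ys):
        # The "mental model": constraints and invariants on the result,
        # saying nothing about how it is computed.
        same_elements = Counter(xs) == Counter(ys)
        non_decreasing = all(a <= b for a, b in zip(ys, ys[1:]))
        return same_elements and non_decreasing

    assert satisfies_model([3, 1, 2], sort_steps([3, 1, 2]))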

Google Earth Engine's Foundation model via the ITU's seminar! This thing is incredible!

Anthropic has been doing this from the start, and they're justified in it (the plan has different pricing than the API). People have been making workarounds, and they're justified in that as well; those people understood their workarounds were fragile when they made them.

While I agree a text representation is good for working with LLMs... most of the examples are mis-aligned?

Even the very first one (ASCII-Driven Development) which is just a list.

I guess this is a nitpick that could be disregarded as irrelevant since the basic structure is still communicated.


This seems like a meaningless project, as the system prompts of these models change often. I suppose you could then track them over time to view bias... Even then, what would your takeaways be?

Besides, this isn't even a good use case for an LLM... though admittedly many people use them this way unknowingly.

edit: I suppose it's useful in that it's similar to a "data inference attack," which tries to identify some characteristic present in the training data.
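
Roughly, and purely as an illustrative sketch (query_model here is a hypothetical stand-in for whatever API the project actually calls), such a probe asks the same question under varied framings and looks for systematic shifts in the answers:

    from collections import Counter

    def probe(query_model, template, variants, n=20):
        # Hypothetical sketch: assumes query_model(prompt) -> str;
        # not part of the project being discussed.
        results = {}
        for v in variants:
            prompt = template.format(v)
            answers = [query_model(prompt) for _ in range(n)]
            results[v] = Counter(answers)
        # Large differences between the counters hint at a baked-in
        # characteristic (bias) rather than sampling noise.
        return results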


I think you mentioned it: when a large number of people outsource their thinking, relationships, personal issues, and beliefs to ChatGPT, it's important that we are aware of it and don't, because of how easy it is to get LLMs to change their answers based on how leading your questions are, thanks to their sycophancy. The HN crowd mostly knows this, but the general public maybe not.

I'm imagining a version of this where you have to use various prompt- or data-centric attacks to navigate scenarios

We want to gamify prompt hacking and give people a UI to add/remove chunks of the system prompt. It'll be unlocked by collecting widgets around the place.

no


I like to think of it as "maintaining fertile soil"


On Android this is easy: you can add a button to the bottom nav bar.

