Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

That's a pretty common example, and most systems I've seen would catch that as prompt injection. Like you said, it'll be caught in the 95% coverage systems.

"Here’s one thing that might help a bit though: make the generated prompts visible to us."

Other than their growth and market exposure, that might be the only unique thing a lot of these companies have that are using gpt3.5/4 as the backed, or any foundational model.

I get that and find it frustrating too, lack of observability using LLM tools. But we also don't see the graph database running on ads connecting friends of friends on social networks... and how recommendation systems and building the recommendation.



I'm not the author/GP, but my immediate take here is that if you have a 100 people trying to get access to a secret string, and 5 of them succeed, then 100 people now know your secret string. Once the information is leaked, it's leaked.

The user safety angle is only one part of it. I think a better way of phrasing this is, "given that your secret prompt is already public and is impossible for you to secure, you might as well make it fully public within a context where it helps keep the user safe."

Graph databases and observability around how social algorithms work would also benefit from transparency, but the really big difference is that it's possible to keep those things a secret. In contrast, I would suggest it's not a good business decision for any company to rely on their generated prompts as a competitive moat. What prompt you give to GPT is not a unique enough differentiator to keep your business afloat, that's too easy for other companies to replicate.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: