I second this suggestion. This might sound obvious, but during my therapy my psychologist asked me to do exactly this, in a way that was non-personal and non-threatening for the relationship: just tell them that I'm working through my issues and would like honest feedback (ideally written, with no back-and-forth) - what makes them uncomfortable, etc.
This helped me a lot - seeing how different the message was on the receiving end from what I intended to transmit.
While these optimizations are solid improvements, I was hoping to see more advanced techniques beyond the standard bulk insert and deferred constraint patterns. These are well-established PostgreSQL best practices - would love to see how pgstream handles more complex scenarios like parallel workers with partition-aware loading, or custom compression strategies for specific data types.
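For reference, the two "standard" patterns being referred to look roughly like this (a sketch with psycopg2, not pgstream's actual code; the table and columns are made up, and deferring only applies to constraints declared DEFERRABLE):

```python
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=target")
with conn, conn.cursor() as cur:
    # Deferred constraints: FK checks run once at commit instead of per row,
    # so rows can be loaded in any order within the transaction.
    cur.execute("SET CONSTRAINTS ALL DEFERRED")

    # Bulk insert: execute_values batches many rows into one INSERT statement
    # instead of one round trip per row.
    rows = [(1, "a"), (2, "b"), (3, "c")]
    execute_values(cur, "INSERT INTO items (id, label) VALUES %s", rows)
```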
> It's pretty common to use a cheaper model to fix these errors to match the schema if it fails with a tool call.
This has not been true for a while.
For open models there's no need for this kind of hack: libraries like XGrammar and Outlines (and several others) exist both as standalone solutions and as building blocks in a wide range of open source tools, and they ensure structured generation happens at the logit level. There's no need to multiply your inference cost when, in some cases (XGrammar), these tools can actually reduce it.
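To make that concrete, here's roughly what logit-level structured generation looks like with Outlines (a sketch against its 0.x API, which has since evolved; the model name and schema are arbitrary examples):

```python
from pydantic import BaseModel
import outlines

class Invoice(BaseModel):  # example target schema
    vendor: str
    total: float

# Any HF transformers model works here; this one is just an example.
model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")
generator = outlines.generate.json(model, Invoice)

# At each decoding step, tokens that would violate the JSON schema are
# masked out of the logits, so the output always parses - no retry loop,
# no second "fixer" model.
invoice = generator("Extract the invoice: ACME Corp billed $1,200.50.")
print(invoice)  # e.g. Invoice(vendor='ACME Corp', total=1200.5)
```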
For proprietary models, more and more providers are doing proper structured generation (i.e. constrained decoding) under the hood. Most notably, OpenAI's current version of Structured Outputs uses logit-based methods to guarantee the structure of the output.
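With the OpenAI SDK that looks something like this (a sketch; note that strict mode requires every property to be listed in required and additionalProperties to be false):

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Extract: ACME Corp billed $1,200.50."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "invoice",
            "strict": True,  # turns on the constrained-decoding guarantee
            "schema": {
                "type": "object",
                "properties": {
                    "vendor": {"type": "string"},
                    "total": {"type": "number"},
                },
                "required": ["vendor", "total"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # valid JSON matching the schema
```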
You should check out the new features, like asking questions as a listener.
I don't use it a lot, but it's useful when you want an engaging audio interface to long (50+ page) reports - ones you wouldn't normally read because they're outside your area of expertise or you don't have the time, but that you can listen to while doing cardio or chores.