Yeah, this is the next step. I first wanted to see whether this gets any traction. I think I'll provide a dockerized version of the server part that you can run with a single command, and maybe some interface to create API keys and distribute them to your users.
Fair enough from a business standpoint, but seeing as there are massive privacy/security risks involved in exposing your data to an opaque service, the open source component is probably a non-optional aspect of the value prop.
"TypeScript is now the most used language on GitHub. In August 2025, TypeScript overtook both Python and JavaScript. Its rise illustrates how developers are shifting toward typed languages that make agent-assisted coding more reliable in production. It doesn’t hurt that nearly every major frontend framework now scaffolds with TypeScript by default. Even still, Python remains dominant for AI and data science workloads, while the JavaScript/TypeScript ecosystem still accounts for more overall activity than Python alone."
I am not sure I agree with the conclusion that "developers are shifting toward typed languages that make agent-assisted coding more reliable in production". I see it more as full-stack development being democratized.
I am originally a Python/BE/ML engineer, but in the last few years I've built a lot of frontend, simply because AI coding enables so much.
>I'll preface this by saying that neither of us has a lot of experience writing Python async code
> I'm actually really interested in spending proper time in becoming more knowledgeable with Python async, but in our context you a) lose precious time that you need to use to ship as an early-stage startup and b) can shoot yourself in the foot very easily in the process.
The best advice for a start-up is to use the tools you know best, even when they aren't the best tool for the job. Let's say you need to build a CLI. Go is very likely the better tool, but if you're a great Python programmer, just do it in Python.
Here it's rather a case where the author was not very good with Python: they used Django instead of FastAPI, which would have been the right tool for the job, and then wrote a blog post about Python being bad when it's really about Django. So yeah, they should have started with Node from day one.
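To make the Django-vs-FastAPI point concrete, here's a minimal sketch of the kind of endpoint where FastAPI's native async model pays off. This isn't the blog post's actual code; the route, the httpx dependency, and the upstream URL are all invented for illustration:

```python
# Hedged sketch: an async FastAPI endpoint awaiting a slow upstream call.
import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/summarize")
async def summarize():
    # While this request is awaiting the upstream response, the event loop
    # keeps serving other requests; no worker thread sits blocked the way
    # it would in a classic sync Django view.
    async with httpx.AsyncClient() as client:
        resp = await client.get("https://api.example.com/slow-model")
    return {"upstream_status": resp.status_code}
```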
The only issue with writing a CLI in Node is the ecosystem. The CLI libraries for Node are (or were, last time I checked) inspired by React. That's not a paradigm that is fun to write in, and if I'm making a CLI tool it's because I'm bored and want to make something for my own entertainment.
Yeah, the last CLI app I built was actually a TUI. It routed stdout and stderr from the scripts and programs it called out to into separate windows. It had animations, effects, and a built-in help system. It also had theming support, because the library I used for the TUI happened to offer that and came with some default themes! It was a bit beyond a simple CLI tool.
If I'm faffing around in the console, I'm going to have fun.
Gemini CLI used it, and I actually hate the layout. I never thought I'd see a CLI manage to waste terminal space to the point of needing to zoom out.
There is a major mistake in the article: the author argues that OpenInference is not OTel compatible. That is false.
>OpenInference was created specifically for AI applications. It has rich span types like LLM, tool, chain, embedding, agent, etc. You can easily query for "show me all the LLM calls" or "what were all the tool executions." But it's newer, has limited language support, and isn't as widely adopted.
> The tragic part? OpenInference claims to be "OpenTelemetry compatible," but as Pranav discovered, that compatibility is shallow. You can send OpenTelemetry format data to Phoenix, but it doesn't recognize the AI-specific semantics and just shows everything as "unknown" spans.
What is written above is false. OpenInference (or, for that matter, OpenLLMetry and the GenAI OTel conventions) is just a set of semantic conventions for OTel. Semantic conventions specify how a span's attributes should be named, nothing more and nothing less. If you are instrumenting an LLM call, you need to record the model used; the semantic convention tells you to save the model name under an attribute such as `llm_model`. That's it.
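In code, following a convention amounts to nothing more than choosing attribute names. A minimal sketch with the OpenTelemetry Python API (the attribute keys below mirror the `llm_model` illustration above, not the exact OpenInference keys):

```python
# Any OTel backend can ingest this span; only the attribute *names*
# follow a convention. Keys here are illustrative, not the real semconv.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("llm-call") as span:
    span.set_attribute("llm_model", "gpt-4o")   # where the convention says the model goes
    span.set_attribute("llm_token_count", 512)  # likewise for token usage
```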
Saying OpenInference is not OTel compatible does not make any sense.
Saying Phoenix (the vendor) is not OTel compatible because it does not render spans that don't follow its convention is... well, unfair to say the least (and I'm saying this as a competitor in the space).
A vendor is OTel compliant if it has a backend that can ingest data in the OTel format. That's it.
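And that bar is concretely low: pointing a standard OTLP exporter at the vendor's endpoint is all it takes. A sketch, with the endpoint URL as a placeholder:

```python
# Exporting the same spans to any OTel-compliant backend is just a matter
# of where the OTLP exporter points. The URL below is a placeholder.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="https://vendor.example.com/v1/traces"))
)
trace.set_tracer_provider(provider)
```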
Different vendors are compatible with different semconvs. Generalist observability platforms like SigNoz don't care about the semantic conventions: they show all spans the same way, as a JSON of attributes. A retrieval span, an LLM call, and a DB transaction all look the same in SigNoz; messages and tool calls aren't rendered any differently.
LLM observability vendors (like Phoenix, mentioned in the article, or Agenta, the one I maintain and am shamelessly plugging) care a lot about the semantic conventions. Their UIs are designed to show AI traces in the best possible way: LLM messages, tool calls, prompt templates, and retrieval results are all rendered in user-friendly ways. For that, the UI needs to understand where each attribute lives, so semantic conventions matter a lot to LLM observability vendors.

The point the article is actually making is that Phoenix only understands the OpenInference semconvs. That is very different from saying that Phoenix is not OTel compatible.
Having been on the other side of the table, I can say you did not flunk anything, again.
A hiring process is not an exam where doing well means you pass.
Your "performance" plays a small role in whether you are accepted (maybe less than 30%). The rest is:
- The pipeline: who your competitors are, whether someone else is late in the process, whether there's a candidate a manager knows or has worked with
- Your CV: obviously at the point of the interview, you can't change your history
- The position fit: basically who they're looking for. They might have a profile in mind (say, an extrovert to give lots of talks, or someone to do devrel for enterprise customers) that you simply don't fit.
- The biases: and there are a lot of these. For instance, some people would open your blog and call it unprofessional because of the UI. Not saying that's the case here; it's simply their bias.
So, my advice: you reached the HN front page twice in a couple of months. Most people, me included, never have. You clearly have something. Find work with people who see that.
I wonder if they plan to invest seriously in this?