HelloNurse's comments | Hacker News

A worrying choice of words.

"Losing sleep" implies an actual problem, which in turn implies that the mentioned mitigations and similar ones have not been applied (at least not properly) for dire reasons that are likely to be a more important problem than bad QoS.

"Infrastructure" implies an expectation that you deploy something external to the troubled application: there is a defective, presumably simplistic application architecture, and fixing it is not an option. This puts you in an awkward position: someone else is incompetent or unreasonable, but the responsibility for keeping their dumpster fire running falls on you.


Fair pushback — to clarify, I’m not assuming incompetence or suggesting infra should paper over bad architecture.

By “losing sleep” I really mean on-call fatigue during partial outages — the class of incidents where backoff, shedding, and breakers exist, but retry amplification, shared rate limits, or degraded dependencies still cause noisy pages and prolonged recovery.

I’m trying to understand how teams coordinate retries and backpressure across many independent clients/services when refactors aren’t immediately available, not replace good architecture or take ownership of someone else’s system.
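
For concreteness, this is roughly the kind of mechanism I have in mind: a shared, client-side retry budget gating jittered exponential backoff, so a degraded dependency sees a bounded amount of retry traffic instead of a multiplicative storm. A minimal sketch (all names, numbers and the bare exception handling are illustrative, not taken from any particular stack):

  import random
  import time

  class RetryBudget:
      """Process-wide cap on retries as a fraction of original requests.

      Each original request deposits `ratio` tokens; each retry spends one.
      When the bucket is empty, callers fail fast instead of piling on.
      """
      def __init__(self, ratio=0.1, max_tokens=100.0):
          self.ratio = ratio
          self.max_tokens = max_tokens
          self.tokens = max_tokens

      def record_request(self):
          self.tokens = min(self.max_tokens, self.tokens + self.ratio)

      def try_spend_retry(self):
          if self.tokens >= 1.0:
              self.tokens -= 1.0
              return True
          return False

  def call_with_retries(do_call, budget, max_attempts=4, base_delay=0.1, max_delay=2.0):
      """Exponential backoff with full jitter, gated by the shared budget."""
      budget.record_request()
      for attempt in range(max_attempts):
          try:
              return do_call()
          except Exception:  # retry on anything, for brevity
              if attempt == max_attempts - 1 or not budget.try_spend_retry():
                  raise  # out of attempts or out of budget: fail fast
              time.sleep(random.uniform(0.0, min(max_delay, base_delay * 2 ** attempt)))

The interesting part is that the budget is shared across every call site in the process (or enforced in a sidecar/proxy), which is what keeps independent clients from amplifying each other's retries.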

If you’ve seen patterns that consistently avoid that on-call pain at scale, I’d genuinely love to learn from them.


Yes, but it is a meaningless syntax sampler.

Good examples should be complete music pieces, and they should be commented: where is the important information? How are the numbers computed? How are the commands organized? What is the practical workflow for making changes?


  > ÆTHRA is output-oriented: you write a script → run it → get a WAV file.

You are competing with traditional noninteractive usage of CSound. What do you think you can do better than CSound? More generally, what are the peculiar and valuable ÆTHRA features that you want to develop well?

The current language is relatively verbose and readable (more suitable for live coding than for a "music compiler"), but somewhat simplistic and ad hoc on the notation side (e.g. no separate tracks, parts etc.) and not very general on the sound synthesis and processing side (e.g. fixed waveforms and keywords for effects).


Indeed. If a test runner embedding the Godot engine is now feasible on paper, a proof-of-concept implementation seems warranted: if there are fatal bugs or limitations they will eventually be corrected (sooner if properly discovered, reported and discussed), and if there are none, the new technology is "battle-tested" enough.

Taking feature lists and plans at face value is offensively shallow; the typical Rust fan arrogance pattern can be an explanation (if the Rust rewrite is "better", it doesn't have to be compatible with the rest of the world who uses the actual C SQLite).

Progress would be a respectful experiment to hack an implementation of vector indexing, or some other actually useful feature, into the actual SQLite, preferably as an extension.

That would be a valid experiment and, if it goes well, a contribution, while hoping that someone bases anything important on Turso looks like grabbing captive users.


If you think this discussion is antagonistic, you should see how antagonistic "entrepreneurs" and VCs become when they are in charge of open source projects. Risk aversion is good.

In this case, the familiar "rewrite it in Rust" MO has a special angle: the Turso feature list is such a terrifying collection of high-risk, low-performance, inferior, unlikely to be compatible, unproven and unnecessary departures from SQLite that a malicious embrace-and-extend business plan is a reasonable theory and reckless naivety is the best possible case.


Not only would any computer of last resort have software installed in advance, and easily prepared redundant archives to install it again, but "pip install" is perfectly fine for other use cases: testing Reticulum, regularly updating everyday computers, improvised installations on someone else's computer, etc.

"Lastly, if you're set on 6K, there's also the Asus ProArt PA32QCV to consider. I haven't tested it yet, but it's $600 cheaper than LG's model, despite using the same 6K panel. [...] The biggest difference is the lack of Nano IPS Black"

How can it be the same panel if it differs in such a fundamental aspect?


A: the control board is different

B: the glass in front is different

C: the backlight is different

D: the naming is different

Any combo of the above.


These would be differences between two different monitors sharing the same panel. But if one is Nano IPS Black and one isn't, they don't have the same panel.

It might make sense to reduce curve size adaptively. Suppose the first control point of each candidate curve is randomly distributed anywhere in the image (weighting by per-pixel error might be more effective than a uniform distribution), each of the other three control points follows a Gaussian distribution with spread σ centered on the previous control point, and the resulting curve is clipped against the image rectangle. Then, after N consecutive candidate curves are rejected, σ could be reduced in order to try smaller curves.
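
A rough sketch of that loop, assuming σ is the standard deviation of the Gaussian step, clipping control points rather than the rendered curve for brevity, and with `improves` standing in for whatever acceptance test the renderer actually uses:

  import numpy as np

  def propose_curve(error_map, sigma, rng):
      """One candidate cubic Bezier as 4 control points.

      The first point is sampled with probability proportional to per-pixel
      error; each later point is a Gaussian step of scale sigma from the
      previous one, clipped to the image rectangle.
      """
      h, w = error_map.shape
      weights = error_map.ravel() / error_map.sum()  # error_map: nonnegative
      start = rng.choice(h * w, p=weights)
      points = [np.array([start % w, start // w], dtype=float)]
      for _ in range(3):
          step = rng.normal(0.0, sigma, size=2)
          points.append(np.clip(points[-1] + step, [0.0, 0.0], [w - 1.0, h - 1.0]))
      return np.stack(points)

  def fit(error_map, improves, candidates=10000, sigma=64.0,
          patience=200, shrink=0.5, min_sigma=2.0, seed=0):
      """Adaptive-sigma search: after `patience` consecutive rejections,
      shrink sigma so later candidates tend to be smaller curves.

      `improves(curve)` is caller-supplied, e.g. "does drawing this curve
      reduce total error?"
      """
      rng = np.random.default_rng(seed)
      accepted, rejections = [], 0
      for _ in range(candidates):
          curve = propose_curve(error_map, sigma, rng)
          if improves(curve):
              accepted.append(curve)
              rejections = 0
          else:
              rejections += 1
              if rejections >= patience and sigma > min_sigma:
                  sigma = max(min_sigma, sigma * shrink)
                  rejections = 0
      return accepted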

