Hacker News | new | past | comments | ask | show | jobs | submit | pornel's comments

Even on Linux with overcommit you can have allocations fail, in practical scenarios.

You can impose limits per process/cgroup. In server environments it doesn't make sense to run off swap (the perf hit can be so large that everything times out and it's indistinguishable from being offline), so you can set limits proportional to physical RAM, and see processes OOM before the whole system needs to resort to OOMKiller. Processes that don't fork and don't do clever things with virtual mem don't overcommit much, and large-enough allocations can fail for real, at page mapping time, not when faulting.

Additionally, soft limits like https://lib.rs/cap make it possible to reliably observe OOM in Rust on every OS. This is very useful for limiting memory usage of a process before it becomes a system-wide problem, and a good extra defense in case some unreasonably large allocation sneaks past application-specific limits.

These "impossible" things happen regularly in the services I worked on. The hardest part about handling them has been Rust's libstd sabotaging it and giving up before even trying. Handling of OOM works well enough to be useful where Rust's libstd doesn't get in the way.
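For completeness, std's fallible `try_reserve` API is one way to observe allocation failure without aborting, where the infallible APIs would give up. A minimal sketch (the sizes are illustrative, not from the services mentioned above):

```rust
use std::collections::TryReserveError;

// A sketch of observing allocation failure with std's fallible `try_reserve`.
fn read_payload(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve_exact(len)?; // reports failure instead of aborting
    buf.resize(len, 0);
    Ok(buf)
}

fn main() {
    // isize::MAX bytes exceeds the addressable space on 64-bit targets,
    // so this fails cleanly rather than killing the process.
    assert!(read_payload(usize::MAX / 2).is_err());
    assert_eq!(read_payload(16).unwrap().len(), 16);
}
```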

Rust is the problem here.


I hear this claim about swap all the time, and honestly it doesn't sound convincing. Maybe ten or twenty years ago, but today? CAS latency for DIMMs has been going UP, and so has NVMe bandwidth. Depending on memory access patterns, and whether the working set fits in the NVMe controller's cache (the recent Samsung 9100 model includes 4 GB of DDR4 for cache and prefetch), your application may work just fine.

Swap can be fine on desktops where usage patterns vary a lot, and there are a bunch of idle apps to swap out. It might be fine on a server with light loads or a memory leak that just gets written out somewhere.

What I had in mind was servers scaled to run near maximum capacity of the hardware. When the load exceeds what the server can handle in RAM and starts shoving requests' working memory into swap, you typically won't get higher throughput to catch up with the overload. Swap, even if "fast enough", will slow down your overall throughput when you need it to go faster. This will make requests pile up even more, making more of them go into swap. Even if it doesn't cause a death spiral, it's not an economical way to run servers.

What you really need to do is shed the load before it overwhelms the server, so that each box runs at its maximum throughput, and extra traffic is load-balanced elsewhere, or rejected, or at least queued in some more deliberate and efficient fashion, rather than frantically moving the server's working memory back and forth from disk.

You can do this scaling without OOM handling if you have other ways of ensuring limited memory usage or leaving enough headroom for spikes, but OOM handling lets you fly closer to the sun, especially when the RAM cost of requests can be very uneven.
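A load-shedding admission check of the kind described above could be sketched like this (the budget and cost numbers are made up for illustration; a real service would estimate per-request cost from its own workload):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Sketch: reject work up front when an estimated in-flight memory budget
// would be exceeded, instead of letting the box dip into swap.
static IN_FLIGHT_BYTES: AtomicUsize = AtomicUsize::new(0);
const BUDGET: usize = 512 * 1024 * 1024; // keep well below physical RAM

fn try_admit(estimated_cost: usize) -> bool {
    let mut current = IN_FLIGHT_BYTES.load(Ordering::Relaxed);
    loop {
        if current.saturating_add(estimated_cost) > BUDGET {
            return false; // shed: reject, queue, or balance elsewhere
        }
        match IN_FLIGHT_BYTES.compare_exchange_weak(
            current,
            current + estimated_cost,
            Ordering::Relaxed,
            Ordering::Relaxed,
        ) {
            Ok(_) => return true,
            Err(actual) => current = actual, // raced with another thread; retry
        }
    }
}

fn release(estimated_cost: usize) {
    IN_FLIGHT_BYTES.fetch_sub(estimated_cost, Ordering::Relaxed);
}

fn main() {
    assert!(try_admit(1024));
    assert!(!try_admit(BUDGET)); // would blow the budget: shed instead
    release(1024);
    assert!(try_admit(BUDGET));
}
```

The point is that the rejection happens before any large allocation is attempted, so the server keeps running at full throughput instead of thrashing.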


It's almost never the case that memory is uniformly accessed, except for highly artificial loads such as doing inference on a large ML model. If you can stash the "cold" parts of your RAM working set into swap, that's a win and lets you serve more requests out of the same hardware compared to working with no swap. Of course there will always be a load that exceeds what the hardware can provide, but that's true regardless of how much swap you use.

Swap isn't just for when you run out of RAM, though.

Don't look at swap as extra memory on slow HDDs. Look at it as a place the kernel can use if it needs somewhere to put something temporarily.

This can happen fairly easily on large-memory systems when memory gets fragmented and something asks for a chunk of memory that can't be allocated because there isn't a large enough contiguous block, so the allocation fails.

I always set up at least a couple of GBs of swap now... I won't really miss the storage, and that at least gives the kernel a place to re-org/compact memory and keep chugging along.


Thread safety metadata in Rust is surprisingly condensed! POSIX has more fine-grained MT-unsafe concepts than Rust.

Rust data types can be "Send" (can be moved to another thread) and "Sync" (multiple threads can access them at the same time). Everything else is derived from these properties (structs are Send if their fields are Send, wrapping non-Sync data in a Mutex makes it Sync, thread::spawn() requires Send args, etc.).

Rust doesn't even reason about thread-safety of functions themselves, only the data they access, and that is sufficient if globals are required to be "Sync".
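A small sketch of how those derivation rules compose in practice (nothing here is from a real codebase): Cell&lt;i32&gt; is Send but not Sync, yet wrapping it in a Mutex produces something that many threads can share through an Arc:

```rust
use std::cell::Cell;
use std::sync::{Arc, Mutex};
use std::thread;

// Cell<i32> is Send but not Sync; Mutex<Cell<i32>> is Sync because its
// contents are Send, so Arc<Mutex<Cell<i32>>> can be shared across threads.
fn parallel_increment(threads: usize) -> i32 {
    let shared = Arc::new(Mutex::new(Cell::new(0)));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let shared = Arc::clone(&shared); // the Arc is Send, so it can move into spawn()
            thread::spawn(move || {
                let cell = shared.lock().unwrap();
                cell.set(cell.get() + 1);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = shared.lock().unwrap().get();
    total
}

fn main() {
    assert_eq!(parallel_increment(4), 4);
}
```

Remove the Mutex and the same code fails to compile, because the compiler derives that the Arc's contents are no longer Sync.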


There are other possible explanations, e.g. AVC and HEVC are set to the same bitrate, so AVC streams lose quality, while AV1 targets HEVC's quality. Or they compare AV1 traffic to the sum of all mixed H.26x traffic. Or the rates vary in more complex ways and that's an (over)simplified summary for the purpose of the post.

Netflix developed VMAF, so they're definitely aware of the complexity of matching quality across codecs and bitrates.


I have no doubt they know what they are doing. But it's a strange metric no matter how you slice it. Why compare AV1's bandwidth to the average of h.264 and h.265, without any more details about resolution or compression ratio? Reading between the lines, it sounds like they use AV1 for low bandwidth, h.265 for high bandwidth, and h.264 as a fallback. If that is the case, why bring up this strange average bandwidth comparison?

Yeah, it's a weird comparison to be making. It all depends on how they selected the quality (VMAF) target during encoding. You could easily end up with other results had they, say, decided to keep the bandwidth but improve quality using AV1.

I've been writing async Rust for as long as it existed, and never ran into any cancel-safety issue. However, I also never used tokio's select macro.


It depends how you actually use the messages. Zero-copy can actually slow things down: copying within L1 cache is ~free, but operating on needlessly dynamic or suboptimal data structures can add overhead everywhere they're used.

To actually literally avoid any copying, you'd have to directly use the messages in their on-the-wire format as your in-memory data representation. If you have to read them many times, the extra cost of dynamic getters can add up (the format may cost you extra pointer chasing, unnecessary dynamic offsets, redundant validation checks and conditional fallbacks for defaults, even if the wire format is relatively static and uncompressed). It can also be limiting, especially if you need to mutate variable-length data (it's easy to serialize when only appending).

In practice, you'll probably copy data once from your preferred in-memory data structures to the messages when constructing them. When you need to read messages multiple times at the receiving end, or merge with some other data, you'll probably copy them into dedicated native data structs too.

If you change the problem from zero-copy to one-copy, it opens up many other possibilities for optimization of (de)serialization, and doesn't keep your program tightly coupled to the serialization framework.
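A toy illustration of the one-copy approach (the wire format here, a little-endian u16 version plus u32 length, is entirely made up): validate and parse the bytes once into a native struct, then read its fields with no per-access overhead.

```rust
use std::convert::TryInto;

// "One-copy" deserialization: pay for parsing and validation once,
// instead of on every field access through dynamic getters.
#[derive(Debug, PartialEq)]
struct Header {
    version: u16,
    len: u32,
}

fn parse(wire: &[u8]) -> Option<Header> {
    Some(Header {
        version: u16::from_le_bytes(wire.get(0..2)?.try_into().ok()?),
        len: u32::from_le_bytes(wire.get(2..6)?.try_into().ok()?),
    })
}

fn main() {
    let wire = [1u8, 0, 16, 0, 0, 0];
    assert_eq!(parse(&wire), Some(Header { version: 1, len: 16 }));
    assert_eq!(parse(&wire[..3]), None); // truncated input fails validation once
}
```

After the one copy, the rest of the program deals only in plain structs and is decoupled from the serialization framework.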


I've switched to Home Assistant, and it's sooo much better.

Home Assistant can even share devices with Home app, so you can still use "Siri, turn off the lamp" to have it answer "you don't have any alarms set".


AV2 is in the works, so I guess we'll have AVIF2 soon, and another AVIF2 vs JPEG XL battle.


There's no particular reason for an image format based on video codec keyframes to ever support a lot of the advanced features that JPEG XL supports. It might compress better than AVIF 1, but I doubt it would resolve the other issues.


Cargo's cache is ridiculously massive (half of which is debug info: zero-cost abstractions have full-cost debug metadata), but you can delete it after building.

There's a new-ish build.build-dir setting that lets you redirect Cargo's temp junk to a standard system temp/cache directory instead of polluting your dev dir.
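If I'm reading the Cargo config reference right, the setting goes in .cargo/config.toml, something like this (the path is illustrative):

```toml
[build]
# Redirect intermediate build artifacts out of the project directory.
build-dir = "/tmp/cargo-build-dir"
```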


> There's new-ish build.build-dir setting that lets you redirect Cargo's temp junk to a standard system temp/cache directory instead of polluting your dev dir.

If it’s just logs, I would prefer to redirect it to /dev/null.


The situation today is very different from what it used to be when people actually used 386s or Amigas because they had no other option (BTW, Rust supports m68k, just not AmigaOS specifically).

Today even the crappiest old PCs you can fish out of a dumpster are new enough to have Rust/LLVM support. We have mountains of Rust-compatible e-waste that you can save from landfill. Take whatever is cheapest on eBay, or given away on your local FB marketplace, and it will run Rust, and almost certainly be orders of magnitude faster and more practical than the unsupported retro hardware.

Using actual too-niche-for-Rust hardware today is more expensive. Such machines are often collectors' items, and need components and accessories that are hard to obtain, or need replacements/adapters that can be custom low-volume products.

Even if you can put together something from old-but-not-museum-yet parts, it's not going to make more sense economically than getting an older-gen Raspberry Pi kit or its AliExpress knock-offs (there are VGA dongles more expensive than some of these boards).

It's fine to appreciate SGI and DEC Alpha, have fun using BeOS, or prove that AmigaOS is still a perfectly fine daily driver, but let's not pretend it's a situation that people are in due to economic hardship.


> but let's not pretend it's a situation that people are in due to economic hardship.

I'd encourage you not to strawman my response, because I already said myself that it appears to me it's only hobbyists who are losing support.

My objection isn't to the claim that it's dropping support; my objection is that it's dropping support without cause, other than the assumption that this would be more comfortable.

Maintainers are absolutely not required to support everything forever, but I recall a story where someone from Linux paid for a user to upgrade, not because that was required, but because it would make dropping support for that floppy driver feel ethical.

This is the level of compassion everyone should expect from software engineers in critical positions of power.

I have no sympathy for people who lack the compassion to expend the effort to help others. I do have sympathy for people who have to watch their world get worse, even if it's them alone, so that others can avoid a trivial amount of perceived discomfort.

Should this solo maintainer (who understands C) be required to do things exactly the way that I want? Of course not, but I'll be damned if everyone expects me to remain silent while I watch them disrespect other people who were previously depending on their support.


By suggesting the switch to Rust was "without cause", and bringing up the concerns of floppy users and retro-hobby hardware, you seem to be seeing the change only from the very narrow perspective of the interests of a very specific group of users.

There are lots of other users, and lots of other ways to care about them. Making software less likely to have vulnerabilities is caring about its users too. Making software work better and faster on contemporary hardware is caring about users too, just a different group (and a way larger one, and including users who really can't afford faster hardware).

Sometimes it's just not possible to make everyone happy, and even just keeping the status quo is not always a free option. Hypothetically, keeping working support for some weird floppy drive may be increasing overall system complexity, and cost dev and testing effort that might have been spent on something else that benefitted a larger number of users more.

Switching to a language with a friendlier compiler, fewer gotchas, less legacy cruft, and less picky dependency management can also be a way of caring about users - lowering the barriers to contributing productively can help get more contributions, fewer bugs, improve the software overall, and empower more users to modify their tools.

It'd be fine to argue about which trade-offs are better, and which groups of users should be prioritized, but it's disingenuous to frame not accommodating the retro/hobby use cases in particular as a sign of a lack of compassion in general. It could be quite the opposite: focusing only on the status quo and past problems shows a lack of care about all the other users and the future of the software.


That's just your lack of familiarity with the foreign-to-you language (you may be unable to read Korean too, despite Korean being pretty readable).

Syntactically, Rust is pretty unambiguous, especially compared to C-style function and variable definitions. You get fn and let keywords, and definitions that generally read left-to-right, instead of starting with an arbitrary identifier that may be a typedef, a preprocessor macro, or part of a type that is read in so-called "spiral" order (which isn't even a spiral, but more complex than that).
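A tiny made-up illustration: the definitions below read left-to-right behind fn and let, while the C spelling of the same pointer-to-array type has to be read inside-out.

```rust
// Rust definitions read left-to-right after a keyword. The equivalent C
// declaration of `table` would be `int (*table)[3]` — read "inside-out",
// not left-to-right.
fn scale(values: &[i32], factor: i32) -> Vec<i32> {
    values.iter().map(|v| v * factor).collect()
}

fn main() {
    let table: &[i32; 3] = &[1, 2, 3]; // reference to an array of 3 ints
    assert_eq!(scale(table, 2), vec![2, 4, 6]);
}
```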

