Hacker News | hedgehog's comments

There are challenges with really big monolithic caches. IBM does something sort of like your idea in their Power and Telum chips, with different approaches. Power has a non-uniform cache within each die, Telum has a way to stitch together cache even across sockets (!).

https://chipsandcheese.com/p/telum-ii-at-hot-chips-2024-main...

https://www.eecg.utoronto.ca/~moshovos/ACA07/projectsuggesti...

(if you do ML things you might recognize Doug Burger's name on the authors line of the second one)


Oh, the Xeons have the vX vs. vY nonsense, where the same model number with a different version is an entirely different CPU (the 2620 v1 and v2 are different microarchitecture generations with different core counts). But, not to leave AMD out, they do things like the Ryzen 7000 series, which is Zen 4 except for the models that are Zen 2 (!). (Yes, if you read the middle digits there's some indication, but that's not much help for normal customers.)

I'm curious, what are you doing that has over 1000 hours a month of action runtime?

I run a local Valhalla build cluster to power the https://sidecar.clutch.engineering routing engine. The cluster runs daily and takes a significant amount of wall-clock time to build the entire planet. That's about 50% of my CI time; the other 50% is presubmits + App Store builds for Sidecar + CANStudio / ELMCheck.

Using GitHub actions to coordinate the Valhalla builds was a nice-to-have, but this is a deal-breaker for my pull request workflows.


Cool, that looks a lot nicer than the OBD scanner app I've been using.

On ZeroFS [0] I am doing around 80,000 minutes a month.

A lot of it is wasted in build time though, due to a lack of appropriate caching facilities with GitHub actions.

[0] https://github.com/Barre/ZeroFS/tree/main/.github/workflows


I found that implementing a local cache on the runners has been helpful. Ingress/egress over the network is hella slow, especially when each build has ~10-20GB of artifacts to manage.

What do you use for the local cache?

Just wrote about my approach yesterday: https://jeffverkoeyen.com/blog/2025/12/15/SlotWarmedCaching/

tl;dr uses a local slot-based cache that is pre-warmed after every merge to main, taking Sidecar builds from ~10-15 minutes to <60 seconds.
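Not the actual Sidecar setup, but a minimal sketch of the slot idea under my own assumptions: each "slot" is a pre-warmed local build directory, and a new job claims the least recently touched one instead of starting cold.

```shell
#!/bin/sh
# Hypothetical slot-based cache selection (illustrative names, not the real tooling).
CACHE_ROOT="${CACHE_ROOT:-/tmp/build-slots}"

# Pre-warmed slots would normally be populated after every merge to main;
# here we just make sure they exist.
mkdir -p "$CACHE_ROOT"/slot-1 "$CACHE_ROOT"/slot-2 "$CACHE_ROOT"/slot-3

# Claim the least recently touched slot (ls -t sorts newest first).
SLOT=$(ls -1td "$CACHE_ROOT"/slot-* | tail -n 1)
touch "$SLOT"

echo "building in $SLOT"
```

The win comes from reuse: the claimed slot already holds a recent build's artifacts, so the incremental build has far less to do.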


ZeroFS looks really good. I know a bit about this design space but hadn't run across ZeroFS yet. Do you do testing of the error recovery behavior (connectivity etc)?

This has been mostly manual testing for now. ZeroFS currently lacks automatic fault injection and proper crash tests, and it’s an area I plan to focus on.

SlateDB, the lower layer, already does DST as well as fault injection though.


Wow, that's a very cool project.

Thank you!

1 hour build/test time, 20 devs: that's 50 runs per dev a month. Totally possible.

GH Actions templates don't build all branches by default. I guess that's due to them not wanting the free tier to use too many resources. But I consider it an anti-pattern not to build everything on each push.

That's because GH Actions is event based, which is more powerful and flexible than triggering on every push with no way to configure it.

```yaml
on: push
```

is the event trigger to act on every push.
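For example (an illustrative config, not from the thread), the same `on` key can scope builds to specific events, branches, and schedules:

```yaml
on:
  push:
    branches: [main]       # build every push to main
  pull_request:            # plus every pull request update
  schedule:
    - cron: "0 4 * * *"    # and a daily run at 04:00 UTC
```

That flexibility is the trade-off: you opt in to what gets built rather than getting everything by default.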

You'll waste a lot of CI building everything, in my opinion; I only really care about the pull request.


There are several open BSDs.

AFAIK there's no evidence to suggest that permissive vs. copyleft license is the reason for the relative lack of success of the BSDs vs. Linux.

PlayStation and macOS kind of show what happens with upstream.

As did all the UNIXes that used to rule before companies started sponsoring Linux kernel development, and were quite happily taking BSD code into them, alongside UNIX System V original code.


The planetary gear "eCVT" systems that Toyota and Ford use in many models are mechanically a lot simpler than a traditional automatic or sequential manual transmission. Few moving parts and no clutches at all. I don't know what the long-term reliability of those drivetrains is, but I wouldn't be surprised if it's measurably better than a traditional transmission + engine. There's a long educational video from Weber State University that gives a good walkthrough of what's going on in those things.

https://www.youtube.com/watch?v=O61WihMRdjM


Color management infrastructure is intricate. To grossly simplify: somehow you need to connect together the profile and LUT for each display, upload the LUTs to the display controller, and provide appropriate profile data for each window to its respective process. During compositing, convert any buffers that don't already match the output (unmanaged applications will probably be treated as sRGB; color-managed graphics apps will opt out of conversion and do whatever is correct for their purpose).

Yes, but why is the compositor dealing with this? Shouldn't the compositor simply be deciding which windows go where (X, Y, and Z positions) and leave the rendering to another API? Why does every different take on a window manager need to re-do all this work?

Turning the question around, what other part of the system _could_ do this job? And how would the compositor do any part of its job if it doesn't have access to both window contents and displays? I'm not super deep in this area, but a straightforward example of a non-managed app and a color-aware graphics app running on a laptop with an external display seems like it is enough to figure out how things need to go together. This neglects some complicating factors like display pixel density, security, accessibility, multi-GPU, etc, but I think it more or less explains how the Wayland authors arrived at its design and how some of the problems got there.

I'm questioning the idea that people should be writing compositors at all. Why doesn't Wayland itself do the compositing and let everyone else just manage windows?

It's like going to Taco Bell and they make you grind your own corn for your tortillas.


Why? Probably better to ask the Wayland developers that. Maybe you're right. That said, whether everyone uses the same compositor and window management is modular, or not and shared code travels as libraries, I don't think the complexity of color management is much different.

I mean, when I hear the word "compositing" I definitely imagine something that involves "alpha" blending, and doing that nicely (instead of a literal alpha calculation) is going to involve colour management.
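A toy illustration of that point (my sketch, nothing from Wayland): alpha blending on sRGB-encoded values gives a different, darker result than blending in linear light, which is one reason compositing can't ignore color management.

```python
def srgb_to_linear(c: float) -> float:
    """Decode an sRGB-encoded channel (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    """Encode a linear-light channel (0..1) back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend_channel(fg: float, bg: float, alpha: float) -> float:
    """Composite fg over bg (both sRGB-encoded) in linear light."""
    lin = alpha * srgb_to_linear(fg) + (1 - alpha) * srgb_to_linear(bg)
    return linear_to_srgb(lin)

# 50% white over black: naive math on the encoded values says 0.5,
# but blending in linear light encodes back to about 0.735.
print(round(blend_channel(1.0, 0.0, 0.5), 3))
```

Doing the literal alpha calculation on encoded values is the common shortcut, and it's visibly wrong on gradients and antialiased edges.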

That's on the Wayland team though. They drew up the new API boundaries and decided that all window managers would now be in the business of compositing.

If I wanted to put it most uncharitably, I'd say they decided to push all of the hard parts out of Wayland itself and force everyone else to deal with them.


You can have the tool start by writing an implementation plan describing the overall approach and key details including references, snippets of code, task list, etc. That is much faster than a raw diff to review and refine to make sure it matches your intent. Once that's acceptable the changes are quick, and having the machine do a few rounds of refinement to make sure the diff vs HEAD matches the plan helps iron out some of the easy issues before human eyes show up. The final review is then easier because you are only checking for smaller issues and consistency with the plan that you already signed off on.

It's not magic though, this still takes some time to do.


It depends how old / where, the US didn't start vaccinating widely until around 30 years ago.


We didn't have this new-fangled chickenpox vaccine during my Gen-X childhood.


Or a lot of millennials. My parents were annoyed we had to go through it while Japan had been vaccinating since the 80s.


I hate to break it to you but the original work on that topic was by Schmidhuber & Schmidhuber back in 1963.


I've been using Macs in various forms since the 80s and I've carried a Mac laptop with me nearly full time since the early 2000s. While I don't think quality is necessarily worse overall than a decade or two ago, I have run out of patience for rewrites with major regressions of most of the apps I care about. For the first time in a lot of years I have a Linux laptop alongside my Mac and, if all works out, I'm planning to shift all my important workflows over.

