Hacker News | mvanveen's comments

Has the Rebble community ever explored its own open-source hardware for the Rebble ecosystem? I know there's a ton of work involved in getting something high-quality/consumer-grade, and there are obvious cost implications tied to order volume. We were all hoping Core Devices would offer the goods, but maybe we can lean into a community-driven model for the hardware as well?


I'd be surprised if more 'hackable' watches didn't pop up around the Sifli chips. Lilygo have an upcoming device with a Sifli 52 chip, and there's the SF32LB52-ULP smartwatch development board.


The Sifli 52 chip is news to me; thanks for pointing it out (I also didn't know it's what is powering the new Core Devices products, which is pretty cool).

I've previously built custom firmware for a DIY OLED ESP32 watch that a few vendors sell. In some ways we're emerging into that reality now, but I'd admit that what Core Devices is trying to do, and the general level of polish of the Pebble ecosystem, is a lot further along than what I'm describing.


Ironically, I don't think glibly remarking that you "still have all your limbs" and some handmade furniture at the end properly demonstrates someone "fear[ing] the saw," and it reflects some of the hubris we're seeing in current tech culture.

One of my high school teachers impressed the same caution upon my cohort but was missing the end of a finger.


I have Susan Kare's original Finder iconography tattooed on my body. I don't have much of an opinion on the new design language, but I do think the new Finder icons displayed in the post are an abomination.


I had the pleasure of meeting Dan in person a few times up here in the Bay Area. He was incredibly approachable and always generous with his time. If he sensed your curiosity, he’d give you his full, undivided attention.

Just weeks before he passed, we were trading long Twitter DMs late into the evening—deep, technical conversations spanning topics that were hard to get good information on elsewhere.

After his passing, as I began sharing these stories, I found that so many others had experienced the exact same generosity from him. He had a remarkable way of making people feel seen and supported.


I thought it was funny that they spent so much time bashing the choice of distro and highlighting various performance considerations, only to pick Alpine.

Alpine Linux uses musl as its libc, which, contrary to the article's claims (unless I'm missing new information?), can have severe performance implications in many production settings.

update: I found this April 2025 blog post where someone performed some benchmarks and found that musl runtime performance is still pretty far behind glibc: https://edu.chainguard.dev/chainguard/chainguard-images/abou...


That link may show the current state of Musl on Alpine, but Musl can be built with https://en.wikipedia.org/wiki/Mimalloc

https://github.com/microsoft/mimalloc

as https://en.wikipedia.org/wiki/Chimera_Linux / https://chimera-linux.org/ does.

Which, btw, can be done in Alpine too:

https://wiki.alpinelinux.org/wiki/Mimalloc
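For anyone who wants to try this without rebuilding musl, mimalloc also supports dynamic override via preloading. A sketch; the package name and library path below are assumptions that vary by Alpine release, so verify them on your system:

```shell
# Assumed package name; check `apk search mimalloc` for your release
apk add mimalloc2

# Override musl's allocator for a single process via dynamic preload
# (the exact .so path/version is an assumption -- check /usr/lib)
LD_PRELOAD=/usr/lib/libmimalloc.so.2 ./my_program
```

This only swaps malloc/free for that one process, which makes it a cheap way to benchmark whether the allocator is actually your bottleneck before committing to a rebuilt libc.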


I think this is true if there aren't new questions to be asked. But technologies shift and evolve all of the time.

One of my top StackOverflow questions for years was about the viability of ECMAScript 6. It's now essentially irrelevant because ES6 has found wide adoption in browsers etc., but at the time a lot of people appreciated the question because they wanted to adopt the technology but weren't sure about its maturity.

It's also true that some technology stacks mature to a point where there isn't much more to be asked, but I think there will continue to be a place for discussion forums where you can ask and get answers about newer, bleeding-edge technologies, use cases, etc.


> What do you do if you need to look up the definition/implementation of some function which is in some other file?

At some point, for me, 'find <dir> -name "*.ext" | xargs grep <pattern>' took over from recursive grep, because the required tools are available on most Unix systems.
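One caveat with that pipeline is filenames containing spaces, which plain xargs will split. A small self-contained sketch (the temp directory and sample files are made up for illustration) using the null-delimited variant:

```shell
# Build a throwaway example tree
dir=$(mktemp -d)
mkdir -p "$dir/sub dir"
printf 'int x; /* TODO fix */\n' > "$dir/sub dir/a.c"
printf 'int y;\n' > "$dir/b.c"

# -print0 / -0 delimit filenames with NUL bytes, so paths with
# spaces survive the pipe to xargs intact
find "$dir" -name '*.c' -print0 | xargs -0 grep -l 'TODO'
```

`-print0` and `xargs -0` are widely available (GNU findutils and the BSDs), though strictly speaking they're extensions rather than baseline POSIX, so the unquoted form in the comment above is still the most portable.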


> If that doesn't work, look into pivoting to tech recruiting. Hopefully I wouldn't need to go back to school for this.

The market for tech recruiting has been hit as hard and in some ways harder than the market for tech workers. Most folks are finding or placing jobs through in-network referrals and recruiters are finding much lower demand for their services after a long period of having it good.

I know some actually good recruiters and really feel for those folks right now, as well as early career folks like yourself. It took me 8+ months to find a job myself in 2023.

Wishing you the best of luck in your search! Sincerely rooting for you and hope your luck will turn around.


In this case let's assume that the weights and biases of, say, a neural network are fixed and the model is already trained.

One way of thinking about explainability is that it deals with determining for some input data how much each feature is contributing to the final outcome (e.g. variable 1 and 2 contributed x% and y% to the final inferred value).

You're correct to suggest that when you backpropagate residual error there are also non-linear interactions between features, and that will affect how much each variable contributes to the updated weight value (in fact, that's kind of the point of a deep network ;).
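To make the "how much did each feature contribute" framing concrete, here's a minimal sketch with a tiny fixed (already "trained") network, where all the weights and inputs are made-up numbers. It uses one crude attribution scheme, occlusion: replace a feature with a baseline and see how much the output moves. Because of the non-linearity, the per-feature deltas need not sum to the full prediction, which is exactly the interaction effect described above:

```python
import numpy as np

# Toy fixed network: weights are frozen, illustrative values only
W1 = np.array([[0.5, -0.2],
               [0.1,  0.8]])
b1 = np.array([0.0, 0.1])
w2 = np.array([1.0, -0.5])

def predict(x):
    h = np.tanh(x @ W1 + b1)   # non-linear hidden layer
    return h @ w2

x = np.array([1.0, 2.0])
baseline = np.zeros_like(x)    # "feature absent" convention
full = predict(x)

# Occlusion attribution: zero out one feature at a time
for i in range(len(x)):
    x_wo = x.copy()
    x_wo[i] = baseline[i]
    print(f"feature {i}: contribution ~ {full - predict(x_wo):+.3f}")
```

With these numbers, feature 0 pushes the prediction up and feature 1 pushes it down, and their sum differs from `full - predict(baseline)` precisely because tanh mixes the features non-linearly.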


I am a co-author of a patent for model explainability for credit risk underwriting applications using Shapley values.

In fairness, I haven't given this article a thorough read, but my initial impression is that I'm frustrated by the FUD it's attempting to spread. As my boss would often remind us: model explainability is an under-constrained optimization problem. By definition there isn't a unique explanation decomposition unless you further constrain the problem.

Therefore, I personally find that hand-wringing over the lack of 100% agreement between different explanations of a model inference, while definitely thought-provoking and worth considering, should at least account for this reality. For some reason a lot of folks in the ML community seem to have come to the opinion that because the problem is under-constrained, explanations shouldn't be calculated or have no utility.

Would you prefer to be able to examine which features are driving a model to deny a disproportionate number of folks of a particular race or ethnicity, or not, all things being equal? My point is that even if there are limitations to explainability, there are a lot of very real, critical scenarios where applying SHAP has actual, real-world utility.

Furthermore, it's not clear that LIME or other explainability methods will provide better or more robust explanations than Shapley values. As someone that has looked at this pretty extensively in credit underwriting I'd personally feel most comfortable computing SHAP values while acknowledging some of the limitations and risks this article calls out.

Axioms such as completeness are also pretty reasonable and I think there is a fair amount of real world utility to explainability algorithms that derive from such an axiomatic basis.
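For readers unfamiliar with the axiomatic framing: Shapley values average a feature's marginal contribution over every possible coalition of the other features, and completeness means the attributions sum exactly to the gap between the prediction and a baseline prediction. A small self-contained sketch (toy 3-feature model and numbers of my own invention, with "feature absent" meaning "set to a zero baseline"):

```python
from itertools import combinations
from math import factorial

# Toy model, deliberately non-linear so features 1 and 2 interact
def model(x):
    return 2.0 * x[0] + x[1] * x[2]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
n = len(x)

def value(coalition):
    # Evaluate the model with only the coalition's features "present"
    z = [x[i] if i in coalition else baseline[i] for i in range(n)]
    return model(z)

def shapley(i):
    # Weighted average of feature i's marginal contribution over
    # all coalitions S of the remaining features
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for k in range(n):
        for s in combinations(others, k):
            w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            phi += w * (value(set(s) | {i}) - value(set(s)))
    return phi

phis = [shapley(i) for i in range(n)]
print(phis)
# Completeness axiom: attributions sum to model(x) - model(baseline)
print(sum(phis), model(x) - model(baseline))
```

Here the linear feature gets exactly its coefficient times its value, and the interaction term x[1]*x[2] is split evenly between the two symmetric features; the sum recovers the full prediction gap, which is the completeness property at work. (Exact enumeration is exponential in the number of features, which is why production SHAP implementations approximate it.)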

