Hacker News | nvr219's comments

Probably a very new domain reg.

:allears:

Why are you booing them? They're right.

Look up Vinegar, Baking Soda, and (by a separate developer) Wipr.

I'm using Wipr and it's great. Using Vinegar/Baking Soda for video ad blocking.


How has nobody tried this until now?


>Plants in Jars admitted that, while she’s far from the first person to popularize tissue culture, her tutorials and videos explaining the method have likely been a significant driver in its growth within the plant collecting community, leading to a big change in the overall market.

Sometimes a good instructor makes all the difference in the world.


Having a pretty, intelligent, well-spoken young woman present it doesn't hurt either.

And in no way do I want to take away from her insight and skill in popularizing and communicating these concepts. Clearly she is good at what she does.


And sharing/spreading the knowledge widely. If 1000x as many people are doing something smart vs. a few here and there, it makes a huge difference in a marketplace.


I bypassed it by clicking “Continue with email” and then closing the dialog box.


But then I have to upgrade to iOS 26… The cure is worse than the disease!

For now I am using WireGuard + Pi-hole on a cheap VPS for all my devices. It's not perfect (data-center IP, so some places block it), but it's good enough for now. When I'm forced to update to 26, I'll definitely look at Wipr, since I tested that out and it was really good other than the in-app issue.
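
For anyone curious, the client side of that setup is just a stock WireGuard config with DNS pointed at the Pi-hole on the VPS. Something like this sketch (keys, addresses, and the endpoint are placeholders, not my real values):

    # /etc/wireguard/wg0.conf -- placeholders throughout, not real values
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.0.0.2/32
    DNS = 10.0.0.1                 # Pi-hole listening on the VPS's tunnel IP

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 0.0.0.0/0, ::/0   # route all traffic through the tunnel
    PersistentKeepalive = 25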


Or just do the 26 upgrade; the Liquid Glass changes are grossly overstated, and it's not a drastic change...


True. macOS Tahoe is far worse. It looks like a poorly designed Linux variant.


Maybe they could prove it using blockchain!!!


Congrats!! Very cool.


I've been thinking about this too—how different DDN is from other generative models. The idea of generating multiple outputs at once in a single pass sounds like it could really speed things up, especially for tasks where you need a bunch of samples quickly. I'm curious how this compares to something like GANs, which can also generate multiple samples but often struggle with mode collapse.

The zero-shot conditional generation part is wild. Most methods rely on gradients or fine-tuning, so I wonder what makes DDN tick there. Maybe the tree structure of the latent space helps navigate to specific conditions without needing retraining? Also, I'm intrigued by the 1D discrete representation—how does that even work in practice? Does it make the model more interpretable?
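
If I had to guess at the mechanism, it might be that every layer emits K candidate samples in a single forward pass, and conditioning is just a matter of which candidate you keep at each level of the tree. A toy sketch of that idea (purely my assumption; all names and shapes below are hypothetical, not code from the paper):

    # Toy sketch of my guess at DDN-style zero-shot conditional generation
    # (not the paper's code). Each hypothetical `layer` maps the current
    # sample to K refined candidates in one forward pass; we greedily keep
    # whichever candidate best satisfies an arbitrary condition loss, so
    # there are no gradients through the model and no fine-tuning.
    import torch

    def conditional_generate(layers, condition_loss, x0):
        x = x0
        for layer in layers:
            candidates = layer(x)                  # (K, C, H, W) candidates
            losses = torch.stack([condition_loss(c) for c in candidates])
            x = candidates[losses.argmin()]        # descend the best branch
        return x

If it works anything like that, it would explain the zero-shot part: the condition loss is only ever evaluated, never differentiated through, so any black-box criterion would do.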

The Split-and-Prune optimizer sounds new—I'd love to see how it performs against Adam or SGD on similar tasks. And the fact that it's fully differentiable end-to-end is a big plus for training stability.
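
Going purely from the name, my mental model of Split-and-Prune is a resampling step on the discrete nodes rather than a gradient rule (again, a guess; every name below is hypothetical):

    # Pure speculation from the name "Split-and-Prune" (not the paper's
    # algorithm): track how often each of the K discrete output nodes gets
    # selected, clone ("split") the overused node with a little jitter, and
    # recycle the starved node ("prune") so the K nodes stay balanced.
    import numpy as np

    def split_and_prune(nodes, counts, jitter=1e-3):
        hot, cold = int(np.argmax(counts)), int(np.argmin(counts))
        nodes[cold] = nodes[hot] + jitter * np.random.randn(*nodes[hot].shape)
        counts[hot] /= 2.0                  # the clone splits the mass
        counts[cold] = counts[hot]
        return nodes, counts

Something like that would compose fine with Adam/SGD on the network weights, which might be how the whole thing stays end-to-end differentiable.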

I also wonder about scalability—can this handle high-res images without blowing up computationally? The hierarchical approach seems promising, but I'm not sure how it holds up when moving from simple distributions to something complex like natural images.

Overall though, this feels like one of those papers that could really shift the direction of generative models. Excited to dig into the code and see what kind of results people get with it!


Thank you very much for your interest.

1. The comparison with GANs and the issue of mode collapse are addressed in Q2 at the end of the blog: https://github.com/Discrete-Distribution-Networks/Discrete-D...

2. Regarding scalability, please see “Future Research Directions” in the same blog: https://github.com/Discrete-Distribution-Networks/Discrete-D...

3. Answers or relevant explanations to any other questions can be found directly in the original paper (https://arxiv.org/abs/2401.00036), so I won’t restate them here.

