Show HN: Lava lamp simulated by neural net in infinite loop (github.com/muxamilian)
88 points by muxamilian on Feb 5, 2022 | hide | past | favorite | 25 comments
duralava is a neural network which can simulate a lava lamp in an infinite loop.

It uses a recurrent GAN that learns the physical behavior of the lava lamp.

A noteworthy aspect is that it can generate an arbitrarily long video of a (virtual) lava lamp without diverging, even after thousands of frames.



If I am remembering correctly, there was a company that was using videos of lava lamps for encryption or as passwords or some such. The claim was that it is uncrackable because it's truly random. I wonder if this can be used to emulate the physical process and break that encryption.


> If I am remembering correctly, there was a company that was using videos of lava lamps for encryption or as passwords or some such.

The method dates back to SGI's Lavarand: https://en.m.wikipedia.org/wiki/Lavarand

A related system, LavaRnd, uses webcams with their lens caps on, so the sensors only detect thermal noise:

https://www.lavarand.org/

But Cloudflare is the most famous implementer of the technique:

https://blog.cloudflare.com/randomness-101-lavarand-in-produ...

ISTR that US embassies tried using atmospheric noise as a seed for generating one-time pads at some point, but that was deprecated as being too vulnerable to undetectable outside interference.

By comparison, the Cloudflare system with the lava lamps in the lobby is tamper-evident.

> The claim was that it is uncrackable because it's truly random. I wonder if this can be used to emulate the physical process and break that encryption.

No. Chaotic processes are so sensitive to initial conditions and perturbations from the environment that any simulation quickly diverges from the actual process being simulated. Other common macroscale systems that exhibit this property are the three body problem and double pendulums.


Note that this method generates a video that looks nice and is very similar to a lava lamp, but it's not a 100% pixel-perfect simulation of a real lava lamp.

I'm not sure if you can use an image or short video as a seed, but if that's possible, the rest of the real video and the video generated by this will look similar only for a short time; after a while they will become more and more different.

It's similar to turning on two lamps from the same factory at the same time. Although they look similar initially, after a while they will look different.

In particular, a real lamp will pick up some vibration from cars in the street that affects the contents just a little, but after a while the chaotic behavior will amplify the differences. Small temperature differences from sunlight through the window, and other seemingly unimportant factors, will also cause the real lamp to behave unpredictably after some time.


Simulating a lava lamp with ML doesn't affect the security of RNGs in my view. It has been possible to do a computer simulation of a lava lamp with the Navier-Stokes equations for a long time now. Chaos theory means that predicting the long-term future of a particular realization would require extremely high-precision measurements of the initial conditions and boundary conditions of the lamp, certainly beyond what is feasible and possibly beyond what is possible. ML doesn't change that fact. It is possible, however, to predict statistics, but that wouldn't be useful for breaking encryption.
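A toy illustration of that sensitivity, using the chaotic logistic map instead of actual fluid dynamics (purely illustrative, not anything from duralava):

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started
# from initial conditions that differ by only 1e-10. The gap grows
# roughly exponentially until the trajectories are fully decorrelated.

def logistic(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a, b = 0.4, 0.4 + 1e-10
print(abs(logistic(a, 5) - logistic(b, 5)))    # still tiny
print(abs(logistic(a, 60) - logistic(b, 60)))  # order 1: effectively unrelated
```

The same logic applies to a lava lamp: any measurement error in the initial state, however small, eventually dominates the prediction.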


Past threads -

Cloudflare generating Pseudo-random numbers from 100 lava lamps (4 years ago - 4 comments - gizmodo.com)

https://news.ycombinator.com/item?id=15639320

Lavarand - Hardware random number generator using lava lamps (11 years ago - 0 comments - wikipedia.org)

https://news.ycombinator.com/item?id=15639320

Relevant website -

LavaRnd

http://www.lavarnd.org/lavarnd.html


Cloudflare does that: https://www.cloudflare.com/en-gb/learning/ssl/lava-lamp-encr...

That’s an interesting aspect, which I hadn’t thought of. I think real-world lava lamps have very chaotic behavior. I think the neural network, however, just learns the most common behaviors of a lava lamp and so far cannot learn every aspect of one. Training a bigger neural network could work though…


A neural network is only producing a probably bad (but maybe visually okay) approximation of a solution to the Navier-Stokes equations. The real lava lamp is solving them exactly.

Maybe the neural network produces good-enough-looking pictures and is cheap to run compared to solving the PDEs. But I don’t think it’s going to be that accurate.


Maybe the neural network can act as a preconditioner for the Navier-Stokes equations.

https://en.m.wikipedia.org/wiki/Preconditioner


I remember hearing this in the late ‘90s, and the company using “camera pointed at lava lamp” as a random number seed was SGI.


Huh, i was, literally this morning, staring at a lava lamp and wondering about its potential as a random seed :)


Just don't place it near a window ...


Fun project, even though I personally find the GAN images unconvincing; too many deconvolution artifacts, and poor conservation of mass, as another commenter said.

I am, however, completely awed by the folder in that git repo with 143,000 PNG files. Checking that into git would have turned my laptop itself into a molten blob of wax, haha.

Edit: rereading my comment, maybe it sounds harsh. I don't want to sound like I'm dissing this; GANs are hard and so is image generation. I couldn't have done it better.

Also: nice trick on penalizing poorly behaved (growing) noise vectors; another thing you could try is to simply always sample a random point on an n-sphere (divide your random vector by its length, so it always has length 1).
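In case it helps, a minimal sketch of that normalization trick (function name and dimension are made up, not from duralava):

```python
import numpy as np

# Project a Gaussian latent vector onto the unit n-sphere by dividing
# by its length, so a recurrent GAN always sees unit-norm noise and
# the norm can never drift over time.

rng = np.random.default_rng(0)

def sample_on_sphere(n):
    v = rng.standard_normal(n)    # isotropic Gaussian sample
    return v / np.linalg.norm(v)  # now lies on the unit (n-1)-sphere

z = sample_on_sphere(64)
print(np.linalg.norm(z))  # 1.0 up to float error
```

Because the Gaussian is isotropic, the normalized vector is uniformly distributed on the sphere, which is why this works as a sampler and not just a constraint.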


It does not do it convincingly. At least the GIFs on GitHub show blobs of lava disappearing mid-rise.

It's nice work though.


Maybe it's just me, but conservation of mass seems like something you'd want to hard-code, not learn.
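One way to nudge a generator toward that constraint, as a made-up sketch (nothing from duralava; treating summed pixel intensity as a crude proxy for lava mass):

```python
import numpy as np

# Hypothetical soft conservation-of-mass penalty: punish consecutive
# frames whose total "lava" intensity drifts, so blobs can't simply
# appear or vanish. Would be added to the generator's training loss.

def mass_penalty(prev_frame, next_frame):
    prev_mass = prev_frame.sum()
    next_mass = next_frame.sum()
    return (next_mass - prev_mass) ** 2

a = np.full((8, 8), 0.5)
b = a.copy()
b[0, 0] = 0.9  # a blob "appears" out of nowhere
print(mass_penalty(a, a))  # identical frames: zero penalty
print(mass_penalty(a, b))  # mass changed: positive penalty
```

Truly hard-coding the constraint (e.g. by construction of the architecture) would be stricter than a penalty, but a soft loss term like this is the easy first step.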


I think he’d get much better results if he simulated the actual physical blobs of lava using the ML model and then rendered those, as opposed to generating the image directly with ML.


Maybe I am confused. Is it simulating a lava lamp, or a video of a lava lamp?


This is a great philosophical question that John Searle would be proud of. The superficial answer would be "a video of a lava lamp," since that is its input and output. (It therefore tries to mimic artifacts of the video itself, compression, scaling, banding, etc.)

But the means by which the network generates frames involves some kind of learned representation of the lava lamp that is more than just a matrix of pixel values, and the network encodes a function for how it predicts this representation will change over time. So it also simulates the lava lamp itself.

On the other hand, the network has no idea about things that would be necessary to perform a remotely accurate simulation, like the properties of the chemicals involved or the laws of thermodynamics.


Well, any learned "representation of a lava lamp" would necessarily be of a subjective perspective, and so we have zoomed out only slightly from "video of a lava lamp" to "subjective experience of a lava lamp", both of which have video-like qualities (temporal correlation, large-scale persistent structures).

So it simulates really the "experience of a lava lamp".

But it still doesn't do it very well.


You show the neural net many video snippets of a real lava lamp. It tries to output videos that are supposed to imitate the behavior of a real lava lamp. So it actually outputs videos of lava lamps but to do that convincingly, it has to understand the physical behavior of a lava lamp at least a little bit.


So a GAN that draws a cartoon of an owl is 'simulating an owl' and has to 'understand the behavior of an owl at least a little bit'? I don't think so.

Don't get me wrong, it's a cool project, but let's not get ahead of ourselves.


It doesn't have to, no. But come on, that's not really what is meant here.

It does need to understand certain properties of owls, even if "arrangement of pixels that look like an eye near an arrangement of pixels that look like a beak" is as far as it gets. Though, as in another thread, it is not necessarily simulating the owl itself (you would need to do something like rendering à la Neural Radiance Fields (NeRF) to get closer to some perfect comprehension of an owl).


I don't think that's right. Proof of that learning would be if it applied the knowledge to something other than a lava lamp, or at least a modified lava lamp, so something "out of the box".


It doesn't simulate anything in the first place; it generates images from a model trained on a set of images.


I've wanted to do this with a fireplace for years. Hopefully someone with the skill set can do it.


Interesting. I guess you could just take a fireplace video (one hour or more), train duralava on it, and see what happens. I guess you won’t get perfect results, but it could be fun!

Edit: It could be easier than with a lava lamp, because for fire there is no complex physical model. For the lava lamp you have to take care that everything that goes up at some point comes down again, etc.



