
Yes, I've also always felt there must be ways to extract more data from a moving clip, precisely because of the effect he explains. But it seems that just superimposing the images doesn't actually extract that information, or at least not all of it.

But I wonder how to actually do it. Do you have concrete ideas for a simple algorithm?



If you can figure out, for each frame, which sets of (pre-aligned) pixels have been averaged, you can create a large system of equations that captures those relations and solve it to find the unblurred pixel values.

Depending on camera movement (and whether you might get "ground truth" information from pixels entering and leaving the areas near the borders) the system will be more or less well-conditioned. I'm going to try this for the data the author graciously provided and report back!
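A toy 1-D sketch of that idea in numpy, under assumed conditions not stated above: alignment is already known, each observed pixel is the mean of a fixed window of true pixels, and the two border pixels serve as the "ground truth" mentioned for conditioning. All names here are hypothetical illustration, not the author's code.

```python
import numpy as np

# Hypothetical 1-D scene of 8 "true" pixels; each observation averages
# 3 adjacent true pixels (a known, pre-aligned blur).
rng = np.random.default_rng(0)
true = rng.random(8)
n, k = len(true), 3

# Averaging matrix A: row i encodes "observation i is the mean of
# true pixels i..i+k-1" -- one linear relation per blurred pixel.
A = np.array([np.concatenate([np.zeros(i),
                              np.full(k, 1.0 / k),
                              np.zeros(n - k - i)])
              for i in range(n - k + 1)])
b = A @ true  # the blurred observations

# 6 equations, 8 unknowns: underdetermined. Pin the two border pixels
# as ground truth, mirroring the remark about pixels entering/leaving
# near the borders, which makes the system uniquely solvable here.
A_full = np.vstack([A, np.eye(n)[[0, -1]]])
b_full = np.concatenate([b, true[[0, -1]]])

recovered, *_ = np.linalg.lstsq(A_full, b_full, rcond=None)
```

With real footage the window weights vary per pixel and per frame, and the least-squares solve would be done sparsely, but the structure of the system is the same.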


There are some (fairly old) papers that might contain some useful ideas for you:

- http://www.eyetap.org/papers/docs/mann94virtual.pdf
- http://wearcam.org/orbits/index.html

I seem to recall that there used to be a video showing this approach in action. As input it took a video panning across a shelf full of books where the resolution was so low that the titles were illegible. And as output it produced a video with higher resolution and all the titles easily readable. Unfortunately I can't find that video any longer.


Yes, it all boils down to point spread functions. In the mosaic case, the PSF varies locally (per pixel) and temporally (in different video frames). The paper you link similarly details how they figure out the PSF. You can theoretically also do the entire thing without knowing the PSF, which is called blind deconvolution: https://en.wikipedia.org/wiki/Blind_deconvolution
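For contrast with the blind case, here is a minimal non-blind sketch: Wiener deconvolution in 1-D where the PSF is known exactly. Everything below (the box "scene", the 5-tap box PSF, the regularization constant K) is an assumed illustration, not taken from the linked paper.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.zeros(64)
signal[20:30] = 1.0              # a simple box "scene"

# Known PSF: a 5-tap box blur, zero-padded to the signal length.
psf = np.zeros(64)
psf[:5] = 1.0 / 5.0

# Blur in the frequency domain (circular convolution) plus mild noise.
H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
blurred += rng.normal(0, 1e-3, 64)

# Wiener filter: G = conj(H) / (|H|^2 + K). K trades sharpness
# against noise amplification where |H| is small.
K = 1e-4
G = np.conj(H) / (np.abs(H) ** 2 + K)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * G))
```

Blind deconvolution has to estimate H and the scene jointly, which is a much harder, non-convex problem; the mosaic case is harder still because, as noted above, the PSF varies per pixel and per frame.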



