Hacker News

You could still use some kind of adaptive Huffman coding. Current compression schemes embed a dictionary in the file that maps common strings to their compressed representations. Google proposed SDCH a few years ago, which shared a common dictionary across web pages. There's no reason we couldn't be a bit more deterministic and share a much larger latent representation of "human visual comprehension" or whatever to do the same. It doesn't need to be stochastic once generated.
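The shared-dictionary idea is easy to demonstrate with zlib's preset-dictionary support: both sides agree on a dictionary out of band (as SDCH did for web pages), and the compressor can then reference it instead of emitting common substrings as literals. A minimal sketch, with a made-up dictionary and page for illustration:

```python
import zlib

# A shared "dictionary" of strings common to many web pages.
# Both sides must agree on it out of band (the SDCH idea).
shared_dict = b"<html><head><title></title></head><body></body></html>"

page = b"<html><head><title>Example</title></head><body>Hello</body></html>"

# Compress with the preset dictionary...
c = zlib.compressobj(zdict=shared_dict)
compressed = c.compress(page) + c.flush()

# ...and without it, for comparison.
plain = zlib.compress(page)

# Decompression needs the same dictionary.
d = zlib.decompressobj(zdict=shared_dict)
assert d.decompress(compressed) == page

print(len(compressed), len(plain))  # the dictionary version is smaller here
```

The win comes entirely from the dictionary being shared in advance rather than embedded in each payload, which is the same trade the comment proposes at a much larger scale.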

