
Splitting storage from retrieval is a powerful abstraction. You can then build retrieval indexes on whatever property you want: pay the O(N) indexing cost once and amortize it over many queries.
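
A minimal sketch of what that split looks like; all names here (PhotoStore, build_index, the sample metadata) are hypothetical, just to show the dumb store plus one-pass index idea:

    from collections import defaultdict

    class PhotoStore:
        """Dumb storage: id -> (raw bytes, metadata). Knows nothing about retrieval."""
        def __init__(self):
            self._blobs = {}

        def put(self, photo_id: str, data: bytes, meta: dict):
            self._blobs[photo_id] = (data, meta)

        def items(self):
            return self._blobs.items()

    def build_index(store: PhotoStore, key: str) -> dict:
        """One O(N) pass over the store; later lookups on `key` are ~O(1)."""
        index = defaultdict(list)
        for photo_id, (_, meta) in store.items():
            index[meta.get(key)].append(photo_id)
        return index

    store = PhotoStore()
    store.put("img001", b"...", {"camera": "Pixel 8", "year": 2024})
    store.put("img002", b"...", {"camera": "iPhone 15", "year": 2024})

    by_camera = build_index(store, "camera")   # pay O(N) once
    print(by_camera["Pixel 8"])                # cheap to query many times

You can build as many such indexes as you have properties worth querying, without the storage layer ever changing.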

Concretely, you could search by metadata (timestamp, geotag, which camera device, etc.) or by content (all photos of Joe, photos with the Eiffel Tower in the background, boating at dusk...). For the latter, you just need to run your corpus through a vision-language model (VLM) and store the resulting embeddings. This is not outlandish, by the way; several photo apps already offer this kind of search if you look around.
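
Rough sketch of the content side, assuming the CLIP checkpoint exposed through sentence-transformers ("clip-ViT-B-32"); the photo file names and query are placeholders:

    from PIL import Image
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("clip-ViT-B-32")

    # Embed every photo once with the VLM -- this is the O(N) indexing pass.
    photo_paths = ["eiffel_tower.jpg", "boating_at_dusk.jpg", "joe_birthday.jpg"]
    photo_embs = model.encode([Image.open(p) for p in photo_paths],
                              convert_to_tensor=True)

    # Free-text query lands in the same embedding space; rank by cosine similarity.
    query_emb = model.encode("boating at dusk", convert_to_tensor=True)
    scores = util.cos_sim(query_emb, photo_embs)[0]

    best = scores.argmax().item()
    print(photo_paths[best], float(scores[best]))

At scale you'd drop the embeddings into an approximate-nearest-neighbour index instead of scoring every photo per query, but the shape of the pipeline is the same.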


