Hacker News

You're missing that this is not designed as a tool for photographers, but rather in a collaboration with Mitsubishi aimed at better situational awareness for vehicle operators. The headline doesn't mention this, but it's impossible to miss in the article.


In the context of the GP, I think the point still stands, which is roughly: “the lens matters a lot”.

Without knowing more about the optics, it’s hard to know how much of a role the sensor/ISP play in the innovation, but those are well established and widely capable across both photographic and industrial use cases.

Very curious to eventually learn more about this and whether it might eventually find its way into traditional cameras.


Sure, I guess. But the whole discussion is so void of subject matter knowledge that it's like trying to argue the pros and cons of different bowling balls in terms of how well they pair with Brie.

Nikon is an optics company that's also made cameras for a long time, and then very nearly didn't; before the Z mirrorless line took off, the company's future as a camera manufacturer was seriously in doubt. But even a Nikon that had stopped making cameras entirely after the D780 would still be an optics company. There is no serious reason to assume the necessity of some sensor/ISP "special sauce" behind the novel optics announced here to make the system work. And considering where Nikon's sensors actually come from, if there were more than novel optics involved here, I'd expect to see Sony also mentioned in the partnership.

Of course that's not to say photographic art can't be made with commercial or industrial equipment; film hipsters notwithstanding, pictorialism in the digital era has never been more lively. But I would expect this to fall much in that same genre of "check out this wild shit I did with a junkyard/eBay/security system installer buddy find", rather than anything you'd expect to see on the other end of a lens barrel from a Z-mount flange.


I couldn't tell from the article: is it for human eyeballs or for computers?

If it's for eyeballs it would be nifty to know what kind of image displays both kinds of information at once.

If it's for computers, what is the advantage over two cameras right next to each other? Less hardware? More accurate image recognition? Something else?


These are questions for their CES presentation next week, not for me.



