
Afaik not with standard RGB displays. All widespread technology for digital color reproduction is based on RGB primaries, i.e. a 3D space of colors, or rather a 3D submanifold of spectra inside the effectively infinite-dimensional space of spectra. It is feasible to test for color-deficient vision (deficiency or absence of one or more cone types, reducing color perception to a 2D or 1D space) because it is easy to sample the 3D RGB space and behaviorally detect when colors that are distinct in 3D are conflated, i.e. when they project to the same location in some viewer's 2D or 1D "color" sub-submanifold.
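A minimal sketch of that idea, under toy assumptions (Gaussian cone and primary curves with made-up peaks, not measured cone fundamentals): construct two RGB mixtures whose S and M responses match exactly but whose L responses differ, so a protanope (missing the L cone) conflates them while a trichromat distinguishes them.

```python
import numpy as np

wl = np.arange(380.0, 721.0)  # wavelengths in nm

def bump(peak, width=30.0):
    """Toy Gaussian sensitivity curve; real cone fundamentals are asymmetric."""
    return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

# Toy cone fundamentals (S, M, L) and narrowband R, G, B display primaries.
cones = np.stack([bump(420), bump(535), bump(565)])                  # 3 x n
primaries = np.stack([bump(620, 15), bump(540, 15), bump(450, 15)])  # 3 x n

A = cones @ primaries.T  # A[i, j] = response of cone i to primary j

w1 = np.array([0.5, 0.5, 0.5])  # first RGB mixture
r2 = 0.8                        # force a visibly different red level
# Solve for green/blue so that the S and M responses match w1's exactly.
S_M = A[[0, 1]]                 # rows for the S and M cones
gb = np.linalg.solve(S_M[:, 1:], S_M @ w1 - S_M[:, 0] * r2)
w2 = np.array([r2, gb[0], gb[1]])

resp1, resp2 = A @ w1, A @ w2
# S and M agree, L differs: w1 and w2 land on the same point in the 2D
# sub-submanifold of a protanope, but on different points in 3D.
```

This is exactly the "project and compare" trick the comment describes: the stimuli differ along the L axis, which the deficient observer cannot see.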

But we'd need a convenient way to sample a 4D space of colors (perhaps with 4 monochromatic sources?), and thereby generate different spectra that normal trichromats see as the same color (called "metamers"), but that tetrachromats could recognize as distinct. And how the 4D space is sampled would have to be pretty carefully optimized to generate distinct spectra that have the same response in the M (medium, or "green") and L (long, or "red") cones (which are actually quite similar already!) while also generating different responses in the putative tetrachromat's additional cone between M and L. And that isn't possible with any conventional display device.
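A sketch of that construction under toy assumptions (Gaussian cone curves, a made-up Q cone peaking at 550 nm, four monochromatic primaries at wavelengths I picked arbitrarily): the S/M/L rows of the 4x4 response matrix have a one-dimensional null space, and moving a mixture along that null direction leaves all trichromat responses fixed while changing only the hypothetical Q response.

```python
import numpy as np

def bump(wavelength, peak, width=30.0):
    """Toy Gaussian cone sensitivity; real cone fundamentals differ."""
    return np.exp(-((wavelength - peak) ** 2) / (2 * width ** 2))

peaks = [420.0, 535.0, 565.0, 550.0]  # S, M, L, and a hypothetical Q cone
primary_wl = np.array([440.0, 520.0, 560.0, 610.0])  # 4 monochromatic sources

# B[i, j] = response of cone i to monochromatic primary j.
B = np.array([[bump(w, p) for w in primary_wl] for p in peaks])
B_lms, B_q = B[:3], B[3]

# Null direction of the 3x4 LMS block: fix the last component to 1,
# solve the 3x3 system for the rest.
d = np.append(np.linalg.solve(B_lms[:, :3], -B_lms[:, 3]), 1.0)

w1 = np.ones(4)
w2 = w1 + 0.5 * d  # step small enough that all intensities stay >= 0

# Identical S, M, L responses; different Q responses: a metamer pair for
# trichromats that the putative tetrachromat could tell apart.
lms1, lms2 = B_lms @ w1, B_lms @ w2
q1, q2 = B_q @ w1, B_q @ w2
```

In practice the primary wavelengths would need to be chosen to maximize the Q difference relative to noise, since Q sits in the crowded region between M and L.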



On the contrary, RGB displays should be excellent tools to determine whether somebody's vision differs from normal. Ask the person to adjust the color settings so that real-world footage on the display looks like how they experience the real world. Then you will see if there's any divergence in color perception, since display images are direct light while real-world vision is reflected light.


Whether via direct or reflected light, spectra in a trichromat's eyes are still projected down to a 3D space (the responses of the S, M, L cones). What you describe would still require a standardized and reliable way to probe an extra degree of freedom in spectra that conventional RGB displays can't access. The paper shared by varunneal explains it better than I can.


If we assume that digital video/film recording compresses the spectrum to images composed of three colors, somewhere in the process between the light hitting the camera and the light being emitted from a display to the viewer, then any tetrachromatic person will notice a difference between the images and the real world.


Sure, but noticing a difference between the images and the real world happens with us trichromats too, e.g. colors online don't match those in the real world if the illuminant isn't correctly controlled. The intrinsic difficulty of color reproduction is not the same as detecting tetrachromacy. The nuance here is in generating stimuli that reliably and specifically detect the difference between projecting from an infinite-D space of spectra down to 3D (via metamers like the "keef" and "litz" described in the paper linked above), versus projecting down to 4D.


The difference between display and real world will be at most slight to a trichromat, while it would be extraordinarily obvious to a tetrachromat.

It's not very uncommon for people to be colour blind, i.e. dichromats. If media on screens were dichromatic while the world around me is trichromatic, I would certainly notice at once.


I suggest trying to quantify "extraordinarily", using the actual spectral response curve for the tetrachromat's fourth cone, called "Q" in the paper shared by varunneal. Most people casually equate the short (S), medium (M), and long (L) cones with blue, green, and red, with the idea that these are all as different as can be, but the M and L cones are very similar to each other, compared to S. The L, M, S curves are independent but far from orthogonal in the way you may be thinking as you say "extraordinarily". The Q curve is just another wide bump, with a peak in between that of M and L, so again, very far from being orthogonal. Whatever 4th dimension of color perception is accessed by the Q curve, it is a relatively cramped dimension, so reliably detecting perception along it requires some carefully designed stimuli.
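That "far from orthogonal" claim can be put in numbers with the same toy Gaussian curves as above (made-up peaks and widths, not the real fundamentals): the cosine similarity between the M and L curves is already high, and a hypothetical Q curve peaking at 550 nm overlaps both of them even more strongly.

```python
import numpy as np

wl = np.arange(380.0, 721.0)  # wavelengths in nm

def bump(peak, width=30.0):
    """Toy Gaussian sensitivity curve with a made-up width."""
    return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

def cos_sim(a, b):
    """Cosine of the angle between two sensitivity curves (1.0 = identical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

S, M, L = bump(420), bump(535), bump(565)
Q = bump(550)  # hypothetical 4th cone peaking between M and L

overlap_ML = cos_sim(M, L)  # high: M and L are close neighbors
overlap_SM = cos_sim(S, M)  # low: S is well separated from M
overlap_QM = cos_sim(Q, M)  # even higher: Q is wedged between M and L
overlap_QL = cos_sim(Q, L)
```

With these toy curves, Q is closer to each of M and L than M and L are to each other, which is why any signal along the fourth dimension is so cramped and why the stimuli have to be designed so carefully.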


(in the awesome paper shared by varunneal, the metamers are named "keef" and "litz")



