
Could you explain what you consider to be the difference between emitted and reflected light? I'm asking because in the global illumination models I've worked with that also handle emissive materials, there's literally no difference between a photon emitted by a surface and one that arrived from elsewhere and bounced off it. So I'm trying to understand whether there's some fundamental property at play here (e.g. polarization) or an LED color reproduction issue that CG models don't capture. Because as soon as you have a ray of light with a certain intensity and mix of wavelengths, it really shouldn't matter how it was produced, so what is the quantitative difference in what is produced?


One of the more obvious differences is that paper is very close to a Lambertian reflector while LCD pixels are more directional. Hence the LCD viewing angle problem, though that gap is closing.

Apart from that, I think it mostly comes down to the fact that an LCD pixel doesn't actually change its reflected colour when it starts emitting light of a different colour. This means that incident light falling upon the pixel and being reflected will necessarily become noise -- it won't contribute to the image. For instance, if the colour of the LCD surface is some kind of grey and the LCD is emitting green, the resulting colour will be

    incident * r_reflected + emitted
Where `incident` is the incident light from the environment, `r_reflected` is some factor (< 1) representing the amount of grey component reflected from the incident light and `emitted` is the emitted green light. The result is some kind of mix of grey + green.

On the other hand, if you had a piece of green-coloured paper, there would be no emitted component, so it becomes just

    incident * r_reflected
Where `r_reflected` now represents the proportion of green light that is reflected. The end result is a more pure green.
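
A minimal sketch of the two cases (the reflectance and emission numbers here are made up for illustration; colours are linear RGB in [0, 1]):

    # Illustrative values only -- not from any real display.
    def perceived(incident, r_reflected, emitted=(0.0, 0.0, 0.0)):
        # Light reaching the eye: reflected ambient plus any emitted light.
        return tuple(i * r + e for i, r, e in zip(incident, r_reflected, emitted))

    ambient = (0.4, 0.4, 0.4)  # neutral room light

    # LCD pixel: greyish surface reflectance, emitting green
    lcd = perceived(ambient, (0.3, 0.3, 0.3), emitted=(0.0, 0.6, 0.0))
    # -> (0.12, 0.72, 0.12): green diluted by the grey reflection

    # Green paper: reflects mostly green, emits nothing
    paper = perceived(ambient, (0.1, 0.8, 0.1))
    # -> (0.04, 0.32, 0.04): dimmer, but a spectrally purer green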


Ok, that makes sense. But does any of that have any bearing on eye strain? Because it sounds mainly like a color reproduction issue, which seems pretty orthogonal to the whole 'black text on white background vs white text on black background' debate.


Emissive displays don't react to incident light in the same way as paper, even if they try. And they've been trying for a long time -- my grandmother had a TV from the 1970s with an ambient light sensor that would match the ambient color temperature.


They are always set too bright. I always wondered why they are simply not adaptive: you turn it on and set the brightness you find comfortable, and it then adjusts the brightness as the ambient light changes by extrapolating from your chosen point.

Better systems could let you calibrate multiple points, so they would interpolate instead of extrapolate, but even a single-point calibration would be amazing.
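
A rough sketch of that idea (the helper name, lux values, and the power-law exponent are all assumptions for illustration, not any real OS API):

    # Hypothetical helper; sensor readings and scales are made up.
    def brightness_from_ambient(ambient_lux, calibration, gamma=0.5):
        # `calibration` is a list of user-chosen (lux, brightness) points.
        calibration = sorted(calibration)
        if len(calibration) == 1:
            # Single point: extrapolate along a simple power law through it.
            lux0, b0 = calibration[0]
            b = b0 * (max(ambient_lux, 1.0) / max(lux0, 1.0)) ** gamma
        else:
            # Multiple points: piecewise-linear interpolation, clamped at the ends.
            xs, ys = zip(*calibration)
            if ambient_lux <= xs[0]:
                b = ys[0]
            elif ambient_lux >= xs[-1]:
                b = ys[-1]
            else:
                i = next(i for i, x in enumerate(xs) if x >= ambient_lux)
                t = (ambient_lux - xs[i - 1]) / (xs[i] - xs[i - 1])
                b = ys[i - 1] + t * (ys[i] - ys[i - 1])
        return min(max(b, 0.0), 1.0)

    # User picked 40% brightness at 300 lux:
    brightness_from_ambient(1200, [(300, 0.4)])  # -> 0.8, brighter in daylight
    brightness_from_ambient(30, [(300, 0.4)])    # -> ~0.13, dimmer at night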

FWIW, I've tried dark mode and have to enlarge font sizes to read stuff, compared to using a very dim setting on my screens with light mode. I felt like I was in the minority, so I didn't bother exploring further.

But I did realise and investigate the blinking issue (I've got dry eyes, made worse by contacts and lots of screen time). I even have a few experiments in mind (like the typing-break apps of old, I want my computer to trigger my blinking without adverse effects; whether it's by blurring content for a couple of ms at a time to trigger an eye refocus and blink, or something else, I still need to test).


"FWIW, I've tried dark mode and have to enlarge font sizes to read stuff compared to using very dim setting on my screens and light mode."

I think the article mentions this indirectly, but then attributes it to something else. Your pupils adjust to the amount of light entering them, and that accounts for a lot (most?) of the effect. Like a camera, a smaller aperture is less sensitive to focal problems.

So, unless you perfectly compensated for the amount of light coming from your display, it's likely the brighter backgrounds were dumping more light into your eyes, constricting your pupils and making things clearer.
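
A back-of-envelope illustration of that camera analogy, using the standard thin-lens approximation (the pupil sizes and focus error here are illustrative, not clinical data): the angular diameter of the retinal blur circle is roughly pupil diameter times defocus in dioptres.

    # Thin-lens approximation: angular blur ~= pupil diameter * defocus.
    def blur_angle_mrad(pupil_mm, defocus_dioptres):
        # mm -> m for the pupil, then rad -> mrad for readability.
        return (pupil_mm * 1e-3) * defocus_dioptres * 1e3

    # Same 0.5-dioptre focus error, bright vs dim viewing:
    blur_angle_mrad(2.0, 0.5)  # -> 1.0 mrad, constricted pupil (bright page)
    blur_angle_mrad(6.0, 0.5)  # -> 3.0 mrad, dilated pupil (dark page)

Three times the pupil diameter means three times the blur for the same focus error, which matches the smaller-hole-is-sharper intuition.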


They're getting better: https://www.samsung.com/us/televisions-home-theater/tvs/the-...

The first time I saw one of these in the flesh, it was striking. I'd be surprised if this kind of context-awareness in screens didn't work its way into most devices within the next decade.


Could you explain why having the screen react more to incident light would be better?


It looks more like any other object within an environment, reducing eyestrain and improving the illusion of reality.



