The lights move with the music. The strobing in the first song can be too much, sorry about that; the musicians liked it that way ;).
If it is uncomfortable, you can press the F key to freeze the light changes (I implemented that feature exactly for this).
The second and third songs are much gentler, so you may want to try those (keys 1, 2, 3 change the song; keys 4, 5, 6 change the head for the current song).
Tested in Firefox (41), and Firefox Nightly (44.0a1) under up-to-date Linux with Skia.
This breaks for me when the viewport is taller than it is wide, which is how I normally have it: the head is not drawn, and only the bits that overlay the eyes are rendered.
I'm also getting some warnings in the console; I'm not sure whether they come from the site or from the browser's implementation:
PROGRAM_INFO_LOG: warning: sampler arrays indexed with non-constant expressions is forbidden in GLSL 110
warning: Variable sampler array index unsupported.
This feature of the language was removed in GLSL 1.20 and is unlikely to be supported for 1.10 in Mesa.
warning: Variable sampler array index unsupported.
This feature of the language was removed in GLSL 1.20 and is unlikely to be supported for 1.10 in Mesa.
[···] repeats
Those warnings mean your GPU driver doesn't support a feature that's critical for the shaders used there. Sorry about that :(
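For context (a hedged sketch, not the demo's actual shader code): GLSL 1.10 only allows sampler arrays to be indexed with constant expressions, so a dynamic index like the one the warning complains about has to be unrolled into constant-index branches, e.g.:

```glsl
// Sketch only -- the names here are hypothetical, not from the demo's shaders.
uniform sampler2D lights[4];

vec4 sampleLight(int i, vec2 uv) {
    // GLSL 1.10: 'lights[i]' with a variable 'i' is what triggers the
    // "sampler arrays indexed with non-constant expressions" warning,
    // so we select with constant indices instead.
    if (i == 0) return texture2D(lights[0], uv);
    if (i == 1) return texture2D(lights[1], uv);
    if (i == 2) return texture2D(lights[2], uv);
    return texture2D(lights[3], uv);
}
```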
Which GPU do you have?
The last time I encountered a similar issue was on my old ThinkPad with an ancient ATI Mobility Radeon 3650 GPU (on Windows, but only with the OpenGL rendering backend; the DirectX ANGLE backend was fine, so it wasn't really a hardware issue, just drivers).
-----
BTW, to see how it looks when it works properly, here are video captures:
I currently have a single-die CPU+GPU, the AMD A10-7850K, as I'm having some issues with the PCI-E ports that I haven't yet had time to troubleshoot. Good to know where the issues lie at least, thanks for the explanations!
Regarding working properly: it does work as in the videos, as long as the aspect ratio is "correct".
We are very sensitive to even the smallest details of human eyes, and there are some aspects that are not modeled in this demo.
Probably the most important is that the head models are completely static: they were "frozen in time" at the moment of the 3D scan, in exactly one position, one facial expression, one eye focus.
Each of the models was captured with a slightly different eye orientation / position / focus. The black one especially doesn't look "at you"; he was looking "past you" when the capture was made.
The demo only changes the orientation of the eyeballs; the rest of the head always stays the same, as the mesh is static. In the real world, when the eyes move, the eyelids and the surrounding skin move too, and we use those details as an additional source of information when guessing where a person is looking.
Another missing aspect is physiologically correct cross-eyedness due to focus distance. When you look at the same point from different distances, the eyes converge or diverge, but not in a simple geometric way; there are physiological constraints.
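The purely geometric part of that convergence can be sketched like this (a hypothetical illustration, not the demo's code; the 63 mm interpupillary distance is an assumed average, and real eyes deviate from this geometry for the physiological reasons above):

```javascript
// Inward rotation of each eye when fixating a point straight ahead
// at a given distance -- simple triangle geometry only.
function vergenceAngleDeg(focusDistanceMm, ipdMm = 63) {
  // Each eye turns inward by atan((ipd / 2) / distance).
  return Math.atan((ipdMm / 2) / focusDistanceMm) * 180 / Math.PI;
}

// Closer focus means stronger convergence:
// vergenceAngleDeg(300)  ~ 6.0 degrees
// vergenceAngleDeg(2000) ~ 0.9 degrees
```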
For the best "uncanny valley" effect, try fullscreen mode (press G) and then try different zoom levels (press Z) to find the one that looks closest to a real human head size on your monitor.
If your GPU is fast, you can also try ultra mode (it should be autodetected, but you can force it by pressing U if detection fails).