
Unfortunately, at this point some of the slow layers are in hardware, or immediately adjacent to it. For example, AFAIR[0], the time between your keyboard registering a press and the corresponding event reaching the application is already counted in milliseconds and can become perceptible. Even assuming the app processes it instantly, that's just half of the I/O loop. The other half, i.e. changing something on the display, involves digging through GUI abstraction layers, the compositor, possibly waiting on the GPU a bit, and then... these days, displays themselves tend to introduce single-digit-millisecond lags due to the time it takes to flip a pixel, plus the buffering added to mask it.

These are things we're unlikely to get back (and by themselves they already make typical PC stacks unable to deliver a smooth hand-writing experience - Microsoft Research had a good demo some time ago showing that you need to get the round-trip between touch event and display update down to single-digit milliseconds for it to feel like manipulating a physical thing, vs. having it attached to your finger on a rubber band). Win2K on hardware from that era is going to remain snappier than modern computers for that reason alone. But that only underscores the need to make the userspace software part leaner.
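
(To put rough numbers on that pipeline - a back-of-the-envelope tally, where every per-stage figure is a ballpark assumption of mine rather than a measurement:)

  # Illustrative only: a worst-case latency budget for one keypress-to-pixel
  # round trip. Every figure is a ballpark assumption, not a measurement;
  # see the blog post in [0] for actual end-to-end numbers.
  stages_ms = {
      "keyboard matrix scan / debounce":      5.0,   # firmware scan interval
      "USB polling at 125 Hz (worst case)":   8.0,   # 1 / 125 Hz = 8 ms
      "OS input stack + event delivery":      1.0,
      "application processing":               1.0,   # assume a fast app
      "compositor waiting on 60 Hz vsync":   16.7,   # one full frame, worst case
      "display buffering + pixel response":  10.0,
  }

  for stage, ms in stages_ms.items():
      print(f"{stage:<38} {ms:5.1f} ms")
  print(f"{'total (worst case)':<38} {sum(stages_ms.values()):5.1f} ms")

Even if the application itself is effectively instant, the fixed costs around it already add up to multiple frames.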

--

[0] - Source: I'll need to find that blog that's regularly on HN, whose author did measurements on this.



Perhaps it's this one?

http://danluu.com/input-lag/


Yes, this one exactly, thank you!



