Drawing Lines is Hard (2015) (mattdesl.svbtle.com)
46 points by Doches on Sept 19, 2018 | hide | past | favorite | 10 comments


Does there not exist some ridiculously parallel equivalent of Bresenham or Wu? I imagine with the right data structure you should be able to do pixel/line intersection tests in the pixel shader?


A pixel shader cannot choose where to draw, it is simply given a pixel and asked what colour it should be.

In order to reduce the overdraw to something manageable, you need to generate geometry (via the CPU, geometry shader or vertex shader) that covers a relatively small superset of the pixels you actually want to draw.
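That CPU-side geometry step can be sketched as expanding each segment into a quad that covers every pixel within half the stroke width. This is a minimal illustration, not any particular library's API; the function name and vertex ordering are my own.

```python
import math

def line_quad(p0, p1, width):
    """Expand a 2D segment into a 4-vertex quad covering every pixel
    within width/2 of the line -- the 'small superset' the fragment
    shader then refines. Illustrative sketch, not a real API."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    # unit normal to the segment, scaled to half the stroke width
    nx = -dy / length * width / 2
    ny = dx / length * width / 2
    return [
        (x0 + nx, y0 + ny),
        (x0 - nx, y0 - ny),
        (x1 - nx, y1 - ny),
        (x1 + nx, y1 + ny),
    ]
```

In a real renderer these four vertices would be emitted as two triangles; joins and caps between consecutive segments are where most of the article's difficulty actually lives.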


> Does there not exist some ridiculously parallel equivalent of Bresenham or Wu?

Sure—it's trivial. Draw some bounding geometry and calculate the Euclidean distance to the line in your fragment shader. Shade accordingly.

The trick is finding that minimal bounding geometry.
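The per-fragment distance test the parent describes can be sketched in plain Python rather than shader code (function names and the one-pixel falloff are illustrative assumptions):

```python
import math

def dist_to_segment(px, py, ax, ay, bx, by):
    """Euclidean distance from pixel (px, py) to segment A-B --
    the per-fragment test run inside the bounding geometry."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    # project P onto AB, clamped so endpoints are handled correctly
    t = max(0.0, min(1.0, (apx * abx + apy * aby) / (abx * abx + aby * aby)))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def shade(px, py, a, b, half_width):
    """Coverage: 1.0 inside the stroke, fading to 0.0 over ~one pixel."""
    d = dist_to_segment(px, py, *a, *b)
    return max(0.0, min(1.0, half_width + 0.5 - d))
```

In GLSL this is a few lines using `dot`, `clamp`, and `length`; the anti-aliased edge falls out of the distance falloff for free.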



It's easy to extend Bresenham's algorithm (and probably Wu's too) to work in a 3D voxel space. But I don't know if it would solve the problems in TFA or whether the speed penalty of working with voxels would be worth it. Might be an interesting experiment.
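The 3D extension mentioned here can be sketched by stepping along the dominant axis with one error term per secondary axis; only the x-driven case is shown, and the function name is my own.

```python
def bresenham3d(p0, p1):
    """Integer 3D Bresenham: step along the dominant axis, carrying
    one error term for each of the other two axes. Sketch only --
    the symmetric y- and z-driven cases are elided."""
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 > x else -1
    sy = 1 if y1 > y else -1
    sz = 1 if z1 > z else -1
    voxels = [(x, y, z)]
    if dx >= dy and dx >= dz:          # x is the driving axis
        ey, ez = 2 * dy - dx, 2 * dz - dx
        for _ in range(dx):
            x += sx
            if ey > 0:
                y += sy
                ey -= 2 * dx
            if ez > 0:
                z += sz
                ez -= 2 * dx
            ey += 2 * dy
            ez += 2 * dz
            voxels.append((x, y, z))
    # (y-driven and z-driven cases omitted for brevity)
    return voxels
```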


The basic problem is that Bresenham and Wu are both sequential algorithms. While you can overcome that by implementing Bresenham in your rasterizer hardware (which I think is what many GPUs do?), you're out of luck if you're implementing it in software. You need an approach that uses the existing rasterization hardware.


If you want to, there are at least a couple of flavors of parallel Bresenham; it's not particularly tricky on the face of it. You do have to be super careful with your arithmetic so that tasks end up mapping to unique pixels. And yeah, you have to be in an environment where task-creation overhead is effectively zero for it to make sense.
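The "unique pixels" point is the crux: instead of iterating an error term, each task computes its pixel directly from the endpoints with identical integer rounding, so no two tasks collide. A minimal sketch, assuming an x-major line (|dx| ≥ dy, dx > 0, dy ≥ 0); the function name is illustrative:

```python
def pixel_at(i, x0, y0, x1, y1):
    """Task i computes its pixel independently of all other tasks.
    Exact integer rounding of y0 + i*dy/dx means every task agrees on
    the mapping and each index lands on a unique pixel.
    Sketch only: assumes an x-major line with dx > 0, dy >= 0."""
    dx, dy = x1 - x0, y1 - y0
    y = y0 + (2 * i * dy + dx) // (2 * dx)   # round-to-nearest, no drift
    return (x0 + i, y)

# every "task" i could run concurrently; together they cover the line
line = [pixel_at(i, 0, 0, 8, 3) for i in range(9)]
```

On a GPU each `i` would be a thread; there is no sequential dependency because nothing is accumulated across iterations.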


Yeah, to be more specific the problem with doing Bresenham on GPU is that, in the traditional GPU pipeline, the main way to create tasks is to use the fixed-function triangle rasterization hardware.

I guess one could technically do something like figuring out the number of pixels spanned on CPU and then rasterizing point primitives. In your vertex shader you'd have each 1x1 pixel point figure out, in parallel, where it's supposed to go. This is of course bound to be horrendously slow and would never be practical :)


It's hard to draw 2D lines using a 3D API? Doesn't sound like a revelation.


OpenGL is a 2D API, bring your own projections.



