Project Treadmill 2 (LIDAR 3)

August 20, 2020

I managed to greatly increase performance by removing the raycasting phase and replacing it with a series of cameras. These cameras apply a special shader to everything they render, writing the relative x, y, and z coordinates of each pixel in world space to that pixel's r, g, and b values, and using the alpha channel as a depth map. This lets me easily reconstruct the point cloud without ever sending data back to the CPU.
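The shader itself runs on the GPU, but the round trip it performs can be sketched numerically. This is a hypothetical Python sketch, not the actual shader code: it assumes relative coordinates are mapped into the [0, 1] color range around the camera up to some maximum range, with normalized distance stored in alpha, and shows that the original points can be recovered exactly from the RGBA values.

```python
import numpy as np

def encode(points, cam_pos, max_range):
    """Mimic the shader: pack camera-relative XYZ into RGB, distance into A."""
    rel = points - cam_pos
    rgb = rel / max_range * 0.5 + 0.5          # map [-max_range, max_range] -> [0, 1]
    depth = np.linalg.norm(rel, axis=-1) / max_range  # normalized distance for alpha
    return np.concatenate([rgb, depth[..., None]], axis=-1)

def decode(rgba, cam_pos, max_range):
    """Reconstruct world-space points from the encoded RGBA texture."""
    rel = (rgba[..., :3] - 0.5) * 2.0 * max_range
    return cam_pos + rel

points = np.array([[1.0, 2.0, 3.0], [-4.0, 0.5, 2.0]])
cam = np.array([0.0, 1.0, 0.0])
rgba = encode(points, cam, max_range=10.0)
recon = decode(rgba, cam, max_range=10.0)
assert np.allclose(recon, points)
```

The encode/decode pair is lossless up to floating-point precision, which is why the point cloud can be rebuilt purely from the rendered texture.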

Color rendering showing relative X, Y, and Z

Alpha channel showing distance from the camera

To change the horizontal and vertical density of the points, I just have to change the resolution of the render texture.
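Assuming each pixel of the render texture becomes one point (which is how I read the setup above), the angular spacing between points falls directly out of the camera's field of view divided by the texture resolution. A small sketch:

```python
def angular_spacing(fov_degrees, pixels):
    """Approximate angle between adjacent points for a camera covering
    fov_degrees, rendered to a texture that is `pixels` wide (or tall)."""
    return fov_degrees / pixels

# Doubling the horizontal resolution halves the horizontal spacing.
assert angular_spacing(90, 256) == 2 * angular_spacing(90, 512)
```

So point density scales linearly with texture resolution in each axis independently, which is why one resolution change is all that's needed.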

You can see in the video above that Unity's texture filtering initially caused stray points around the edges of objects, but that was easily solved by disabling all filtering and anti-aliasing on the render textures. This approach works beautifully for the most part, but a few problems remain. Most prominent is that points seem to render at different brightnesses depending on their position on screen, and the borders between cameras create obvious radial lines out from the center.
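The stray edge points make sense once you remember the texture stores positions, not colors: bilinear filtering averages neighboring texels, and averaging two encoded positions from different surfaces invents a phantom point floating between them. A minimal sketch of that failure mode (hypothetical encoded values, not taken from the project):

```python
import numpy as np

# Two adjacent texels encoding positions on different objects.
near = np.array([0.2, 0.2, 0.2])   # encoded position on a near surface
far = np.array([0.9, 0.9, 0.9])    # encoded position on a far surface

# Bilinear filtering sampled between the texels blends the encodings,
# decoding to a point that lies on neither surface.
blended = 0.5 * near + 0.5 * far

# Nearest-neighbor (point) filtering always returns one real encoding.
nearest = near

assert not any(np.allclose(blended, p) for p in (near, far))
assert np.allclose(nearest, near)
```

This is why disabling filtering and anti-aliasing fixes it: any operation that mixes texel values corrupts the position data they carry.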

I initially thought those radial lines were caused by two cameras overlapping by one pixel, but all of my attempts to fix that have either been unsuccessful or have left gaps between the cameras.