In a YouTube video posted by AnonymousADAS Developer, you can see Tesla's point clouds, the output of a neural network that processes monocular camera footage from Tesla production firmware. The video shows what look like waves of lines, but then suddenly the images come into focus.
There are cars and roads, and the view shifts through different angles, most likely because the vehicle is in motion. It's like looking inside the mind of the car and seeing how that mind processes images.
What Are Point Clouds?
Dronegenuity described point clouds as essentially the simplest form of 3D models. They are collections of individual points plotted in 3D space.
The article noted that every single point contains several measurements, including its coordinates along the X, Y, and Z axes. Points may also carry additional data, such as a color value stored in RGB format or a luminance value that determines how bright the point is.
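To make that structure concrete, here is a minimal sketch in Python of what a single point and a small cloud might look like. The field names are illustrative, not taken from any particular point-cloud library:

```python
from dataclasses import dataclass

@dataclass
class Point:
    """One point in a point cloud: a 3D position plus optional extras."""
    x: float  # coordinate along the X axis
    y: float  # coordinate along the Y axis
    z: float  # coordinate along the Z axis
    rgb: tuple = (255, 255, 255)  # optional color value in RGB format
    luminance: float = 1.0        # optional brightness of the point (0.0 to 1.0)

# A point cloud is simply a collection of such points.
cloud = [
    Point(0.0, 0.0, 0.0),
    Point(1.5, 2.0, 0.5, rgb=(200, 30, 30), luminance=0.8),
]
```

Real scans contain millions of these points, which is why they can resolve into recognizable scenes when viewed from the right angle.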
Point clouds are created by scanning an object or structure. The scans are performed using either a laser scanner or photogrammetry, the latter of which is also used to create 3D models from drone photos.
The images in the video remind me of an art form I encountered many years ago called pointillism, a technique of painting with small, distinct dots of color applied in patterns to form an image. The same technique can be applied to drawing as well.