Coming from raw lidar point cloud data, what's the best way to render the shade?
(The use case I have in mind is to calculate and render the sun energy in a forest that falls on the ground in canopy gaps or penetrates through the canopy.)
I guess the standard way would be to rasterize the point cloud (e.g. create a digital surface model) and then use it within rayshader.
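The rasterization step itself can be sketched as simple max-binning: drop each point into a grid cell and keep the highest return per cell. This is a minimal illustration of the idea, not rayshader or lidR code (the function name and interface here are my own):

```python
# Minimal sketch: bin points into a grid and keep the highest return per
# cell, which is essentially a points-to-raster digital surface model.
import numpy as np

def points_to_dsm(xyz, cell_size):
    """Rasterize an (N, 3) point cloud into a DSM by taking the max z per cell."""
    xy_min = xyz[:, :2].min(axis=0)
    # Integer grid indices for each point
    idx = np.floor((xyz[:, :2] - xy_min) / cell_size).astype(int)
    ncols, nrows = idx.max(axis=0) + 1
    dsm = np.full((nrows, ncols), np.nan)
    for (col, row), z in zip(idx, xyz[:, 2]):
        if np.isnan(dsm[row, col]) or z > dsm[row, col]:
            dsm[row, col] = z
    return dsm

pts = np.array([[0.2, 0.3, 1.0],
                [0.8, 0.4, 5.0],   # same cell as the point above -> max wins
                [1.5, 0.2, 2.0]])
dsm = points_to_dsm(pts, cell_size=1.0)
print(dsm)  # [[5. 2.]]
```

The resulting height matrix is exactly the kind of input rayshader's shading functions expect.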
Another option I could imagine would be to create a mesh (or object?) and feed it to rayrender.
(Is this even possible, and if yes, what are the technical advantages/disadvantages?)
Disadvantages of methods 1 and 2 are certainly that only part of the data (e.g. the highest points) is used to create the surface, which is then "solid" and does not account for point density. If I understand it correctly, this surface then either permits no light transmission (rayshader) or a fixed amount (via material properties in rayrender).
Basically, the lidar point cloud is a direct measure of the transmission of light (in a certain direction), which is not used by methods 1 and 2.
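One way to use that density information directly is a Beer-Lambert-style attenuation: treat the number of returns in each vertical column as a proxy for canopy density and compute the transmitted fraction as exp(-k * n). This is a sketch of my own assumption (the extinction coefficient k and the vertical-only light direction are simplifications, and this is not functionality of either package):

```python
# Minimal sketch: attenuate (vertical) light by the number of lidar returns
# in each grid column, using a Beer-Lambert term T = exp(-k * n).
import numpy as np

def transmission_map(xyz, cell_size, k=0.5):
    """Fraction of vertical light reaching the ground per grid cell."""
    xy_min = xyz[:, :2].min(axis=0)
    idx = np.floor((xyz[:, :2] - xy_min) / cell_size).astype(int)
    ncols, nrows = idx.max(axis=0) + 1
    counts = np.zeros((nrows, ncols))
    np.add.at(counts, (idx[:, 1], idx[:, 0]), 1)  # returns per column
    return np.exp(-k * counts)

pts = np.array([[0.1, 0.1, 2.0],
                [0.2, 0.3, 5.0],   # second return in the same column
                [1.4, 0.2, 3.0]])
T = transmission_map(pts, cell_size=1.0, k=0.5)
# Dense column (2 returns) transmits exp(-1.0), sparse column exp(-0.5)
```

For a slanted sun direction, the counting would have to follow the ray through a voxel grid instead of a single vertical column, but the attenuation idea stays the same.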
Hence, I was also thinking about using every single point as an object of a certain size.
Would it be (computationally) possible to render a scene with a large number of points (e.g. as spheres) to compute the light transmission, and does it make sense?
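On the feasibility side, ray tracers that build a bounding volume hierarchy can cope with large numbers of sphere primitives, but thinning the cloud first keeps the primitive count bounded. A minimal sketch of voxel downsampling (one representative point per occupied voxel; function name is my own):

```python
# Minimal sketch: voxel-thin a point cloud before rendering each point as a
# sphere, keeping one representative point per occupied voxel.
import numpy as np

def voxel_thin(xyz, voxel_size):
    """Return one point per occupied voxel (the first one encountered)."""
    keys = np.floor(xyz / voxel_size).astype(int)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return xyz[np.sort(keep)]

# 100k random points in a 10 m cube, thinned to a 0.5 m voxel grid
pts = np.random.default_rng(0).uniform(0, 10, size=(100_000, 3))
thinned = voxel_thin(pts, voxel_size=0.5)
# At most 20^3 = 8000 occupied voxels remain, so far fewer spheres
# need to be handed to the renderer.
```

The voxel size then doubles as the sphere radius, which ties the rendered opacity back to the sampling resolution of the cloud.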