Voxel raymarching is currently performed in a coordinate space that encompasses the entire voxel shape. This results in precision problems if the voxel cell size is too small relative to the shape.
Precision problems are most likely to occur where:

- The number of samples is very large, or
- The data is confined to a small fraction of the shape, as is common in smaller datasets rendered using the `ELLIPSOID` shape.
A better option would be to march the ray in eye coordinates, then convert to a perturbation of the shape coordinates relative to the eye position. This would enable us to render massive geospatial datasets as voxels.
## Current approach
At each step, we compute:

1. The ray position in "UV" space. This is a Cartesian space, where the shape is contained within 0 to 1.
2. The ray position in "shape UV" space. For the `BOX` shape, this is a simple scaling. For `CYLINDER`, the Cartesian "UV" coordinates are converted to `vec3(radius, height, angle)`. For `ELLIPSOID`, the output is `vec3(longitude, latitude, altitude)`.
3. The Jacobian matrix, built from the partial derivatives of the coordinate transform in step 2.

The ray position in "shape UV" space is then used to look up the appropriate value from the input voxel data.
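As a sketch of what step 2 computes in the `CYLINDER` case, the conversion might look like the following in JavaScript. The function name, recentering convention, and output normalization here are illustrative, not the exact CesiumJS GLSL implementation:

```javascript
// Illustrative sketch: convert a Cartesian "UV" position (each component
// in [0, 1], cylinder axis through the center of the unit cube) into
// "shape UV" coordinates vec3(radius, height, angle), each in [0, 1].
function convertUvToShapeUvCylinder(uv) {
  // Recenter x/y so the cylinder axis is at the origin.
  const x = uv[0] * 2.0 - 1.0;
  const y = uv[1] * 2.0 - 1.0;
  const radius = Math.sqrt(x * x + y * y); // 0 at the axis, 1 at the rim
  const height = uv[2]; // already normalized to [0, 1]
  // atan2 returns [-pi, pi]; shift and scale to [0, 1].
  const angle = (Math.atan2(y, x) + Math.PI) / (2.0 * Math.PI);
  return [radius, height, angle];
}
```

The Jacobian in step 3 would then hold the partial derivatives of `(radius, height, angle)` with respect to `(x, y, z)` at the current ray position.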
## Problems with current approach
If an individual voxel cell is small relative to the size of the shape, the difference between the coordinates of adjacent voxels can be too small to represent in a single-precision floating-point number (as used on the GPU).
For example, suppose a voxel dataset has 2 meter sample spacing, but is rendered using the `ELLIPSOID` shape. Adjacent voxel cells (with different data values) could have longitude coordinates as follows (after scaling to [0, 1] "UV" space):

```
0.70000002
0.70000007
```
However, in the single precision floating point calculations on the GPU, both of these numbers will be stored as the same value. The renderer will therefore look up the same value from the input and render both cells with the same color.
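This collision can be reproduced in JavaScript with `Math.fround`, which rounds a number to the nearest single-precision (32-bit) float, i.e. the same rounding the GPU applies:

```javascript
// Near 0.7, adjacent 32-bit floats are about 6e-8 apart, so a 5e-8
// difference in longitude is below the representable spacing.
const a = Math.fround(0.70000002);
const b = Math.fround(0.70000007);
console.log(a === b); // true: both round to the same 32-bit float
```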
## Proposed approach
Store the camera position in "shape UV" coordinates as a uniform, split into two `vec3`s representing the high- and low-significance bits. Then, during ray marching:

1. Compute the ray position in eye coordinates, as a single-precision `vec3`.
2. Convert the eye coordinate position to a perturbation of "shape UV" coordinates relative to the camera position, as a single-precision `vec3`. For `BOX` and `CYLINDER`, this conversion will be exact. For `ELLIPSOID`, the conversion will use approximations based on the local curvature at the camera position, similar to our approach for vertical exaggeration of 3D Tiles.
3. Add the perturbation to the camera position uniform to obtain a voxel coordinate, as two `vec3`s representing high- and low-significance bits. (When zoomed close to a small voxel cell, the single-precision perturbation will only affect the low-significance bits of the camera position. When zoomed out, the low-significance bits will be inaccurate, but the inaccuracy will be negligible at that scale--we will only be rendering the lower LODs from the input data.)
4. Use the two-part voxel coordinate to traverse the octree and find the appropriate sample from the input data.
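The split uniform and the perturbation add in step 3 can be sketched for a single scalar component (on the GPU each part would be a `vec3`), emulating GPU single precision with `Math.fround`. The function names are illustrative, not an existing API:

```javascript
// Split a double-precision coordinate (computed on the CPU) into two
// single-precision parts whose sum recovers almost all of the original.
function split(value) {
  const hi = Math.fround(value);
  const lo = Math.fround(value - hi);
  return { hi, lo };
}

// Add a small single-precision perturbation to a split coordinate.
// The perturbation lands in the low-significance part, preserving bits
// that a single float32 addition would discard. (A full implementation
// would renormalize the pair afterward so hi keeps the leading bits.)
function addPerturbation(coord, delta) {
  const lo = Math.fround(coord.lo + Math.fround(delta));
  return { hi: coord.hi, lo };
}
```

With the camera at longitude-UV `0.70000002` and a 5e-8 perturbation, the two-part sum lands on `0.70000007` to well below a float32 ulp, whereas a plain single-precision add misses by a few times 1e-8 -- enough to sample the wrong voxel cell in the example above.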
## Required changes
Some of the harder parts of this approach include:

- Replicating the current GLSL `convertUvToShapeUvSpaceDerivative` methods in JavaScript, to compute the camera position uniform on the CPU.
- Modifying the traversal in `Octree.glsl` to use a two-part coordinate with high and low bits.
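One way the two-part traversal could work, sketched along a single axis in JavaScript with `Math.fround` emulating GPU single precision (an illustrative sketch, not a patch for `Octree.glsl`):

```javascript
// Error-free sum of two emulated float32 values (the classic TwoSum):
// returns hi + lo exactly equal to a + b, with hi holding the leading bits.
function twoSum(a, b) {
  const s = Math.fround(a + b);
  const bb = Math.fround(s - a);
  const err = Math.fround(Math.fround(a - Math.fround(s - bb)) + Math.fround(b - bb));
  return { hi: s, lo: err };
}

// Descend a binary-subdivision tree along one axis using a hi/lo coordinate.
// Each level scales the coordinate by 2 (exact in binary floating point),
// picks the child containing it, and renormalizes the pair, so bits that a
// single float32 coordinate would lose keep steering the traversal.
function descendAxis(coord, levels) {
  let { hi, lo } = coord;
  const path = [];
  for (let i = 0; i < levels; i++) {
    hi = Math.fround(hi * 2);
    lo = Math.fround(lo * 2);
    let child = 0;
    if (hi + lo >= 1) {
      child = 1;
      hi = Math.fround(hi - 1); // exact for hi in [0.5, 2]
    }
    path.push(child);
    ({ hi, lo } = twoSum(hi, lo)); // fold lo's leading bits back into hi
  }
  return path;
}
```

With the two example longitudes from above, the two-part traversal reaches different leaves around level 25 (where the cell size drops below their 5e-8 separation), while a traversal driven by a single float32 coordinate visits identical cells at every level.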