Relative-to-eye rendering for voxels #12061

Open · jjhembd opened this issue Jul 1, 2024 · 0 comments

jjhembd (Contributor) commented Jul 1, 2024

Feature

Voxel ray marching is currently performed in a coordinate space that encompasses the entire voxel shape. This causes precision problems when the voxel cell size is very small relative to the overall shape.

Precision problems are most likely to occur where:

  • The number of samples is very large, or
  • The data is confined to a small fraction of the shape, as is common in smaller datasets rendered using the ELLIPSOID shape.

A better option would be to march the ray in eye coordinates, then convert to a perturbation of the shape coordinates relative to the eye position. This would enable us to render massive geospatial datasets as voxels.

Current approach

At each step, we compute:

  1. The ray position in "UV" space. This is a Cartesian space in which the shape is contained within [0, 1].
  2. The ray position in "shape UV" space. For the BOX shape, this is a simple scaling. For CYLINDER, the Cartesian "UV" coordinates are converted to vec3(radius, height, angle). For ELLIPSOID, the output is vec3(longitude, latitude, altitude).
  3. The Jacobian matrix, built from the partial derivatives of the coordinate transform in step 2.

In code form:

    // March along the view ray in full-shape "UV" space...
    vec3 positionUv = u_cameraPositionUv + currentT * viewDirUv;
    // ...then convert to "shape UV" space, also computing the Jacobian.
    PointJacobianT pointJacobian = convertUvToShapeUvSpaceDerivative(positionUv);

The ray position in "shape UV" space is then used to look up the appropriate value from the input voxel data.
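As a simplified illustration of the step-2 conversion for the CYLINDER shape (a sketch only, not the actual CesiumJS implementation, which also returns the Jacobian):

    // Sketch: convert Cartesian "UV" coordinates to cylinder "shape UV"
    // coordinates. Illustrative only.
    vec3 convertUvToCylinderUv(vec3 positionUv) {
        // Recenter the xy footprint from [0, 1] to [-1, 1].
        vec2 xy = positionUv.xy * 2.0 - 1.0;
        float radius = length(xy);              // distance from the cylinder axis
        float height = positionUv.z;            // already normalized to [0, 1]
        float angle = atan(xy.y, xy.x);         // angle around the axis, in [-pi, pi]
        return vec3(radius, height, angle);
    }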

Problems with current approach

If an individual voxel cell is small relative to the size of the shape, the difference between the coordinates of adjacent voxels can be smaller than the gap between adjacent single-precision floating point values (the precision available on the GPU).

For example, suppose a voxel dataset has 2 meter sample spacing, but is rendered using the ELLIPSOID shape. Adjacent voxel cells (with different data values) could have longitude coordinates as follows (after scaling to [0, 1] "UV" space):

  1. 0.70000002
  2. 0.70000007

However, in the single-precision floating point calculations on the GPU, both of these numbers round to the same stored value. The renderer will therefore look up the same sample from the input and render both cells with the same color.
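For reference: single-precision floats near 0.7 are spaced 2^-24 ≈ 6.0e-8 apart, while the two coordinates above differ by only 5e-8. Both therefore round to the same representable value, approximately 0.70000005.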

Proposed approach

Store the camera position in "shape UV" coordinates as a uniform, split into two vec3s representing the high- and low-significance bits. Then, during ray marching (see the GLSL sketch after this list):

  1. Compute the ray position in eye coordinates, as a single precision vec3.
  2. Convert the eye coordinate position to a perturbation of "shape UV" coordinates relative to the camera position, as a single-precision vec3. For BOX and CYLINDER, this conversion will be exact. For ELLIPSOID, the conversion will use approximations based on the local curvature at the camera position, similar to our approach for vertical exaggeration of 3D Tiles.
  3. Add the perturbation to the camera position uniform to obtain a voxel coordinate, as two vec3s representing high- and low-significance bits. (When zoomed close to a small voxel cell, the single-precision perturbation will only affect the low-significance bits of the camera. When zoomed out, the low-significance bits will be inaccurate, but the inaccuracy will be negligible at that scale; we will only be rendering the lower LODs from the input data.)
  4. Use the two-part voxel coordinate to traverse the octree and find the appropriate sample from the input data.
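A rough GLSL sketch of steps 1-3. All of the uniform and variable names here are hypothetical, and the mat3 linearization stands in for the curvature-based ELLIPSOID approximation, which would need additional correction terms:

    // Hypothetical uniforms: camera position in "shape UV" space split into
    // high- and low-significance parts, plus a local linearization of the
    // eye-to-shape-UV conversion evaluated at the camera position.
    uniform vec3 u_cameraShapeUvHigh;
    uniform vec3 u_cameraShapeUvLow;
    uniform mat3 u_eyeToShapeUv;

    // 1. Ray position in eye coordinates. The camera sits at the origin,
    //    so magnitudes stay small enough for single precision.
    vec3 positionEC = currentT * viewDirEC;

    // 2. Perturbation of "shape UV" coordinates relative to the camera.
    //    Exact for BOX and CYLINDER; approximate for ELLIPSOID.
    vec3 deltaShapeUv = u_eyeToShapeUv * positionEC;

    // 3. Two-part voxel coordinate. Near the camera, the perturbation
    //    only disturbs the low-significance part.
    vec3 shapeUvHigh = u_cameraShapeUvHigh;
    vec3 shapeUvLow = u_cameraShapeUvLow + deltaShapeUv;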

Required changes

Some of the harder parts of this approach include:

  • Replicating the current GLSL convertUvToShapeUvSpaceDerivative methods in JavaScript, to compute the camera position uniform on the CPU.
  • Modifying the traversal in Octree.glsl to use a two-part coordinate with high and low bits (a hypothetical sketch follows).
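For the second item, the traversal arithmetic could difference the high-significance parts first, so the large magnitudes cancel exactly before the small residuals are added, in the spirit of the relative-to-eye technique CesiumJS already uses for geometry (see EncodedCartesian3). A hypothetical sketch, where tileOriginUvHigh and tileOriginUvLow are assumed per-tile values:

    // Hypothetical two-part arithmetic for the octree traversal. The high
    // parts are close in magnitude, so their difference is exact in single
    // precision; only small residuals remain to be added.
    vec3 localUv = (shapeUvHigh - tileOriginUvHigh)
                 + (shapeUvLow - tileOriginUvLow);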