
Conversation

@MisteryDust (Contributor)

Description

This PR corrects an issue with the depth sorting order for Gaussian point rendering.

Problem

As I understand the rendering pipeline, once the camera is set, the 3D Gaussian points should be treated as "translucent objects." Because alpha blending is order-dependent, this requires sorting them from farthest to nearest relative to the camera. Consequently, in the sorted array:

  • The first point should be the farthest.
  • The last point should be the nearest.

However, the current implementation in gaussian.wgsl appears to have this order reversed, which may lead to incorrect blending and visual artifacts.
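To illustrate the intended order, here is a minimal CPU-side sketch in plain Rust (illustrative only, not the crate's actual GPU sort in gaussian.wgsl): splat indices are sorted by distance to the camera in descending order, so the farthest point comes first and blending proceeds back-to-front.

```rust
// Minimal sketch of back-to-front ("painter's") ordering for translucent
// splats. Illustrative plain Rust, not the crate's GPU sort.

/// Squared distance from the camera to a point. Squared distance is
/// sufficient as a sort key, since it is monotonic in distance.
fn depth_key(camera: [f32; 3], p: [f32; 3]) -> f32 {
    let d = [p[0] - camera[0], p[1] - camera[1], p[2] - camera[2]];
    d[0] * d[0] + d[1] * d[1] + d[2] * d[2]
}

/// Returns indices sorted farthest-first, the order required for
/// correct back-to-front alpha blending.
fn sort_back_to_front(camera: [f32; 3], points: &[[f32; 3]]) -> Vec<usize> {
    let mut order: Vec<usize> = (0..points.len()).collect();
    order.sort_by(|&a, &b| {
        // Compare b against a to sort in descending depth order.
        depth_key(camera, points[b])
            .partial_cmp(&depth_key(camera, points[a]))
            .unwrap()
    });
    order
}

fn main() {
    let camera = [0.0, 0.0, 0.0];
    let points = [[0.0, 0.0, 1.0], [0.0, 0.0, 5.0], [0.0, 0.0, 3.0]];
    let order = sort_back_to_front(camera, &points);
    // Farthest point (index 1, z = 5) comes first; nearest (index 0) last.
    println!("{:?}", order); // [1, 2, 0]
}
```

With a reversed comparator the nearest point would be blended first, which is the incorrect behavior this PR fixes.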

Solution

The depth-related logic in gaussian.wgsl has been adjusted to ensure the points are sorted and processed in the correct "farthest first" order.

Feature Request: Output Linear Depth in a Single-Channel Image

I've noticed the depth map is currently saved as an RGB image. Would it be possible to modify the system to also (or alternatively) output a single-channel depth map?

The goal is to have an image where the value of each pixel (e.g., in the R channel if saved in a format that supports it) directly encodes the linear depth from the camera to the Gaussian point. A grayscale image would be perfectly suited for this.
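As a sketch of what each pixel would hold, linear depth is simply the projection of the camera-to-point vector onto the camera's unit forward axis. The function below is hypothetical, not the crate's API:

```rust
// Sketch of the requested single-channel value: linear depth along the
// camera's forward axis. Hypothetical helper, not the crate's actual API.

/// Linear depth = projection of (point - camera) onto the unit forward axis.
fn linear_depth(camera: [f32; 3], forward: [f32; 3], p: [f32; 3]) -> f32 {
    let d = [p[0] - camera[0], p[1] - camera[1], p[2] - camera[2]];
    d[0] * forward[0] + d[1] * forward[1] + d[2] * forward[2]
}

fn main() {
    // Camera at the origin looking down +Z; a point 4 units ahead.
    let depth = linear_depth([0.0; 3], [0.0, 0.0, 1.0], [0.0, 0.0, 4.0]);
    assert_eq!(depth, 4.0);
    // Each pixel of the grayscale map would store one such value,
    // e.g. in a 16-bit PNG or a single-channel float texture.
}
```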

@mosure (Owner) left a comment
I think you are correct, good find!

re. Output Linear Depth in a Single-Channel Image:

this is possible to add; a good example with mesh fragments can be found here (with some differences around depth calculation, e.g. expected vs. accumulated depth): https://github.com/mosure/bevy_zeroverse/blob/main/src/render/depth.wgsl

note, when using bevy's rasterization pipeline, the output texture format dictates depth map quality (without a specialized depth rasterization pass with custom texture bindings). utilizing the HDR camera pipeline yields a ~16-bit single-channel depth map (ideally this would be 24-32 bit for large scenes). colorizing the depth with normalization, then de-normalizing back to a single channel, might yield higher-quality maps.
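The "colorize then de-normalize" idea above can be sketched as packing a normalized depth into three 8-bit channels (~24 bits of precision combined) and recovering the single-channel value on readback. This is an illustrative encoding, not code from the crate:

```rust
// Sketch of packing a normalized depth in [0, 1] into three 8-bit
// channels (~24 bits combined), then decoding back to a single channel.
// Illustrative only; the actual shader-side encoding may differ.

const MAX_24BIT: u32 = (1 << 24) - 1;

/// Split a normalized depth into high/mid/low bytes (R, G, B).
fn encode_rgb8(depth01: f32) -> [u8; 3] {
    let v = (depth01.clamp(0.0, 1.0) * MAX_24BIT as f32) as u32;
    [(v >> 16) as u8, (v >> 8) as u8, v as u8]
}

/// Recombine the bytes and rescale to a single-channel float.
fn decode_rgb8(rgb: [u8; 3]) -> f32 {
    let v = ((rgb[0] as u32) << 16) | ((rgb[1] as u32) << 8) | rgb[2] as u32;
    v as f32 / MAX_24BIT as f32
}

fn main() {
    let d = 0.123_456_f32;
    let restored = decode_rgb8(encode_rgb8(d));
    // Round-trip error stays within a couple of 24-bit quantization steps.
    assert!((restored - d).abs() < 2.0 / MAX_24BIT as f32);
}
```

This trades an extra encode/decode pass for precision beyond what a single 16-bit channel provides.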

@mosure mosure enabled auto-merge (squash) October 16, 2025 17:47
@mosure mosure merged commit a017cd7 into mosure:main Oct 16, 2025
16 checks passed