How is a single depth buffer going to work with stereo? Will each eye have its own depth buffer?
The way I thought about it is that the XRDepthInformation we return must be relevant to the XRView that was used to retrieve it. For a stereo system with only one depth buffer, there would be two options: either reproject the buffer so that each XRView gets the appropriate XRDepthInformation, or expose an additional XRView that is used only to obtain the single depth buffer (but then it'd be up to the app to reproject, some XRViews would have a null XRDepthInformation, and we'd be creating a synthetic XRView, so maybe not ideal). If we were to require the implementation to reproject the depth buffer, how big of a burden would that be?
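For reference, a minimal sketch of what per-view retrieval looks like from the app's side with the GPU path (names like `glBinding`, `xrRefSpace`, `session`, and `useDepthTexture` are placeholders, and the session is assumed to have been requested with the `depth-sensing` feature and a `gpu-optimized` usage preference):

```js
// Sketch only: `glBinding` is assumed to be an XRWebGLBinding for this session,
// and `xrRefSpace`, `session`, `useDepthTexture` are placeholders defined elsewhere.
function onXRFrame(time, frame) {
  session.requestAnimationFrame(onXRFrame);

  const pose = frame.getViewerPose(xrRefSpace);
  if (!pose) return;

  for (const view of pose.views) {
    // Each view is expected to get depth data that is valid for that view; with a
    // single physical depth buffer, this is where the UA would have to hand out
    // reprojected (or otherwise per-view-correct) data.
    const depthInfo = glBinding.getDepthInformation(view);
    if (depthInfo) {
      useDepthTexture(view, depthInfo);
    }
  }
}
```

If the UA reprojects internally, the app-facing shape stays exactly like this; with the synthetic extra XRView option, the loop above would instead have to special-case the one view that actually carries depth.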
I agree that it should be per view, but I don't know how much of a burden it is to calculate that.
It seems that if a stereoscopic device provides depth, it should make it available so it's correct for each eye.
@bialpio @toji Quest 3 will ship with support for GPU depth sensing. This information is returned as a texture array, not side by side.
Maybe we can update the API to make this clear?
/agenda discuss exposing depth buffer as a texture array
@toji agreed that we can define that GPU depth sensing always returns a texture array. This would simplify the spec, and there would be less chance of user confusion.
/agenda should we always expose the depth as a texture array?
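If we do settle on a texture array, the consumption side would presumably look something like this (sketch only; `gl` is a WebGL 2 context, `program` and `depthInfo` are placeholders, and the layer-per-view mapping is an assumption, not something the spec defines yet):

```js
// Sketch, assuming depthInfo.texture is a TEXTURE_2D_ARRAY with one layer per view
// (e.g. layer 0 = left eye, layer 1 = right eye -- the mapping is an assumption here).
gl.useProgram(program);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D_ARRAY, depthInfo.texture);
gl.uniform1i(gl.getUniformLocation(program, 'uDepthTexture'), 0);
// In the shader this would be a sampler2DArray, sampled with the view index as the layer:
//   float rawDepth = texture(uDepthTexture, vec3(uv, float(viewIndex))).r;
```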
Splitting @cabanier's question above into a new issue.