Replies: 2 comments 1 reply
-
We can talk to the Falcor devs to see if we can just move that library to the Slang repo and ship it with Slang releases.
-
This library came to my attention recently: HLSL++. At first glance, it doesn't seem to be as feature-complete as Falcor's math library, but it might be sufficient for what I'm looking for.
-
I think we should consider exposing the built-in vector and matrix intrinsics used by Slang for use in corresponding C++ host-side code. I think this is something that's currently really lacking, and that NVIDIANs don't seem to really notice due to the math library built into Falcor, but that outsiders using Slang run into very quickly.
In a typical Vulkan / GLSL framework, developers will include GLM, which mostly matches syntactically with the vector intrinsics in GLSL.
But for both HLSL and Slang, there really isn't a good solution to this (or at least, none that I've found).
The closest independent library I've found so far to try to match Slang/HLSL math is this "Linalg" library here: https://github.com/sgorsten/linalg
However, linalg assumes matrices are stored column major, which breaks a lot of host-to-device transfers of transforms. This can be a very subtle bug: when a matrix is orthogonal (and in graphics, they very often are), its transpose is its inverse, so the corrupted transform can look deceptively close to correct. Personally, this has caused me a lot of grief when generating instances of a BLAS in a TLAS on the GPU, where the end user sets up a buffer of linalg's float3x4 type. In Vulkan (and, if I understand correctly, DX), the graphics API assumes instance transforms are stored in a row-major layout, but a naive memcpy of linalg's float3x4 class results in what's effectively the transpose (and hence, for orthogonal matrices, the inverse) of that float3x4 in device memory. And beyond this, linalg is missing quite a few intrinsics.
Alternatively, users today can sort of follow what NVIDIANs do and copy from Falcor's math library here: https://github.com/NVIDIAGameWorks/Falcor/tree/master/Source/Falcor/Utils/Math. Falcor mimics Slang's matrix intrinsics, and does so in a way where the matrices are stored row major. It's also thoroughly vetted by folks using the Falcor framework. Still, if an outsider wants to adopt Slang by incorporating the math utilities in Falcor, they need to make quite a few edits to that code to account for Falcor-specific namespaces and preprocessor definitions.
The third option is to try to repurpose GLM for Slang use, but then code that compiles with the host compiler doesn't naturally compile when shared with Slang. Single-source math code is a workflow that folks using CUDA are used to, and I think we could eventually match it with Slang.
And the fourth option is to have end users write their own math libraries... but that is quite a lot of work, and not a good user experience at all.
So, to me, none of these options really make any sense. Slang supports C++ as a target language. So, there must be some definitions for all these Slang intrinsics buried somewhere within the Slang compiler, right?
If so, could we consider consolidating those C++ intrinsic implementations in a common spot, which could e.g. be bundled with Slang releases, similar perhaps to GFX? (Specifically, the vector and matrix intrinsics: dot, cross, determinant, etc.)