Performance - Text Rendering: Use texture atlas for characters #53
Comments
I've started working on a Reason port of the v1 texture atlas. I'll let you know once I've come up with something reviewable! It will probably require some changes in …
@cryza - awesome! Thank you very much for your help with this 💯
It took me quite some time, but I think I have now experimented enough to have a good understanding of what we can do to get an approach working that is similar to what we had in oni v1. Originally, I wanted to just take the logic we already have there in TypeScript and port it to Reason. That approach failed for now, because we don't yet have all the plumbing available to do that, and also because we already have differing abstractions in place that I would be bypassing that way. Anyhow, experimenting with it gave me some insights.

I was originally planning on adding support for heterogeneous data inside vertex attribute buffers, because both WebGL and OpenGL support mixing different data types within a single vertex attribute buffer. This means a vertex shader could read differently-typed attributes out of one interleaved buffer.

I was also planning to alter the complete rendering logic. What we have now results in one GPU upload and one rendering pass per glyph. I was going to recreate the instanced logic we have in oni v1, which uses a single instanced draw call to render many glyphs at once (see the sketch below).

This logic requires support for the instanced variants of the GL API, which is unfortunately a little troublesome. In oni v1, we could rely on our Chromium version providing that API through WebGL 2, but Edge, Internet Explorer and also Safari don't yet support WebGL 2: https://developer.mozilla.org/en-US/docs/Web/API/WebGL2RenderingContext#Browser_compatibility

This means for now, we need to decide between building the texture atlas on top of our current, non-instanced rendering path (the first option) or requiring instanced rendering support right away (the second option). We could start with the first option because it would mean less effort to implement, and then later check whether we actually need more performance. Then we can still decide if we want to revisit the second option. What do you say?
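For illustration, here is a rough TypeScript/WebGL sketch of the two approaches. None of the names (`Glyph`, `drawTextPerGlyph`, `drawTextInstanced`, the buffer and attribute-location parameters) come from Revery or oni; they are placeholders, and the shader/attribute setup is elided:

```typescript
// Sketch only; hypothetical names, not the actual Revery / oni v1 implementation.
interface Glyph {
  x: number; y: number;   // screen-space offset of the quad
  u0: number; v0: number; // top-left corner in the atlas (normalized UVs)
  u1: number; v1: number; // bottom-right corner in the atlas (normalized UVs)
}

// Current approach, conceptually: one texture upload and one draw call per glyph.
function drawTextPerGlyph(gl: WebGLRenderingContext, glyphs: Glyph[]): void {
  for (const glyph of glyphs) {
    // ...rasterize the glyph, upload it with texImage2D, bind its quad (elided)...
    gl.drawArrays(gl.TRIANGLES, 0, 6); // one quad, one draw call, per glyph
  }
}

// Instanced approach, conceptually: one atlas texture, one buffer of per-glyph
// data (offset + atlas UVs), and a single instanced draw call for the whole run.
function drawTextInstanced(
  gl: WebGL2RenderingContext,
  atlasTexture: WebGLTexture,
  perGlyphBuffer: WebGLBuffer,
  offsetLocation: number,
  uvLocation: number,
  glyphs: Glyph[],
): void {
  gl.bindTexture(gl.TEXTURE_2D, atlasTexture); // every glyph samples the same atlas
  gl.bindBuffer(gl.ARRAY_BUFFER, perGlyphBuffer);
  const data = new Float32Array(glyphs.flatMap(g => [g.x, g.y, g.u0, g.v0, g.u1, g.v1]));
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.DYNAMIC_DRAW);

  // Per-glyph attributes advance once per instance rather than once per vertex
  // (the vertexAttribPointer setup for these attributes is elided).
  gl.vertexAttribDivisor(offsetLocation, 1);
  gl.vertexAttribDivisor(uvLocation, 1);

  // 6 vertices for the shared quad, one instance per glyph: a single draw call.
  gl.drawArraysInstanced(gl.TRIANGLES, 0, 6, glyphs.length);
}
```

The shader would then position the shared quad using the per-instance offset and sample the atlas using the per-instance UVs.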
Awesome! Thanks for the update @cryza.
Sounds reasonable to me for now.
Thanks for calling out the options here! Given that the first option sounds like less effort to implement for now, I'm on board with pursuing that. Getting a 'texture atlas' in place gives us a big leg-up when we decide to implement more aggressive optimizations! This also keeps the set of changes more incremental. One thing to note: there is an extension for WebGL 1 for instanced rendering as well.
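For reference, the WebGL 1 extension in question is presumably ANGLE_instanced_arrays. A minimal feature-detection sketch (plain TypeScript against a generic GL context, nothing Revery-specific) could look like this:

```typescript
// Prefer WebGL 2's built-in instancing, fall back to ANGLE_instanced_arrays on
// WebGL 1, and report when neither is available (e.g. IE / older Edge / Safari).
type InstancedDraw = (mode: number, first: number, count: number, instances: number) => void;

function getInstancedDraw(gl: WebGLRenderingContext | WebGL2RenderingContext): InstancedDraw | null {
  if (typeof WebGL2RenderingContext !== "undefined" && gl instanceof WebGL2RenderingContext) {
    // Instancing is part of core WebGL 2.
    return (mode, first, count, instances) => gl.drawArraysInstanced(mode, first, count, instances);
  }
  const ext = gl.getExtension("ANGLE_instanced_arrays");
  if (ext !== null) {
    // Same semantics as WebGL 2, with ANGLE-suffixed entry points.
    return (mode, first, count, instances) => ext.drawArraysInstancedANGLE(mode, first, count, instances);
  }
  return null; // No instanced rendering available; stick to the non-instanced path.
}
```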
Skia, integrated in #567, handles this for us now!
Rendering text is currently very expensive in Revery, because it involves lots of context-switches to jump between shaders (this is made even worse by the fact that we currently regenerate textures every frame - but that's a separate issue).
The text rendering could be significantly improved by having a texture atlas that contains all the rendered glyphs - then, we could render a line of text in a single pass (or at least, a more minimal set of passes), as opposed to the situation today, where each quad / texture gets its own pass.
There's an excellent TextureAtlas implementation by @cryza in Oni (https://github.com/onivim/oni/blob/master/browser/src/Renderer/WebGLRenderer/TextRenderer/GlyphAtlas/GlyphAtlas.ts) that could be useful here 👍
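To sketch the idea in TypeScript (this is not the linked GlyphAtlas code, just an illustration with made-up names): the atlas is one large, pre-allocated texture plus a map from glyph key to the sub-rectangle it occupies, filled by a simple row packer and updated via texSubImage2D so the texture itself is only allocated once.

```typescript
// Illustrative glyph atlas sketch: one pre-allocated texture, a cache of glyph
// regions, and a naive row ("shelf") packer. Overflow handling is elided.
interface AtlasRegion { x: number; y: number; width: number; height: number; }

class GlyphAtlas {
  private regions = new Map<string, AtlasRegion>();
  private cursorX = 0;
  private cursorY = 0;
  private rowHeight = 0;

  constructor(
    private gl: WebGLRenderingContext,
    private texture: WebGLTexture, // allocated once up front, e.g. 1024x1024 RGBA
    private size: number,
  ) {}

  // Return the atlas region for a glyph, uploading its bitmap on first use.
  getOrAdd(key: string, bitmap: ImageData): AtlasRegion {
    const cached = this.regions.get(key);
    if (cached !== undefined) return cached;

    // Start a new row when the glyph does not fit on the current one.
    if (this.cursorX + bitmap.width > this.size) {
      this.cursorX = 0;
      this.cursorY += this.rowHeight;
      this.rowHeight = 0;
    }

    const region: AtlasRegion = {
      x: this.cursorX, y: this.cursorY,
      width: bitmap.width, height: bitmap.height,
    };

    // Upload only the glyph's sub-rectangle into the already-allocated texture.
    this.gl.bindTexture(this.gl.TEXTURE_2D, this.texture);
    this.gl.texSubImage2D(
      this.gl.TEXTURE_2D, 0, region.x, region.y,
      this.gl.RGBA, this.gl.UNSIGNED_BYTE, bitmap,
    );

    this.cursorX += bitmap.width;
    this.rowHeight = Math.max(this.rowHeight, bitmap.height);
    this.regions.set(key, region);
    return region;
  }
}
```

With something like this in place, drawing a line of text needs only one texture bind plus the per-glyph regions for UVs, which is what makes the single-pass (or instanced) rendering described above possible.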