Tomáš Malý edited this page Aug 6, 2020 · 8 revisions

Rendering

The browser library is completely rendering-API agnostic. You may use it with DirectX, Vulkan, Metal, OpenGL, or anything else. However, it is designed to be used primarily with OpenGL, and some mapping or data reorganization might be needed with other APIs.

Rendering with the renderer library

The rendering library requires OpenGL version 3.3, OpenGL ES 3.0 or WebGL 2.

OpenGL context

All functions in the renderer library expect a valid OpenGL context to be already created and bound to the calling thread.

The OpenGL context creation is not part of the library. You may use, e.g., SDL to create the OpenGL context.

Call renderer::loadGlFunctions once, after you have created the OpenGL context, to load the gl function pointers. This modifies global state and must be called before any thread tries to use any gl function.

Render context

The class RenderContext maintains all OpenGL resources. It manages the resources created by the map as well as some of its own resources needed for rendering, e.g. shaders.

Use RenderContext::bindLoadFunctions to set the map's load callbacks to point to the methods of this render context.

Generally, one RenderContext for the entire application is sufficient, although you may also create one for each map.

Render view

Use RenderContext::createView to create a new render view for the specified camera.

The render view maintains the state necessary to render a contiguous sequence of images for the specified camera.

Use the method RenderView::render each frame to issue actual OpenGL draw calls.

The method RenderView::variables gives you access to the IDs of some of the OpenGL objects managed by the render view. You may use it to improve the integration of VTS with your own rendering. Be careful not to modify any of the state associated with any of these objects.

Use RenderView::getWorldPosition to read the stored depth value and reconstruct the 3D world-space (physical SRS) coordinates corresponding to the provided screen-space (e.g. mouse) coordinates.
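Conceptually, this kind of lookup un-projects the screen coordinates and the depth back through the inverse view-projection transform. The helper below is only an illustrative sketch of that math, assuming screen coordinates and depth normalized to [0, 1] and a column-major 4x4 matrix; none of these names are part of the library's API.

```cpp
#include <array>

// Hypothetical helper illustrating an un-projection: screen-space x, y
// and depth in [0, 1] are converted to normalized device coordinates,
// multiplied by the inverse view-projection matrix (column-major 4x4),
// and divided by w to obtain world-space coordinates.
std::array<double, 3> unproject(const std::array<double, 16> &invViewProj,
                                double sx, double sy, double depth)
{
    const double ndc[4] = { sx * 2 - 1, sy * 2 - 1, depth * 2 - 1, 1 };
    double out[4] = { 0, 0, 0, 0 };
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            out[r] += invViewProj[c * 4 + r] * ndc[c];
    return { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
}
```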

More detailed frame rendering

The RenderView::render is, in fact, just a convenience wrapper around a few other methods that can be used instead.

Call these four methods, in this order, in every frame: RenderView::renderInitialize, RenderView::renderSurfaces, RenderView::renderGeodata, RenderView::renderFinalize.

The renderInitialize method resizes the render buffers, if needed, and clears them. It is mandatory to call it every frame.

The methods renderSurfaces and renderGeodata are optional.

Finally, renderFinalize is also mandatory; it copies the rendered image into the selected target framebuffer or texture, possibly resolving multisampled buffers.

You may interleave these methods with any custom rendering commands; however, you are responsible for restoring any OpenGL state that you modify.

Modifying the rendered contents

After calling Camera::renderUpdate and before RenderView::render, you may freely modify the draw commands accessible via Camera::draws.

Rendering without the renderer library

Instances of the classes DrawSurfaceTask, DrawGeodataTask, and DrawSimpleTask each contain a command for rendering. The class DrawColliderTask provides meshes for physics engines. Collections of these commands are in CameraDraws, which is accessed via Camera::draws.

The commands are split into several categories:

  • opaque commands should be rendered first; their mutual order is not important.
  • transparent commands should be rendered after the opaque ones, and their order is significant.
  • geodata commands require further processing according to the provided properties, and their behavior changes significantly depending on the view.
  • infographics commands provide debugging facilities, which are useful for introspection of the data and for development of the libraries and possibly your application.
  • colliders commands provide meshes that represent physical boundary conditions.

Finally, CameraDraws::camera contains additional relevant information about the camera used for the rendering.

Resources

The map takes care of decoding resources from any transfer format and passes them to the application via the load* callbacks. These callbacks receive a ResourceInfo, which the application should fill in; a Gpu*Spec, which defines the properties and data of the resource that the application should copy; and the id of the resource, which the application may use to identify it.

ResourceInfo

The load* callback is obliged to set the member ResourceInfo.userData, which is later used by the map to reference the specific resource during rendering. It should also set all the other members of the ResourceInfo, namely ramMemoryCost and gpuMemoryCost, to indicate the memory used by that specific resource.

Note: the ResourceInfo.userData is of type std::shared_ptr<void>. You may, however, use it with any class you wish. For example: resourceInfo.userData = std::make_shared<MyTextureClass>();. Despite the pointer being cast to void, the shared_ptr will eventually call the appropriate destructor.
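This type-erased destructor behavior is plain standard C++ and can be demonstrated without the library at all; MyTextureClass here is just an example name, as above.

```cpp
#include <memory>

// A class with an observable destructor, standing in for a
// hypothetical MyTextureClass.
struct MyTextureClass
{
    static int destroyed; // counts destructor invocations
    ~MyTextureClass() { destroyed++; }
};
int MyTextureClass::destroyed = 0;

void demo()
{
    std::shared_ptr<void> userData; // as in ResourceInfo.userData
    userData = std::make_shared<MyTextureClass>();
    // The static type is void, yet the shared_ptr remembers the
    // original deleter, so ~MyTextureClass runs on release.
    userData.reset();
}
```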

Texture

The structure GpuTextureSpec contains all the information required for the application to load the texture to the gpu. The fields width and height are the resolution of the texture, in pixels. The attribute components specifies the number of color channels per pixel. The attribute type defines the data type of each channel and is, naturally, the same for all channels. The field buffer (of type Buffer) contains the raw data. Finally, the attributes filterMode and wrapMode define how the texture should be configured.
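For example, the expected size of the raw buffer follows directly from these fields. The struct below mirrors only the fields described above for illustration; it is not the library's actual GpuTextureSpec, and typeSize stands for the byte size implied by the type field.

```cpp
#include <cstdint>

// Simplified, illustrative mirror of the fields described above.
struct TextureSpec
{
    uint32_t width = 0;      // resolution in pixels
    uint32_t height = 0;
    uint32_t components = 0; // color channels per pixel
    uint32_t typeSize = 0;   // bytes per channel, implied by `type`
};

// Size in bytes that the raw data buffer is expected to occupy.
uint64_t expectedBufferSize(const TextureSpec &spec)
{
    return uint64_t(spec.width) * spec.height
         * spec.components * spec.typeSize;
}
```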

Mesh

The structure GpuMeshSpec is slightly more complex.

The mesh is composed of multiple graphics primitives (usually triangles). The type of primitives in this mesh is given by the field faceMode.

The primitives are defined by their vertices. However, multiple primitives may share some of their vertices; therefore, it is common to separate all unique vertices into their own array and reference them by indices. If no vertices are shared, the indicesCount is zero.
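A quad is the classic illustration of this sharing: split into two triangles it would need six vertices without indices, but with indices the four unique corners are stored once and the two shared corners are referenced twice. The types below are illustrative, not the library's buffer layout.

```cpp
#include <cstdint>
#include <vector>

// Illustrative vertex: 2D position only.
struct Vertex { float x, y; };

// The four unique corners of a unit quad.
const std::vector<Vertex> vertices = {
    { 0, 0 }, { 1, 0 }, { 1, 1 }, { 0, 1 },
};

// Two triangles (a triangles faceMode); corners 0 and 2 appear in
// both triangles, so each is stored once but referenced twice.
const std::vector<uint16_t> indices = { 0, 1, 2, 0, 2, 3 };
```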

Each vertex may contain several pieces of data, e.g. position, uv coordinates, or normals, called attributes. These attributes may differ in type and dimensionality. Luckily, the attribute configuration is always the same for all vertices in a single mesh.

The whole attribute configuration is in the array field attributes, each element with the fields type, components (dimensionality), stride, and offset. The stride specifies the number of bytes from the beginning of one vertex to the beginning of the next one, and the offset is the number of bytes from the beginning of the vertices buffer to the first byte of the attribute in the first vertex.
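For an interleaved layout, the values work out mechanically: each attribute's offset is the sum of the sizes of the attributes packed before it within a vertex, and every stride equals the total vertex size. The helper below is a hypothetical sketch of that computation, not library code.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Simplified, illustrative mirror of a single attribute configuration.
struct Attribute
{
    uint32_t components; // dimensionality
    uint32_t typeSize;   // bytes per component
    uint32_t stride;     // bytes between consecutive vertices
    uint32_t offset;     // bytes from buffer start to the first value
};

// Build an interleaved layout from (components, typeSize) pairs:
// attributes are packed one after another inside each vertex, and
// each attribute's stride equals the total vertex size.
std::vector<Attribute> interleaved(
    const std::vector<std::pair<uint32_t, uint32_t>> &attrs)
{
    std::vector<Attribute> result;
    uint32_t offset = 0;
    for (const auto &a : attrs)
    {
        result.push_back({ a.first, a.second, 0, offset });
        offset += a.first * a.second;
    }
    for (auto &r : result)
        r.stride = offset; // total vertex size
    return result;
}
```

For example, a float3 position followed by a float2 uv yields offsets 0 and 12 and a stride of 20 bytes for both attributes.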

Finally, the actual data are in the buffers vertices and indices. The numbers of vertices and indices are given by verticesCount and indicesCount, respectively. Each vertex index is a 16- or 32-bit unsigned integer, depending on the indexMode.

Font

Fonts are composed of multiple resources. The basic (initial) resource is provided with GpuFontSpec, whose Buffer data contains the font file with some data stripped, and whose field handle, of type FontHandle, can later be used to request further individual parts.

The additional parts are provided, some time after being requested, as regular textures that contain signed distance fields for the glyphs.

Geodata

TBD
