Tuesday, May 16, 2017

Rebuilding the Entity Index

Background

If you are not familiar with the Stingray Entity system you can find good resources to catch up here:
The Entity system is a very central part of the future of Stingray, and as we integrate it with more parts new requirements pop up. One of those is the ability to interact with Entity Components via the visual scripting language in Stingray - Flow. We want to provide a generic interface to Entities in Flow without adding weight to the fundamental Entity system.

To accomplish this we added a “Property” system that Flow and other parts of the Stingray Engine can use, and which is optional for each Component to implement in addition to its own specialized API. The Property System provides an API to read and write entity component properties using the name of the component, the property name and the property value. To do this, the Property System needs to be able to find a specific Component Instance by name for an Entity, but the Entity System does not directly track an Entity / Component Instance relationship. It does not even track the Entity / Component Manager relationship.

So what we did was add the Entity Index, a registry where we add all Component Instances created for an Entity as it is constructed from an Entity Resource. To make it usable we also added the rule that each Component in an Entity Resource must have a unique name within the resource, so the user can identify it by name when using the Flow system.

In order for the Flow system to work we need to be able to find a specific component instance by name for an Entity, so we can get and set properties of that instance. This is the job of the Entity Index: you register an Entity's components by name so you can look them up later.

Property System and Entity Index

When creating an Entity we use the name of the component instance together with the component type name, i.e. the Component Manager, and create an Entity Index entry that maps the name to the component instance and the Component Manager. In the Stingray Entity system an Entity cannot have two component instances with the same name.

Example:

 

Entity

  • Transform - Transform Component
  • Fog - Render Data Component
  • Vignette - Render Data Component

For this Entity we would instantiate one Transform Component Instance and two Render Data Component Instances. We get back an InstanceId for each Component Instance, which can be used to identify whether we are talking about Fog or Vignette even though they are created for the same Entity using the same Component Manager.

We also register this in the Entity Index as:

Key      Value
Entity   Array<Components>

The Array<Components> contains one or more entries, each holding the following fields:

  • Component Manager
  • InstanceId
  • Name
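
As a rough sketch, the original index boils down to a per-entity array of such entries. The type and function names below are illustrative, not the actual Stingray code:

#include <cstdint>
#include <unordered_map>
#include <vector>

struct ComponentManager;                 // opaque here, we only store pointers to it
typedef uint32_t InstanceId;
typedef uint32_t NameHash;               // 32-bit hash of the component name
typedef uint32_t EntityId;

struct ComponentEntry {
    ComponentManager *manager;
    InstanceId instance_id;
    NameHash name;
};

// One key-array pair per entity, one entry per component instance.
typedef std::unordered_map<EntityId, std::vector<ComponentEntry>> EntityIndex;

void add_entry(EntityIndex &index, EntityId e, NameHash name, ComponentManager *manager, InstanceId id)
{
    index[e].push_back({manager, id, name});
}

const ComponentEntry *find_entry(const EntityIndex &index, EntityId e, NameHash name)
{
    auto it = index.find(e);
    if (it == index.end())
        return nullptr;
    for (const ComponentEntry &entry : it->second)
        if (entry.name == name)
            return &entry;
    return nullptr;
}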

Let's add a few entities and components to the Entity Index:

entity_1.id

Name                 Component Manager        InstanceId
hash(“Transform”)    &transform_manager       13
hash(“Fog”)          &render_data_manager_1   4
hash(“Vignette”)     &render_data_manager_1   5

 

entity_2.id

Name                 Component Manager        InstanceId
hash(“Transform”)    &transform_manager       14
hash(“Fog”)          &render_data_manager_1   6
hash(“Vignette”)     &render_data_manager_1   7

 

entity_3.id

Name                 Component Manager        InstanceId
hash(“Transform”)    &transform_manager       2
hash(“Fog”)          &render_data_manager_2   4
hash(“Vignette”)     &render_data_manager_2   5

This allows Flow to get and set properties using the Entity and the Component Name: from those two we can look up which Component Manager owns the component instance and which InstanceId it was assigned, so we can get the Instance and operate on the data.

The problem with this implementation is that it becomes very large - we need a registry with one key-array pair for each Entity, where the array contains one entry for each Component Instance of the Entity. That is not very efficient as the number of entities grows. There is no reuse at all in the Entity Index - and there can't be - each entry in the index is unique with no overlap.

Here are some measurements using a synthetic test that creates entities, adds and looks up components on them, and deletes entities. It deletes parts of the entities as it runs and does garbage collection. The number of entities given in the tables is the total number created during the test, not the number of simultaneous entities, which varies over time. The entities use 75 different component compositions, ranging from a single component up to eleven. The test is single-threaded with no locking besides some in the memory subsystem, which makes the times match up well with CPU usage.

Entity Count   Test run time (s)   Memory used (MB)   Time/Entity (us)
10k            0.01                5.79               0.977
20k            0.01                5.79               0.488
40k            0.03                11.88              0.732
80k            0.06                11.88              0.732
160k           0.13                25.69              0.793
320k           0.32                31.04              0.977
640k           1.08                55.90              1.648
1.28m          2.58                65.82              1.922
2.56m          6.35                65.55              2.366
5.12m          13.42               120.55             2.500
10.24m         25.69               130.55             2.393

As you can see, each doubling of the entity count takes longer and uses more memory, and at the larger counts both time and memory increase pretty dramatically.

Since we plan to use the entity system extensively we need an index that is more efficient with memory and scales more linearly in CPU usage.

Shifting control of the InstanceId

The InstanceId is defined to be unique to the Entity instance for a specific Component Manager - it does not have to be unique for all components in a Component Manager, nor does it have to be unique across different Component Managers.

The create and lookup functions for a Component Instance look like this:

InstanceWithId instance_with_id = transform_manager.create(entity);
InstanceId my_transform_id = instance_with_id.id;

.....

Instance instance = transform_manager.lookup(entity, my_transform_id);

The interface is somewhat confusing since the create function returns both the component instance id and the instance. This is done so you don't have to do a lookup of the instance directly after create. As you can see we have no knowledge of what the resulting InstanceId will be, so we can't make any assumptions about it in the Entity Index, forcing us to store a unique entry for each Component Instance of every Entity.

But we already set up the rule that each Component in an Entity Resource must have a unique name for the Property System to work - a requirement that was added at a later stage than the design of the initial Entity system. Now that it is there we can make use of it to simplify the Entity Index.

Instead of letting each Component Manager decide the InstanceId, we let the caller of the create function decide it. We can decide that the InstanceId should be the 32-bit hash of the Component Name from the Entity Resource. Doing this restricts the optimizations a component manager could do if it had control of the InstanceId, but so far we have had no real use case for that, and the benefits of changing this are greater than the loss of a possible optimization we might do sometime in the future.

So we change the API like this:

Instance instance = transform_manager.create(entity, hash("Transform"));

.....

Instance instance = transform_manager.lookup(entity, hash("Transform")); 

Nice, clean and symmetrical. Note though that the InstanceId is entirely up to the caller to control; it does not have to be a hash of a string. It must be unique for an Entity within a specific component manager. For it to work with the Entity Index and the Property System, the InstanceId needs to be unique across all Component Instances in all Component Managers for each Entity instance. This is enforced when an Entity is created from a resource, but not when constructing Component Instances by hand in code. If you want a component added outside the resource construction to work with the Property System, care needs to be taken so its name does not collide with the names of other component instances for the Entity.
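
For example, adding a component by hand could look something like this (the name “ExtraFog” is made up for illustration):

// "ExtraFog" must not collide with any other component instance name on this entity,
// and the instance also needs to be registered in the Entity Index under the same
// name if the Property System should be able to find it.
Instance extra_fog = render_data_manager_1.create(entity, hash("ExtraFog"));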

Let's add the entities and components again using the new rules; the Entity Index now looks like this:

entity_1.id

Name                 Component Manager        InstanceId
hash(“Transform”)    &transform_manager       hash(“Transform”)
hash(“Fog”)          &render_data_manager_1   hash(“Fog”)
hash(“Vignette”)     &render_data_manager_1   hash(“Vignette”)

 

entity_2.id

Name                 Component Manager        InstanceId
hash(“Transform”)    &transform_manager       hash(“Transform”)
hash(“Fog”)          &render_data_manager_1   hash(“Fog”)
hash(“Vignette”)     &render_data_manager_1   hash(“Vignette”)

 

entity_3.id

Name                 Component Manager        InstanceId
hash(“Transform”)    &transform_manager       hash(“Transform”)
hash(“Fog”)          &render_data_manager_2   hash(“Fog”)
hash(“Vignette”)     &render_data_manager_2   hash(“Vignette”)

As we can see, the InstanceId column now contains redundant data - we only need to store the Component Manager pointer. We use the Entity and the hash of the component name to find the Component Manager, which can then be used to look up the Instance.

entity_1.id

Name                 Component Manager
hash(“Transform”)    &transform_manager
hash(“Fog”)          &render_data_manager_1
hash(“Vignette”)     &render_data_manager_1

 

entity_2.id

Name                 Component Manager
hash(“Transform”)    &transform_manager
hash(“Fog”)          &render_data_manager_1
hash(“Vignette”)     &render_data_manager_1

 

entity_3.id

Name                 Component Manager
hash(“Transform”)    &transform_manager
hash(“Fog”)          &render_data_manager_2
hash(“Vignette”)     &render_data_manager_2


We now also see that the lookup arrays for entity_1 and entity_2 are identical, so two keys could point to the same value.

Options for implementation

We could opt for an index that has a map from entity_id to a list or map of entries for lookup:

entity_1.id = [ hash("Transform"), &transform_manager ], [ hash("Fog"), &render_data_manager_1 ], [ hash("Vignette"), &render_data_manager_1 ]
entity_2.id = [ hash("Transform"), &transform_manager ], [ hash("Fog"), &render_data_manager_1 ], [ hash("Vignette"), &render_data_manager_1 ]
entity_3.id = [ hash("Transform"), &transform_manager ], [ hash("Fog"), &render_data_manager_2 ], [ hash("Vignette"), &render_data_manager_2 ]

We should probably not store the same lookup list multiple times if it can be reused by multiple entity instances, as this wastes space. But at any time a new component instance can be added to or removed from an entity, and its entry list would then change - that means managing memory for the lookup lists and detecting when two entities start to diverge so we can make a new extended copy of the entry list for the changed entity. We should probably also remove lookup lists that are no longer used, as keeping them around would waste memory.

Entity and component creation

The call sequence for creating entities from resources (or even programmatically) looks something like this:

Entity e = create();
Instance transform = transform_manager.create(e, hash("Transform"));
Instance fog = render_data_manager_1.create(e, hash("Fog"));
Instance vignette = render_data_manager_1.create(e, hash("Vignette"));

In this scenario we could potentially build an entity lookup list for the entity which contains lookups for the transform, fog and vignette instances:

entity_index.register(e, [ hash("Transform"), &transform_manager ], [ hash("Fog"), &render_data_manager_1 ], [ hash("Vignette"), &render_data_manager_1 ]);

But as stated previously - component instances can be added and removed at any point in time making the lookup table change during the lifetime of the Entity. We need to be able to extend it at will, so it should look something like this:

Entity e = create();
Instance transform = transform_manager.create(e, hash("Transform"));
entity_index.register(e, [ hash("Transform"), &transform_manager ]);

Instance fog = render_data_manager_1.create(e, hash("Fog"));
entity_index.register(e, [ hash("Fog"), &render_data_manager_1 ]);

Instance vignette = render_data_manager_1.create(e, hash("Vignette"));
entity_index.register(e, [ hash("Vignette"), &render_data_manager_1 ]);

Now we just extend the lookup list of the entity as we add new components. This means that two entities that started out life with identical lookup lists after being spawned from a resource might diverge over time, so the Entity Index needs to handle that.

Component Instances can also be destroyed, so we should handle that as well. Even if we do not remove entries for destroyed component instances things will still work - if we keep a lookup to an Instance that has been removed we would just fail the lookup in the corresponding Component Manager. It would waste memory though, something we need to be aware of going forward.

Building a Prototype chain

Looking at how we build up the Component Instances for an Entity, it goes something like this: first add the Transform, then add Fog and finally Vignette. This looks sort of like an inheritance chain…
Let's call a lookup list that contains a specific set of entry values a Prototype.

An entity starts with an empty lookup list that contains nothing [] - this is the base Prototype, let's call it P0.
  • Add the “Transform” component and your prototype is now P0 + [&transform_manager, “Transform”] - let's call that prototype P1.
  • Add the “Fog” component, now the prototype is P1 + [&render_data_manager_1, “Fog”] - call it P2.
  • Add the “Vignette” component, now the prototype is P2 + [&render_data_manager_1, “Vignette”] - call it P3.
Your entity is now using the prototype P3, and from that you can find all the lookup entries you need.
The prototype registry will contain:

P0 = []
P1 = [] + [&transform_manager, "Transform"]
P2 = [] + [&transform_manager, "Transform"] + [&render_data_manager_1, "Fog"]
P3 = [] + [&transform_manager, "Transform"] + [&render_data_manager_1, "Fog"] + [&render_data_manager_1, "Vignette"]

If you create another entity which uses the same Components with the same names you will end up with the same prototype:

Create entity_2; it will have the empty prototype P0.
  • Add the “Transform” component and your prototype is now P1.
  • Add the “Fog” component, now the prototype is P2.
  • Add the “Vignette” component, now the prototype is P3.

We end up with the same prototype P3 as the other entity - as long as we add the components in the same order we end up with the same prototype. For entities created from resources this will be true for all entities created from the same entity resource. For components that are added programmatically it will only work if the code adds the components in the same order, but even if it does not always do so we will still have a very large overlap for most of the entities.

Let's look at the third example where we do not have an exact match, entity_3:

Create entity_3; it will have the empty prototype P0.
  • Add the “Transform” component and your prototype is now P0 + [&transform_manager, “Transform”] = P1.
  • Add the “Fog” component - this render data component manager is not the same one as for entity_1 and entity_2, so we get P1 + [&render_data_manager_2, “Fog”]. This does not match P2, so we make a new prototype P4 instead.
  • Add the “Vignette” component, now the prototype is P4 + [&render_data_manager_2, “Vignette”] -> P5.
The prototype registry will contain:

P0 = []
P1 = [] + [&transform_manager, "Transform"]
P2 = [] + [&transform_manager, "Transform"] + [&render_data_manager_1, "Fog"]
P3 = [] + [&transform_manager, "Transform"] + [&render_data_manager_1, "Fog"] + [&render_data_manager_1, "Vignette"]
P4 = [] + [&transform_manager, "Transform"] + [&render_data_manager_2, "Fog"]
P5 = [] + [&transform_manager, "Transform"] + [&render_data_manager_2, "Fog"] + [&render_data_manager_2, "Vignette"]

Storage of the prototype

One option is to store, for each prototype, all the component lookup entries - this makes it easy to get all the component instance look-ups in one go, at the expense of memory due to data duplication. Each entity stores which prototype it uses.
  • entity_1 -> P3
  • entity_2 -> P3
  • entity_3 -> P5
The prototype registry now contains:

P0 = []
P1 = [] + [&transform_manager, "Transform"]
P2 = [] + [&transform_manager, "Transform"] + [&render_data_manager_1, "Fog"]
P3 = [] + [&transform_manager, "Transform"] + [&render_data_manager_1, "Fog"] + [&render_data_manager_1, "Vignette"]
P4 = [] + [&transform_manager, "Transform"] + [&render_data_manager_2, "Fog"]
P5 = [] + [&transform_manager, "Transform"] + [&render_data_manager_2, "Fog"] + [&render_data_manager_2, "Vignette"]

Some of the entries (P2 and P4) could technically be removed since no entity actively uses them - we would just need to re-create them when new entities with the same structure are added.
A different option is to actually use the intermediate entries by referencing them, like so:

P0 = []
P1 = P0 + [&transform_manager, "Transform"]
P2 = P1 + [&render_data_manager_1, "Fog"]
P3 = P2 + [&render_data_manager_1, "Vignette"]
P4 = P1 + [&render_data_manager_2, "Fog"]
P5 = P4 + [&render_data_manager_2, "Vignette"]

Less wasteful, but it requires walking up the chain to find all the components for an entity. On the other hand we can make this very efficient storage-wise by having a lookup table like this:
a Map from Prototype to {base_prototype, component_manager, component_name}. Each prototype entry is small and has no dynamic size, so it can be stored very efficiently.

All prototypes end up in the same prototype map, and since the HashMap implementation gives us O(1) lookup cost, traversing the chain will only cost us the potential cache misses of each lookup. Since the hashmap is likely to be pretty compact (via prototype reuse) this hopefully should not be a huge issue. If it turns out to be, a different storage approach might be needed, trading memory use for lookup speed.

Since the amount of data we store for each Prototype would be very small - roughly 16 bytes - we can be a bit more relaxed with unused prototypes - we do not need to remove them as aggressively as we would if each prototype contained a complete lookup table for all components.

Building the Prototype index

So how do we “name” the prototypes effectively for fast lookup? Well, the first lookup would be Entity -> Prototype and then from Prototype -> Prototype definition.
A simple approach would be hashing - use the content of the Prototype as the hash data to get a unique identifier.

The first base prototype has an empty definition, so we let that be zero.
To calculate a prototype, mix the prototype you are basing it off of with the hash of the prototype data - in our case we hash the Component Manager pointer and the Component Name, and mix that with the base prototype.

Prototype prototype = mix(base_prototype, mix(hash(&component_manager), hash(component_name)))

The entry is stored with the prototype as key and the value as [base_prototype, &component_manager, component_name].
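
The post does not spell out what mix looks like; any reasonable hash combiner will do. Here is a minimal sketch - purely illustrative, not the engine's implementation - assuming a 32-bit Prototype (which also matches the size estimate given above):

typedef uint32_t Prototype;

// Combine a base prototype with another 32-bit hash, hash_combine style.
// The golden-ratio constant just spreads the bits around; 0 stays reserved
// for the empty base prototype P0.
inline Prototype mix(Prototype base, Prototype value)
{
    return base ^ (value + 0x9e3779b9u + (base << 6) + (base >> 2));
}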

When a new Component is added to an entity we find or add the new prototype and update the Entity -> Prototype map to point to it.

So, we end up with a structure like this:
struct PrototypeDescription {
    Prototype base_prototype;
    ComponentManager *component_manager;
    IdString32 component_name;
};

Map<Entity, Prototype> entity_prototype_lookup;
Map<Prototype, PrototypeDescription> prototypes;

void register_component(Entity entity, ComponentManager &component_manager, IdString32 component_name)
{
    // Find or create the prototype that extends the entity's current prototype
    // with this (component manager, name) pair.
    Prototype p = entity_prototype_lookup[entity];
    Prototype new_p = mix(p, mix(hash(&component_manager), hash(component_name)));
    if (!prototypes.has(new_p))
        prototypes.insert(new_p, {p, &component_manager, component_name});
    entity_prototype_lookup[entity] = new_p;
}

ComponentManager *find_component_manager(Entity entity, IdString32 component_name)
{
    // Walk the prototype chain until we find the named component or hit the empty base prototype.
    Prototype p = entity_prototype_lookup[entity];
    while (p != 0)
    {
        PrototypeDescription description = prototypes[p];
        if (description.component_name == component_name)
            return description.component_manager;
        p = description.base_prototype;
    }
    return nullptr;
}

This could lead to a lot of hashing and look-ups, but we can change the API to register new components for multiple Entities in one go, which dramatically reduces the number of hash calculations and look-ups. We already do that kind of optimization when creating entities from resources, so it would be a natural fit. We can also cache the base prototype to avoid more of the hash look-ups in find_component_manager.
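
A sketch of what such a batched registration could look like, in the same pseudocode style as above, assuming all entities in the batch currently share the same prototype (which is the case when they are spawned from the same resource):

void register_component_batch(Entity *entities, unsigned n,
                              ComponentManager &component_manager, IdString32 component_name)
{
    if (n == 0)
        return;
    // Hash and insert the new prototype once for the whole batch...
    Prototype p = entity_prototype_lookup[entities[0]];
    Prototype new_p = mix(p, mix(hash(&component_manager), hash(component_name)));
    if (!prototypes.has(new_p))
        prototypes.insert(new_p, {p, &component_manager, component_name});
    // ...then just point every entity in the batch at it.
    for (unsigned i = 0; i != n; ++i)
        entity_prototype_lookup[entities[i]] = new_p;
}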

Measuring the results

Let's run the synthetic test again and see how our new entity index matches up to the old one.

Entity Count   Test run time (s)   Memory used (MB)   Time/Entity (us)
10k            0.01                0.26               0.977
20k            0.01                0.51               0.488
40k            0.03                0.99               0.832
80k            0.06                0.99               0.610
160k           0.11                0.99               0.671
320k           0.23                0.99               0.702
640k           0.46                0.99               0.702
1.28m          0.94                0.99               0.700
2.56m          1.88                0.99               0.700
5.12m          3.78                0.99               0.704
10.24m         7.57                0.99               0.705

The run time now scales very close to linearly and is overall faster than the old implementation. Most notable is the win when using a lot of entities. Memory usage has gone down as well and the time/entity is also scaling more gracefully.

Memory usage looks a little strange but there is an easy explanation - the mapping from entity to prototype uses almost all of that memory (via a hashmap), while the actual prototypes take less than 30 KB. Note that the old index uses the same amount of memory for its per-entity mapping.

Let's compare the old and new implementations side by side:


Entity Count   Time New (s)   Time Legacy (s)   Memory New (MB)   Memory Legacy (MB)   Time/Entity New (us)   Time/Entity Legacy (us)
10k            0.01           0.01              0.26              5.79                 0.977                  0.977
20k            0.01           0.01              0.51              5.79                 0.488                  0.488
40k            0.03           0.03              0.99              11.88                0.832                  0.732
80k            0.05           0.06              0.99              11.88                0.610                  0.732
160k           0.11           0.13              0.99              25.69                0.671                  0.793
320k           0.23           0.32              0.99              31.04                0.702                  0.977
640k           0.46           1.08              0.99              55.90                0.702                  1.648
1.28m          0.94           2.58              0.99              65.82                0.700                  1.922
2.56m          1.88           6.53              0.99              65.55                0.700                  2.366
5.12m          3.78           13.42             0.99              120.55               0.704                  2.500
10.24m         7.57           25.69             0.99              130.55               0.705                  2.393

Looks like a pretty good win.

Final words

By taking the new requirements into account as the Entity system evolved, we were able to create a much more space-efficient and more performant Entity Index.

The implementation chosen here focuses on reducing the amount of data we use in the Entity Index at the cost of lookup complexity. I think this is the right trade-off, especially since it performs better as well. Since the interface for the Entity Index is fairly simple and does not dictate how we store the data, we could change the implementation to optimize for lookup speed if need be.

Tuesday, March 14, 2017

Stingray Renderer Walkthrough #8: stingray-renderer & mini-renderer

Introduction

In the last post we looked at our systems for doing data-driven rendering in Stingray. Today I will go through the two default rendering pipes we ship as templates with Stingray. Both are entirely described in data using two render_config files and a bunch of shader_source files.

We call them the “stingray renderer” and the “mini renderer”.

Stingray Renderer

The “stingray renderer” is the default rendering pipe and is used in almost all template and sample projects. It’s a fairly standard “high-end” real-time rendering pipe and supports the regular buzzword features.

The render_config file is approx 1500 lines of sjson. While 1500 lines might sound a bit massive, it's important to remember that this pipe is highly configurable - pretty much all features can be dynamically switched on/off. It also runs on a broad variety of platforms (mobile -> consoles -> high-end PC), supports a bunch of different debug visualization modes, and features four different stereo rendering paths in addition to the default mono path.

If you are interested in taking a closer look at the actual implementation you can download stingray and you’ll find it under core/stingray_renderer/renderer.render_config.

Going through the entire file and all the implementation details would require multiple blog posts; instead I will try to do a high-level breakdown of the default layer_configuration and talk a bit about the feature set. Before we begin, please keep in mind that this rendering pipe is designed to handle lots of different content and run on lots of different platforms. A game project would typically use it as a base and then extend, optimize and simplify it based on project-specific knowledge of the content and target platforms.

Here’s a somewhat simplified dump of the contents of the layer_configs/default array found in core/stingray_renderer/renderer.render_config in Stingray v1.8:

// run any render_config_extensions that have requested to insert work at the insertion point named "first"
{ extension_insertion_point = "first" }

// kick resource generator for rendering all shadow maps
{ resource_generator="shadow_mapping" profiling_scope="shadow mapping" }

// kick resource generator for assigning light sources to clustered shading structure
{ resource_generator="clustered_shading" profiling_scope="clustered shading" }

// special layer, only responsible for clearing hdr0, gbuffer2 and the depth_stencil_buffer
{ render_targets=["hdr0", "gbuffer2"] depth_stencil_target="depth_stencil_buffer" 
    clear_flags=["SURFACE", "DEPTH", "STENCIL"] profiling_scope="clears" }      

// if vr is supported kick a resource generator laying down a stencil mask to reject pixels outside of the lens shape
{ type="static_branch" platforms=["win"] render_settings={ vr_supported=true }
    pass = [
        { resource_generator="vr_mask" profiling_scope="vr_mask" }
    ]
}

// g-buffer layer, bulk of all materials renders into this
{ name="gbuffer" render_targets=["gbuffer0", "gbuffer1", "gbuffer2", "gbuffer3"] 
    depth_stencil_target="depth_stencil_buffer" sort="FRONT_BACK" profiling_scope="gbuffer" }

{ extension_insertion_point = "gbuffer" }

// linearize depth into a R32F surface
{ resource_generator="stabilize_and_linearize_depth" profiling_scope="linearize_depth" }

// layer for blending decals into the gbuffer0 and gbuffer1
{ name="decals" render_targets=["gbuffer0" "gbuffer1"] depth_stencil_target="depth_stencil_buffer" 
    profiling_scope="decal" sort="EXPLICIT" }

{ extension_insertion_point = "decals" }

// generate and merge motion vectors for non written pixels with motion vectors in gbuffer
{ type="static_branch" platforms=["win", "xb1", "ps4", "web", "linux"]
    pass = [
        { resource_generator="generate_motion_vectors" profiling_scope="motion vectors" }
    ]
}

// render localized reflection probes into hdr1
{ name="reflections" render_targets=["hdr1"] depth_stencil_target="depth_stencil_buffer" 
    sort="FRONT_BACK" profiling_scope="reflections probes" }

{ extension_insertion_point = "reflections" }

// kick resource generator for screen space reflections
{ type="static_branch" platforms=["win", "xb1", "ps4"]
    pass = [
        { resource_generator="ssr_reflections" profiling_scope="ssr" }
    ]
}

// kick resource generator for main scene lighting
{ resource_generator="lighting" profiling_scope="lighting" }
{ extension_insertion_point = "lighting" }

// layer for emissive materials
{ name="emissive" render_targets=["hdr0"] depth_stencil_target="depth_stencil_buffer" 
    sort="FRONT_BACK" profiling_scope="emissive" }

// kick debug visualization
{ type="static_branch" render_caps={ development=true }
    pass=[
        { resource_generator="debug_visualization" profiling_scope="debug_visualization" }
    ]
}

// kick resource generator for laying down fog 
{ resource_generator="fog" profiling_scope="fog" }

// layer for skydome rendering
{ name="skydome" render_targets=["hdr0"] depth_stencil_target="depth_stencil_buffer" 
    sort="BACK_FRONT" profiling_scope="skydome" }
{ extension_insertion_point = "skydome" }

// layer for transparent materials 
{ name="hdr_transparent" render_targets=["hdr0"] depth_stencil_target="depth_stencil_buffer" 
    sort="BACK_FRONT" profiling_scope="hdr_transparent" }
{ extension_insertion_point = "hdr_transparent" }

// kick resource generator for reading back any requested render targets / buffers to the CPU
{ resource_generator="stream_capture_buffers" profiling_scope="stream_capture" }

// kick resource generator for capturing reflection probes
{ type="static_branch" platform=["win"] render_caps={ development=true }
    pass = [
        { resource_generator="cubemap_capture" }
    ]
}

// layer for rendering object selections from the editor
{ type="static_branch" platforms=["win", "ps4", "xb1"]
    pass = [
        { type = "static_branch" render_settings={ selection_enabled=true }
            pass = [
                { name="selection" render_targets=["gbuffer0" "ldr1_dev_r"] 
                    depth_stencil_target="depth_stencil_buffer_selection" sort="BACK_FRONT" 
                    clear_flags=["SURFACE" "DEPTH"] profiling_scope="selection"}
            ]
        }
    ]
}

// kick resource generators for AA resolve and post processing
{ resource_generator="post_processing" profiling_scope="post_processing" }
{ extension_insertion_point = "post_processing" }

// layer for rendering LDR materials, primarily used for rendering HUD and debug rendering
{ name="transparent" render_targets=["output_target"] depth_stencil_target="stable_depth_stencil_buffer_alias" 
    sort="BACK_FRONT" profiling_scope="transparent" }

// kick resource generator for rendering shadow map debug overlay
{ type="static_branch" render_caps={ development=true }
    pass = [
        { resource_generator="debug_shadows" profiling_scope="debug_shadows" }
    ]
}

// kick resource generator for compositing left/right eye
{ type="static_branch" platforms=["win"] render_settings={ vr_supported=true }
    pass = [
        { resource_generator="vr_present" profiling_scope="present" }
    ]
}

{ extension_insertion_point = "last" }

So what we have above is a fairly standard breakdown of a rendered frame; if you have worked with real-time rendering before there shouldn't be many surprises in there. Something that is kind of cool with having the frame flow in this representation, paired with the hot-reloading functionality of render_configs, is that it really encourages experimentation: move things around, comment stuff out, inject new resource generators, etc.

Let’s go through the frame in a bit more detail:

Extension insertion points

First of all there are a bunch of extension_insertion_points at various locations during the frame; these are used by render_config_extensions to schedule work into an existing render_config. You could argue that an extension system for the render_configs is a bit superfluous, and for an in-house game engine targeting a specific industry that might very well be the case. But for us the extension system allows building features in a more modular way, and it also encourages sharing of various rendering features across teams.

Shadows

// kick resource generator for rendering all shadow maps
{ resource_generator="shadow_mapping" profiling_scope="shadow mapping" }

We start off by rendering shadow maps. As we want to handle shadow receiving on alpha blended geometry there’s no simple way to reuse our shadow maps by interleaving the rendering of them into the lighting code. Instead we simply gather all shadow casting lights, try to prioritize them based on screen coverage, intensity, etc. and then render all shadows into two shadow maps.

One shadow map is dedicated to a single directional light which uses a cascaded shadow map approach, rendering each cascade into a region of a larger shadow map atlas. The other shadow map is an atlas for all local light sources, such as spot and point lights (a point light is interpreted as 6 spot lights).

Clustered shading

// kick resource generator for assigning light sources to clustered shading structure
{ resource_generator="clustered_shading" profiling_scope="clustered shading" }

We separate local light sources into two kinds: “simple” and “custom”. Simple lights are either spot lights or point lights that don’t have a custom material graph assigned. Simple light sources, which tend to be the bulk of all visible light sources in a frame, get inserted into a clustered shading acceleration structure.

While simple lights will affect both opaque and transparent materials, custom lights will only affect opaque geometry as they run a more traditional deferred shading path. We will touch on the lighting a bit more soon.

Clearing & VR mask

// special layer, only responsible for clearing hdr0, gbuffer2 and the depth_stencil_buffer
{ render_targets=["hdr0", "gbuffer2"] depth_stencil_target="depth_stencil_buffer" 
    clear_flags=["SURFACE", "DEPTH", "STENCIL"] profiling_scope="clears" }      

// if vr is supported kick a resource generator laying down a stencil mask to reject pixels outside of the lens shape
{ type="static_branch" platforms=["win"] render_settings={ vr_supported=true }
    pass = [
        { resource_generator="vr_mask" profiling_scope="vr_mask" }
    ]
}

Here we use the layer system to record a bind and a clear for a few render targets into a RenderContext generated by the LayerManager.

Then, depending on whether the vr_supported render setting is true or not, we kick a resource generator that marks in the stencil buffer any pixels falling outside of the lens region. This resource generator only does something if the renderer is running in stereo mode. Also note that the branch above is a static_branch, so if vr_supported is set to false the execution of the vr_mask resource generator will be eliminated completely during boot-up of the renderer.

G-buffer

// g-buffer layer, bulk of all materials renders into this
{ name="gbuffer" render_targets=["gbuffer0", "gbuffer1", "gbuffer2", "gbuffer3"] 
    depth_stencil_target="depth_stencil_buffer" sort="FRONT_BACK" profiling_scope="gbuffer" }

{ extension_insertion_point = "gbuffer" }

// linearize depth into a R32F surface
{ resource_generator="stabilize_and_linearize_depth" profiling_scope="linearize_depth" }

// layer for blending decals into the gbuffer0 and gbuffer1
{ name="decals" render_targets=["gbuffer0" "gbuffer1"] depth_stencil_target="depth_stencil_buffer" 
    profiling_scope="decal" sort="EXPLICIT" }

{ extension_insertion_point = "decals" }

// generate and merge motion vectors for non written pixels with motion vectors in gbuffer
{ type="static_branch" platforms=["win", "xb1", "ps4", "web", "linux"]
    pass = [
        { resource_generator="generate_motion_vectors" profiling_scope="motion vectors" }
    ]
}

Next we lay down the gbuffer. We are using a fairly fat “floating” gbuffer representation. By floating I mean that we interpret the gbuffer channels differently depending on material. I won't go into details of the gbuffer layout in this post, but everything builds upon a standard metallic PBR material model, same as most modern engines run today. We also stash high-precision motion vectors to be able to do accurate reprojection for TAA, RGBM-encoded irradiance from light maps (if present, else irradiance is looked up from an IBL probe), high-precision normals, AO, etc. Things quickly add up; in the default configuration on PC we are looking at 192 bpp for the color targets (i.e. not counting depth/stencil). The gbuffer layout could use some love, I think we should be able to shrink it somewhat without losing any features.

We then kick a resource generator called stabilize_and_linearize_depth, which does two things:

  1. It linearizes the depth buffer and stores the result in an R32F target using a fullscreen_pass (see the sketch after this list).
  2. It does a hacky TAA resolve pass for depth in an attempt to remove some intersection flickering for materials rendered after TAA resolve. We call the output of this pass stable_depth and use it when rendering editor selections, gizmos, debug lines, etc. We also use this buffer during post processing for any effects that depend on depth (e.g. depth of field), as those run after AA resolve.
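
As a side note, linearizing a [0,1] hardware depth value for a standard (non-reversed) projection boils down to something like the snippet below. This is purely illustrative - the actual shader is not part of this post:

// near_z and far_z are the near and far plane distances of the projection.
float linearize_depth(float device_depth, float near_z, float far_z)
{
    return (near_z * far_z) / (far_z - device_depth * (far_z - near_z));
}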

After that we have another more minimalistic gbuffer layer for splatting deferred decals.

Last but not least we kick another resource generator that calculates per-pixel velocity for any pixels that haven't been rendered to during the gbuffer pass (i.e. the skydome).

Reflections & Lighting

// render localized reflection probes into hdr1
{ name="reflections" render_targets=["hdr1"] depth_stencil_target="depth_stencil_buffer" 
    sort="FRONT_BACK" profiling_scope="reflections probes" }

{ extension_insertion_point = "reflections" }

// kick resource generator for screen space reflections
{ type="static_branch" platforms=["win", "xb1", "ps4"]
    pass = [
        { resource_generator="ssr_reflections" profiling_scope="ssr" }
    ]
}

// kick resource generator for main scene lighting
{ resource_generator="lighting" profiling_scope="lighting" }
{ extension_insertion_point = "lighting" }

At this point we are fully done with the gbuffer population and are ready to do some lighting. We start by laying down the indirect specular / reflections into a separate buffer. We use a rather standard three-step fallback scheme for our reflections: screen-space reflections, falling back to localized parallax corrected pre-convoluted radiance cubemaps, falling back to a global pre-convoluted radiance cubemap.

The reflections layer is the target layer for all cubemap-based reflections. We naively render the cubemap reflections by treating each reflection probe as a light source with a custom material. These lights get picked up by a resource generator performing traditional deferred shading - i.e. it renders proxy volumes for each light. One thing that some people struggle to wrap their heads around is that the resource generator responsible for running the deferred shading modifier isn't kicked until a few lines down (in the lighting resource generator). If you've paid attention in my previous posts this shouldn't come as a surprise to you, as what we describe here is the GPU scheduling of a frame, nothing else.

When the reflection probes are laid down we move on and run a resource generator for doing Screen-Space Reflections. As SSR typically runs in half-res we store the result in a separate render target.

We then finally kick the lighting resource generator, which is responsible for the following:

  1. Build a screen-space mask for sun shadows; this is done by running multiple fullscreen_passes. The fullscreen_passes transform the pixels into cascaded shadow map space and perform PCF. Stencil culling makes sure the shader only runs for pixels within a certain cascade.
  2. SSAO with a bunch of different quality settings.
  3. A fullscreen pass we refer to as the “global lighting” pass. This is the pass that does most of the heavy lifting when it comes to the lighting. It handles mixing SSR with probe reflections, mixing of SSAO with material AO, lighting from all simple lights looked up from the clustered shading structure, and calculates sun lighting masked with the result of the sun shadow mask (step 1).
  4. Run a traditional deferred shading modifier for all light sources that have a material graph assigned. If the shader doesn't target a specific layer, the light's proxy volume will be rendered at this point; otherwise it will be scheduled to render into whatever layer the shader has specified.

At this point we have a fully lit HDR output for all of our opaque materials.

Various stuff

// layer for emissive materials
{ name="emissive" render_targets=["hdr0"] depth_stencil_target="depth_stencil_buffer" 
    sort="FRONT_BACK" profiling_scope="emissive" }

// kick debug visualization
{ type="static_branch" render_caps={ development=true }
    pass=[
        { resource_generator="debug_visualization" profiling_scope="debug_visualization" }
    ]
}

// kick resource generator for laying down fog 
{ resource_generator="fog" profiling_scope="fog" }

// layer for skydome rendering
{ name="skydome" render_targets=["hdr0"] depth_stencil_target="depth_stencil_buffer" 
    sort="BACK_FRONT" profiling_scope="skydome" }
{ extension_insertion_point = "skydome" }

// layer for transparent materials 
{ name="hdr_transparent" render_targets=["hdr0"] depth_stencil_target="depth_stencil_buffer" 
    sort="BACK_FRONT" profiling_scope="hdr_transparent" }
{ extension_insertion_point = "hdr_transparent" }

// kick resource generator for reading back any requested render targets / buffers to the CPU
{ resource_generator="stream_capture_buffers" profiling_scope="stream_capture" }

// kick resource generator for capturing reflection probes
{ type="static_branch" platform=["win"] render_caps={ development=true }
    pass = [
        { resource_generator="cubemap_capture" }
    ]
}

// layer for rendering object selections from the editor
{ type="static_branch" platforms=["win", "ps4", "xb1"]
    pass = [
        { type = "static_branch" render_settings={ selection_enabled=true }
            pass = [
                { name="selection" render_targets=["gbuffer0" "ldr1_dev_r"] 
                    depth_stencil_target="depth_stencil_buffer_selection" sort="BACK_FRONT" 
                    clear_flags=["SURFACE" "DEPTH"] profiling_scope="selection"}
            ]
        }
    ]
}

Next follows a bunch of layers for doing various stuff, most of this is straightforward:

  • emissive - Layer for adding any emissive material influences to the light accumulation target (hdr0)
  • debug_visualization - Kick off a resource generator for doing debug rendering. When debug rendering is enabled, the post processing pipe is disabled so we can render straight to the output target / back buffer here. Note: This doesn't need to be scheduled exactly here; it could be moved further down the pipe.
  • fog - Kick off a resource generator for blending fog into the accumulation target.
  • skydome - Layer for rendering anything skydome related.
  • hdr_transparent - Layer for rendering transparent materials, traditional forward shading using the clustered shading acceleration structure for lighting. VFX with blending usually also goes into this layer.
  • stream_capture_buffers - Arbitrary location for capturing various render targets and dumping them into system memory.
  • cubemap_capture - Capturing point for reflection cubemap probes.
  • selection - Layer for rendering selection outlines.

So basically a bunch of miscellaneous stuff that needs to happen before we enter post processing…

Post Processing

// kick resource generators for AA resolve and post processing
{ resource_generator="post_processing" profiling_scope="post_processing" }
{ extension_insertion_point = "post_processing" }

Up until this point we've been in linear color space accumulating lighting into a 4xf16 render target (hdr0). Now it's time to take that buffer and push it through the post processing resource generator.

The post processing pipe in the Stingray Renderer does:

  1. Temporal AA resolve
  2. Depth of Field
  3. Motion Blur
  4. Lens Effects (chromatic aberration, distortion)
  5. Bloom
  6. Auto exposure
  7. Scene Combine (exposure, tone map, sRGB, LUT color grading)
  8. Debug rendering

All steps of the post processing pipe can dynamically be enabled/disabled (not entirely true, we will always have to run some variation of step 7 as we need to output our result to the back buffer).

Final touches

// layer for rendering LDR materials, primarily used for rendering HUD and debug rendering
{ name="transparent" render_targets=["output_target"] depth_stencil_target="stable_depth_stencil_buffer_alias" 
    sort="BACK_FRONT" profiling_scope="transparent" }

// kick resource generator for rendering shadow map debug overlay
{ type="static_branch" render_caps={ development=true }
    pass = [
        { resource_generator="debug_shadows" profiling_scope="debug_shadows" }
    ]
}

// kick resource generator for compositing left/right eye
{ type="static_branch" platforms=["win"] render_settings={ vr_supported=true }
    pass = [
        { resource_generator="vr_present" profiling_scope="present" }
    ]
}

Before we present we allow rendering of unlit geometry in LDR (mainly used for HUDs and debug rendering), potentially do some more debug rendering and if we’re in VR mode we kick a resource generator that handles left/right eye combining (if needed).

That’s it - a very high-level breakdown of a rendered frame when running Stingray with the default “Stingray Renderer” render_config file.

Mini Renderer

We also have a second rendering pipe that we ship with Stingray called the “Mini Renderer” - mini as in minimalistic. It is not as broadly used as the Stingray Renderer so I won't walk you through it; I just wanted to mention that it's there and say a few words about it.

The main design goal behind the mini renderer was to build a rendering pipe with as little overhead from advanced lighting effects and post processing as possible. It’s primarily used for doing mobile VR rendering. High-resolution, high-performance rendering on mobile devices is hard! You pretty much need to avoid all kinds of fullscreen effects to hit target frame rate. Therefore the mini renderer has a very limited feature set:

  • It's a forward renderer. While it's capable of doing per-pixel lighting through clustered shading, that rarely gets used; instead most applications tend to bake their lighting completely or run with only a single directional light source.
  • No post processing.
  • While all lighting is done in linear color space we don’t store anything in HDR, instead we expose, tonemap and output sRGB directly into an LDR target (usually directly to the back buffer).

The mini_renderer.render_config file is ~400 lines, i.e. less than 1/3 of the stingray renderer. It is still in a somewhat experimental state but is the fastest way to get up and running doing mobile VR. I also feel that it makes sense for us to ship an example of a more lightweight rendering pipe; it is simpler to follow than the render_config for the full stingray renderer, and it makes it easy to grasp the benefits of data-driven rendering compared to a more static hard-coded rendering pipe (especially if you don’t have source access to the full engine as then the hard-coded rendering pipe would likely be a complete black box for the user).

Wrap up

I realize that some of you might have hoped for a more complete walkthrough of the various lighting and post processing techniques we use in the Stingray renderer. Unfortunately that would have become a very long post, and it also feels a bit out of context as my goal with this blog series has been to focus on the architecture of the stingray rendering pipe rather than specific rendering techniques. Most of the techniques we use can probably be considered “industry standard” within real-time rendering nowadays. If you are interested in learning more there is lots of excellent information available, to name a few:

In the next and final post of this series we will take a look at the shader and material system we have in Stingray.

Thursday, March 9, 2017

Stingray Renderer Walkthrough #7: Data-driven rendering

Introduction

With all the low-level stuff in place it's time to take a look at how we drive rendering in Stingray, i.e. how a final frame comes together. I've covered this in various presentations over the years but will try to go through everything again to give a more complete picture of how things fit together.

Stingray features what we call a data-driven rendering pipe; basically what we mean by that is that all shaders, GPU resource creation and manipulation, as well as the entire flow of a rendered frame, are defined in data. In our case the data is a set of different json files.

These json-files are hot-reloadable on all platforms, providing a nice workflow with fast iteration times when experimenting with various rendering techniques. It also makes it easy for a project to optimize the renderer for its specific needs (in terms of platforms, features, etc.) and/or to push it in other directions to better suit the art direction of the project.

There are four different types of json-files driving the Stingray renderer:

  • .render_config - the heart of a rendering pipe.
  • .render_config_extension - extensions to an existing .render_config file.
  • .shader_source - shader source and meta data for compiling statically declared shaders.
  • .shader_node - shader source and meta data used by the graph based shader system.

Today we will be looking at the render_config, both from a user’s perspective as well as how it works on the engine side.

Meet the render_config

The render_config is an sjson file describing everything from which render settings to expose to the user to the flow of an entire rendered frame. It can be broken down into four parts: render settings, resource sets, layer configurations and resource generators, all of which are fairly simple and minimalistic systems on the engine side.

Render Settings & Misc

Render settings is a simple key:value map exposed globally to the entire rendering pipe as well as an interface for the end user to peek and poke at. Here’s an example of how it might look in the render_config file:

render_settings = {
    sun_shadows = true
    sun_shadow_map_size = [ 2048, 2048 ]
    sun_shadow_map_filter_quality = "high"  
    local_lights_shadow_atlas_size = [ 2048, 2048 ]
    local_lights_shadow_map_filter_quality = "high"

    particles_local_lighting = true
    particles_receive_shadows = true

    debug_rendering = false
    gbuffer_albedo_visualization = false
    gbuffer_normal_visualization = false
    gbuffer_roughness_visualization = false
    gbuffer_specular_visualization = false
    gbuffer_metallic_visualization = false
    bloom_visualization = false
    ssr_visualization = false
}

As you will see we have branching logic for most systems in the render_config, which allows the renderer to take different paths depending on the state of properties in the render_settings. There is also a block called render_caps which is very similar to the render_settings block, except that it is read-only and contains knowledge of the capabilities of the hardware (GPU) running the engine.

On the engine side there's not that much to cover about the render_settings and render_caps: keys are always strings that get murmur-hashed to 32 bits, and the value can be a bool, a float, an array of floats or another hashed string.
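
As a sketch of what that amounts to on the engine side - hypothetical STL-based types, not the actual Stingray code:

#include <cstdint>
#include <unordered_map>
#include <variant>
#include <vector>

typedef uint32_t NameHash32;   // e.g. murmur_hash_32("sun_shadows")

// A setting value is a bool, a float, an array of floats or another hashed string.
typedef std::variant<bool, float, std::vector<float>, NameHash32> SettingValue;
typedef std::unordered_map<NameHash32, SettingValue> RenderSettings;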

When booting the renderer we populate the render_settings by first reading them from the render_config file, then looking in the project-specific settings.ini file for potential overrides or additions, and last allowing certain properties to be overridden again from the user's configuration file (if loaded).

The render_caps block usually gets populated when the RenderDevice is booted and we're in a state where we can enumerate all device capabilities. This makes the keys and values of the render_caps block somewhat of a black box with different contents depending on platform; typically there aren't that many of them though.

So that covers the render_settings and render_caps blocks, we will look at how they are actually used for branching in later sections of this post.

There are also a few other miscellaneous blocks in the render_config, most important being:

  • shader_pass_flags - Array of strings building up a bit flag that can be used to dynamically turn on/off various shader passes.
  • shader_libraries - Array of which shader_source files to load when booting the renderer. The shader_source files are libraries of pre-compiled shaders mainly used by the resource generators.

Resource Sets

We have the concept of a RenderResourceSet on the engine side; it simply maps a hashed string to a GPU resource. RenderResourceSets can be locally allocated during rendering, creating a form of scoping mechanism. The resources are either allocated by the engine and inserted into a RenderResourceSet, or allocated through the global_resources block in a render_config file.
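
Conceptually a RenderResourceSet is just a map from the hashed name to a handle for the GPU resource. One possible way to get the scoping behaviour described here is a parent pointer so that lookups in a local set fall back to the global one - a hypothetical sketch, not the engine code:

#include <cstdint>
#include <unordered_map>

typedef uint32_t NameHash32;
typedef uint32_t RenderResourceHandle;   // stand-in for whatever the engine actually uses

struct ResourceSetSketch {
    std::unordered_map<NameHash32, RenderResourceHandle> resources;
    const ResourceSetSketch *parent = nullptr;   // e.g. the global set owned by the RenderInterface

    const RenderResourceHandle *lookup(NameHash32 name) const
    {
        auto it = resources.find(name);
        if (it != resources.end())
            return &it->second;
        return parent ? parent->lookup(name) : nullptr;
    }
};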

The RenderInterface owns a global RenderResourceSet populated by the global_resources array from the render_config used to boot the renderer.

Here’s an example of a global_resources array:

global_resources = [
    { type="static_branch" platforms=["ios", "android", "web", "linux"]
        pass = [
            { name="output_target" type="render_target" depends_on="back_buffer" 
                    format="R8G8B8A8" }
        ]
        fail = [
            { name="output_target" type="alias" aliased_resource="back_buffer" }
        ]
    }

    { name="depth_stencil_buffer" type="render_target" depends_on="output_target" 
            w_scale=1 h_scale=1 format="DEPTH_STENCIL" }
    { name="gbuffer0" type="render_target" depends_on="output_target" 
            w_scale=1 h_scale=1 format="R8G8B8A8" }
    { name="gbuffer1" type="render_target" depends_on="output_target" 
            w_scale=1 h_scale=1 format="R8G8B8A8" } 
    { name="gbuffer2" type="render_target" depends_on="output_target" 
            w_scale=1 h_scale=1 format="R16G16B16A16F" }

    { type="static_branch" render_settings={ sun_shadows = true }
        pass = [
            { name="sun_shadow_map" type="render_target" size_from_render_setting="sun_shadow_map_size" 
                format="DEPTH_STENCIL" }
        ]
    }
    
    { name="hdr0" type="render_target" depends_on="output_target" w_scale=1 h_scale=1 
        format="R16G16B16A16F" }
]

So while the above example mainly shows how to create what we call DependentRenderTargets (i.e. render targets that inherit their properties from another render target and then allow overriding those properties locally), it can also create other buffers of various kinds.

We've also introduced the concept of a static_branch. There are two types of branching in the render_config file: static_branch and dynamic_branch. In the global_resources block only static branching is allowed, as it only runs once, during set-up of the renderer. (Note: The branch syntax is far from nice and we nowadays have come up with a much cleaner syntax that we use in the shader system; unfortunately it hasn't made its way back to the render_config yet.)

So basically what this example boils down to is the creation of a set of render targets. The output_target is a bit special though: on PC and consoles we simply set up an alias for an already created render target - the back buffer - while on GL-based platforms we create a new separate render target. (This is because we render the scene upside-down on GL platforms to get consistent UV coordinate systems between all platforms.)

The other special case from the example above is the sun_shadow_map which grabs the resolution from a render_setting called sun_shadow_map_size. This is done because we want to expose the ability to tweak the shadow map resolution to the user.

When rendering a frame we typically pipe the global RenderResourceSet owned by the RenderInterface down to the various rendering systems. Any resource declared in the RenderResourceSet is accessible from the shader system by name. Each rendering system can at any point decide to create its own local version of a RenderResourceSet making it possible to scope shader resource access.

Worth pointing out is that the resources declared in the global_resources block of the render_config used when booting the engine are all allocated in the set-up phase of the renderer and not released until the renderer is closed.

Layer Configurations

A render_config can have multiple layer_configurations. A Layer Configuration is essentially a description of the flow of a rendered frame; it is responsible for triggering rendering sub-systems and scheduling the GPU work for a frame. Here's a simple example of a deferred rendering pipe:


layer_configs = {
    simple_deferred = [
        { name="gbuffer" render_targets=["gbuffer0", "gbuffer1", "gbuffer2"] 
            depth_stencil_target="depth_stencil_buffer" sort="FRONT_BACK" profiling_scope="gbuffer" }

        { resource_generator="lighting" profiling_scope="lighting" }

        { name="emissive" render_targets=["hdr0"] 
            depth_stencil_target="depth_stencil_buffer" sort="FRONT_BACK" profiling_scope="emissive" }

        { name="skydome" render_targets=["hdr0"] 
            depth_stencil_target="depth_stencil_buffer" sort="BACK_FRONT" profiling_scope="skydome" }

        { name="hdr_transparent" render_targets=["hdr0"] 
            depth_stencil_target="depth_stencil_buffer" sort="BACK_FRONT" profiling_scope="hdr_transparent" }

        { resource_generator="post_processing" profiling_scope="post_processing" }

        { name="ldr_transparent" render_targets=["output_target"] 
            depth_stencil_target="depth_stencil_buffer" sort="BACK_FRONT" profiling_scope="transparent" }
    ]
}


Each line in the simple_deferred array either specifies a named layer that the shader system can reference to direct rendering into (i.e. a renderable object, such as a mesh, has shaders assigned, and those shaders know which layer they want to render into - e.g. gbuffer), or triggers a resource_generator.

The order of execution is top to bottom, and the GPU scheduling works by having each line increment a bit in the “Layer System” bit range covered in the post about sorting.
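
As a rough illustration (the actual bit layout is defined by the sorting scheme covered in that post, so the shift below is an assumption), the per-layer portion of the sort key could be built something like this:

#include <cstdint>

// Assumed, illustrative position of the "Layer System" range in the 64-bit key.
static const uint32_t LAYER_SHIFT = 56;

// Each line in the layer_config gets the next value in the layer range, so all
// commands tagged with a layer's sort_key sort after everything in earlier layers.
inline uint64_t layer_sort_key(uint32_t layer_index) {
    return uint64_t(layer_index) << LAYER_SHIFT;
}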

On the engine side the layer configurations are managed by a system called the LayerManager, owned by the RenderInterface. It is a tiny system that basically just maps the named layer_config to an array of “Layers”:

struct Layer {
    uint64_t sort_key;

    IdString32 name;
    render_sorting::DepthSort depth_sort;
    IdString32 render_targets[MAX_RENDER_TARGETS];
    IdString32 depth_stencil_target;
    IdString32 resource_generator;
    uint32_t clear_flags;   

    #if defined(DEVELOPMENT)
        const char *profiling_scope;
    #endif  
};

  • sort_key - As mentioned above and in the post about how we do sorting, each layer gets a sort_key assigned from the “Layer System” bit range. By looking up the layer’s sort_key and using that when recording Commands to RenderContexts we get a simple way to reason about overall ordering of a rendered frame.
  • name - the shader system can use this name to look up the layer’s sort_key to group draw calls into layers.
  • depth_sort - describes how to encode the depth range bits of the sort key when recording a RenderJobPackage to a RenderContext. depth_sort is an enum that indicates if sorting should be done front-to-back or back-to-front.
  • render_targets - array of named render target resources to bind for this layer
  • depth_stencil_target - named render target resource to bind for this layer
  • resource_generator - name of the ResourceGenerator to trigger when this layer is reached (if any)
  • clear_flags - bit flags indicating whether color, depth and/or stencil should be cleared for this layer
  • profiling_scope - used to record markers on the RenderContext that later can be queried for GPU timings and statistics.

When rendering a World (see: RenderInterface) the user passes a viewport to the render_world function, and the viewport knows which layer_config to use. We look up the array of Layers from the LayerManager and record a RenderContext with state commands for binding and clearing render targets, using the sort_keys from the Layers. We do this dynamically each time the user calls render_world, but in theory we could cache the RenderContext between render_world calls.
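
In simplified pseudo-C++ terms (the types below are invented for illustration; the real RenderContext records actual GPU state commands), the recording step boils down to something like:

#include <cstdint>
#include <vector>

struct StateCommand {
    enum Type { BIND_TARGETS, CLEAR };
    uint64_t sort_key;     // taken from the layer so the command sorts into the right spot
    Type type;
    uint32_t clear_flags;  // only used for CLEAR
};

struct LayerEntry {
    uint64_t sort_key;
    uint32_t clear_flags;
};

std::vector<StateCommand> record_layer_state(const std::vector<LayerEntry> &layers)
{
    std::vector<StateCommand> commands;
    for (const LayerEntry &layer : layers) {
        // Bind the layer's render targets / depth-stencil target.
        commands.push_back({layer.sort_key, StateCommand::BIND_TARGETS, 0});
        // Optionally clear them, tagged with the same key so ordering is preserved.
        if (layer.clear_flags)
            commands.push_back({layer.sort_key, StateCommand::CLEAR, layer.clear_flags});
    }
    return commands;
}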

The name Layer is a bit misleading, as a layer can also be responsible for making sure that a ResourceGenerator runs. In practice a Layer is either a target for the shader system to render into or the execution point for a ResourceGenerator; it can in theory be both, but we never use it that way.

Resource Generators

Resource Generators are a minimalistic framework for manipulating GPU resources and triggering various rendering sub-systems. Similar to a layer configuration, a resource generator is described as an array of “modifiers”. Modifiers get executed in the order they are declared. Here’s an example:

auto_exposure = {
    modifiers = [
        { type="dynamic_branch" render_settings={ auto_exposure_enabled=true } profiling_scope="auto_exposure"
            pass = [
                { type="fullscreen_pass" shader="quantize_luma" inputs=["hdr0"] 
                    outputs=["quantized_luma"]  profiling_scope="quantize_luma" }

                { type="compute_kernel" shader="compute_histogram" thread_count=[40 1 1] inputs=["quantized_luma"] 
                    uavs=["histogram"] profiling_scope="compute_histogram" }

                { type="compute_kernel" shader="adapt_exposure" thread_count=[1 1 1] inputs=["quantized_luma"] 
                    uavs=["current_exposure" "current_exposure_pos" "target_exposure_pos"] profiling_scope="adapt_exposure" }
            ]
        }
    ]   
}

The first modifier in the above example is a dynamic_branch. In contrast to a static_branch, which gets evaluated during loading of the render_config, a dynamic_branch is evaluated each time the resource generator runs, making it possible to take different paths through the rendering pipeline based on settings and other game context that might change over time. Dynamic branching is also supported in the layer_config block.

If the branch is taken (i.e. if auto_exposure_enabled is true), the modifiers in the pass array will run.

The first modifier inside the pass array is of the type fullscreen_pass, by far the most commonly used modifier type. It simply renders a single triangle covering the entire viewport using the named shader. Any resource listed in the inputs array is exposed to the shader, and any resources listed in the outputs array are bound as render targets.

The second and third modifiers are of the type compute_kernel and dispatch compute shaders. The inputs array works the same as for the fullscreen_pass, and uavs lists the resources to bind as UAVs.

This is obviously a very basic example, but the idea is the same for more complex resource generators. By chaining a bunch of modifiers together you can create interesting rendering effects entirely in data.

Stingray ships with a toolbox of various modifiers, and the user can also extend it with their own modifiers if needed. Here’s a list of some of the other modifiers we ship with:

  • cascaded_shadow_mapping - Renders a cascaded shadow map from a directional light.
  • atlased_shadow_mapping - Renders a shadow map atlas from a set of spot and omni lights.
  • generate_mips - Renders a mip chain for a resource by interleaving a resource generator that samples from sub-resource n-1 while rendering into sub-resource n.
  • clustered_shading - Assign a set of light sources to a clustered shading structure (on CPU at the moment).
  • deferred_shading - Renders proxy volumes for a set of light sources with specified shaders (i.e. traditional deferred shading).
  • stream_capture - Reads back the specified resource to CPU (usually multi-buffered to avoid stalls).
  • fence - Synchronization of graphics and compute queues.
  • copy_resource - Copies a resource from one GPU to another.

In Stingray we encourage building all lighting and post processing using resource generators. So far this has proved very successful for us, as it gives great per-project flexibility. To make sharing of various rendering effects easier we also have a system called render_config_extension that we rolled out last year, which is essentially a plugin system for the render_config files.

I won’t go into much detail about how the resource generator system works on the engine side; it’s fairly simple. There’s a ResourceGeneratorManager that knows about all the generators, and each time the user calls render_world we ask the manager to execute all generators referenced in the layer_config, using the layer’s sort key. We don’t restrain modifiers in any way; they can be implemented to do whatever and have full access to the engine - e.g. they are free to create their own ResourceContexts, spawn worker threads, etc. When the modifiers for all generators are done executing, we are handed all the RenderContexts they’ve created and can dispatch them together with the contexts from the regular scene rendering. To get the scheduling between modifiers within a resource generator correct we use the 32-bit “user defined” range in the sort key.
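
For illustration, a modifier interface could look roughly like the sketch below. The names and signatures are invented for this post and are not the actual Stingray headers; the important part is that each modifier runs in declaration order, gets the resolved resources plus a slot in the sort key range, and hands back any contexts it recorded.

#include <cstdint>
#include <vector>

struct RenderContext;       // opaque: commands recorded here get dispatched later
struct RenderResourceSet;   // named GPU resources (inputs / outputs / uavs) resolved by name

struct ModifierContext {
    uint64_t layer_sort_key;   // base key from the layer that triggered the generator
    uint32_t modifier_index;   // packed into the 32-bit "user defined" sort key range
    RenderResourceSet *resources;
};

class Modifier {
public:
    virtual ~Modifier() {}
    // Free to create its own contexts, spawn worker threads, etc. Everything it
    // records is handed back and dispatched together with the scene rendering.
    virtual void execute(const ModifierContext &ctx, std::vector<RenderContext*> &out_contexts) = 0;
};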

Future improvements

Before we wrap up I’d like to cover some ideas for future improvements.

The Stingray engine has had a data-driven renderer from day one, so the system has been around for quite some time by now. And while the render_config has served us well so far, there are a few things we’ve discovered that could use some attention moving forward.

Scalability

The complexity of the default rendering pipe continues to increase as the demand for new rendering features targeting different industries (games, design visualization, film, etc.) increases. While the data-driven approach we have addresses the feature-set scalability needs decently well, there is also an increasing demand for feature parity across lots of different hardware. This tends to result in lots of branching in the render_config, making it a bit hard to follow.

In addition, we are also starting to see the need to manage multiple paths through the rendering pipe on the same platform; this is especially true when dealing with stereo rendering. On PC we currently have five different paths through the default rendering pipe:

  • Mono - Traditional mono rendering.
  • Stereo - Old-school stereo rendering, one render_world call per eye. Almost identical to the mono path, but there is still some stereo-specific work for assembling the final image that needs to happen.
  • Instanced Stereo - Using “hardware instancing” to do stereo propagation to the left/right eye. A single scene traversal pass, culling with an uber-frustum, a bunch of shader patch-up work and some branching in the render_config.
  • Nvidia Single Pass Stereo (SPS) - Somewhat similar to instanced stereo but using Nvidia-specific hardware to do multicasting to the left/right eye.
  • Nvidia VRSLI - DX11 path for rendering left/right eye on separate GPUs.

We estimate that the number of paths through the rendering pipe will continue to increase for mono rendering as well; we’ve already seen that when experimenting with explicit multi-GPU work under DX12. Things quickly become hairy when you aren’t running on a known platform. Also, depending on the hardware it’s likely that you want to schedule the rendered frame differently - i.e. it’s not as simple as saying “here are our four different paths, we select one based on whether the user has 1-4 GPUs in their system”, as that breaks down as soon as the GPUs in the system aren’t identical.

In the future I think we might want to move to an even higher level of abstraction of the rendering pipe that makes it easier to reason about different paths through it. Something that decouples the strict flow through the rendering pipe and instead only reasons about various “jobs” that need to be executed by the GPUs and what their dependencies are. The engine could then dynamically re-schedule the frame load depending on hardware, automatically… at least in theory. In practice I think it’s more likely that we would end up with a few different “frame scheduling configurations” and then select one of them based on benchmarking / hardware setup.
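
Purely as a speculative sketch (nothing like this exists in the engine today, and all names here are made up), such a description could be as simple as a list of GPU jobs with explicit dependencies:

#include <cstdint>
#include <vector>

// A frame described as a graph of GPU jobs rather than a fixed top-to-bottom flow.
struct GpuJob {
    uint32_t id;
    std::vector<uint32_t> dependencies;  // jobs that must complete before this one
    uint32_t queue;                      // graphics / compute / copy
    uint32_t gpu;                        // preferred GPU for explicit multi-GPU setups
};

// A scheduler could topologically sort the jobs and distribute them across the
// available queues and GPUs, or simply pick one of a few pre-benchmarked schedules.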

Memory

As mentioned earlier, our system for dealing with GPU resources is very static: resources declared in the global_resource set are allocated as the renderer boots up and are not released until the renderer is closed. On last-gen consoles we had support for aliasing the memory of resources of different types, but we removed that when deprecating those platforms. With the rise of DX12/Vulkan and the move to 4K rendering this static resource system is in need of an overhaul. While we can (and do) try to recycle temporary render targets and buffers throughout a frame, it is easy to break some code path without noticing.

We’ve been toying with ideas similar to the “Transient Resource System” described in Yuriy O’Donnell’s excellent GDC 2017 presentation, FrameGraph: Extensible Rendering Architecture in Frostbite, but have so far not gotten around to testing it out in practice.

DX12 improvements

Today our system implicitly deals with binding of input resources to shader stages. We expose pretty much everything to the shader system by name, and if a shader stage binds a resource for reading we don’t know about it until we create the RenderJobPackage. This puts us in a somewhat bad situation when it comes to dealing with resource transitions, as we end up having to do some rather complicated tracking to inject resource barriers at the right places during the dispatch stage of the RenderContexts (see: RenderDevice).

We could instead enforce declaration of all writable GPU resources when they get bound as input to a layer or resource generator. As we already have explicit knowledge of when a GPU resource gets written to by a layer or resource generator, adding the explicit knowledge of when we read from one would complete the circle, and we would have all the information needed to set up barriers without complicated tracking.
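
A sketch of the idea, with invented types: if every layer and resource generator declares its reads and writes up front, barrier placement becomes a simple linear walk over the declared uses instead of per-dispatch tracking.

#include <cstdint>
#include <unordered_map>
#include <vector>

enum class Access : uint8_t { Read, Write };

struct ResourceUse { uint32_t resource; Access access; };   // declared per layer / generator, in frame order
struct Barrier { uint32_t resource; Access before, after; };

// Walk the declared uses in frame order and emit a transition whenever the
// access type for a resource changes - no tracking needed at dispatch time.
std::vector<Barrier> build_barriers(const std::vector<ResourceUse> &uses)
{
    std::unordered_map<uint32_t, Access> last_access;
    std::vector<Barrier> barriers;
    for (const ResourceUse &use : uses) {
        auto it = last_access.find(use.resource);
        if (it != last_access.end() && it->second != use.access)
            barriers.push_back({use.resource, it->second, use.access});
        last_access[use.resource] = use.access;
    }
    return barriers;
}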

Wrap up

Last week at GDC 2017 there were a few presentations (and a lot of discussions) around the concept of having a more high-level representation of a rendered frame and the benefits that brings. If you haven’t already, I highly encourage you to check out both Yuriy O’Donnell’s presentation “FrameGraph: Extensible Rendering Architecture in Frostbite” and Aras Pranckevičius’s presentation “Scriptable Render Pipeline”.

In the next post I will briefly cover the feature set of the two render_configs that we ship as template rendering pipes with Stingray.

Wednesday, February 22, 2017

Stingray Renderer Walkthrough #6: RenderInterface

Today we will be looking at the RenderInterface. I’ve struggled a bit with deciding whether it is worth covering this piece of code, as most of the stuff described will likely feel kind of obvious. In the end I still decided to keep it to give a more complete picture of how everything fits together. Feel free to skim through it or sit tight and wait for the coming two posts that will dive into the data-driven aspects of the Stingray renderer.

The glue layer

The RenderInterface is responsible for tying together a bunch of rendering sub-systems: some that we have covered in earlier posts (e.g. the RenderDevice), and a number of other, more high-level systems that form the foundation of our data-driven rendering architecture.

The RenderInterface has a bunch of responsibilities, including:

  • Tracking of windows and swap chains.

    While windows are managed by the simulation thread, swap chains are managed by the render thread. The RenderInterface is responsible for creating the swap chains and keeping track of the mapping between a window and a swap chain. It is also responsible for signaling resizing and other state information from the window to the renderer.

  • Managing of RenderWorlds.

    As mentioned in the Overview post, the renderer has its own representation of game Worlds called RenderWorlds. The RenderInterface is responsible for creating, updating and destroying the RenderWorlds.

  • Owner of the four main building blocks of our data-driven rendering architecture: LayerManager, ResourceGeneratorManager, RenderResourceSet, RenderSettings

    Will be covered in the next post (I’ve talked about them in various presentations before [1] [2]).

  • Owner of the shader manager.

    Centralized repository for all available/loaded shaders. Controls scheduling for loading, unloading and hot-reloading of shaders.

  • Owner of the render resource streamer.

    While all resource loading is asynchronous in Stingray (See [3]), the resource streamer I’m referring to in this context is responsible for dynamically loading in/out mip-levels of textures based on their screen coverage. Since this streaming system piggybacks on the view frustum culling system, it is owned and updated by the RenderInterface.

The interface

In addition to being the glue layer, the RenderInterface is also the interface to communicate with the renderer from other threads (simulation, resource streaming, etc.). The renderer operates under its own “controller thread” (as covered in the Overview post), and exposes two different types of functions: blocking and non-blocking.

Blocking functions

Blocking functions will enforce a flush of all outstanding rendering work (i.e. synchronize the calling thread with the rendering thread), allowing the caller to operate directly on the state of the renderer. This is mainly a convenience path when doing bigger state changes / reconfiguring the entire renderer, and should typically not be used during game simulation as it might cause stuttering in the frame rate.

Typical operations that are blocking:

  • Opening and closing of the RenderDevice.

    Sets up / shuts down the graphics API by calling the appropriate functions on the RenderDevice.

  • Creation and destruction of the swap chains.

    Creating and destroying swap chains associated with a Window. Done by forwarding the calls to the RenderDevice.

  • Loading of the render_config / configuring the data-driven rendering pipe.

    The render_config is a configuration file describing how the renderer should work for a specific project. It describes the entire flow of a rendered frame, and without it the renderer won’t know what to do. It is the RenderInterface’s responsibility to make sure that all the different sub-systems (LayerManager, ResourceGeneratorManager, RenderResourceSet, RenderSettings) are set up correctly from the loaded render_config. More on this topic in the next post.

  • Loading, unloading and reloading of shaders.

    The shader system doesn’t have a thread-safe interface and is only meant to be accessed from the rendering thread. Therefore any loading, unloading and reloading of shaders needs to synchronize with the rendering thread.

  • Registering and unregistering of Worlds.

    Creates or destroys a corresponding RenderWorld and sets up mapping information to go from World* to RenderWorld*.

Non-blocking functions

Non-blocking functions communicate by posting messages to a ring buffer that the rendering thread consumes. Since the renderer has its own representation of a World there is not much communication over this ring buffer; in a normal frame we usually don’t post more than 10-20 messages.
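
For illustration, a minimal single-producer / single-consumer ring buffer in the same spirit could look like the sketch below; the engine’s actual message types and implementation are not shown here, so treat the names as assumptions.

#include <atomic>
#include <cstdint>

struct RenderMessage { uint32_t type; uint64_t payload[4]; };  // illustrative payload

class MessageRing {
public:
    bool post(const RenderMessage &msg) {            // called from the simulation thread
        uint32_t w = _write.load(std::memory_order_relaxed);
        uint32_t next = (w + 1) % SIZE;
        if (next == _read.load(std::memory_order_acquire))
            return false;                            // full - caller can flush or wait
        _messages[w] = msg;
        _write.store(next, std::memory_order_release);
        return true;
    }
    bool consume(RenderMessage &out) {               // called from the render thread
        uint32_t r = _read.load(std::memory_order_relaxed);
        if (r == _write.load(std::memory_order_acquire))
            return false;                            // empty
        out = _messages[r];
        _read.store((r + 1) % SIZE, std::memory_order_release);
        return true;
    }
private:
    static const uint32_t SIZE = 256;
    RenderMessage _messages[SIZE];
    std::atomic<uint32_t> _read{0}, _write{0};
};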

Typical operations that are non-blocking:

  • Rendering of a World.

    void render_world(World &world, const Camera &camera, const Viewport &viewport, 
        const ShadingEnvironment &shading_env, uint32_t swap_chain);
    

    Main interface for rendering of a world viewed from a certain Camera into a certain Viewport. The ShadingEnvironment is basically just a set of shader constants and resources defined in data (usually containing a description of the lighting environment, post effects and similar). swap_chain is a handle referencing the window that will present the final result.

    When the user calls this function a RenderWorldMsg is created and posted to the ring buffer, holding handles to the rendering representations of the world, camera, viewport and shading environment. When the message is consumed by the rendering thread it enters the first of the three stages described in the Overview post - Culling.

  • Reflection of state from a World to the RenderWorld.

    Reflects the “state delta” (from the last frame) for all objects on the simulation thread over to the render thread. For more details see [4].

  • Synchronization.

    uint32_t create_fence();
    void wait_for_fence(uint32_t fence);
    

    Synchronization methods for making sure the renderer has finished processing up to a certain point. They are used to implement the blocking calls and to make sure the simulation doesn’t run more than one frame ahead of the renderer (see the sketch after this list).

  • Presenting a swap chain.

    void present_frame(uint32_t swap_chain = 0);
    

    When the user is done with all rendering for a frame (i.e. has no more render_world calls to make), the application presents the result by looping over all swap chains touched during the frame (i.e. referenced in a previous call to render_world) and posting one or more PresentFrameMsg messages to the renderer.

  • Providing statistics from the RenderDevice.

    As mentioned in the RenderContext post, we gather various statistics and (if possible) GPU timings in the RenderDevice. Exactly what is gathered depends on the implementation of the RenderDevice. The RenderInterface is responsible for providing a non-blocking interface for retrieving the statistics. Note: the statistics returned are two frames old, as we update them after the rendering thread is done processing a frame (GPU timings are even older). This typically doesn’t matter though, as they usually don’t fluctuate much from one frame to another.

  • Executing user callbacks.

    typedef void (*Callback)(void *user_data);
    void run_callback(Callback callback, void *user, uint32_t user_data_size);
    

    Generic callback mechanics to easily inject code to be executed by the rendering thread.

  • Creation, dispatching and releasing of RenderContexts and RenderResourceContexts.

    While most systems tend to create, dispatch and release RenderContexts and RenderResourceContexts from the rendering thread, there are use cases for doing it from other threads (e.g. the resource thread creates RenderResourceContexts). The RenderInterface provides the necessary functions for doing so in a thread-safe way without having to block the rendering thread.
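
As a usage sketch of the fence API from the Synchronization bullet above (the RenderInterface declaration and the frame-loop hook below are assumptions for illustration, not the real engine code), keeping the simulation at most one frame ahead could look like this:

#include <cstdint>

// Declarations assumed from the signatures shown above; the real header is not shown.
struct RenderInterface {
    uint32_t create_fence();
    void wait_for_fence(uint32_t fence);
};

// Hypothetical frame-loop hook: before moving on to the next frame, wait until
// the renderer has consumed the previous one, so the simulation is never more
// than one frame ahead.
class FrameThrottle {
public:
    void end_of_frame(RenderInterface &ri) {
        if (_previous_frame_fence != UNUSED)
            ri.wait_for_fence(_previous_frame_fence);  // renderer done with the previous frame
        _previous_frame_fence = ri.create_fence();     // marks the end of the frame just submitted
    }
private:
    static const uint32_t UNUSED = 0xffffffffu;        // invented sentinel for "no fence yet"
    uint32_t _previous_frame_fence = UNUSED;
};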

Wrap up

The RenderInterface in itself doesn’t get more interesting than that. Something needs to be responsible for coupling the various rendering systems and for managing the interface for communicating with the renderer’s controller thread - the RenderInterface is that something.

In the next post we will walk through the various components that build the foundation of the data-driven rendering architecture and go through some examples of how to configure them to do something fun from the render_config file.

Stay tuned.