Thomas Deliot

Adding real-time volumetric clouds to a 1:1 scale terrain

A few months ago I had to choose a personal project for this semester of my studies, so I decided to use the opportunity to start working again on something I had wanted to do for a while: adding volumetric clouds and other weather phenomena to the terrain engine. The project ended last month and we had to record a short demonstration video and design a poster for it, so here they are:

The main issue with this project was that I wanted to implement the volumetric clouds we are starting to see in video games nowadays, like the ones in Horizon: Zero Dawn (their presentation on volumetric clouds is great) or Reset, but most of these implementations render the volumetric clouds as a skybox behind the geometry, meaning they can't interact with the scene, pass in front of a mountain, and so on. For a realistically scaled terrain engine this wouldn't work: what I needed was volumetric rendering fully integrated with the "normal" opaque geometry of the terrain.

To solve this, instead of ray-marching one huge volume above and behind the scene, I chose to define clouds and other volumetric phenomena as ellipsoid objects in the scene (meaning each cloud is a separate object in the scene hierarchy). Each ellipsoid defines the shell of the volumetric phenomenon inside it, and the ray-marching is only executed inside these shells.
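To give a rough idea of what restricting the ray-marching to these shells involves, here is a minimal CPU-side C++ sketch of a ray/ellipsoid intersection: the ray is rescaled into the space where the ellipsoid becomes a unit sphere and a standard quadratic solve gives the entry and exit distances. The types and function names are illustrative, not the engine's actual code, and rotation of the ellipsoids is left out for brevity.

```cpp
#include <algorithm>
#include <cmath>
#include <optional>
#include <utility>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  div3(Vec3 a, Vec3 b) { return {a.x / b.x, a.y / b.y, a.z / b.z}; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Axis-aligned ellipsoid: center plus per-axis radii.
struct Ellipsoid { Vec3 center; Vec3 radii; };

// Entry/exit distances of the ray against the ellipsoid shell, if it is hit.
std::optional<std::pair<float, float>> intersect(Vec3 origin, Vec3 dir, const Ellipsoid& e)
{
    // Rescale into the space where the ellipsoid is a unit sphere.
    Vec3 o = div3(sub(origin, e.center), e.radii);
    Vec3 d = div3(dir, e.radii);

    float a = dot(d, d);
    float b = 2.0f * dot(o, d);
    float c = dot(o, o) - 1.0f;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return std::nullopt;          // ray misses the shell

    float s  = std::sqrt(disc);
    float t0 = (-b - s) / (2.0f * a);
    float t1 = (-b + s) / (2.0f * a);
    if (t1 < 0.0f) return std::nullopt;            // shell is entirely behind the camera
    return std::make_pair(std::max(t0, 0.0f), t1); // segment to ray-march through
}
```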

The rendering is done in a post-effect which uses the camera's depth buffer to intersect the volumetric objects with the opaque geometry of the terrain. I optimized the performance in two main ways: rendering the volumetric post-effect at a lower resolution than the main one, and implementing temporal rendering (where, for example, only 1 out of 4 pixels is updated each frame, with the other three being reprojected from the previous frame). As clouds have smooth borders and move slowly, both optimizations greatly improve performance without losing much quality. Afterwards I spent a great deal of time focusing on having the most physically correct rendering algorithm possible with real-time performance in mind, to achieve realistic clouds without "cheating" by having to tweak them.
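The 1-out-of-4 update pattern boils down to picking one pixel per 2x2 quad each frame and reprojecting the rest. A tiny sketch of that selection logic, with a hypothetical function name (the actual reprojection using motion vectors is not shown):

```cpp
#include <cstdint>

// Returns true if this pixel should be ray-marched this frame; the other
// three pixels of its 2x2 quad are reprojected from the previous frame.
bool updatedThisFrame(uint32_t x, uint32_t y, uint32_t frameIndex)
{
    // Index of the pixel inside its 2x2 quad (0..3).
    uint32_t quadSlot = (x & 1u) + 2u * (y & 1u);
    // Rotate which slot gets refreshed each frame.
    return quadSlot == (frameIndex & 3u);
}
```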

Here is a diagram summarizing what the post-effect does. For each pixel of the screen, we find which ellipsoids are intersected by the pixel's ray and store the intersection zones for the ray-marching loop, which marches through the different volumes while evaluating the light energy contributions from both the sun (with a secondary ray-march towards it) and the sky's hemisphere (grossly approximated).
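As a simplified, CPU-side C++ sketch of that loop (not the actual shader): the stored segments are marched front to back, accumulating in-scattering and transmittance. The density lookup, the sun transmittance callback, the scattering/extinction coefficients and the constant sky term are all placeholders for the engine's real cloud model, and the phase function is omitted for brevity.

```cpp
#include <cmath>
#include <functional>
#include <utility>
#include <vector>

// One [tEnter, tExit] segment per ellipsoid hit by this pixel's ray.
using Segment = std::pair<float, float>;
// Hypothetical density lookup along the ray (noise-based in the real shader).
using DensityFn = std::function<float(float t)>;
// Hypothetical secondary march towards the sun, returning the transmittance
// from the sample point to the sun.
using SunTransmittanceFn = std::function<float(float t)>;

// Front-to-back march over the stored segments.
// Returns (in-scattered radiance, remaining transmittance).
std::pair<float, float> marchSegments(const std::vector<Segment>& segments,
                                      DensityFn density,
                                      SunTransmittanceFn sunTransmittance,
                                      float sunRadiance, float skyRadiance,
                                      float sigmaS, float sigmaT, float stepSize)
{
    float radiance      = 0.0f;  // light scattered towards the camera
    float transmittance = 1.0f;  // how much of what lies behind still gets through

    for (const Segment& seg : segments) {
        for (float t = seg.first; t < seg.second; t += stepSize) {
            float rho = density(t);
            if (rho <= 0.0f) continue;

            // Light reaching this sample: sun attenuated by the secondary
            // march, plus a crude constant term for the sky hemisphere.
            float incoming = sunRadiance * sunTransmittance(t) + skyRadiance;

            // Scattering over this step, attenuated by everything already crossed.
            radiance      += transmittance * incoming * sigmaS * rho * stepSize;
            transmittance *= std::exp(-sigmaT * rho * stepSize);

            if (transmittance < 1e-3f) return {radiance, transmittance}; // early out
        }
    }
    return {radiance, transmittance};
}
```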

Once this was done I quickly realized that it was also necessary to apply the same atmospheric scattering already in place for the opaque geometry to the space in between the clouds themselves. Otherwise, clouds far in the distance above blue-tinted mountains stand out and look unrealistic. I decided to simply modify the ray-marching loop so that when it jumps from one ellipsoid to the next, the atmospheric scattering for the distance of the jump is added to the accumulated in-scattering and extinction values.
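Folded into the loop above, such a jump could look like the following sketch. A constant-extinction atmosphere with a single in-scattered color is assumed here purely to keep it short; the engine's real atmospheric model would replace those two lines, and all names are illustrative.

```cpp
#include <cmath>

// Account for the atmosphere occupying the gap between two consecutive
// ellipsoid segments along the view ray.
void applyAtmosphereOverJump(float jumpDistance,
                             float atmosphereExtinction,     // assumed constant here
                             float atmosphereInscatterColor, // assumed constant here
                             float& radiance, float& transmittance)
{
    float jumpTrans = std::exp(-atmosphereExtinction * jumpDistance);
    // Light scattered into the view ray by the air in the gap, attenuated by
    // everything the ray has already crossed.
    radiance      += transmittance * atmosphereInscatterColor * (1.0f - jumpTrans);
    transmittance *= jumpTrans;
}
```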

Overall I was quite satisfied with what I could achieve during the time of the project, but it is of course far from complete. The main missing feature is having the clouds cast shadows on the ground and on other clouds below them. This proved to be a far more complex issue than I would have thought, as simply running the entire ray-marching loop again towards the sun for each primary ray-marching sample would be far too expensive. Instead I am focusing on methods like the one used in this paper: Boundary-aware extinction mapping, which ray-marches from the light source and stores the evolution of the transmittance over space as Fourier series. This then allows the ray-marching from the camera to know the amount of sunlight energy arriving at each point in space, replacing the secondary ray-marching step for each sample. However, this proves to be a complicated task to implement because of the scale of the terrain engine and the distances between the clouds, meaning that a huge number of Fourier coefficients is needed to get something even approximately working.
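To illustrate the general idea (a simplified interpretation in the spirit of Fourier opacity / extinction mapping, not the paper's exact method): the extinction seen along one light ray is projected onto a truncated Fourier basis over a normalized depth in [0, 1] while marching from the light, and the camera-side march later reconstructs the optical depth analytically instead of running a second march to the sun. The coefficient count, names and normalization below are assumptions for the sketch.

```cpp
#include <cmath>
#include <vector>

struct ExtinctionSeries {
    float a0 = 0.0f;
    std::vector<float> a, b;  // cosine / sine coefficients, one per harmonic

    explicit ExtinctionSeries(int harmonics) : a(harmonics, 0.0f), b(harmonics, 0.0f) {}

    // Add one sample from the light-side march: extinction sigma over a slab
    // of thickness dd centered at normalized depth d.
    void addSample(float d, float sigma, float dd) {
        const float twoPi = 6.2831853f;
        a0 += 2.0f * sigma * dd;
        for (size_t k = 0; k < a.size(); ++k) {
            float w = twoPi * float(k + 1) * d;
            a[k] += 2.0f * sigma * std::cos(w) * dd;
            b[k] += 2.0f * sigma * std::sin(w) * dd;
        }
    }

    // Optical depth from the light down to normalized depth d, obtained by
    // integrating the reconstructed Fourier series analytically.
    float opticalDepth(float d) const {
        const float twoPi = 6.2831853f;
        float tau = 0.5f * a0 * d;
        for (size_t k = 0; k < a.size(); ++k) {
            float w = twoPi * float(k + 1);
            tau += (a[k] * std::sin(w * d) + b[k] * (1.0f - std::cos(w * d))) / w;
        }
        return tau;
    }

    // Sunlight transmittance used by the camera-side march at depth d.
    float transmittance(float d) const { return std::exp(-opticalDepth(d)); }
};
```

With only a few harmonics this reconstructs smooth extinction profiles well, which is exactly where it struggles at terrain scale: sharp, widely separated clouds along one light ray need many coefficients before the reconstruction stops ringing.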

