Hi, I'm Eric DeFelice, a graphics engineer on the KSP2 team. My job is to create technical solutions for the graphics features we have in KSP2. One of the most obvious of these systems is how we generate, position, and render the planets in the game.

We need a system to render the planets while in orbit and during interstellar travel, as well as up close, on the planet surface. We want the transition between these distances to appear as you would expect: as you get closer to the planet surface, you just see more detail. How do we solve all the problems associated with a graphics feature such as this? Can we just use traditional approaches for level of detail? Let's dive a bit deeper into how we solve this problem in KSP2. I'll try to give as much detail as I can without having this take an hour to read…

Let's start by looking at how most meshes are rendered in KSP2 (and most games, for that matter). Generally, the mesh data is sent from system memory over to the GPU, where shaders read it, place it at the correct pixels on screen, and output the correct color given some material properties. We could try to use this approach for our planets, but there are a couple of big issues we would run into when trying to achieve the level of detail we would like. The biggest issues revolve around the memory it would take to store all that vertex data for planets as large and detailed as the ones we have in the game. We could mitigate these problems with level-of-detail approaches, perhaps breaking the planet up into chunks so that we only load in the chunks that are relevant. GPU tessellation is also a possibility, but that wouldn't really give us much control over the terrain height. One other big issue has to do with the size of our planets and the precision problems that arise when trying to position the planet in camera projection space.

Given these problems, we don't use this basic approach when rendering planets up close. We do, however, use it when rendering planets from further away. This allows artists to have full control over the look of the planet from this distance, and it is a good starting point to add more detail to as you approach the planet surface.

Another core gameplay feature we have to keep in mind when rendering the planets is that their position may be moved around relative to our floating origin (for more info, see the previous dev blog by Michael Dodd). For our planet rendering purposes, this means that our planet's center will usually be further from the origin than its radius. If we defined the planet vertex data in model space, then during rendering, when we transformed its position to camera projection space, we could be dealing with some large transformation values. If we are then viewing the terrain while it is close to the camera, creating very small distances in camera space, we may see some visual artifacts (as seen above). How do we deal with this possible problem? Well, one simple solution is to generate the vertex data so that it is already relative to the floating origin. That way we don't have to deal with the model-to-world transformation, keeping the position values in a reasonable range.

So now that we have our key concerns listed, we can finally look at how we solved these problems in KSP2.
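To make the precision concern concrete, here is a small sketch in Python (using an assumed 600 km planet radius for illustration, not necessarily a KSP2 value) showing how 32-bit floats, the precision GPU vertex buffers typically use, swallow small offsets at planet-scale coordinates:

```python
import struct

def to_f32(x: float) -> float:
    # Round-trip through a 32-bit float, the precision a GPU
    # vertex buffer typically stores positions in.
    return struct.unpack("f", struct.pack("f", x))[0]

planet_radius = 600_000.0        # assumed radius in metres, for illustration
vertex_y = planet_radius + 0.01  # try to raise a vertex by 1 cm

# At this magnitude, adjacent float32 values are 0.0625 m apart,
# so the 1 cm of terrain detail rounds away entirely.
print(to_f32(vertex_y) - planet_radius)  # 0.0
```

The spacing between representable float32 values grows with magnitude, which is why positions near a planet's surface, hundreds of kilometres from the origin, can't hold centimetre-scale detail.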
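One way to picture the "generate the vertex data relative to the floating origin" idea is the following sketch (not KSP2's actual code; the function and variable names are invented): do the subtraction in 64-bit precision on the CPU, so the values narrowed to 32-bit floats for the GPU are small, origin-local offsets.

```python
import struct

def to_f32(x: float) -> float:
    # Narrow to the 32-bit precision of a GPU vertex buffer.
    return struct.unpack("f", struct.pack("f", x))[0]

def build_vertex_buffer(world_positions, floating_origin):
    """Subtract the floating origin in double precision first, so the
    values that reach float32 are small offsets, not planet-scale ones."""
    ox, oy, oz = floating_origin
    return [(to_f32(x - ox), to_f32(y - oy), to_f32(z - oz))
            for (x, y, z) in world_positions]

# A vertex 1 cm beyond a point 600 km from the world origin (assumed numbers):
origin = (600_000.0, 0.0, 0.0)
verts = build_vertex_buffer([(600_000.01, 0.0, 0.0)], origin)
print(verts[0][0])  # approximately 0.01: the offset survives the narrowing
```

Because the stored values stay near zero, there is no large model-to-world translation left to destroy precision during rendering.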
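And the "break the planet into chunks and only load the relevant ones" idea can be sketched as a per-chunk distance check (again a toy illustration with invented names, not the KSP2 implementation, which would also vary detail per chunk):

```python
import math

def chunks_to_load(chunk_centers, camera_pos, load_radius):
    """Return indices of chunks close enough to the camera to load;
    everything else can stay unloaded or at a coarse level of detail."""
    loaded = []
    for i, center in enumerate(chunk_centers):
        if math.dist(center, camera_pos) <= load_radius:
            loaded.append(i)
    return loaded

# Three chunk centers; only the first two lie within 1000 m of the camera.
centers = [(0.0, 0.0, 0.0), (500.0, 0.0, 0.0), (5_000.0, 0.0, 0.0)]
print(chunks_to_load(centers, camera_pos=(0.0, 0.0, 0.0), load_radius=1_000.0))  # [0, 1]
```

Real terrain systems usually organize chunks hierarchically (e.g. a quadtree per cube face) so that distant regions are covered by a few coarse chunks instead of many fine ones.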