How does a video game decide how far you can see? Discovering the draw distance and its complications

The worlds of modern video games are vast in size and rich in detail, and even so, with the passage of time we have gained a greater viewing distance generation after generation, a phenomenon that is undoubtedly surprising. But why do we see as far as we do in our video games?

Who hasn’t felt chills in the opening scene of The Legend of Zelda: Breath of the Wild? After waking from our long rest and being welcomed into the world of Hyrule, the Nintendo title presents us with a sweeping view of its world, one in which, if we stop for a moment, we can make out almost everything in the distance. Considering that the Nintendo Switch is a relatively underpowered console, that scene, when we have it on screen, can spark a whole battery of questions: how are modern video games capable of achieving this? Why can I see into the distance clearly in works like Ghost of Tsushima while in others like Minecraft things seem to pop into view out of nowhere? How is it possible that in Microsoft Flight Simulator you can see the world in its entirety from a cockpit?

All of these questions are answered through the so-called draw distance, a term that almost unintentionally encompasses elements of game design and of the technology behind it. Throughout this text we aim to give you a light introduction to some of these topics, using this term and its most direct applications as a common thread and as a springboard to explain, in plain language, how the graphics of our video games work and why we see “as far as we see” while at the controls of our favorite titles. We invite you to join us once again.

As far as our eyes can see

Just Cause 4.

The draw distance is a common element in 3D titles. When dealing with the representation of the virtual worlds of our video games, we usually fall back on the comparison with photography, given the easy parallel between capturing the real world through a camera and perceiving the virtual world through a game camera. In reality, however, the way these frames are produced has more to do, on a conceptual or spiritual level, with painting. When we speak of draw distance in a video game, we refer to the maximum distance at which the objects of the virtual world are presented in the final image. The greater the draw distance, the farther we can see, but not necessarily with more detail or more elements. If an object needs to be drawn in as we approach it and this does not happen in time, we get what is known as pop-in, which is nothing more than the abrupt appearance of elements on screen. Works as remembered and acclaimed as RAGE suffered from this problem on certain platforms, sometimes with models and other times with textures. To understand why video games can reproduce the elements of the game world with fidelity, regardless of their distance, it is necessary to know how graphics work in our favorite titles, albeit at a basic level.
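
To make the idea concrete, here is a minimal sketch in C++ of the simplest possible draw-distance test (the names, such as maxDrawDistance and drawVisibleObjects, are our own illustration, not the API of any particular engine): anything farther from the camera than a chosen threshold is simply not drawn, and when that threshold falls short, objects appear abruptly as we get closer, which is exactly the pop-in described above.

```cpp
#include <cmath>
#include <vector>

// Illustrative sketch: a per-object draw-distance test. Objects farther from
// the camera than maxDrawDistance are skipped entirely; if the threshold is
// too short, they "pop in" as the camera approaches.
struct Vec3 { float x, y, z; };

float distanceBetween(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct Object { Vec3 position; /* mesh, textures, ... */ };

void drawVisibleObjects(const std::vector<Object>& scene,
                        const Vec3& cameraPos, float maxDrawDistance) {
    for (const Object& obj : scene) {
        if (distanceBetween(obj.position, cameraPos) > maxDrawDistance)
            continue;   // beyond the draw distance: never sent to the renderer
        // drawObject(obj);   // hypothetical call that actually renders the object
    }
}
```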

A little story about polygons


Let’s start with the basics. Let’s suppose that we are playing a 3D title, the most common case today. What we see on our screens, although it may seem like a living three-dimensional space, is nothing more than a rapid succession of generated images that create the illusion of movement. If we were able to take a snapshot of one of these images, we would have before us what is normally referred to as a rendered “frame”, which is nothing more than the graphic representation of a series of polygons, textures and other effects in a three-dimensional space, reproduced on a flat surface (the screen itself) divided into small segments: the familiar pixels. The process through which we reach this point is called “drawing”, or rendering, a scene, but let’s not get ahead of ourselves yet.
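
As a rough illustration of what a frame is (the Framebuffer type and the commented-out renderScene and presentToScreen calls below are hypothetical placeholders, not real library functions), each pass of the loop fills a flat grid of pixels to produce one complete image, and showing those images in rapid succession is what creates the illusion of movement.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch: a "frame" as a flat grid of pixels. Each loop iteration
// produces one complete image; shown quickly one after another, these images
// create the illusion of motion.
struct Framebuffer {
    int width, height;
    std::vector<uint32_t> pixels;   // one packed RGBA value per pixel
    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
};

int main() {
    Framebuffer frame(1920, 1080);
    for (int frameIndex = 0; frameIndex < 60; ++frameIndex) {
        // renderScene(frame);       // hypothetical: rasterize polygons into the pixels
        // presentToScreen(frame);   // hypothetical: display the finished image
    }
}
```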


If we break down the different elements we have mentioned, polygons are the foundation on which any three-dimensional scene is built. The models we see on screen, regardless of their level of detail or realism, are formed by a collection of these polygons, which are usually triangles or quads and are joined through their vertices to “close” a model completely, each vertex placed at a set of coordinates in space. These vertices serve as points of attachment, but they are also an important reference from which to determine the position of other elements we see in the final scene, such as texture maps; they are a kind of “support point” for the space in which a scene is built. The more polygons there are, the more complex they are and the more of these support points come into play, the heavier the process through which our machines draw the frames that are then represented on our screens.
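
A minimal sketch of how such a model is commonly stored (the Mesh layout below is a generic assumption, not the format of any specific engine): a list of vertex positions, the support points we mentioned, plus triangles that join those vertices by referring to them by index.

```cpp
#include <array>
#include <vector>

// Illustrative sketch: a 3D model as vertex positions plus triangles that
// reference those vertices by index. More polygons means more vertices and
// more indices to process for every frame.
struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3> vertices;                 // the "support points" in space
    std::vector<std::array<int, 3>> triangles;  // each triangle joins 3 vertices
};

int main() {
    // A single square "closed" from two triangles that share an edge.
    Mesh quad;
    quad.vertices  = { {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0} };
    quad.triangles = { {0, 1, 2}, {0, 2, 3} };
}
```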


What we see on our screens is an elaborate set of smoke and mirrors. The last piece we need in order to understand “the basics of the basics” of a scene is how viewpoints, commonly called cameras, work. The camera corresponds to the frame through which the point of view of a scene is established; it is our window into the three-dimensional world of the video game. It does not matter that the models are within a scene: if they are not under the watchful eye of the camera, we will not be able to see them.
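
A small, simplified sketch of what a camera amounts to in data (again, the field names are our own): a position, a direction of view, and the parameters that will later define its viewing cone. The crude test below only asks whether a point lies in front of the camera at all; a real engine tests the full frustum, which we get to in the next section.

```cpp
// Illustrative sketch: the camera as a point of view. A model can exist in the
// world, but if it is not in front of (and inside the viewing cone of) the
// camera, it never reaches the screen.
struct Vec3 { float x, y, z; };

Vec3 subtract(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Camera {
    Vec3 position;
    Vec3 forward;          // unit vector: the direction the camera looks toward
    float fovDegrees;      // field of view, defining the viewing cone
    float nearPlane, farPlane;
};

// Very rough visibility test: is the point ahead of the camera at all?
bool isInFrontOfCamera(const Camera& cam, const Vec3& point) {
    Vec3 toPoint = subtract(point, cam.position);
    return dot(toPoint, cam.forward) > 0.0f;
}
```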

Drawing on virtual canvases


Setting up a scene in three-dimensional space is only half the process necessary to produce what we see on screen while we play: the scene then has to be drawn so that our displays can represent it, a task that takes place over several processes that are complex both to understand and to explain, but which, given the nature of this text, we will separate into three major stages: the projection of space, the drawing of the scene, and the optimization of the drawn image.

We do not have to draw what we do not see.

The projection of space is a process that happens through the camera, which we touched on briefly in the previous paragraphs. From it, the viewing cone is established, called the “field of view” or “frustum” in the language we are dealing with. This frustum is what we will see on our monitor, a kind of lens that takes an image of three-dimensional space and leaves the rest out of our view. But talking about “taking an image” is also an analogy: the three-dimensional scene we refer to does not exist outside its virtual environment until we draw it, something that happens through so-called image rendering, the most popular method being rasterization, the process by which vector information (those support points we talked about above) is converted into a bitmap that our screens can understand.
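
To see what rasterization starts from, here is a simplified projection sketch under strong assumptions (the camera sits at the origin looking down the negative Z axis, and fovDegrees is the vertical field of view): a single vertex of the three-dimensional scene is converted into the pixel coordinates of our flat screen, after which a rasterizer would fill in the pixels covered by each triangle.

```cpp
#include <cmath>

// Illustrative sketch: projecting one 3D vertex onto the 2D screen. This is
// the step that turns "vector" positions into coordinates a bitmap of pixels
// can use; the farther the point, the more it shrinks toward the image center.
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

Vec2 projectToScreen(const Vec3& p, float fovDegrees, int screenW, int screenH) {
    const float pi = 3.14159265f;
    float f = 1.0f / std::tan(fovDegrees * 0.5f * pi / 180.0f);
    float aspect = static_cast<float>(screenW) / screenH;
    float ndcX = (f / aspect) * p.x / -p.z;   // normalized device coords, -1..1
    float ndcY = f * p.y / -p.z;              // assumes p.z < 0 (in front of camera)
    return { (ndcX * 0.5f + 0.5f) * screenW,             // map to pixel column
             (1.0f - (ndcY * 0.5f + 0.5f)) * screenH };  // and pixel row (Y down)
}
```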

Due to the two-dimensional nature of the image on our screens, many of the elements in a scene do not need to be processed at all when the image is drawn. As if it were a canvas, “painting” only what lies within the field of view is enough. Similarly, not everything has to be perfectly represented, especially distant figures. This is where optimization techniques come in, responsible for lightening the work of representing each image, as well as simplifying the elements that do have to be drawn.
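
The most direct of these optimizations is not drawing what the camera cannot see. Below is a greatly simplified, frustum-style cull sketch (the cone test stands in for the real plane-by-plane frustum test an engine would use; shouldDraw and its parameters are invented for the example): objects behind the camera, beyond the far plane, or outside the field-of-view cone are never handed to the rasterizer.

```cpp
#include <cmath>

// Illustrative sketch: a simplified visibility cull. Anything outside the
// viewing cone or beyond the far plane costs nothing, because it is never drawn.
struct Vec3 { float x, y, z; };

Vec3 subtract(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(const Vec3& v) { return std::sqrt(dot(v, v)); }

bool shouldDraw(const Vec3& camPos, const Vec3& camForward,   // camForward: unit length
                float fovDegrees, float farPlane, const Vec3& objectPos) {
    Vec3 toObj = subtract(objectPos, camPos);
    float dist = length(toObj);
    if (dist < 1e-6f) return true;                 // object sits on the camera
    if (dist > farPlane) return false;             // too far away: culled
    float cosAngle = dot(toObj, camForward) / dist;
    float cosHalfFov = std::cos(fovDegrees * 0.5f * 3.14159265f / 180.0f);
    return cosAngle > cosHalfFov;                  // inside the viewing cone?
}
```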


We like to separate these techniques into two groups: the first dedicated to optimizing the drawing process itself, and the second to the stylistic makeup that covers the possible shortcomings or rough edges of the final image. In the latter group fall techniques such as LOD, to which we dedicated an article some time ago, the use of diffuse maps, or conceptually simpler elements such as distance fog. In the first group, however, techniques such as culling have helped games like Horizon: Zero Dawn look great on older devices, while others, such as the environment cubemap, allow a title like GTA V to display an open world as rich in detail as its own on consoles released a whopping fifteen years ago. For Rockstar’s work, thanks to its continued activity, there is an extensive record of its optimization techniques; one of the articles about it that we like the most is that of Adrian Courrèges on his personal blog.
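
As a concrete example of the LOD technique mentioned above, here is a hedged sketch of a distance-based LOD pick (the thresholds and the LodLevel structure are invented for illustration): the same object keeps several versions of its mesh, and the renderer chooses a coarser one as the camera moves away, so distant figures cost far fewer polygons.

```cpp
#include <vector>

// Illustrative sketch: choose a level of detail for a mesh based on how far
// it is from the camera. Far objects use a coarse mesh with fewer triangles.
struct Mesh { int triangleCount; /* vertices, indices, ... */ };

struct LodLevel { float maxDistance; Mesh mesh; };

const Mesh& selectLod(const std::vector<LodLevel>& levels, float distanceToCamera) {
    for (const LodLevel& level : levels) {
        if (distanceToCamera <= level.maxDistance)
            return level.mesh;             // first threshold that still covers us
    }
    return levels.back().mesh;             // beyond all thresholds: cheapest mesh
}

int main() {
    std::vector<LodLevel> rockLods = {
        { 50.0f,  Mesh{20000} },   // full detail up close
        { 200.0f, Mesh{2000}  },   // medium detail at mid range
        { 800.0f, Mesh{200}   },   // silhouette-only far away
    };
    const Mesh& toDraw = selectLod(rockLods, 120.0f);   // picks the 2000-triangle mesh
    (void)toDraw;
}
```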

We have covered everything we could within our usual length, and we hope that, with the information given, you are able to understand a little more of the technical magic that happens behind our screens for each frame, and the limitations involved in extending it towards the infinity of our favorite video games. If you have been left hungry for more, texts like the one that Bart Wronski, a former developer on projects as ambitious in their day as The Witcher 3: Wild Hunt, recently published on his personal blog seem well worth mentioning, mainly because that post was a strong inspiration for this text.

