
VSync

Shearing

What is VSync?
Let's see how displaying something on a monitor works. Conceptually, there is an area of memory - let's call it the framebuffer - which holds the RGB values of all visible pixels. This gets updated by the graphics card - maybe because you actually sent pixels to it, or because you asked the graphics card to draw some 3D scene into it; it doesn't really matter here - it's just a simple model of how we will view things.
Now the graphics card sends the contents of this framebuffer to the monitor, which displays all the pixels. If, for example, a letter in a displayed text changes, then the framebuffer is modified accordingly, and the next time the monitor redraws the corresponding part of the display, this will be visible. The important thing to note is that you cannot tell the monitor to update just one specific location. Instead, the framebuffer gets modified and then is sent line by line and pixel by pixel to the monitor, starting at the top left corner and ending at the bottom right corner.
In old CRT monitors, the electron beam would be sent back to the top left corner from the lower right corner after a complete update was done, and when it started this, it would send a VSYNC signal to the graphics card (now we know where the name of this article comes from). This was the preferred time to update the contents of video RAM: if you could finish updating the framebuffer before the electron beam started displaying the contents again, you could be sure the image would be displayed correctly.
If this is not done, there will be visible shearing, as can be noticed in games when you disable vsync. Assume we have two frames in our game:
111111
111111
111111
222222
222222
222222
First a screen full of color 1, then a screen full of color 2. The correct way to display this would be: in the first update, the monitor displays all 1s, and in the second update, all 2s. But now assume we have no vsync. First we display frame 1.
111111
111111 <-
111111
Next, we switch to frame 2. That is, the contents of the framebuffer change from all 1s to all 2s (and compared to the speed of the monitor update, the change happens in an instant). Now let's say the monitor has just updated the line marked with the arrow. So in the next line it will see the updated framebuffer contents.
111111
111111
222222 <-
At this point our monitor displays a picture which never existed. We only had two frames, one with all 1s, one with all 2s - and the result is just wrong: it shows a frame which has a part of the first frame and a part of the second. If for example there's a video with a straight lamp post moving at 10 pixels / frame, this lamp post might now have a 10-pixel step in the middle, like so:
wanted:
O
|
|
|
|
result without vsync:
O
 |
 |
  |
  |
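The tearing in the two-color example can be simulated in a few lines (a toy model just for illustration; it assumes the framebuffer swap happens instantly while the monitor is mid-scan):

```python
def scan_out(frame1, frame2, swap_after_line):
    """The monitor reads the framebuffer line by line; the contents
    switch from frame1 to frame2 after `swap_after_line` lines
    have already been read."""
    displayed = []
    for line in range(len(frame1)):
        source = frame1 if line < swap_after_line else frame2
        displayed.append(source[line])
    return displayed

frame1 = ["111111"] * 3  # a screen full of color 1
frame2 = ["222222"] * 3  # a screen full of color 2

# The swap happens when the monitor has already drawn two lines:
for line in scan_out(frame1, frame2, swap_after_line=2):
    print(line)
# 111111
# 111111
# 222222  <- a frame that never existed
```

If the swap happens before the scan starts (or after it finishes), the displayed frame is correct; only a mid-scan swap produces the torn image.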
The shearing does not necessarily occur at one single location. If you update your graphics at, say, 600 FPS, and your monitor has a 60 Hz refresh rate, you will get 10 horizontal lines with shearing - i.e. during one monitor update the actual framebuffer contents change 10 times. If you want to play a classic side-scroller that way, you won't have much fun. On the other hand, if you are browsing websites, you probably won't mind much, since the effect is only visible while scrolling text.
In the following, we no longer talk about shearing, but about the problem of getting smooth animation with VSync enabled. From now on, VSync is implicitly assumed to be always enabled.

Synchronizing logic and rendering

With VSync enabled, the display is only updated at discrete points in time - for example, 60 or 75 times / second; the exact times are determined by the monitor. But a game could also use a discrete logic rate, for example 100 updates / second. One very basic problem now is that both logic and rendering may take CPU time. If the CPU is too weak to handle the logic updates, the game cannot run at all (or only slowed down). If the CPU can barely handle the logic, there may not be enough time to render all frames - in which case video updates get skipped, usually making the game unplayable as well, or, if only some frames are skipped, somewhat jumpy.
In the following, we assume there is always enough CPU power. In most cases, the rendering will happen completely on the video card's GPU, so the actual time-consuming part is memory transfers. In any case, if there's not enough CPU/memory bandwidth to handle it all, it will be hard to get smooth animation.
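As a rough sketch of this setup (all names and the counting are made up for illustration; the 100 Hz logic rate is the one from the example), we can count how many logic ticks and rendered frames happen per second:

```python
LOGIC_RATE = 100  # logic updates per second, as in the example

def run(seconds, vsync_rate):
    """Count logic ticks and rendered frames: a logic tick runs
    whenever its scheduled time has passed, a render happens once
    per vsync."""
    ticks = frames = 0
    for k in range(1, int(seconds * vsync_rate) + 1):
        now = k / vsync_rate              # time of this vsync
        while (ticks + 1) / LOGIC_RATE <= now:
            ticks += 1                    # catch up on logic
        frames += 1                       # render once per vsync
    return ticks, frames

print(run(1.0, 60))  # (100, 60)
print(run(1.0, 75))  # (100, 75)
```

The logic rate stays fixed no matter what the monitor does; only the number of rendered frames follows the vsync rate.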

Smoothness

The smoothness problem discussed here has nothing to do with shearing, or with missed updates due to exhausted CPU time. With enough CPU time, you can have smooth animation with shearing (which will still be useless), and you can also get un-smooth animation without any shearing. But why would animation ever be un-smooth?
Well, some might remember the ultra-smooth scrolling we got on an Amiga, or on any gaming console. The smoothness comes from the simple fact that things are displayed in a regular manner. Let's assume video is updated 4 times a second (this works just as well with 60 times, but 4 is easier to draw), and assume we have a sprite which moves at a constant speed of 4 pixels / second. Then this can be viewed as a diagram like below:
second  0           1           2
        |           |           |
pixel   0  1  2  3  4  5  6  7  8
        |  |  |  |  |  |  |  |  |
vsync   0  1  2  3  4  5  6  7  8
        |  |  |  |  |  |  |  |  |
smooth
The first row is the time in seconds. The second row is the x-coordinate of our sprite in pixel. The third row is the time at which a vsync occurs, i.e. when we updated a complete new frame on the monitor.
In this case, each time the display is updated (4 times / second), the sprite has moved exactly one pixel. Therefore the animation is completely smooth.

Unsmooth display

Now assume we have a PC. The monitor's refresh rate can be anything; let's assume we have three monitors, with 3, 4 and 6 refreshes / second.
The above would then look like this:
3 vsync / second
second  0           1           2
        |           |           |
pixel   0  1  2  3  4  5  6  7  8
        |  |  |  |  |  |  |  |  |
vsync   0   1   2   3   4   5   6
        |   |   |   |   |   |   |
drawn   0   1   2   4   5   6   8  <- 3 and 7 were skipped
pixel
unsmooth
4 vsync / second
second  0           1           2
        |           |           |
pixel   0  1  2  3  4  5  6  7  8
        |  |  |  |  |  |  |  |  |
vsync   0  1  2  3  4  5  6  7  8
        |  |  |  |  |  |  |  |  |
drawn   0  1  2  3  4  5  6  7  8 <- smooth display
pixel
smooth
6 vsync / second
second  0           1           2
        |           |           |
pixel   0  1  2  3  4  5  6  7  8
        |  |  |  |  |  |  |  |  |
vsync   0 1 2 3 4 5 6 7 8 9 0 1 2
        | | | | | | | | | | | | |
drawn   0 0 1 2 2 3 4 4 5 6 6 7 8 <- 0, 2, 4, 6 were displayed twice
pixel
unsmooth
When we render something in the game, it can only happen exactly at a vsync. Now assume we simply draw at whatever pixel position is current at the time of each vsync.
Obviously, only the middle case can be smooth then. If the integer pixel positions are not synched to the display updates, there will be jitter: either the sprite will be jumpy and skip certain frames (as in the first case above), or it will stutter, with some frames staying on screen twice as long as others (as in the last case above).
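The three diagrams can be reproduced in a few lines (a toy model, assuming the drawn position is simply the integer pixel position the sprite has reached by the time of each vsync):

```python
def drawn_positions(speed, vsync_rate, seconds=2):
    """Pixel positions actually shown, one per vsync, for a sprite
    moving at `speed` pixels / second."""
    positions = []
    for k in range(vsync_rate * seconds + 1):
        t = k / vsync_rate                # time of this vsync
        positions.append(int(speed * t))  # integer position reached by then
    return positions

print(drawn_positions(4, 3))  # [0, 1, 2, 4, 5, 6, 8] - 3 and 7 skipped
print(drawn_positions(4, 4))  # [0, 1, 2, 3, 4, 5, 6, 7, 8] - smooth
print(drawn_positions(4, 6))  # [0, 0, 1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 8]
```

Only when the vsync rate matches the logic rate (the middle case) does every position appear exactly once.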
So, this means that when a game is ported from a console to a PC, all the smoothness is gone, unless you are lucky enough to have a monitor with a video mode supporting the right frequency. Here are two more example GIF animations (in GIF, the frame duration is specified as an integer in centi-seconds, so a GIF cannot have a framerate of e.g. 60 Hz - only 100 Hz, 50 Hz, 33.33 Hz, 25 Hz, 20 Hz, 16.67 Hz, and so on). I should convert this to Flash at some point; the GIFs are easy to create and good enough for illustration though.
25 FPS on 25 Hz:
smooth
If your monitor runs at exactly 75 Hz (a multiple of the 25 FPS), and your web browser is good at displaying GIF animations, the above animation will be completely smooth. Chances are it doesn't look smooth - that's why the other examples use lower than realistic speeds.
25 FPS on 20 Hz:
unsmooth
Here the animation again runs at 25 FPS, but the GIF updates 20 times a second instead. Not all 25 positions per second can be displayed in 20 frames, so the animation gets jerky. This is just what happens in a game: if something moves 25 pixels per second, it can look smooth on one monitor and not smooth on another. Luckily, there are some solutions to still get smooth animation even when the frequencies don't match.

Solution 1: variable timesteps

Let's first increase the distance our sprite moves, e.g. to 40 pixels / second in the first example from earlier (while still only updating the logic 4 times a second):
3 vsync / second
second  0           1           2
        |           |           |
pixel   0  10 20 30 40 50 60 70 80
        |  |  |  |  |  |  |  |  |
vsync   0   1   2   3   4   5   6
        |   |   |   |   |   |   |
drawn   0   10  20  40  50  60  80  <- 30 and 70 were skipped
pixel
unsmooth
Here, a straightforward solution exists to get better results: do your rendering at every vsync, and simply have your game logic compute the position at the required time, given the time delta since the last logic update. This means, in our 3 example cases, we would have:
logic at 1/3 second -> move 40/3 = 13.3 pixel
logic at 1/4 second -> move 40/4 = 10.0 pixel
logic at 1/6 second -> move 40/6 = 6.67 pixel
We pass the time in seconds to the logic() function (i.e. 1/3, 1/4 or 1/6), and given the constant speed of 40 pixels / second, it returns a position, as seen above. Now we render at this position and get a smooth update, no matter what vsync rate the PC's monitor has. The example now looks like this:
3 vsync / second
second  0           1           2
        |           |           |
pixel   0  10 20 30 40 50 60 70 80
        |  |  |  |  |  |  |  |  |
vsync   0   1   2   3   4   5   6
        |   |   |   |   |   |   |
drawn   0   13  26  39  53  66  79  <- not completely smooth as we had to round to full integer
pixel
almost smooth
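A minimal sketch of this variable-timestep logic (logic() is the function named in the text; everything else is made up for illustration). Note it computes the position directly from the total elapsed time, so at a full second it lands exactly on pixel 40; accumulating the rounded 13.3-pixel steps instead gives the 39 and 79 seen in the diagram above.

```python
SPEED = 40.0  # pixels / second, the constant speed from the example

def logic(elapsed):
    """Return the sprite position after `elapsed` seconds.
    With variable timesteps, the logic can answer for any point in time."""
    return SPEED * elapsed

def rendered_positions(vsync_rate, seconds=1):
    """Render once per vsync, truncating to full integer pixels."""
    return [int(logic(k / vsync_rate))
            for k in range(1, vsync_rate * seconds + 1)]

print(rendered_positions(3))  # [13, 26, 40]
print(rendered_positions(4))  # [10, 20, 30, 40]
print(rendered_positions(6))  # [6, 13, 20, 26, 33, 40]
```

Whatever the monitor's rate, the sprite arrives at pixel 40 after one second; only the intermediate truncated positions differ.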
Of course, this trick is not completely exact, as we had to truncate to full integer positions, even though our exact update step would have been 13.3 pixels. Especially in the original example: if your sprite moves 4 pixels in a second, but you have 3 or 6 display updates, you will always get jitter. The pixel movements with variable timestep would be:
logic at 1/3 second -> move 4/3 = 1.33 pixel
logic at 1/4 second -> move 4/4 = 1.00 pixel
logic at 1/6 second -> move 4/6 = 0.67 pixel
And so, e.g. for the third case, we would end up with exactly the same positions as before. An easy solution exists if the display supports subsampling - that is, a sprite drawn at pixel position 1.0 looks different from one drawn at position 1.33. In this case, we can improve smoothness to almost perfection, as even the inter-pixel fractions are accounted for.

Solution 2: interpolation

For various reasons, variable timesteps are bad in a game though: if you have constant acceleration instead of constant velocity (e.g. not "move 4 pixels / second", but "accelerate at 1 pixel / second²"), you need a more complicated integrator to get the right positions. Or think of non-linear motion, e.g. a circular path. And in general, physics quality might now differ depending on the vsync rate. For networked games which need to stay synchronized, it might not be possible at all, and in other multiplayer games, clients with a higher vsync rate might have an advantage or disadvantage due to more accurate physics prediction. For many classic-style games, it simply makes the game logic and collision detection much more complicated.
So, what is the situation here? Assume we have a logic() function which simply ticks 4 times / second again, like in the initial example. Each sprite has an integer position sprite.x, which is incremented in each tick. But our vsync rate can be 3 or 6 instead of 4. So, what we want is:
3 vsync / second
second  0           1           2
        |           |           |
tick    0  1  2  3  4  5  6  7  8
        |  |  |  |  |  |  |  |  |
vsync   0   1   2   3   4   5   6
        |   |   |   |   |   |   |
drawn   0 1.33 2.66 4 5.33 6.66 8
pixel
To achieve this, we can give each sprite an extra variable sprite.last_x, which stores where the sprite was at the previous logic tick. For vsync #1 above, we get:
sprite.last_x = 1 (from tick #1)
sprite.x = 2 (we already execute tick #2)
So now, we know that our logic tick rate is 4 / second, and our vsync rate is 3 / second. At vsync #1, we are at 0.33 seconds. Tick #1 was at 0.25 seconds, tick #2 at 0.50 seconds. So, we interpolate:
render_x = sprite.last_x + (sprite.x - sprite.last_x) * (0.33 - 0.25) / (0.50 - 0.25) = 1 + 1 * (0.33 - 0.25) / 0.25 = 1.33
Or in general:
t_i = time when sprite was at sprite.last_x
t_j = time when sprite would be exactly at sprite.x
t   = render time
render_x = sprite.last_x + (sprite.x - sprite.last_x) * (t - t_i) / (t_j - t_i)
In this way, we achieve the same as with the variable timestep solution, but do not have to change our logic code at all. We can write the code as if the vsync rate were always the logic rate, and only the renderer needs to do the extra interpolation. Again, for small movements like in this example, there will only be a visible advantage with subsampling. But if the example again used 40 pixels / second, things would get much smoother (just as with variable timesteps) than without interpolation, even when using only full integer positions.
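A sketch of the interpolation (sprite.x and sprite.last_x are the variables from the text; the Sprite class and function names are made up):

```python
TICK_RATE = 4  # logic ticks per second, as in the example

class Sprite:
    def __init__(self):
        self.last_x = 0  # position at the previous logic tick
        self.x = 0       # position at the current logic tick

def tick(sprite):
    """Fixed-rate logic: the sprite moves exactly 1 pixel per tick."""
    sprite.last_x = sprite.x
    sprite.x += 1

def render_x(sprite, t, ticks_done):
    """The interpolation formula from the text: t_i and t_j are the
    times of the previous and current logic tick."""
    t_j = ticks_done / TICK_RATE
    t_i = t_j - 1 / TICK_RATE
    return sprite.last_x + (sprite.x - sprite.last_x) * (t - t_i) / (t_j - t_i)

# Vsync #1 on the 3 Hz monitor happens at t = 1/3 s; by then,
# tick #2 (for t = 0.5 s) has already been executed.
sprite = Sprite()
tick(sprite)  # tick #1: last_x = 0, x = 1
tick(sprite)  # tick #2: last_x = 1, x = 2
print(round(render_x(sprite, 1 / 3, 2), 2))  # 1.33
```

The logic code itself never sees fractional positions; only the renderer does.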

Solution 3: Increase logic rate

If you look at what variable timesteps and interpolation did, it becomes clear that the effect they have in our example grows with the size of the pixel movement. E.g. with 4 pixels per second, displaying at only integer positions would have made no difference at all, since compared to the 3/4/6 video updates per second, we could never be off by more than a pixel anyway.
So a simple trick to get the same smoothness as variable timesteps or interpolation is to increase the logic rate. Take an extreme case of 1000 logic updates / second: now even if a sprite moves very fast, when it is drawn, it can be at most 1/1000th of a second's worth of movement off the ideal position we would have arrived at with variable timesteps or interpolation.
Of course, this has a serious drawback: we are wasting lots of CPU doing many more logic updates than we need.
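A sketch of this approach (the 1000 Hz rate from above; the accumulator pattern and all names are illustrative):

```python
LOGIC_DT = 1 / 1000  # 1000 logic updates per second

def run_frame(state, frame_time, accumulator):
    """Run as many fixed 1 ms logic steps as fit into the time that
    passed since the last rendered frame; return the leftover time."""
    accumulator += frame_time
    while accumulator >= LOGIC_DT:
        state["x"] += state["speed"] * LOGIC_DT  # one tiny logic step
        accumulator -= LOGIC_DT
    return accumulator

# One 60 Hz frame (16.67 ms) fits 16 logic steps, so when we render,
# the position is at most 1 ms of movement away from the ideal one.
state = {"x": 0.0, "speed": 40.0}
leftover = run_frame(state, frame_time=1 / 60, accumulator=0.0)
print(round(state["x"], 2))  # 0.64
```

The leftover time is carried into the next frame, so no logic time is ever lost; the cost is simply running the logic 1000 times a second.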

Solution 4: Adjust refresh rate

One solution which is the simplest of all, and which also has the best results: try to adjust the refresh rate. When you set a graphics mode, you may have a selection of different modes, with refresh rates such as 60 Hz, 75 Hz, 100 Hz - and many others. If your logic rate, a multiple thereof, or even something sufficiently close is among the possible rates, you can try to set that rate and then simply time your game off the vsync signal. E.g. if a 61.2 Hz mode is available, and your logic is supposed to run at 60 Hz, you may get away with simply using vsync and letting your game run slightly faster (61.2 Hz instead of 60 Hz), but perfectly smooth, without needing any interpolation.
But note that it is important to then actually time the game off the vsync. If you keep your logic at exactly 60 Hz, timed off the system clock, while the monitor updates at 61.2 Hz, it will still stutter. And of course this won't work if it's a network game.
Also, graphics drivers sometimes report refresh rates as available when they really are not. So if you see a nice 100 Hz mode there, switch to fullscreen and set it, the user might still only get a blank screen with a message like "out of sync".