NVIDIA GeForce GRID
Peter Donnell / 6 years ago
For the longest time, PCs and consoles were the only two options available to gamers. The PC was a platform of complete freedom, and avid PC gamers relished the chance to use the latest hardware to play the latest games. Others preferred the simplicity of consoles; although never on the cutting edge, consoles never needed upgrading and made it easy to game from the comfort of the living room.
Two years ago, a new technology emerged that promised to combine the merits of consoles and PCs. Known as cloud gaming, it was the latest step in the general trend of virtualizing resources in large data centers and serving them up to users on demand via the web. With cloud gaming, instead of running a game locally, a gamer’s keystrokes would be sent to the cloud where they are interpreted, the game frame rendered, and the result sent back as streaming video. Because all the heavy lifting is done in the cloud, the player would get to play all the latest games without ever having to worry about their hardware.
When you play games from the cloud, you are also not tied to a single platform like a TV or PC. Game cloud servers render a video stream that can be decoded by any device. Therefore, in the future you will be able to rent a game once and enjoy it on all of your devices: TV, PC, tablet, and phone.
As a concept, cloud gaming is immensely attractive, but in reality it has been very much an emerging technology. The main issue is latency, the time delay between issuing a command and having the screen reflect that command. Today's cloud gaming services have a total latency of about 280 milliseconds. Compared to about 160 ms for consoles and sub-100 ms for the PC, cloud gaming feels too "laggy" to be considered a first-rate gaming experience.
NVIDIA GeForce GRID, a second-generation cloud gaming platform announced at the 2012 GPU Technology Conference (GTC), strikes at the heart of the problem by reducing the latency associated with cloud gaming. To understand how it works, let’s take a closer look at the latency that exists in today’s cloud gaming services.
A comparison of latency by system and a breakdown of its various stages.
The above diagram compares the latency of consoles, first-generation cloud gaming solutions, and GeForce GRID.
Consoles have the most straightforward pipeline. Most console games run at 30 frames per second (fps), which equates to 33 ms per frame. Because three frames are buffered, the rendering portion adds up to about 100 ms of latency. HDMI TV displays add a further 66 ms. In total, it takes about 166 ms between a command being issued and the picture being updated on the screen.
The latency of cloud-based solutions includes all of the above plus three additional stages. When a frame is rendered on a cloud server, it must be encoded (like a video), sent across the network, and then decoded on the client. These stages take approximately 30 ms, 75 ms, and 15 ms respectively, adding 120 ms to the pipeline. This extra latency is the main reason why first-generation cloud gaming feels laggy compared to gaming on a local system.
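The stage figures above can be sketched as a simple latency budget. This is just arithmetic on the numbers quoted in the article, not a measurement:

```python
# Latency budgets built from the stage figures quoted above (all in ms).
FRAME_MS = 1000 / 30          # ~33 ms per frame at 30 fps
render = 3 * FRAME_MS         # three buffered frames, roughly 100 ms
display = 66                  # HDMI TV display latency

console_total = render + display
print(f"console: ~{console_total:.0f} ms")        # ~166 ms

# First-generation cloud gaming adds encode, network, and decode stages.
encode, network, decode = 30, 75, 15
cloud_total = console_total + encode + network + decode
print(f"first-gen cloud: ~{cloud_total:.0f} ms")  # ~286 ms
```

The ~286 ms total lines up with the "about 280 milliseconds" figure quoted for today's services.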
GeForce GRID reduces the latency of cloud gaming by reducing every aspect of the rendering and transmission pipeline. First, it reduces the render time from 100 ms to 50 ms. This is achieved by increasing the framerate from 30 fps to 60 fps. The benefit here is twofold; not only do gamers get reduced latency, they also get higher frame rates. The result is greatly improved perceived smoothness and responsiveness.
Second, the encoding and decoding time has been reduced thanks to a host of new hardware and software technologies. NVIDIA GeForce GRID’s NVENC (low-latency encoder), NVFBC (ultra-fast full-frame buffer capture), and NVIFR (ultra-fast frame read back) capabilities capture, scale, and encode a game frame in a single pass in about 15 ms, shaving 30 ms off the overall latency.
Finally, the network latency will be much lower with GeForce GRID. First-generation cloud gaming servers used a single GPU per server. GeForce GRID servers enable up to four power-efficient Kepler graphics processing units (GPUs) to be connected to each server. This means much more powerful servers, lower cost for the operator, and much wider deployment. For the same cost, operators can deploy servers in a greater number of locations, which brings down the network latency, especially if you do not currently live near a data center.
GeForce GRID has lower latency than both existing cloud services and gaming consoles. A GeForce-powered PC still offers the best responsiveness, at under 75 milliseconds.
The net result of these improvements is that the latency of gaming on a GeForce GRID server will be comparable to, if not better than, gaming on a console at home.
What the Future Holds
First-generation cloud gaming services gave us a glimpse of the future, but for the most part gamers today still play on PCs or consoles; the latency, image quality, and network speed have not been at the point where cloud gaming could be enjoyed by the masses. GeForce GRID represents the next generation, putting the best of NVIDIA's GeForce technology into the cloud. Just as scientists can call up NVIDIA Tesla servers to help them advance their discoveries, gamers can now call up GeForce GRID servers to play the latest games. And thanks to the latest rendering and encoding technologies, GeForce GRID dramatically lowers the latency of cloud gaming. With reduced lag, greater penetration of gaming servers, and Kepler-quality graphics, it's not hard to imagine that, in the near future, cloud gaming will be as ubiquitous as online video. Will traditional PC gaming still be around? Most certainly; for those who demand the best quality and performance, it's still the best choice. But regardless of whether you are a PC or console gamer, GeForce GRID now offers a new way to enjoy your games, on any device, anywhere.