This is a very strange time for computer technology. I mean, it’s always a strange time for one reason or another, because the evolution of the computer has been so unlike other technologies that nobody really has any frame of reference for how things ought to work or what will happen next. The only sure thing in computers is that they will keep getting exponentially faster.
Until now-ish.
And that’s why this time is strange. The only thing we were ever sure of is no longer a sure thing.
I’m sure most of you have heard of Moore’s Law, but just for context: In 1965, Intel co-founder Gordon Moore observed that every two years we’d be able to fit twice as many circuits onto the same size computer chip. Over the decades the rule gradually morphed into the more informal idea that “computers get twice as fast” every two years.
But this sort of undersells the extreme gains we’ve experienced in performance. Yes, we could fit twice as many circuits onto the same size chip, but at the same time we were ramping up clock speeds. So not only did you have twice as many circuits capable of doing almost double the work, but they were also doing that work twice as fast. On top of this, the newer devices would have roughly twice the memory and twice the storage. (Although magnetic drives didn’t follow quite the same growth curve, it’s close enough for our discussion here.)
It’s like having a car with twice the horsepower, twice the fuel efficiency, twice the braking power, twice the tire grip, half the weight, and half the drag. The resulting car is a lot more than just “double” the speed of the previous one. There are several different systems for trying to measure the overall “usefulness” of a computer, but there’s no good way to get an apples-to-apples comparison across computer generations: as the machines get faster, we ask them to do more things, and the software we run has gotten less efficient.
The point is, computers have done a lot more than just double in “speed” or “power” every two years.
But like all good things, this trend couldn’t last. About a decade ago clock speeds stopped climbing exponentially and pretty much leveled off. (If they had continued, our computers would be cranking along at something like 64GHz instead of being stuck somewhere around 4GHz. You’d also need a liquid nitrogen cooling system to keep the chip from melting the rest of your computer.) Making that many circuits go that fast generates too much heat, and we don’t have a good way to get rid of it. This is even more true now that so many chips are aimed at mobile devices, where heat and power consumption are far greater concerns than raw processing power. At the same time, circuit density is climbing much more slowly now. Things are still getting faster (by way of getting smaller) but progress is more incremental and less exponential.
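If you want to see where a number like that comes from, here’s the back-of-the-envelope arithmetic as a toy sketch. The 2005 starting point and the strict doubling-every-two-years rate are my rough assumptions, so it lands within a doubling or so of the figure above:

```python
# Toy projection: what clock speeds would look like if they had kept
# doubling every two years after leveling off around 4 GHz circa 2005.
# The starting year, starting speed, and rate are rough assumptions.
start_year, start_ghz = 2005, 4.0

for year in range(start_year, 2016, 2):
    doublings = (year - start_year) // 2
    print(f"{year}: ~{start_ghz * 2 ** doublings:.0f} GHz")
```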
So what does this mean for games? I’m getting there.
When we couldn’t ramp up the clock speeds any more, we started packing in more cores. If we can’t make the new CPU twice as fast, we’ll make it the same speed but give you two of them. The problem with more cores is that they sit idle unless the developer can put them to work by breaking the game into multiple threads.
But it’s not quite that easy. You can’t just keep chopping videogame logic into threads, because some tasks have to be done before or after other tasks. The game needs to process user input, see that they fired their weapon, calculate the trajectory of the bullet, and apply damage to the victim. You can’t really break that down into multiple tasks because they must be done in order. You can’t calculate damage until you know who the bullet hit. You can’t plot the trajectory of the bullet until you know the user fired, and you can’t know they fired until you process their input.
No discussion of computer technology is complete without a terrible car analogy, so here’s mine: Imagine you have two errands to run: “get groceries” and “pick up the kids from school”. If you had two cars (with drivers, obviously) you could do both tasks at the same time. One car gets the food, the other gets the kids. But if your errands were instead “pick up the kids from school” and “take the kids to soccer practice”, then they can’t be done simultaneously, and the extra car would be useless. In computing, tasks that must be done in order like this are referred to as “serial”.
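If you’d like to see the same idea in code, here’s a minimal sketch (Python, with made-up stand-in functions; a real engine is vastly more elaborate). The background tasks can run on their own threads, while the gameplay chain has to run one step at a time:

```python
import threading

def update_audio():
    pass  # independent of gameplay: safe to run on its own thread

def update_animations():
    pass  # likewise independent: another candidate for a spare core

def process_input():
    return "fired"  # pretend the player pulled the trigger

def compute_trajectory(action):
    return "goblin" if action == "fired" else None

def apply_damage(target):
    if target:
        print(f"damage applied to {target}")

# These tasks don't depend on each other or on the gameplay chain,
# so extra cores genuinely help: run them in parallel.
background = [threading.Thread(target=update_audio),
              threading.Thread(target=update_animations)]
for t in background:
    t.start()

# This chain is serial: each step needs the previous step's result,
# so piling on more cores doesn't make it finish any sooner.
action = process_input()
target = compute_trajectory(action)
apply_damage(target)

for t in background:
    t.join()
```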
The point is that we’ve pretty much pushed conventional multi-threading about as far as it can go. We’ve offloaded most heavy-duty non-serial tasks to background threads. This includes stuff like animation, sound, polygon pushing, and maybe some AI pathfinding. Every thread adds to the overall complexity that the programmer has to manage, and all of the big jobs are already done. All that’s left are difficult tasks for very small gains. If someone made a thirty-two-core processor tomorrow, it wouldn’t do anything to speed up your games, because no normal game could keep that many cores busy.
Except!
The one area that can always use more cores is the graphics processor. Graphics processing – taking millions of triangles and turning them into a single frame of gameplay for you to look at – can use as many processors as we throw at it. In the extreme theoretical case, you could have one processor for each pixel on the screen. Yes, that would result in a graphics card about thirty square meters in size, with the power draw of a small neighborhood. (Based on the current core density of 2048 CUDA cores in 114 square cm on the NVIDIA GTX 980.) But the point is that you can keep adding graphics cores and get more pixels and higher framerates.
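For the curious, here’s the back-of-the-envelope math behind that estimate, using the core count and card area figures above. The 2560x1440 display is my assumption; pick a different resolution and the answer moves around, but it stays in the tens of square meters:

```python
# Back-of-the-envelope: how much card area "one core per pixel" would take,
# using the figures above of 2048 CUDA cores per 114 square cm of card.
# The display resolution is an assumption for illustration.
cores_per_card = 2048
card_area_cm2 = 114
pixels = 2560 * 1440  # assume a 1440p display

cards_needed = pixels / cores_per_card
area_m2 = cards_needed * card_area_cm2 / 10_000  # 10,000 cm^2 per m^2
print(f"{cards_needed:.0f} cards, roughly {area_m2:.0f} square meters")
# -> 1800 cards, roughly 21 square meters
```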
That’s assuming the general processor – the one running the game – can keep up, which it can’t. I don’t have any proof, but I strongly suspect that the current-gen console games that can’t nail 60fps are limited by their CPU, not their graphics throughput.
Which means we’re running into a soft ceiling on general computing power (the kind games are hungry for) while we have plenty of graphics power, the one kind of power we don’t need. Graphics already look amazing, and the really taxing high-end effects are so expensive to produce that only the top AAA developers can afford to use them. And having the power to draw tons of frames isn’t really useful if the CPU can’t run the game fast enough to make those extra frames happen. On top of all that, it’s now really hard to notice improvements in graphics technology, because games already look so good. So even if you quadrupled the graphics power, it wouldn’t mean anything to the consumer. It won’t make the game smoother, and it will only look a tiny bit better even if developers can afford to put the power to use. The kind of power we can have is the power we really don’t need.
So the end – or “slowing down”, if you prefer – of Moore’s Law isn’t going to mean anything right away. But it does mean console generations should last longer. (Sony thought the PS3 was going to be a ten-year console. They miscalculated, but it’s more likely to happen for the PS4.)
The two wildcards here are VR and 60fps gaming. If VR takes off, it might give us a push into another console generation. If Sony and Microsoft decide that the general public wants 60fps games and not just the hardcore, it might push us into another generation.
For me, this new status quo is pretty great. I’m not going to miss the ’90s, when I needed to buy a new PC every two years just to keep up. And I’m also not going to miss the aughts, when I needed a new graphics card every two years. We’re finally entering a time where we can worry less about hardware and more about the games.
(Have a question for the column? Ask me!)
Shamus Young is a programmer, critic, comic, and crank.