Experienced Points

Graphics Are Sometimes Hard


Time to answer some more reader questions. This time it’s a bunch of questions about graphics:

Dear Shamus,

As I understand it, the bounding boxes or physical meshes or whatever is used are a lot more basic than what the graphics for those same objects look like. Why is this? Is it inherently harder to have high fidelity bounding boxes? Are they just blowing the hardware resource budget on visuals and leaving nothing for physics?

Ever a fan,

Wide and Nerdy

(Edited for length.)

There are a couple of similar-sounding concepts that we need to pull apart, here. For the uninitiated: We’re talking about the sort of abstract notions that programmers use under the hood to drive the logic of the game. You’ll never see a bounding box in a game, but they’re used everywhere in 3D programming. (And in 2D, we use bounding rectangles. Same idea.)

A bounding box is the smallest box that can completely contain the given geometry. So if there’s a bounding box on a character’s head, then the tip of their nose ought to touch the front of the box and the highest point on their head ought to touch the top. A hitbox is also an invisible box-shaped region, but in this case it represents the area that’s checked for collision. In a shooter, you’re not really shooting the bad guy in the head, you’re shooting an invisible box that envelops his head.
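If it helps to see the idea in code, here’s a minimal sketch. This isn’t from any particular engine and all the names are made up, but an axis-aligned bounding box really is just six numbers plus a dirt-cheap containment test:

```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch only: a simple axis-aligned bounding box.
struct Vec3 { float x, y, z; };

struct AABB {
    Vec3 min;  // smallest x, y, z touched by the geometry
    Vec3 max;  // largest  x, y, z touched by the geometry

    // Point-in-box test: six comparisons, no matter how many
    // polygons the geometry inside the box has.
    bool contains(const Vec3& p) const {
        return p.x >= min.x && p.x <= max.x &&
               p.y >= min.y && p.y <= max.y &&
               p.z >= min.z && p.z <= max.z;
    }
};

// Build the smallest box around a set of vertices (assumes at least one
// vertex): start at the first vertex, then stretch to include the rest.
AABB boundingBoxOf(const std::vector<Vec3>& verts) {
    AABB box{verts[0], verts[0]};
    for (const Vec3& v : verts) {
        box.min.x = std::min(box.min.x, v.x);
        box.min.y = std::min(box.min.y, v.y);
        box.min.z = std::min(box.min.z, v.z);
        box.max.x = std::max(box.max.x, v.x);
        box.max.y = std::max(box.max.y, v.y);
        box.max.z = std::max(box.max.z, v.z);
    }
    return box;
}
```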

This means that your eyes won’t always agree with the computer on whether or not a bullet should have hit. Maybe from the side the bullet should have passed just under their nose, but instead it scored a hit because it entered the hitbox. Or – if the hitbox is smaller so the nose isn’t included – it might look like the bullet passed right through the nose without scoring a hit. Wide and Nerdy is asking why games don’t take the time to check for a real hit instead of mucking about with these inaccurate boxes. This is a really interesting question, since this is something that we used to do, and have since abandoned. Back in the ’90s, some shooters bragged about having “per-pixel hit detection”, which is no longer the case in games today.

The reason we shoot boxes instead of properly checking to see if shots actually hit is threefold: First, it’s just far easier to code. Basic collision detection isn’t terribly hard to write (in a relative sense), but it still takes time and processing power. Unlike rendering, collision detection is a job for your CPU, not your graphics card. Graphics power is growing way faster than the general-processing power of the average gaming device. Which means the number of polygons in any given character is going up faster than processor speeds. Therefore high-definition hit detection gets more expensive and harder to do with each new graphics generation.
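To give a sense of why the box wins, here’s an illustrative sketch (again, made-up names, not anyone’s shipping code) of the standard “slab” test for a ray against a box. The thing to notice is that it’s a small, fixed amount of arithmetic, while checking the same shot against the actual character mesh means running a ray-versus-triangle test for every one of its thousands of triangles:

```cpp
#include <algorithm>
#include <limits>
#include <utility>

struct Vec3 { float x, y, z; };

// Classic "slab" test: does a ray starting at 'origin' and heading along
// 'dir' pass through the box [boxMin, boxMax]? A handful of divides and
// compares, regardless of how detailed the model inside the box is.
bool rayHitsBox(Vec3 origin, Vec3 dir, Vec3 boxMin, Vec3 boxMax) {
    float tMin = 0.0f;
    float tMax = std::numeric_limits<float>::max();

    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { dir.x,    dir.y,    dir.z    };
    const float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    const float hi[3] = { boxMax.x, boxMax.y, boxMax.z };

    for (int axis = 0; axis < 3; ++axis) {
        if (d[axis] == 0.0f) {
            // Ray is parallel to this pair of box faces; it misses
            // unless the origin already sits between them.
            if (o[axis] < lo[axis] || o[axis] > hi[axis]) return false;
        } else {
            // Where does the ray enter and leave this slab?
            float t1 = (lo[axis] - o[axis]) / d[axis];
            float t2 = (hi[axis] - o[axis]) / d[axis];
            if (t1 > t2) std::swap(t1, t2);
            tMin = std::max(tMin, t1);
            tMax = std::min(tMax, t2);
            if (tMin > tMax) return false; // left one slab before entering another
        }
    }
    return true; // the ray is inside all three slabs at the same time
}
```

And remember this runs on the CPU, for every shot, on top of everything else the game logic has to get done that frame.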

Secondly, there are concerns over game balance. If we do super-accurate hit detection, then anything that changes the shape of a character becomes a game balance issue. If the artists decide to give this guy a beard, we now have to run the design by a bunch of other people to see if the beard will make it too easy to hit the head, thus giving this particular character model a disadvantage.

And finally, it’s just not that big a deal to most players. Die-hard multiplayer shooter fans care about things being fair, and would rather everyone have the exact same hitbox than worry about whether or not having a big nose or sunglasses makes their head a bigger target. And the rest of us generally don’t notice. The consoles make up a majority of the market these days, and on consoles we have little bits of auto-aim going on. There’s no reason to knock yourself out coding some super-sophisticated hitscan checking system that can tell the difference between shooting someone on the nose or just under it, because on a console the auto-aim would turn that into a headshot anyway.

Per-pixel hit detection was a cool idea, but it’s really only useful and practical on high-framerate, low-latency multiplayer shooters on the PC. And that’s a pretty niche market.


Hi Shamus!

Where would the bottleneck be if a game, originally planned to be rendered at 1080p and a constant 30fps (not locked), got converted to 4k simply by adjusting the resolution?

Have a nice day!

(Question edited for brevity.)

The answer to this – like so many programming questions – is really annoying:

“It depends.”

Rather than answer the question, I’ll explain why the question is impossible to answer in a general sense.

There are two major bottlenecks we talk about in rendering: Throughput and fill rate.

Say we’re running some space-marine shooter thing. The computer gets all the polygons for the level, the marines, the monsters, the guns, and everything else in the gameworld. It shoves all that stuff over to the graphics hardware and says, “Here, draw this.” Now, in an ideal (to the programmer) world, we would only need to do this once. We give the graphics card a snapshot of the world and then we can draw it as many times as we like. But annoyingly, people don’t like to play videogames where nothing happens. They expect characters to move around, doors to open, things to blow up, and particle effects to fly all over the place. Which means every frame we have to send different data to the graphics card. This is where the throughput problems happen. If you’re trying to draw two million pieces of flying rubble, the game might slow down because it takes too long to describe each little bit of debris.
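To put some hypothetical numbers on that (these are back-of-the-envelope figures I’ve invented for illustration, not measurements from any real engine): if every bit of rubble needs a fresh transform sent to the card each frame, the upload bill looks something like this:

```cpp
#include <cstdio>

int main() {
    // Made-up numbers, just to show the shape of the problem.
    const long long debrisPieces  = 2'000'000; // bits of flying rubble
    const long long bytesPerPiece = 64;        // e.g. a 4x4 float matrix per piece
    const long long framesPerSec  = 60;

    long long bytesPerFrame  = debrisPieces * bytesPerPiece;
    long long bytesPerSecond = bytesPerFrame * framesPerSec;

    std::printf("Upload per frame : %lld MB\n", bytesPerFrame  / (1024 * 1024));
    std::printf("Upload per second: %lld MB\n", bytesPerSecond / (1024 * 1024));
    // Roughly 122 MB per frame and over 7 GB per second -- and that's
    // before the level geometry, the characters, and everything else
    // the frame needs.
    return 0;
}
```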

The other end of the problem is fill rate. Even if we’re not sending too much data, the graphics hardware might have trouble drawing it all in time, either because there’s too much of it, or it’s just too dang complex.

To put it another way: Throughput is how long it takes to describe to the painter what you want, and fill rate is how long it takes them to paint it.

Doubling the framerate of a game (say, from 30fps to 60fps) will (roughly) double your throughput load. Increasing the resolution of a game will increase the fill rate load. The jump from 1080p to 4k requires four times as much fill.
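(The arithmetic behind that: 1080p is 1920 × 1080 ≈ 2.07 million pixels, while 4k is 3840 × 2160 ≈ 8.29 million. Same scene, same framerate, four times as many pixels to shade every frame.)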

Some games with a lot of animated stuff will struggle with throughput, but have fill rate to spare, and a game with a lot of non-animated but incredibly fancy polygons will struggle with fill. So the jump to 4k might be easy for some games and impossible for others. It all depends on where the bottleneck is.


Hi Shamus,

One interesting thing I noticed recently is that in games like The Elder Scrolls and Fallout:NV, pushing the distance settings to the maximum doesn’t give all that much viewing distance.

Could viewing distance be significantly improved if game studios used a lower level of detail and graphical realism? Or is high viewing distance too intensive on a graphics card even when using so-called ‘worse’ graphics?

Thanks for your time,

-Tim

The answer is a partial “yes”. View distance in games hasn’t increased nearly as fast as other stuff. (Like polygon density.) You can help that a bit by making environments less detailed, but the numbers are really against you here. It all comes down to the throughput / fill problem I talked about above.

The problem is that the amount of crap you have to draw shoots up dramatically with each tiny increase in view distance. Let’s say we’ve got this game with a tiny “Turok”-level viewing distance. Let’s say ten meters. You need to be able to spin in a circle and see 10m of stuff all around you. Which means you’re at the center of a square region that’s 20m on a side. (Since you need to be able to see 10m in front, and another 10m when you look behind you.) So the area of this square is 20m × 20m. Which means the game is holding about 400 square meters of scenery in memory. So:

square meters of scenery = (visible radius × 2)²

So far so good.

(Yes, we could use a circle instead of a square. Many games do this. But that comes with a different set of tradeoffs and I don’t want to get sidetracked.)

If I decide to double the visible range from 10m to 20m, then we end up with a frightening 1,600 square meters of scenery. Every time we double the view distance we quadruple how much scenery we need to handle. Those numbers run away from you quick. And even if your graphics hardware can handle it, there’s always the challenge of getting those assets off the hard drive and into memory in time.
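If you want to watch those numbers run away, here’s a trivial sketch that doubles the view radius a few times and prints how much scenery you’re suddenly on the hook for:

```cpp
#include <cstdio>

int main() {
    // Area of scenery the game has to keep around, for a square region
    // centered on the player: (visible radius * 2) squared.
    for (int radius = 10; radius <= 320; radius *= 2) {
        long long side = 2LL * radius;
        long long area = side * side;
        std::printf("view radius %3dm -> %7lld square meters of scenery\n",
                    radius, area);
    }
    return 0;
}
```

By the fifth doubling (a still fairly modest 320m) you’re juggling over a thousand times the scenery of the original 10m “Turok” case.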

On top of this, the speed of graphics hardware is growing faster than the speed of our CPU and hard drive, and the size of our computer memory. Polygon counts just keep going up. Which makes it take longer to load. Which makes it harder still to push that visible radius farther.

Shamus Young has been writing programs for over 30 years, from the early days of BASIC programming in the 80’s to writing graphics and tech prototypes today. Have a question about games programming for Shamus? Ask him! Email [email protected].
